Remedies for algorithmic tacit collusion

There is growing evidence that tacit collusion can be autonomously achieved by machine learning technology, at least in some real-life examples identified in the literature and in experimental settings. Although more work needs to be done to assess the competitive risks of widespread adoption of autonomous pricing agents, this is an appropriate time to examine which remedies could be used if competition law shifts towards the prohibition of tacit collusion. Outlawing such conduct is pointless unless there are suitable remedies to address the social harm. This article explores how fines and structural and behavioural remedies can serve to discourage collusive results while preserving the incentives to use efficiency-enhancing algorithms. We find that this can be achieved if fines and remedies target the structural conditions that facilitate collusion. In addition, the problem of the unfeasibility of injunctions to remedy traditional price coordination changes with the use of pricing software, which in theory can be programmed to avoid collusive outcomes. Finally, machine learning methods can be used by the authorities themselves as a tool to test the effects of any given combination of remedies and to estimate a more accurate competitive benchmark for the calculation of the appropriate fine.


I. INTRODUCTION
Antitrust scholars, authorities, and practitioners have for some years been dealing with the question of whether the use of deep learning methods (encompassed within the more comprehensive term of artificial intelligence, or AI) in pricing decisions enhances the risk of collusive prices without the need to resort to overt communications. 1 In a previous paper, we addressed this issue in depth, taking into consideration the nature of the technology and how it affects the likelihood that a given market will achieve interdependent prices. In essence, the capacity of AI pricing software to resolve uncertainty in contexts where there is sufficient data does indeed enhance the risk that competitors will overcome some of the problems associated with the instability of interdependent equilibria. If this is the case, the problem of supra-competitive prices would be more widespread than was assumed before the advent of algorithmic pricing, and the social harm would be greater. On the other hand, one of the main problems with assigning liability for price coordination between algorithmic agents is detection. 2 However, if authorities can themselves harness the power of the technology, this problem may be alleviated. 3 While the literature has discussed at length the issues of liability and detection of algorithmic collusion, the present article focuses on the possible antitrust remedies that would most effectively deter and correct the social harm caused by algorithmic interdependent pricing. A natural question is why focus on remedies before the issue of liability is settled. Recall that the problem here is price coordination without overt communications, which is not punishable under EU competition law. The exercise is still necessary, however, because there is no point in outlawing conduct for which there are no suitable remedies.
The costs of litigation would not be justified if there were no reasonable prospect of deterring and correcting social harm. Therefore, to decide whether EU competition law should be changed to cover price interdependence by autonomous algorithmic agents, one has to determine whether there are feasible remedies for the conduct. 4 In addition, the article is based on the notion that a remedy should be tailored to address the harm that has been diagnosed. Accordingly, the remedy should undo, discontinue, and prevent harm. The discussion of an effective remedy therefore primarily requires knowledge about the inherent harm and is less dependent on what the optimal prohibition provision would look like.
It should also be noted that the question of tailoring an optimal remedy arises even when there is no individual prohibition provision at all. This is the case, for example, after sector inquiries in some jurisdictions where the competition authority has the power to order measures to address the harm or the negative effects that have been diagnosed. 5 One example is the fuel sector inquiry in Germany, where the authority was unable to prove an infringement but an elevated price level was nevertheless diagnosed as a harm. Thus, § 47k GWB (the German antitrust act) was enacted to require gas stations to notify price changes to the competition authority. In other words, there is a remedy regarding the transparency of the pricing strategy even though there is no infringement. In addition, remedies in merger control in oligopolistic markets often address tacit collusion even though it is not anticompetitive conduct under EU law. This is because it is recognized that interdependent pricing causes harm. A remedy like introducing asymmetries into the market through a divestiture relates not to the behaviour of an undertaking which is prohibited but to the structure of the market, that is, to the cause of the behaviour. In this way, remedies that address the structural problem can bypass the difficulty of addressing specific firm conduct.
This article is structured as follows. Section II provides a brief overview of the properties of deep learning methods and their applications to pricing decisions in order to set the context for the discussion on remedies. Section III covers the theory of harm that serves as a basis for the analysis of suitable remedies. Section IV deals with the main issue of this article, which is the identification of corrective measures that can be effective in addressing social harm caused by algorithmic collusion. Section V concludes.

II. COLLUSION AND ALGORITHMS - A BRIEF TECHNICAL OVERVIEW

Deep learning methods, which are an application of artificial neural networks, have some traits that need to be discussed in order to put the issue of collusion and remedies into context. 6 At the outset, it is pertinent to point out what the output of the technology is: through data processing it can solve different types of problems such as clustering (finding similarities between objects), forecasting, and finding rules for optimizing a given variable such as profits. These problems can be solved by different methods of machine learning, namely unsupervised learning, supervised learning, and reinforcement learning, 7 respectively, all of which are useful in pricing decisions. Such problems can also be solved by traditional statistical methods. However, what deep learning brings to the table is the ability to automate this task and use hundreds or thousands of variables, which human constraints would otherwise render impossible. In most cases, this ability will lead to better results.

5 (cont.) ... having to find an infringement by a particular company, impose market-wide remedies on companies in order to address adverse effects on competition. See Section 138(2) Enterprise Act 2002. Further, the European Commission has launched a consultation on a possible new competition tool. This instrument would, in the context of a market investigation, empower the competition authority to impose behavioural and structural remedies outside the scope of individual infringement proceedings. See <https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12416-New-competition-tool> accessed 7 July 2020.
6 For an overview of the relevant AI-related concepts and their applicability in the context of pricing algorithms see Beneke and Mackenrodt (n 4) section 2.
7 For a more detailed non-technical explanation of how these methods work, the interested reader is referred to Gopinath Rebala and Ajay Ravi and Sanjay Churiwala, An Introduction to Machine Learning (Springer 2019) 19-22.
Briefly explained in relation to our problem of interest, collusive prices, some theoretical risks introduced by these learning methods include the following: clustering with unsupervised deep learning can be used to better segment customers and find those who seem more passive and therefore more susceptible to collusive prices; demand forecasting done by supervised learning algorithms can help firms estimate demand elasticities more accurately and thus better estimate the reservation price of certain customer groups; finally, through reinforcement learning, the method by which an algorithm learns from past data and by engaging in experimentation itself, firms can automate pricing strategies according to many variables, such as reactions by competitors and the impact these have on the variable to be optimized, such as profits or market share. As these problem-solving capabilities improve, market prices will tend to be more stable and converge to a level above the competitive benchmark. The output of algorithms, in general, improves with the availability of data. That means that situations for which firms do not have much past information, such as reactions to entry, will still be able to destabilize supra-competitive equilibria. However, for more common and routine scenarios, the use of deep learning methods can lead to more stable oligopoly prices. 8 If certain predictions and rules for establishing a price by an algorithm can lead to social harm, a reasonable approach would be to code the software in such a way that certain variables are left out of the model. A common example cited outside of the antitrust arena is that of discrimination based on personal characteristics such as ethnicity. 9 In this case, the problem could be avoided if the algorithm is not fed with data on this variable.
The problem is, as pointed out by Harrington, that the algorithm could still zero in on a consumer's ethnicity by association with other variables such as geographical location. 10 In the same way, the interaction of algorithms could lead to stable interdependent pricing by inferring likely reactions of rivals from indirect data such as changes in the firm's demand.
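To make the reinforcement-learning dynamic described above concrete, the following minimal sketch shows two Q-learning agents repeatedly setting prices on a small grid and learning from profits. Everything here (the price grid, the stylised demand shares, the cost, the learning parameters) is invented for illustration; the sketch does not reproduce the experiments cited in the literature.

```python
import random

# Purely illustrative sketch of reinforcement-learning pricing:
# two Q-learning agents pick prices from a small grid and learn from profits.
# Demand shares, costs, and learning parameters are invented.

PRICES = [1.0, 1.5, 2.0]   # hypothetical price grid: competitive -> high
COST = 1.0                 # hypothetical marginal cost

def profit(own, rival):
    # Stylised demand: the cheaper firm captures most of the market.
    if own < rival:
        share = 0.8
    elif own > rival:
        share = 0.2
    else:
        share = 0.5
    return (own - COST) * share * 10  # 10 = assumed market size

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # One Q-table per firm; the state is the rival's last price index.
    q = [[[0.0] * len(PRICES) for _ in PRICES] for _ in range(2)]
    last = [0, 0]
    for _ in range(episodes):
        acts = []
        for firm in range(2):
            state = last[1 - firm]
            if rng.random() < eps:              # explore a random price
                acts.append(rng.randrange(len(PRICES)))
            else:                               # exploit current estimate
                row = q[firm][state]
                acts.append(row.index(max(row)))
        for firm in range(2):
            state = last[1 - firm]
            a, rival_a = acts[firm], acts[1 - firm]
            reward = profit(PRICES[a], PRICES[rival_a])
            # Standard Q-learning update towards reward + discounted future value.
            q[firm][state][a] += alpha * (
                reward + gamma * max(q[firm][rival_a]) - q[firm][state][a]
            )
        last = acts
    return q, last
```

Whether such agents settle on supra-competitive prices depends on the parameters and the demand model; the experimental literature cited in the text studies precisely that question in richer settings.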
If the model is not intuitive, which can be the case given that the learning algorithms can process a large variety of different variables and combine them in random ways, then the fact that an algorithm is making socially harmful choices might not become clear from a simple reading of the code. A possible solution to the problem is inferring harm from a counterfactual. That is, authorities could estimate market outcomes based on what would happen if there was no racial discrimination or, more to the point of this article, if firms were pricing in an uncoordinated way.
Finally, another property of deep learning methods relevant for the analysis of remedies is related to their learning process. Algorithms learn based on a reward function, which is embedded in the code. Pricing software can therefore look to past data in order to choose, given current demand and cost conditions, the price that maximizes expected profit. If a given output is illegal, the reward function could take the consequent penalty into consideration and render a given choice not optimal from the algorithm's point of view. Based on the above, and after giving an overview of the arguments for outlawing interdependent pricing by autonomous algorithmic agents, Section IV discusses in greater depth the impact that remedies such as transparency requirements, injunctions aimed at the code, financial penalties (fines and damages), and divestitures can have on correcting and deterring socially harmful behaviour. Other options from the antitrust enforcement toolkit such as market inquiries and merger control are also discussed in relation to their comparative advantages and disadvantages to address our problem of interest.
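The point that an anticipated penalty can be folded into the reward function can be illustrated with a toy calculation. The payoffs, the detection probability, and the fine below are invented numbers; the only claim is that an expected penalty term can flip the algorithm's ranking of prices.

```python
# Toy illustration: folding an expected fine into the reward function can
# reverse the algorithm's preferred price. All numbers are invented.

DETECTION_PROB = 0.5   # assumed probability that collusion is detected
FINE = 20.0            # assumed fine upon detection

GROSS_PROFIT = {"competitive": 10.0, "collusive": 15.0}  # stylised payoffs

def reward(price, penalise=False):
    r = GROSS_PROFIT[price]
    if penalise and price == "collusive":
        r -= DETECTION_PROB * FINE  # expected penalty enters the reward
    return r

def best_price(penalise):
    # The "algorithm" simply maximizes its reward function.
    return max(GROSS_PROFIT, key=lambda p: reward(p, penalise))
```

Without the penalty term the collusive price dominates (15 > 10); with it, the expected fine of 0.5 x 20 = 10 lowers the collusive payoff to 5, so the competitive price wins.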

III. ALGORITHMS AND TACIT COLLUSION
As a starting point, it is important to state that tacit collusion is not illegal under EU competition law. As stated in the introduction, this article still addresses the issue of appropriate remedies, partly, because one of the most contentious points in the debate on outlawing tacit collusion is whether there are suitable remedies to address the conduct. Even if problems of detection can be solved, there is no point in prohibiting a conduct for which no remedies can be successfully applied. In this section we address the current state of the law, sketch the main points of the debate over the legal status of interdependent pricing, and analyse what changes with the advent of algorithmic pricing. The analysis regarding remedies is presented in a separate section.

EU law on tacit collusion
Article 101 TFEU prohibits coordination between undertakings and defines three categories of unlawful collusion: agreements, decisions by associations of undertakings, and concerted practices. None of these concepts, as currently understood in EU competition law, applies to the phenomenon of interest in the present article: tacit collusion, also called interdependent pricing, achieved by AI-powered pricing software. This is hardly a debated point and the issue can be easily disposed of. 11 The concept of agreement requires an expression of the joint intention of the undertakings to conduct themselves in the market in a specific way 12 and the existence of a concurrence of wills. The form in which this is expressed is of no relevance. 13 This kind of coordination, as classically understood, involves more than price signalling. It requires other forms of communicating the intent of jointly elevating prices.
The prohibition of concerted practices, for its part, seeks to apply to coordination between enterprises that falls short of being qualified as an agreement. 14 However, as this concept is understood in EU law, it does not apply to mere interdependence, let alone one that involves the use of autonomous pricing agents. It is a concept designed to encompass cooperation that cannot be regarded as an agreement but nevertheless establishes a practical coordination between firms in order to avoid the risks of competition. 15 Unlike a mere agreement, and as indicated by the term 'practice', a concerted practice requires concertation and subsequent conduct, as well as a causal relationship between these two elements. 16 With regard to causation, there is a presumption which can be rebutted by the undertaking, for example by publicly distancing itself from the practice, reporting it to the competition authority, or by other means. 17 One example of this conduct can be found in the Eturas 18 case. There, the administrator of an information system licensed and administered online booking systems for travel packages to travel agencies. Eturas informed the travel agencies that the discounts on services sold through that system would be capped in the future and modified the software in order to implement this measure. Higher rebates would automatically be reduced to 3 per cent by the software unless a travel agency took additional technical steps. The European Court of Justice held that travel agencies that were aware of the message could be presumed to have participated in a concerted practice unless they had distanced themselves from this conduct. 19 However, the European Court of Justice also held that, absent further evidence and relying on the presumption of innocence, the mere sending of the message would not justify a presumption that the travel agencies ought to have been aware of its content.
20 More generally, it should be noted that the concept of concerted practices does not capture cases where a concertation, a subsequent practice, or a causal link between them cannot be found. Accordingly, autonomous parallel behaviour does not qualify as a concerted practice and is not forbidden under Article 101 TFEU. 21 The specific facts in the Eturas case allowed for an application of the concept of concerted practice. However, the independent use of pricing algorithms is clearly distinguishable from these facts and would therefore in most instances not be covered by Article 101 TFEU. Even though it is therefore clear that EU law does not outlaw tacit collusion, distinguishing such conduct from unlawful coordination within judicial proceedings is on many occasions a difficult task. This difficulty has been one of the reasons why, with regard to oligopoly markets, it has been argued that from an economic policy perspective the legal distinction should be abolished and all types of oligopoly pricing should be outlawed. 22

The classic debate on interdependent pricing and how algorithms might change it

In a traditional scenario, where no autonomous pricing agents are present, economic theory has identified conditions under which oligopolistic markets can lead to supra-competitive prices without a need for overt communications between the undertakings involved. The discussion of which conditions may be affected by the use of pricing algorithms is covered later in this subsection. Contrary to what happens in markets with perfect competition or with a monopolist, the profits of an oligopolist depend on the strategies chosen by the oligopolist and by its competitors. 23 In contrast, a monopolist, or a dominant firm, can maximize its profits by unilaterally limiting output or setting the price. 24 Under perfect competition a single firm has no influence on the market price but is a price taker. 25 Thus, strategic pricing happens only in oligopoly situations.
Strategic behaviour in an oligopoly market and its effects on total welfare have been analysed in a large volume of economic literature that has tried to identify which factors enable an oligopolistic market to sustain stable supra-competitive prices. 26 The greatest progress seems to have been achieved in the field of game theory. 27 The Bertrand model, which focuses on price competition, predicts that the equilibrium price will be set at marginal cost if there is only one period. 28 This static model has been extended to multi-period games with infinite repetitions. In such a setting, a price above the competitive level can be an equilibrium depending on the strategy used by each firm and other exogenous factors. 29 For example, a tit-for-tat strategy (where a firm reproduces its rival's strategy from the previous period) can lead to a cooperative equilibrium. 30 The same can occur when a trigger strategy is implemented. 31 Economic theory has also identified conditions and market characteristics that favour supra-competitive price levels in oligopolistic markets. 32 Price deviations must be detectable and the threat of retaliation has to be credible. 33 Market transparency, dispersed demand, frequent repeated interactions, stable demand and cost structures, and product homogeneity make any form of collusion more likely. 34 Recent developments in modern economic theory have led to highly complex models, which account for different strategies, market conditions, and assumptions. These models allow for more realistic predictions of strategies and market outcomes. In the theory of supergames, the players know the strategies of other players from previous rounds and can use this information in designing their future strategy. 35 In addition, multiple equilibria may exist if firms care enough about their future profits. The problem of multiple equilibria can be solved if there is a focal price to which firms are likelier to converge (for example, the monopoly price).
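The trigger-strategy logic above can be stated compactly. Assuming, for illustration, n symmetric Bertrand firms that split the monopoly profit \pi_m while colluding, and a grim-trigger punishment that reverts to the zero-profit competitive price forever after a deviation, collusion is sustainable when the discounted collusive stream beats the one-shot gain from undercutting:

\[
\frac{\pi_m / n}{1-\delta} \;\ge\; \pi_m
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{n-1}{n},
\]

where \delta is the common discount factor. The threshold rises with the number of firms, which is one reason why highly concentrated markets are regarded as more prone to tacit collusion.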
However, asymmetric conditions between firms can effectively prevent the market from tacitly finding such a price. 36 With this context in place, we can now proceed to the analysis of the debate regarding the legal status of interdependent pricing. The points on which the classic controversy on interdependent pricing focuses are the following: whether it is rational to ask oligopolists not to act in their best interest, whether there is a legal standard that does not lead to a suboptimal number of false positives, whether tacit collusion should be tolerated as monopoly profits are, and whether there could be a feasible remedy for tacit collusion. 37 The first and second points are intricately related, and their analysis can therefore be made jointly. The question is: if oligopolists are prohibited from tacitly colluding, how are they required to price? A broad prohibition on interdependent pricing would require firms to set a price that earns them normal profits. The problem with this, which is related to the issue of detection, is that it is hard to establish during litigation what the marginal cost is. 38 Therefore, courts may be ill-suited to determine how much higher the price really is. In short, even if one were to admit that the harm from mere interdependence is not distinguishable from that caused by hardcore cartels, the inherent difficulties of administering a broad prohibition should lead to the conclusion that competition law should only outlaw instances in which something more than price signalling has occurred. 39 On the other hand, requiring that there have been communications to make an agreement unlawful carries its own administrative difficulties. It is well-established practice that proving an agreement does not require direct evidence. Since the conduct is illegal, instances where colluders have not hidden concrete evidence of their mutual assurances will be the exception. Therefore, enforcers and courts conduct the analysis in two steps.
First, there must be evidence that proves that behaviour patterns are inconsistent with competitive pricing. Second, the evidence has to show that the behaviour is the result of unlawful forms of cooperation. At a minimum, this second step imposes additional costs that would not be present under a broad-sweeping rule that punishes all forms of oligopoly behaviour that lead to supra-competitive prices. 40 There are ways in which instances of tacit collusion can be distinguished from parallel pricing, which is important given that the latter is perfectly consistent with competitive behaviour. For example, interdependent pricing should be less responsive to cost and demand changes in order to avoid conduct that could destabilize the oligopoly price. 41 A price decrease in response to lower costs could be interpreted by the rest of the market participants as cheating. If firms are pricing competitively, as is the case during price wars, lower costs should be reflected with greater frequency in price decreases.
The issue of rational constraints given by the profit maximization objective does not necessarily mean that oligopolists can only be expected to price in a tacitly coordinated way. According to game theory, each player determines which of the available strategies is rational based on their respective payoffs. If one can alter this calculation and make coordinated pricing less attractive, say, by imposing a fine, then pricing in a competitive pattern can become a rational strategy. 42 Regarding the effects of the conduct, the argument for maintaining a communications-based approach is that society should tolerate oligopoly profits for the same reason that monopoly profits are not illegal. However, the fact that oligopolists need to take the reaction of their competitors into consideration, a point already explained in this subsection, also means that they have different incentives than monopolists with regard to product innovation and cost-reducing investments. Since the profits of oligopolists depend on weak competition from rivals, their incentive to compete on any dimension can be diminished. 43 Cost-reducing investments will, for example, create an incentive to undercut the joint profit-maximizing price, which will destabilize the interdependent equilibrium. Therefore, if expected oligopoly profits are high enough, firms will be discouraged from making such investments. For these reasons, even if one can equate oligopoly profits to monopoly profits in the sense that they are both ways in which winners obtain their rewards, 44 it is not justified to treat their causes equally. The use of pricing algorithms changes the considerations around some of these issues. As a starting point, it is important to mention the pro-competitive potential of AI in order to have a more balanced approach, since the purpose of the analysis is to develop a policy that deters social harm but not welfare-enhancing conduct.
Many industries implement a pricing strategy called yield (or revenue) management in order to maximize the economic benefit derived from perishable goods, such as airline seats or hotel rooms. In the past, this task was performed by revenue management departments where staff was in charge of adjusting prices according to changes in demand, cost, and rivals' strategies. 45 AI automates some of these tasks, such as calibrating the models according to changes in underlying conditions. Therefore, one evident benefit of this technology is the lower cost of making optimal price predictions. 46 From the demand side there can also be benefits derived from algorithmic pricing. A better prediction of what a consumer wants can help convey useful information and therefore lower search costs. 47 Another advantage is that algorithmic pricing can allow firms to expand output by segmenting demand. In addition, AI-powered pricing software can allow competitors to compete more aggressively for non-loyal customers by making better predictions of how to identify them. 48 That said, algorithmic pricing also entails risks associated with the issues discussed in this subsection. In addition, algorithms used in experimental settings have been shown to be able to learn collusive strategies without being specifically programmed to do so. 49 Regarding detectability, fast price responses from algorithms may make oligopoly pricing patterns look more similar to competitive behaviour. As explained above, sticky prices can be a factor to distinguish whether prices are the result of tacit collusion or a more competitive dynamic. If AI aids firms in making a better prediction of whether a price change by a rival is a response to a decrease in demand or costs, unnecessary price wars may be avoided. That would cause prices to be more stable and higher on average but also responsive to cost and demand changes.
This would make it hard to distinguish whether the game played by the algorithms is cooperative or rather competitive.

45 Anthony W Donovan, 'Yield Management in the Airline Industry' (2005) 14 J Aviation/Aerospace Educ & Res 11, 11. American Airlines pioneered this pricing practice. Its approach was then quickly adopted by competitors and spread to other industries.
46 Agrawal and Gans and Goldfarb (n 9) 13.
47 This is, however, pointed out as a risk to consumer welfare by other commentators. See Ezrachi and Stucke (n 1(a)) chapter 12. The authors are concerned that nudging consumers induces them to buy goods they might not even want in the first place. These concerns, from a consumer welfare perspective, are incorrect from a theoretical point of view. Consumer nudging merely pushes the demand curve upwards. This has the unambiguous effect of increasing total welfare. Consumer nudging can, however, be problematic on other economic grounds, such as income inequality and to the extent that it lowers the savings rate. See Beneke and Mackenrodt (n 4) 125, n 88.
48 On the other hand, AI can also aid firms to attract purchasers by offering the lowest discount possible that induces them to switch. See Ezrachi and Stucke (n 1(b)) section I.v.
49 Emilio Calvano and Giacomo Calzolari and Vincenzo Denicolò and Sergio Pastorello, Artificial Intelligence, Algorithmic Pricing and Collusion (2019) <https://ssrn.com/abstract=3304991> accessed 7 July 2020.
On the other hand, detection could also be carried out if one can estimate the competitive benchmark price in the market, which can be done with routinely used quantitative methods. 50 The reliability of such calculations will depend on many factors, such as the model and data used. If authorities make use of AI technology themselves, they could also improve their competitive benchmark estimations. 51 There is in addition an issue of detectability that is specific to the use of artificial neural networks: the transparency of the algorithm. Since the model can include a myriad of variables combined in random ways, it might not be straightforward to determine whether prices are set in a tacitly coordinated way. To solve this problem, the algorithm can be tested to determine whether the results are consistent with tacit collusion outcomes. Obligations of transparency can also be imposed, a topic further discussed below in the section 'Behavioural remedies'. 52 In addition to the issues mentioned, there is one point that was hardly debated before the advent of algorithmic pricing. The point concerns the perceived prevalence of instances of pure interdependent pricing. Even proponents of a broad prohibition concede that this phenomenon is circumscribed to a rare set of circumstances that include highly concentrated industries, homogeneous goods, symmetric cost structures across firms, and price transparency, among others. 53 This is consistent with the theory of supergames, which makes strong assumptions in order for firms to be able to find a focal price above the competitive benchmark, an issue already explained in this subsection.
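As an illustration of the benchmark estimation mentioned above, the following sketch fits a simple price regression on data from a period assumed to be competitive and compares predicted with observed prices afterwards. The data-generating process, the coefficients, and the uniform 2.0 overcharge are entirely synthetic; real benchmark studies rely on far richer demand and cost models.

```python
import numpy as np

# Synthetic sketch of a "but-for" price benchmark: fit price on cost and
# demand shifters over a period assumed to be competitive, then measure the
# gap between observed and predicted prices later on. All values invented.

rng = np.random.default_rng(0)
n = 200
cost = rng.uniform(5, 10, n)       # hypothetical marginal cost series
demand = rng.uniform(50, 100, n)   # hypothetical demand shifter

price_competitive = 1.2 * cost + 0.05 * demand  # assumed competitive relation

# Ordinary least squares on the competitive period.
X = np.column_stack([np.ones(n), cost, demand])
beta, *_ = np.linalg.lstsq(X, price_competitive, rcond=None)

# Suppose later observed prices carry a uniform overcharge of 2.0.
observed = price_competitive + 2.0
benchmark = X @ beta               # counterfactual "but-for" price
overcharge = observed - benchmark  # basis for a fine or damages estimate
```

The estimated overcharge is what a fine or damages calculation would build on; the hard part in practice is defending the model specification and the choice of the "clean" estimation period.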
One commonly debated question around the use of AI in pricing applications is whether instances of mere interdependence can become more widespread in the economy. The way the technology works can affect the following issues related to the stability of oligopoly profits: the speed with which prices change, irrationality in pricing decisions, and market uncertainty.
Algorithms augment the frequency with which a price can move. If this is coupled with greater price transparency, a consequence of the digitization of economic activity and not necessarily of the algorithms, price lags will disappear. As a consequence, the benefits of lowering prices will also vanish, as any decrease can be matched instantaneously. 54 The algorithm also introduces greater rationality into price decision-making because a computer does not respond in anger. 55 Algorithms can also be more rational than humans regarding the discount rate of future profits. As previously explained, patience is a prerequisite for achieving a price above marginal cost in equilibrium. If the use of algorithms eliminates human hyper-discounting, oligopoly profits would be more stable. 56 Another obstacle that oligopolists face in achieving stable supra-competitive prices is uncertainty. For an assessment of the optimal joint profit-maximizing price, each firm needs to have a good idea of competitors' costs and the structure of demand. Algorithms can perform well in modelling relationships between variables that affect supply and demand when there is enough data of good quality. For routine changes in demand and supply conditions, AI-powered software can do well in predicting whether changes in prices by other firms in the market are cheating attempts or just adjustments to new conditions. 57 If there are fewer mistakes regarding instances when a firm could believe it is being undercut, then the frequency of price wars would decrease. 58 Another point where oligopolists can face uncertainty, even under strictly symmetric conditions, is that of finding a focal price to which everyone can converge. The problem was explained above. In Bertrand infinite-period games, any price between marginal cost and the monopoly price can be an equilibrium. 59 Some commentators argue that, given this circumstance, overt communications might be needed in most instances in order to overcome this obstacle. 60 However, provided that the necessary data is at hand, artificial neural networks could perform well in predicting the price to which all oligopolists in a market are likely to converge. Therefore, the express exchange of mutual assurances might not be needed.

51 Ezrachi and Stucke (n 1(b)) 51. The authors propose an algorithmic pricing incubator to make simulations and discern whether firms are pricing cooperatively.
52 The lack of transparency does not, however, seem to be a problem in this AI application. Some companies that offer pricing and sales software emphasize the fact that they do not sell a black box, in order for their customers to be able to obtain valuable market insights, in other words, to know what drives certain results.
53 Tirole (n 32) 240-42. Kaplow argues that there is little empirical evidence to support the assertion made by Kaysen and Turner that successful coordination is extremely likely even in moderately concentrated industries. Kaplow points to research on the structure-conduct-performance paradigm that suggests a low likelihood of mere interdependent pricing in highly concentrated markets. See Kaplow (n 22) 476, n 46.
54 The decrease in time lags will be achieved due to both transparent market conditions and the speed at which algorithms can react. See Ezrachi and Stucke (n 1(a)) chapter 7.
55 Ezrachi and Stucke (n 1(b)) 5.
On the other hand, deep learning applications in pricing enable greater segmentation of customers, and therefore more price dispersion can be expected. This is indeed a factor that can adversely impact firms' ability to tacitly collude. 61 Although price discrimination makes tacit collusion more difficult, the two can still co-exist. Firms may compete by giving discounts to more active consumers while maintaining collusive prices for more passive ones. 62 The question of the increased pervasiveness of oligopoly profits through time and across industries is also one that can be better settled by empirical analysis. Some commentators provide instances of algorithms stabilizing prices in certain industries. 63 This is however not enough.
56 Mehra (n 1) 1328. As explained above, low impatience is a necessary condition for firms to break free from the competitive price trap in an infinitely repeated Bertrand game. If firms do not care for the future, the game de facto becomes a one-shot game and the only equilibrium is the competitive price.
57 Beneke and Mackenrodt (n 4) 126, 127. However, if market changes are for example of a kind for which the algorithm has not had enough data to learn from, such as the entry of new firms, the use of AI might not be able to stabilize oligopoly profits.
58 A similar point is made by Gal regarding firm asymmetries. AI can also aid oligopolists in tacitly finding a joint profit-maximizing price when cost structures across market participants are not homogeneous. See Gal (n 1) 84.
59 Based on this, Tirole remarks that in a way the theory of repeated games is 'too successful in explaining tacit collusion' and that 'the large set of equilibria is an embarrassment of riches'. See Tirole (n 32) 247.
60 Page (n 39) 190. In addition, the fact that evidence of overt communications also appears in cases related to industries that appear prone to tacit collusion indicates that express forms of conveying mutual assurances
There is a rich history of revenue management pricing in industries such as airlines and hotels that could provide good natural experiments to isolate the effect of the use of algorithms. Ittoo and Petit point out that average prices in the airline industry have been steadily decreasing since the introduction of revenue management and the deregulation era. 64 However, this is only a first approximation, at best. The time trend of average prices does not say anything about the effect that one particular variable may have had. Such a trend can also be the result of other changes, such as demand and cost fluctuations, deregulation of the industry, and ease of entry at the time. Such changes were in particular observed in the airline industry in the period analysed by Ittoo and Petit. 65 Even with little empirical evidence on how pervasive tacit collusion has become with the advent of algorithmic pricing, one can still analyse whether a legal standard can be set that in theory would discourage socially harmful uses of the technology without hampering the incentives to use it in welfare-enhancing ways. 66 The issue rests heavily on the detection issues analysed in this subsection but also on suitable remedies, a question that is analysed next.

IV. REMEDIES AND INSTRUMENTS TO CORRECT AND PREVENT ALGORITHMIC COLLUSION

The preceding sections have identified the competitive harm that can be caused by the use of price algorithms and ascertained that their use can lead to a supra-competitive price level in the market. Also, the difficulties and options involved in applying the substantive competition law as spelled out in Article 101 TFEU have been pointed out. Assuming now that the competition authority succeeds in collecting sufficient evidence to show a breach of Article 101 TFEU, the question arises as to which remedies and instruments are at its disposal in such cases. The standard for an optimal remedy is that it is suited to effectively address and counter the particular harm caused by the pricing algorithm.
In order to better identify such a remedy, the characterization of harm in algorithmic pricing cases outlined in Section III has tried to distinguish the harm which can be attributed to a traditional price cartel from the harm which is inherently caused by the use of pricing algorithms. For this reason, the analysis has not focused on the category of cases where the algorithm simply serves to implement, monitor, and enforce a price agreement concluded between two undertakings independently and separately from the use of a pricing algorithm. In such cases, the algorithm serves to complement a traditional price cartel and mainly carries the role of extending and intensifying the harm which can be attributed to this traditional competition law infringement.
In contrast, the above characterization of the harm which is to be attributed to pricing algorithms has pointed to the theory of oligopolistic markets and the nature of the technology. Oligopolistic markets are more likely to be affected by tacit collusion and, as a consequence, to exhibit a supra-competitive price level. The use of price algorithms, it has been argued, can, beyond the scenario where oligopolistic structures can be observed, render markets more likely to be prone to tacit collusion and, accordingly, to a supra-competitive price level. In other words, the use of pricing algorithms extends and multiplies the scenarios in which tacit collusion weakens the functioning of competition. A second feature of the harm that is caused by pricing algorithms is that it is linked to the technological design of the particular pricing algorithms.
Accordingly, with regard to remedies in algorithmic pricing cases the experiences from oligopolistic markets and from markets which are shaped by technologies can be drawn upon. However, it has to be noted that both the design of remedies for oligopoly cases as well as for technology-driven markets is burdened with high complexity. This is due to the fact that in oligopolistic markets it is difficult to attribute the negative effects of tacit collusion to a single undertaking. A similar problem arises with regard to the algorithmic pricing cases which are discussed in this article. Further, competition authorities are in general reluctant to interfere with the design of a technology and to implement measures which require ongoing supervision and thereby exhibit a regulatory character. However, the possible drawbacks of a particular remedy have to be balanced against the scenario where no measure is taken at all.
The legal standards for remedies in cases where an infringement of Article 101 TFEU has been found are set out in Regulation 1/2003. In addition, measures to alleviate negative effects of tacit collusion can be adopted within a merger decision based on the European Merger Regulation. Further instruments to remedy negative effects include conducting sector inquiries.

Remedies under Article 101 TFEU and Regulation 1/2003
Articles 7 to 10 of Regulation 1/2003 specify the remedies available to the European Commission. The following subsections analyse their possible application to instances of algorithmic tacit collusion.

Fines
The use of fines is supported by the theoretical underpinnings of oligopolistic pricing explained above in Section 'The classic debate on interdependent pricing and how algorithms might change it'. If the decision on which strategy to follow depends on the relative payoffs, then fines address this variable directly. Reinforcement learning, which works based on a function of rewards, 67 would be particularly sensitive to such incentives if the prospect of fines is embedded in the algorithm.
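To illustrate how an expected fine embedded in the reward function can tip a reinforcement learner away from collusive pricing, consider this minimal sketch. The payoffs, detection probability, and fine amount are all invented for illustration; real pricing agents operate on far richer state and action spaces.

```python
import random

# Minimal Q-learning sketch: one state, two price actions.
# All numbers are hypothetical. Colluding pays more per period, but an
# expected fine (detection probability x penalty) can be deducted from it.
PRICES = {"competitive": 1.0, "collusive": 3.0}
DETECTION_PROB = 0.5   # assumed likelihood the authority detects collusion
FINE = 5.0             # assumed penalty when detected

def reward(action: str, fine_enforced: bool) -> float:
    payoff = PRICES[action]
    if fine_enforced and action == "collusive":
        payoff -= DETECTION_PROB * FINE  # expected fine embedded in reward
    return payoff

def train(fine_enforced: bool, episodes: int = 2000, seed: int = 0) -> str:
    """Run epsilon-greedy Q-learning and return the learned price strategy."""
    rng = random.Random(seed)
    q = {"competitive": 0.0, "collusive": 0.0}
    alpha, epsilon = 0.1, 0.1
    for _ in range(episodes):
        if rng.random() < epsilon:          # explore
            action = rng.choice(list(q))
        else:                               # exploit current estimate
            action = max(q, key=q.get)
        q[action] += alpha * (reward(action, fine_enforced) - q[action])
    return max(q, key=q.get)

print(train(fine_enforced=False))
print(train(fine_enforced=True))
```

Without the fine, the agent settles on the collusive price; once the expected penalty is deducted from the collusive payoff, the competitive price becomes the learned strategy. This is the sense in which fines "address this variable directly".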
If fines are the preferred remedy, the incentives to use artificial neural networks will depend on their severity and on whether the expected pecuniary liability changes because AI pricing software was employed. If authorities consider the use of this technology a plus factor, or at the very least an aspect to take into consideration when setting enforcement priorities, then there is a risk that interventions will deter the use of AI pricing software even in instances where there has been no social harm, owing to the increased likelihood of false positives.
If, as advanced in Section III, governments can use artificial neural networks themselves to make an accurate prediction of the competitive benchmark, then this problem would be largely solved. Assuming then that the fine is calculated at the right amount (a strong assumption, since this also depends on an estimate of the likelihood of detection), artificial neural networks will still be used in pricing decisions to the extent that they confer other competitive advantages, such as better client segmentation or identifying variables associated with increases in revenue. 68 Courts and authorities are no strangers to the use of quantitative methods in antitrust litigation. Therefore, the use of tools like deep learning methods on the part of enforcers, although no easy task, should not be impossible. In addition, it must be borne in mind that authorities like the Commission have the power to request information from the relevant undertakings. Therefore, the quality of demand forecasting and simulations run by the EC would be less constrained by data limitations than that of the investigated firms, assuming that the Commission has the necessary assets and skilled labour.
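The benchmark estimation described above can be sketched in simplified form. The article envisages deep learning methods; a plain linear regression stands in here to keep the example short, and all figures are invented for illustration.

```python
import numpy as np

# Hedged sketch of a competitive-benchmark estimate. A linear model is
# fitted on data from a period assumed to be competitive; the fitted
# pricing rule then serves as the counterfactual for the investigated
# period, and the gap approximates the per-unit overcharge.
rng = np.random.default_rng(42)

# "Competitive period": price driven by marginal cost and a demand index.
n = 200
cost = rng.uniform(10, 20, n)
demand = rng.uniform(0.8, 1.2, n)
price = 1.1 * cost + 5.0 * demand + rng.normal(0, 0.5, n)  # competitive rule

# Fit the benchmark pricing relationship on the competitive period.
X = np.column_stack([cost, demand, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)

# "Investigated period": same cost/demand conditions, but observed prices
# sit 15% above the competitive rule (a stylized collusive mark-up).
obs_price = (1.1 * cost + 5.0 * demand) * 1.15
benchmark = X @ coef                      # counterfactual competitive price
overcharge = np.mean(obs_price - benchmark)
print(f"estimated average overcharge per unit: {overcharge:.2f}")
```

In practice the authority would fit a much richer model on data obtained through its information-request powers; the point is only that a fitted counterfactual price yields an overcharge estimate that can anchor the fine.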
Another point that needs to be addressed regarding fines concerns the nature of algorithmic pricing and how the dynamic pricing industry is organized as a result. Developing useful pricing tools based on machine learning requires a vast amount of data and expertise. This economic activity is therefore usually carried out by highly specialized companies that gather data from multiple sources to develop commercial solutions for their clients, who are the direct market participants selling products and services to end consumers or other businesses. Direct market participants thus do not usually design the pricing software themselves but purchase a licence from a supplier. Consequently, if the direct market participants are the only ones fined for collusive pricing achieved by the algorithm, one important category of actors is left out. The issue of whether the software suppliers should be liable and fined needs to be discussed as well. Since these are the market players that design the source of the harm, economic efficiency would be an argument in favour of assigning financial liability to them as well. Given that both suppliers and customers have a role in the design of a product, it can be desirable to establish a dual-liability system. The legal basis of such a system can be similar to that under which facilitators of horizontal agreements are held liable, as confirmed by the CJEU in AC Treuhand. 69 Under such a system, direct market participants would have the incentive to demand and monitor whether the dynamic pricing software is biased towards collusive outcomes, and software suppliers would have the incentive to design algorithms that avoid anticompetitive models.
In this respect, there are technical aspects that should be taken into consideration. In cases where reinforcement learning is used, the algorithm learns optimization rules by itself. One could therefore argue that in such cases it is hard to speak of fault on the side of the software provider or, even to a lesser degree, of the direct market participant. That is why Ezrachi and Stucke suggest considering strict liability rules, 70 under which firms would be liable regardless of intent or negligence. The advantage of strict liability is that it saves litigation costs on fault once the harm has been identified. The downside is overdeterrence of the use of software that optimizes pricing, which can have a substantial impact on firm profitability. Based on this, there is merit in analysing the benefits and costs of other liability rules to allocate fines. One fact to take into consideration is that firms might not be so helpless in avoiding collusive pricing. When one browses through offers of dynamic pricing, one of the selling points is that the algorithms help to avoid price wars, 71 which is another way of saying that prices can remain stable at higher levels. If software developers can actively take steps to design the algorithm in this way, a fine would at the very least discourage them from taking such steps, which increase the risk of collusive outcomes. Along the same line, it could also be argued that direct market participants can reasonably be expected to take measures, if not to prevent collusive outcomes, at least to detect them in time, since they are better placed to analyse pricing patterns in their markets in real time.
Finally, an advantage of fines is that they provide incentives to change behaviour without the need to address certain technical difficulties. Specifically, by imposing fines, the authorities do not need to articulate in detail the way in which the software needs to be coded. The fine rather sends the message that steps need to be taken to avoid collusive results; otherwise, financial penalties will follow.

Structural remedies
According to Article 7(1) of Regulation 1/2003 structural remedies can only be imposed in the absence of equally effective behavioural remedies or where an equally effective behavioural remedy would be more burdensome than a structural remedy.
This requirement shows that structural remedies are generally considered to be a stronger intervention than behavioural remedies in a proceeding applying Article 101 TFEU. In contrast, the European Merger Regulation does not spell out a general preference for behavioural remedies. This can be explained by the fact that the merger rules seek to guarantee the structural preconditions for effective competition in the markets. Structural remedies like divestitures can be used to address concerns of tacit collusion in several ways. A divestiture can aim either at creating a higher number of competitors or at rearranging the competitive dynamics between present market participants so that tacit collusion is made more difficult to achieve. Even though finding a buyer for the assets outside of the market to create a new competitor could be more difficult than simply redistributing them between incumbent firms, the former option can still be considered. In addition, when designing a structural divestiture remedy, the competition authority should aim to alleviate the negative effects of tacit collusion, for example by creating asymmetric market participants. In many oligopoly models, symmetry is an assumption that facilitates the strategic interaction of companies. Moreover, the optimal pricing decision of a company is determined not only by the pricing behaviour of competing companies but by a wide range of factors. These factors include the cost structure of an enterprise, distribution channels, and cooperation with other enterprises. Therefore, the same pricing algorithm might arrive at a different optimal price when it is applied to different companies. Asymmetries in the market might therefore prevent or lower the likelihood of collusive market outcomes through the use of pricing algorithms.
In addition, such structural changes may not only prevent future violations in the market but deter them from occurring in the first place. This is because the fear of subsequently facing a more competitive market structure, where profits will be lower, will weigh on the firms' decisions to price interdependently. 72 For this to be true, firms are assumed to be patient, in other words, to value future profits sufficiently. This is also a requisite assumption for stable interdependent pricing, and therefore the remedy would have greater effects on markets that are more prone to collusion. 73 For a structural remedy to apply, there must be assets that can be divested. 74 This is not feasible in a scenario where all cartel members have but one production plant. In addition, even when there are assets to be divested, one should consider the changes in cost that the remedy can cause. For example, in the presence of important scale economies, the divestiture of capacity will cause an upward pressure on prices that could offset any gain to be had from a more competitive game in the market. 75 In the case of algorithmic pricing, structural remedies could apply to markets such as airlines. The use of automated price decisions is common practice in this industry, with documented cases of horizontal agreements around the world. 76 In the case of airlines, the assets that could be divested are slots at the airports, which could be sold to airlines not operating the route in question. 77 When talking about divestitures as remedies for interdependent pricing, one aspect that has to be addressed is the unclear relationship between the number of competitors and the incentives to collude. 78 On the other hand, an aspect that is to a great extent settled is that of asymmetries. Therefore, as has been argued in the present article, economic theory would indeed support the use of divestitures when asymmetric cost conditions are introduced by the structural remedy. In addition, when considering the use of deep learning methods, the algorithms would, under these conditions, find it more difficult to find a focal price.
72 Joseph E Harrington Jr, 'A Proposal for a Structural Remedy for Illegal Collusion' (2017) 82 Antitrust L J 335, 341, 342.
73 ibid.
74 ibid 347.
75 ibid.
76 One could also think of the retail gasoline cases pointed out by Ezrachi and Stucke (n 1(b)) section I.ii. In this case, stations could be sold to entrants in order to promote a more competitive market structure.
As can be seen, structural remedies are an appealing option because, as mentioned in the introduction, they address the market characteristics that facilitate collusive results by algorithms without the need to address specific parts of the code, similar to what was explained above in the case of fines. That is not to say that a behavioural remedy ordering a firm to program the software not to play collusive games lacks advantages of its own, as will be explained below. The point here is simply to note the relative cost advantages of the different remedies, which can be assessed on a case-by-case basis.
There are some aspects that can make structural remedies a more attractive choice than fines and damages. 79 In order to set the optimal amount of financial penalties, enforcers need information on marginal cost (or at least the relevant conditions under the most competitive scenario that can be achieved given the prevailing market structure) and the likelihood of detection. Whereas predictions on the former can be refined with the use of deep learning methods by the authorities, it is not clear that the latter could so easily be estimated. This second estimation does not need to be carried out in the case of a structural remedy since the financial penalty associated with it is the decrease of future profits. Therefore, the problem of excessive fines and damages would be largely alleviated.

Behavioural remedies
Before the advent of artificial neural networks, the discussion centred on the infeasibility of remedies and the irrationality of an order to make price decisions without regard to competitors' reactions. Behavioural remedies have only been discussed regarding conduct that involves facilitating practices. 80 With AI algorithms, new options appear that should be addressed. For example, one remedy option could concern how the algorithm is programmed. An injunction would be couched in clearer terms if it addressed specific aspects of the code. Authorities could order firms to program their algorithms so as to play competitive instead of cooperative games. Simulations carried out in algorithmic incubators of the kind proposed by Ezrachi and Stucke could in theory be used to police the type of game being played. 81 As already mentioned at the beginning of this section, authorities are usually wary of interventions aimed at the design of the product. In the case of algorithmic pricing, however, the design in question is not that of the firm's product. It rather concerns how the firm automates its price decisions.
77 Harrington proposes that in this specific case, the slots could be sold to low-cost carriers. See Harrington (n 72) 357. This would introduce asymmetric conditions in this market, which as explained above would have a greater effect in correcting and deterring harm in the market.
78 Harrington argues that this could lead to error costs when applying the remedy, since the underlying theory is not settled. See Harrington (n 72) 346.
79 One relative advantage of divestitures that has been pointed out is that they do not affect a firm's liquidity. If a fine is particularly harsh it could jeopardize the viability of the firm. If capital markets were perfect, and if the firm is expected to be profitable in the future, this would not be a problem. Capital markets are, however, not perfect. Harrington (n 72) 341.
80 Page (n 39); Turner (n 38) 655.
Nonetheless, caution is still warranted so that the intervention does not discourage firms from using a technology that has efficiency potential.
If the benign properties of algorithms can be left unharmed when the software is programmed so as to avoid collusive outcomes, then an injunction or an order not to price interdependently can be feasible. The algorithm could be ordered to avoid profit-maximizing strategies that involve, for example, matching price hikes or raising prices in the expectation that other companies will follow suit.
A problem with such a solution is that in some circumstances detecting the use of anticompetitive rules in the algorithm might not be straightforward, which may increase the costs and efforts of designing and monitoring the remedy. 82 That is the case when deep learning networks are used not only to make market forecasts but also to automate the final decision on which price to charge. However, even in this case the algorithm can be made static (instructed to cease learning so as not to change) and tested. In this way, by observing its price outputs based on hypothetical data with which it is fed, one could reconstruct the decision rules and identify whether they are anticompetitive. 83 A related point would be developing guidelines to make the functioning of algorithms more transparent. This does not mean that providers of AI solutions should disclose their algorithms to the public. Rather, transparency should be ensured in the sense that the authorities are in a position to verify whether the software complies with best-practice programming standards. In other words, such transparency measures should aim at eliminating the black box problem.
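The "freeze and probe" test described above can be sketched as follows. Both pricing rules are hypothetical stand-ins for a frozen (no longer learning) pricing function; the probe simply checks whether the model weakly matches hypothetical rival price hikes, a tit-for-tat-style collusive pattern, rather than undercutting them.

```python
# Hedged sketch of auditing a static pricing function with hypothetical
# inputs. Both rules below are invented for illustration; in a real audit
# the probed function would be the firm's frozen algorithm.

def collusive_rule(rival_price: float, my_cost: float) -> float:
    # Matches the rival's price as long as it covers cost.
    return max(rival_price, my_cost)

def competitive_rule(rival_price: float, my_cost: float) -> float:
    # Slightly undercuts the rival, never pricing below cost.
    return max(rival_price * 0.95, my_cost)

def follows_price_hikes(pricing_model, my_cost: float = 10.0) -> bool:
    """Probe the frozen model with a grid of hypothetical rival price
    hikes and flag it if it (weakly) matches every hike above cost."""
    probes = [12.0, 15.0, 20.0, 30.0]
    return all(pricing_model(p, my_cost) >= p for p in probes)

print(follows_price_hikes(collusive_rule))
print(follows_price_hikes(competitive_rule))
```

A real audit would use far denser input grids and richer state (own cost shocks, demand shifts) to separate genuine adjustments from strategic matching, but the mechanism, observing outputs on controlled hypothetical data, is the one described in the text.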
Solutions focusing on transparency have been analysed in other areas where algorithmic decision-making can also lead to liability, such as infringements of anti-discrimination law. 84 The main mechanism of the proposed regulations is to establish a transparency obligation so that authorities can audit algorithms and establish whether factors such as race and religion played a role. The same logic can apply to pricing decisions, where the variable of interest would be strategic reactions to competitors' prices. 85 The specific proposal here is to enforce transparency obligations by establishing rebuttable presumptions against algorithms when the role of certain variables cannot be explained. The system could work as follows: when verifying within an investigation whether the algorithm in question is anticompetitive, after the investigating authority makes a prima facie case of collusion (based on evidence regarding pricing patterns, cost fluctuations, and other variables that rule out mere parallelism), a rebuttable presumption would arise so that the firm would bear the burden of proving that the algorithm does not employ collusive strategies. 86 Technical means of deciphering complex models created by the algorithm already exist, 87 and such a presumption would give firms the incentive to use them. The testing of algorithms can be performed by experts designated by both parties, to avoid perverse incentives on the side of the investigated firms, and the results could be introduced as expert testimony.
Here a point of caution is in order. Transparency of the algorithm could make it more vulnerable to reconstruction by rivals, which makes algorithmic collusion more likely. 88 Therefore, in formulating enforcement policy, authorities should take this likelihood of increased social harm into account. The risks of transparency could be offset, for example, by increasing the penalties associated with law infringements or by stepping up policing efforts. Both options have their own administrative costs, which should naturally be weighed in order to strike the right balance.
In addition, transparency obligations could also be limited to variables that aid the authority in making a determination of collusion. For example, transparency vis-à-vis the authority does not necessarily need to include a disclosure of the source code of the algorithm but could pertain to information which might serve as direct or circumstantial evidence to determine the possible role of the algorithm in a collusion scenario. 89 This might include information on the time of and reasons for the initial implementation of the algorithm and on subsequent changes to adapt its operation. Also, information on the input and training data, calibrations of the algorithm, its output, and similarities with other algorithms in the market might be useful for the authority to assess the risk of algorithmic collusion in the market. 90 Transparency might also relate to providing the results of testing the algorithms through different methods, like comparing input/output couples, simulations with predefined inputs, or tests in controlled settings. 91

Mergers and algorithmic pricing

The application of Article 101 TFEU to tacit collusion in oligopolistic markets is burdened with complexities and uncertainties. One reason is the difficulty of attributing the supra-competitive conditions in oligopolistic markets to the behaviour of an individual undertaking and of qualifying this behaviour as a concerted practice in the sense of Article 101 TFEU. Therefore, it has been argued that merger control would be a more effective instrument to address oligopolistic pricing. 92 The European Merger Regulation prohibits mergers which are likely to significantly impede effective competition. 93 A merger could, therefore, be stopped if it could make the market more conducive to coordinated effects. 94 Quite similarly, the use of pricing algorithms can make markets more prone to exhibit supra-competitive price levels.
Accordingly, in such situations the use of particular pricing algorithms should be considered as a factor in merger decisions, so as to reduce the need to apply Article 101 TFEU once the merger is completed. Mergers which raise competitive concerns can be approved if commitments are adopted which render the merger compatible with the common market. Commitment decisions can be based on Article 6(2) or Article 8(2) of the Merger Regulation 139/2004. The Commission notice on remedies 95 spells out general principles for the design of commitments. Commitment decisions can be of a behavioural or of a structural character.
A structural remedy such as a divestment is preferable in so far as some behavioural remedies require ongoing supervision. Further, the purpose of merger control is to safeguard the structural preconditions for a competitive market. The analysis of the viability of structural remedies for algorithmic pricing made in Section 'Structural remedies' is applicable here, with the difference that a divestiture or the blocking of a transaction within merger control proceedings acquires particular importance in that it may prevent harm from occurring in the first place and is therefore a good complement to the deterrent function of remedies.

Sector inquiries
It is important to note that to address tacit coordination by algorithms some authors propose alternative competition law routes other than including it in a price-fixing prohibition. 96 In this regard, Ezrachi and Stucke analyse other options because of the lacuna in the law on horizontal agreements, but do not propose that provisions such as Section 1 of the Sherman Act should be read so as to include mere interdependence achieved by pricing software. 97 In any case, such intervention tools still result (in some jurisdictions) in the use of structural and behavioural remedies and, therefore, the discussion of their suitability to solve the problem is similar to that in the context of the law on price fixing. With that in mind, some of the methods that Ezrachi and Stucke propose, and which can be viewed as remedies, are the following: (i) reducing the frequency of price changes in order to make price cuts more profitable, (ii) reducing price transparency for the algorithms but not for consumers, and (iii) testing whether market structure influences the ability to collude (relevant for merger review). The authors suggest that these measures could be tested in the algorithmic pricing incubator that they propose. 98 Mehra, for his part, argues that since the use of algorithms carries its own benefits, 99 a per se rule would be ill-suited to approach the problem. On the other hand, because of the same benefits, the author also concludes that a rule-of-reason analysis would not be appropriate either. Although Mehra has a point regarding the lack of merit of a blanket prohibition, his argument on the ill-suitability of a rule-of-reason approach does not take into consideration that such a rule originated precisely from the need to carry out a balancing of trade-offs when the net effect of a conduct is not a priori clear.
The author's argument may be grounded in the fact that such balancing is in practice not particularly manageable when direct evidence on effects is not at hand (and as a consequence the authorities are required to perform a qualitative analysis and make a subjective judgement on whether the benefits compensate the harm). Mehra therefore argues that the best approach may be a 'proactive shaping of industry behaviour through dialogue with stakeholders, targeted regulation, and/or norm generation', in other words, a combination of regulation and competition advocacy programs like the ones used by the FTC when addressing consumer privacy issues. 100 Article 17 of Regulation 1/2003 empowers the European Commission to conduct market inquiries into a particular sector of the economy or a particular type of agreement if competition may be restricted in these areas. In the course of such a sector, inquiry undertakings can be requested to supply the Competition Authority with information, in particular with regard to agreements, decisions, and concerted practices. To conduct sector inquiries more efficiently, the European Commission even disposes of the powers to request information, to take statements, and to conduct inspections, pursuant to Articles 18 to 21 of Regulation 1/2003. According to Articles 23 and 24 of Regulation 1/2003 the European Commission can even impose fines, for example for the provision of incorrect information. These powers serve to guarantee the effective execution of a sector inquiry. Some national competition authorities dispose of similar competencies to conduct sector inquiries. 101 However, the Commission has not been conferred powers to remedy market defects which have been identified as a result of a sector inquiry on a general basis. 98 ibid. 99 Mehra points out benefits from the use of algorithms in areas other than pricing. 
It is important to stress that when analysing a case on algorithmic pricing the authorities should focus only on the benefits of the pricing software and not on the benefits of artificial intelligence in general in the market. The reason is that a rule that addresses only the pricing aspect will not have an effect on the adoption of artificial intelligence in other business areas. Therefore, the benefits in these other areas should not enter into the balancing analysis. 100 Mehra (n 1) 1371. 101 The Bundeskartellamt, for example, can conduct market inquiries according to Section 32e of the Act Against Restraints of Competition (GWB).
The lack of such general remedial powers has been criticized. 102 In particular with regard to oligopolistic markets, where it is difficult to attribute the competitive defects of a market to individual market participants, such general remedial measures might be a useful instrument. Similarly, the European Commission could not, for example, prohibit the use of a particular kind of pricing algorithm on a general basis even if a sector inquiry has concluded that these algorithms have a generally pernicious effect on the competitive process in the market. However, the insights from a sector inquiry can be used for subsequent enforcement actions directed against individual undertakings.
A sector inquiry can deliver information on what kinds of pricing algorithms are used in a market, to what degree they are used, and what their competitive effects are. This information can be used to devise a theory of harm for an individual enforcement action. Market studies can also lead to recommendations regarding the design of algorithms.
The point of bringing up these alternatives to law enforcement is to emphasize that a discussion of remedies should not be undertaken in isolation, without regard for the relative merits of the other competition policy responses that are available. Market inquiries and proactive regulatory and advocacy programs could either coexist with or exclude the prosecution of anticompetitive behaviour. Should the authorities choose to expand the interpretation of what constitutes an illegal agreement between competitors to cover mere interdependence, the question remains which parts of the toolkit authorities should draw on in order to achieve the best results.
Can the market take care of itself?
Another valid question is whether the market can address the problem more efficiently than public intervention. In this respect, Gal and Elkin-Koren argue that the use of AI on the demand side to aid consumption decisions may develop as a counterbalance to market power that may be enhanced by algorithmic pricing on the supply side. Not only that, but algorithmic consumers, that is, software that automates purchasing decisions, can introduce parameters aimed at destabilizing oligopolistic market structures and even detect cartels. 103 However, as Gal herself admits, algorithmic consumers are only a partial solution. 104 The author points to the following limitations of digital butlers: (i) they can also enter into illegal agreements or abuse their market power, (ii) the supply of consumer algorithms may be dominated by firms that may not have the consumers' best interest at heart, and (iii) the supply side can take countermeasures against algorithmic consumers. 105 In addition, algorithmic consumers may not reach many markets where consumers do not behave according to the neoclassical model of the passive consumer. In markets where consumers are more active or preferences are not fixed, algorithmic consumers might never become widespread. As Gal and Elkin-Koren recognize, we humans may never be willing to rely on algorithms to make jewelry purchasing decisions. 106 It has also been noted that even if the use of algorithms on the side of consumers were to accelerate the consumers' response, this would not necessarily be sufficient to destabilize algorithmic coordination on the side of the sellers, since faster consumer responses do not necessarily increase the profitability of cheating on the collusive equilibrium. 107

V. CONCLUSIONS
Taking into account the nature of artificial neural networks, it is indeed likely that their use can expand the settings under which a joint-maximizing price can be set without the need to resort to overt communications. In addition to the risks commonly described in the literature, such settings include those in which there is enough market data for the algorithms to make good predictions about competitors' costs, about the cause of price changes (which could be either attempts to cheat or responses to changes in the market), and about a focal price among a wide array of options.
Therefore, before making a policy choice to outlaw price interdependence achieved by algorithms one has to take into consideration whether there are feasible remedies and how they can complement each other. Fines are a good candidate because they enter into the firm's calculation of profitability. Algorithms can be used in welfare-enhancing ways without necessarily introducing anticompetitive properties. Therefore, setting fines can indeed reduce social harm caused by algorithmic collusion by disincentivizing the use of software that leads to collusive outcomes without deterring beneficial uses of the technology.
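The way fines enter the firm's profitability calculation can be made concrete with a stylized, textbook repeated-game sketch of our own (it is not drawn from the article's sources, and all payoff figures are hypothetical): under grim-trigger strategies, collusion is sustainable only above a critical discount factor, and an expected fine per period of collusion raises that threshold, shrinking the set of markets where collusion is stable.

```python
def critical_discount_factor(pi_collusive, pi_deviate, pi_nash,
                             fine=0.0, detection_prob=0.0):
    """Critical discount factor above which collusion is sustainable
    in a stylized infinitely repeated game with grim-trigger strategies.

    Each period of collusion carries an expected fine
    (detection_prob * fine), which lowers the per-period value of
    colluding. Collusion is sustainable iff
        pi_c / (1 - d) >= pi_deviate + d * pi_nash / (1 - d),
    which solves to d >= (pi_deviate - pi_c) / (pi_deviate - pi_nash).
    """
    pi_c = pi_collusive - detection_prob * fine  # collusive payoff net of expected fine
    return (pi_deviate - pi_c) / (pi_deviate - pi_nash)


# Hypothetical per-period payoffs: collusive 10, deviation 15, Nash 5.
baseline = critical_discount_factor(10, 15, 5)                          # 0.5
with_fine = critical_discount_factor(10, 15, 5, fine=20,
                                     detection_prob=0.1)                # 0.7
```

In this toy example, an expected fine of 2 per period raises the critical discount factor from 0.5 to 0.7, so firms must be considerably more patient for collusion to remain profitable, while firms using algorithms for non-collusive purposes are unaffected.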
Another remedy option that could be used in conjunction with fines is divestitures. Their relative advantage over fines is that they can modify the structure of the market that made collusion possible. The focus of structural remedies should be to introduce asymmetries, since economic theory supports a relationship between collusion stability and symmetric conditions. The same cannot be said of market concentration: the relationship between concentration and performance is still a debated topic in the industrial organization literature. Another relative advantage of structural remedies is that they simplify the authority's estimation of counterfactuals. For example, the likelihood of detection ceases to be relevant for imposing the remedy. Damages, on the other hand, serve the purpose of compensating harmed consumers, whereas structural remedies only compensate consumers in so far as they make purchases in the future. Given these different functions, a reasonable approach is to use these remedies as complements to each other.
Before the advent of AI-powered pricing software, one of the main points of disagreement in the debate on whether to outlaw interdependent pricing was the feasibility of an order to refrain from tacitly colluding. In the case of algorithmic pricing this changes: the algorithm can be programmed so as not to employ collusive strategies. For this remedy to be feasible, there should be transparency standards for artificial neural networks used to automate pricing decisions. Since such transparency carries an associated risk of increasing collusion, authorities should take this into consideration when adjusting the fine (upwards). This could also justify increased policing efforts. 106 Gal and Elkin-Koren (n 103) 318. On the other hand, algorithmic consumers may have other limitations that may hinder their adoption even in markets like grocery shopping. The value of autonomy may not depend on the goods in question, since it can touch on sensitive topics (to what extent are we willing to let machines take over). 107 Jonathan B Baker, The Antitrust Paradigm (Harvard University Press 2019) 106.
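The idea that pricing software can be constrained by design admits a very simple illustration. The following sketch is purely hypothetical and assumes nothing about any real system: a compliance layer wraps the learning agent's output and caps it relative to a competitive benchmark (the function names, the benchmark, and the markup figure are all illustrative assumptions).

```python
def guarded_price(recommended_price, competitive_benchmark, max_markup=0.10):
    """Hypothetical compliance guard wrapped around a pricing
    algorithm's output.

    Whatever strategy the learning agent has converged on, the price
    actually posted is capped at the competitive benchmark plus a
    maximum markup, so a sustained supra-competitive price cannot be
    executed even if the agent has learned a collusive strategy.
    """
    ceiling = competitive_benchmark * (1 + max_markup)
    return min(recommended_price, ceiling)


# If the agent recommends 15.0 against a benchmark of 10.0,
# the guard posts 11.0 instead; a recommendation of 10.5 passes through.
capped = guarded_price(15.0, 10.0)
passed = guarded_price(10.5, 10.0)
```

In practice the hard part is, of course, estimating the competitive benchmark; the sketch only shows that, unlike a human oligopolist, software can be made to comply with an injunction mechanically.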
There are other options in the antitrust policy toolkit that can be used to address concerns of price coordination by algorithms, namely merger control and market inquiries. The former can prevent market structures that make it more likely for algorithms to find a focal price. Market inquiries can give the authorities valuable knowledge before they undertake enforcement actions.