
Digital Business Lawyer

Algorithmic pricing - the new competition law frontier?

The last year has seen a surge in interest in artificial intelligence (‘AI’), and algorithmic pricing in particular, and what it means for antitrust law, with competition regulators such as the UK Competition and Markets Authority (‘CMA’) and EU Competition Commissioner Margrethe Vestager taking an interest in this area, typically as part of ongoing work on the digital world more generally. The headline-grabbing concern is the possibility that AI and pricing algorithms could collude to fix prices automatically. A number of regulators submitted their views on this issue to the OECD’s June 2017 roundtable on Algorithms and Collusion. In this article, Stephen Wisking and Molly Herron of Herbert Smith Freehills LLP consider the regulatory stances taken so far and take an in-depth look at the views submitted by the European Commission to the OECD roundtable, whose position sets out in some detail the issues algorithmic pricing could pose for antitrust law.

In the last year artificial intelligence (‘AI’), and algorithmic pricing in particular, has become the hottest of hot topics for the antitrust community - debated at conferences around the world and filling the pages of journals. Competition enforcers have also focussed their attention on this area, with EU Competition Commissioner Margrethe Vestager warning that businesses “need to know that when they decide to use an automated system, they will be held responsible for what it does,” and chair of the CMA David Currie declaring that the CMA will work to “ensure that the rise of algorithms works to enhance competition, not close it down.”

Why has this interest arisen?

Firstly, digital commerce, mass data collection and algorithmic decision making (in particular allowing for rapid and automated price tracking and price resetting, as well as personalised offerings) have become commonplace in the business world (and are only expected to rise further). Software that can automatically change product prices based on observed competitor prices is now widely available. For example, the European Commission’s May 2017 E-Commerce Sector Inquiry final report1 noted that 50% of retailers tracked their competitors’ prices online, including through the use of ‘spider’ software which ‘crawls’ the internet. Of those retailers, 78% also adjusted their own prices in response, in some cases using software to do so automatically. Software is also utilised in vertical relationships: the Commission reported that 38% of manufacturers who track resale prices use software to do so automatically.
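To make the mechanics concrete, the kind of automated repricing rule described above can be sketched in a few lines. This is purely illustrative: the function name, parameters and pricing logic are hypothetical and do not reflect any particular vendor’s product.

```python
# Toy sketch of an automated repricing rule of the kind described above.
# All names and logic are hypothetical, for illustration only.

def reprice(own_price: float, competitor_prices: list[float],
            floor: float, undercut: float = 0.01) -> float:
    """Match the lowest observed competitor price (minus a small
    undercut), but never drop below the retailer's own price floor."""
    if not competitor_prices:
        return own_price  # nothing observed; leave the price unchanged
    target = min(competitor_prices) - undercut
    return max(round(target, 2), floor)

# Example: competitors observed at 9.99 and 10.49, own floor of 8.00
print(reprice(10.99, [9.99, 10.49], floor=8.00))  # -> 9.98
```

Run continuously against ‘crawled’ competitor prices, even a rule this simple produces the rapid, automatic price resetting the Sector Inquiry describes; the competition law questions discussed below turn on how such rules are agreed, disclosed or coordinated, not on the code itself.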

Secondly, competition enforcers have been focussing on the digital world generally, and on the collection and use of data in particular, as illustrated inter alia by the European Commission’s Sector Inquiry and its various investigations into Google, the May 2016 Franco-German Report on Big Data and an ongoing study on this topic by the Italian competition authority, and the German competition authority’s ongoing investigation into Facebook’s data practices2. The actions of private claimants have also thrown a spotlight onto this area, including the challenge to Uber’s ‘surge’ pricing algorithm in the US, which the claimants allege amounts to an implied horizontal price-fixing agreement.

Finally, academic attention on AI, algorithms and competition, in particular work by Professors Ariel Ezrachi and Maurice Stucke (whose recent text Virtual Competition warned of the “end of competition as we know it” and the inability of current antitrust tools to deal fully with the risks raised by AI), has spilled over into the mainstream and grabbed the attention of enforcers and practitioners. Many have started to question whether traditional competition law concepts and enforcement mechanisms, which assume human action, can be effectively applied to computerised systems (which have no fear of fines or imprisonment), and the impact on consumer welfare if antitrust law is falling short.

Others have argued that this level of interest is not warranted and that the concern about AI and algorithms - and the ability of the current competition framework to deal with their perceived anti-competitive effects - is overhyped (and/or that in any event that such technology can have pro-competitive effects, and can be used to disrupt and destabilise attempts at collusion or exploitative practices).

What are the potential competition law concerns?

It is the prospect of collusion through AI and pricing algorithms - where algorithms are used to automatically fix prices, horizontally (i.e. cartel behaviour) or vertically (i.e. resale price maintenance (‘RPM’)) - which has created the most headlines. In particular, the technological developments have raised a concern that competitors will be able to achieve and sustain collusion without any human contact. The legal question which arises is whether and when such behaviour would fall within the traditional EU law concepts of ‘agreement’ and ‘concerted practice’ (and equivalents in other jurisdictions), such as to fall within the scope of the prohibition on anti-competitive agreements within Article 101 of the Treaty on the Functioning of the European Union (‘TFEU’) and similar national rules. Where it does not (or would not without doing violence to the provisions), the policy question is whether these concepts should be extended to cover such practices and/or whether other regulatory responses are needed.

Attention has also been given to a second set of concerns centring on the use of personalised pricing (which has been a topic of interest for the CMA in particular). This includes where a company collects personal data and quotes different prices to different people at different times based on an algorithmic analysis of this data (for example in light of what the consumer is judged as prepared to pay), which some have categorised as behavioural discrimination. Such practices might be covered by existing laws on abuse of dominance (such as Article 102 TFEU), for example where this amounts to illegitimate price discrimination or excessive pricing, but this would only be the case where the company has market power. The remainder of this article focusses on the first set of concerns.

What is the view of the competition law enforcers?

Competition authority officials have been increasingly commenting on the issues, both within Europe (for example Margrethe Vestager’s March 2017 speech3 and David Currie’s February 2017 speech4) and the US (see the diverging levels of concern expressed by Commissioners Maureen Ohlhausen5 and Terrell McSweeny6 of the Federal Trade Commission (‘FTC’)).

However, the most detailed indications to date of the key authorities’ positions are those made for the purposes of the roundtable Algorithms and Collusion held by the OECD’s Competition Division in June 20177, to which (amongst others) the European Commission, the CMA and the FTC (together with the Department of Justice (‘DoJ’)) all submitted contributions.

The authorities, whilst recognising that the issue needs to be watched closely, do not appear to have dramatic concerns about the advent of AI and algorithmic pricing, nor do they suggest that they need to rush to overhaul their existing enforcement toolboxes. They all have slightly different areas of focus. We outline and comment below on the European Commission’s contribution.

What position does the European Commission take in its OECD submission?

Vertical issues

Reflecting the observations in its E-Commerce Sector Inquiry final report, the Commission highlights the fact that price monitoring algorithms can be used by suppliers to detect deviations from fixed, minimum or recommended resale prices, and then retaliate, thus reducing deviations and contributing to the effectiveness of RPM practices.

It also flagged the potential for higher prices to spread to retailers not engaged in RPM, due to price monitoring or matching algorithms tracking the prices of those retailers engaged in RPM. It is notable in this respect that the Commission is currently investigating Asus, Denon & Marantz, Philips and Pioneer for alleged online RPM practices, including whether this may be aggravated due to the use by online retailers of automated repricing software8.

The Commission therefore appears to consider that pricing algorithms may make RPM practices more effective and/or aggravate their effects, rather than that their use could give rise to novel methods of fixing retail prices that may not be caught by the existing legal provisions.

Horizontal issues

The Commission’s focus is on practices between competitors. It first reiterates the traditional distinction between explicit collusion through agreements or concerted practices (involving some element of communication or contact between competitors), and tacit collusion through companies intelligently and unilaterally adapting to the conduct of their competitors. Although the latter can also lead to higher prices (at least in oligopolistic markets), only the former is caught by Article 101 TFEU. The Commission concludes that “to a large extent” pricing algorithms can be analysed by reference to this traditional categorisation, and applies this to four potential types of practice.

Algorithms that are used to monitor prices agreed between competitors: The Commission states that use of price monitoring algorithms to detect cartel deviations could form part of the Article 101 TFEU infringement, and may lead to an increase in fine due to “rigorous” implementation of the agreement. This is an example of one of many practices that could cause issues under Article 101 TFEU if carried out by algorithms, as it would if carried out (less efficiently) offline.

Algorithms that are used to implement pre-existing explicit collusion: The Commission states that if firms were to engage in explicit collusion (through any means of communication), and then use their own pricing algorithms to implement the agreement, this would be no different from setting prices manually to implement agreed prices.

Indeed, although these are not examples cited by the Commission, this reflects the facts of the first antitrust investigation concerning price algorithms, conducted by the DoJ and the CMA into the ‘wall décor’ industry. In this case, sellers of posters on Amazon Marketplace agreed not to undercut each other’s pricing, and to implement this agreement via the use of the same algorithmic repricing software (essentially agreeing to programme rules into the software to ensure price matching). The CMA fined one of the companies involved - Trod - for participating in an anti-competitive agreement, and disqualified Trod’s Managing Director Daniel Ashton from acting as a director9. In the US, the DoJ secured a guilty plea from Daniel Ashton and his opposite number David Topkins for entering into a price fixing conspiracy10,11.

The Commission refers to the 2016 ruling of the EU Court of Justice in Eturas12, which further demonstrates that the existing rules will catch a scenario where there is contact between firms, even where this is via automated means.

This case concerned an online booking system used by 30 Lithuanian travel agencies. The administrator implemented a technical restriction in the system limiting the discount rates the agents could offer (unless overridden), and sent an electronic notice announcing this to the users’ system inbox. The Court made it clear that while the dispatch of the message was not, itself, sufficient to prove that the agents were aware of its contents, and thus that concertation had occurred, where other evidence demonstrated that an agency was aware of the message it would have engaged in a concerted practice if it continued to utilise the system (without ‘public distancing’). The Court, consistent with current law, focussed on the firms’ awareness of the pricing restriction, not its implementation through the system.

Algorithms that are used to engage in explicit collusion: The Commission states that competitors colluding about using particular repricing parameters and strategies in their pricing algorithms could fall within Article 101 TFEU, whether directly or through a third party (i.e. in a ‘hub and spoke’ scenario), which appears clear.

The Commission further states that if competitors were to outsource their pricing decisions to one and the same third party, this would also raise Article 101 TFEU concerns; however, it can be queried whether this would hold absent at least awareness that the others were doing so.

The Commission also flags the possibility that pricing algorithms might be used to signal pricing intentions or proposals to competitors. Public price signalling is another current ‘hot topic’ in competition law, as exemplified by the Commission’s recent investigation into container shipping companies announcing intended rate increases on their websites/in the press13. The Commission states that if signalling were to take place through pricing algorithms, for example by coded messages that are ‘understood’ by the other algorithm, this would also raise Article 101 TFEU concerns.

This, however, raises the question of the precise circumstances in which such signalling would constitute a concerted practice. In the offline world it is also challenging to identify when price announcements cross the line from genuinely unilateral action (and intelligent adaptation) into concertation (perhaps reflected in the fact that the container shipping probe was settled by commitments rather than with an infringement decision). This will be a key question on the facts, including in light of how the algorithms are introduced and how these have been programmed, how they interact, and the awareness between companies of their competitors’ use of and reactions to the algorithms. As the Commission later observes, it is “not obvious that more sophisticated tools through which a firm merely observes another firm’s price and draws its own conclusion would qualify as ‘communication’ for Article 101 purposes.”

The Commission finally asks whether pricing algorithms could, without directions to do so from their ‘humans,’ engage in explicit collusion with each other (the possibility of AI leading to algorithms ‘self-learning’ to engage in ‘robo-collusion,’ for example as a result of a profit maximising instruction, being one of the key concerns raised in the literature)14. It raises the prospect that this could lead to a concerted practice or agreement being reached between algorithms.

Reflecting the position previously laid down by Margrethe Vestager, the Commission states that it is up to the firms using algorithms to ensure that their algorithms do not engage in illegal behaviour (presumably by writing in code preventing this). This is effectively treating the software as akin to a rogue employee - the Commission concluding that “like an employee or an outside consultant working under a firm’s ‘direction or control,’ an algorithm remains under the firm’s control, and therefore the firm is liable for its actions.” However, it is not clear whether this could be sustained legally - can a computer programme acting without its owner’s instruction or awareness or even anticipation be said to act for the company in the same way?

Algorithms that are used to engage in tacit collusion: Finally, the Commission queries whether algorithmic pricing is making tacit collusion more pervasive and more effective, and if so what is the appropriate response (given that, absent concertation, this would not fall within Article 101 TFEU, despite reducing the incentives to lower prices).

The Commission recognises that algorithmic pricing can enable speedy and stable price matching, reducing incentives to cut prices, including in markets where it would otherwise be unlikely to take hold. On the other hand it flags factors which may defeat tacit collusion depending on the facts, for example where products (including delivery conditions) are heterogeneous, or where firms decide that competitive pricing and larger sales volumes are preferable to higher pricing. It also raises the possibility that the market may throw up technological solutions allowing consumers to defeat algorithm enabled tacit collusion (although warning that this may come too late).

The Commission is distinctly lukewarm about extending the concept of concerted practices to cover such practices (although not ruling out the “possibility that more creative and novel types of interactions could in certain situations meet the definition of ‘communication’”), and of reconsidering the current position under which tacit collusion is deemed legal. Other authorities have taken a similar view, although flagging the possible use of other tools to deal with negative effects on competition (including market studies, consumer protection laws, and merger control).

Where does this leave us?

Many would agree with Margrethe Vestager’s conclusion that we “certainly shouldn’t panic about the way algorithms are affecting markets,” but rather watch closely how these are developing and be vigilant to their potential use for anti-competitive purposes.

Clearly, where algorithms are used to implement collusion which would be problematic if implemented manually, the existing rules should be enforced and the conduct sanctioned. Equally, it is difficult to see why, if a practice is legal when implemented manually (for example a company tracking a competitor’s prices by driving to its stores or checking its website, and then adjusting its own), the position should differ when it is implemented - albeit in real time - by price monitoring and resetting software. There will nevertheless be a host of practices which fall into a ‘grey area’ in between, and companies will need to be particularly careful about disclosing the use of algorithmic solutions to competitors, and about any aspect of a programme’s design which may be characterised as price signalling.

In the meantime it can be expected that the Commission, the CMA and other enforcers will invest time and resources in understanding the software and other technology being used by businesses, and will be keen to bring enforcement action in this area15.

Stephen Wisking Global Head of Practice - Competition, Regulation and Trade

Molly Herron Senior Associate

Herbert Smith Freehills, London


2. There are also a multitude of other recent or ongoing investigations of various forms at EU and national level, including those concerning ebooks, online hotel booking, and digital comparison tools.

14. The DoJ/FTC, on the other hand, in their submission dismiss these scenarios as too speculative to consider at this time.

15. It is notable in this regard that the FTC has established an Office of Technology Research and Investigation, charged in part with overseeing algorithm powered commerce (albeit within the consumer protection rather than the competition bureau). The CMA has also announced its plans to invest in improving its technological expertise and in new digital forensic tools and investigative technologies.
