Date: December 2017 | Sectors: Communications, media & payment systems; Energy retail & consumers; Energy | Expertise: Regulation & competition

What should regulators make of big data?

By Lewis Heather

In almost every market one can think of, people are talking about the application of big data analytics and machine learning. Energy and other utilities are no different. In this article we highlight some issues that we think regulators should keep in mind.

Context

Energy and other utilities may not traditionally be viewed as data-rich industries, yet innovative uses of large data sets and advanced analytics are already starting to emerge. In 2016, Oracle announced its acquisition of Opower, a US-based software firm. Opower built a business on accessing large volumes of smart meter data and sharing insights with electricity companies to help them target consumers with personalised tariff offerings (1). In New York, Drift (2) charges customers a flat fee and uses machine learning to predict how much electricity they will need, buying power in the day-ahead market to match the identified demand profiles.

Many UK-based energy and software firms are also building up their data capabilities as they anticipate the opportunities emerging from the roll-out of smart meters. Energy companies and the wider energy ecosystem will soon be collecting and using data in a multitude of ways.

The significant potential to benefit consumers is, rightly, making regulators think about where they might need to remove barriers. For example, Ofgem's move to principles-based regulation, as well as its new 'Innovation Link', shows its awareness that some existing, overly prescriptive rules may obstruct innovation.


While there may be numerous and significant opportunities for consumers to benefit from the 'datafication' of the energy sector, this article considers characteristics of big data that could lead to consumer harm. Regulators and competition authorities should look for opportunities to remove or amend regulation that may obstruct innovation, but they should simultaneously assess the potential for consumer harm in order to guard against it effectively.

In this article, we do not add to the debate on data ownership and protection, on which much has already been said. Instead, we focus on concerns that are more socio-economic in nature. While we draw on examples from the energy market, our analysis can equally be applied to other utilities and more widely.

The dark side of big data

Training data and 'the algorithmic trap'

Much of the value of big data, and of machine learning in particular, lies in the ability to identify relationships and to forecast or infer outcomes beyond the reach of human analysis. However, this fundamental advantage also raises an important concern: it can be extremely difficult to identify and interpret a causal pathway explaining an established relationship. Big data algorithms are often a black box, even to their designers.

This lack of transparency can have consequences. The relationships an algorithm identifies may be real, but they may also be the product of pre-existing biases or of the designer's imperfect specifications. For example, evaluation of the use of algorithms in the American judicial system has identified the potential for racial prejudice in determining parole outcomes: holding all else equal, the algorithm is less likely to recommend parole for black offenders (3). The algorithm is not inherently racist by design. Instead, it has been trained on historical data in which a racial bias in granting parole already existed. Even more disconcerting, as these algorithms gather more data they will be trained on outcomes that they themselves previously determined. In this way the bias becomes circular, and may be reinforced or even exacerbated over time. This could create an 'algorithmic trap' that catches those individuals who were initially victims of human bias. These individuals risk remaining subject to ever-increasing bias, this time hidden deep within an opaque decision-making algorithm.

In cases such as the American parole system, where the bias has been identified, the problem can be resolved, although possibly only after harm has been caused. Identifying such biases in the first place, let alone the reasons behind them, can be much more challenging. Thus, when the 'computer says no', we may not even realise that a bias is present.

Returning to the energy industry, one can see the potential for a similar algorithmic trap. Suppose suppliers start to use algorithms to decide which customers to target with switching offers, while trying to avoid customers they consider at high risk of falling into financial difficulty. The algorithm may identify a characteristic, such as geographic location or age, as an influencing factor, making it less likely (all else equal) that people with that characteristic are engaged in the market. Being less likely to switch to better deals makes it more likely that they fall into financial difficulty on their current tariff, which feeds back into the algorithm's training data and creates the same circular bias witnessed in the American parole system. Again, no one may notice this happening.
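
To make the feedback loop concrete, below is a minimal simulation sketch in Python. Everything in it is invented for illustration: two customer groups with identical underlying risk, historical labels biased against one group, and a deliberately crude 'model' that scores each group from observed outcomes.

```python
# Minimal sketch of an 'algorithmic trap': a model retrained on outcomes it
# helped to shape locks in a bias present only in the historical labels.
# All numbers are illustrative, not calibrated to any real market.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # two otherwise identical customer groups
true_risk = np.full(n, 0.20)         # identical underlying risk for everyone

# Historical labels carry a human bias against group 1 (+10 points).
observed = true_risk + 0.10 * group

for round_ in range(5):
    # Crude 'model': estimate each group's risk from observed outcomes.
    est = np.array([observed[group == g].mean() for g in (0, 1)])
    # Suppliers only target (prompt to switch) groups estimated below 25%.
    targeted = est[group] < 0.25
    # Untargeted customers stay on poor tariffs, which genuinely raises
    # their difficulty rate next period: the feedback loop.
    next_risk = np.where(targeted, true_risk, true_risk + 0.10)
    observed = rng.binomial(1, next_risk).astype(float)
    print(f"round {round_}: estimated risk, group 0 = {est[0]:.2f}, group 1 = {est[1]:.2f}")
```

Both groups are identical by construction, yet the group penalised in the historical labels is never targeted, so its measured risk never falls: the initial human bias becomes self-fulfilling, and nothing in the loop flags it.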

Pricing algorithms and collusion

Algorithms can also be developed to optimise the prices that companies charge to their customers. Among other things, these pricing algorithms may take into account the pricing strategies of their competitors. As Ariel Ezrachi and Maurice E. Stucke point out in their book 'Virtual Competition' (4), this could lead to new forms of explicit or tacit collusion between competitors. Those who are willing to collude explicitly could 'delegate' collusion to their algorithms. By collecting data on the prices of fellow cartel members in close to real time, these algorithms will be able to effectively monitor and punish deviations from the collusive agreement. In turn, this would dramatically reduce any incentive to deviate from the agreement in the first place, strengthening the collusive equilibrium. Furthermore, the 'black box' nature of the algorithms may make it difficult for competition authorities to identify, observe and punish collusive agreements.
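
As a sketch of what 'delegating' collusion to a machine might look like, here is a stripped-down trigger-strategy pricer. The prices and punishment length are hypothetical; the point is only the mechanism of automatic monitoring and punishment.

```python
# Hypothetical sketch of 'delegated' collusion: a pricing bot holds an
# agreed price, watches rivals' prices in near real time and punishes any
# undercutting with a spell of competitive pricing. Prices are invented.
COLLUSIVE_PRICE = 100.0
COMPETITIVE_PRICE = 60.0
PUNISHMENT_PERIODS = 48          # e.g. one day of half-hourly periods

class TriggerPricer:
    def __init__(self):
        self.punishing_for = 0

    def next_price(self, rival_prices):
        # Any rival undercutting the agreed price triggers punishment.
        if any(p < COLLUSIVE_PRICE for p in rival_prices):
            self.punishing_for = PUNISHMENT_PERIODS
        if self.punishing_for > 0:
            self.punishing_for -= 1
            return COMPETITIVE_PRICE   # price war: deviation is unprofitable
        return COLLUSIVE_PRICE         # no deviation observed: hold the line
```

Because punishment is automatic and near-instant, a deviator gains at most a brief burst of extra sales before the price war starts, which is exactly what removes the incentive to deviate in the first place.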

Perhaps even more worrying is the possibility that algorithms may naturally fall into collusive outcomes, even without the express intent of their designers. This could happen where a number of competitors source price-setting algorithms with similar designs (e.g. because they all use the best algorithm provider). Trained on very similar data, these algorithms may develop almost identical pricing strategies, preventing truly competitive outcomes.

Clever algorithms that use machine learning and have a profit-maximising objective may even learn to collude on their own, without their designers' intent. Through learning over time and reacting to competitors (who may also use machine learning), these algorithms may discover that the best way to maximise profit is to align with competitors' pricing strategies and set a collusive price. In turn, and perhaps with a bit of trial and error, the competitors' algorithms may do likewise. In this example, even the designer may not realise that their algorithm has established a collusive outcome. If inputs are complex and opaque, the price may appear from the outside to constitute a competitive equilibrium. Such algorithms may also learn to evade the detection algorithms developed by competition authorities (who may not have the funding to invest in the same level of sophistication as the companies they oversee). What is more, even if it is detected, existing competition law may not treat collusion that arises without the express intent of a human designer as illegal.
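
This setting can be sketched in toy form: two Q-learning agents repeatedly set prices, each seeing only the rival's last price. The price grid, demand function and learning parameters below are all invented, and whether such agents end up above competitive levels is sensitive to those choices; the sketch only shows the structure of the concern.

```python
# Toy version of the kind of setting studied in work on algorithmic
# collusion: two Q-learning agents repeatedly set prices, each observing
# only the rival's last price. All parameters are invented.
import numpy as np

PRICES = np.array([1.2, 1.6, 2.0])   # hypothetical price grid
COST = 1.0                           # common marginal cost
rng = np.random.default_rng(1)

def share(p_own, p_rival):
    # Logit-style demand share: the cheaper firm sells more.
    return np.exp(-2 * p_own) / (np.exp(-2 * p_own) + np.exp(-2 * p_rival))

# One Q-table per firm: Q[state, action], state = rival's last price index.
Q = [np.zeros((len(PRICES), len(PRICES))) for _ in range(2)]
state = [0, 0]
alpha, gamma, eps = 0.1, 0.9, 0.05

for _ in range(100_000):
    acts = [int(rng.integers(len(PRICES))) if rng.random() < eps
            else int(np.argmax(Q[i][state[i]])) for i in range(2)]
    for i in range(2):
        profit = (PRICES[acts[i]] - COST) * share(PRICES[acts[i]], PRICES[acts[1 - i]])
        new_state = acts[1 - i]      # next state: rival's current price
        Q[i][state[i], acts[i]] += alpha * (
            profit + gamma * Q[i][new_state].max() - Q[i][state[i], acts[i]])
        state[i] = new_state

print("learned prices:", [float(PRICES[int(np.argmax(Q[i][state[i]]))]) for i in range(2)])
```

Nothing in this code aims at collusion: any price alignment that emerges comes purely from each agent maximising its own profit against the other's learned behaviour, which is what makes intent so hard to establish.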

In the energy market, where disengaged customers are relatively prevalent and where companies' pricing algorithms may draw on many of the same inputs, inadvertent collusion may be a real possibility.

Behavioural price discrimination

Algorithms also increase the potential for more dynamic and personalised pricing. In some markets, we are already losing the sense that a product or service has any kind of 'true price'. We may become increasingly used to pricing that depends on the time, the context and even our personal characteristics and behaviours. In energy, smart metering and Ofgem's half-hourly settlement programme (5) may open the door for suppliers to offer dynamic and bespoke energy tariffs.

Of course, there are also upsides to this. Consumers may face energy prices that are more reflective of the costs they impose on the energy system. They may have strong incentives to consume in ways that can lower costs for all. Those who are able to respond to system needs may have a significant opportunity to gain a discount on their bill.
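
For concreteness, here is a toy comparison of a flat tariff with a half-hourly tariff that has an evening peak. All prices and consumption figures are invented; the point is that the same total consumption can be billed very differently, and that shifting load away from the peak is rewarded.

```python
# Toy half-hourly billing: 48 settlement periods per day, with a 3-hour
# evening peak. All prices and consumption figures are invented.
prices  = [0.12] * 34 + [0.30] * 6 + [0.12] * 8     # £/kWh per period
usage   = [0.2] * 34 + [0.8] * 6 + [0.3] * 8        # peaky household, 14 kWh
shifted = [0.2] * 34 + [0.3] * 6 + [0.675] * 8      # same 14 kWh, peak avoided

def bill(prices, usage):
    return sum(p * q for p, q in zip(prices, usage))

print(f"flat tariff at 16p/kWh:     £{0.16 * sum(usage):.2f}")
print(f"half-hourly, peaky usage:   £{bill(prices, usage):.2f}")
print(f"half-hourly, shifted usage: £{bill(prices, shifted):.2f}")
```

The peaky household pays more than under the flat tariff while the flexible one pays less: the cost-reflective signal rewards exactly the responsiveness described above.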


But regulators should again be mindful of the full range of potential impacts. Algorithms will become increasingly able to work out our willingness to pay for energy in real time, allowing suppliers to extract more and more of our consumer surplus. Moreover, dynamic pricing may be designed not only to be cost-reflective but also to reflect our behavioural and situational context, charging us more at the times when we value energy most. Taking this to its profit-maximising conclusion, we might find that our electricity prices are hiked as the anthems are sung at the World Cup final, or that our gas prices go up just as we are about to cook dinner.

Just as importantly, to maximise profit the algorithm would set the highest prices for the customers with the lowest price elasticity. It is not unreasonable to expect that those with low elasticities will often be the more vulnerable members of society, either because they depend on services that require energy, or because they are unaware of, less able to afford, or unable to make use of outside options (such as self-supply or storage). Some of us may be able to switch to our home batteries as prices spike, while those without alternatives will have to pay up or miss out. This raises serious distributional questions and could intensify inequality.
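
The mechanism at work here is the textbook inverse-elasticity (Lerner) rule: a profit-maximising personalised price satisfies (p - c)/p = 1/|e|, so the markup rises as price elasticity falls. A tiny illustration, with invented costs and elasticities:

```python
# Inverse-elasticity (Lerner) pricing: markup rises as elasticity falls.
# The cost and elasticities below are invented for illustration.
def optimal_price(cost, elasticity):
    # elasticity is negative; a finite price requires |elasticity| > 1.
    return cost / (1 + 1 / elasticity)

cost = 0.10  # £/kWh, hypothetical marginal cost
for label, e in [("flexible customer", -4.0), ("captive customer", -1.25)]:
    print(f"{label} (elasticity {e}): price £{optimal_price(cost, e):.2f}/kWh")
```

The 'captive' customer's markup is several times the flexible customer's despite identical underlying costs, which is the distributional worry in a nutshell.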

Considerations for regulators

Regulators, competition authorities and policy makers have much to consider as data is increasingly used to set prices, design tariffs, classify consumers and optimise business strategy. In response, they may be drawn to some of the following policy options:

  • Holding designers accountable for the outcomes of their algorithms and requiring that the causal pathway be sufficiently transparent (or capable of being made so). This may help avoid the 'algorithmic trap', in which algorithms are inadvertently driven by ethically undesirable features. However, making algorithms understandable may sacrifice much of their predictive power, so a trade-off between transparency and effectiveness may be required.
  • Requiring prices set by algorithms to be demonstrably cost-reflective. This would mitigate the problem of algorithm-driven collusion and would limit the ability of behavioural price discrimination to disadvantage less price-elastic consumers. However, it would represent a further step change in regulatory price intervention. The transparency and simplicity needed to demonstrate cost-reflectivity would also temper the potential benefits that algorithms could bring, and it would create monitoring and enforcement challenges.
  • A more targeted intervention might limit behavioural price discrimination and ensure that the benefits of big data are passed on to all, regardless of their ability to engage through technologies, tariffs and behavioural preferences. A regulator may be inclined to adopt such an approach as it would limit the ability of firms to extract consumer surplus, but this would come at the expense of economic efficiency, constraining total surplus. An alternative would be to allow behavioural price discrimination but to redistribute the benefits through other means, such as social security measures or well-designed broader taxation. This would allow total surplus to be maximised while protecting against undesirable distributive consequences.

The clear scope for consumer benefit as well as harm means regulators must tread a careful path between protecting consumers and facilitating innovation. Some policy options that appear attractive may restrict innovation, limit its potential or have wider impacts on market design. The 'black box' nature of algorithms reinforces the need for regulators to be proactive in developing their strategies. Once consumer harm starts to materialise, it may become increasingly difficult and expensive to correct. We may not even know about it.

References:

(1) https://www.oracle.com/corporate/acquisitions/opower/index.html

(2) https://www.joindrift.com/

(3) https://www.washingtonpost.com/opinions/big-data-may-be-reinforcing-racial-bias-in-the-criminal-justice-system/2017/02/10/d63de518-ee3a-11e6-9973-c5efb7ccfb0d_story.html

(4) http://www.hup.harvard.edu/catalog.php?isbn=9780674545472

(5) https://www.ofgem.gov.uk/electricity/retail-market/market-review-and-reform/smarter-markets-programme/electricity-settlement
