Category Archives: MPhil in Technology Policy

Algorithmic Bias and Facial Recognition

As I mentioned in my last-minute slide about Clearview AI and facial recognition technology (FRT), I think this is a great case of an emergent issue where, for better or worse, firms have had both considerable ability to shape the future of the technology and the potential to bury, undermine or delay its rollout in this up-and-coming sector.

The problem is that the firms most capable of recognising the regulatory advantages of their proprietary technologies, and of influencing policy-makers and the wider public debate, will shy away from involvement in an area they deem problematic and that may lead to wider reputational damage.  In 2018, Amazon began to market its cloud-based facial recognition service to law enforcement, drawing on databases containing tens of millions of faces, but, quite infamously, its technology mismatched 28 members of the US Congress against a database of mugshots of people who had been arrested.

The use of FRT for law enforcement and security has raised the greatest concerns, relating to possible misuse by police, privacy (which connects back to our discussion of data safety in Week 6) and bias. There have been longstanding worries, for example, that the systems can be trained in a way that encourages or reinforces bias, whether through selection effects in the training databases themselves or through the choices of the humans designing the algorithms.  For example, an MIT study in early 2018 demonstrated that market-leading systems from firms such as Microsoft, Amazon, IBM, and Megvii (Face++) produced accurate results when identifying white or lighter-skinned males but were markedly less accurate when identifying darker-skinned women, a pattern later shown to be industry-wide in a comprehensive 2019 study by the National Institute of Standards and Technology.
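The kind of disparity these audits report can be made concrete with a small sketch. This is purely illustrative: the data below are synthetic, and the error figures are hypothetical stand-ins echoing the rough magnitudes reported for the worst-performing systems, not the results of any actual audit. The idea is simply that bias is measured by computing the same error rate separately for each demographic group and comparing.

```python
# Illustrative sketch (synthetic data, hypothetical numbers): how an audit
# quantifies demographic bias by comparing per-group error rates of a
# face-matching system rather than a single overall accuracy figure.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    wrong = sum(p != t for p, t in zip(predictions, labels))
    return wrong / len(labels)

# Hypothetical audit outcomes: 1 = correct identity match, 0 = misidentification.
results = {
    "lighter-skinned men":  ([1] * 99 + [0] * 1,  [1] * 100),  # ~1% error
    "darker-skinned women": ([1] * 65 + [0] * 35, [1] * 100),  # ~35% error
}

rates = {group: error_rate(pred, true) for group, (pred, true) in results.items()}
disparity = max(rates.values()) / min(rates.values())

for group, rate in rates.items():
    print(f"{group}: {rate:.0%} error rate")
print(f"disparity ratio: {disparity:.0f}x")
```

An overall accuracy of 82% on this synthetic sample would look respectable; it is the 35x gap between groups that the per-group breakdown exposes, which is why the NIST-style methodology reports errors disaggregated by demographic.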

Despite efforts by technology giants such as Microsoft to improve their technology to reduce bias over the intervening two years, public concerns were not eased (and indeed were reinforced by the bad behaviour of actors such as Clearview AI, as discussed in class).  By mid-2020 (timing not unrelated to the BLM protests), many firms concerned about their wider reputations began to withdraw: Amazon imposed a one-year moratorium on the use of its technology by police, Microsoft stopped providing its facial recognition solutions to US law enforcement agencies, and IBM went even further by withdrawing its FRT offerings and ceasing all R&D activities on the topic. In a landmark UK case, Bridges v South Wales Police, the Court of Appeal found that the use of automated facial recognition (AFR) by South Wales Police violated Article 8 of the European Convention on Human Rights (the right to respect for private and family life).  Thus, an area once touted as offering great promise has almost ground to a halt (at least for Big Tech), whereas some of the less reputable actors continue to seek other markets around the world, prompting responses from regulatory authorities.  Even in China, which has often been cited (for better or worse) for its widespread use of facial recognition, the picture on responsible use is actually more nuanced, and there is evidence of pushback.

In a great bit of timing, Ghazi Ahamat, one of our alums who now works at the UK’s Centre for Data Ethics and Innovation (CDEI), got in touch to let us know that the CDEI just this week published a major review into bias in algorithmic decision-making. The review concentrates on four sectors (financial services, local government, policing and recruitment) and proposes significant measures for government, regulators and industry to tackle the risks of algorithmic bias. The CDEI also issued a specific briefing note on FRT earlier this year.  So we are left with some final questions:

  • What might firms have done differently to try to avoid the situation they currently find themselves in? Are there better or worse examples of firms trying to be more proactive? Would the situation have improved if they were able to coordinate more effectively?
  • How problematic are the biases in algorithms?  Bias in the human-led judicial system is rife, and mistaken eyewitness identification has been shown to be common, so would the outcome of an AI-led system not be better, particularly with firms under continual pressure to eliminate bias? Is the issue bigger than simply whether the system produces better outcomes in terms of false conviction rates?
  • Because of the involvement of US tech giants and the interplay with existing systemic bias in the US, the discussion has been dominated by the American debate, but the issues remain or will emerge in different guises around the world. How do you feel your country could or should handle these sorts of questions?
  • The use of FRT for surveillance is understandably the most sensitive and politically salient, but there are other uses and other areas for facial recognition and a wide range of areas where concerns over algorithmic bias arise.  Where do you feel the greatest dangers or opportunities lie?

Green Industrial Revolutions, Ten Point Plans, Goals and Agenda Setting

This week, the British Government released its 10-point plan for a ‘green industrial revolution’ (you can read the press release, which was actually the only information available for the first 24 hours!).  The plan claims to ‘mobilise £12 billion of government investment, and potentially 3 times as much from the private sector, to create and… Continue Reading

Will the vaccine(s) be successful?

Vaccines have had a transformative impact on global public health.  Despite many decades of progress, however, there remain important challenges associated with immunisation. Determining whether a vaccine will be successful involves resolving important questions regarding vaccine effectiveness, distribution and uptake.  There are currently at least seven COVID-19 vaccine candidates at the phase-three stage, which involves… Continue Reading

How safe is our data?

In 2017, The Economist famously highlighted an oft-cited metaphor of Data as the New Oil to describe the growing centrality of data to the global economy (others assert it is not).  More recently, The Economist have re-evaluated and asked whether data is more like oil or sunlight. Whatever the appropriate metaphor, there is little doubt… Continue Reading

The Role of Institutions in Shaping Economic and Climate Outcomes

This month, the International Monetary Fund released its revised World Economic Outlook for 2020.  It is worth taking a look through their current release, which although describing a dire forecast for 2020 is actually significantly improved on its expectations from just a few months earlier. In Europe, the hardest-hit countries were Spain (-12.8%), Italy (-10.6%),… Continue Reading

Regulating Big Tech

Christos sent along a blog post he recently wrote with other leading European economists on the Google-Fitbit deal as the topic for this week’s discussion, since we will be covering competition policy in both TP1 and TP2 (and will return to it in TP6 in Easter term).  What is particularly worrisome, they point out, is… Continue Reading

Do Prizes Work?

On Thursday (8 October), Prince William, the Duke of Cambridge, and Sir David Attenborough launched a £50m “Earthshot Prize”, which, they claim to be “the biggest environmental award ever”, and which, they hope, will become the equivalent of a “Nobel Prize for environmentalism”.  The initial commitment is for five £1m prizes every year for 10… Continue Reading

Ethics and Technology

Famously, Google’s unofficial motto was ‘Don’t Be Evil’ (sometimes misdescribed as ‘Do no evil’) but any such corporate claim will inevitably lead to tensions since corporations, especially those that span the globe with a professed interest in having an impact on a wide range of end uses, will need to make difficult decisions about where… Continue Reading