As I mentioned in my last-minute slide about Clearview AI and facial recognition technology (FRT), I think this is a great case of an emergent issue where, for better or worse, firms have had both considerable ability to shape the future of the technology and the potential to bury, or at least undermine and delay, its rollout in this up-and-coming sector.
The problem is that the firms best placed to recognise the regulatory advantages of their proprietary technologies, and to influence policy-makers and the wider public debate, will shy away from involvement in an area they deem problematic and a potential source of reputational damage. In 2018, Amazon began marketing its cloud-based facial recognition to law enforcement, drawing on databases containing tens of millions of faces; quite infamously, its technology misidentified 28 members of the US Congress as matches in a mugshot database of people who had been arrested.
The use of FRT for law enforcement and security has raised the greatest concerns, related to possible misuse by police, privacy considerations (which relate back to our discussion of data safety in Week 6) and bias. There have been longstanding worries, for example, that the systems can be trained in ways that encourage or reinforce bias, whether through selection effects in the training databases themselves or through choices made by the humans designing the algorithms. For example, an MIT study in early 2018 demonstrated that systems from market-leading firms such as Microsoft, Amazon, IBM and Megvii (Face++) produced more accurate results when identifying white or lighter-skinned men but were markedly less accurate when identifying darker-skinned women, a pattern later shown to be industry-wide in a comprehensive 2019 study by the National Institute of Standards and Technology.
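To make concrete what "less accurate for some groups" means in practice, here is a minimal sketch of the kind of disaggregated evaluation these studies perform: compute an error rate (here, the false match rate) separately for each demographic group and compare. The toy data, the threshold, and the group labels are all illustrative assumptions, not figures from the MIT or NIST studies.

```python
# Minimal sketch of a disaggregated bias evaluation: compute the false
# match rate (impostor pairs wrongly accepted as matches) per group.
# All names, scores and the threshold below are hypothetical.
from collections import defaultdict

THRESHOLD = 0.8  # hypothetical similarity threshold for declaring a match

# Each trial: (model similarity score, whether the pair is truly the
# same person, demographic group of the probe image)
trials = [
    (0.91, True,  "lighter-skinned men"),
    (0.62, False, "lighter-skinned men"),
    (0.85, False, "darker-skinned women"),
    (0.88, False, "darker-skinned women"),
    (0.95, True,  "darker-skinned women"),
]

false_matches = defaultdict(int)  # impostor pairs scored above threshold
impostors = defaultdict(int)      # all impostor (different-person) pairs

for score, same_person, group in trials:
    if not same_person:
        impostors[group] += 1
        if score >= THRESHOLD:
            false_matches[group] += 1

for group, total in impostors.items():
    fmr = false_matches[group] / total
    print(f"{group}: false match rate = {fmr:.2f}")
```

The design point is that a single aggregate accuracy number can hide exactly the disparities the studies documented; only by slicing the error rates by group does the bias become visible.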
Despite efforts by technology giants such as Microsoft to improve their systems and reduce bias, public concerns were not eased over the intervening two years (and indeed were reinforced by the bad behaviour of actors such as Clearview AI, as discussed in class). By mid-2020 (timing not unrelated to the BLM protests), many firms concerned about their wider reputations began to withdraw: Amazon imposed a one-year moratorium on the use of its technology by police, Microsoft stopped providing its facial recognition solutions to US law enforcement agencies, and IBM went even further by withdrawing its FRT offerings and ceasing all R&D activity on the topic. In a landmark UK case, Bridges v South Wales Police, the Court of Appeal found that the use of automated facial recognition (AFR) technology by South Wales Police violated Article 8 of the European Convention on Human Rights (the right to respect for private and family life). Thus, an area once touted as offering great promise has almost ground to a halt (at least for Big Tech), whereas some less reputable actors continue to seek other markets around the world, prompting responses from regulatory authorities. Even in China, often cited (for better or worse) for its widespread use of facial recognition, the picture on responsible use is more nuanced and there is evidence of pushback.
In a great bit of timing, Ghazi Ahamat, one of our alums who now works at the UK's Centre for Data Ethics and Innovation (CDEI), got in touch to let us know that the CDEI just this week published a major review into bias in algorithmic decision-making. The review concentrates on four sectors (financial services, local government, policing and recruitment) and proposes significant measures for government, regulators and industry to tackle the risks of algorithmic bias. The CDEI also issued a specific briefing note on FRT earlier this year. So we are left with some final questions:
- What might firms have done differently to avoid the situation they currently find themselves in? Are there better or worse examples of firms trying to be more proactive? Would the situation have improved if they had been able to coordinate more effectively?
- How problematic are the biases in algorithms? Bias is rife in the human-led judicial system, and mistaken eyewitness identification has been shown to be common, so might the outcomes of an AI-led system not be better, particularly with firms under continual pressure to eliminate bias? Is the issue bigger than simply whether the system produces better outcomes in terms of false conviction rates?
- Because of the involvement of US tech giants and the interplay with existing systemic bias in the US, the discussion has been dominated by the American debate, but the issues remain or will emerge in different guises around the world. How do you feel your country could or should handle these sorts of questions?
- The use of FRT for surveillance is understandably the most sensitive and politically salient application, but there are many other uses of facial recognition, and a wide range of domains where concerns over algorithmic bias arise. Where do you feel the greatest dangers or opportunities lie?