In an era of disinformation, holding onto what we mean by ‘truth’ can be increasingly challenging. A decade ago, Photoshop might have been used to manipulate images, for example by adding a person to a photo who was not present or by replacing the background. Such efforts were rather clunky and mostly used for entertainment rather than anything more nefarious or practical. However, as computing power and technology have improved, the potential threats (from criminality, or to our notion of objective truth) have grown, but so too have the opportunities (vastly improved assistive technologies, or reproducing interactions with a dear departed loved one).
Most people will have heard of ‘deepfakes’, whereby video or audio is synthetically manipulated; one strand of this is the ability to develop AI-driven fake voices. For example, in 2019 the CEO of a UK-based energy firm was defrauded of €220,000 after being led to believe he was speaking to his group CEO. More prominently, the decision by the director of the documentary Roadrunner to use AI to recreate the late celebrity chef Anthony Bourdain’s voice, without disclosing that this had been done, provoked much outrage. This week, BBC’s Analysis programme tackled the issue of voice cloning. How should we respond to potential threats? The EU has perhaps done the most work on the subject, but it is not alone: a number of other countries have tried to address deepfakes as part of broader efforts to combat disinformation. This leaves us with some questions to start the discussion:
- What do you think are the biggest challenges associated with deepfakes from a policy perspective?
- Is voice cloning somehow distinct from other types of deepfake, and if so, how?
- How feasible and/or desirable is it to regulate deepfakes?
- We often dwell on the negatives, but what are some of the more positive aspects of deepfakes? Do you think voice cloning, for example, will be a net benefit or a net detriment?