Harry Styles and “The Liar’s Dividend”
Back in 2019, the legal scholars Danielle Citron and Robert Chesney defined a useful concept: the “liar’s dividend”. They were worried about how deepfakes could create political chaos. Tools for producing realistic-but-fake video and audio were becoming cheaper and cheaper. Anyone who wanted to throw an election by making fake media about a politician could now do it, with decent realism.

Thankfully, we have not yet seen this happen in a major North American political season. Four years after Citron and Chesney wrote their article, making faked video and audio has become simpler than ever. But the most common uses of deepfakes aren’t political. They’re mostly created a) for porn — such as attempts to humiliate and demean female celebrities — and b) for scams, as with scammers deepfaking a family member’s voice.

Why haven’t political deepfakes taken off? Possibly because it still takes a bit of work to do a really high-quality political fake. More importantly, as the fake-media scholar Hany Farid told me a few years ago, one doesn’t need to create deepfakes when “shallow fakes” will work just fine. Want to make Joe Biden seem less electable? There’s no need to elaborately stage a fake video in which he admits to hauling in millions from Ukrainian mobsters. Just find a picture of him with his mouth open and eyes closed, then splash some memetext over it claiming he’s in the late stages of dementia. No complex forgery needed; just Microsoft Paint and an audience that regards sniggering as the acme of political thought.

But to me, the more interesting part of Citron and Chesney’s essay came later in the piece. They also predicted that an era of deepfakery would cause a second-order problem. If deepfakes become widespread and common, they could make the public deeply cynical and suspicious of the veracity of any media. This would also give bad actors an “out”: if an actual video of their malfeasance were released, they could claim “it’s a deepfake”.