Bad Robots – “Deepfake” Technology is Deeply Concerning
Bad Robot Outcome:
“Deepfakes” – AI-generated fake images, videos, and audio files – are becoming increasingly commonplace as they proliferate across the internet.
As these altered pieces of content make their way into our social media feeds, inboxes, and televisions, the line between genuine and fraudulent will continue to blur, making it hard for even savvy individuals to determine what is real.
The effects have already been shocking, but the full impact of this troubling use of technology is still rippling throughout the world and will only grow more severe and more frightening.
The Story
Deepfake technology is an application of “deep learning” – a form of artificial intelligence – to fabricate images of events that never took place. Such content first made its way onto the internet in 2017, powered by generative adversarial networks (GANs), an innovative deep learning method in which two neural networks are trained against each other: a generator fabricates content while a discriminator learns to tell real from fake, each improving by trying to outdo the other.
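For readers curious about the mechanics, the sketch below illustrates that adversarial training loop on toy two-dimensional data. It is a minimal illustration of the general GAN technique, not the code behind any actual deepfake tool; the network sizes, learning rates, and toy dataset are illustrative assumptions chosen for clarity.

```python
# A minimal sketch of the adversarial (GAN) training idea behind deepfakes,
# shown on toy 2-D data rather than images. All names and hyperparameters are
# illustrative assumptions, not taken from any real deepfake system.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: points from a 2-D Gaussian the generator must learn to imitate.
def real_batch(n=128):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

# Generator: maps random noise to fake samples.
generator = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

# Discriminator: scores how likely a sample is to be real (1) rather than fake (0).
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(128, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(128, 1))
              + loss_fn(discriminator(fake), torch.zeros(128, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator label fakes as real.
    fake = generator(torch.randn(128, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

Real deepfake systems apply this same generator-versus-discriminator contest to faces and voices at vastly larger scale, which is why the resulting fakes keep getting harder to spot.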
The Fall-out
The ramifications of deepfake technology are vast and terrifying. It has the power to wreak havoc on people’s personal lives, change the course of political elections, and leave us questioning the veracity of genuine content.
As noted above, deepfake videos are widely circulated on pornographic websites. This practice is already deeply concerning, as the likenesses of female celebrities are used without their consent. But it can get worse. As the technology progresses to the point where even unskilled people can use it, it has the power to fuel the already insidious practice of “revenge porn” (the distribution of sexually graphic images of individuals without their consent). Danielle Citron, professor of law at Boston University, bluntly states: “Deepfake technology is being weaponized against women.”
In January 2020, the Brookings Institution published a report about the challenges that deepfake technology will pose to politics. Its findings were shocking. The prolific think tank concluded that “these realistic yet misleading depictions will be capable of distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
As evidenced by the unsettling situation in Gabon, the emergence of deepfake technology causes us to question whether any piece of content is actually genuine. Hao Li, a professor at USC, points out that “people are already using the fact that deepfakes exist to discredit genuine video evidence.” If someone disagrees with a piece of content, why not attack its legitimacy? Turning back to the country of Gabon, experts are still not – to this day – able to determine whether the video is authentic. Have a look for yourself and see what you think.
Our View
The emergence of deepfake technology poses many significant ethical dilemmas. Should governments enact legislation aimed at curtailing the proliferation of deepfakes? Should private entities, like Facebook, Twitter, and Google, be responsible for proactively monitoring and removing such content? In 2019 the state of California enacted legislation banning the creation or distribution of deepfakes involving politicians within 60 days of an election. However, such legislation was fraught from its inception, bumping up against constitutional rights – such as free speech – and proving incredibly hard to enforce.
We at the Ethical AI Advisory believe that education is the first, and perhaps most crucial, step in the fight against the perils of deepfake technology. If individuals are aware of how this technology is being used and where it might appear, they can exercise healthy skepticism and critical thinking. Doing so will not be easy, particularly in developing countries where digital literacy still lags. As such, it will be absolutely crucial for both private and public entities to help spread awareness of this potentially ruinous technology. Without an underlying understanding of the perils of deepfakes (or even that they exist at all!), individuals will be more likely to be misled and manipulated.
Written by: