
Creating and disseminating persuasive propaganda once demanded the resources of a state. Now all you need is a smartphone.
Generative artificial intelligence can now create fake images, clones of our voices, and even videos that depict and distort world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
We have long been warned about the potential of social media to distort our view of the world, and now there is a risk of more false and misleading information being spread on social media than ever before. Equally important, exposure to AI-generated fakes can make us question the authenticity of everything we see. Real pictures and real recordings can be dismissed as fake.
“When you show people deepfakes and generative AI, many times they come out of the experiment saying, ‘I just don’t trust anything anymore,'” said David Rand, a professor at MIT Sloan who studies the creation, spread and impact of disinformation.
This problem, which has become more acute in the age of generative AI, is known as the “liar’s dividend,” says Renee DiResta, a researcher at the Stanford Internet Observatory.
The combination of easily generated fake content and the suspicion that anything could be fake allows people to choose what they want to believe, DiResta adds, leading to what she calls “tailored realities.”
Examples of misleading content created by generative AI are not hard to come by, especially on social media. A widely circulated, fake image of Israelis lining the streets in support of their country has many of the hallmarks of being AI-generated, including clear oddities that are apparent on close inspection, such as distorted bodies and limbs. For the same reasons, a widely shared image purporting to show fans at a soccer match in Spain displaying a Palestinian flag does not stand up to scrutiny.
The signs that an image is AI-generated are easy to miss for a user just scrolling past, with only a moment to decide whether to like or boost a post. And as generative AI continues to improve, such signs will likely become even harder to detect.
“What our work suggests is that most ordinary people don’t want to share fake stuff — the problem is that they’re not paying attention,” says Rand. People’s attention spans are already limited, and the way social media works — encouraging us to soak up content while quickly deciding whether to share it — leaves us with very little capacity to judge whether something is true, he adds.
With fake content that is increasingly difficult to detect spreading widely, it’s no surprise that people are now using its existence as an excuse to dismiss accurate information. Earlier this year, for example, during a trial over the death of a man who had used Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about the software by suggesting that the prevalence of “deepfakes” of Musk was reason to dismiss such evidence. They made that argument even though the clip was demonstrably real. The judge in the case called the argument “deeply troubling.”
Recently, many have claimed, on TikTok and Twitter, that a gruesome image of a victim of the attack on Israel by Hamas, tweeted by Israeli Prime Minister Benjamin Netanyahu, was created by AI. Experts say there is no evidence that the image has been altered or generated by AI.
If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even these interactions are now potentially full of AI-generated fakes. The US Federal Trade Commission is now warning that what sounds like a call from a grandchild requesting bail money could be scammers who have scraped recordings of the grandchild’s voice from social media to trick a grandparent into sending money.
Similarly, teenagers in New Jersey were recently caught sharing fake nude photos of their classmates, made with AI tools.
And while these are malicious examples, companies like Alphabet, Google’s parent company, are trying to cast the alteration of personal images as a good thing.
With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace one person’s face in an image with the face of another, or quickly remove someone from a photo entirely.
Making photos perfect is neat, but it also heralds the end of capturing authentic personal memories, with their spontaneous quirks and unplanned moments. Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
In Google’s defense, the company adds a record of whether an image has been modified to the metadata attached to it. But such metadata survives only in the original photo and some copies, and it is easy enough to remove.
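How fragile that protection is can be shown with a minimal sketch in Python, using the Pillow imaging library (the file names are hypothetical, and this illustrates metadata stripping in general, not Google’s specific record format): copying an image’s pixels into a fresh file discards all attached metadata, including any record of editing.

```python
# Minimal sketch: how easily image metadata is stripped.
# Uses the Pillow library; file names are hypothetical.
from PIL import Image

img = Image.open("edited_photo.jpg")
print(dict(img.getexif()))  # whatever EXIF metadata the file carries

# Copy only the pixel data into a brand-new image object.
# The new file carries no EXIF block, so any "this image
# was modified" record attached to the original is gone.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("no_metadata.jpg")
```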
The rapid adoption of many different AI tools means that we are now forced to question everything we are exposed to in any medium, from our immediate communities to the geopolitical stage, says Hany Farid, a professor at the University of California, Berkeley who specializes in digital forensics and image analysis.
To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish, the mobile revolution made it easier than ever to access and disseminate, and the rise of AI has made it easy to create disinformation. And each revolution came faster than the one before it.
Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with disinformation. These skeptics’ primary argument is that there is already far more misinformation on the internet than any person can consume, so throwing more into the mix will not make matters worse.
Even if that’s true, it’s not exactly reassuring, especially given that trust in institutions is already at one of its lowest points in 70 years, according to the nonpartisan Pew Research Center, and that polarization, a measure of how much we distrust each other, is at a high point.
“What happens when we have eroded trust in the media, governments and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics or climate change, or have fair and open elections? That’s how authoritarianism happens — when you erode trust in institutions.”