November 13, 2023

Early in the horrific war in Israel and Gaza, a new media reality became clear: real-time information on social media is less reliable than ever. X, the social network formerly known as Twitter and the most popular platform for breaking news, apparently no longer has the ability or the will to fight disinformation. Footage of fireworks in Algeria has been presented as evidence of Israeli attacks on Hamas, video game graphics have been presented as reality, and a clip from the war in Syria has been recycled and passed off on X as if it were new.

The recent decisions made by the platform’s owner, Elon Musk, complicate the issue. On Twitter, a blue tick meant that a user’s identity had been validated. It wasn’t a perfect system, but it helped in finding reliable sources. Under Musk, the platform has removed blue ticks from journalist accounts and offered them to virtually anyone willing to pay $8 a month for a premium subscription. These account holders share revenue with X when their content goes viral, which encourages them to post engaging content regardless of whether it’s true, and the algorithm gives their posts more weight in users’ feeds.

Under Musk, X has also downsized most of the company’s teams, particularly “trust and safety,” the department responsible for ensuring that content posted on the network is accurate and not malicious. That team has reportedly shrunk from 230 employees to about 20. While a voluntary system called Community Notes allows X users to flag and add context to misleading content, users have complained that it can take days for those notes to appear, if they appear at all.

While X’s performance has been so poor that EU Commissioner Thierry Breton has announced an investigation into the platform’s handling of disinformation during the Israel-Hamas war, a larger disinformation crisis is unfolding. Simply put, the journalists, activists and researchers who study disinformation on social platforms no longer have the tools to do their job, or a safe environment in which to work.

Scholars began to take digital misinformation and disinformation seriously as a force in politics in 2016, when the Brexit campaign in the UK and the Trump campaign in the US both featured prominent deceptions in digital spaces. Studying the 2016 US campaigns, a team at Harvard led by my colleague Yochai Benkler concluded that the most influential disinformation was not always stories made out of whole cloth, but propaganda that reinforced certain facts and frames at the expense of others. While stories of Eastern European teenagers writing pro-Trump political fiction received wide coverage, more important were stories from right-wing blogs and social media accounts, amplified within a right-wing media ecosystem and ultimately by the mainstream media, if only to refute them.

Benkler’s analysis, and the analysis of many others, relied on information from Twitter’s Application Programming Interface (API), a stream of data from the platform available to researchers, journalists, activists and other interested parties. In March of this year, looking for a new source of revenue, Twitter announced that research access to the API would now start at $42,000 a month, putting it out of reach for most researchers. Other platforms – notably Reddit, which was also popular with academic researchers – followed suit.

Facebook and Instagram have historically been much more protective of their APIs, but some insight into content on those platforms was provided via CrowdTangle, a tool developed by activists to see how their content was performing on social media. Facebook (whose parent company, Meta, owns Instagram) bought the tool in 2016, and it was run by one of its founders, Brandon Silverman, until his acrimonious departure in 2021. In 2022, Meta stopped accepting new applications to use the tool, and users reported that the project seemed starved of resources, with bugs going unfixed.

Losing the tools to study social media—no longer allowing outside researchers to determine, for example, whether X did an adequate job of removing disinformation—would be problematic enough. But another set of barriers has made the researchers’ job even more difficult.

In July 2023, X filed a lawsuit against the Center for Countering Digital Hate (CCDH), a non-profit organization that researches the spread of extremist speech on digital platforms and campaigns for tougher enforcement. CCDH had reported that hate speech against minority groups had increased since Musk bought the platform in October 2022. X’s CEO, Linda Yaccarino, has called CCDH’s allegations false, and the suit seeks unspecified damages. It is hard to see the lawsuit as anything other than an attempt to silence research about the platform. When the world’s richest man makes it clear that he will sue, it significantly raises the stakes for criticizing his favorite toy.

But an angry Musk isn’t the only powerful figure targeting disinformation researchers. The chairman of the Judiciary Committee of the US House of Representatives, Republican Congressman Jim Jordan, has sought information from researchers who have studied the amplification of falsehoods on digital platforms. These requests, aimed at university professors, seek years of communications in the expectation of exposing a “censorship regime” involving these researchers and the US government. Such requests are costly for institutions to comply with, and they add to the emotional burden on researchers, who are often harassed once their alleged role in “censoring” social media is publicized.

This constellation of factors—increasing disinformation on some platforms, the shutdown of tools used to study social media, lawsuits against those investigating disinformation—suggests that in the not-so-distant future we may face an uphill battle to understand what is happening in the digital public sphere. That is very bad news as we head into 2024, a year of key elections in countries including the UK, Mexico, Pakistan, Taiwan, India and the US.

Elections in Taiwan are of particular interest to China, and journalists report that Taiwan has been inundated with disinformation portraying the United States as a threat to the territory. One story claimed that the Taiwanese government would send 150,000 blood samples to the United States so that America could engineer a virus to kill Chinese people. The goal of these stories is to encourage Taiwanese voters to oppose alliances with the United States and to push for closer ties to mainland China. Taiwanese NGOs are developing fact-checking initiatives to combat false narratives, but they too are hampered by reduced access to information on social media.

India’s Prime Minister, Narendra Modi, has enacted legislation to combat fake news on social media, and it seems likely that these new laws will be used against government critics more than against Modi’s supporters. The 2024 US presidential election, meanwhile, is shaping up to be a battleground for disinformation artists. Serial liar Donald Trump, who made more than 30,000 false or misleading claims during his four years in office, is running against not only incumbent Joe Biden but also anti-vaccine crusader Robert F. Kennedy Jr, who was banned from Instagram for medical misinformation before having his account restored when he became a presidential candidate.

If there’s any hope for our ability to understand what’s really happening on social media next year, it may come from the European Union, where the Digital Services Act requires transparency from platforms operating on the continent. But enforcement is slow, and war and elections are fast by comparison. The upsurge of disinformation around Israel and Gaza may point to a future where what happens online is literally impossible to know.
