November 13, 2023
The Supreme Court is rethinking its entire approach to the Internet.  Oops.

This is part of Opening Arguments, Slate’s coverage of the start of the most recent Supreme Court term. We are working to change the way the media covers the Supreme Court. Support our work by joining Slate Plus.

The Supreme Court has been reluctant to resolve disputes over the scope of free speech online. Five years ago, in a decision that Justice Anthony Kennedy described as “one of the first cases the Court has taken to address the relationship between the First Amendment and the modern Internet,” the court cautioned against making any broad pronouncements. Justice Kennedy wrote that “while we now may be coming to the realization that the Cyber Age is a revolution of historic proportions, we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be.” He warned that “courts must be conscious that what they say today might be obsolete tomorrow.” Despite these caveats, the court is poised this term to intervene in digital speech disputes in a significant way.

By the end of next June, the Supreme Court will rule on whether state laws forcing social media platforms to carry content they would otherwise exclude violate the platforms’ freedom of speech, whether federal agencies and officials have illegally pressured platforms to remove posts the government deems harmful, and whether state officials who operate social media pages or websites may exclude constituents from those pages based on the content of their comments. Resolving any one of these disputes would have significant effects on speech on the Internet. Taken together, the court’s decisions could fundamentally change how social media platforms operate, when and how governments communicate with platforms to address public health, terrorism, and other harms, and whether public officials can exclude constituents from websites they use for official business.

Consider the state laws, passed in Florida and Texas, that prohibit major social media platforms like Facebook and YouTube from deplatforming speakers or removing speech based on the message or idea being communicated. These laws force major platforms to carry hateful, defamatory, and other harmful speech that the platforms’ own terms of service do not allow. They force companies to publish speech that undermines or harms their online communities. The laws also threaten to bury the platforms in private lawsuits by speakers who claim the laws give them a right to speak on the platforms. In short, they run roughshod over the platforms’ First Amendment right to exercise editorial control over the content published on their sites.

While the platforms have shown some bias in their policies or enforcement of their terms of service, the states’ proposed cure is far worse than the disease. If the court finds that social media platforms—unlike, say, newspapers—don’t have editorial or similar rights, state and federal governments could effectively dictate what speech must be allowed on the platforms. Platforms would be forced to carry speech from white supremacists, antisemites, and terrorists, because excluding such speech would be content-based discrimination in violation of state law.

Murthy v. Missouri, another case the court will decide this term, addresses a different form of government control of Internet speech. In Murthy, the question is whether the government, through various means of “jawboning” or persuasion, illegally pressured Meta, Twitter (now X), Google, and YouTube to remove posts and other content the government viewed as disinformation that harmed public health or undermined election integrity. Like the Florida and Texas cases, Murthy will address the extent to which governments can control what is communicated online. But here the means of control is not legislation but behind-the-scenes arm-twisting and threatened legal reprisals by government officials.

The plaintiffs in Murthy claim that the White House, the Office of the Surgeon General, the Centers for Disease Control and Prevention, and the FBI pressured Facebook and other platforms to remove posts that criticized the administration’s pandemic policies, supported the Covid-19 “lab leak” theory, questioned the results of the 2020 presidential election, or promoted the Hunter Biden laptop controversy. For example, a White House official told a platform to take down an offending post “ASAP” and instructed it to “keep an eye out for tweets that fall within the same … genre” so they too could be removed. The platforms apparently complied, removing posts and sending frequent reports to government agencies about their compliance. The U.S. Court of Appeals for the 5th Circuit sided with the plaintiffs on their First Amendment claims, enjoining certain government agencies and officials from coercing or substantially encouraging platforms to remove content.

Public health and other dangers can be exacerbated by viral online communication. It is imperative that governments be allowed to communicate and have open dialogues with social media platforms about online expression that may pose imminent threats to public health and safety. As the 5th Circuit recognized, governments remain free to communicate their views to the public through press conferences, public service and other educational campaigns, and through regulation and legislation. But when they coerce or significantly encourage social media platforms to remove speech the government concludes is harmful or simply disagrees with, they cross a First Amendment line. Whether or not such government coercion violates the platforms’ own First Amendment rights, it makes the platforms complicit in the violation of their users’ First Amendment rights.

Finally, earlier this month, the court heard arguments in a pair of cases that raise the question of whether city managers, school board members, and other officials who maintain social media pages or websites can exclude critical or inflammatory comments from constituents. The analysis is likely to be intensely fact-specific, addressing the extent to which the officials discuss official as opposed to private concerns on the sites, whether they created the sites while they were private citizens, whether any law required or authorized the creation and operation of the sites, and whether any public employees helped with their operation. In a decision relying on similar factors, the U.S. Court of Appeals for the 2nd Circuit ruled that part of former President Donald Trump’s Twitter page was a “public forum” to which his critics had First Amendment rights of access. After Trump’s reelection bid failed, the Supreme Court dismissed the case as moot, vacating the 2nd Circuit’s decision, but now the issue is back.

As Justice Elena Kagan observed at oral argument, constituents trying to communicate with or petition officeholders may have limited opportunities to do so in the real world. As public officials increasingly turn to social media to communicate with constituents and communities, it is critical that constituents can communicate with them in these places. As the Supreme Court has recognized, social media platforms today are among the most important places for the exchange of ideas. Public officials running private websites certainly have the right to decide what to post and what comments they will tolerate. But when they use their platforms to conduct official business or perform government functions, they are bound by the First Amendment.

To decide all of these cases, the court will have to wade into a complex thicket of legal doctrines and principles regarding compelled speech, the distinction between government persuasion and coercion, and the contours of the “state action” and public forum doctrines. Its guiding principle should be to limit government control of Internet speech, no matter what form that control takes. Governments and public officials should not be allowed to force platforms to carry content, force them to remove it, or block comments from public forums based on their views. When dealing with Internet speech issues in this era, the court should generally heed its own counsel from the Kennedy opinion quoted above: “The nature of a revolution in thought may be that, in its early stages, its participants may also be unaware of it. And when awareness comes, they still may not be able to know or anticipate where its changes lead.” So too here.

