Commentary: Section 230’s Legacy: SCOTUS Social Media Case Will Continue Giving Platforms the Freedom to 'Edit' Recklessly
With Section 230 in place, social media companies will have "the best (or worst) of both worlds," writes NewsGuard co-CEO Steven Brill
Welcome to a special edition of NewsGuard's Reality Check, a report on how misinformation online is undermining trust — and who’s behind it.
By Steven Brill, NewsGuard Co-CEO
After listening to much of this week’s Supreme Court argument over whether the states of Florida and Texas could prevent social media platforms from exercising editorial discretion over what appears on their platforms, I was thinking about how this issue arose in the first place. The irony is striking.
February marks the 28th anniversary of the passage of Section 230 of the Telecommunications Act of 1996. Today, Section 230 is notorious for exempting social media platforms from liability for virtually anything posted on their platforms. But in February of 1996, this three-paragraph section of a massive telecommunications bill aimed at modernizing regulations for the nascent cable television and cellular phone industries was an afterthought. Not a word was written about it in mainstream news reports covering the passage of the overall bill.
Section 230 made sense at the time. It was aimed at correcting illogical court decisions in order to protect AOL, CompuServe, and Prodigy (remember them?) — the then-start-up, dial-up internet services that allowed paying members to post comments on chat groups. In 1991, a judge had ruled that CompuServe could not be held liable for a defamatory comment posted on its finance forum because CompuServe did absolutely nothing to screen online comments. But in 1995, another court held that Prodigy could be held liable for comments on one of its chat forums because Prodigy did try to screen content (although its editors had failed in this instance). Therefore, as an “editor,” it should be liable. In other words, trying to do the right thing and screen out harmful content put the one company in legal jeopardy while the other was immune.
That’s why those who introduced Section 230 called it the “Protection for Good Samaritans” Act. However, nothing in Section 230 required screening for harmful content; it provided only that those who did screen and, importantly, those who did not would be equally immune. And, as we now know, when social media replaced these dial-up services and opened their platforms to billions of people who did not have to pay to post anything, their executives and engineers became anything but good Samaritans. Instead of using the protection of Section 230 to exercise editorial discretion, they used it to be immune from liability when their algorithms deliberately steered people to inflammatory conspiracy theories, misinformation, state-sponsored disinformation, and other harmful content. As then-Federal Communications Commission Chairman Reed Hundt told me 25 years later, “We saw the internet as a way to break up the dominance of the big networks, newspapers, and magazines who we thought had the capacity to manipulate public opinion. We never dreamed that Section 230 would be a protection mechanism for a new group of manipulators — the social media companies with their algorithms. Those companies didn’t exist then.”
Now, the social media platforms are telling the Supreme Court that they must be able to keep their First Amendment right to edit their content. And the high court seems likely to agree, as it should. Therefore, the paradox of the status quo will remain. With Section 230 in place, the platforms will not only have a First Amendment right to edit, but also the right to do the kind of slipshod editing — or even the deliberate algorithmic promotion of harmful content — that has done so much to destabilize the world.
It’s the best (or worst) of both worlds, enjoyed by no other media companies. For example, last year Fox News paid $787.5 million to settle Dominion Voting Systems’ defamation suit after airing guests who pandered to its audience with false claims of voter fraud in the 2020 election. The social media platforms’ algorithms performed the same audience-pleasing editing with the same or worse defamatory claims. But their executives and shareholders were protected by Section 230.
We launched Reality Check after seeing how much interest there is in our work beyond the business and tech communities that we serve. Subscribe to this newsletter to support our apolitical mission to counter misinformation for readers, brands, and democracies. Have feedback? Send us an email: realitycheck@newsguardtech.com.