Section 230: How it Actually Works, What Might Change, and How that Could Affect You

Although considered by many in the tech community to be one of the most important laws protecting Internet companies, Section 230, the so-called “First Amendment of the Internet,” remains mostly unknown to the general public. As a result of this relative obscurity, much of what is said about the law is wrong. At its core, the law is simple: an online intermediary is not legally responsible for hosting or republishing the speech of others.

This immunity protects everyone from individual bloggers up to the big technology platforms. Without Section 230, an online intermediary would face the nearly impossible task of reviewing all user-provided content for legally objectionable material. Most would simply choose not to host any user-generated content, dramatically altering the business models of our largest tech companies. Indeed, the protections afforded by Section 230 are much more generous than the laws of most other countries, and are part of the reason so many prominent online companies are based in the United States.

The biggest misconception about the law is the idea that there is a legal distinction between being a “platform” and being a “publisher,” and that an online intermediary must choose between Section 230 immunity and moderating user content. In reality, an intermediary risks no loss of immunity by editing user content; eliminating that tradeoff is precisely what Section 230 was designed to do. The “Good Samaritan clause” specifically states that an intermediary may, in good faith, restrict access to material it considers objectionable.

A quick review of the history of the law helps clarify. Section 230 was part of the Communications Decency Act (CDA) of 1996. Although much of the CDA was struck down by the Supreme Court in Reno v. ACLU (1997), this section survived. Section 230 was a response to two court decisions from the earliest days of the Internet. In the first case, Cubby, Inc. v. CompuServe, Inc., CompuServe was sued for defamation over a comment posted in one of the forums it hosted. The court held that the company was not liable because it did not review or approve any of the user content on its forums. In a similar case, Stratton Oakmont, Inc. v. Prodigy Servs. Co., however, a court did find liability. Prodigy had edited some of the posts on the message boards it hosted, and because of that editorial control, the court held that the company was akin to a publisher and thus liable for the content of all posts.

These cases left online intermediaries with two options: either moderate nothing, or host nothing that needs moderating, because any attempt at controlling user content could leave a company on the hook for everything. Congress responded with Section 230. Representatives Ron Wyden (D-OR) and Chris Cox (R-CA) co-sponsored the amendment to the CDA. The amendment provided both that online intermediaries would not be treated as publishers, who are liable for any content they print, and that they did not have to remain neutral: they could moderate any third-party content they hosted without losing their immunity. The amendment had strong bipartisan support, passing the House on a 420-4 vote.

For two decades, the law seemed to function as intended and generated little interest. More recently, however, it has come into the crosshairs of nearly everyone. As tech companies, and the social media giants in particular, have had an increasingly outsized impact on our culture, more people have become concerned with how these companies are using, and possibly abusing, Section 230.

In 2018, a number of Internet scholars and public interest groups, including the ACLU and the Electronic Frontier Foundation (EFF), released the Santa Clara Principles, a proposed set of guidelines for online intermediaries that engage in large-scale content moderation. The principles focus primarily on transparency and engagement around moderation policies.

They are as follows: “Companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines; Companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension; (and) Companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.”

Although an improvement over current practices, the principles rely on self-regulation by online intermediaries and do nothing to address Section 230 directly. Many have criticized this approach as too naive about the behavior of the social media giants. As one example, the EFF defended the softer approach of the principles and criticized a Republican attempt to amend Section 230, suggesting the amendment would lead to more censorship. In doing so, it asked, “Would Reddit have to delete r/The_Donald, the massively popular forum exclusively for fans of the current U.S. president?” Within a week of the EFF posting that article on its website, however, Reddit did delete r/The_Donald.

Politicians, on the other hand, have called for much more significant changes. In the lead-up to last year’s election, both candidates took strong positions against Section 230. Donald Trump, no fan of the social media giants, issued an executive order toward the end of his term that would have limited the Good Samaritan clause, restricting an online intermediary’s ability to moderate user content. The order asked the FCC to reinterpret the meaning of “good faith” as used in the clause, considering whether a company had acted deceptively or with pretext, and whether it had offered notice, an explanation, and an opportunity to be heard to the person being censored or suspended.

Joe Biden went even further. Early in last year’s election cycle (before he had secured his party’s nomination), Biden surprised many by calling for the immediate and total revocation of Section 230. Biden was upset about ads running on Facebook concerning his role in the firing of a prosecutor in Ukraine. Although he was eventually able to get Facebook to take the unflattering ads down, he felt that Section 230 was a shield that made doing so too difficult.

Despite this apparent bipartisan support for reforming the law, Democrats and Republicans have very different concerns about its impact, as the two presidential candidates’ actions reflect. Republicans see Section 230 being used as a tool for tech companies to censor them based on their political views. Indeed, many prominent conservative online personalities who have been censored or suspended by Big Tech have had their lawsuits thrown out because of Section 230 immunity. Democrats, conversely, are concerned with what they see as a proliferation of misleading or false information about them being spread on social media. In short, Republicans are upset that too much is being removed, and Democrats are upset that not enough is. Republicans want to restrict the scope of what can be moderated under the Good Samaritan clause, and Democrats want to restrict the immunity granted by Section 230.

Now that the Democrats have control, their concerns will likely prevail. To that end, this February they introduced the SAFE TECH Act. The most consequential of its changes are those that limit the law’s coverage. The bill would take away immunity for ads, marketplace listings, or any other content that a company was paid to host. It would also limit immunity to user “speech”; the law currently uses the broader term “information,” which means Section 230 immunizes a much broader swath of activity. In line with Biden’s sentiments, the bill would primarily serve to narrow the scope of the immunity Section 230 provides.

Many are reluctant to make such changes, however, because the one major amendment to Section 230 to date, SESTA/FOSTA in 2018, has been derided as a near-complete failure. Those amendments were designed to crack down on online sex trafficking, but instead made the job of law enforcement harder by forcing the activity deeper into the shadows. Many sexual minorities also complained that the amendments harmed their online communities. Famously, SESTA/FOSTA was also the impetus for Craigslist shutting down its personals section.

If the SAFE TECH Act is ultimately passed into law, it will likely lead to an increase in content moderation by online intermediaries. It will do nothing, however, to address Republican concerns about censorship or the transparency and engagement goals of the Santa Clara Principles. Only time will tell whether it turns out to be as bad as SESTA/FOSTA, but all indications are that it won’t be a net positive for Internet users.