Volume 14, Issue 1, June 2017

Twenty Years of Intermediary Immunity: The US Experience

Jeff Kosseff*


© 2017 Jeff Kosseff
Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Abstract
Policymakers worldwide have long debated how to maintain free expression on the Internet while minimising defamation and other harmful online speech. Key to these debates has been intermediary liability: whether online platforms should be held legally responsible for user-generated content. To inform this continued debate, this article examines the US experience with relatively broad intermediary liability immunity. Enacted two decades ago, Section 230 of the Communications Decency Act of 1996 provides robust immunity to websites, Internet service providers, social media providers, and other online platforms for legal claims arising from user content. The article examines the scope of the immunity that Section 230 provides to US platforms and the primary criticisms of this approach, drawing on an analysis of court opinions involving Section 230 and of the content moderation policies and practices of the leading US online platforms. The article concludes that Section 230 has fostered the growth of social media, user reviews, and other online services that rely primarily on user-generated content. Critics of Section 230 raise valid concerns that the broad immunity often prevents lawsuits against online platforms. However, my research concludes that many of the largest US intermediaries voluntarily block objectionable and harmful content due to consumer and market demands.

Keywords
intermediaries, liability, immunity, defamation, United States

Cite as: Jeff Kosseff, "Twenty Years of Intermediary Immunity: The US Experience" (2017) 14:1 SCRIPTed 5 https://script-ed.org/?p=3309
DOI: 10.2966/scrip.140117.5


* Assistant Professor, Cyber Science Department, United States Naval Academy, Annapolis, Maryland, United States. J.D., Georgetown University Law Center, M.P.P., B.A., University of Michigan. The views expressed in this article are only those of the author. Thanks to participants at the Oxford Internet Institute’s Internet, Policy & Politics Conference for helpful comments on an earlier working draft of this article.

1       Introduction

As user-generated online content has proliferated in recent years, so, too, have questions about the extent to which platforms should be held liable for their users’ online comments, blog posts, videos, and other content. Globally, lawmakers and judges have taken a variety of approaches to imposing liability on online intermediaries. For instance, the European Court of Human Rights held in 2015 that an online news site could be held liable for allegedly defamatory comments posted by an anonymous user.[1] Moreover, the European Union’s new General Data Protection Regulation (GDPR)[2] requires data controllers to erase certain content at the request of the data subject. Other jurisdictions, such as Japan, provide intermediaries with a limited safe harbour for user content, though intermediaries are not immune if they knew of the harmful content and failed to remove it.[3]

This article assesses the approach of the United States, which provides some of the strongest legal protection for online intermediaries. Twenty years ago, the US Congress enacted Section 230 of the Communications Decency Act of 1996, which states that, with a few exceptions, online service providers are immune from liability for user-generated content.[4] The statute also provides websites with flexibility to edit, delete, or retain user-generated content. For instance, if a user posts a defamatory comment on a website, the website generally is not liable. Instead, liability typically rests with the individual who posted the content.

This article reviews the US experience with strong intermediary immunity over two decades. A close examination of Section 230 and its implementation by US courts reveals a law that is consistent with global values of free expression, promotes online innovation, and continues to provide avenues for victims of harmful online content to seek legal recourse. Although the US approach to intermediary immunity is not without its flaws and inequities, it demonstrates that even under a system of robust intermediary immunity, online platforms will develop reasonable safeguards for users.

The article first examines the history of Section 230, the structure of the statute, and the relatively broad interpretation that US courts have taken in their application of Section 230’s immunity. As US courts recognised, Section 230 was drafted with the twin goals of promoting innovation and growth surrounding user-generated content while encouraging online platforms to voluntarily develop responsible community standards.

The article then assesses the social benefits that Section 230 has created. Over the past two decades, Section 230 has encouraged tremendous online innovation: bulletin boards, social media, chat apps, and other services that have defined the Internet would not have been feasible in their current forms if service providers had been held legally responsible for the content provided by their users.

The article next examines the legitimate concerns that this broad immunity has prompted. In recent years, as the magnitude and scope of cybercrime and online harassment have increased significantly, some advocates have called for the United States to eliminate or scale back Section 230’s intermediary immunity. Online anonymity tools, they contend, often make it impossible to hold bad actors responsible for their activities in cyberspace. They argue that the most effective way to combat illicit online activity is to hold the service providers responsible for their users’ actions in court.

The article addresses the concerns about illegal and objectionable user content, and examines how victims have been able to seek legal recourse in the United States, despite the relatively strong intermediary immunity offered by Section 230. First, this article reviews all written court opinions issued between 1 July 2015 and 30 June 2016 in which judges immunised intermediaries under Section 230. The review finds that in the majority of such cases, the plaintiffs were not individual victims but corporations alleging that user content harmed their business interests. The article also concludes that US courts are increasingly reluctant to extend Section 230 immunity to intermediaries that contributed to the harmful online content.

Next, the article reviews how online service providers have responded to illicit and malicious use of their services by examining the user-generated content policies of the twenty-five most popular US websites. The article finds that all of the platforms have voluntarily implemented policies to block illegal and objectionable content and help law enforcement. Indeed, online services find it to be in their commercial interests to keep illegal and objectionable content off of their services, despite Section 230’s protections.

The US experience with broad intermediary immunity can help inform other countries as they determine liability frameworks for online actors. In short, the United States has demonstrated that intermediary immunity is a catalyst for free speech, online innovation and economic growth, and that despite this immunity, online service providers act responsibly to prevent illegal and objectionable content. The United States has allowed market demands – rather than legal requirements – to set the boundaries of acceptable user content.

2       The twin goals of Section 230

Congress passed Section 230 with two very distinct goals: promoting online innovation and encouraging online intermediaries to voluntarily set community standards for user-generated content.

Liability for online intermediaries first emerged as a legal issue in a 1991 case, Cubby, Inc. v CompuServe, Inc. In that case, a New York federal judge dismissed a lawsuit against CompuServe, an online service, arising from allegedly defamatory content in an online newsletter distributed to CompuServe subscribers. The Court reasoned that CompuServe did not edit the newsletter, and therefore, like bookstores, libraries, and other distributors of written materials, could not be liable unless it “knew or had reason to know” of the allegedly harmful content.[5] Four years later, in Stratton Oakmont, Inc. v Prodigy Services Co., a New York state court judge refused to dismiss a defamation lawsuit against online service provider Prodigy, arising from a user posting on a Prodigy bulletin board. The primary reason that Prodigy was held to be responsible for user content is that it reserved the right to edit content and filter offensive user posts.[6]

Taken together, the Cubby and Stratton Oakmont cases stood for the proposition that online intermediaries might be legally responsible for user-generated content only if they took steps to control the content, such as forum moderation and user guidelines. If intermediaries took an entirely hands-off approach to third-party content, they would not be liable. In other words, the two opinions created an incentive for intermediaries to take a hands-off approach to user content. As discussed below, Congress responded by enacting Section 230, which has only three explicit exemptions: it does not apply to the enforcement of federal criminal laws, intellectual property laws, or electronic communications privacy laws.[7]

These rulings soon caught the attention of the public. In the mid-1990s, the Internet was evolving from an academic and government network to an increasingly popular household and workplace service. Policymakers and advocacy groups worried that rulings such as Cubby and Stratton Oakmont would turn the Internet into a lawless no-man’s land with highly offensive content that is inappropriate for children.[8]

Congress could have imposed stringent requirements for intermediaries to edit third-party content. However, such a proposal likely would have faced significant opposition from Internet service providers and other intermediaries.

Instead, Congress addressed intermediary content moderation in Section 230 of the Communications Decency Act 1996. Section 230 has two primary provisions, Section 230(c)(1) and Section 230(c)(2).

Section 230(c)(1) is the source of the broad liability protection that intermediaries receive in the United States. That subsection provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[9] The statute’s broad definition of “interactive computer service” includes Internet service providers, websites, mobile apps, and any other platforms that transmit user-generated content.[10] As demonstrated below, these twenty-six words create strong – but not impenetrable – immunity for online service providers, shielding them from defamation, privacy, and other claims arising from user-generated content.

Section 230(c)(2) receives less public attention than Section 230(c)(1), but it is equally important, and reflects Congress’s desire to encourage moderation of user content. The provision states that online service providers shall not be held liable based on:

any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected [… or …] any action taken to enable or make available to information content providers or others the technical means to restrict access to [such material].[11]

In other words, this statute immunises interactive computer services from claims arising from their voluntary decision to edit (or not edit) user-generated content.

Section 230(c)(2) was a driving force for many of the bill’s supporters. Indeed, the section containing both Section 230(c)(1) and Section 230(c)(2) is entitled “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” In the conference report that accompanied the bill containing Section 230, the bill’s authors expressed a desire to overrule court rulings such as Stratton Oakmont, which the members of Congress believed would discourage service providers from blocking objectionable content:

One of the specific purposes of this section is to overrule Stratton-Oakmont v. Prodigy and any other similar decisions which have treated such providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material. The conferees believe that such decisions create serious obstacles to the important federal policy of empowering parents to determine the content of communications their children receive through interactive computer services.[12]

The two provisions of Sections 230(c)(1) and 230(c)(2) reflect Congress’s twin goals of encouraging online platforms to voluntarily moderate user content and encouraging innovation and development of the nascent commercial Internet. Indeed, in the conference report accompanying the legislation, the bill’s authors stated that they explicitly intended to overrule court rulings such as Stratton Oakmont for precisely this reason.[13] Congressman Bob Goodlatte, who co-sponsored the legislation, stated at the time that this free-market, hands-off approach was preferable to requiring service providers to screen user-generated content, as it is impossible for platforms to “take the responsibility to edit out information that is going to be coming in to them from all manner of sources onto their bulletin board.”[14]

There is an additional aspect of Section 230 that was not discussed during debate over the bill: it reflects fundamental US values that generally place free speech above privacy. Disputes often present a conflict between an individual’s privacy rights and the free flow of information. The United States, like other countries, balances the two rights, but it often errs on the side of free expression rather than privacy. Hence, it is unlikely that the United States would adopt a right to be forgotten similar to that of the European Union. Similarly, immunising intermediaries for user content is consistent with the broad free speech values embedded in the First Amendment of the US Constitution.[15]

In short, Section 230 emerged from the recognition in the early days of the modern Internet that there was a need for community standards for user-generated content. Policymakers recognised the great potential for harm to innocent victims when every user has the ability to publish text, articles, and videos. However, rather than mandating that websites and other service providers set specific standards, US policymakers believed that the free market would effectively force providers to set the responsible content rules that consumers demand. In doing so, the United States took a strikingly hands-off approach to the regulation of user content.

3       Early court interpretations of Section 230

Courts generally have remained faithful to the plain text of Section 230, and granted immunity to online platforms in a wide variety of contexts. In doing so, courts often recognise the general rule that Section 230 has few explicit exceptions and is drafted quite broadly.[16]

The first federal appellate court to issue a binding interpretation of the scope of Section 230 was the United States Court of Appeals for the Fourth Circuit, in the 1997 case, Zeran v America Online. In that case, an anonymous America Online (AOL) user posted the plaintiff’s name and contact information, asserting that he was selling distasteful merchandise related to a recent domestic terrorist attack. The plaintiff sued America Online for negligently distributing defamatory content, and the Fourth Circuit upheld the district court’s dismissal of the lawsuit. The Court reasoned that Section 230 provides complete immunity for America Online from claims that arise from user-generated content. In a broad interpretation of Section 230, the Court ruled that the statute “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”[17] Some critics argue that the Zeran reading is far broader than congressional intent, and that an online service provider “should act like a ‘good Samaritan’ in order to enjoy Section 230 ‘good Samaritan’ immunity status.”[18] A provider that fails to do so, they argue, “engages in bad faith” and should “be held accountable.”[19]

Nonetheless, in some cases, judges have immunised online intermediaries even though they recognise that the end result is unfair. For instance, a year after the Zeran decision, a federal district court in the District of Columbia dismissed a defamation lawsuit against America Online that was filed by a political staffer who was accused, in a newsletter distributed by America Online, of domestic abuse. The judge concluded that Section 230 barred the claim; however, the court noted that because AOL had the ability to modify the content, “it would seem only fair to hold AOL to the liability standards applied to a publisher or, at least, like a book store owner or library, to the liability standards applied to a distributor.” Nonetheless, the Court applied Zeran’s broad interpretation of Section 230 and held that the lawsuit was barred.[20]

Courts also will grant immunity even if the online intermediary has modified the third-party content, as long as the modification is not the source of the harmful content. For instance, in Batzel v Smith, a handyman sent an email to a museum security listserv, alleging that one of his clients claimed to be the granddaughter of one of Adolf Hitler’s “right-hand men” and that he had seen artwork in her home that he believed had been looted from Jewish people during World War II. The museum security group made minor edits to the email, sent it to its members on the listserv, and posted the edited message on its website. The client sued the museum security group for defamation, and the United States Court of Appeals for the Ninth Circuit ruled that Section 230 applied.[21] The museum group’s “minor alterations” to the email, the Court reasoned, did not necessarily render it responsible for the content created by the handyman, provided that the museum group’s employee reasonably concluded that the email was intended for publication. Section 230, the Court wrote, “necessarily precludes liability for exercising the usual prerogative of publishers to choose among proffered material and to edit the material published while retaining its basic form and message.”[22]

In short, the early court interpretations of Section 230 found few limits to immunity for intermediaries, unless the case fell within one of the three explicit exceptions. The Zeran opinion shaped other courts’ interpretations of the scope of immunity, causing one commentator to write in 2002 that Zeran was “the most influential interpretation of Section 230(c).”[23]

4       Broad intermediary immunity encourages user-generated content

As US courts issued Zeran and other opinions that broadly applied Section 230 immunity, websites and other online intermediaries gradually transformed the US Internet experience into one that depends on the contributions of users. Because websites and other platforms generally are not legally responsible for content created by third parties, they are more likely to allow their users to post consumer reviews, political opinions, news developments, and other content. This has turned the online media experience into a public commons.

User-generated content has transformed commerce in the United States, as consumer review sites have proliferated. In a 2014 survey conducted by BrightLocal, 88% of respondents stated that they read online user reviews to determine whether to purchase products or services from local businesses, and nearly 40% read these reviews on a regular basis.[24] In a separate 2014 survey conducted by Moz, 67.7% of respondents stated that online user reviews impact their decisions to purchase large products, such as appliances or cars.[25] Indeed, an entire segment of the Internet has developed around user reviews. Yelp provides user opinions of restaurants and other local businesses. TripAdvisor’s user reviews can determine the success – or failure – of hotels and restaurants. Even Amazon, the largest US ecommerce site, has incorporated user reviews as a central component of its product listings.

It is difficult to conceive of how online user reviews – at least in their current form – could continue to exist in the United States without Section 230. User reviews often are blunt, harsh, and, in some cases, subject to factual dispute. The businesses that are the subjects of these reviews may file defamation lawsuits, seeking to be compensated for what they believe are false claims in the user reviews. The people who posted the allegedly defamatory content may have used an anonymity service such as Tor, allowing them to mask their true identities, therefore making it difficult for the subject to name them in a lawsuit. Moreover, even if the posters are identifiable, they may not have sufficient assets to make a defamation lawsuit worthwhile for the plaintiff. Accordingly, the sites hosting the user comments may be an easier and more attractive defendant for a defamation lawsuit.

Section 230 generally has prevented such lawsuits, allowing sites such as Yelp and other consumer review services to act as neutral intermediaries without facing the burden of pre-screening every user comment for accuracy. Yelp and other consumer review sites have successfully relied on Section 230 to dismiss a number of claims arising from user content. For instance, in 2010, a New York state judge swiftly dismissed a lawsuit filed against Yelp by a dentist, arising from a user review that alleged that the dentist’s office is “small” and “smelly” and that the “equipment is old and dirty.” The dentist alleged that after he requested that Yelp remove the negative review, Yelp instead only removed the positive reviews of his business. The judge held that Section 230 clearly immunises Yelp from defamation lawsuits arising from negative reviews, and Yelp would retain its immunity even if, as the dentist alleged, the site highlighted the negative user reviews.[26]

Section 230 also has enabled the proliferation of social media, which relies on content generated by users rather than by the websites’ employees. Social media has become part of the fabric of US culture in the past decade. According to the Pew Research Center, 65% of adults in the United States used social media in 2015, up from 7% in 2005.[27] Although people use social media for a wide range of reasons, it has become a cornerstone of public dialogue in the United States. In a 2015 meta-analysis of studies on political participation and social media use, Shelley Boulianne found that more than 80% of the coefficients suggest a positive relationship between individuals’ participation in civic and political life and their use of social media.[28]

David G. Post, an Internet law scholar, estimated that by passing Section 230, “Congress helped create a trillion or so dollars of value” because companies such as Google, Craigslist, Instagram, and others that rely on user content could not otherwise exist:

The potential liability that would arise from allowing users to freely exchange information with one another, at this scale, would have been astronomical, and it is impossible for me to imagine, say, an investor providing funds for any of these ventures in a world without Section 230. [And it is not a coincidence, in my view, that these companies are all US-based, no 230-like immunity being provided in most other legal systems around the world.][29]

Similarly, as Jack Balkin has observed:

Because online service providers are insulated from liability, they have built a wide range of different applications and services that allow people to speak to each other and make things together. Section 230 is by no means a perfect piece of legislation; it may be overprotective in some respects and underprotective in others. But it has been valuable nevertheless.[30]

5       Criticisms of Section 230

Ever since its enactment twenty years ago, Section 230 has faced a steady drumbeat of criticism from advocates for people who claim to have been harmed by online defamation, harassment, and other harmful content. These critics argue that the broad reading of Section 230 has rendered it nearly impossible for victims to prevent intermediaries from transmitting harmful content. Unless intermediaries face the prospect of a significant court award, they contend, the companies have no incentive to prevent bad actors from using their services.[31]

Among the most recent grounds for criticism of Section 230 has been revenge pornography.[32] Users of online services post naked or sexual photos of unsuspecting victims, often their ex-lovers. Critics argue that Section 230 enables the distribution of revenge pornography. Indeed, some websites are specifically designed to encourage individuals to post non-consensual pornographic images, yet they often escape criminal and civil liability because they are structured to maximise the likelihood that Section 230 will protect them.[33] As Mary Anne Franks, a law professor who has led the fight against non-consensual online pornography, wrote in a forthcoming Florida Law Review article:

Given the ease with which individual purveyors of nonconsensual pornography can access or distribute images anonymously, it is difficult to identify and prove (especially for the purposes of a lawsuit) who they are. Victims are barred from making most civil claims against the websites that distribute this material because of Section 230 of the Communications Decency Act.[34]

Similarly, some critics argue that cyberbullying is more common because of Section 230. Advocates for children and young adults are increasingly concerned about websites and apps that allow anonymous users to post defamatory – and often hurtful – information about children. In some cases, children and young adults have committed suicide after being victims of cyberbullying. In one of the highest profile examples of cyberbullying, Lori Drew, an adult in Missouri, allegedly collaborated with two other individuals to pose as a teenage boy on MySpace and befriend Drew’s thirteen-year-old neighbour, Megan Meier. After Drew’s fictitious online character suddenly became hostile to Meier, the girl committed suicide. Drew was charged under the federal Computer Fraud and Abuse Act and convicted by a jury of a misdemeanour, but a judge later overturned the conviction. The social media site itself clearly was immune from civil liability, as no exception to Section 230 applied.[35]

As Internet law scholar Danielle Keats Citron has noted, “[a]s private actors that enjoy immunity from liability for the postings of others under Section 230 of the federal Communications Decency Act, content hosts can host as much or as little of their users’ speech activities as they wish.”[36]

The cloak of anonymity that many platforms offer – coupled with the platforms’ Section 230 immunity – enables uncivil discourse, some critics argue. For instance, in a study of three weeks of user comments on a local newspaper website, Coe et al. concluded that “incivility is a common feature of public discussions,” and that 55.5% of the news articles contained at least one uncivil user comment.[37]

Even in less egregious cases, critics say that Section 230 allows websites and other platforms to host irrelevant content that damages an individual’s reputation, whether true or not. Newspaper articles about twenty-year-old arrests, untrue reviews of small businesses, and other harmful content can stay on the Internet in perpetuity. This directly contradicts the approach of the European Union, which has provided in the GDPR a qualified right to be forgotten, in which data controllers are required to erase personal data under certain circumstances.

In short, critics raise compelling arguments that Section 230, in some cases, unfairly burdens individuals who have been irreparably harmed by user-generated content. Unless they are able to identify and sue the user who created the harmful content, they are without legal recourse due to Section 230.

6       Assessment of recent plaintiffs in Section 230 cases

To assess a primary concern of Section 230 critics – that the immunity unfairly burdens individuals who have been victimised by harmful content – the article analyses the nature of the claims in one year of court opinions in which intermediaries were immunised under Section 230. This section is based on a review of all US federal and state court opinions in the LEXIS database from 1 July 2015 to 30 June 2016. In twenty-seven of those opinions, a judge (or panel of judges) decided whether to immunise the defendant under Section 230. Fourteen of those opinions denied Section 230 immunity,[38] while thirteen of the opinions immunised the defendant.[39] To be clear, many other court opinions mention Section 230; however, this analysis only focuses on the opinions issued during that year in which courts expressly decided whether to grant Section 230 immunity to an online intermediary.

The review of cases found that most of the plaintiffs in these cases were corporations seeking to protect their business interests, not individual plaintiffs. Of the thirteen written opinions in which judges granted Section 230 immunity between 1 July 2015 and 30 June 2016, nine were defamation cases brought by businesses. This suggests that, although Section 230 can serve as a barrier to individuals who have been wronged online, it frequently immunises online platforms in cases that are brought by businesses.

For instance, among those nine cases was Roca Labs, Inc. v Consumer Opinion Corp., in which the defendants operated pissedconsumer.com, a website that allows customers to publicly post about products or services. The website contained a number of user posts about plaintiff Roca Labs, accusing its employees of lying to customers and selling ineffective products. Roca Labs sued the website operator under a number of common law torts, including four counts of defamation. The district court dismissed the lawsuit, reasoning that the users – and not the defendants – provided the allegedly defamatory information.[40]

Similarly, in Advanfort v The Maritime Executive, LLC, the plaintiffs, maritime security companies and their owners, sued a website operator that published an allegedly defamatory article written by the plaintiffs’ former lawyer.[41] The district court dismissed the complaint under Section 230, though it allowed the plaintiffs to file a new complaint to demonstrate either that the article was published in print (which would place it outside the scope of Section 230 immunity) or that the website “was at least partly responsible for the creation or development of the Article, rendering [Section 230] inapplicable.”[42]

Indeed, among the defendants that have most frequently received Section 230 immunity is XCentric Ventures, LLC, the operator of Ripoff Report, a website with the slogan, “Don’t let them get away with it! Let the truth be known!” Ripoff Report allows consumers to anonymously post complaints about businesses. Section 230 not only has protected Ripoff Report in a number of cases; it is essential to the site’s existence. The “Legal” section of Ripoff Report’s website contains a detailed summary of Section 230 and warns that “[i]f you are considering suing Ripoff Report because of a report which you claim is defamatory, you should be aware that, Ripoff Report has had a long history of winning these types of cases.”[43] For instance, a federal judge in Arizona dismissed a defamation complaint against XCentric in 2008, reasoning that although it “is obvious that a website entitled Ripoff Report encourages the publication of defamatory content,” the complaint must be dismissed because “there is no authority for the proposition that this makes the website operator responsible, in whole or in part, for the ‘creation or development’ of every post on the site.”[44] Such business-related cases are among the most common Section 230 disputes.

To be sure, this article does not argue that only individuals – and not businesses – should have the ability to recover damages for defamation. However, because the critics of Section 230 focus on revenge pornography, harassment, and other harms that target individuals, the business-oriented nature of many Section 230 cases should be kept in mind when assessing the strength of these criticisms.

7       Court-imposed limits on intermediary immunity

Regardless of whether the plaintiffs are individuals or companies, courts have become increasingly reluctant to grant Section 230 immunity to intermediaries. As new forms of harmful online behaviour emerged, US courts began to more carefully scrutinise online platforms’ claims of Section 230 immunity. This trend became clear in 2008, when an eleven-judge en banc panel of the United States Court of Appeals for the Ninth Circuit, whose large jurisdiction includes technology company-heavy California, issued its ruling in Fair Housing Council of San Fernando Valley v Roommates.com. That case involved Roommates.com, a roommate-matching service that allowed users to post and search for roommate listings.[45]

To post a listing on Roommates.com, users filled out a questionnaire that asked for, among other things, sexual orientation, sex, and whether they were seeking to bring children into the home. The questionnaire also had a free-form “Additional Comments” section that enabled users to describe other characteristics that they sought in a roommate. Among the responses that users wrote in the Additional Comments section were that they prefer “white Male roommates,” they are “NOT looking for black muslims,” and they prefer to avoid “drugs, kids or animals.”[46] The Fair Housing Council of San Fernando Valley alleged that Roommates.com violated state and federal housing laws, which prohibited discrimination based on sex, sexual orientation, and familial status. Roommates.com sought to dismiss the case, arguing that if any discrimination occurred, it was due entirely to user-provided content, and therefore Section 230 immunised the website from any liability under the housing laws.[47]

The majority of the en banc panel concluded that Roommates.com was not immune from at least some of the claims. Writing for the majority, Chief Judge Alex Kozinski reasoned that Roommates.com created the questions about sex, sexual orientation, and familial status and therefore was the “information content provider” of those questions “and can claim no immunity for posting them on its website, or for forcing subscribers to answer them as a condition of using its services.” Chief Judge Kozinski acknowledged that Roommates.com was immunised from liability for illegal responses created by users; however, he concluded that liability under the housing laws arose from the very act of asking discriminatory questions.[48]

The Ninth Circuit also had decided the Batzel case, described above. Chief Judge Kozinski concluded that the holding in Roommates.com was entirely consistent with the immunity that the Court found in Batzel. In Batzel, Kozinski stated, the Court held that the intermediary did not lose Section 230 immunity merely due to “minor changes to the spelling, grammar, and length of third-party content.” However, Chief Judge Kozinski reasoned, a website is not immune if it “is the one making the affirmative decision to publish” and therefore “contributes materially to its allegedly unlawful dissemination.”[49]

Chief Judge Kozinski, however, concluded that Roommates.com was entitled to Section 230 immunity for any allegedly discriminatory statements that users wrote in the “Additional Comments” section of its online questionnaire. Section 230 immunises the website for these comments, he reasoned, because the site “does not provide any specific guidance as to what the essay should contain, nor does it urge subscribers to input discriminatory preferences.” In short, the majority’s ruling in Roommates.com imposed liability if the very act of soliciting user-generated content violates an existing law; however, if users incidentally violate the law by voluntarily providing information, the intermediaries retain their immunity.[50]

Chief Judge Kozinski concluded that this distinction “is consistent with the intent of Congress to preserve the free-flowing nature of Internet speech and commerce without unduly prejudicing the enforcement of other important state and federal laws.”[51] Other judges, however, disagreed. In dissent, Judge Margaret McKeown wrote that the majority’s ruling “threatens to chill the robust development of the Internet that Congress envisioned.”[52]

Indeed, in the eight years since the Ninth Circuit issued its highly-publicised opinion in Roommates.com, courts have become increasingly likely to deny Section 230 immunity to online intermediaries for user-generated content. An empirical analysis by this author, published in 2017 in the Columbia Science and Technology Law Review,[53] found that in 2001 and 2002, US courts issued written opinions in ten cases in which online intermediaries claimed Section 230 immunity. In eight of those ten cases, the courts concluded that the intermediaries were immune. The remaining two cases involved intellectual property claims, which are explicitly exempt from Section 230. In contrast, a review of all written court opinions involving Section 230 that were issued between 1 July 2015 and 30 June 2016 found that in fourteen of the twenty-seven cases, the courts refused to provide intermediaries with full immunity. Only one of those fourteen cases was an intellectual property claim; the remaining denials of Section 230 immunity resulted from the conclusion that the intermediary contributed to the harmful content.

For instance, in Diamond Ranch Academy v Filer, a residential treatment facility filed a defamation lawsuit against the operator of a website that allowed the facility’s former residents to share their stories. The website operator moved to dismiss the lawsuit under Section 230, asserting that she merely summarised and made editorial changes to some of the content provided by third parties, just as the museum security group had in Batzel. The district court rejected this argument, concluding that the posts on her website “do not lead a person to believe that she is quoting a third party.”[54]

Likewise, in Doe v Internet Brands, an aspiring model posted information on a modelling industry networking website. She alleged that “two rapists used the website to lure her to a fake audition, where they drugged her, raped her, and recorded her for a pornographic video,” and that the website owner knew about the rapists but failed to warn her or others.[55] The US Court of Appeals for the Ninth Circuit concluded that her claim against the website is not barred by Section 230 because her “failure to warn” claim “has nothing to do with Internet Brands’ efforts, or lack thereof, to edit, monitor, or remove user-generated content.”[56]

Diamond Ranch Academy, Doe, and many other similar cases demonstrate courts’ gradual willingness to hold intermediaries accountable for third-party content that they encouraged or somehow augmented. Accordingly, Section 230 does not act as a complete bar to relief for plaintiffs who believe that they have been wronged online.

8       Voluntary Intermediary Moderation

In addition to the limits imposed on Section 230 by courts, intermediaries have developed policies, procedures, and technology to moderate user content. Even in cases in which they are not legally required to moderate user content, they do so to meet consumer demands. Such voluntary, market-based moderation was precisely the intent of Congress when it enacted Section 230 two decades ago.

To assess the extent to which US websites have voluntarily restricted user content, it is useful to review the 25 most popular US websites, as ranked by Alexa.com. Of the 25 sites, 18 allow user content, and all 18 have implemented terms of use that include extensive restrictions on that content. Although the policies take a variety of approaches and some are more detailed than others, at a minimum the policies address:

  • Illegal activities
  • Hate speech
  • Harassment
  • Bullying
  • Distribution of personal information
  • Nudity or pornography
  • Violent content

For example, consider the User Content and Conduct Policy of Google, the most visited US website.[57] The roughly 1,200-word document bans a great deal of content that could harm third parties. For instance, Google prohibits users from engaging in “harassing, bullying, or threatening behavior,” and from inciting others to engage in such behaviour. Google reserves the right to delete content or ban users who “single someone out for malicious abuse,” “threaten someone with serious harm,” “sexualize a person in an unwanted way,” or “harass in other ways.”[58]

Section 230 provides online platforms with the flexibility to determine the level of moderation. For instance, Google recognises that although its products “are platforms for free expression,” Google does not:

support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics.[59]

Google recognises that assessing such content “can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses a line.”[60]

In addition to having policies that restrict harmful user-generated content, many of the large platforms have continued to develop innovative procedures to enforce these policies. For instance, Facebook enables its users to select a button next to a post that they believe violates Facebook’s community standards. The user also selects a category that describes the type of violation, which triggers a report to Facebook. The company’s staff then review the content to determine whether to remove it. Alternatively, even if a poster complies with Facebook’s standards, Facebook has developed easy tools for users to choose to block future content from that poster.[61]
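At its core, this flag-and-review procedure is a categorised report queue. The following Python sketch is a minimal illustration of that structure, offered under stated assumptions: the category names are drawn from the policy topics surveyed above, while the class and method names are this author’s hypothetical inventions rather than any detail of Facebook’s actual implementation.

    from collections import deque
    from dataclasses import dataclass
    from enum import Enum

    class ViolationCategory(Enum):
        # Illustrative categories, drawn from the policy topics listed above.
        HARASSMENT = "harassment"
        HATE_SPEECH = "hate speech"
        NUDITY_OR_PORNOGRAPHY = "nudity or pornography"
        VIOLENT_CONTENT = "violent content"
        OTHER = "other"

    @dataclass
    class Report:
        post_id: str
        reporter_id: str
        category: ViolationCategory

    class ModerationQueue:
        """Collects categorised user reports for review by human staff."""

        def __init__(self):
            self._pending = deque()

        def flag(self, post_id, reporter_id, category):
            # A user flags a post and selects a category; this files a report.
            self._pending.append(Report(post_id, reporter_id, category))

        def next_for_review(self):
            # Staff pull reports in the order they were filed and decide
            # whether the flagged content should be removed.
            return self._pending.popleft() if self._pending else None

A reviewer process would then repeatedly call next_for_review and apply the platform’s community standards to each flagged post.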

Moreover, online platforms are increasingly developing new technologies to automatically filter content that their communities deem objectionable. For instance, some US news websites use a technology known as Civil Comments to enable community moderation of online comments. When users comment on a story on a participating news website, they also are asked to rate the civility and quality of two randomly chosen comments. The news website’s staff manually review comments that many users have deemed uncivil.[62]
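The escalation logic described above can be sketched in a few lines of Python. This is an illustrative assumption of how such crowd-based moderation might work, not the Civil Comments product itself: the two-comment sampling follows the description above, while the minimum number of ratings, the vote threshold, and the function names are hypothetical.

    import random

    def comments_to_rate(existing_comments, k=2):
        # Each new commenter is asked to rate k randomly chosen peer comments.
        return random.sample(existing_comments, min(k, len(existing_comments)))

    def needs_manual_review(uncivil_votes, total_votes,
                            min_votes=5, threshold=0.5):
        # Escalate a comment to the site's staff once enough peers have rated
        # it and a sufficient share of those raters have deemed it uncivil.
        if total_votes < min_votes:
            return False  # too little signal to judge yet
        return uncivil_votes / total_votes >= threshold

The design point worth noting is that the crowd does not remove anything; it merely prioritises which comments deserve the staff’s attention.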

Other online platforms have determined that anonymity fosters objectionable user content. Accordingly, a growing number of news websites in recent years have required users to post comments under their Facebook logins. For instance, when North Carolina television station WRAL announced in 2015 that it would begin requiring users to post comments under their Facebook accounts, it recognised that some users prefer anonymous comments, but that WRAL “would prefer to have fewer comments in exchange for dialogue that is more relevant, thoughtful, and courteous.”[63]

Some platforms have simply decided that user-generated content is not consistent with the quality that they seek to provide to their customers. For instance, in August 2016, National Public Radio announced that its news website would no longer allow user comments. In its announcement of this change, NPR wrote that it concluded that user comments “are not providing a useful experience for the vast majority of our users.”[64]

Online platforms also have gone far beyond their legal duties to prohibit illegal and obscene content on their services. For instance, federal criminal law requires US online service providers to notify the National Center for Missing and Exploited Children (NCMEC) if the providers have actual knowledge that their customers apparently have used their services to distribute child pornography.[65] NCMEC then analyses the content and, if it determines it is child pornography, contacts the proper law enforcement agency. Despite the obligation to file NCMEC reports when they obtain actual knowledge of apparent child pornography, US service providers are not required to proactively search for the illegal content. Indeed, the statute explicitly states that intermediaries are not required to “monitor any user, subscriber, or customer”.[66] Accordingly, US service providers are free to take a hands-off approach in which they look the other way: if the providers do not have actual knowledge of the apparently illegal content, then they need not file NCMEC reports, nor risk incurring legal fees during their customers’ criminal prosecutions.

However, the exact opposite approach has emerged. Many of the largest US online intermediaries have developed and implemented technology that scans their users’ cloud data, email, and other content for hash values that match an NCMEC database of hash values of known child pornography. They are under no legal obligation to conduct such scanning. However, the service providers say that they implemented these programs because consumers demanded a family-friendly online environment that is free of illegal content. For instance, in a criminal prosecution of a child pornography defendant that relied in part on evidence detected during AOL’s scan of his account, an AOL representative testified that AOL implemented the scanning partly in response to consumer complaints about “objectionable content,” and that AOL “would like to actually keep the members who complain about it and have a countermeasure against those who do it.”[67] In other words, market demand has driven online intermediaries to go far beyond their legal duties. This is precisely the rationale behind Section 230.
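The hash-matching technique described above can be illustrated with a short Python sketch. This is a simplified assumption of how such scanning might work: it performs exact SHA-256 comparisons against a plain text list of known digests (a hypothetical file format), whereas production systems typically also use perceptual hashing, such as Microsoft’s PhotoDNA, so that resized or re-encoded copies of an image still match.

    import hashlib

    def sha256_of_file(path):
        # Hash the file in chunks so that large uploads do not exhaust memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def load_known_hashes(path):
        # Assumed file format: one lowercase hex digest per line.
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    def matches_known_content(upload_path, known_hashes):
        # A match would trigger the provider's internal review and, once the
        # provider thereby obtains actual knowledge, a report to NCMEC.
        return sha256_of_file(upload_path) in known_hashes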

9       Conclusion

The US experience with broad intermediary immunity for user-generated content is useful as jurisdictions across the world assess their Internet liability regimes. A few lessons can be drawn from this review of the US experience under Section 230:

  • The relatively free-market approach of Section 230 has fostered the growth of social media and other platforms that depend heavily on user-generated content. These platforms have not only caused remarkable economic benefits, but they have fundamentally changed many aspects of life in the United States.
  • Section 230 does not provide intermediaries with complete protection from lawsuits. Courts are increasingly likely to conclude that the intermediaries somehow contributed to the content and therefore are not immune to lawsuits.
  • Although many Section 230 critics focus on the inequities that the statute imposes on individuals, Section 230 more frequently prevents businesses from suing their critics.
  • In response to consumer demand, online platforms have developed a number of policies and methods to moderate user-generated content.

To be sure, there always will be vile users who spread horrific content. However, these users are being pushed further to the fringe corners of the Internet as online platforms develop market-based responses to consumer demand. The mainstream, commercial Internet has developed reasonable limits on user-generated content based on society’s expectations. Without Section 230, those limits would instead be set by court opinions, statutes, and intermediaries’ fear of legal liability.


[1]     Delfi AS v Estonia, [2015] ECtHR 64669/09.

[2]     Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation).

[3]     Act No. 137 of 2001.

[4]     47 U.S.C. § 230.

[5]     Cubby, Inc. v CompuServe, Inc., [1991] 776 F. Supp. 135 (S.D.N.Y.).

[6]     Stratton Oakmont, Inc. v Prodigy Services Co., [1995] INDEX No. 31063/94, 1995 N.Y. Misc. LEXIS 229 (N.Y. Sup. Ct.).

[7]     47 U.S.C. § 230(e).

[8]     Mary Fine, “Mom Wants AOL to Pay in Child’s Sex Ordeal, She Calls Service Liable, Despite Law” (The Bergen Record, 19 April 1998).

[9]     47 U.S.C. § 230(c)(1).

[10]    47 U.S.C. § 230(f)(2) (“The term ‘interactive computer service’ means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”).

[11]    47 U.S.C. § 230(c)(2).

[12]    House of Representatives Report 104-458 (1996), at p. 194, available at https://www.congress.gov/104/crpt/hrpt458/CRPT-104hrpt458.pdf (accessed 9 June 2017).

[13]    Ibid.

[14]    “Statement of Representative Goodlatte” (1995) 141 Congressional Record, at p. H8471, available at https://www.congress.gov/crec/1995/08/04/CREC-1995-08-04.pdf (accessed 9 June 2017).

[15]    Jeff Kosseff, “Defending Section 230: The Value of Intermediary Immunity” (2010) 15 Journal of Technology Law & Policy 123-158.

[16]    See, e.g., PatentWizard, Inc. v Kinko’s, Inc., [2001] 163 F. Supp. 2d 1069 (D.S.D.) (“For now, the § 230 of the Communication Decency Act errs on the side of robust communication, and prevents the plaintiffs from moving forward with their claims.”); Morrison v America Online, Inc., [2001] 153 F. Supp. 2d 930 (N.D. Ind.) (“The wisdom of Congress in providing such immunity is well taken considering the myriad of constitutional and other legal issues that could be raised by various parties without giving such interactive computer service providers the ability to regulate without fear of legal action.”).

[17]    Zeran v America Online, [1997] 129 F.3d 327 (4th Cir.).

[18]    Andrew Sevanian, “Section 230 of the Communications Decency Act: A ‘Good Samaritan’ Law without the Requirement of Acting as a ‘Good Samaritan’” (2014) 21 UCLA Entertainment Law Review 121-145, p.144.

[19]    Ibid.

[20]    Blumenthal v Drudge, [1998] 992 F. Supp. 44 (D.D.C.).

[21]    Batzel v Smith, [2003] 333 F.3d 1018 (9th Cir.).

[22]    Ibid.

[23]    Paul Ehrlich, “Cyberlaw: Regulating Content on the Internet: Communications Decency Act Section 230” (2002) 17 Berkeley Technology Law Journal 410-419.

[24]    BrightLocal, “Local Consumer Review Survey 2015” (2015), available at https://www.brightlocal.com/learn/local-consumer-review-survey-2015 (accessed 5 May 2017).

[25]    Dan Hinckley, “New Study: Data Reveals 67% of Consumers are Influenced by Online Reviews” (2015) available at https://moz.com/blog/new-data-reveals-67-of-consumers-are-influenced-by-online-reviews (accessed 5 May 2017).

[26]    Reit v Yelp!, [2010] 29 Misc. 3d 713 (N.Y. Sup. Ct.).

[27]    Andrew Perrin, “Social Media Usage: 2005-2015” (2015) available at http://www.pewinternet.org/2015/10/08/social-networking-usage-2005-2015 (accessed 5 May 2017).

[28]    Shelley Boulianne, “Social Media Use and Participation: A Meta-Analysis of Current Research.” (2015) 18 Information, Communication & Society 524-538.

[29]    David Post, “A bit of Internet history, or how two members of Congress helped create a trillion or so dollars of value.” (2015) available at https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/08/27/a-bit-of-internet-history-or-how-two-members-of-congress-helped-create-a-trillion-or-so-dollars-of-value/ (accessed 5 May 2017).

[30]    Jack Balkin, “The Future of Free Expression in a Digital Age” (2009) 36 Pepperdine Law Review 427-444, p.434.

[31]    Arthur Chu, “Mr. Obama, Tear Down This Liability Shield” (2015) available at https://techcrunch.com/2015/09/29/mr-obama-tear-down-this-liability-shield/ (accessed 5 May 2017).

[32]    Zak Franklin, “Justice for Revenge Porn Victims: Legal Theories to Overcome Claims of Civil Immunity by Operators of Revenge Porn Websites” (2014) 102 California Law Review 1303-1336.

[33]    Amanda Levendowski, “Our Best Weapon Against Revenge Porn: Copyright Law?” (2014) available at https://www.theatlantic.com/technology/archive/2014/02/our-best-weapon-against-revenge-porn-copyright-law/283564/ (accessed 5 May 2017).

[34]    Mary Anne Franks, “‘Revenge Porn’ Reform: A View from the Front Lines”, Florida Law Review (forthcoming).

[35]    Kim Zetter, “Judge Acquits Lori Drew in Cyberbullying Case, Overrules Jury” (2009) available at https://www.wired.com/2009/07/drew_court/ (accessed 5 May 2017).

[36]    Danielle Citron, “Addressing Cyber Harassment: An Overview of Hate Crimes in Cyberspace” (2015) 6 Case Western Reserve Journal of Law, Technology & the Internet 1-11, p.9.

[37]    Kevin Coe, Kate Kenski and Stephen Rains, “Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments” (2014) 64 Journal of Communication 658-679, p.673.

[38]    Amcol v Lemberg Law, LLC, [2016] No. 3:15-3422-CMC, 2016 U.S. Dist. LEXIS 18131 (D.S.C.); Congoo v Revcontent, [2016] Civil Action No. 16-401 (MAS) (TJB), 2016 U.S. Dist. LEXIS 51051 (D.N.J.); Consumer Cellular v ConsumerAffairs.com, [2016] 3:15-CV-1908-PK (D. Or.); Diamond Ranch Academy v Filer, [2016] Case No. 2:14-CV-751-TC, 2016 U.S. Dist. LEXIS 18131 (D. Utah); Doe v Internet Brands, Inc, [2016] Case No. 12-56638 (9th Cir.); E-Ventures Worldwide, LLC v Google, [2016] Case No. 2:14-cv-646-FtM-29CM, 2016 U.S. Dist. LEXIS 62855 (M.D. Fla.); General Steel Domestic Sales v Chumley, [2015] Civil Action No. 13-cv-00769-MSK-KMT (D. Colo.); Giveforward v Hodges, [2015] Civil No. JFM-13-1891, 2015 U.S. Dist. LEXIS 102961 (D. Md.); J.S. v Village Voice Media Holdings, [2015] 359 P.3d 714 (Wash.); Malibu Media v Weaver, [2016] Case No. 8:14-cv-1580-T-33TBM, 2016 U.S. Dist. LEXIS 47747 (M.D. Fla.); People v Bollaert, [2016] No. D067863, 2016 Cal. App. LEXIS 517 (Cal. Ct. App.); Tanisha Systems v Chandra, [2015] CIVIL ACTION NO. 1:15-CV-2644-AT (N.D. Ga.); Trump Village Section 4 v Bezvoleva, [2015] Docket No. 509277/2014, 2015 N.Y. Misc. LEXIS 4848 (N.Y. Sup. Ct.); Xcentric Ventures v Smith, [2015] No. C15-4008-MWB, 2015 U.S. Dist. LEXIS 109965 (N.D. Iowa).

[39]    Advanfort v The Maritime Executive, LLC, [2015] Civil Action No. 1:15-cv-220, 2015 U.S. Dist. LEXIS 99208 (E.D. Va.); Brennerman v Guardian News & Media, [2016] Civ. No. 14-188-SLR/SRF, 2016 U.S. Dist. LEXIS 42923 (D. Del.); Caraccioli v Facebook, [2016] Case No. 5:15-cv-04145-EJD, 2016 U.S. Dist. LEXIS 29021 (N.D. Cal.); Despot v The Baltimore Life Insurance Co., [2016] Civil Action No. 15-1672 (W.D. Pa.); Doe v Backpage, [2016] 817 F.3d 12 (1st Cir.); Fakhrian v Google, [2016] No. B260705, 2016 Cal. App. Unpub. LEXIS 3004 (Cal. Ct. App.); Free Kick Master LLC v Apple, [2015] Case No. 15-cv-03403-PJH (N.D. Cal.); Nail v Schrauben, [2016] Case No. 1:15-CV-177, 2016 U.S. Dist. LEXIS 17987 (W.D. Mich.); Roca Labs v Consumer Opinion Corp, [2015] 140 F. Supp. 3d 1311 (M.D. Fla.); Rose v Facebook, [2016] Civil Action No. 16-2075, 2016 U.S. Dist. LEXIS 67111 (E.D. Pa.); Ross v Elightbars LLC, [2016] Case No. 3:14 CV 2610, 2016 U.S. Dist. LEXIS 82448 (N.D. Ohio); Sikhs for Justice v Facebook Inc, [2015] Case No. 15-CV-02442-LHK (N.D. Cal.); Silver v Quora, [2016] No. CV 15-830 WPL/KK (D.N.M.).

[40]    Roca Labs, Inc. v Consumer Opinion Corp, [2015] 140 F. Supp. 3d 1311 (M.D. Fla.).

[41]    Advanfort v The Maritime Executive, LLC.

[42]    Ibid. 

[43]    Ripoff Report, “About Us: Want to sue Ripoff Report?” (2011), available at http://www.ripoffreport.com/consumers-say-thank-you/want-to-sue-ripoff-report (accessed 5 May 2017).

[44]    Global Royalties, Ltd. v Xcentric Ventures, LLC, [2008] 544 F. Supp. 2d 929 (D. Ariz.).

[45]    Fair Housing Council of San Fernando Valley v Roommates.com, [2008] 521 F.3d 1157 (9th Cir.) (en banc).

[46]    Ibid.

[47]    Ibid.

[48]    Ibid.

[49]    Ibid.

[50]    Ibid.

[51]    Ibid.

[52]    Ibid.

[53]    Jeff Kosseff, “The Gradual Erosion of the Law that Shaped the Internet: Section 230’s Evolution Over Two Decades” (2017) 18 Columbia Science & Technology Law Review 1-41.

[54]    Diamond Ranch Academy v Filer.

[55]    Doe v Internet Brands.

[56]    Ibid.

[57]    Alexa, “The top 500 Sites on the web” (2017) available at http://www.alexa.com/topsites (accessed 7 May 2017).

[58]    Google, “User Content and Conduct Policy” (2017) available at https://www.google.com/intl/en-US/+/policy/content.html (accessed 11 April 2017).

[59]    Ibid.

[60]    Ibid.

[61]    Facebook, “Community Standards” (2017) available at https://www.facebook.com/communitystandards (accessed 11 April 2017).

[62]    Joseph Lichterman, “By crowdsourcing moderation duties, the startup Civil is working to improve comments for news orgs” (2016) available at http://www.niemanlab.org/2016/03/by-crowdsourcing-moderation-duties-the-startup-civil-is-working-to-improve-comments-for-news-orgs/ (accessed 7 May 2017).

[63]    WRAL, “WRAL Now Requiring Facebook Login to Comment.” (2015) available at http://www.wral.com/wral-now-requiring-facebook-login-to-comment/14435600/ (accessed 5 May 2017).

[64]    Scott Montgomery, “Beyond Comments: Finding Better Ways to Connect with You.” (2016) available at http://www.npr.org/sections/npr-extra/2016/08/17/490208179/beyond-comments-finding-better-ways-to-connect-with-you (accessed 5 May 2017).

[65]    18 U.S.C. § 2258A.

[66]    Ibid.

[67]    United States v Keith, [2013] 980 F. Supp. 2d 33. (D. Mass.).
