Justice Thomas Emerges as an Internet Revolutionary

The court’s senior justice appears to be spoiling to upend the World Wide Web.

Justice Clarence Thomas in 2018. AP/J. Scott Applewhite, file

The Supreme Court is set to weigh in on online content moderation, with repercussions that could set the internet’s rules for a new generation. Two cases — Gonzalez v. Google and Twitter v. Taamneh — will be argued before the justices on February 21 and February 22. Both push novel arguments at the intersection of terrorism and technology.

At stake is Section 230 of the Communications Decency Act. It shields online platforms from liability for content posted by users. The law was enacted in 1996, the dawn of the internet age. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 

A book by a law professor, Jeff Kosseff, titled “The Twenty-Six Words That Created the Internet,” conveys Section 230’s efficacy in insulating platforms from liability for the third-party content they distribute. Under the law, Facebook is merely the host for the opinions of its nearly three billion users.

A 2020 review undertaken by the Department of Justice found that the “time is ripe to realign the scope of Section 230 with the realities of the modern internet,” to give “online platforms the freedom to grow and innovate while also encouraging them to moderate obscene and unlawful content.”

Now Section 230 could have its day in court. In Gonzalez v. Google, the family of a young woman, Nohemi Gonzalez, killed in the 2015 ISIS attack that claimed 130 lives at Paris, sued the trillion-dollar search giant, claiming that the Islamic State used YouTube — owned by Google — to recruit foot soldiers and indoctrinate them once they were on board.

The Gonzalezes argue that Google’s algorithms promoted terrorist content by recommending ISIS’s “hundreds of radicalizing videos.” Those recommendations, they maintain, are their own kind of content, and thus pierce the protections of Section 230, which assumes that platforms are neutral hosts.

In its briefs, Google responds that “YouTube does not produce its own reviews of books or videos or tell users that a given video is ‘terrific,’” distinguishing it from publishers. The search giant argues that the court “should not lightly adopt a reading of Section 230 that would threaten the basic organizational decisions of the modern internet.”

The case has attracted attention — and amicus briefs — from one of Section 230’s highest-profile foes on Capitol Hill, Senator Hawley. He writes to the justices that the law has been used to “shield the Nation’s largest and most powerful technology corporations from any legal consequences” and “allows platforms to escape any real accountability for their decision-making.”    

In Twitter v. Taamneh, a companion case, the relatives of Nawras Alassaf, whose life was taken by ISIS in a 2017 attack at Istanbul, argue that the named social media giant — along with Google and Meta — countenanced the proliferation of ISIS’s material without proper moderation or editing and should be liable under the Anti-Terrorism Act for treble damages.

One justice appears ready to think anew about Section 230. In a 2020 statement respecting the denial of certiorari, Justice Clarence Thomas observed that “when Congress enacted the statute, most of today’s major Internet platforms did not exist. And in the 24 years since, we have never interpreted this provision.”

Stepping into judicial terra incognita, Justice Thomas noted that “extending §230 immunity beyond the natural reading of the text can have serious consequences,” particularly in the realms of racial discrimination, human trafficking, and child pornography. He expressed an interest in “paring back the sweeping immunity courts have read into §230.”    

Justice Thomas pursued this skepticism the following year, in a concurrence relating to President Trump’s Twitter account. He used a footnote to remind lower court judges that Section 230 does not “give digital platforms immunity for bad-faith removal of third-party content.”

This past March, Justice Thomas seized the occasion of another petition to argue that “the arguments in favor of broad immunity under §230 rest largely on ‘policy and purpose,’ not on the statute’s plain text.” That is a startling analysis, coming from the committed textualist.

In Justice Thomas’s summary, the case was one where a male “sexual predator” — using Facebook — lured an underage girl, “Jane Doe,” to a meeting where she was “repeatedly raped, beaten, and trafficked for sex.” She escaped and sued. The Texas Supreme Court held that while she could proceed with her statutory claims, her common law ones were barred by Section 230.

The Texas judges demurred on the common law front despite finding that Facebook “knows its system facilitates human traffickers” and that the platform “failed to take any reasonable steps to mitigate” predators for fear of forfeiting advertising revenue.  

Clearly frustrated by the law’s impregnability, Justice Thomas noted then that “assuming Congress does not step in to clarify §230’s scope, we should do so in an appropriate case.” That case now appears at hand. 

