The authors of Section 230: ‘The Supreme Court has provided much-needed certainty about the landmark internet law–but AI is uncharted territory’

Since 1996, the law, known as Section 230, has governed liability for millions of websites, blogs, apps, social media platforms, search engines and online services that host user-created content. Hundreds of court decisions have built a now well-settled body of jurisprudence around the statute. But it wasn’t until this year that the US Supreme Court got its first chance to hear Section 230 cases.

As the court’s 2022-23 term ends in June, policymakers anxiously awaiting the justices’ decisions on the 27-year-old law finally have their answer: Section 230 will remain in place. As co-authors of the statute in the 1990s, we are the first to acknowledge that no law is perfect, and this law is no exception. But the legal certainty provided by the court’s decisions comes at a most critical time, as artificial intelligence and applications such as ChatGPT raise new questions about who is responsible for defamatory or otherwise illegal content on the internet.

The clarity provided by Section 230 is critical to answering these questions for AI developers, investors and consumers of all stripes. The law clearly states that it does not protect anyone who creates or develops content, even in part, which, by definition, is what generative AI applications such as ChatGPT do.

The two cases decided by the Supreme Court this year, Gonzalez v. Google and Twitter v. Taamneh, involved allegations that the internet giants’ platforms aided and abetted terrorism by unknowingly hosting ISIS videos. The court’s answer, in a 9-0 vote, was “No.”

Shifting responsibility from actual wrongdoers to providers of services generally offered to the public has negative consequences, the justices observed. They noted that YouTube and Twitter are “used by billions of people, most of whom use these platforms for interactions that once took place by mail, by phone, or in public.” That alone “is insufficient to state a claim…(A) contrary rule would effectively hold any sort of communications provider liable for any sort of wrongdoing.”

Both platforms have strict policies against terrorism-related content but failed to block the videos at issue. Even so, the court said that while “bad actors like ISIS are able to use platforms like the defendants’ for illicit — and sometimes horrific — ends (…), the same can be said of cell phones, email, or the Internet.”

While Google and Twitter are no doubt pleased with the outcome, they have no need to thank Section 230. The law was not the reason they escaped liability. Rather, the plaintiffs simply failed to prevail on their basic claim that the platforms caused the harm they suffered.

This fact in itself demonstrates something that has long been evident about Section 230: it can be a convenient scapegoat. It has been blamed both for platforms’ decisions to moderate too much content and for their failures to moderate enough. But Section 230 is not what gives platforms the leeway to make these decisions. It is the First Amendment that gives platforms the right to decide how to moderate the content on their sites.

With Gonzalez and Taamneh in the rearview mirror, attention will soon turn to two cases the Supreme Court is likely to hear in its next term. Both Florida and Texas have enacted laws giving their attorneys general sweeping powers to oversee content moderation decisions by social media platforms. Both laws have been blocked following challenges in the lower federal courts, while requests for Supreme Court review are still pending.

Meanwhile, Congress continues to consider legislative changes to Section 230. It’s important to remember that Congress enacted Section 230 with overwhelming bipartisan support after lengthy consideration of numerous conflicting interests. Today’s debate is characterized by starkly different proposals that all but cancel each other out. Some are based on the premise that platforms are not aggressive enough in moderating content; others are based on concerns that platforms moderate too aggressively. All of them would put the government in charge of deciding what platforms should and should not publish, an exercise that would itself create new problems.

At the same time, we acknowledge that many tech companies face legitimate criticism for doing too little to keep illegal content off their platforms and for failing to provide transparency about controversial moderation decisions. Their use and abuse of Americans’ private data is another looming issue. These are areas where Congress should take decisive action.

One thing is for sure: Section 230 and its refinement of the legal regime that governed the internet’s first three decades will soon look far less challenging than the truly novel issues surrounding the rapid adoption of artificial intelligence on the internet of the future.

In that sense, 2023 is a lot like 1995, when we stood on the shore of the uncharted ocean of the World Wide Web. Today, as then, lawmakers are grappling with complex new issues for which the reflexive policy answers of the past are insufficient. A happy side effect is that the process of shared learning could yield bipartisan results, as it did in the 1990s when Section 230 was crafted.

Ron Wyden is a U.S. Senator from Oregon, chairman of the Senate Finance Committee and a senior member of the Senate Select Committee on Intelligence.

Former U.S. Representative Christopher Cox is an attorney, a director of several for-profit and nonprofit organizations, including NetChoice, and the author of a forthcoming biography of Woodrow Wilson (Simon & Schuster, 2024).

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
