Clarifying Section 230 Means Looking Beyond the Immunity Provision
Privacy Plus+
Privacy, Technology and Perspective
This week, controversy has continued to swirl around Section 230 of the Communications Decency Act of 1996 (“CDA” or “Act”) as Twitter and YouTube joined Facebook in curbing the spread of conspiracy theories on their platforms. In turn, those actions prompted various proclamations from prominent individuals, including the President (“REPEAL SECTION 230!!!”) and the Chairman of the Federal Communications Commission (“FCC”), who announced on Twitter that the FCC has the “legal authority to interpret Section 230” and that it intends to move toward issuing new rules to “clarify its meaning.” A link to FCC Chairman Ajit Pai’s statement follows:
https://twitter.com/AjitPaiFCC/status/1316808733805236226/photo/1
Putting aside the issue of the FCC’s rule-making authority (or, for that matter, of repealing a well-established law), let’s consider Section 230. For your reference, a link to Section 230 follows:
https://codes.findlaw.com/us/title-47-telecommunications/47-usc-sect-230.html
Also, we have written before about immunity under subsection 230(c), and you can read that post here:
https://www.hoschmorris.com/privacy-plus-news/revisiting-section-230
Courts have often construed Section 230 by focusing on subsection 230(c), which acts as a liability shield for websites. Subsection 230(c) provides for two types of immunity: ‘publisher’ immunity under subsection 230(c)(1), and immunity to ‘police content’ (but not an obligation to police it) under subsection 230(c)(2).
Yet two important subsections precede subsection 230(c). Subsections 230(a) and 230(b) contain, respectively, factual findings and policy statements that characterized the entire ecosystem of the Internet as it existed in 1996, when the Act was written. Back in the days when websites functioned primarily as bulletin boards, the following statements about the Internet (abbreviated and paraphrased from subsections 230(a) and 230(b)) were universally true:
· services offer users a great degree of control over the information that they receive;
· services provide a forum for a true diversity of political discourse; and
· it is U.S. policy to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.
See 47 U.S.C. § 230(a) and (b). Today, some platforms still fit this 1996 characterization of the Internet (think of individual blogs and modern-day bulletin boards). But after nearly 25 years of explosive growth in computing power, data storage, algorithmic processing, data commoditization, and interactivity, and substantial initiatives and investments in artificial intelligence, connectivity, and augmented reality, it is perfectly clear that many platforms now operate in ways that bear little resemblance to that characterization. This is especially true for platforms, like Facebook, whose collection and use of their users’ personal information is an essential part of the services they provide.
It would have been hard even to imagine “Big Tech” when Section 230 was written in 1996. Not so anymore: giant companies have morphed to the point that there are real questions about whether their platforms are advancing “the vibrant and competitive free market” or strangling it. Here, we recall Rep. Pramila Jayapal’s statement during her questioning of Mark Zuckerberg earlier this year: “Facebook is a case study, in my opinion, in monopoly power because [the] company harvests and monetizes our data, and then…uses that data to spy on competitors and to copy, acquire and kill rivals.”
In fact, many of these platforms harvest vast amounts of personal information; process it through proprietary algorithms to develop “insights” into individual users; and then push out near-hypnotic content individually curated to each user’s preferences. Whether such targeting constitutes a “service” that “offer[s] users more control” over the information they receive, as imagined in subsection 230(a), or instead takes control of that transaction, co-opting control both of who gets to see, use, and weaponize the raw data of users’ lives and of how that data is fired back to manipulate them, is a live and urgent question.
Similarly, whether the major platforms are advancing the “true diversity of political discourse” hoped for in 1996’s subsection 230(b), or whether they are instead gyrating wildly between (i) denying that they bear any responsibility at all to secure their systems from becoming giant disinformation tools in the hands of immoral, criminal, and foreign threats; (ii) insisting that they’re doing a better job of that every day; and (iii) admitting that their platforms have lost control over malign forces and that, so far as “advancing true diversity of political discourse” is concerned, they’ve become giant, hugely profitable failures, is also a live question, and equally urgent if not more so.
So we agree that Section 230 needs fresh eyes, with a fuller focus on subsections 230(a) and (b). Here, however, we worry less about the immunity conferred under subsection 230(c) for policing content on the platforms than about the publisher immunity associated with Big Tech’s role in the promulgation of disinformation.
---
Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠.