A Turning Point for Broadcasting Online Misinformation and Disinformation?
Privacy Plus+
Privacy, Technology and Perspective
For years now, many people have decried the decay of truth. Of particular concern have been the floods of misinformation (“facts” which are not right) and disinformation (“facts” which the speaker knows are not right and publishes anyway) increasingly pouring through every channel of communication. Such falsehoods have been amplified on online social media platforms both by AI-enabled bots and by large-scale broadcast technologies whose algorithms determine what people see when they search or scroll through those platforms.
The issues underlying online misinformation and disinformation are numerous and complex. In our view, the best single source for scholarship in this area is the Misinformation Review, published quarterly by the Shorenstein Center on Media, Politics, and Public Policy of the Harvard Kennedy School of Government:
https://misinforeview.hks.harvard.edu/
Its research director, Joan Donovan, has recently given an excellent synopsis of the disinformation crisis, which you can read here:
Generally, commentators have suggested that we combat both misinformation and disinformation with more information, social-media education, and education in basic civics and critical thinking. But battling this scourge in the digital ecosystem without the aid of the platforms themselves has met with only limited success. And, up to now, nothing has convinced the platforms to act.
Instead, by and large, the major online platforms – Facebook, Twitter, Parler, and others – have relied (i) on more-or-less pious Terms of Use in hopes of keeping their users civil (forbidding use of the platforms to promote violence, for instance), and (ii) on Section 230 of the Communications Decency Act, which provides that they shall not be treated as the “publisher or speaker” of their users’ content, shielding them from liability for what their users spew online and, indeed, for what the platforms then enlarge and amplify.
Section 230 has been effective in shielding the platforms from liability, but in our view, the platforms’ enforcement of their own Terms of Use – relying mostly on “moderators,” “juries of [their users’] peers,” and warning disclaimers – has been more gummy than biting. Kara Swisher’s recent interview with Parler CEO John Matze provides a devastating example of such toothless enforcement. You can listen to that interview on her podcast, “Sway,” by clicking on the following link:
https://www.nytimes.com/2021/01/07/opinion/sway-kara-swisher-john-matze.html
Yet after January 6th, with President Biden’s new administration and an evenly split Congress, we wonder whether things are starting to change.
Soon after the mob stormed the Capitol and five people died, the platforms started to act. Facebook suspended then-President Trump’s account and is now referring the issue to its powerful Oversight Board, which you can read about here:
https://www.facebook.com/help/711867306096893
After months of applying disclaimers and “this is disputed” labels, Twitter then dumped Trump’s account. Additionally, Apple and Google dropped Parler from their app stores, and Amazon Web Services (AWS) stopped hosting Parler.
None of these actions has gone without response, of course. Some have lamented “censorship” and incorrectly invoked the First Amendment, even though it applies only to actions of the government (and not to actions of private companies, which are at issue here). Angry users have also moved to other platforms, which promise even more freewheeling Terms of Use or encrypted messaging. Meanwhile, Parler has been trying to adapt, registering its domain with Epik (a domain seller and registrar that also serves Gab and other sites favored by the far-right), though apparently, as of this writing, it is not yet fully operational with a new host. You can read about that here:
https://www.cnn.com/2021/01/17/tech/parler-back-online/index.html
The significance of recent events, however, lies not in the inevitable pushback from angry users, but in the remarkable fact that these platforms have at last begun to take powerful action. Schools, courts, scholars, and certainly Congress haven’t slowed the algorithm-driven flood of misinformation and disinformation that has poisoned our public conversations, nor have they convinced users to be more thoughtful about how they use the platforms.
Perhaps the platforms are starting to see – at last – that since their algorithms are largely responsible for amplifying and enlarging this flood, they bear at least some of the responsibility for slowing it, too. And perhaps the political pressure of a new administration is helping that realization along.
Regardless, we only wish it hadn’t taken the recent sights, sounds, and deaths to make the platforms enforce their own Terms.
---
Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy and protection, cybersecurity, the Internet and technology. Open the Future℠.