Democrat-Run States Are Pushing Laws To Target Social Media ‘Misinformation’
By Ailan Evans
Democratic state lawmakers are proposing laws to curb “misinformation” on social media sites and other online platforms, mirroring efforts by Democrats in Congress.
New York State Sen. Brad Hoylman announced a bill on Monday aimed at reducing the spread of misinformation and harmful content online.
The bill seeks to prohibit, among other things, social media companies from “knowingly or recklessly” promoting content through recommendation algorithms or other means that “includes a false statement of fact or fraudulent medical theory that is likely to endanger the safety or health of the public.”
“Social media algorithms are specially programmed to spread disinformation and hate speech at the expense of the public good,” Hoylman said. “The prioritization of this type of content has real life costs to public health and safety.”
Hoylman cited testimony from former Facebook employee Frances Haugen, in which she said Facebook’s algorithms amplified hateful and incendiary content, as justification for the bill. Haugen leaked internal Facebook communications and research to journalists and lawmakers detailing the tech giant’s business practices with regard to COVID-19 misinformation and election fraud content on the platform.
“So when social media push anti-vaccine falsehoods and help domestic terrorists plan a riot at the U.S. Capitol, they must be held accountable,” Hoylman said. “Our new legislation will force social media companies to be held accountable for the dangers they promote.”
The bill seeks to create an enforcement mechanism by allowing private citizens, as well as the state's attorney general, to sue companies that amplify false and dangerous content.
However, as the bill aims to restrict speech traditionally protected under the First Amendment, its constitutionality is dubious, according to legal experts.
“This bill’s attempt to restrict the dissemination of that constitutionally protected content is facially unconstitutional,” Santa Clara University School of Law professor Eric Goldman told The New York Post.
California Democratic state lawmakers have also set their sights on limiting social media misinformation, though they have opted for bills intended to force social media companies to be more transparent regarding misinformation policies.
California Assemblyman Ed Chau introduced a bill, AB 35, in December 2020 that would require social media platforms to disclose their policy and mechanisms for reducing misinformation and their plans for addressing manipulative or deceptive practices.
“The proliferation of misinformation can have a severe impact on the psychological and emotional well-being of Californians, which makes access to accurate information all the more important,” Chau said, adding that social media platforms have “become an avenue for those interested in spreading misinformation, including acts of fraud.”
Though that legislation is currently stalled in committee, another California bill, AB 587, introduced several months later, seeks additional transparency compliance from online platforms.
This legislation, sponsored by eight Democrats and one Republican and introduced in March 2021, would require social media platforms to “file semi-annual reports” disclosing their policies on hate speech, misinformation, extremism and other issues, as well as their plans to combat this objectionable content. The bill would also force companies to disclose data and metrics on the aforementioned content every quarter.
“Californians are becoming increasingly alarmed about the role of social media in promoting hate, disinformation, conspiracy theories, and extreme political polarization,” Assembly member Jesse Gabriel said, announcing the bill. “It’s long past time for these companies to provide real transparency into their content moderation practices.”
The bill attracted support from several advocacy organizations, including the Anti-Defamation League, whose CEO, Jonathan Greenblatt, said the legislation would “move us closer to holding social media companies accountable for the hate and harassment they allow on their platforms.”
Democrats’ state-level efforts to combat alleged misinformation mirror the party’s efforts in Congress, where Democratic members have introduced legislation targeting social media companies over the content for which they provide a platform.
Democratic Sens. Amy Klobuchar of Minnesota and Ben Ray Lujan of New Mexico unveiled a bill in July that would remove Section 230 liability protections from social media platforms that promote "health misinformation," as defined by the Department of Health and Human Services (HHS). This would enable private citizens to sue tech companies whose platforms recommend such content to them.
Top Democrats on the House Energy and Commerce Committee introduced legislation in October which would remove Section 230 immunity from platforms that recommend “personalized” content contributing to “physical or severe emotional injury” of a user, thereby enabling individuals to sue if they are harmed by promoted content.
The lawmakers said the bill would help curb the spread of “disinformation” and “extremism.”