Sunday, November 17, 2024

Documents Shed New Light on Feds’ Collusion with Private Actors to Police Speech on Social Media



In the runup to the 2020 election, cybersecurity experts at the Department of Homeland Security and Stanford University decided they had discovered a major problem. 

The issue was not compromised voter rolls or corrupted election tallies but a “gap” in the government’s authority to clamp down on what it considered misinformation and disinformation – a gap identified by DHS officials and interns on loan to the agency from the Stanford Internet Observatory. Given what SIO research manager Renée DiResta described as the “unclear legal authorities” and “very real First Amendment questions” regarding this gap, the parties hatched a plan to form a public-private partnership that would provide DHS with an avenue to surreptitiously censor speech. 

The collaboration between DHS’ Cybersecurity and Infrastructure Security Agency and the Stanford outfit would quickly expand into a robust operation whose full extent is only now becoming clear. RealClearInvestigations has obtained from House investigators records revealing in previously undisclosed detail the nature and mechanics of the operation – the SIO-led Election Integrity Partnership.  

Related: Right Was Focus of Fed-Private Lens on Social Media RCI
Related: Rep. Jordan’s Report on Gov’t/Stanford/Big Tech Censorship Fox

They show at a granular level the thousands of tweets and Facebook posts on topics from mail-in voting to aberrant election results – arguably core protected speech – that the public-private partnership flagged to social media platforms for censorship, much of which the platforms would suppress. 

The evidence shows EIP – sometimes alongside CISA – pressuring platforms to target speech that included statements by then-President Trump; opinions about election integrity rooted in government records and even think-tank white papers; and speculative tweets from statesmen and everyday citizens alike. RCI details notable instances here. 

EIP scoured hundreds of millions of social media posts for government-disfavored content about election processes and outcomes, and also collected reports of offending speech from the operation’s governmental and non-governmental partners. 

RCI’s reporting, drawing on sources including Missouri v. Biden, a pending lawsuit alleging state suppression of free speech including COVID dissent and news of Hunter Biden’s abandoned laptop, illustrates how EIP coordinated with government officials; targeted right-leaning domestic speech from politicians, journalists, and everyday Americans; and served as an active censorship advocate rather than a mere misinformation and disinformation research vehicle. 

“One look at these documents shows the government and these organizations working hand-in-glove to suppress the speech of Americans,” said Rep. Dan Bishop (R-N.C.), Chairman of the House Homeland Security oversight subcommittee that procured the documents from Stanford. 

When notified that EIP had flagged her November 2020 tweet of a Federalist op-ed titled “America Won’t Trust Elections Until The Voter Fraud Is Investigated,” the outlet’s editor-in-chief, Mollie Hemingway, called the effort “unconscionable” and said the “censorship-industrial complex … clearly views free speech as its enemy.” 

EIP, whose work came to light in the “Twitter Files,” is one of a constellation of government-tied third-party organizations that critics see as First Amendment-skirting “cutouts.” The NGOs reject this characterization, arguing they are engaged in critical research about information harmful to the public. 

They see the criticism of their efforts itself as part of the problem they are fighting. One of EIP’s partners observed that “As mis- and disinformation researchers, it’s distressing … to see some of the very dynamics and tactics we study being used to disrupt and undermine our own work.” 

The ‘Nerve Center’ of Fed-Led Speech Policing – and Its Stanford Partner 

CISA was established in November 2018 under the Trump administration to “elevate the mission” of an existing DHS office; its mandate is to protect critical infrastructure.  

This includes election infrastructure like polling stations and voting machines, a responsibility the departing Obama administration assigned to CISA’s predecessor after claims of Russian meddling in the 2016 contest. 

Before the 2020 election, CISA widened its mandate to include combatting “election infrastructure disinformation,” encompassing online speech about election administration and results – without regard to whether the speaker was foreign or domestic. 

The plaintiffs in Missouri v. Biden would later characterize CISA as the “nerve center” of federal government-led speech policing, citing its role in linking and coordinating federal agencies and social media companies to combat mis- and disinformation, and its purported pressuring of “platforms to increase censorship of speech that officials disfavor.” They also accuse it of “switchboarding”: collecting reports of election-related mis- and disinformation on social media from state and local election officials and forwarding the content to the platforms for potential suppression.

Federal authorities like CISA and their private-sector counterparts generally distinguish misinformation (“not necessarily intentionally false information,” in the words of EIP) from disinformation (“purposefully seeded”) by whether the speaker intends to mislead or manipulate.

The plaintiffs also amassed evidence showing that CISA “outsourced” efforts to the Stanford Internet Observatory. That CISA worked with SIO, and SIO spearheaded the Election Integrity Partnership, was no coincidence. 

SIO founder Alex Stamos had significant clout as Facebook’s former chief security officer, a role in which he spearheaded the company’s internal probe of Russian efforts to meddle in the 2016 election. A longtime cybersecurity executive, Stamos had connections to national security agencies dating back years. Research manager Renée DiResta had previously investigated Russia’s 2016 social media meddling for the Senate Intelligence Committee, work Democrats used to claim Russia helped elect Trump.

SIO bills itself as non-partisan, though Stamos and DiResta have both been publicly critical of Trump. Stamos called for the president to be banned from social media after January 6. DiResta helped raise money that would seed, and ultimately served as research manager for, New Knowledge, a cybersecurity company that had led a social media disinformation campaign aimed at defeating Republican candidate Roy Moore in the 2017 Alabama Senate race by framing him as Kremlin-backed.

The burgeoning mis- and disinformation-fighting industry of which SIO is a part arose in direct response to Trump’s 2016 victory and consists mainly of those outraged over it – a victory they believe social media platforms enabled, including by facilitating Russian influence operations.

The Stanford Internet Observatory launched the EIP months before the 2020 election, alongside other, often government-linked heavy hitters in what advocates call social media misinformation and disinformation analysis, including the University of Washington’s Center for an Informed Public; the Atlantic Council’s Digital Forensic Research Lab; and social media analytics firm Graphika. 

The Stanford observatory convened EIP as “a model for whole-of-society collaboration” aimed at “defending the 2020 election against voting-related mis- and disinformation.” 

The partnership did so in two primary ways, the records show.  

First, EIP lobbied social media companies, with some success, to adopt more stringent moderation policies around “content intended to suppress voting, reduce participation, confuse voters as to election processes, or delegitimize election results without evidence.” 

It did so through something of a passive-aggressive strategy. The consortium documented the major platforms’ content moderation policies, highlighted perceived deficiencies, and published its findings on a rolling basis as the policies changed in the run-up to Election Day.  

Stamos later said, “We’re not going to take credit for all of the changes,” adding that EIP produced revised versions of its analysis “eight or nine times.” 

Putting the platforms “in a grid to say, you’re not handling this, you’re not handling this, you’re not handling this, creates a lot of pressure inside of the companies,” Stamos added, “and forces them to kind of grapple with these issues …” 

This effort coincided with regular meetings between national security agencies and Big Tech, led by CISA, in which government officials too questioned the companies’ content-moderation policies, and periodically asked if the companies were modifying them. 

Second, EIP surveilled hundreds of millions of social media posts for content that might violate the platforms’ moderation policies. In addition to identifying this content internally, EIP also collected content forwarded to it by external “stakeholders,” including government offices and civil society groups. EIP then flagged this mass of content to the platforms for potential suppression.

EIP’s government stakeholders included CISA; the Election Infrastructure Information Sharing and Analysis Center, or EI-ISAC for short; and the State Department’s Global Engagement Center. 

Outside of CISA, the most heavily involved was the CISA-funded EI-ISAC. It is a conduit for state and local election officials to report false or misleading information, which could then be forwarded by its parent, the CISA-funded Center for Internet Security, to social media companies for review. CISA connected EI-ISAC to the EIP. 

EIP coordinated its efforts via a digital “ticketing” system. There, one of as many as 120 analysts, or an external stakeholder, could highlight a piece of offending social media content, or a narrative consisting of many offending posts, by creating a “ticket,” and share it with other relevant stakeholders by “tagging” them. Tagged stakeholders could then communicate with each other about the content and what actions they might take to combat it. 

For the social media companies, this meant removing posts outright, reducing their spread, or “informing” users by slapping corrective labels on dubious content.  

During the 2020 election cycle, EIP generated 639 tickets, covering 4,784 unique URLs – content shared millions of times – disproportionately related to the “delegitimization” of election results. Platforms like Twitter, Google, and Facebook responded to 75% or more of the tickets in which they were tagged. These platforms “labeled, removed, or soft blocked” 35% of the URLs shared via EIP. For comparison, platforms reportedly remove content flagged by the FBI at a 50% rate. 

The EIP was open about its 2020 election-related efforts, writing real-time blog posts and releasing a lengthy report detailing its activities.  

But it never produced the underlying tickets, prompting requests for them from the House Homeland Security and Judiciary Committees in connection with their investigations into alleged government-driven censorship. 

Targeting Americans’ Political Speech  

SIO has now produced ticket-level data for the House Homeland Security Committee, which solicited the data following an oversight subcommittee hearing at which I testified due to my writings on free speech and civil liberties at Newsweek, The Federalist, and elsewhere. 

RCI has extensively reviewed the ticket data, which comes in the form of several spreadsheets encompassing nearly 400 of EIP’s 639 tickets, each containing 95 fields of data. Those fields include descriptions like “Misinformation tweet regarding re-voting” or “Voter turnout >100% in swing states chart, spreading on Twitter”; the URLs of the associated questionable content; the stakeholders tagged; and their comments, including calls for takedowns of content. 

A review of the spreadsheets, informed by EIP’s 2020 report and court filings, indicates that: 

  • About 10% of the tickets bear markings of involvement from federal officials – including at CISA, the FBI, and/or the State Department’s Global Engagement Center – through references to government email addresses, offices, or officials. CISA internally kept tabs on the progress of at least 11 EIP tickets, according to discovery material it produced in Missouri v. Biden.
  • The number of government-involved tickets grows to more than 25% when one cross-references the EIP data with the lawsuit discovery material. Nearly a quarter of the 400 EIP tickets originated with misinformation reports from the CISA-funded EI-ISAC. Federal officials directly forwarded dozens of these same reports to social media companies for potential moderation. SIO did not respond to RCI’s inquiry as to the number of tickets federal officials weighed in on, and what percentage of those tickets social media companies acted on.
  • Almost 60% of all comments associated with tickets are redacted. This may have been to protect the identities of the many Stanford students who participated in the effort. It is unclear if those comments make reference to federal officials.
  • Nearly all tickets deal with domestic speech. This is consistent with EIP’s 2020 report noting that less than 1% of tickets pertained to foreign interference.  
  • Of the 330 tickets in which EIP analysts measured the virality of the offending content, nearly half were less than viral, per EIP’s definition of 1,001 or fewer engagements.
  • The word “recommend” or some derivative thereof appears over 100 times in ticket comments, suggesting EIP was not a passive research effort but one aimed at spurring social media companies to censor.
  • No right-leaning groups flagged offending content in the ticket sample. By contrast, groups including the Democratic National Committee, Common Cause Education Fund, and the NAACP identified content for potential suppression.
  • Conservative influencers and news sources predominate in the tickets, consistent with EIP’s 2020 report. In response to claims of bias, EIP says that “without targeting any specific accounts of politically affiliated content, EIP’s research determined that accounts that supported President Trump’s inaccurate assertions around the election included more false statements than other accounts.”  

Ticket Samples 

A review of individual tickets shows EIP at times targeting clear falsehoods about the 2020 election. But it also demonstrates, among other things, that EIP targeted elected officials; called for social media platforms to take action on content while CISA was doing the same; and flagged a raft of speculative and satirical posts – even posts referencing U.S. government documents – subjectively, at the whims of analysts. 

One ticket concerns a tweet from then-President Trump about the ability of an early voter to change his vote, which EIP termed “Procedural Interference.” The ticket comments indicate EIP flagged the tweet for Twitter, and that EIP later “heard back from Twitter through CISA” that “the Tweet was not in violation of our Civic Integrity Policy.” (Emphasis added.) Records in Missouri v. Biden show CISA’s chief counter-mis- and disinformation officer, Brian Scully, had also reported the tweet to Twitter, which responded to him directly about it. Therefore, EIP and its stakeholder, an executive agency, had both forwarded the chief executive’s speech to a social media platform for potential censorship – albeit unsuccessfully.  

Another tweet flagged to Twitter by EIP, and separately by CISA, concerned voting machines. It was taken down – and no archived version of the tweet exists – after an election official who identified the tweet for EIP wrote: “This is false. Voting machines work the vast majority of the time. Old machines do have issues, but to phrase it like [this] vastly overstates the scope of the problem.” This suggests that how a tweet was interpreted – not just what it actually said – could get it censored.  

In another ticket, EIP flagged a tweet from Ron Coleman, a New Jersey-based conservative lawyer who, responding to claims that Philadelphia had destroyed ballot envelopes shortly after the 2020 election, noted the claimed destruction “makes it hard to prove Biden got any legitimate mail in votes at all.” 

Asked for comment, Coleman pointed RCI to a reply to his tweet, in which he clarified “It increasingly looks like this [the destruction of ballots] didn’t happen.” 

Coleman added: “I corrected the record without any censorship because of my own regard for the truth and my reputation.” 

“In contrast,” he said, EIP “operate[s] in the shadows” and is “unaccountable.” 

“Why didn’t they confront me instead of ‘telling on me’?”  

RCI has collected a number of additional examples of flagged content, including the responses of those targeted, in a separate article. 

CISA and EIP’s Further Ties 

The district court that heard Missouri v. Biden described CISA and the EIP as “completely intertwined.” Events after the 2020 election further substantiate that claim.  

In the days following Nov. 3, 2020, with President Trump challenging the integrity of the election results, CISA rebuked him in a statement, calling the election “the most secure in American history.” The president would go on to fire CISA’s director, Christopher Krebs, by tweet.  

Almost immediately thereafter, Krebs and Stamos would form a consultancy, the Krebs Stamos Group. In March 2021, Krebs would participate in a “fireside chat” when EIP launched its 2020 report. 

CISA’s top 2020 election official, Matt Masterson, joined SIO as a fellow after leaving CISA in January 2021. Krebs’ successor at CISA, Director Jen Easterly, would appoint Stamos to the sub-agency’s Cybersecurity Advisory Committee, established in 2021, for a term set to expire this month.  

Director Easterly would also appoint Kate Starbird – cofounder of the University of Washington’s Center for an Informed Public, one of the four organizations that made up the EIP – to the committee. Starbird chaired the advisory committee’s since-abolished MDM (Mis-, Dis-, and Mal-Information) Subcommittee, which focused on information threats to infrastructure beyond elections.

SIO’s DiResta served as a subject matter expert for the now-defunct subcommittee. DHS scrapped the entity in the wake of the public furor over DHS’ now-shelved “Disinformation Governance Board.”  

The EIP would re-emerge on a smaller scale in the 2022 midterms. In the interim, SIO launched a successor project called the Virality Project, which sought to do for COVID-19 what EIP did for elections, pursuing “narratives that questioned the safety, distribution, and effectiveness of the vaccines.” Stamos and others communicated with CISA officials about these efforts, and current and former CISA interns worked as researchers and analysts on the project. 

Despite this raft of ties, in a March 2023 statement, SIO’s partner, the University of Washington Center for an Informed Public, challenged the idea that EIP censored content as a government cutout. It wrote: “CISA did not found, fund, or otherwise control the EIP. CISA did not send content to the EIP to analyze, and the EIP did not flag content to social media platforms on behalf of CISA.”  

House Homeland Security Committee Chairman Mark Green (R-Tenn.) countered, in a statement to RCI, that the ticket records represent “clear evidence that government employees helped the Election Integrity Partnership … work with social media companies to censor free speech in the name of combating ‘misinformation.’”  

An aide to the committee added that “Even if SIO and their partners didn’t literally take down a tweet or post, they were a foundational part of the process that led to those takedowns.”   

“It is a distinction without a difference.” 

EIP Faces a Chill? 

The courts will weigh in on this dispute. 

On July 4, 2023, Louisiana District Judge Terry A. Doughty found in Missouri v. Biden that the Biden White House, and agencies such as CISA, the FBI, and the CDC, had likely violated Americans’ First Amendment rights by proxy through cajoling, coercing, and colluding with social media platforms to censor protected speech on matters from election integrity to COVID-19.   

He sought to freeze the alleged government-led censorship during the pendency of the case by issuing a ten-plank preliminary injunction. The injunction prohibited the feds not only from working with platforms to quell protected speech, but also from colluding with entities – specifically including the SIO and EIP – to accomplish the same. Federal authorities challenged the ruling – arguing that the preliminary injunction violated the government’s own right to speech – sending the case to the 5th U.S. Circuit Court of Appeals.  

The appellate court upheld the injunction, but in modified form, jettisoning the SIO- and EIP-related provision. Still unsatisfied, the U.S. government appealed to the Supreme Court, which on Oct. 20 granted certiorari in the case, now dubbed Murthy v. Missouri. The high court will assess whether the government did in fact convert social media companies into First Amendment-violating speech police, and whether the injunction’s “terms and breadth” are proper. 

In an Oct. 10 Supreme Court filing, the plaintiffs requested that, if the court granted certiorari, it also rule on whether the 5th Circuit erred when it vacated the provision forbidding the government from partnering with entities like SIO and EIP to suppress speech.  

In granting certiorari, the court did not indicate it would take up this question. Asked whether the plaintiffs would continue to press it, Amber Hargroder, communications officer at the Louisiana Department of Justice, told RCI: “We intend to raise all relevant arguments and considerations to the Supreme Court in pushing back against federal censorship.”  

There is also a pending companion case, Hines v. Stamos, brought by some of the same plaintiffs. They contend the likes of the SIO, EIP, and Virality Project served as co-conspirators with the federal government in an effort to violate the First Amendment.  

When asked for comment on the case, the plaintiffs told RCI “We intend to press forward expeditiously in litigating our claims against the so-called ‘Election Integrity Partnership’ and ‘Virality Project…’”  

SIO did not respond to RCI’s inquiries about the two pending cases, nor to its other questions.

While the legal cases work their way through the courts, and Congress investigates, those in the anti-censorship space have been on a public relations campaign trying to draw attention to what they see as improper government intrusion. 

For her part, Kate Starbird holds out hope that “researchers and their institutions won’t be deterred by conspiracy theorists and those seeking to smear and silence this line of research for entirely political reasons.”  

Alex Stamos is less sanguine. In a transcribed interview before the House Judiciary Committee in June, when asked whether the EIP would continue in the coming election cycle, he replied:   

I’m going to have to have a discussion with Stanford’s leadership. Since this investigation has cost the university now approaching seven figures legal fees, it’s been pretty successful I think in discouraging us from making it worthwhile for us to do a study in 2024. 

This article was originally published by RealClearInvestigations and made available via RealClearWire.