The Washington Mandarins and Veteran Disinformation Spooks Behind Stanford’s Internet Observatory

Stanford University’s Internet Observatory (SIO) is the latest in a growing network of cybersecurity groups policing the activities of social media users, pushing a pro-State Department line about Washington’s adversaries, and spreading the fear of disinformation that’s driving popular support for renewed conflict with Russia and other nations.

This article will pull back the veil on one of the fastest-growing institutions in the US Security State’s information war. Some of the observatory’s central figures include Facebook’s security chief during the Cambridge Analytica scandal, leading academic champions of Western triumphalism and US policy advisers, and even a lead researcher from a cybersecurity group previously exposed as being itself a disinformation outfit.

‘Rebuilding the Arsenal of Democracy’

Formed just this past summer, the observatory is attached to Stanford University’s Cyber Policy Center, rather than being housed in a think tank, like the Atlantic Council’s Digital Forensic Research Lab, or operating as a for-profit cybersecurity company, like FireEye. In line with its more academic focus, SIO offers cybersecurity courses for students, such as program Director Alex Stamos’ popular “HackLab” course, “Trust & Safety Engineering,” and “Online Open Source Investigation,” as well as grants for full-time researchers and post-doctoral students, and it counts senior Stanford academics among its staff.

Craig Newmark, whose philanthropies have backed the Observatory, has justified his support as an effort at “rebuilding the arsenal of democracy,” a phrase that serves as an effective summary of his efforts in this field.

Who’s Who at SIO

Heading up the outfit is Alex Stamos, the former security chief at Facebook and a visiting fellow at the conservative Hoover Institution think tank. Stamos was in charge of Facebook’s cybersecurity during the Cambridge Analytica scandal, in which some 87 million users had their personal information collected and used, most without their knowledge or consent, by companies intent on helping candidates like Donald Trump shape elections. Moreover, when he left Facebook in the summer of 2018 to form the SIO, his departure stemmed primarily from his disagreement with other Facebook executives, who were apprehensive about surrendering so much user data to cybersecurity firms in the effort to track down supposed Russian political meddling on the site.

Meanwhile, its research team is headed up by Renée DiResta, a Mozilla fellow who’s lectured at events hosted by the arch-conservative Federalist Society. DiResta was also the research director at New Knowledge, ostensibly a cybersecurity outfit, which disbanded after it was revealed it had played a deep role in an online disinformation campaign designed to influence the December 2017 Alabama special election in favor of Democratic candidate Doug Jones.


Stanford Internet Observatory Director Alex Stamos (left) and Research Manager Renée DiResta (right)

While DiResta has sworn up and down that she knew nothing about the disinformation campaign being waged by New Knowledge chief Jonathon Morgan, at the time of the campaign she was also an Advisor on Policy at Data for Democracy, an informal cybersecurity brainstorming and collaboration network set up and coordinated by Morgan.

Further, the Stanford Cyber Policy Center’s co-director is Nate Persily, who founded Social Science One (SSO) in April 2018 to do much the same as the SIO hopes to now: sort through petabytes of privacy-protected data on Facebook users to “investigate the spread of information and misinformation on Facebook, and their impact on elections and democracy,” as Wired put it in July 2018. Indeed, Wired noted that in setting up the Observatory, Stamos hoped to gain access to Facebook data via SSO.

“There’s an enormous number of questions you can ask about what 2 billion people around the world are clicking and reading and sharing,” Harvard University Institute for Quantitative Social Science director Gary King, who co-founded SSO with Persily, told Wired. The group receives funding from Omidyar-connected groups such as the Knight Foundation, Democracy Fund and the Omidyar Network, as well as the Charles Koch Foundation of conservative oil tycoon fame.

DiResta’s Proximity to Disinformation Electioneering

DiResta became embroiled in this disinformation cauldron as early as November 2016, when she met Mikey Dickerson, a former Obama administration official who served as the first head of the US Digital Service, which managed US government websites. According to the Washington Post, Dickerson “expressed a desire to fight back” against Trump’s election victory.


Facebook banned the account of Jonathon Morgan, the CEO of New Knowledge, a cyber firm that meddled in Alabama’s 2017 special election.

Soon after, Dickerson founded American Engagement Technologies (AET), which later played a pivotal role in the Alabama special election, coordinating a massive effort to impersonate Republicans in order to discourage Alabamans from voting for Republican candidate and slavery and pedophilia defender Roy Moore. AET was bankrolled by LinkedIn founder Reid Hoffman, who sank $750,000 into the firm, $100,000 of which Dickerson used to hire New Knowledge. New Knowledge, in turn, coordinated Project Birmingham, the name of the disinformation effort, as documents obtained by the Washington Post show.

DiResta, in turn, advised AET before it obtained Hoffman’s backing and migrated to New Knowledge in the aftermath of the Alabama campaign, telling the Post she knew only of “an experiment in Alabama” while disavowing knowledge of its tactics.

However, DiResta had also joined Data for Democracy, Morgan’s startup group, as its Policy Lead in August 2017. Founded by Morgan the previous December, Data for Democracy is a nonprofit network of roughly 1,200 hackers dedicated to “use data and technology for social impact,” according to its LinkedIn profile, especially “election integrity, disinformation & social network manipulation, transparency, and policy initiatives.” Between these three connections, it seems increasingly unlikely she knew nothing of the tactics behind Project Birmingham; indeed, she had already professed sympathy with just such efforts to “fight back” against Trump.

The Pesky Problem of Privacy

Stamos, in an interview with Reuters, said pressures on Facebook to protect user privacy are making it harder to see and combat misuses on the platform, such as disinformation and election interference.

He specifically blasted the Federal Trade Commission settlement requiring Facebook’s board to create an independent privacy committee and to have greater oversight over third-party apps on its platform, saying the skittishness it creates around sharing users’ data with research groups would be “a blow for the public’s understanding of social media manipulation,” as Reuters summarized it.

Building a ‘Data Clearinghouse’

Indeed, now that Stamos has moved from security chief to academic, he’s renewed his pressure on tech giants to hand over user data for “academic” purposes, successfully winning backdoor access to user and ad data via API gateways from Facebook, Google, and Bing, turning the SIO into what Wired called “his data clearinghouse.”

At the time this article went to publication, Stamos had not answered inquiries by Sputnik about the other social media giants Wired noted the Observatory was courting for API access, including Twitter, YouTube and Reddit.

‘For $5,000 I Can Buy A Whole Lot of Speech’

Like Stamos, DiResta has a history of distrusting online people-to-people networks. In September 2018, she appeared on a panel titled, “Is Social Media a Threat to Democracy? Fake News, Filter Bubbles, and Deep Fakes,” at which she blasted Twitter for enabling “more speech” with its autofill retweet function, noting that “for $5,000 I can buy a whole lot of speech.”

In describing the panel’s focus, the event handout noted that “Social media companies are under increasing scrutiny for their effect on American society. According to critics, these platforms promote hyper-partisanship, disinformation, extremism and even violence. They are also blamed for facilitating Russian-sponsored voter manipulation during the 2016 election. Our panel of experts will discuss these challenges as well as their implications for the upcoming election.”


Renée DiResta speaking at the Reboot 2018 Conference in San Francisco, October 2, 2018

Pruning Facts: Be More Like Google!

Since its formation, the Observatory has rushed to take up the banner of “Russian disinformation campaigns,” turning out several reports ostensibly documenting such efforts in Africa and attacking Bing for failing to prune its search results of pro-Russian links.

While the SIO purports to have identified “Evidence of Russia-Linked Influence Operations in Africa,” as the title of an October 2019 report suggests, in truth it offers nothing but insinuation and supposition. The report itself admits its authors “cannot independently verify” their most contentious claim: that the Russian security contractor “Wagner Group” is behind a supposed “Facebook operation” in Libya.

This is the same kind of suggestive wording obfuscating ignorance that FireEye and the Atlantic Council’s DFRL have employed in claiming to identify disinformation campaigns by targets of US regime-change efforts, including Russia but also Venezuela, Iran, and China, among others.

For example, as Sputnik has reported, the cybersecurity reports used to justify the regular purges carried out by Twitter and Facebook were routinely assigned only moderate or low confidence by the researchers themselves, and in at least one instance in May 2019, Twitter Site Integrity chief Yoel Roth was forced to admit the company never looked at FireEye’s report before summarily deleting 2,800 accounts “linked to Iran.”

In another report, published in December 2019, the SIO faulted Microsoft’s default search engine, Bing, for failing to prune its search results as efficiently as Google does, specifically naming higher-ranked links to Sputnik and RT stories for search terms like “novichok,” “Skripal,” and “MH17” as a problem.


A graphic from a December 2019 report by the Stanford Internet Observatory showing the appearance of RT and Sputnik in search results for “skripal,” “MH17” and “novichok” on Bing and Google

In an interesting moment of frankness, the SIO admits that Google does, in fact, manually prune its search results – something the tech giant has long maintained it doesn’t do. A November 2019 report by the Wall Street Journal confirmed what documents leaked to the Daily Caller the previous April had hinted at: that Google intervenes directly to “correct” algorithms that yield politically undesirable search results.

“Despite publicly denying doing so, Google keeps blacklists to remove certain sites or prevent others from surfacing in certain types of results,” the WSJ report reads. “These moves are separate from those that block sites as required by US or foreign law, such as those featuring child abuse or with copyright infringement, and from changes designed to demote spam sites, which attempt to game the system to appear higher in results.”

From ‘Fake News’ to ‘Disinformation’

The language deployed in the SIO report is also interesting, in that it categorizes stories advancing a point of view that differs from the US State Department’s line alongside conspiracy theory links about Pizzagate, claims of links between vaccines and autism, and white supremacist content, as if they were all just flavors of “fake news” to be purged in defense of “real” news.

Although US President Donald Trump is credited with popularizing the slander “fake news,” the narrative has been swallowed hook, line and sinker in recent years by both tech giants and the billionaire investors who bankroll their growing media networks. For example, in 2015, Craig Newmark Philanthropies joined the Omidyar-backed Knight Foundation, the George Soros-funded Open Society Foundations and the Ford Foundation to back Google’s First Draft News and the Google News Lab, projects that aimed to coordinate “efforts between newsrooms, fact-checking organizations, and academic institutions to combat mis- and disinformation.”

“Fake news” has dovetailed with “disinformation” to provide a convenient excuse to extend state control over the realm of cyberspace, and the SIO is just the latest instrument in a growing symphony of thought policing.

