The impact of bots and disinformation on education news is unclear so far, but their effects on public discourse and on other areas of journalism suggest the need for much greater awareness.
By Alexander Russo
Horrifying and fascinating as they’ve been, the seemingly never-ending revelations about hackers and bots and fake news probably seem a long way away from the relatively placid world of education news.
After all, most of these efforts were aimed at preventing Hillary Clinton from becoming president and sowing social divisions, not at anything education-related. Who would bother trying to hijack public sentiment or influence media coverage about schools?
In reality, however, the distance is not so great.
There’s no concrete evidence that these kinds of shenanigans – networks of fake social media accounts spreading disinformation – have affected mainstream education coverage in any significant ways – yet.
But we already know that domestic social media advocacy groups attempted to influence debate over issues including Common Core, through a network of semi-automated Twitter accounts that few knew about at the time. And it turns out that Russian hackers defended Betsy DeVos against her critics during last summer’s controversial Bethune-Cookman appearance, too – also unnoticed in the torrential back-and-forth. There’s little doubt (in my mind, at least) that reporters and editors are being exposed to bots and disinformation campaigns on their Twitter and Facebook feeds, and that these efforts have affected coverage of education issues in some way – or will soon.
Education news appears to have dodged the bot-spread disinformation bullet so far – but it’s going to take awareness for it to stay that way.
By now, it’s no longer news that social media – Twitter, Facebook, etc. – has been greatly influenced by bots, hackers, and hidden methods of shaping perceptions like targeted ads and the dissemination of misinformation.
Bots made up a substantial portion of election-related tweets in 2016, according to researchers at the University of Southern California. Over the summer, CJR reported on a study that highlighted the large role they have been playing in the spread of fake news stories. The NYT reported on how fake news spreading on social media tore apart the people of Twin Falls, Idaho. Social media is full of bots, botnets, targeted disinformation, and automated advocacy.
And yet, the existence of these accounts and their tactics “remain largely unknown to the public, as invisible as they are invasive,” according to a recent Newsweek story by Samuel Earle. “Citizens are exposed to them the world over, often without ever realizing it.”
PJNET dominated the debate over Common Core at times, and likely influenced media coverage.
There are lots of ways that people try to influence public conversation (and, indirectly, news coverage). Twitter bots and networked botnets are among the most common vehicles for spreading disinformation.
Strictly speaking, a bot is an automated Twitter account controlled by software rather than a person. It posts, retweets, and likes, but no individual is behind each action. It may be programmed to retweet each time someone mentions the Common Core, for example. Or it may be bundled together with other accounts into a network that can act in concert – called a botnet – to publish on a certain topic at a certain time, or to antagonize or support an individual.
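To make the mechanics concrete, here is a minimal sketch of the keyword-trigger logic described above. Everything in it is illustrative: the keyword list and function names are assumptions, and a real bot would call Twitter's API rather than filter a local list of strings.

```python
# Hypothetical sketch of a keyword-triggered bot's core logic.
# A real bot would connect to the Twitter API; here we just
# filter a local list of tweet texts for monitored phrases.

KEYWORDS = {"common core", "#stopcommoncore"}  # illustrative watch list

def should_retweet(tweet_text: str) -> bool:
    """Fire whenever a monitored keyword appears in a tweet."""
    text = tweet_text.lower()
    return any(keyword in text for keyword in KEYWORDS)

def run_bot(incoming_tweets):
    """Return the tweets this bot would automatically amplify."""
    return [t for t in incoming_tweets if should_retweet(t)]
```

A dozen lines like these, cloned across hundreds of accounts and put on a schedule, is essentially what a botnet is.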
A hybrid kind of bot, called a cyborg, was originally set up by a human being but has since been hijacked by, or “donated” to, a larger coordinated effort.
Some bots – like the ones that swarmed last spring’s #EWA17 conference hashtag – are trying to sell things. Others try to engage, and often enrage, for political or ideological purposes.
Bots spread their influence both directly – through followers seeing their posts – and indirectly – by other bots and real-people followers passing the disinformation along.
In most cases, those who pass disinformation along aren’t doing so maliciously. They trust the source or admire the sentiment. They think it’s true, even if it seems outlandish.
If you’re like me, you’ve probably retweeted some fake news along the way, too.
BuzzFeed News recently published a guide showing how to spot a bot. (Short version: “Their volume of tweets. Their profile information. What they’re posting.”)
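Those three signals can be read as a rough checklist. The sketch below scores an account against them; the thresholds and field names are my own illustrative assumptions, not a tested detector and not BuzzFeed's methodology.

```python
# Rough heuristic based on the three red flags above:
# tweet volume, profile completeness, and what's being posted.
# All thresholds are illustrative guesses, not validated cutoffs.

def bot_score(tweets_per_day: float, has_profile_photo: bool,
              has_bio: bool, retweet_ratio: float) -> int:
    """Count how many of the three red flags an account raises (0-3)."""
    score = 0
    if tweets_per_day > 50:                  # volume: humans rarely sustain this pace
        score += 1
    if not (has_profile_photo and has_bio):  # profile: missing the basics
        score += 1
    if retweet_ratio > 0.9:                  # content: almost nothing original
        score += 1
    return score
```

An account posting 120 times a day with no photo, no bio, and nothing but retweets would score 3; a typical human account scores 0 or 1.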
This tweet from the fake Russian account @TEN_GOP generated an enormous amount of attention.
In the world of disinformation and bot networks, education is actually a trailing topic, says Jonathan Supovitz, co-director of UPenn’s Consortium for Policy Research in Education (CPRE): it gets pulled into disinformation campaigns around occasional hot-button issues but isn’t targeted as an area of ongoing focus.
But already we know that bots have played some role in shaping the education debate, especially when it comes to spreading disinformation about hot-button political issues like Common Core.
Last spring, researchers at UPenn revealed that a conservative advocacy organization had dominated the Common Core debate on Twitter, on some days supplying more than three of every five tweets about the controversial standards.
The network, called the Patriot Journalism Network (aka #PJNET), claims 500 members and a combined reach of nearly 4 million followers. As I wrote then, “few – including me – seemed to know the tweets on Common Core were being manipulated this way.” The large numbers of #StopCommonCore messages were incorporated into media coverage, which often observed that the state standards were caught between progressive and conservative criticism.
Since then, things have been pretty quiet. Nobody I know of has stood up and admitted that they wrote a story based on a fake tweet, or quoted a bot. Education journalists and social media gurus I contacted about their experiences reported little or no obvious bot-spread disinformation efforts of late:
“During the height of #CommonCore implementation, absolutely,” there were bots spreading disinformation, reports Patrick “Eduflack” Riccards. “Other than that, more propaganda trolling than specific disinformation.”
“Russo, are you a bot?” quipped NPR’s Anya Kamenetz.
All that could change at any minute, says UPenn’s Supovitz.
PJNET’s dominance during the Common Core debate shows just how quickly bad information can get amplified and incorporated into mainstream media coverage. It’s not hard to imagine something similar happening on the next big education debate, be it refugees in schools, DACA teachers in the classroom, vouchers, or ESSA implementation.
Just this week, the Chronicle of Higher Education reported that the Russian-linked troll farm controlling the now-infamous @TEN_GOP account was among those defending EdSec Betsy DeVos against her detractors:
“Betsy DeVos gets booed by black students. Unbelievable idiocy. She’s pushing for school CHOICE that will help BLACK students the most!”
That single tweet (image above) generated nearly a thousand retweets, 1,300 likes, and 435 responses.
Bots are often used to spread disinformation directly and by pushing topics onto Twitter’s trending list.
So, why aren’t more education journalists aware of the bot war going on around them?
It’s easy to dismiss fishy trending topics and Twitter engagements as the product of anger, not automation.
“I’ve had some interactions on Twitter with people who I thought were just really angry about my stories,” says USA Today’s national education reporter Greg Toppo. “But then when I responded it seemed like there wasn’t any ‘there’ there. They responded in an unrelated way, not to the point I was making. I just chalked it up to someone who wasn’t engaged, but maybe it was a bot.”
We like to think we’re too smart to fall for anything like that. And hard-working. So smart! So hard-working!
We also like to think that advertisements don’t affect us, and that we don’t exhibit racial bias, that Facebook and Twitter are progressive-leaning platforms because they were created by people who identify as liberals, and that we know where to place the apostrophe on a plural possessive noun no matter how long or short it is.
While journalists and media outlets are big users of Twitter in particular, in terms of reading it and pumping out their stories, they’re not really at the heart of the debate on social media. The UPenn Common Core studies have shown this, as does Education Next’s almost-annual ranking of top social media accounts.
Journalists skew liberal to moderate, and disinformation efforts that we know about have all skewed to the right. Most reporters probably didn’t know much about Breitbart News or other alt-right outlets 18 months ago, either.
Finally, it’s much more comfortable to focus on students’ abilities to catch fake news rather than to contemplate the possibility of having been fooled or manipulated into covering something that wasn’t entirely true.
Lots of journalists have interacted with Twitter accounts that turned out to be bots.
Twitter clamped down on PJNET earlier this month, according to Slate, claiming that PJNET’s system of sending out multiple, repeated, scheduled messages from individual Twitter accounts violated the company’s rules regarding spam.
The network hasn’t been shut down entirely, however, and hopes to revive its efforts after adjusting to Twitter’s new rules.
Indeed, PJNET is still going at it on Common Core.
Over the weekend, a Twitter account by the name of @TeriGRight responded to the Bill Gates speech in Cleveland with the tweet “More Gates dumps #CommonCore, Pledges $1.7B To start A NEW EXPERIMENT on OTHER PEOPLE’s CHILDREN! https://www.technocracy.news/?p=10732 #StopCommonCore #PJNET”
On Tuesday, Twitter announced a new transparency initiative that will require political ads to be identified and sourced.
Bots can be created for constructive purposes, too, right? (Yes, of course.)
Trending topics aren’t always fake, of course. Some are generated by real-world events. Others are the product of one or several high-follower Twitter accounts engaging on a certain topic that’s picked up by others.
And there’s no conclusive evidence that bots and disinformation determined the outcome of the presidential election – there may never be. Ditto for the adoption or repeal (renaming?) of Common Core in various states.
But common sense indicates that bots, botnets, and targeted advertising have had some combined impact. That’s why Twitter and Facebook are taking action, Congress is asking both companies to testify, and media outlets are working hard to figure out how fake news and bots affect public perception.
The lesson is for all of us – reporters and otherwise – to remember that what we see online is not necessarily organic or authentic. What we see on social media is a function of the sites and people we have liked and interacted with in the past (the infamous “filter bubble”), the ways Twitter and Facebook in particular are organized to serve us up information that confirms our preconceived beliefs (the Facebook algorithm), and the advocacy efforts that have sprung up in recent times to enhance and intensify those processes.
Not everyone you engage with on Twitter is an actual person. What you’re seeing may or may not represent authentic viewpoints of actual people. The debate on any hot-button issue is being distorted to some extent, and editors and reporters need to know that. I’m not sure they do.
In the meantime, some of those who study disinformation recommend extreme caution for journalists engaging with social media.
“If a journalist finds themselves embedding or using a tweet as a source without knowing who is behind that account, they are laying themselves open,” says disinformation expert Nimmo. “If it’s not a blue tick [confirmed] account, how do you know who it is?”