Disinformation research relies on AI and lots of scrolling

Atilgan Ozdil/Anadolu Agency/Getty Images
What kinds of lies and falsehoods are circulating on the internet? Taylor Agajanian used her summer job to help answer that question, one message at a time. It often got murky.
She reviewed a social media post where someone had shared a story about vaccines with the comment “Hmmm, that’s interesting.” Was the person actually saying the news was interesting or implying that the story wasn’t true?
Agajanian often read around and between the lines when she worked at the Center for an Informed Public at the University of Washington, where she reviewed social media posts and recorded misleading claims about COVID-19 vaccines.
As the midterm elections approach, researchers and private sector companies are racing to track false claims about everything from ballot harvesting to voting machine conspiracies. But the field is still in its infancy, even as the threats that viral lies pose to the democratic process loom. Getting a feel for the lies people are spreading online might seem like a simple exercise, but it's not.
“The larger question is, can anyone ever know what everyone else is saying?” says Welton Chang, CEO of Pyrra, a startup that tracks small social media platforms. (NPR has used Pyrra’s data in several stories.)
Pyrra automates some of the steps the University of Washington team relies on humans for: it uses artificial intelligence to extract names, locations and topics from social media posts. Using the same technologies that in recent years have enabled AI to write remarkably like humans, the platform generates summaries of trending topics. An analyst reviews the summaries, weeds out irrelevant items such as ad campaigns, edits them slightly, and shares them with clients.
One recent batch of these summaries included the unsubstantiated claim "Energy Infrastructure Under Globalist Attack."
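Pyrra hasn't published its pipeline, but a rough sketch of the kind of entity extraction described above, using the open-source spaCy library and made-up posts, might look like this:

```python
# Rough sketch of extracting names, places and organizations from posts,
# in the spirit of what's described above. spaCy's small English model
# stands in for whatever proprietary pipeline Pyrra actually uses.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")  # install: python -m spacy download en_core_web_sm

posts = [
    "Globalists are attacking our energy infrastructure in Texas!",
    "Another substation incident reported near Portland last night.",
]

entity_counts = Counter()
for post in posts:
    doc = nlp(post)
    for ent in doc.ents:
        # ent.label_ is e.g. PERSON, GPE (a location), ORG
        entity_counts[(ent.text, ent.label_)] += 1

# The most frequently mentioned names, places and groups hint at trending topics
for (text, label), count in entity_counts.most_common(10):
    print(f"{label:8} {text:30} {count}")
```

The summarization step would sit on top of tallies like these; the post text and model choice here are illustrative only.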
Diverging paths and interconnected webs
The University of Washington's and Pyrra's approaches sit at opposite ends of the automation spectrum – few teams have that many staff members – around 15 – devoted to monitoring social media, or lean so heavily on algorithms to synthesize the material and produce the output.
All methods have caveats. Manual monitoring and coding of content can miss developments, and while artificial intelligence can process massive amounts of data, it struggles with nuance, such as recognizing satire and sarcasm.
Even an incomplete picture of what is circulating in online discourse allows society to react. Research on voting-related misinformation in 2020 helped inform election officials and voting rights groups about which messages to highlight this year.
For responses to be proportionate, society must also assess the impact of false narratives. Journalists have covered disinformation spreaders whose posts rack up high engagement numbers but have limited real-world impact, coverage that risks "spreading more hysteria about the state of online operations," wrote Ben Nimmo, who now investigates global threats at Meta, Facebook's parent company.
While the language of posts can be ambiguous, it is easier to establish who followed and retweeted whom. Some researchers therefore analyze networks of actors as well as narratives.
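Those follow and retweet relationships form a graph. A minimal sketch of that kind of actor-network analysis, using the open-source networkx library with invented retweet records, could look like this:

```python
# Minimal sketch of actor-network analysis: build a directed retweet graph
# from (retweeter, original_author) pairs and see who gets amplified most.
# The account names and records are invented for illustration.
import networkx as nx

retweets = [
    ("alice", "influencer_1"),
    ("bob", "influencer_1"),
    ("carol", "influencer_2"),
    ("alice", "influencer_2"),
    ("dave", "influencer_1"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# In-degree: how many distinct accounts retweeted each author
amplified = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)
print(amplified)  # e.g. [('influencer_1', 3), ('influencer_2', 2), ...]
```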
The plethora of approaches is typical of a field just forming, says Jevin West, who studies the origins of academic disciplines at the University of Washington’s School of Information. Researchers come from different fields and bring methods they are comfortable with to start with, he says.
West collected research articles from the academic database Semantic Scholar that mentioned "misinformation" or "disinformation" in their title or abstract, and found that many came from medicine, computer science and psychology, but some also came from geology, mathematics and art.
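West's exact collection pipeline isn't described here; a rough sketch of that kind of query against Semantic Scholar's public Graph API (pagination, rate limits and error handling omitted) might look like this:

```python
# Rough sketch: search Semantic Scholar's public Graph API for papers matching
# "misinformation"/"disinformation" and tally their fields of study.
# This is an illustration of the approach, not West's actual pipeline.
from collections import Counter
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "misinformation disinformation",
        "fields": "title,year,fieldsOfStudy",
        "limit": 100,
    },
    timeout=30,
)
resp.raise_for_status()

field_counts = Counter()
for paper in resp.json().get("data", []):
    for field in paper.get("fieldsOfStudy") or []:
        field_counts[field] += 1

print(field_counts.most_common())  # e.g. Medicine, Computer Science, Psychology...
```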
"If you're a qualitative researcher, you'll go … and literally code everything you see," West says. More quantitative researchers do large-scale analysis, like topic mapping on Twitter.
Projects often use a combination of methods. "If [different methods] start to converge on similar kinds of … conclusions, then I think we'll feel a little bit better about that," West said.
Struggling with basic questions
One of the very first steps in researching misinformation — before someone like Agajanian starts tagging posts — is identifying content relevant to a topic. Many researchers start their search with phrases they think people talking about the topic might use, see what other phrases and hashtags show up in the search results, add them to the query, and repeat the process.
It is possible to miss keywords and hashtags, not to mention that they change over time.
"You have to use some sort of keyword analysis," West says. "Of course it's very rudimentary, but you have to start somewhere."
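As a simplified illustration of that snowball process, here is a sketch that expands a seed keyword list with hashtags that co-occur in matching posts; the posts, seed terms and thresholds are all made up:

```python
# Simplified sketch of iterative keyword expansion ("snowball" search):
# start with seed phrases, find hashtags that co-occur in matching posts,
# add them to the query, and repeat. The posts here are invented.
import re
from collections import Counter

posts = [
    "Mail-in ballots are being dumped, wake up! #ballotharvesting #rigged",
    "Another #rigged machine 'glitch' in my county #voterfraud",
    "Great turnout at the county fair today #vote2022",
]

keywords = {"ballot harvesting", "#ballotharvesting"}

for _ in range(3):  # a few snowball rounds
    matches = [p for p in posts if any(k.lower() in p.lower() for k in keywords)]
    hashtags = Counter(tag.lower() for p in matches for tag in re.findall(r"#\w+", p))
    # Add hashtags found in matching posts (real projects use frequency
    # thresholds and a human in the loop before adding anything)
    new_terms = {tag for tag, n in hashtags.items() if n >= 1} - keywords
    if not new_terms:
        break
    keywords |= new_terms

print(sorted(keywords))
```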
Some teams build algorithmic tools to help. A Michigan State University team manually sorted more than 10,000 tweets into pro-vaccine, anti-vaccine, neutral, and irrelevant as training data. The team then used the training data to create a tool that sorted over 120 million tweets into those buckets.
To keep the automatic sorting reasonably accurate as the social conversation evolves, humans need to keep annotating new tweets and feeding them into the training set, project co-author Pang-Ning Tan told NPR in an email.
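The Michigan State team's actual model isn't described here; a minimal sketch of the same idea, training a simple text classifier on hand-labeled tweets with scikit-learn (the tweets and labels below are invented), looks like this:

```python
# Minimal sketch of the approach described above: hand-labeled tweets train a
# classifier that then sorts unlabeled tweets into the same buckets.
# A bag-of-words model stands in for whatever the team actually used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = [
    "Got my booster today, grateful for the science",   # pro-vaccine
    "They're hiding the real side effect numbers",       # anti-vaccine
    "CDC updates its vaccination schedule for fall",     # neutral
    "Best tacos in town, no contest",                    # irrelevant
]
train_labels = ["pro", "anti", "neutral", "irrelevant"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_tweets, train_labels)

# New, unlabeled tweets get sorted automatically; humans keep labeling fresh
# examples and retraining so the model tracks how the conversation shifts.
print(model.predict(["Vaccines gave my cousin heart problems, they lie to us"]))
```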
If the interplay between automatic detection and human review sounds familiar, it might be because you've heard big social platforms like Facebook, Twitter and TikTok describe similar processes for moderating content.
Unlike the platforms themselves, researchers face another fundamental challenge: access to data. Much disinformation research uses data from Twitter, in part because Twitter is one of the few social media platforms that lets users easily tap its data pipeline – known as an application programming interface, or API. This allows researchers to download and analyze large numbers of tweets and user profiles.
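For Twitter specifically, the documented v2 recent-search endpoint is what lets researchers pull tweets programmatically. A minimal sketch (the query is illustrative, you would need your own bearer token, and pagination and rate-limit handling are omitted):

```python
# Minimal sketch of pulling tweets through Twitter's documented v2
# recent-search endpoint. TWITTER_BEARER_TOKEN is a placeholder for
# your own credential.
import os
import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": '"ballot harvesting" -is:retweet lang:en',
        "tweet.fields": "created_at,public_metrics",
        "max_results": 100,
    },
    timeout=30,
)
resp.raise_for_status()

for tweet in resp.json().get("data", []):
    print(tweet["created_at"], tweet["text"][:80])
```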
The data pipelines of smaller platforms tend to be less well documented and may change at short notice.
Take the recently deplatformed Kiwi Farms, for example. The site served as a forum where anti-LGBTQ activists harassed gay and trans people. "When it first went down, we had to wait for it to show up somewhere else, and for people to talk about where it is," Chang said.
“And then we can identify, okay, the site is now here – it has this similar structure, the API is the same, it was just replicated somewhere else. And so we redirect the data ingestion and pull the content from there.”
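Chang's description suggests the collectors are parameterized by a base URL, so that when a mirror with the same structure appears, the ingestion can simply be pointed at it. A toy sketch of that idea (the forum domain and the /threads/latest path are hypothetical):

```python
# Toy sketch of "redirecting the data ingestion": the collector only knows a
# base URL, so when a site reappears at a new domain with the same structure,
# you swap the URL and keep pulling. The endpoint path here is hypothetical.
import requests

def pull_latest_threads(base_url: str):
    resp = requests.get(f"{base_url}/threads/latest", timeout=30)
    resp.raise_for_status()
    return resp.json()

# Old domain goes down; a mirror with the same API appears elsewhere,
# so the source table is updated and ingestion continues.
SOURCES = {"example-forum": "https://forum.example.net"}
threads = pull_latest_threads(SOURCES["example-forum"])
```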
Facebook's data service CrowdTangle, while claiming to deliver all publicly available posts, turned out not to have done so consistently. On another occasion, Facebook botched its data sharing with researchers. More recently, Meta has moved to shut down CrowdTangle without any announced alternative in place.
Other big platforms, like YouTube and TikTok, offer no comparably accessible API, data service, or collaboration with researchers. TikTok has promised more transparency for researchers.
In such a vast, fragmented and changing landscape, West says there’s no good way at this point to tell what the state of misinformation is on any given topic.
"If you asked Mark Zuckerberg, 'What are people saying on Facebook today?' I don't think he could tell you," Chang said.