Stanford University: Seven tips for spotting disinformation related to the Russia-Ukraine conflict
Social media is a well-established source for first-hand accounts of breaking news, and the Russian invasion of Ukraine has been no exception: As the conflict continues, Ukrainian citizens are using platforms like Twitter, Facebook and TikTok to show the world what is unfolding on the ground.
Amid the deluge of authentic reports has been a spate of misleading news and disinformation – narratives intended to discredit or cause harm – related to the conflict, says Shelby Grossman, a research scholar at the Stanford Internet Observatory (SIO), an interdisciplinary program run under the auspices of the Cyber Policy Center at the Freeman Spogli Institute for International Studies (FSI).
“We are seeing the unintentional spread of falsehoods, along with covert influence operations around the conflict in Ukraine,” said Grossman.
Grossman and her team are closely monitoring the narratives emerging on social media related to the crisis, including online propaganda from the Kremlin. A report of their initial findings was published just two days before Russia launched a large-scale invasion of Ukraine.
Grossman said that while they aren’t necessarily seeing new disinformation tactics, what’s new is how the tactics are being applied. Here are seven disinformation trends Grossman and her team have observed related to the Russia-Ukraine war, along with her tips for seeing through them.
1. Hacked accounts
Meta, Facebook’s parent company, recently announced that a Belarusian hacking group had taken over Ukrainian Facebook accounts. The hackers used those accounts to post videos claiming that Ukrainian soldiers were surrendering.
In explaining the appeal of hacking over creating a new account, Grossman said: “If disinformation campaigns create new fake accounts, it takes time to build up an audience and get engagement. Hacking an existing account that already has an organic audience and meaningful engagement is a strategy to increase reach quickly.”
How to spot: Sometimes the name of the account is changed, but the handle – the username often denoted by the @ symbol – isn’t. “Just spending 10 seconds looking at an account, in some cases one can realize that something is weird,” Grossman said. But here too Grossman urges caution: Sophisticated actors can change the handle, too.
2. Fabricated claims and false flags
The investigative group Bellingcat recently uncovered a report circulated by pro-Kremlin outlets that implied the Ukrainian government was responsible for an improvised explosive device (IED) explosion in the Donbas region that killed Ukrainian citizens. Included in the report was a photo of a body purported to be a victim of the blast. (Forensic experts determined that the person was dead before the alleged explosion and that the event was likely staged with cadavers.)
Fortunately, many of these fabricated claims and false flags – reports of actions made to look like they were carried out by the other side – are being spotted and stopped before they can gain much traction. “The whole disinformation research community has been on the lookout for these false flag claims and calling them out as fake before they’ve had the chance to really spread,” Grossman said.
How to spot: Check the source of claims about the war, she says. “Frequently, falsehoods are spread without any source. If there is a source, you can Google the source to see what people have written about its reputation. For example, you might come across an article from riafan[dot]ru. You might not know what this outlet is, but if you Google it, the second entry is a Wikipedia page which quickly explains that this news outlet is tied to a troll factory.”
3. Old media circulating out of its original context
Grossman saw a video on her TikTok feed of a parachuter recording himself jumping out of a plane. The comments indicated that users believed the parachuter was a Russian soldier invading Ukraine. In fact, the video was from 2015.
“I don’t think that was malicious, and it may not be that impactful, but that kind of material is going viral,” said Grossman.
How to spot: If you see something that seems suspicious or outrageous, Grossman recommends reverse image searching, which works for video as well. Simply upload a screenshot of the image or video into the search bar of Google Images and results will show you where else that image has appeared. You can also search account names and their posting history, which is how one reporter figured out where the video of the parachuter originated.
4. Manipulated images
Public profile photos that have been stolen are commonly modified by flipping the orientation of the original shot, switching its coloring and changing the background.
How to spot: Reverse image searching works reasonably well on manipulated images, said Grossman, so that’s a good place to start. Pro-Kremlin actors are also creating fake social media accounts with AI-generated profile photos. “Similarly, these can be hard for most people to spot, but looking for asymmetries is one approach – for example, a face with different earrings in each ear, or a shirt that doesn’t look completely symmetrical,” Grossman said.
5. Unverified reports
Resharing or posting statements without a source is common, even among journalists. “Often, posters will fail to say if it’s based on their own reporting or if they got it from somewhere else,” said Grossman.
How to spot: Be skeptical of content that has no material backing up the claim – even if it was shared by someone you trust. Instead, look for reporting published by established news outlets.
6. Scams
A Twitter account representing the Ukrainian government recently solicited cryptocurrency donations from the public – this was an authentic ask. But in response, there was a spate of verified Twitter accounts – those with the blue checkmarks – being hacked and changed to look like official Ukrainian government accounts, said Grossman. These hacked accounts asked for donations to support Ukraine, but in reality, the funds were being sent to the address of a scammer.
How to spot: Before donating funds – particularly cryptocurrency – do some Googling to verify that your funds will go where you intend them to, Grossman advises.
7. Pro-Kremlin narratives
Some of the claims that Grossman and her team have seen circulating are Kremlin-sponsored news – for example, that the West was stoking hysteria about an imminent attack and that the panic was benefiting Biden politically.
How to spot: One way to spot pro-Kremlin messages is to look for reports emerging from Russian state-affiliated media. Both Facebook and Twitter label the accounts of such outlets – which include those not commonly known to be affiliated with the Russian state. Twitter recently began labeling posts that include a link to Russian state media, and in the U.S., Facebook started demoting links. In the E.U., users are unable to access pages for RT and Sputnik, two Russian state-owned news outlets.
Not all platforms have been as transparent and proactive. Research by Stanford students in one of Grossman’s classes showed that TikTok does not label state-sponsored media as such. Grossman hopes more platforms will start identifying state-affiliated websites and accounts.
“I think that’s a really useful and important thing to be doing,” Grossman said. “It gives people information about the political agenda of the content they are reading and might give people pause before sharing.”