Deepfakes and LinkedIn: malign interference campaigns

Credit to Author: Christopher Boyd | Date: Wed, 20 Nov 2019 16:00:00 +0000

Deepfakes haven’t quite lost the power to surprise, but given their wholesale media saturation over the last year or so, there’s a sneaking suspicion in some quarters that they may have missed the bus. When people throw a fake Boris Johnson or Jeremy Corbyn online these days, the response seems fairly evenly split between “Wow, that’s funny” and a barely amused shrug.

You may well be more likely to chuckle at people thinking popular Boston Dynamics spoof “Bosstown Dynamics” videos are real, but that willingness to believe is exactly what cybercriminals are banking on, and it’s where the real malicious potential of deepfakes may lie.

What happens when a perfectly ordinary LinkedIn profile features a deepfake-generated image of a person who doesn’t exist? Everyone believes the lie.

Is the sky falling? Probably not.

The two main markets cornered by deepfakes at the time of writing are fake pornography clips and a growing digital effects industry whose output is a reasonable imitation of low-budget TV movies. In some cases, a homegrown effort has come along and fixed a botched Hollywood attempt at CGI wizardry. Somehow, in an age of awful people paying for nude deepfakes of anyone they choose, and the possibility of oft-promised but still not materialised political shenanigans, the current ethical flashpoint is whether or not to bring James Dean back from the dead.

Despite this, the mashup of politics and technology continues to simmer away in the background. At this point, it is extremely unlikely you’ll see some sort of huge world event (or even several small but significant ones) impacted by fake clips of world leaders talking crazy; they’d be debunked almost instantly. That ship has sailed. That deepfakes came to prominence primarily via pornography subreddits and people sitting at home rather suggests they got the drop on anyone at a nation-state level.

When it comes to deepfakes, I’ve personally been of the “It’s bad, but in social engineering terms, it’s a lot of work for little gain” persuasion. I certainly don’t subscribe to the sky-is-about-to-cave-in model. The worst areas of deepfakery I tend to see are where it’s used as a basis to push Bitcoin scams. But that doesn’t mean there isn’t potential for worse.

LinkedIn, deepfakes, and malign influence campaigns

With this in mind, I was fascinated to see “The role of deepfakes in malign influence campaigns” published by StratCom in November, which primarily focused on the more reserved but potentially devastating form of deepfake shenanigans. It’s not fake Trump, and it isn’t pretend Boris Johnson declaring that aliens are invading; it’s background-noise-level interference designed to work its silent way up a chain of command.

I was particularly taken by the comment that “doom and gloom” assessments from experts had made way for a more moderate and skeptical approach. In other words, from the moment marketers, YouTube VFX fans, and others tried to pry deepfake tech away from pornography pushers, it became somewhat untenable to make big, splashy fakes with sinister intentions. Instead, the battle raged behind the scenes.

And that’s where Katie Jones stepped up to the plate.

Who is Katie Jones?

In the grand scheme of things, nobody. Just another fake account in a never-ending wave of fake accounts stretching through years of Facebook clones and Myspace troll “baking sessions” where hundreds would be rolled out the door on the fly. The key difference is that Katie’s LinkedIn profile picture was a computer-generated work of fiction.

The people “Katie” had connected to were a little inconsistent, but they did include an awful lot of people working in and around government, policy, academia, and…uh…a fridge freezer company. Not a bad Rolodex for international espionage.

Nobody admitted talking to Katie, though this raises the question of whether anyone who fell for the ruse would hold up their hand after the event.

While we can speculate on why the profile was created—social engineering campaign, test run for nation-state spying (quickly abandoned once discovered, similar to many malware scams), or even just some sort of practical joke—what really amuses me is the possibility that someone just randomly selected a face from a site like this and had no idea of the chaos that would follow.

Interview with a deepfake sleuth

Either way, here comes Munira Mustaffa, the counter-intelligence analyst who first discovered the LinkedIn deepfake sensation known as Katie. Mustaffa took some time to explain to me how things played out:

A contact of mine, a well-known British expert on Russian defence and military matters, was immediately suspicious about an attempted LinkedIn connection. He scanned her profile and reverse searched her profile photo, which turned up zero results. He asked me to look into her, and I, too, found nothing.

This is unusual for someone claiming to be a Russia & Eurasia Fellow at an organisation like the Center for Strategic and International Studies (CSIS), because you would expect someone in her role to have at least some publication history. The security world is a small one for us, especially if you’re a policy wonk working on Russia matters. We both already knew Katie Jones did not exist, and this suspicion was confirmed when he checked with CSIS.

I kept coming back to the photo. How could you have a shot like that and yet no digital footprint whatsoever? If it had been stolen from an online resource, it would be almost impossible for it to leave no trace. At this point, I started to notice the abnormalities; you must understand my thought process as someone who does photography as a hobby and uses Photoshop a lot.

For one thing, there was a Gaussian blur on her earlobe. Initially, I thought she’d Photoshopped her ear, but that didn’t check out. Why would someone Photoshop their earlobe?

Once I started to notice the anomalies, it was like everything suddenly started to click into place right before my eyes. I started to notice the halo around her hair strands. How her eyes were not aligned. The odd striations and blurring. Then there were casts and artefacts in the background. To casual observers, they would look like bokeh. But if you have some experience doing photography, you would know instantly they were not bokeh.

They looked pulled, like someone had played with the Liquify tool in Photoshop but dialed the brush up to the extreme. I immediately realised that what I was looking at was not a Photoshopped photo of a woman. In fact, it was different elements digitally composited and superimposed into an almost seamless blend of one person.

I went on www.thispersondoesnotexist.com and started to generate my own deepfakes. After examining half a dozen or so, I started picking out patterns and anomalies, and I went back to “Katie” to study her photo further. The anomalies were all present.

Does it really matter?

In some ways, possibly not. The only real benefit to using a deepfake profile pic is that suspicious people won’t get a result from a Google reverse image search, TinEye, or any other similar service. But anyone doing that kind of checking on LinkedIn connections or other points of contact probably won’t be spilling the beans on anything they shouldn’t be anyway.
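If you’re wondering why those reverse searches come up empty, the rough mechanics are worth a moment. Services of this kind index compact fingerprints (“perceptual hashes”) of images they’ve crawled, then compare a query image’s fingerprint against that index. A freshly generated face has never been published anywhere, so there’s simply nothing to match. Below is a minimal sketch of the idea in Python using the Pillow and ImageHash libraries; the filenames are hypothetical placeholders, and real services use far more sophisticated matching than a single hash comparison.

```python
# Minimal illustration of perceptual-hash matching, the rough idea behind
# reverse image search. Filenames are hypothetical placeholders.
# Requires: pip install Pillow ImageHash
from PIL import Image
import imagehash

def phash_distance(path_a: str, path_b: str) -> int:
    """Return the Hamming distance between two images' perceptual hashes.
    Small distances (roughly <= 8 for the default 64-bit pHash) suggest
    near-duplicates; large distances suggest unrelated images."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # ImageHash overloads '-' as Hamming distance

if __name__ == "__main__":
    # A stolen stock photo sits a hash or two away from its indexed
    # original; a GAN-generated face matches nothing, because no service
    # has ever indexed it.
    print(phash_distance("suspect_profile_pic.jpg", "indexed_stock_photo.jpg"))
```

That absence of any match is precisely what makes generated faces attractive for fake profiles: there’s no original to be caught out by.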

For everyone else, the risk is there, and it’s just enough to make it all convincing. It’s always been pretty easy to spot someone using stock photography model shots for bogus profile pics. The threat from deepfaked snapshots comes from their sheer, complete and utter ordinariness. Using all that processing power and technology to carve out what is essentially an unremarkable-looking human almost sounds revolutionary in its mundaneness.

But ask any experienced social engineer, and they’ll tell you mundane sells. We believe the reality we’re presented with. You’re more likely to tailgate your way into a building dressed as an engineer, or carrying three boxes and a coffee cup, than dressed as a clown or wearing an astonishingly overt spycoat and novelty glasses.

Spotting a fake

Once you spend a little time looking at the fake people generated on sites such as this, you’ll notice multiple telltale signs that an image has been digitally constructed. We go back to Mustaffa:

Look for signs of tampering on the photo by starting with the background. If it appears to be somewhat neutral in appearance, then it’s time to look for odd noises/disturbances like streaky hair or earlobes.

I decided to fire up a site where you guess which of two faces is real and which is fake. In my first batch of shots, you’ll notice the noise/disturbance so common with AI-generated headshots; it resembles the kind of liquid-looking smear effect you’d get on old photographs that hadn’t been developed properly. Check out the neck in the picture below:

On a similar note, look at the warping next to the computer-generated man’s hairline:

These effects also appear in backgrounds quite regularly. Look to the right of her ear:

Backgrounds are definitely a struggle for these images. Look at the bizarre furry effect running down the edge of this tree:

Sometimes the tech just can’t handle what it’s trying to do properly, and you end up with…whatever that’s supposed to be…on the right:

Also of note are the sharply-defined lines on faces around the eyes and cheeks. Not always a giveaway, but helpful to observe alongside other errors.

Remember in ye olden days when you’d crank certain sliders in image editing tools, such as sharpness, all the way up and end up with effects similar to the one on this ear?

Small children tend to cause problems, and so do things involving folds of skin, especially where trying to make a fake person look a certain age is concerned. Another telltale sign you’re dealing with a fake is small sets of incredibly straight vertical lines on or around the cheek or neck areas. Meanwhile, here are some entirely unconvincing baby folds:

There are edge cases, but in my most recent non-scientific test on Which Face Is Real, I correctly guessed who was real no fewer than 50 times in a row before I got bored and gave up. I once won 50 games of Tekken in a row at a university bar, and let me tell you, that was an awful lot more difficult. Either I’m some sort of unstoppable deepfake-detecting marvel, or it really is quite easy to spot them with a bit of practice.
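As it happens, some of these tells can be hunted programmatically too. The smears, striations, and warped backgrounds described above leave statistical traces, and one approach explored in the research literature is to examine an image’s frequency spectrum, where the upsampling used by GANs tends to deposit unusual high-frequency energy. The sketch below is a toy illustration of that idea rather than a production detector: the band cutoff is an arbitrary placeholder, the filename is hypothetical, and anything real-world would need calibrating against labelled samples.

```python
# Toy frequency-domain check for GAN-style artifacts. Illustrative only;
# a real detector needs calibration on labelled real/fake samples.
# Requires: pip install Pillow numpy
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str) -> float:
    """Fraction of total spectral energy in the outermost frequency band.
    GAN upsampling often leaves unusual energy in this region."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)  # distance from spectrum centre
    outer = radius > 0.75 * radius.max()  # arbitrary placeholder cutoff
    return float(spectrum[outer].sum() / spectrum.sum())

if __name__ == "__main__":
    # Hypothetical filename; compare scores across batches of known-real
    # and known-fake images rather than trusting any single number.
    print(f"high-frequency energy ratio: {high_freq_energy_ratio('suspect.jpg'):.4f}")
```

Treat the output as a comparative signal across many samples, not a verdict on any one photo; plenty of perfectly real photos have odd spectra too.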

Weeding out the fakers

Deepfakes, then, are definitely here to stay. I suspect they’ll continue to cause the most trouble in their familiar stomping grounds: fake porn clips of celebrities, and paid clips of non-celebrities that can also be used to threaten or blackmail victims. Occasionally, we’ll see another weightless robot turning on its human captors, and some people will fall for it.

Elsewhere, in connected networking profile land, we’ll occasionally come across bogus profiles, and then it’s down to us to make use of all that OPSEC/threat intel knowledge we’ve built up to scrutinize the kind of roles we’d expect to be targeted: government, policy, law enforcement, and the like.

We can’t get rid of them, and something else will be along soon enough to steal what thunder remains, but we absolutely shouldn’t fear them. Instead, to lessen their potential impact, we need to train ourselves to tell the artificially ordinary from the real.

Thanks to Munira for her additional commentary.
