Black Hat Spam: Not Black Magic, but It May Be the Worst Spam Ever

Ohhh, man. Just when you thought it was safe to go back into the inbox. We’ve known for a while now that spam is getting more targeted, more devious, and much more dangerous. The attacks are coming fast and furious, too, and even the scorched-earth campaigns are getting past the anti-spam filters. And while most of us deal with it the only way possible – one spam attack at a time – a new threat is looming, and it might just be the one that nips you in the wallet.

Known as ‘influence manipulation’ and dubbed ‘black hat data science’ by data scientist Joseph Turian, black hat spam, according to a CNNMoney article entitled “Can evil data scientists fool us all with the world’s best spam?”, is “a new generation of spam that does away with brute force email barrages in favor of fake online personas so real that people — and, more importantly, email and web-service spam filters — can’t tell they’re fake.” And the implications are far-reaching. “Done right, these fake identities could influence everything from app downloads to e-commerce to elections.”

At the O’Reilly Strata Conference in late February, Turian gave a presentation in which he stated, “It’s a pretty serious issue and it’s also pretty hard to catch.” Black hat data science, the article points out, is really just white hat data science turned on its head and put to evil purposes. “So, whereas white-hat data scientists try to uncover unnatural networks of links created to game Google’s PageRank algorithm, Turian explained, black hats will try to build artificial networks so good they look real.” And it’s got very real legs in the future of spamming. “If someone wants to send lots and lots of undetectable spam, it’s just a matter of analyzing enough language to create messages that look less like a machine wrote them and more like a stupid human wrote them — because most spam filters try not to penalize users who just don’t write well.”
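The article stops at the description, but a toy sketch makes the weakness concrete. Below is a minimal, purely illustrative bag-of-words spam scorer in Python (no real filter works from a hard-coded word list like this, and every token and weight here is invented). It shows the asymmetry Turian describes: classic spam trips the known-bad tokens, while sloppy, human-sounding text carries no penalized words at all.

```python
import re

# Hypothetical per-token spam weights; a real filter would learn
# millions of these from labeled mail, not hard-code six of them.
SPAM_WEIGHTS = {
    "viagra": 4.0,
    "free": 1.5,
    "winner": 2.5,
    "click": 1.0,
    "pills": 3.0,
    "$$$": 3.5,
}

THRESHOLD = 3.0  # flag anything scoring at or above this as spam


def spam_score(message: str) -> float:
    """Sum the weights of every known spammy token in the message."""
    tokens = re.findall(r"[a-z$]+", message.lower())
    return sum(SPAM_WEIGHTS.get(tok, 0.0) for tok in tokens)


classic = "WINNER!! Click here for FREE pills $$$"
humanlike = "hey, saw ur profile, ur cute lol... wanna chat sometime?"

for msg in (classic, humanlike):
    score = spam_score(msg)
    verdict = "SPAM" if score >= THRESHOLD else "ok"
    print(f"{verdict:<4} (score {score:4.1f}): {msg}")
```

Run it and the shouty pitch scores 11.5 while the badly written come-on scores 0.0, which is exactly the gap a black hat generator aims for: produce text that statistical filters read as an ordinary, careless human.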

This post over at Google illustrates some of the frustrations users face at the hands of black hat spam. And the CNNMoney article points out that there are companies enlisting similar technologies, which Turian calls “gray hat”: certain well-known companies that play fast and loose with user data. It all comes down to influence manipulation, and companies don’t much care who or what the source of a tweet is, as long as it’s positive and the company’s name is mentioned frequently.

The idea of fake personas that can convincingly charm people into believing they’re real is a trend that is only going to get worse. According to Lutz Finger, co-founder of FishEye Analytics, a media monitoring and consultation firm, “7 percent of people’s Twitter followers are actually spambots; 30 percent of social media users are deceived by spambots and chatbots; and 20 percent of social media users accept friend requests from unknown people, 51 percent of which are not human.” Indeed, a pointed article at O’Reilly Strata shows how we are surrounded by bots.

Discerning and validating identities may become a big consulting business in the near future. Think of the trend that took hold with online dating: identity reports became commonplace, as did Googling people. The only problem now is that Googling may convince you a fake person is actually real.

Influencing algorithms is, in effect, shaping virtual reality. O’Reilly Strata points out that “every second, computer programs create about 300 new websites filled with content just to influence Google’s search algorithm.” O’Reilly Strata also notes that those spam bots are getting into our email systems with promises of male enhancement and too-good-to-be-true penny stock opportunities. Of course, they point out, “these tactics are highly inefficient: for example, to find one potential buyer, the spam bot has to send 12.5 million emails.” That’s not really a comforting statistic in a world where we now talk about computing in terms of billions and trillions. After all, it’s just data.
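Taking the article’s figure at face value, the economics are easy to check with back-of-the-envelope math (the one-billion-messages-a-day volume below is an assumed round number for illustration, not from the article):

```python
# Back-of-the-envelope check on the figure quoted above.
emails_per_buyer = 12_500_000

conversion_rate = 1 / emails_per_buyer
print(f"Conversion rate: {conversion_rate:.8f} ({conversion_rate:.6%})")
# -> Conversion rate: 0.00000008 (0.000008%)

# Volume is what makes it pay. The daily figure below is an assumed
# round number for illustration, not a number from the article.
daily_volume = 1_000_000_000
print(f"Buyers per day at 1B emails/day: {daily_volume * conversion_rate:.0f}")
# -> Buyers per day at 1B emails/day: 80
```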

So what’s the takeaway? It’s not good. “Next-generation bots will be better at gaining trust…and they’ll act more real by mixing improved chatbot technologies and analytics to figure out how people speak and what to say in what circumstances. Once they have your trust, these bots can make introductions to more bots and people will be more likely to accept those requests, too.”

Our heads hurt already.
