The Infocalypse is Coming

It’s been two years since our social media lives were forever changed. No matter what side of the political divide you were on in 2015-2016, if you spent time online, there’s a good chance you fell for fake news.

It’s scary to look back and realize how unprepared we were for this new age of news and media. Many of us fell readily for clickbait headlines, unaware of the influence social media algorithms have on our daily lives and worldview. But now that we’re on the other side of that paradigm shift, what could possibly happen next?

Unfortunately, a lot can and will still happen. We could very well be on the verge of a new age of “reality apathy”: disbelieving everything simply because we can no longer tell fact from fiction. True information will no longer be easily separated from falsified facts, and even events that really happened can be dismissed as “fake” simply because the technology to fabricate them exists.

So how can we prepare for this inevitable future? And how can we be sure that this alarmist perspective isn’t fake in and of itself? BuzzFeed credits Aviv Ovadya with predicting the devolution into fake news before the 2016 election campaign.

In a detailed piece, BuzzFeed laid out his latest predictions: “That future [of the next two decades], according to Ovadya, will arrive with a slew of slick, easy-to-use, and eventually seamless technological tools for manipulating perception and falsifying reality, for which terms have already been coined — ‘reality apathy,’ ‘automated laser phishing,’ and ‘human puppets.’”

Ovadya’s future is indeed bleak. Let’s look at how this transition could have an effect on politics, war, and life itself.

Political Disruption

We’ve already seen the effect that fake media can have on political movements. The Trump campaign was bolstered by “fake news” spread through social media. However, as Ovadya notes, there will be many more opportunities for future campaigns to be disrupted.

For one, Ovadya notes that politicians may soon have to run against “fake” politicians online, and that political grassroots movements will be created, run, and managed by AIs indistinguishable from real people. He calls this scenario “polity simulation.”

In essence, social media accounts could be created for politicians who don’t actually exist. AIs could run these accounts, generate political content, solicit donations, and even be voted into positions of power, and voters would be unable to distinguish these fake accounts from real politicians.

In many ways, we saw an early version of this during the 2016 election, when political events in the US were created by people living and operating in Russia. Even after the operation was exposed, many social media companies were slow to admit their role, and even now Facebook has barely addressed it.

Although this future is terrifying, it is true that social media content generates more voter engagement than almost any other form of political advertising today. The problem is that social media can easily be compromised, manipulated, or faked. So how can constituents keep track of real candidates and real grassroots movements?

On the flip side, Ovadya notes that politicians are already struggling with AI-generated comments, phone calls, and letters. Instead of hearing directly from their constituents, they’re hearing from computer-generated robocalls and commenters. The FCC ran into this during its net neutrality proceedings: its online comment system was flooded with bot-generated submissions, making it difficult to tell genuine commenters from fake ones. As we continue down this technological road, the problem will only get worse.
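One reason analysts eventually flagged the FCC flood is that many of the fake comments were near-verbatim copies of a shared template. Below is a minimal Python sketch of that idea, flagging suspiciously similar comment pairs; the sample comments and the threshold are invented for illustration, not drawn from the actual FCC docket.

```python
from difflib import SequenceMatcher

# Toy corpus: two comments generated from the same template, one organic.
comments = [
    "I demand the FCC repeal Title II and restore internet freedom.",
    "I demand the FCC rescind Title II and restore internet freedom.",
    "Please keep net neutrality; small sites like mine depend on it.",
]

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag any pair whose text is suspiciously close to identical.
THRESHOLD = 0.9
for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        score = similarity(comments[i], comments[j])
        if score >= THRESHOLD:
            print(f"Possible templated pair ({score:.2f}): #{i} and #{j}")
```

Real bot detection is far more sophisticated, but even this crude check surfaces templated submissions that no two humans would plausibly write independently.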

Media Disruption

Although it’s scary to think of fake politicians running for office, it might be even more frightening to look behind the metaphorical media curtain.

Video editing software is becoming horrifyingly good at analyzing faces and superimposing voices onto videos. Adobe (creator of Photoshop) is even working on a “Photoshop for video and audio,” in which people, objects, and voices can all be removed from the original content.

In some cases, this next evolution of editing software scares even the people creating it. Ian Goodfellow, working with the team at Google Brain, noted that tech like this could set news consumption back about a hundred years, saying after one event: “It’s been a little bit of a fluke, historically, that we’re able to rely on videos as evidence that something really happened…In this case AI is closing some of the doors that our generation has been used to having open.”

One such system, “Synthesizing Obama,” presented at SIGGRAPH 2017 by researchers at the University of Washington, produces strikingly realistic results. According to its creators: “Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.”
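That quoted pipeline is easier to picture as code. The sketch below models only the audio-to-mouth-shape stage as a small LSTM in PyTorch; the feature dimensions, landmark count, and layer sizes are illustrative assumptions rather than the Washington team’s actual architecture, and the texture-synthesis and compositing stages are omitted entirely.

```python
import torch
import torch.nn as nn

# Illustrative dimensions: 13 audio features (e.g., MFCCs) per audio frame,
# 18 lip landmarks with (x, y) coordinates per video frame. Both are assumptions.
N_FEATURES, N_LANDMARKS = 13, 18

class AudioToMouth(nn.Module):
    """Recurrent net mapping a sequence of audio features to mouth shapes."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_LANDMARKS * 2)  # (x, y) per landmark

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (batch, time, N_FEATURES) -> (batch, time, N_LANDMARKS * 2)
        out, _ = self.rnn(audio)
        return self.head(out)

model = AudioToMouth()
dummy_audio = torch.randn(1, 100, N_FEATURES)  # 100 frames of random "audio"
print(model(dummy_audio).shape)  # torch.Size([1, 100, 36])
```

The unsettling point is how small the core mapping is: given enough footage of one speaker, a model of roughly this shape learns which mouth positions go with which sounds.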

This new technology could enable what Ovadya calls “diplomacy manipulation,” and here the technology may already have gone too far. It doesn’t take much to imagine a malicious actor using it to create a false video of Kim Jong Un declaring war on the US, or some other equally false but extreme event. All it takes is one convincing video and an eager trigger finger, and suddenly we’re in World War III.

However, this technology also creates loopholes for those who don’t want to be held responsible for their behavior. Take, for example, the now-infamous Access Hollywood video of Donald Trump bragging about sexually assaulting women. Although it is highly unlikely that the video is fake, Trump himself has suggested it could have been forged, simply because the technology to manipulate voice and video exists. This video obviously hasn’t harmed his presidency, but it’s very possible that other dangerous people could use a similar excuse to escape responsibility for their actions. If the technology exists, it can be invoked as an excuse for horrifying past behavior.

There is also a rising threat of phishing attacks disguised as messages from your closest friends (what Ovadya calls “laser phishing”), aimed at stealing personal information. Essentially, AIs will be able to scan your social media accounts, work out who is on your friends list and whom you’re most likely to respond to, and use that information to send you emails, disguised as that person, that carry malicious payloads. Ransomware, viruses, and other security threats are a constant concern in our modern technological world; rarely, though, have threats appeared to come from people on your own contact list.
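On the defensive side, one blunt check against mail that merely claims to come from a friend is to inspect the Authentication-Results header that most providers attach after running SPF and DKIM checks. Here is a minimal Python sketch, assuming a raw message saved to disk (the file name is illustrative):

```python
import email
from email import policy

# Parse a raw RFC 5322 message saved to disk (path is illustrative).
with open("suspicious_message.eml", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

sender = msg["From"]
auth = msg.get("Authentication-Results", "")

# A message "from a friend" that fails sender authentication deserves
# suspicion, no matter what the display name says.
if "dkim=pass" in auth or "spf=pass" in auth:
    print(f"{sender} passed basic sender-authentication checks.")
else:
    print(f"Warning: {sender} failed authentication: {auth!r}")
```

Authentication headers won’t catch a compromised friend’s real account, but they do expose the crudest spoofing.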

If we can’t even trust the people closest to us online, how can we trust anything online?

A Future Where Truth Is on the Line

This is where reality apathy comes in. After being bombarded with fake news, fake people, and fake media, our culture will start to question truth altogether. Even when something is true, people can easily argue that it was computer-generated. Instead of weighing information, society will simply assume all of it is fake. That, too, is extremely harmful to our society and culture: blanket disbelief is the very basis of political propaganda.

The first step in combating this is simply making people (especially security experts) aware that this future is coming. Unfortunately, many of these scenarios are difficult to prevent, because most experts can only predict that they will happen eventually, not how, when, or where. As with online security generally, the goal isn’t to stop attacks altogether but to keep up with the attackers; both are games of cat and mouse. Luckily, technology experts are starting to take the threat of maliciously faked media seriously.

However, as experts like Ovadya told BuzzFeed, the best way to face this new future is to arm yourself with the knowledge that these scary possibilities are coming. Although solutions are being explored (such as verification software for detecting doctored images), they could still be years in the making.
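One long-standing heuristic in that vein is error level analysis (ELA): recompress a JPEG and look at where the image diverges from its own recompressed copy, since spliced-in regions often recompress differently from the rest of the frame. A minimal sketch with the Pillow library follows; the file names are illustrative, and ELA is only a rough heuristic, not the verification software Ovadya alludes to.

```python
from PIL import Image, ImageChops

# Error level analysis: re-save a JPEG at a known quality and diff it
# against the original. Locally edited regions tend to stand out.
original = Image.open("photo_to_check.jpg").convert("RGB")
original.save("resaved.jpg", "JPEG", quality=90)
resaved = Image.open("resaved.jpg").convert("RGB")

diff = ImageChops.difference(original, resaved)
extrema = diff.getextrema()  # per-channel (min, max) pixel differences
max_diff = max(channel_max for _, channel_max in extrema)
print(f"Maximum error level: {max_diff} (uniformly low suggests no local edits)")
```

In practice analysts inspect the difference image itself rather than a single number, but even this crude version conveys the idea behind image-verification tools.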

Until those solutions arrive, it is vital for you, as a consumer, to take a moment to be critical of what you’re taking in. Ask questions about the content, check whether there’s corroborating evidence behind a story or a person online, consult verified news sites and journal citations, and don’t allow yourself to become apathetic toward all media.

Ovadya concludes his interview with BuzzFeed by saying: “I’m from the free and open source culture — the goal isn’t to stop technology but ensure we’re in an equilibrium that’s positive for people. So I’m not just shouting ‘this is going to happen,’ but instead saying, ‘consider it seriously, examine the implications.’ The thing I say is, ‘trust that this isn’t not going to happen.’”

Although this might not be a comforting thought, it certainly is a reality check. Our post-truth future doesn’t have to become our reality if we think critically about the media presented to us. Letting the truth be twisted leads somewhere dangerous; if we take the time to think before we act, we could very well find ourselves able to overcome the challenges ahead.