When President Trump announced on social media that he had tested positive for the coronavirus, many people online questioned whether the news was real. To counter this “fake news” skepticism, the White House released a video and photos of the president taken at the hospital. In the past, this would have ended most suspicions. But in the rapidly developing world of “deepfakes,” many took to social media again to point out ways the president’s image, speech and surroundings could have been manipulated.
Neither fake personas nor deepfakes — media in which a real person’s image, speech or surroundings have been altered — are creations of this century. They have been used to influence others, in non-digital forms, since at least biblical times. What has changed is the medium used to spread these forms of disinformation.
Only in this century have we created a globally accessible platform that transmits false information instantly, disseminated by people who may not exist or may not appear as they are. And we are harmed by our inability to distinguish real users from fake or deepfaked ones, and to separate true narratives from false ones, as quickly as those spreading disinformation can act. The communication of agreed-upon facts forms the basis of any well-functioning society. Without a common narrative, divisions will widen.
In “The Social Dilemma,” a Netflix documentary, social-media engineers discuss how the technologies they created are designed to maximize the time we spend online. Relying on artificial intelligence (AI), social-media platforms feed each user content and ads targeted to his or her preferences, reinforcing those preferences. By filtering out posts and opinions that run counter to our presumed or expressed beliefs and likes, these platforms place each of us in a “tech bubble” free of contradiction and conflict, in which our addiction to social-media content grows.
Yet our “social dilemma” is being exploited further. While those behind the social media platforms are working to feed our tech addiction, there now is a virtually uncountable — and unaccountable — number of fake users hijacking these technologies to push their targeted messaging to us. These fake online users have been created for one purpose — to manipulate and influence the opinions of real people online.
Regrettably, they perform quite well.
Recent studies conducted by the company I advise show that fake profiles increasingly make up more than 30 percent of those engaged in online discussions. We also are tracking more active uses of deepfake technologies to manipulate opinion and cast doubt on our perceptions of reality.
Until now, the main purveyors of deepfakes have been pornography websites. As deepfake technologies advance, however, manipulated photos and videos of political candidates, government officials, business leaders and others in the public realm are increasing. Going forward, it is conceivable that the person with whom you are chatting on a live Zoom video conference may be a manipulated image engineered in real time.
If we are to overcome what separates us as a society, we must find a better way to verify the truth of information shared online. Like radio and television, social media is a public forum that requires some measure of public oversight and protection. These platforms can benefit us, and they already inform too much of our daily lives for us to expect people to disconnect. At best, we may reduce our use of social media. We also may demand more of those who operate these platforms.
Otherwise, as people have learned this year across the globe, what “goes viral” can harm us more than we imagine.
Scott Mortman leads U.S. operations for Cyabra Strategy Ltd.