What you should know about digital foreign interference
Alex Wilner is an assistant professor of international affairs at the Norman Paterson School of International Affairs at Carleton University and a Munk Senior Fellow at the Macdonald-Laurier Institute. Sydney Reis is an MA student at Carleton University’s Norman Paterson School of International Affairs, specializing in intelligence and international affairs.
Forget what you think you know about fake news. The future of digital foreign interference (DFI for short) will be developed and disseminated by machines, algorithms and artificial intelligence (AI). Humans need not apply. Not only will Canadians have trouble distinguishing between fact and fiction, they’re also going to have trouble distinguishing between AI-generated and human-generated information.
DFI is propaganda for the internet age. Its goal is to spread manipulated facts and falsehoods online in order to convince citizens in other countries to change their behaviour in ways that serve foreign rather than national interests.
In Germany, foreign media spread fabricated stories of a German girl’s abuse by migrants, feeding conspiracy theories and social divisions over refugee policies. During France’s recent presidential election, local and foreign far-right groups spread falsified documents to tarnish the reputation of liberal Emmanuel Macron. In the U.K., following the botched assassination of a former Russian spy who had worked with British intelligence, a foreign misinformation campaign sought to discredit and ridicule the criminal investigation. In Taiwan, Chinese disinformation implied that Taipei was unable or unwilling to rescue its own citizens trapped in Japan after a typhoon.
In each case, foreign entities crafted these narratives to influence the beliefs of targeted citizens and weaken trust in democratic institutions.
Contemporary DFI is still largely generated by humans, who create the posts, pictures and videos. Bots play a supporting role, amplifying the message across social media. Advances in AI, however, show that bots are increasingly capable of generating realistic messages of their own.
Some algorithms generate realistic text, such as news articles or political speeches, from bits of seed text. Feed the machine starter information and it fills in the gaps. Text-generation software can be used to embarrass prominent individuals or organizations with fabricated statements.
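For a sense of how low the barrier to entry has become, consider a minimal sketch using the open-source Hugging Face transformers library and the publicly released GPT-2 model (the seed sentence below is an invented example, not drawn from any real statement):

```python
# A minimal sketch of seed-text generation: give the model a starter
# sentence and it fills in the gaps. Uses the open-source Hugging Face
# "transformers" library and the public GPT-2 model; the seed below is
# a hypothetical example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

seed = "In a statement released this morning, the minister admitted that"
results = generator(seed, max_length=60, do_sample=True, num_return_sequences=3)

for r in results:
    print(r["generated_text"])  # three plausible continuations of the seed
```

A few lines of freely available code, in other words, can produce an endless supply of plausible-sounding fabricated statements.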
Audio is in for the same treatment. Bots designed to “talk” like humans have already been used to book appointments over the phone, live; the human on the other end of the line was unaware that they were talking to a machine. Speech-mimicking software will prove useful for creating fake audio material that still sounds authentic.
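Voice cloning is similarly accessible. Here is a sketch using the open-source Coqui TTS library and one of its public voice-cloning models; the fabricated quote and the sample recording of the target’s voice are hypothetical:

```python
# A sketch of voice cloning with the open-source Coqui TTS library.
# The model name is a public Coqui model; "sample_of_target.wav" is a
# hypothetical recording of the voice being mimicked, and the quote is
# invented.
from TTS.api import TTS

tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")
tts.tts_to_file(
    text="I never said that, and I can prove it.",
    speaker_wav="sample_of_target.wav",  # a few seconds of the target's voice
    language="en",
    file_path="cloned_statement.wav",
)
```

A few seconds of recorded speech, scraped from a podcast or a campaign video, is enough raw material to put new words in someone’s mouth.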
Most worrisome are deepfakes, video forgeries that appear to make people say or do things they never did. With enough photos or video frames of a person, facial-recognition algorithms can unpack every minute detail and create a replica of the person’s face. Add fabricated audio and you have a convincing video of a person engaged in a scenario that never took place.
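The first stage of that pipeline, locating a face and reducing it to data a model can work with, takes only a few lines. Here is a sketch using the open-source face_recognition library (the input photo is hypothetical):

```python
# A sketch of the first stage of a face-swap pipeline: detect a face,
# then extract its landmarks and a numeric encoding. Uses the
# open-source "face_recognition" library; "target_photo.jpg" is a
# hypothetical input image.
import face_recognition

image = face_recognition.load_image_file("target_photo.jpg")

# Where the faces are, and the per-feature landmarks (eyes, nose, lips...)
locations = face_recognition.face_locations(image)
landmarks = face_recognition.face_landmarks(image)

# A 128-dimensional encoding of each face: the raw material a
# face-swapping model aligns and replicates frame by frame
encodings = face_recognition.face_encodings(image)

print(len(locations), "face(s) found;", len(encodings), "encoding(s) extracted")
```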
Deepfake apps are already available, free, online. One comes with a friendly disclaimer: “FakeApp was created to give everyday people access to realistic face-swapping technology for creative and exploratory use. If you use this tool, please use it responsibly.”
The process can be used on just about anyone. Most people happily plaster images and videos of themselves online. It’ll become relatively easy to embarrass a colleague, neighbour or ex. For people with more money than time, deepfakes-for-hire will be a popular option.
Politicians should expect to be targeted by deepfakes, which will be used to strategically shift public perception of them. Hyper-realistic synthetic videos might show a future U.S. president suggesting to their Russian counterpart, for instance, that Washington is willing to quietly let Moscow increase its influence over Eastern Europe. Or deepfakes might depict military leaders uttering toxic statements about an ongoing military engagement, tailored just right to inflame local resentment. On the eve of an election, a deepfake might show a top candidate uttering racist or misogynist slurs. The possibilities are nearly endless.
Responding effectively to DFI will require a multifaceted approach. Internet companies and social media firms will have to be held accountable for the information they disseminate and post on their sites. To avoid a future misinformation disaster, Facebook recently launched a technology contest to detect deepfake videos before they go viral. Governments should also explore ways to make firms more transparent about how their algorithms function.
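Detection tools of the kind that contest solicited often start from a simple idea: score individual video frames with a neural network trained to tell real faces from synthetic ones. A minimal sketch in PyTorch follows; the fine-tuned weights file and the suspect frame are hypothetical, and real systems are considerably more sophisticated:

```python
# A minimal sketch of one common deepfake-detection approach: classify
# individual video frames with a convolutional network fine-tuned to
# distinguish real from synthetic faces. Uses PyTorch and torchvision;
# "detector_weights.pt" and "suspect_frame.jpg" are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # two classes: real, fake
model.load_state_dict(torch.load("detector_weights.pt"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

frame = preprocess(Image.open("suspect_frame.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)

print(f"probability the frame is synthetic: {probs[0, 1]:.2f}")
```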
Some states, most notably China, also have a clear advantage when it comes to developing AI and creating more sophisticated forms of DFI. This stems from their capacity to pool data resources for the purposes of AI development. For that reason, Canada should explore the feasibility of developing shared norms and agreements among like-minded countries that would permit the pooling of national data for use in collective AI research.
When it comes to the future of DFI, technology will increasingly make us question the veracity of everything we read, watch and hear online. We still don’t fully know all the ways that AI could impact DFI. One thing is clear: we ain’t seen nothing yet.