By Ethan Mansour and Bhargava Elavarthi, Lab Undergraduate Affiliates
Deepfakes are videos or photos that have been digitally altered, whether by AI or by humans, to create seemingly realistic imagery. These videos can include alterations in which someone's face has been digitally manipulated to say anything the creator wants, with vocal intonations and facial movements nearly indistinguishable from those of the original person. Deepfakes have made, and will continue to make, disinformation more believable and harder to counter, which makes them extremely dangerous.
As artificial intelligence (AI) improves and becomes more accessible, the threat of deepfakes grows. In 2018, BuzzFeed created a deepfake to highlight the danger of such videos. A study by Cristian Vaccari and Andrew Chadwick showed this video to a random sample of the population to test whether viewers were deceived by it or uncertain of its validity. Roughly 50% of the sample was deceived or uncertain when the clip they were shown omitted the explanation that the video was fake. A similar situation arose in India in 2018 and resulted in mob violence and deaths. A video went viral on WhatsApp showing a child playing cricket in the street being kidnapped by an adult on a motorbike. The video was in fact a fake, edited clip produced by a child-abduction awareness campaign based in Pakistan; the original ended with a message stating that the abduction was staged, but that ending was cut out of the version that spread on WhatsApp. The rumors fueled by the clip ultimately led to violence and the deaths of nine innocent Indian citizens.
Deepfakes also pose risks to democratic processes. A study by Dr. Beata Martin-Rozumiłowicz and Rasťo Kužel highlighted the rise of deepfake videos in which faces and voices are swapped with those of someone other than the purported speaker. These videos frequently go undetected and offer a potential vector for electoral manipulation. Both domestic and foreign actors have promoted the spread of fake news during elections to help particular parties win. For example, roughly 400,000 bots were identified that produced nearly 3.8 million tweets in just the final month of the 2016 U.S. presidential election; these bots were ultimately traced back to the Internet Research Agency (IRA), a Russian troll organization.
Deepfakes have the potential to affect countries and their citizens. Despite these dangers, a number of countries are actively developing software that enables the creation of deepfakes. In 2019, Israeli researchers created a new deepfake technique and released it to the general public. Concerned by increasingly convincing deepfakes, governments are starting to consider how such videos could affect them and their countries. In a published document, the U.S. Congressional Research Service (CRS) mentions that deepfakes have already been used by foreign intelligence operatives in attempts to recruit individuals into their organizations, though the CRS did not specifically cite evidence of this. Some analysts believe that deepfakes could be used to depict U.S. military personnel committing war crimes, which would create new enemies for the United States.
Countering deepfakes is difficult. Research on deepfakes indicates that AI is being used to adapt outputs to new and increasingly dynamic environments. As the AI behind deepfakes improves, it becomes harder to identify fake videos and to prevent them from gaining traction. One countermeasure is to use other AI algorithms to identify fake videos based on often obscure indicators. Countering deepfakes thus pits tailored AIs against one another in an ever more complex and consequential analysis of validity. In the near term this may slow the spread of deepfakes; however, it might also strengthen deepfake algorithms to the point that identifying and countering them becomes increasingly difficult. Last year, Microsoft announced its Video Authenticator tool in a blog post; the tool is not available to the general public, but its detection capabilities can be purchased as a service. Microsoft acknowledged that some deepfakes will evade detection: "As all detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods." If a deepfake slips through detection and nothing is done about it, there will be consequences; but as long as companies remain aware that they must keep developing stronger methods, any given flaw should be exploitable for only a short period.
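To make the "AI versus AI" idea concrete, here is a minimal sketch of what a frame-level detector might look like, assuming PyTorch. The tiny FrameDetector network, the 128x128 input size, and the score_video helper are all illustrative assumptions, not Microsoft's tool or any published detector; real systems use far larger models trained on labeled real and fake face crops and key on subtle artifacts such as blending boundaries or inconsistent lighting.

```python
# Minimal sketch of AI-based deepfake detection: a small binary
# classifier that scores face crops as real (0) or fake (1).
# The architecture and names here are illustrative assumptions,
# not any specific detector from the literature.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    """Tiny CNN that scores a 3x128x128 face crop as real vs. fake."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 16 x 64 x 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 32 x 32 x 32
            nn.AdaptiveAvgPool2d(1),          # -> 32 x 1 x 1
        )
        self.classifier = nn.Linear(32, 1)    # logit: > 0 means "fake"

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

def score_video(frames: torch.Tensor, model: nn.Module) -> float:
    """Average per-frame fake probability over a batch of face crops.

    frames: tensor of shape (num_frames, 3, 128, 128), values in [0, 1].
    """
    model.eval()
    with torch.no_grad():
        logits = model(frames).squeeze(1)
        return torch.sigmoid(logits).mean().item()

if __name__ == "__main__":
    model = FrameDetector()
    dummy_frames = torch.rand(8, 3, 128, 128)  # stand-in for real face crops
    print(f"mean fake probability: {score_video(dummy_frames, model):.3f}")
```

An untrained network like this will of course score near chance; the point is only the structure: per-frame scores aggregated into a video-level verdict, which is exactly the surface along which the arms race with ever-better generators plays out.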
Deepfakes are increasingly discussed as a tool of foreign state interference. On June 13, 2019, the House Intelligence Committee held a hearing on the "National Security Challenges of Artificial Intelligence, Manipulated Media, and Deepfakes." At the hearing, Representative Welch of Vermont observed, "There is the question of foreign interference." Interference via deepfakes is being discussed in several Senate and House committees that are attempting to craft bills to get ahead of the problem. One example is bill S.2559, which would direct the Department of Homeland Security (DHS) to create a Deepfake Task Force charged with developing a plan to identify and counter the spread of deepfakes. A bill passed last year also requires DHS to conduct an annual study of deepfakes.
There is concern that deepfakes will be used to affect political activities and alter voting patterns. As the cost and complexity of creating deepfakes decline, deepfakes may come to be micro-targeted at specific groups. Prolific use of deepfakes would further distort the information environment and make ascertaining accurate information very difficult. There is already preliminary evidence that micro-targeted deepfakes might affect voter sentiments. Dobber, Metoui, and Trilling suggest that, within their relatively small sample (N=278), the observed change in opinion was not significant, but they highlight that the closeness of U.S. elections could amplify the impact of such activities on an election's eventual outcome.
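As a rough illustration of why a sample of that size struggles to detect modest opinion shifts, the sketch below runs a standard statistical power calculation (using statsmodels) for a hypothetical two-group comparison. The group sizes and effect sizes are assumptions for illustration, not figures from the Dobber, Metoui, and Trilling study.

```python
# Illustrative power calculation (not from the cited study): with
# N=278 split into two groups of ~139, small opinion shifts are very
# hard to detect at the conventional alpha = 0.05 threshold.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.1, 0.2, 0.5):  # hypothetical small-to-medium Cohen's d
    power = analysis.solve_power(effect_size=effect_size,
                                 nobs1=139, alpha=0.05)
    print(f"d = {effect_size:.1f} -> power = {power:.2f}")
```

Under these assumptions, a small effect (d = 0.1) would be detected well under half the time, which is why a non-significant result in a sample of this size does not rule out effects large enough to matter in a close election.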
At present, deepfakes are not the largest problem within the broader landscape of disinformation. Disinformation as a whole is rising, and deepfakes are not yet its leading front; however, I would not expect that to remain the case. Whether the future state of deepfakes is cause for concern or for calm depends on how organizations, institutions, and states respond to disinformation and deepfakes. Implementing proper checks on disinformation across social media, and making the technology used to identify and counter deepfakes widely accessible, would make the threat of disinformation far less dire. Without these developments, deepfakes and disinformation will run rampant and leave our society filled with distrust and uncertainty.
This project is funded by the Commonwealth Cyber Initiative under "Exploring the Impact of Human-AI Collaboration on Open Source Intelligence (OSINT) Investigations of Social Media Disinformation."