Deepfakes: Navigating the Information Space in 2023 and Beyond

Judit Gaspar | Spring 2023

The prevalence of deepfakes across the media landscape and social media has sharply increased over the last decade. Individuals, non-state actors, organizations, and governments use this technology in coordinated efforts to influence specific events, policies, or market conditions. What is real? What is manipulated? How do democracies safeguard against misinformation and disinformation campaigns? How do we as individuals navigate this challenging information environment?

These are the questions that the Intelligence Project tackled in its March 22nd seminar, “Deepfakes: Navigating the Information Space in 2023 and Beyond.” Michael Miner, Acting Program Manager for the Intelligence Project, moderated the discussion with a panel of experts. The widespread emergence of deepfakes (images and audio-visual content created for purposes such as expression, play, or experimentation) raises key questions around ethics and policy. Conventional wisdom holds that the dangers posed by deepfakes can be mitigated by better information, security, and verification technology, but critical informatics scholar Britt Paris cautioned that truth and evidence do not fit neatly into technical protocols. Paris warned that deepfakes and cheapfakes (manipulated audio or visual content produced with free software) can spread widely online and be easily accessed by communities that lack important contextual knowledge.

The media increasingly cites the “end of the truth as we know it” because society has relied heavily on audio and visual evidence, particularly in courtrooms. There have been several recorded cases of content being manipulated for geopolitical ends, such as the fabricated video of Ukrainian President Volodymyr Zelensky released in 2022 that falsely urged Ukrainian troops to surrender. Another widespread use of deepfakes and cheapfakes, as Paris explained, is to harass or attack specific social groups, such as women, public figures, or members of the LGBTQ+ community. Once the means to generate this content are online, they are virtually impossible to remove. Solutions to these challenges must address the social as well as the technical aspects.

The use of deepfakes extends beyond politics and poses key ethical and legal questions. Matthew Ferraro, attorney and advisor on defense, national security, cybersecurity, and crisis management, explained that there are growing concerns for businesses, which may suffer reputational harm, market manipulation, or social engineering fraud via deepfakes. Meanwhile, there are permissible uses of deepfakes for parody, satire, or artistic expression, yet few legal guidelines for those who could benefit from these technologies. Legislation addressing deepfakes exists federally and in ten U.S. states, but none of these laws imposes a direct prohibition, and the lines remain unclear. Policy tends to lag behind technology. Both Paris and Ferraro noted that there are many permissible uses of deepfakes, including in film and photography. In Ferraro’s view, deepfakes and synthetic media will not be outlawed, and society and the courts need clarity on permissible and impermissible use.

Beyond combating malicious content, should the U.S. government use deepfakes against adversaries? One school of thought holds that the U.S. should leverage these technologies given that adversaries already employ them, as Russia and China have done in attempts to influence U.S. domestic politics. However, Belfer Senior Fellow Beth Sanner, former Deputy Director of National Intelligence for Mission Integration, cautioned that U.S. government use of deepfakes poses ethical concerns and could undermine long-term institutional trust.

All panelists agreed on the need for comprehensive cooperation, including public and private actors and the national security and policy communities. However, according to Sanner, progress is unlikely as lines of authority are unclear and there is no established chain of command. Sanner identified responsibilities as falling into three categories: detection, attribution, and pre-bunking. 

"Two-way communication between the public and private sectors is important for early detection and efficient pre-bunking."

– Beth Sanner

Government agencies need not shoulder all three phases alone; Sanner pointed out that deepfakes are often first detected by the private sector. While many detection systems for deepfakes already exist, they are often proprietary, so there is limited public information on their rates of success. Attribution, meanwhile, is a strength of the intelligence community (I.C.), but only if the private and public sectors share information. Several U.S. intelligence community entities cooperate with the private sector, but there is neither a dedicated National Security Council component for deepfakes nor an organizing principle for managing public-private cooperation to combat this threat.

Sanner emphasized that “a combined effort has to be the key focus” and recommended that the I.C. and the government establish channels, clear lines of authority, and clear legislation that can yield shared responsibility. Combining public and secret collection streams requires specific workforce skills and training that Sanner does not believe the intelligence community currently has; a discipline of “denial and deception” once existed but is no longer a priority. Two-way communication between the public and private sectors, she stressed, is important for early detection and efficient pre-bunking. The power of pre-bunking was evident in the revelation of Russian intentions before the invasion of Ukraine; Sanner called for similar pre-emptive action in the realm of deepfakes.

Looking beyond the United States, Ferraro pointed to several Asian countries that have been aggressively tackling this threat. China, Taiwan, South Korea, and Japan have all passed legislation to combat deepfakes, while the European Union is currently debating an Artificial Intelligence Act and has historically been proactive on data privacy. The E.U. and the North Atlantic Treaty Organization have already established disinformation centers, which, according to Paris, give the U.S. a key opportunity to engage with its international allies and gradually expand cooperation to countries beyond these organizations.

How should government, businesses, and individuals navigate the information space in these unprecedented times, absent clear and effective guidelines? Sanner called for clearer legislation on public-private cooperation, on the employment of deepfakes against adversaries, and on legal authorities for addressing deepfakes involving U.S. persons. For businesses, Ferraro advised establishing a solid and well-known online presence to mitigate fraud; in the event of an incident, he urged companies to respond quickly and not fear going to court. For individuals, Paris pointed to a growing number of offline and online university courses on misinformation and disinformation, and she emphasized the importance of skepticism rather than cynicism, of honing our personal ability to detect manipulation, and of supporting broader education on this subject for people of all ages. Deepfake technologies are here to stay, and a collaborative, whole-of-society effort is necessary to understand the phenomenon and to manage it in the future.

For Academic Citation:
Gaspar, Judit. "Deepfakes: Navigating the Information Space in 2023 and Beyond." Belfer Center Newsletter, Belfer Center for Science and International Affairs, Harvard Kennedy School, Spring 2023.
