Illustration by iStock; Security Management

Combatting the Rising Threat of Deepfakes in Today's Digital Landscape

Once the domain of skilled hackers, deepfake creation is now within reach of virtually anyone, thanks to the growing prevalence and plummeting costs of text-to-photo and text-to-video generative artificial intelligence (AI) platforms. The result has been a rapid proliferation of deepfakes—sophisticated manipulations of audio, video, and images—across the Internet, posing significant risks to individuals, businesses, and society.

Identity verification company Sumsub estimated that the United States experienced a 3,000 percent increase in deepfake fraud between 2022 and 2023. One research report found that the number of online deepfake videos jumped from 14,678 in 2021 to 95,820 in 2023, a 550 percent increase over 2019. The same report found that deepfake fraud in the United States grew from 0.2 percent to 2.6 percent of all identity fraud between 2021 and the first quarter of 2022.

As TechSpective Editor-in-Chief Tony Bradley wrote in “Flawless Deepfake Audio Is a Serious Security Concern,” advances in AI’s capability to automatically generate content that is indistinguishable from human-generated content have significant implications, both good and bad, but the “use of AI and deep learning to generate deepfake audio, however, has particularly insidious potential.”

Wreaking Havoc

The implications of this phenomenon are profound and far-reaching. Deepfakes can wreak havoc on multiple fronts, from perpetrating fraud and spreading disinformation to inflicting reputational harm on organizations and individuals.

A significant area of concern is the role falsified or spurious content plays in committing fraud and spreading disinformation or misinformation. By creating the appearance of someone doing or saying something they did not, deepfakes can inflict reputational harm or financial damage.

Deepfakes can take the form of emails embedded with images and videos, or of voicemail messages, designed to convince employees that they are interacting with their superiors or company executives. Such was the case for a UK-based CEO who transferred $243,000 on the verbal instruction of what he believed was his superior, a voice that turned out to be deepfake audio.

Deepfakes can also be used to blackmail employees for passwords, money, or other sensitive information, or to damage a company’s brand or reputation by spreading false marketing content or impersonating executives. One Hong Kong-based company was tricked out of $25.6 million in early 2024 through an orchestrated scam built around a multi-person video conference in which only the victim was real.

Deepfakes can even be used to interfere with legal proceedings or commit fraud. For example, generative AI programs like DALL-E, Midjourney, or Stable Diffusion can generate images in which an insured item appears damaged, or more damaged than it really is. The app Dude, Your Car!, which lets users add dents and damage to photos of vehicles where none exist, was created as a gag, but it implicitly highlights the possibilities for committing insurance fraud. Even official documents can be easily manipulated: invoices, underwriting appraisals, and signatures can be adjusted or invented wholesale.

Along with misinformation and false content, the negative impacts of deepfakes can be felt most significantly in five other ways: 

Privacy concerns. Deepfake content that reveals personal information or depicts individuals in compromising situations can result in loss of privacy, harassment, or even blackmail.

Bias and discrimination. Deepfakes can perpetuate stereotypes, discrimination, or prejudice against certain groups of people. 

Infringement concerns. When deepfake content is created without proper attribution or permission, it could result in copyright infringement or intellectual property violations. 

Emotional impact and confusion. Deception and emotional manipulation by deepfakes can cause emotional distress and create feelings of mistrust and anxiety, while also casting doubt on the veracity and legitimacy of the target or source.

Legal and security issues. Deepfakes can have legal and security implications, potentially leading to legal disputes, defamation cases, or security threats when used for malicious purposes.

Mitigating the Risks

As the threat landscape evolves, so too do the strategies and technologies designed to combat deepfakes head-on. The best mitigation strategy starts from a position of knowledge: educating customers, employees, partners, and others about deepfakes, how they are created, and how they can be detected.

From there, construct a resilient digital environment capable of discerning and defending against this particularly insidious form of AI-based cyberattack. That environment should include safeguards to detect and deflect deepfakes as quickly as possible, such as blockchain-based content verification to validate the integrity of digital assets and multifactor authentication to establish veracity.
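The core of any such content-verification scheme, blockchain-backed or otherwise, is fingerprinting an asset at capture time and re-checking that fingerprint later. The Python sketch below is a minimal illustration of that step under stated assumptions: the in-memory ledger dict stands in for a real tamper-evident store, and the function names are illustrative, not any particular product's API.

# A minimal sketch, assuming a simple hash-and-ledger design: fingerprint
# a media file when it is captured, record the hash, and re-verify later.
# The in-memory "ledger" dict stands in for a blockchain or other
# tamper-evident store; all names here are illustrative assumptions.
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

ledger: dict[str, str] = {}  # stand-in for an immutable ledger entry

def register(path: str) -> None:
    """Record an asset's fingerprint at capture or ingest time."""
    ledger[Path(path).name] = fingerprint(path)

def verify(path: str) -> bool:
    """True only if the file still matches its registered fingerprint."""
    recorded = ledger.get(Path(path).name)
    return recorded is not None and recorded == fingerprint(path)

Any single-bit change to a registered file, such as a retouched face or an altered frame, produces a different digest, so verification fails even when the manipulation is invisible to the eye.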

Ironically, the same AI technology that gave rise to deepfakes is also the engine driving some of the most powerful and effective tools organizations can use to defeat them. These tools run in the background and automatically determine whether an image, document, audio file, or video is genuine or suspicious. When a potential deepfake is identified, the tool triggers an alert so humans can intervene as appropriate.
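In outline, that workflow is a scoring loop with a human in it. The sketch below shows the shape of such a pipeline, not any vendor's implementation: score_authenticity() is a hypothetical placeholder for a trained detection model, and the threshold value is an assumption for illustration.

# A hedged sketch of the background alert loop described above. The
# detector call is a placeholder; real products plug a trained model
# in here. The threshold and types are illustrative assumptions.
from dataclasses import dataclass

AUTHENTIC_THRESHOLD = 0.8  # illustrative cutoff, tuned in practice

@dataclass
class Alert:
    path: str
    score: float  # lower score = more suspicious

def score_authenticity(path: str) -> float:
    """Placeholder for a trained deepfake-detection model."""
    raise NotImplementedError("plug a real detector in here")

def triage(paths: list[str]) -> list[Alert]:
    """Score each file and flag those below the confidence
    threshold for human review."""
    alerts = []
    for p in paths:
        score = score_authenticity(p)
        if score < AUTHENTIC_THRESHOLD:
            alerts.append(Alert(p, score))
    return alerts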

Further, using deep-scan analysis, AI deepfake detection tools can provide insights into why a digital file was flagged as suspect and can uncover which parts of the file may have been edited or synthetically generated. They can likewise spot manually altered metadata, such as time and location, the existence of identical images online, and other irregularities.
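As a concrete, deliberately simplified example of the metadata check, the snippet below reads an image's EXIF tags with the Pillow library and flags two common irregularities: an editing tool recorded in the Software tag and a missing capture timestamp. The editor list and flag wording are assumptions for illustration; production tools combine many such signals.

# A deliberately simple metadata heuristic using the Pillow library
# (pip install Pillow). Flagging an editor name in the EXIF Software
# tag or a missing timestamp is a toy stand-in for deep-scan analysis.
from PIL import ExifTags, Image

EDITOR_HINTS = ("photoshop", "gimp", "lightroom")  # illustrative list

def metadata_flags(path: str) -> list[str]:
    """Return human-readable reasons an image's metadata looks suspect."""
    exif = Image.open(path).getexif()
    tags = {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}
    flags = []
    software = str(tags.get("Software", "")).lower()
    if any(hint in software for hint in EDITOR_HINTS):
        flags.append(f"editing software recorded: {tags['Software']}")
    if "DateTime" not in tags:
        flags.append("no capture timestamp in metadata")
    return flags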

Ongoing Vigilance

While mitigating the risks posed by deepfakes and establishing a culture of trust are crucial steps, they are just the beginning. The sophistication of deepfakes will continue to evolve, along with the opportunities they create for those with nefarious intentions. It is therefore imperative that organizations remain vigilant and commit resources to ongoing education and proactive cybersecurity defenses to avoid becoming the next victims of deepfake technology.

 

Nicos Vekiarides, CEO and co-founder of Attestiv, has spent the past 20 years in enterprise IT and data security as a CEO and entrepreneur bringing innovative new technologies to market. His previous startup, TwinStrata, an innovative cloud storage company where he pioneered cloud-integrated storage for the enterprise, was acquired by EMC in 2014. Before that, Vekiarides brought to market the industry’s first storage virtualization appliance for StorageApps, a company later acquired by HP.

 
