
Illustration by iStock; Security Management

Does Your Crisis Preparedness Plan Need to Change for Deepfake Threats?

A carefully edited video clip could significantly damage your organization, and artificial intelligence (AI)-driven applications are making those videos and audio clips even easier to manufacture.

Consider a 2022 quote from Pfizer CEO Albert Bourla: “I think that it’s really the fulfillment of a dream that we’ve had, together with my leadership team when we started in ‘19—the first week we met, in January of ‘19 in California—and set up the goals for the next five years. And one of them was, by 2023, we will reduce the number of people in the world that cannot afford our medicines by 50 percent. I think today, this dream is becoming reality.”

A deceptively edited version of the video made the rounds on video hosting services and social media, though, removing a few select words so that Bourla appeared to say Pfizer aimed to “reduce the number of people in the world by 50 percent.”

The edited video was a fairly simple and unsophisticated bit of disinformation. However, generative AI tools are making it increasingly difficult to detect, respond to, and recover from deepfakes and other false media, especially when the fakes are designed to spark outrage and spread like wildfire online. This means it might be time to update your crisis management plan to account for deepfake threats.

An update doesn’t mean a full-on rewrite, though. An organization’s all-hazards crisis management plans contain the general outline for an immediate response and long-term recovery from a crisis incident. Organizations should be practicing and revisiting these plans regularly so they can rely on muscle memory when a crisis strikes, says Ernest DelBuono, senior strategist at global communications consultancy Leidar. The basis of your response should be the same, but deepfake incidents have enough unique elements to them—from the reputational to the personal—that they necessitate special consideration as part of your plan.

When it comes to the reputational damage of a deepfake, organizations have two acute challenges to confront: the accusation itself from the faked video or audio and the audience’s perception that the organization lacked appropriate security measures to prevent the disinformation attack, DelBuono says.

Preventing a deepfake from being produced is very challenging—if not impossible—but organizations do have the power to respond effectively, promptly, and proactively so the narrative the deepfake promotes is not widely believed. Being able to demonstrate and communicate that you took action, mitigated the situation promptly, and are taking steps to prevent similar issues in the future can go a long way to reassuring the audience—whether that’s shareholders, employees, or the general public—that your organization remains trustworthy and resilient.

Pre-crisis steps for deepfake preparedness should be familiar to crisis preparedness and security professionals worldwide, says Bruce T. Blythe, chairman of R3 Continuum, a workplace behavioral health consultancy.

Establish investigative resources. Before any crisis strikes, organization leaders should establish relationships with law enforcement, know about relevant laws and restrictions, set up a chain of custody plan, and determine who has expertise—internally or externally—to handle specific situations, including deepfakes.

Establish resources to remove deepfakes from online platforms. Security teams should know who to contact at each major social media platform to get deepfakes, manipulated media, and fraudulent content promptly flagged or removed, Blythe says. Organizations should also be aware of what technological steps they can take to identify deepfakes quickly.

Establish resources to address victims’ emotional distress. Organizations often forget this step, Blythe says, but deepfakes can be hugely damaging to victims’ sense of privacy and safety, which can result in post-traumatic stress and drastic changes in personality or well-being.

This emotional distress is present even when a video doesn’t seem as salacious as deepfake pornography (which is increasing rapidly; the 2023 State of Deepfakes report found that 98 percent of all deepfake videos online are pornographic).

Consider a deepfake in which a CEO is depicted dancing drunk on a table at a party, a CSO shoplifting, or a CFO gambling with company funds. These videos cast aspersions on executives’ character, and that can undermine their confidence in how they present themselves to the world, Blythe says. This necessitates emotional support, such as employee assistance programs (EAPs), which often refer crisis cases to therapists, crisis counselors, and other trained professionals who can help victims sift through their feelings about a reputational attack, rebuild personal resilience, and navigate any necessary workplace adjustments, he adds.

That support needs to extend long beyond the initial release of the manipulated media, too. Post-traumatic stress disorder and other serious reactions can set in weeks after the initial trauma, and having corporate-supported clinical intervention at the ready can reassure the victim and their colleagues, Blythe explains.

“Crisis preparedness says we’ve got those resources established in advance,” he says. “We know exactly where we’re going to call. It’s just like a poison control line. If my dog got poisoned, I know the poison control number is right here on my refrigerator, so I know who to call if I’ve got a problem. Same thing here, we don’t want to have to look around and then find whatever comes in front of us, and that may not be the best resource. Now’s the time to find the best resources for these things, not during the crisis.”

Many of those steps in crisis response and communication remain the same as in a typical all-hazards response plan, DelBuono says. Your key players are likely the same, including security and investigations, the communications team, crisis-oriented public and press relations professionals, legal counsel, the C-suite, and more. However, because of the unique nature of deepfakes and their frequently high-profile targets, it’s worth adding a scenario or addendum to the crisis management playbook specific to deepfake disinformation.

For example, DelBuono says, imagine a deepfake of your CEO makes the rounds in which she appears to announce a major change to the strategic business plan, such as ceasing production of a core product, restructuring the company, or starting layoffs.

“What needs to be done very quickly is for the CEO to be seen,” DelBuono says. “In terms of deepfakes, you have to use the same tactics as the people who did it to you.”

This reverses DelBuono’s usual crisis communications guidance. Because the need for speed in crisis response often means the first pieces of information coming out of a company are incomplete or will need to be corrected later, DelBuono recommends in most crises that an alternate serve as spokesperson, allowing the CEO to come forward later with a full, cohesive message and to maintain credibility during the crisis recovery.

But in the case of a deepfake, the person who was imitated should be front and center—if possible, and if it would not cause the individual more emotional distress—to refute the claim. Ideally, this should happen in the presence of other people who can verify the true, real-life event and undercut any potential claims that the response is fake, too, DelBuono says. This could be a video-recorded press conference, interview, town hall, or other forum.

Organizations should have a shortlist of verified and credible journalists and influencers—even those who are critical of the organization at times—that they can call in for a last-minute press conference or interview. Those journalists likely already know about the deepfake and will want to set the record straight in their outlets, according to DelBuono. This gives the CEO an opportunity to not just discredit the fake video but to reiterate the organization’s values and actions.

“It can’t just be ‘Oh, this wasn’t me,’” he says. “It gives you an opportunity to refute those things mentioned in the deepfake, but then it’s also a perfect opportunity to reiterate your organizational values to all those other stakeholders—shareholders, customers, employees, suppliers. So, it’s not just ‘I didn’t do that’ but ‘We would never do that, and here is the reason why we wouldn’t do that—these are our values, as we’ve demonstrated in the past.’”

This emphasis on past and present values reminds stakeholders—especially consumers—how they felt about your organization before the deepfake and what actions the organization is taking to rectify the situation, such as reiterating safety protocols or executive oversight, DelBuono adds.

While every organization has its skeptics and detractors who will refuse to give up on the bias-affirming message in a deepfake—especially those on the far ends of the political spectrum, DelBuono says—this type of straightforward and steady communication from the victim organization can go a long way towards reassuring people that the company has the matter in hand and remains a trustworthy entity.

 

Claire Meyer is managing editor of Security Management. Connect with her on LinkedIn or via email at [email protected].

 
