Chinese President Xi Jinping arrives at San Francisco International Airport for a summit with U.S. President Joe Biden and to attend the Asia-Pacific Economic Cooperation (APEC) Economic Leaders' Meeting, in San Francisco on 14 November 2023. Xi was received by California Governor Gavin Newsom, U.S. Treasury Secretary Janet Yellen, and other U.S. representatives at the airport. (Photo by Xie Huanchi, Xinhua, Getty Images)

Emerging Technology Highlights New Converged Risks and Asymmetric Threats

The ubiquity of technology companies in daily life today puts them in the security spotlight, with increasing media attention, political scrutiny, and complex, overlapping threats.

In November 2023, after visiting with U.S. President Joe Biden, Chinese President Xi Jinping met with titans of the U.S. business community. It was "a hot ticket for CEOs of America's most prominent companies," CNBC reported. It was also historic, given that six years had passed since Xi's last visit to the United States and that diplomatic relations remained strained. The event drew the kind of media attention typically reserved for a gathering of movie stars and pop singers.

Consider the security complexities beneath the surface of these news stories and the challenges of assembling so many world leaders and billionaires in one place. Much has changed in recent years, especially the way digital and physical security strategies have intertwined. Executive protection and corporate security professionals now need to factor a range of digital concerns into their planning.

Smartphones, Social Media, and AI

While discussing the reception for Xi, a colleague remarked to me, "I wonder who has the task of informing Tim Cook he can't bring his smartphone in."

Apple CEO Cook was present alongside Xi at the dinner reception hosted by the U.S.-China Business Council and the National Committee on U.S.-China Relations last November. At such gatherings, the likelihood that someone is attempting electronic eavesdropping is considerable, yet most of us have only vague notions of what that entails. The event's device policy was not disclosed, but we are already beginning to see restrictions in lower-stakes environments.

Some corporate offices, for instance, are asking guests to secure their devices with tamper-proof tape. The entertainment sector has also embraced "phone-free experience" policies, requiring attendees to store their phones in signal-blocking pouches at concerts and comedy shows, such as the most recent "Netflix is a Joke Presents" featuring John Mulaney.

This emerging trend underscores the implications of device restrictions at major events, an evolving practice likely to appear at more high-profile gatherings and one that could expand as the U.S. election season ramps up. Even political rallies might change attendees' attitudes toward smartphone use.

This issue gains greater urgency due to two crucial factors. First, the proliferation of generative artificial intelligence (AI) simplifies the creation of highly credible forgeries, with campaign event footage likely to serve as prime source material. Second, social media acts as a catalyst, amplifying the dissemination of hoaxes and misinformation. This frequently leads to millions of people encountering falsified content long before fact-checkers and watchdogs even know it exists. While most of us are keenly aware that these types of forgeries could enter the world of politics, we should not be surprised when they appear in other environments. Recently, for example, a high school athletic director in Baltimore, Maryland, was arrested after allegedly creating AI-based forgeries of his boss making racist comments.

These concerns have resided within cybersecurity's purview for some time, yet these asymmetric threats have tangible, physical consequences. The 2024 Homeland Threat Assessment from the U.S. Department of Homeland Security's Office of Intelligence and Analysis underscores generative AI's transformative yet potentially disruptive impact on cybersecurity. For example, it emphasizes that AI's proliferation and accessibility enable malicious actors to amplify disinformation campaigns through sophisticated manipulation of digital platforms. This capability not only undermines public trust and democratic processes but also poses a significant asymmetric threat, potentially inciting aggression or violent behavior that exploits societal vulnerabilities.

The report highlights two primary contexts in which generative AI could function as an asymmetric threat. The first is foreign influence operations, in which AI-driven disinformation campaigns manipulate public opinion and amplify divisive narratives across social media and digital channels. The second is domestic terrorism, in which lone actors or small cells radicalized by extremist ideologies use AI to enhance operational effectiveness, evade detection, and perpetrate acts of violence aimed at destabilizing communities and advancing ideological agendas. These capabilities underscore the evolving challenge for security teams: adapting to and defending against AI-enabled threats that exploit asymmetric advantages in the digital age.

In recent years, the security world has increasingly emphasized protective intelligence. For high-profile events involving high-net-worth individuals, CEOs, and government officials, this means delving into attendees' open-source backgrounds, understanding the socio-political and travel landscape, and keeping tabs on any movements or chatter that might indicate a security threat or potential risk. This procedure demands a comprehensive understanding of potential threat origins, ranging from identifiable persons of interest with malicious motives to subtle clues on fringe platforms hinting at potential disruptions.

However, with the advent of social media and AI, discerning real threats from benign or flippant comments online is challenging. It will only become more complex as technology improves and malicious actors continue to adopt these tools to target events and individuals.

Collaboration is Crucial

Coordinating efforts between private security firms and law enforcement during large-scale events poses significant challenges. These include managing extensive motorcades and protecting VIPs with multiple close protection agents, both of which increase operational complexity. Aligning diverse terminology and industry-specific language among stakeholders may require dedicated collaboration sessions to establish a unified approach. Comprehensive strategies covering entry and exit plans, evacuation procedures, and the identification of individuals (or groups) of interest must be meticulously planned, communicated in advance, and strictly followed. Additionally, potential disruptions to communications could degrade coordination efforts, leaving groups vulnerable to threats.

Managing and sharing risk information among all parties involved is essential for safeguarding assets, data, and, above all, people. Integrating digital technologies further complicates these challenges, adding new dimensions to security considerations.

Facilitating Business

Security practitioners have a tough job—they must be right 100 percent of the time, while threat actors only need to be right once. The rise of social media and the proliferation of digital eavesdropping tools have made it easier to see risks in nearly everything. The picture seems much murkier when these factors collide with the complexity of managing an event headlined by the world's foremost business and government leaders.

And yet, the technology that raises security concerns among executives also provides them with powerful tools to bolster global safety. Our world is currently undergoing a significant transformation. Leveraging digital tools enables more accurate monitoring, forecasting, and mitigation of risks during major events. Additionally, these advancements bring public benefits, such as reducing the need for road closures and security cordons.

The primary benefit lies in supporting these major events: simplifying connections for decision-makers, fostering leadership, and enabling cohesive communication among leaders and with the wider world. Such formal gatherings demand a flexible, robust security approach that adapts to each attendee's unique needs and profile. Synchronization becomes incredibly challenging when balancing the significant public visibility of these events with the stringent confidentiality required for their successful execution. For public and private organizations preparing for their events, the key lies in striking an effective balance between human insight and digital intelligence, ensuring responsible use and dissemination of data while harnessing its undeniable benefits for the betterment of society.


Chuck Randolph is the chief security officer at Ontic, where he also leads the Ontic Center for Connected Intelligence and hosts the Ontic Protective Intelligence podcast. With more than 30 years of experience, Randolph previously held critical roles as Microsoft's senior director for global protective operations and strategic intelligence and AT-RISK International's director of operations. His leadership spanned international operational management, threat intelligence, and risk reporting. Apart from his corporate career, Randolph also served as a Lieutenant Colonel in the U.S. Army for 30 years, contributing to operations, information operations, and intelligence. He co-founded the Corporate Executive Protection Leadership Council (CEPLC) and the International Protective Security Board (IPSB). Additionally, he was chair emeritus of the Pan-Asian Regional Council (PARC) for the U.S. Department of State's Overseas Security Advisory Council (OSAC) and is a Cyber Security Sector Committee member. Furthermore, he is the chairman of the Critical Infrastructure Research Forum at Sam Houston State University.

