As artificial intelligence becomes more intertwined with everyday life, its impact on cybersecurity is growing more complex and more psychological. AI is accelerating the threat landscape, enabling threat actors to launch faster, more sophisticated attacks. Among the most troubling developments is the rise of deepfake technology, which has moved from novelty to high-risk cybersecurity threat. These realistic synthetic media files, whether video, audio, or images, are now being used to deceive and manipulate at a scale that challenges traditional security protocols.

In 2024, the World Economic Forum identified misinformation and disinformation, including deepfakes, as one of the top ten global risks in the near term. Separately, Sumsub, a global identity verification platform, reported a more than 400 percent increase in deepfake-related fraud attempts between 2022 and 2023. Most of these attacks targeted financial institutions, signaling a dangerous trend: synthetic media is no longer limited to political manipulation. It is now a corporate and financial weapon, with implications for governance, internal operations, and public trust.

Daniel Tobok, a cybersecurity expert with nearly thirty years of industry experience, believes deepfakes mark a critical shift in the nature of cyber threats. “The cybersecurity battleground is no longer just technical, it’s psychological,” says Tobok. “Deepfakes challenge the very foundation of trust, making individuals question what they see and hear. Once that uncertainty sets in, attackers gain the upper hand.”

Tobok’s expertise is rooted in more than ten thousand cyberattack reviews and thousands of successful recovery missions. His approach, forged through decades of hands-on work, informs his philosophy of Cyber Certainty™, a framework that urges companies to move beyond reactive measures and adopt a forward-looking posture of resilience. Today, that means preparing for threats that target human perception itself.

Deepfakes have the power to convincingly replicate voices, simulate executive appearances on video calls, and fabricate emails from leadership, all with a level of detail that can bypass even vigilant teams. “This isn’t a futuristic scenario, it’s happening now,” Tobok says. “Companies must be digitally diligent and cyber sensitive in how they verify identity and intent.”

These attacks are particularly insidious because they exploit trust. A convincingly altered video or audio clip of a company executive can damage reputations, move markets, or disrupt internal morale before it is exposed as fake. “Cyber attackers are no longer just breaching networks, they’re breaching belief,” says Tobok. “It’s not just about stealing data anymore. It’s about controlling the narrative.”

Beyond internal risk, companies must also be cautious in their external relationships. Deepfakes may be used to impersonate consultants, suppliers, or investors, which can lead to data leaks, contract disputes, or compliance failures. In Tobok’s view, Cyber Certainty™ means expanding verification practices across all interactions, both inside and outside the organization. “It’s no longer enough to hear the right voice or see a familiar face. Trust has to be validated on multiple fronts,” he says.

Organizations must now operate with heightened caution and clarity, because deepfakes are ultimately a test of collective awareness. By becoming digitally diligent and investing in Cyber Certainty™, companies can position themselves to act with confidence in a landscape increasingly infused with deception. “The threat is no longer just someone breaking into your system. It’s someone manipulating what your system shows, says, or believes. In that kind of environment, truth becomes the most valuable asset of all,” Tobok concludes.

Written in partnership with Tom White