
The rise of AI deepfakes and how to spot them


A staggering $25 million was stolen last year from the global engineering consultancy Arup in a brazen AI-enabled fraud. Using deepfake video and audio, fraudsters convincingly replicated the voices and appearances of company executives, tricking an employee into authorizing a massive fund transfer.

This incident underscores the escalating threat of AI-powered financial scams and the alarming speed at which technology is being weaponized for high-stakes deception.

The rapid advancement of artificial intelligence, particularly the proliferation of generative models trained on extensive video and photographic datasets, has dramatically simplified the creation of convincing impersonations.

Recent breakthroughs in model efficiency continue to fuel a global technology race, putting ever more capable synthetic media tools within attackers' reach.

As social media platforms become inundated with AI-generated content, the ability to distinguish real from fake is diminishing. Deepfake technology learns and evolves continuously, driven by mass exposure and feedback, making it a moving target for detection.

Businesses and individuals must confront a sobering reality – we are entering an era where seeing is no longer believing.

Increasing sophistication of AI crime

Technology is advancing at an unprecedented pace, making it easier than ever to create realistic deepfakes. What once required sophisticated expertise and expensive equipment can now be achieved with readily available tools at a fraction of the cost. Generative AI plays a pivotal role in this shift, enabling the seamless creation of hyper-realistic media with minimal effort.

Progression of AI-generated imagery in one year: camera image of BDO Partner Rishan Lye (left), AI-generated image from 2023 (centre), and AI-generated image from 2024 (right).

Despite the growing risks, there is no universal standard for combating deepfakes. While some organizations and governments are developing regulations and ethical frameworks, adoption remains inconsistent.

This lack of uniformity creates opportunities for bad actors to exploit the technology for financial fraud, misinformation campaigns, and identity theft without consequence.

Impacts of deepfake crimes

AI-generated synthetic media is reshaping the landscape of cybercrime, enabling unprecedented fraud and threatening individuals and businesses. The consequences range from financial losses and reputational harm to severe psychological distress for victims. 

Deepfakes can destroy reputations by fabricating misleading videos or audio clips, making it appear as if individuals said or did things they never did. This is particularly dangerous in politics, where doctored content can manipulate public opinion, influence elections, and undermine trust in governments.

Once misinformation spreads online, damage control becomes nearly impossible, as false narratives often persist even after being debunked.

By mimicking real people with astonishing accuracy, deepfakes enable criminals to commit identity fraud on an unprecedented scale. They can impersonate executives, government officials, or even family members to manipulate financial transactions, gain access to private accounts, or execute scams like business email compromise attacks.

As deepfake technology advances, traditional security measures, such as verbal authentication over the phone, are becoming increasingly unreliable.

Biometric verification, such as facial recognition and voice authentication, is considered highly secure, but deepfake technology is challenging that assumption. These sophisticated fakes are beginning to defeat well-established security processes, making it increasingly difficult for humans to distinguish real identities from fake ones.

This growing cybersecurity challenge requires businesses to rethink and enhance their fraud detection technology and processes to stay ahead. The millions of dollars stolen in the Arup scam are a wake-up call for all companies to consider solutions, including facial recognition technology and in-the-moment identity checks, that can differentiate between real people and fake personas.

Detection should take into account parameters such as isolated facial features, face spoofing, eye reflections, skin texture, unnatural movements, and whether a person can react appropriately in the moment.

One of the most distressing aspects of deepfake crime is the creation of non-consensual content, such as fake explicit videos or defamatory material. Victims suffer immense psychological trauma, including anxiety, social stigma, and professional fallout.

Beyond personal harm, these crimes can also seriously threaten brands and businesses. Companies can face reputational damage if AI-generated forgeries falsely depict them engaging in unethical practices, such as environmental destruction or labour exploitation, leading to public backlash and financial losses.

Conversely, companies guilty of actual misconduct may manipulate the narrative by dismissing authentic evidence as fake, shielding themselves from accountability. The lack of legal frameworks in many jurisdictions further complicates the pursuit of justice, leaving victims to battle the consequences with limited recourse. 

Red flags to detect deepfake threats

While AI-generated media is increasingly sophisticated and can appear highly realistic, certain inconsistencies often expose its artificial nature.

Visual indicators

AI-generated faces may exhibit telltale signs of manipulation. Look for unnatural facial expressions, where emotions seem forced or fail to align with the speaker’s tone. Subtle distortions, such as inconsistencies in skin texture, flickering, or blurring around the edges of the face, can also indicate tampering.

Additionally, slight delays or awkward transitions in lip movements may suggest that the footage has been artificially generated.

These indicators can still be spotted today. However, AI models are perpetually improving at mimicking facial micro-expressions, refining texture details, and eliminating artifacts, making fakes harder to detect in the future.
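To make this concrete, here is a minimal sketch of one classical image-forensics heuristic, error level analysis (ELA): recompressing an image at a known JPEG quality and amplifying the difference can highlight regions that were edited or generated separately. It is an illustrative heuristic rather than a deepfake detector, and the input file name is a hypothetical placeholder.

```python
# Error level analysis (ELA): a classical image-forensics heuristic.
# Regions edited or generated separately often recompress differently
# from the rest of the image, showing up as bright areas in the diff.
# Illustrative only -- not a deepfake detector. "suspect.jpg" is a
# hypothetical input file.
import io

from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Recompress the image at a known JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Amplify the pixel-level differences so edited regions stand out.
    diff = ImageChops.difference(original, recompressed)
    max_diff = max(channel[1] for channel in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```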

Audio indicators

Voice replication technologies have improved significantly, but imperfections remain. Be cautious of robotic or monotone speech, where intonation and emphasis feel off. AI-generated voices may also struggle with natural pauses, resulting in lagged responses that don’t align with real-time conversations.

Furthermore, an absence of background noise—or an unnatural level of consistency in ambient sound—can be a warning sign, as authentic recordings typically capture environmental variations.
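As a concrete illustration of the ambient-sound cue, the sketch below estimates how much a recording's background noise floor varies across short windows; an unnaturally flat floor can be a weak warning sign. The file name, the 16-bit mono WAV assumption, and the threshold are all illustrative, and real-world detection would need far more robust features.

```python
# Flag recordings whose ambient energy is unnaturally consistent.
# Authentic recordings typically show natural variation in background
# noise; a near-flat energy floor can be a weak deepfake warning sign.
# Assumes 16-bit mono WAV. "call.wav" and the 0.05 threshold are
# illustrative placeholders.
import wave

import numpy as np

def ambient_variation(path: str, window_ms: int = 50) -> float:
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    # Slice the signal into short windows and compute RMS energy per window.
    window = max(1, rate * window_ms // 1000)
    n = len(samples) // window
    frames = samples[: n * window].astype(np.float64).reshape(n, window)
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-9

    # Take the quietest windows as a proxy for the background noise floor,
    # then report how much that floor varies relative to its mean.
    floor = np.sort(rms)[: max(1, n // 5)]
    return float(floor.std() / floor.mean())

if __name__ == "__main__":
    if ambient_variation("call.wav") < 0.05:  # illustrative threshold
        print("Warning: unusually flat background noise -- verify the caller.")
```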

Contextual cues

Beyond visual and audio indicators, discrepancies in communication style can be a strong giveaway. A message or video that contains unusual phrasing, grammatical errors, or a shift in tone that doesn’t match the speaker’s typical style should raise suspicion. Pay attention to the origin of the media—was it shared from an unfamiliar source or an unverified platform?

Businesses and operational teams should consider implementing validation processes such as unique behavioural tests, like pre-agreed codewords shared only with authorized personnel, or real-time challenges such as time-sensitive, context-based verification questions.
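As a sketch of what such a time-sensitive challenge might look like, the snippet below derives a short-lived verification code from a pre-shared secret using HMAC, in the spirit of TOTP (RFC 6238). The secret and the 60-second window are illustrative assumptions; a production system should rely on a vetted authentication library.

```python
# A time-sensitive verification code derived from a pre-shared secret,
# in the spirit of TOTP (RFC 6238). Both parties compute the same code
# for the current time window, so a deepfaked caller who lacks the
# secret cannot answer the challenge. The secret and the 60-second
# window are illustrative; use a vetted auth library in production.
import hashlib
import hmac
import struct
import time

def challenge_code(secret: bytes, window_seconds: int = 60) -> str:
    counter = int(time.time()) // window_seconds
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()

    # Dynamic truncation, as in HOTP: pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

if __name__ == "__main__":
    shared_secret = b"rotate-me-regularly"  # hypothetical pre-shared secret
    print("Ask the caller for code:", challenge_code(shared_secret))
```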

Urgency is often used as a manipulation tactic, pressuring the recipient to act quickly before verifying authenticity. If a request deviates from standard protocols or demands immediate action, treat it as a likely scam until it can be verified.

How to protect your business from deepfake attacks

Businesses must remain vigilant against emerging threats that exploit digital misinformation and identity manipulation. While no system is entirely foolproof, proactive and preventive measures can significantly reduce the risk of fraud and reputational damage.

Cyber vigilance and spreading awareness

Organizations must adopt a forward-thinking approach to cybersecurity, treating fraud detection as an ongoing process rather than a reactive measure. Training employees on how deceptive content is generated and used to manipulate public perception helps them recognize potential threats.

Regular cybersecurity and fraud awareness programs ensure that staff members remain informed about evolving attack tactics.

Leveraging AI-powered detection tools

As synthetic media outpaces human perception, sophisticated AI-driven detection tools will likely become the best way to determine the authenticity of content. Businesses should invest in reliable AI solutions that can detect inconsistencies in video and audio, helping to identify fraudulent communications before they cause harm.
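As a rough sketch of how such a tool might be wired into a workflow, the snippet below scores a single video frame with a hypothetical pretrained detector exported to ONNX. The model file, its expected 224x224 RGB input layout, and the meaning of its output score are all assumptions that would depend on the specific detector chosen.

```python
# Scoring a single video frame with a (hypothetical) pretrained deepfake
# classifier exported to ONNX. The model file, its 224x224 RGB NCHW
# input, the single-output layout, and the "higher = more likely fake"
# convention are all assumptions for illustration.
import numpy as np
import onnxruntime as ort
from PIL import Image

def score_frame(frame_path: str, model_path: str = "deepfake_detector.onnx") -> float:
    # Resize and normalize the frame to the assumed NCHW float32 input.
    image = Image.open(frame_path).convert("RGB").resize((224, 224))
    batch = np.asarray(image, dtype=np.float32)[None].transpose(0, 3, 1, 2) / 255.0

    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    (output,) = session.run(None, {input_name: batch})  # assumes one output
    return float(output.ravel()[0])

if __name__ == "__main__":
    if score_frame("frame_0001.png") > 0.5:  # illustrative threshold
        print("Frame flagged for manual review.")
```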

To stay ahead of bad actors, organizations should also implement dynamic verification processes. Frequent updates to authentication methods make it more difficult for attackers to predict security patterns, reducing their chances of success.

Blockchain technology

Blockchain offers a powerful defence against data manipulation. By storing information on a decentralized, tamper-evident ledger, businesses can ensure that records remain authentic and unchanged. This is particularly useful for securing contracts, verifying digital identities, and maintaining the integrity of sensitive communications.
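To illustrate the underlying idea at a small scale, the sketch below chains records together with cryptographic hashes so that altering any past entry invalidates everything after it. It is a minimal tamper-evidence demo on a single machine, not a decentralized blockchain.

```python
# A minimal hash chain illustrating the tamper-evidence idea behind
# blockchain ledgers: each record's hash covers the previous hash, so
# altering any past entry invalidates every entry after it. This is a
# single-machine demo, not a decentralized blockchain.
import hashlib
import json
import time

def add_record(chain: list[dict], data: str) -> None:
    previous = chain[-1]["hash"] if chain else "0" * 64
    record = {"data": data, "timestamp": time.time(), "prev": previous}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify(chain: list[dict]) -> bool:
    previous = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if record["prev"] != previous or record["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        previous = record["hash"]
    return True

if __name__ == "__main__":
    ledger: list[dict] = []
    add_record(ledger, "contract v1 signed")         # illustrative entries
    add_record(ledger, "counterparty identity verified")
    print("intact:", verify(ledger))                 # True
    ledger[0]["data"] = "contract v1 VOIDED"         # tamper with history
    print("after tampering:", verify(ledger))        # False
```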

Address misinformation risks on social media

Social media platforms, particularly TikTok, have become major sources of news consumption in several regions. The largely unaccountable and unregulated flow of information on these platforms creates an ideal environment for bad actors to manufacture misleading narratives.

With the vast amount of user-generated content, it becomes easier to fabricate convincing messages that align with specific agendas. Governments and businesses must monitor such platforms for potential misinformation that could impact their brand or industry, leveraging AI tools and fact-checking organizations to counter false narratives.


How BDO can help

In today’s rapidly evolving threat landscape, protecting sensitive information and maintaining trust are paramount. With experience across a wide range of scenarios where security is critical—such as handling sensitive data and managing remote work environments—we help organizations stay ahead of potential threats.

Our cybersecurity team proactively tests your environment for vulnerabilities or active threats, providing actionable insights to strengthen your security posture. Our risk advisory team works alongside leadership to implement enterprise-wide frameworks that address insider threats, enhance governance, and establish strong financial controls.

Meanwhile, our dispute advisory team provides expert analysis and resolution strategies to navigate financial disputes, particularly as AI-driven crimes continue to rise. With our integrated approach, we help you mitigate risk, maintain compliance, and build resilience in an increasingly digital world.

For more information, please reach out to us.

