A staggering $25 million was stolen last year from the global engineering consultancy Arup in a brazen AI-enabled fraud. Using sophisticated deepfake technology, fraudsters convincingly replicated the voices and appearances of company executives, tricking an employee into authorizing a massive fund transfer.
This incident underscores the escalating threat of AI-powered financial scams and the alarming speed at which technology is being weaponized for high-stakes deception.
The rapid advancement of artificial intelligence, particularly the proliferation of generative models trained on extensive video and photographic datasets, has dramatically simplified the creation of convincing impersonations.
Recent breakthroughs in model efficiency continue to fuel a global technology race, further expanding attackers' ability to push the boundaries of synthetic media.
As social media platforms become inundated with AI-generated content, the ability to distinguish real from fake is diminishing. Deepfake technology continues to learn and evolve, driven by mass exposure and feedback, and poses a significant and growing challenge.
Businesses and individuals must confront a sobering reality – we are entering an era where seeing is no longer believing.
Increasing sophistication of AI crime
Technology is advancing at an unprecedented pace, making it easier than ever to create realistic deepfakes. What once required specialized expertise and expensive equipment can now be achieved with readily available tools at a fraction of the cost. Generative AI plays a pivotal role in this shift, enabling the seamless creation of hyper-realistic media with minimal effort.



Progression of AI-generated imagery in one year: Camera image of BDO Partner, Rishan Lye (left), AI-generated image from 2023 (center), and AI-generated image from 2024 (right).
Despite the growing risks, there is no universal standard for combating deepfakes. While some organizations and governments are developing regulations and ethical frameworks, adoption remains inconsistent.
This lack of uniformity creates opportunities for bad actors to exploit the technology for financial fraud, misinformation campaigns, and identity theft without consequence.
Impacts of deepfake crimes
AI-generated synthetic media is reshaping the landscape of cybercrime, enabling unprecedented fraud and threatening individuals and businesses. The consequences range from financial losses and reputational harm to severe psychological distress for victims.
Once misinformation spreads online, damage control becomes nearly impossible, as false narratives often persist even after being debunked.
As deepfake technology advances, traditional security measures, such as verbal authentication over the phone, are becoming increasingly unreliable.
This growing cybersecurity challenge requires businesses to rethink and enhance their fraud detection technology and processes to stay ahead. The millions of dollars stolen in the Arup scam are an eye-opener for all companies to consider solutions that can differentiate between real people and fake personas, including facial recognition technology and in-the-moment verification processes such as carding.
Verification should weigh parameters such as isolated facial features, signs of face spoofing, eye reflections, skin texture, unnatural movements, and whether the person can react appropriately in the moment.
Beyond personal harm, these crimes can also seriously threaten brands and businesses. Companies can face reputational damage if AI-generated forgeries falsely depict them engaging in unethical practices, such as environmental destruction or labour exploitation, leading to public backlash and financial losses.
Conversely, companies guilty of actual misconduct may manipulate the narrative by dismissing authentic evidence as fake, shielding themselves from accountability. The lack of legal frameworks in many jurisdictions further complicates the pursuit of justice, leaving victims to battle the consequences with limited recourse.
Red flags to detect deepfake threats
While increasingly sophisticated AI-generated media can appear highly realistic, we can watch for certain inconsistencies that often expose its artificial nature.
Visual indicators
AI-generated faces may exhibit telltale signs of manipulation. Look for unnatural facial expressions, where emotions seem forced or fail to align with the speaker’s tone. Subtle distortions, such as inconsistencies in skin texture, flickering, or blurring around the edges of the face, can also indicate tampering.
Additionally, slight delays or awkward transitions in lip movements may suggest that the footage has been artificially generated.
These indicators can still be spotted today. However, AI models are perpetually improving at mimicking facial micro-expressions, refining texture details, and eliminating artifacts, making fakes harder to detect in the future.
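To make the idea concrete, the sketch below shows one way a team might experiment with an automated check for the flicker and edge blurring described above. It uses OpenCV's stock face detector and treats large frame-to-frame swings in face-region sharpness as a reason for closer review; the function name and the use of Laplacian variance as a sharpness proxy are illustrative choices, not part of any standard detection product.

```python
# Illustrative heuristic only: flags videos whose face region shows the kind of
# flicker or edge blurring described above. Not a substitute for dedicated tooling.
import cv2
import numpy as np

def face_flicker_score(video_path: str) -> float:
    """Return a rough 'flicker' score: the spread of frame-to-frame sharpness
    changes inside the detected face region. Higher values warrant a closer look."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    sharpness = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        face = gray[y:y + h, x:x + w]
        # Laplacian variance is a common proxy for local sharpness/texture.
        sharpness.append(cv2.Laplacian(face, cv2.CV_64F).var())
    cap.release()
    if len(sharpness) < 2:
        return 0.0
    # Large swings in sharpness between consecutive frames suggest flicker
    # or blurring artifacts around the face.
    return float(np.std(np.diff(sharpness)))
```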
Audio indicators
Voice replication technologies have improved significantly, but imperfections remain. Be cautious of robotic or monotone speech, where intonation and emphasis feel off. AI-generated voices may also struggle with natural pauses, resulting in lagged responses that don’t align with real-time conversations.
Furthermore, an absence of background noise—or an unnatural level of consistency in ambient sound—can be a warning sign, as authentic recordings typically capture environmental variations.
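As a rough illustration of the background-noise point, the sketch below measures how much the quietest portions of a recording vary; an implausibly flat background can be one more reason to verify through another channel. It assumes an uncompressed 16-bit PCM WAV file, and the frame length and "quietest 20%" cut-off are arbitrary illustrative choices.

```python
# Rough check for the "unnaturally consistent background" warning sign:
# measure how much the quiet portions of a recording vary. Assumes 16-bit PCM WAV.
import wave
import numpy as np

def background_variation(path: str, frame_ms: int = 50) -> float:
    """Coefficient of variation of RMS energy in the quietest 20% of frames.
    Very low values can indicate synthetic or looped ambient sound."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        channels = wf.getnchannels()
        raw = wf.readframes(wf.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64)
    if channels == 2:
        samples = samples.reshape(-1, 2).mean(axis=1)  # fold stereo to mono
    hop = int(rate * frame_ms / 1000)
    frames = [samples[i:i + hop] for i in range(0, len(samples) - hop, hop)]
    if not frames:
        return 0.0
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    quiet = np.sort(rms)[: max(1, len(rms) // 5)]  # quietest fifth ~ background
    return float(np.std(quiet) / (np.mean(quiet) + 1e-9))
```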
Contextual cues
Beyond visual and audio indicators, discrepancies in communication style can be a strong giveaway. A message or video that contains unusual phrasing, grammatical errors, or a shift in tone that doesn’t match the speaker’s typical style should raise suspicion. Pay attention to the origin of the media—was it shared from an unfamiliar source or an unverified platform?
Businesses or operational teams should consider implementing validation processes such as unique behavioural tests, like pre-agreed codewords shared only with authorized personnel or real-time challenges such as time-sensitive or context-based verification questions.
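A minimal sketch of such a time-sensitive challenge is shown below, built only on Python's standard library and a secret agreed offline between authorized staff. The window length, digit count, and function names are illustrative assumptions; in practice an organization would more likely adopt an established scheme such as TOTP-based one-time codes.

```python
# Minimal sketch of a time-sensitive verification challenge built on a
# pre-shared secret agreed offline. Window length and digit count are
# illustrative choices, not a standard.
import hmac
import hashlib
import time

def verification_code(shared_secret: bytes, window_s: int = 60, digits: int = 6) -> str:
    """Derive a short code that is only valid for the current time window."""
    counter = int(time.time() // window_s).to_bytes(8, "big")
    digest = hmac.new(shared_secret, counter, hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 10 ** digits).zfill(digits)

def verify(shared_secret: bytes, spoken_code: str, window_s: int = 60) -> bool:
    """Check a code spoken back on the call; a deepfaked caller without the
    secret cannot produce it, no matter how convincing the voice or face."""
    return hmac.compare_digest(spoken_code, verification_code(shared_secret, window_s))
```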
Urgency is often used as a manipulation tactic, pressuring the recipient to act quickly before verifying authenticity. If a request deviates from standard protocols or demands immediate action, treat it as a likely scam until it has been independently verified.
How to protect your business from deepfake attacks
Businesses must remain vigilant against emerging threats that exploit digital misinformation and identity manipulation. While no system is entirely foolproof, proactive and preventive measures can significantly reduce the risk of fraud and reputational damage.
Cyber vigilance and spreading awareness
Organizations must adopt a forward-thinking approach to cybersecurity, treating fraud detection as an ongoing process rather than a reactive measure. Train your employees on how deceptive content is generated and used to manipulate public perception.
This can help them recognize potential threats. Regular cybersecurity and fraud awareness programs ensure that staff members remain informed about evolving attack tactics.
Leveraging AI-powered detection tools
Sophisticated AI-driven detection tools will likely become the best way to determine the authenticity of content as synthetic media outpaces human perception. Businesses should invest in reliable AI solutions that can detect inconsistencies in video and audio, helping to identify fraudulent communications before they cause harm.
To stay ahead of bad actors, organizations should also implement dynamic verification processes. Frequent updates to authentication methods make it more difficult for attackers to predict security patterns, reducing their chances of success.
Blockchain technology
Blockchain technology offers a powerful defence against data manipulation. By storing information on a decentralized, tamper-evident ledger, businesses can ensure that records remain authentic and unchanged. This is particularly useful for securing contracts, verifying digital identities, and maintaining the integrity of sensitive communications.
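The tamper-evidence idea can be illustrated with a simple hash chain, in which each record commits to the hash of the one before it, so altering any entry invalidates everything that follows. The sketch below is a toy illustration of that principle, not a blockchain deployment; the record fields and function names are assumptions.

```python
# Simplified hash-chain sketch of the tamper-evidence idea behind a ledger:
# each record commits to the previous one, so editing any entry breaks every
# later hash. A real deployment would use an established ledger product.
import hashlib
import json
import time

def add_record(chain: list[dict], payload: dict) -> dict:
    """Append a record that commits to the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered record is detected."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["hash"] != expected or body["prev_hash"] != prev_hash:
            return False
        prev_hash = record["hash"]
    return True
```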
Address misinformation risks on social media
Social media platforms, particularly TikTok, have become major sources of news consumption in several regions. The largely unaccountable and unregulated flow of information on these platforms creates an ideal environment for bad actors to manufacture misleading narratives.
With the vast amount of user-generated content, it becomes easier to fabricate convincing messages that align with specific agendas. Governments and businesses must monitor such platforms for potential misinformation that could impact their brand or industry, leveraging AI tools and fact-checking organizations to counter false narratives.
How BDO can help
In today’s rapidly evolving threat landscape, protecting sensitive information and maintaining trust are paramount. With experience across a wide range of scenarios where security is critical—such as handling sensitive data and managing remote work environments—we help organizations stay ahead of potential threats.
Our cybersecurity team proactively tests your environment for vulnerabilities or active threats, providing actionable insights to strengthen your security posture. Our risk advisory team works alongside leadership to implement enterprise-wide frameworks that address insider threats, enhance governance, and establish strong financial controls.
Meanwhile, our dispute advisory team provides expert analysis and resolution strategies to navigate financial disputes, particularly as AI-driven crimes continue to rise. With our integrated approach, we help you mitigate risk, maintain compliance, and build resilience in an increasingly digital world.