
From Canada to Estonia, AI Scams are Costing Millions. Here’s What You Can Do.

Artificial intelligence (AI) is rapidly evolving, and with it, the sophistication of scams and fraud. As the technology becomes more accessible, scammers no longer need elaborate setups to inflict harm. Authorities in both Canada and Estonia warn that these scams are becoming increasingly difficult to detect.

Photo by Tima Miroshnichenko on Pexels.

Canada sounds the alarm

On March 24th, Competition Bureau Canada issued a warning that scammers are deploying deepfake technology (synthetic images, video, or audio clips that realistically impersonate someone else) to pose as senior government officials, fabricating their likeness to extract personal information and push disinformation.

The timing of the warning coincides with Fraud Prevention Month, during which authorities have invested in campaigns offering expert guidance on how to spot these scams. The urgency is warranted. Research from KPMG found that, in 2025, nearly three-quarters (72%) of Canadian businesses reported losing as much as 5% of their annual profits to AI-powered scams. And, contrary to popular belief that older populations are more vulnerable to scams, findings from Scotiabank’s Fraud Awareness Poll revealed that “younger Canadians are also vulnerable, with nearly one-in-three (29%) saying they fell for a scam in the past year.”


Estonia’s wake-up call

On the other side of the pond, Estonia is facing a similar crisis. Although the country has been celebrated as one of the world’s most digitally advanced nations, its tech-forward society has also proven vulnerable.

According to the e-Estonia site, 2025 saw over 10,000 cyber incidents that inflicted real harm by disrupting systems and causing financial loss. Of those, nearly 2,800 were phishing cases, and more than 4,500 involved people being tricked into surrendering passwords, eID PIN codes, or account access. Estonian individuals and businesses lost a total of approximately 29 million euros. 

What had previously offered Estonians some protection, as writer Johanna-Kadri Kuusk notes in e-Estonia, was AI’s inability to convincingly replicate the Estonian language. Grammatical errors and awkward phrasing were reliable red flags that raised eyebrows, helping people steer clear of inevitable harm. That linguistic shield, however, is eroding as AI language models grow more advanced.

How to stay safe 

When encountering any video, image, or audio of a public figure promoting something out of character or creating a sense of urgency, it’s worth pausing to look for red flags. Typical giveaways have been misplaced body parts, inconsistent lighting, or unnatural audio quality. However, as AI-generated deepfakes grow more advanced, these giveaways are disappearing. While just a few years ago it was almost always clear whether an image was fake, now it’s nearly impossible. Consider these two images: 

Image generated by Nano Banana (@immasiddx on X)

A quick glance at the image above makes it seem like it could be real. However, upon further inspection, this illusion quickly unravels. Its airbrushed quality and almost uncanny perfection are telltale signs of an AI-generated image.

The image below tells a different story. Even with a closer look, it still looks genuine. There are no obvious flaws. The grain makes it look like it could have been taken on an iPhone. Even the background is convincingly natural.

Yet both images are AI-generated: the top one by Google’s Nano Banana, the bottom one by Nano Banana Pro.

Image generated by Nano Banana Pro (@immasiddx on X)

The saying “don’t believe everything you hear on the internet” is hardly new. Photoshop and video editing software have been around for decades. Photo manipulation goes back even further: in his attempt to rewrite history, Stalin famously had political rivals erased from photographs, wielding doctored images as tools of propaganda and censorship.


But what AI introduces is speed, scale, and accessibility. Fraud that once required technical skill and effort can now be executed in seconds, by almost anyone, anywhere. 

While that is an unsettling reality, it doesn’t have to be a paralyzing one. When it comes to fraud prevention, the best thing you can do to protect yourself amid rapidly advancing technologies is to educate yourself and maintain a critical eye. Ask yourself if the information you’re engaging with is coming from a reliable source, if the urgency you feel is being manufactured, and if what you’re being asked to do serves your interests or someone else’s. Cross-reference the information you’re being told with official sources. And, if in doubt, consult with a trusted friend or family member and report the incident to the authorities. 

Embracing new technology without fear

The lesson from Estonia, as Kuusk argues, is that we cannot let the fear surrounding new technologies force us to retreat from digital life. These technologies are not going anywhere. Falling behind will widen the digital divide, where those without the skills to use them will be left vulnerable. Instead, we must be proactive. For the government and private sector, that means investing in stronger cybersecurity infrastructure. For the consumer, that involves strengthening media literacy skills and learning to question what we see. AI carries real potential to improve our lives, but also to upend them. Which outcome we get depends partly on how seriously we work to stay informed, but also on how effectively our governments invest in the infrastructure and education needed to keep pace with those who actively exploit it.

This article was written by Natalie Jenkins as part of the Local Journalism Initiative.
