Are you ready for a shock? AI scams are exploding, preying on loneliness and spreading misinformation, and they're getting incredibly sophisticated. Imagine a world where you can't trust what you see online – it's closer than you think. Let's dive into how these scams work and what you can do to protect yourself.
Take Singapore, for example. A seemingly innocent social media post on Threads shows a woman, "Nicolette Smith," looking invitingly at the camera. The caption? Something suggestive, often with slightly broken English. "I’ve been wanting to see you for a long time,” one post reads, “Where are you from? I’ll come to you.” The goal is simple: lure unsuspecting users into a false sense of connection. And it's working.
Hundreds of people are responding, many genuinely believing this "Nicolette" is real. "I’m a retired single widower," one user writes, laying bare their vulnerability. Others eagerly share their locations, hoping for a connection. This account, which started in August, has already amassed over 16,000 followers. But here's where it gets controversial... Meta, the company behind Threads and Facebook, hasn't commented on these types of inauthentic accounts. Is this a sign they're struggling to keep up, or is something else at play?
And it's not just "Nicolette." Dozens of similar accounts exist, some posing as men, all using the same engagement-baiting tactics. This raises a critical question: how are these accounts able to flourish? It's a sign of how algorithms, designed to maximize engagement, can inadvertently amplify inauthentic content.
But the problem extends far beyond romance scams. Consider Complaint Singapore, a large Facebook group. A user recently shared a post claiming an 11-week-old baby died after receiving 20 vaccines, accompanied by what appeared to be an AI-generated image. While most commenters recognized the falsehood, a disturbing minority embraced it. "That’s why I am firmly refusing vaccine for my daughter since newborn," one user wrote, highlighting the real-world consequences of misinformation.
This post originated from a Facebook page called Health and Happiness, which pumps out a constant stream of AI-generated and stolen content to over 470,000 followers. The page pushes misleading health claims, such as that sleeping on your left side reduces heartburn, or that a 13-year-old was "cured" of terminal brain cancer. And this is the part most people miss: even when the information is based on real news (like the AFP story mentioned), it's often sensationalized beyond recognition, twisting reality to grab attention.
These examples are just the tip of the iceberg. On TikTok, the account @selelehsg gets tens of thousands of views with AI-generated videos depicting dramatic (and fake) scenarios in Singapore. One video of a market stallholder arguing with an old man racked up over 1.5 million views! Despite captions mentioning AI, many viewers don't realize the videos aren't real. Other creators are even less transparent, posting AI-voiced narratives with fictitious accounts of current events.
Take the TikTok account @syinshyqer, for instance. They create videos stitching together unrelated clips with text-to-speech voice-overs, often fabricating or exaggerating current events. Their video about the Mumbai terror attack, filled with inaccuracies, has garnered over 700,000 views since October. While much of this content seems random, it's often driven by a clear economic motive.
When scammers target lonely individuals, these schemes morph into "pig-butchering" scams, in which the scammer builds trust before extracting money through cryptocurrency or fraudulent investments. Researchers have tracked over US$75 billion in cryptocurrency flowing from over 4,000 victims into accounts largely based in Southeast Asia between January 2020 and February 2024. It's a massive, highly organized operation.
But the monetization doesn't stop at direct fraud. Platforms themselves contribute to the misinformation economy. TikTok, for example, pays creators in certain regions based on engagement. AI-generated videos are even used to hawk products on TikTok Shop, earning commissions for each sale. Other platforms, like Facebook and YouTube, also reward video engagement with payments. This creates a perverse incentive for individuals to churn out low-effort, AI-enabled content – what some online commentators call "AI slop." One content creator in the Philippines told NPR he made US$9,000 in a month using AI-generated videos. That's more than some people earn in a year!
Even when viewers suspect AI involvement, they may not grasp the extent of the manipulation. A video of an American pastor preaching that billionaires are the only minority we should fear reached over 10 million users. Another, showing a man rescuing a baby falling from a building, garnered 52 million views. Many viewers didn't realize these videos were made using Sora, OpenAI's new video generation tool. Some telltale signs include unusual cropping, blurring to hide watermarks, or choppy editing.
To be sure, AI-enabled misinformation existed even before Sora. During a recent election in Singapore, a surge of manipulated videos targeting candidates appeared on TikTok. Companies like OpenAI, Microsoft, and Adobe are trying to combat this with measures like invisible metadata indicating AI provenance. But these measures have been inconsistent at flagging AI-generated content. A Washington Post investigation found that only one of eight major social media platforms (YouTube) disclosed that videos generated with Sora were AI-generated, and even that disclosure was easy to miss.
And think about this: even with metadata, it's easy to circumvent these safeguards. Plus, much inauthentic material doesn't even rely on AI-generated visuals. Part of the problem lies in distinguishing between inauthenticity and harmless editing. Meta changed its "made with AI" label to "AI info" after photographers complained that it mislabeled their Photoshop-edited images.
Ultimately, the issue isn't just about new AI technologies. Misinformation has always been intertwined with social media. YouTube, for example, has long been criticized for hosting content farms that produce massive amounts of misinformation for profit. Meta internally projected that around 10% of its 2024 annual revenue (US$16 billion) comes from running advertisements for scams and banned goods. Meta's own research also suggests that its platforms are involved in a third of all successful scams in the US.
So, what's the solution? Without addressing the fundamental conflict within social media – maximizing engagement at the expense of accuracy and well-being – inauthentic material will likely remain a permanent feature of our online lives. For now, one popular response is debunking videos, in which creators dissect viral misinformation. In response to the AI-generated pastor clip, American TikTok creator Jeremy Carrasco pointed out that a quick look at the account's profile would have revealed its inauthentic origins. "That basic research didn’t stop many big influencers from reposting this,” he says.
This raises some important questions for you: Do you think social media platforms are doing enough to combat AI scams and misinformation? What steps can individuals take to protect themselves from these threats? Share your thoughts in the comments below!