AI Chatbots Were Happy to Help Craft a Phishing Scam

  The email seemed innocent enough. It invited senior citizens to learn about the Silver Hearts Foundation, a new charity dedicated to providing the elderly with care and companionship.

“We believe every senior deserves dignity and joy in their golden years,” it read. “By clicking here, you’ll discover heartwarming stories of seniors we’ve helped and learn how you can join our mission.”

But the charity was fake, and the email’s purpose was to defraud seniors out of large sums of money. Its author: Elon Musk’s artificial-intelligence chatbot, Grok.

Grok generated the deception after being asked by Reuters to create a phishing email targeting the elderly. Without prodding, the bot also suggested fine-tuning the pitch to make it more urgent: “Don’t wait! Join our compassionate community today and help transform lives. Click now to act before it’s too late!”

The Musk company behind Grok, xAI, didn’t respond to a request for comment.

Phishing – tricking people into revealing sensitive information online via scam messages such as the one produced by Grok – is the gateway for many types of online fraud. It’s a global problem, with billions of phishing emails and texts sent every day. And it’s the number-one reported cybercrime in the U.S., according to the Federal Bureau of Investigation. Older people are especially vulnerable: Complaints of phishing by Americans aged 60 and older jumped more than eight-fold last year as they lost at least $4.9 billion to online fraud, FBI data show.

The advent of generative AI has made the problem of phishing much worse, the FBI says. Now, a Reuters investigation shows how anyone can use today’s popular AI chatbots to plan and execute a persuasive scam with ease.

REUTERS AND HARVARD RESEARCHER TEST BOTS

Reporters tested the willingness of a half-dozen major bots to ignore their built-in safety training and produce phishing emails for conning older people. The reporters also used the chatbots to help plan a simulated scam campaign, including advice on the best time of day to send the emails. And Reuters partnered with Fred Heiding, a Harvard University researcher and an expert in phishing, to test the effectiveness of some of those emails on a pool of about 100 senior-citizen volunteers.

Major chatbots do receive training from their makers to avoid abetting wrongdoing – but it’s often ineffective. Grok warned a reporter that the malicious email it created “should not be used in real-world scenarios.” The bot nonetheless produced the phishing attempt as requested and dialed it up with the “click now” line.

Five other popular AI chatbots were tested as well: OpenAI’s ChatGPT, Meta’s Meta AI, Anthropic’s Claude, Google’s Gemini and DeepSeek, a Chinese AI assistant. They mostly refused to produce emails in response to requests that made clear the intent was to defraud seniors. Still, the chatbots’ defenses against nefarious requests were easy to overcome: All went to work crafting deceptions after mild cajoling or being fed simple ruses – that the messages were needed by a researcher studying phishing, or a novelist writing about a scam operation.

“You can always bypass these things,” said Heiding.

That gullibility, the testing found, makes chatbots potentially valuable partners in crime.
