A new AI platform called Xanthorox markets itself as a tool for cybercrime, but its real danger may lie in how easily such systems can be built—and sold—by anyone
This article includes a reference to violent sexual assault.
Reports of a sophisticated new artificial intelligence platform started surfacing on cybersecurity blogs in April, describing a bespoke system whispered about on dark web hacker forums and created for the sole purpose of crime. But despite its shadowy provenance and evil-sounding name, Xanthorox isn’t so mysterious. The developer of the AI has a GitHub page, as well as a public YouTube channel with screen recordings of its interface and the description “This Channel Is Created Just for Fun Content Ntg else.” There’s also a Gmail address for Xanthorox, a Telegram channel that chronicles the platform’s development and a Discord server where people can pay to access it with cryptocurrencies. No shady initiations into dark web criminal forums required—just a message to a lone entrepreneur serving potential criminals with more transparency than many online shops hawking antiaging creams on Instagram.
“Jailbreaking”—disabling default software limitations—became mainstream in 2007 with the release of the first iPhone. The App Store had yet to exist, and hackers who wanted to play games, add ringtones or switch carriers had to devise jailbreaks. When OpenAI launched the initial version of ChatGPT, powered by its large language model GPT-3.5, in late 2022, the jailbreaking began immediately, with users gleefully pushing the chatbot past its guardrails. One common jailbreak involved fooling ChatGPT by asking it to role-play as a different AI—one that had no rules and was allowed to write phishing e-mails. ChatGPT would then respond that it indeed couldn’t write such material itself, but it could do the role-playing. It would then pretend to be a nefarious AI and begin churning out phishing e-mails. To make this easier, hackers introduced a “wrapper”—a layer of software between an official AI model and its users. Rather than accessing the AI directly through its main interface, people could simply go through the easier-to-use wrapper. When they input requests for fake news stories or money laundering tips, the wrapper repackaged their prompts in language that tricked ChatGPT into responding.
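To make the mechanics concrete, here is a minimal, deliberately benign sketch of that wrapper pattern: a thin layer of code that repackages a user’s prompt inside a framing template before forwarding it to an underlying model. The template text and the query_model function are illustrative stand-ins, not code from any real wrapper or vendor API.

    # A benign sketch of the "wrapper" pattern described above: the
    # wrapper rewrites a user's prompt before the model ever sees it.

    ROLE_PLAY_TEMPLATE = (
        "You are an actor rehearsing a scene. Stay in character and "
        "respond to this line from the script: {prompt}"
    )

    def query_model(prompt: str) -> str:
        """Hypothetical stand-in for a call to a hosted language model."""
        return f"[model response to: {prompt!r}]"

    def wrapped_query(user_prompt: str) -> str:
        # The wrapper's whole job: disguise the request inside a
        # framing template, then forward it to the underlying model.
        disguised = ROLE_PLAY_TEMPLATE.format(prompt=user_prompt)
        return query_model(disguised)

    print(wrapped_query("Describe your morning."))

A criminal wrapper differs only in the template it hides behind and in charging for access; the architecture itself amounts to a handful of lines, which is why such services are so cheap to build, rebrand and resell.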
As for the criminals making the bots, these episodes taught them two lessons: Wrapping an AI system is cheap and easy, and a slick name sells. Chester Wisniewski, director and global field chief information security officer at the cybersecurity firm Sophos, says scammers often scam other would-be scammers, targeting “script kiddies”—a derogatory term, dating to the 1990s, for those who use prewritten hacking scripts to create cyberattacks without understanding the code. Many of these potential targets reside in countries with few economic opportunities, places where running even a few successful scams could greatly improve their future. “A lot of them are teenagers, and a lot are people just trying to provide for their families,” Wisniewski says. “They just run a script and hope that they’ve hacked something.”
Though security experts have expressed concerns along the lines of AI teaching terrorists to make fertilizer bombs (like the one Timothy McVeigh used in the 1995 Oklahoma City bombing) or to engineer smallpox strains in a lab and unleash them upon the world, the most common threat posed by AIs is the scaling up of already-common scams, such as phishing e-mails and ransomware. Yael Kishon, AI product and research lead at the cyberthreat intelligence firm KELA, says criminal AIs “are making the lives of cybercriminals much easier,” allowing them to “generate malicious code and phishing campaigns very easily.” Wisniewski agrees, saying criminals can now generate thousands of attacks in an hour, whereas they once needed much more time. The danger lies more in amplifying the volume and reach of known forms of cybercrime than in the development of novel attacks. In many cases, AI merely “broadens the head of the arrow,” he says. “It doesn’t sharpen the tip.”
Yet aside from lowering the barrier to becoming a criminal and allowing criminals to target far more people, there now does appear to be some sharpening. AI has become advanced enough to gather information about a person and call them, impersonating a representative from their gas or electric company and persuading them to promptly make an “overdue” payment. Even deepfakes have reached new levels. Hong Kong police said in February 2024 that a staff member at a multinational firm, later revealed to be the British engineering group Arup, had received a message that claimed to be from the company’s chief financial officer. The staffer then joined a video conference with the CFO and other employees—all AI-generated deepfakes that interacted with him like humans, explaining why he needed to transfer $25 million to bank accounts in Hong Kong—which he then did.
Even phishing campaigns, scam e-mails sent out in bulk, have largely shifted to “spear phishing,” an approach that attempts to win people’s trust by using personal details. AI can easily gather the information of millions of individuals and craft a personalized e-mail to each one, meaning that our in-boxes will hold fewer messages from people claiming to be a Nigerian prince and far more from impersonations of former colleagues, college roommates or old flames, all seeking urgent financial help.
Kishon, who predicts that dark AI tools will increase cyberthreats in the years ahead, doesn’t see Xanthorox as a game changer. “We are not sure that this tool is very active because we haven’t seen any cybercrime chatter on our sources on other cybercrime forums,” she says. Her words are a reminder that there is still no gigantic evil chatbot factory available to the masses. The threat is the ease with which new models can be wrapped, misaligned and shipped before the next news cycle.
Yet Casey Ellis, founder of the crowdsourced cybersecurity platform Bugcrowd, sees Xanthorox differently. Though he acknowledges that many details remain unknown, he points out that earlier criminal AI didn’t have advanced expert-level systems—designed to review and validate decisions—checking one another’s work. But Xanthorox appears to. “If it continues to develop in that way,” Ellis says, “it could evolve into being quite a powerful platform.” Daniel Kelley, a security researcher at the AI e-mail-security company SlashNext who wrote the first public report on WormGPT, believes the platform to be more effective than WormGPT and FraudGPT. “Its integration of modern AI chatbot functionalities distinguishes it as a more sophisticated threat,” he says.
Perhaps the scariest part of Xanthorox is the creator’s chatter with his 600-plus followers on a Telegram channel that brims with racist epithets and misogyny. At one point, to show how truly criminal his AI is, the creator asked it to generate instructions on how to rape someone with an iron rod and kill their family—a prompt that seemed to echo the rape and murder of a 22-year-old woman in Delhi, India, in 2012. (Xanthorox then proceeded to detail how to murder people with such an object.) In fact, many posts on the Xanthorox Telegram channel resemble those on “the Com,” a hacker network of Telegram and Discord channels that journalist Brian Krebs described as the “cybercriminal hacking equivalent of a violent street gang” on his investigative news blog KrebsOnSecurity.
Unsurprisingly, much of the work to protect against criminal AI, such as detecting deepfakes and fraudulent e-mails, has been done for companies. Ellis believes that just as spam detectors are built into our current systems, we will eventually have “AI tools to detect AI exploitation, deepfakes, whatever else and throw off a warning in a browser.” Some tools already exist for home users. Microsoft Defender blocks malicious Web addresses. Malwarebytes Browser Guard filters phishing pages, and Bitdefender rolls back ransomware encryption. Norton 360 scans the dark web for stolen credentials, and Reality Defender flags AI-generated voices or faces.
The existence of so many AI systems that can be repurposed for large-scale and personalized crime means that we live in a world where we should all look at incoming e-mails the way city people look at doorknobs. When we get a call from a voice that sounds human and asks us to make a payment or share personal information, we should question its authenticity. But in a society where more and more of our interactions are virtual, we may end up trusting only in-person encounters—at least until the arrival of robots that look and speak like humans.
Deni Ellis Béchard is Scientific American’s senior tech reporter. He is the author of 10 books and has received a Commonwealth Writers’ Prize, a Midwest Book Award, and a Nautilus Book Award for investigative journalism. He holds two master’s degrees in literature, as well as a master’s degree in biology from Harvard University. His most recent novel, We Are Dreams in the Eternal Machine, explores the ways that artificial intelligence could transform humanity.