Pandora’s Box: How Unrestricted LLMs Threaten Crypto Security
Background
From OpenAI’s GPT series to Google’s Gemini and a wide range of open-source models, advanced AI is rapidly transforming how we work and live. Yet alongside these technological breakthroughs, a darker undercurrent is emerging — the rise of unrestricted or malicious large language models (LLMs).
Unrestricted LLMs refer to language models that have been deliberately modified, fine-tuned, or “jailbroken” to bypass the built-in safety features and ethical safeguards of mainstream models. Developers of reputable LLMs typically invest significant resources to prevent misuse, such as the generation of hate speech, misinformation, malicious code, or illegal instructions. However, in recent years, individuals and groups, often driven by cybercriminal motives, have begun to develop or repurpose models free from such constraints.
This article explores representative examples of these unrestricted LLM tools, examines how they are being weaponized in the crypto industry, and discusses the emerging security challenges and possible countermeasures.
How Are Unrestricted LLMs Abused?
Tasks that once required specialized skills — such as writing malicious code, crafting phishing emails, or orchestrating fraud — can now be carried out by anyone, even those with zero programming knowledge, thanks to the assistance of unrestricted LLMs. Attackers simply need access to open-source model weights and source code. By fine-tuning the models on datasets containing harmful content, biased language, or illegal instructions, they can build customized offensive tools.
This trend introduces several security risks:
- Proliferation of Tailored Malicious Models: Attackers can modify models to target specific scams or user groups, generating highly deceptive and convincing content.
- Bypassing Content Filters: Locally deployed, fine-tuned models can evade the safety mechanisms of mainstream LLMs.
- Generation of Diverse Attack Payloads: LLMs can be used to rapidly produce phishing website code variants or scam scripts optimized for different social platforms (a simple defensive heuristic against lookalike phishing domains is sketched after this list).
- Fueling an Underground AI Ecosystem: The accessibility and flexibility of open-source models foster the development and trade of malicious AI applications.
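To ground the defensive side, here is a minimal sketch of one countermeasure against the phishing-site variants noted above: flagging newly observed domains that closely imitate well-known crypto brands. The allowlist, similarity threshold, and sample domains are illustrative assumptions, not production values.

```python
# Minimal lookalike-domain heuristic. KNOWN_GOOD and the 0.8 threshold are
# illustrative assumptions for this sketch, not vetted production values.
from difflib import SequenceMatcher

# Hypothetical allowlist of legitimate domains to protect.
KNOWN_GOOD = ["binance.com", "okx.com", "crypto.com", "metamask.io"]

def flag_lookalikes(candidate: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    """Flag domains suspiciously similar to, but not equal to, a known-good
    domain (e.g. 'blnance.com' vs 'binance.com')."""
    hits = []
    for good in KNOWN_GOOD:
        score = SequenceMatcher(None, candidate.lower(), good).ratio()
        if candidate.lower() != good and score >= threshold:
            hits.append((good, round(score, 2)))
    return hits

if __name__ == "__main__":
    for domain in ["binance.com", "blnance.com", "okx-events.net"]:
        print(domain, "->", flag_lookalikes(domain))  # only 'blnance.com' is flagged
```

String similarity is only one signal; in practice defenders combine it with homoglyph normalization, certificate transparency monitoring, and domain-age checks.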
Below are common unrestricted LLMs and examples of how they’re exploited in crypto-related scenarios:
WormGPT: The “Black” Version of GPT
WormGPT is a malicious LLM marketed openly on underground forums. Its developer explicitly claims that it has no ethical safeguards, positioning it as the “black version” of GPT. Based on open-source models like GPT-J 6B, it is trained on data related to malware and malicious techniques. A one-month subscription costs as little as $189.
In the crypto space, WormGPT has been abused in the following ways:
- Phishing Emails/Messages: Impersonating crypto exchanges, wallets, or reputable projects to send fake “account verification” requests, tricking users into clicking malicious links or revealing private keys or seed phrases (a toy triage heuristic for such messages is sketched after this list).
- Malicious Code Generation: Assisting low-skilled attackers in writing malware that can steal wallet files, monitor clipboards, or log keystrokes.
- Automated Scam Operations: Automatically responding to potential victims and luring them into fake airdrops or investment schemes.
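Defensively, even crude heuristics help triage AI-generated phishing at scale. Below is a toy sketch that scores a message for requests for secrets combined with urgency cues; the keyword lists and weights are illustrative assumptions, not a real rule set.

```python
# Toy phishing triage score: 2 points per request for a secret, 1 per urgency
# cue. Patterns and weights are illustrative assumptions only.
import re

SECRET_REQUESTS = [r"seed phrase", r"recovery phrase", r"private key", r"wallet password"]
URGENCY_CUES = [r"verify your account", r"within 24 hours",
                r"account (will be )?suspended", r"immediately"]

def phishing_score(text: str) -> int:
    t = text.lower()
    score = 2 * sum(bool(re.search(p, t)) for p in SECRET_REQUESTS)
    score += sum(bool(re.search(p, t)) for p in URGENCY_CUES)
    return score

sample = ("Your account will be suspended within 24 hours. "
          "Please verify your account by confirming your seed phrase.")
print(phishing_score(sample))  # 5 -> high enough to route to human review
```

Because fluent LLM output evades naive keyword filters more easily than human-written spam, such rules are best used as one input to a larger classifier, not as a standalone gate.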
DarkBERT: A Double-Edged Sword of the Dark Web
DarkBERT is an LLM developed by the Korea Advanced Institute of Science and Technology (KAIST) in collaboration with S2W Inc. It is pre-trained on data from the dark web — including forums, marketplaces, and leaked information — with the original intent of helping cybersecurity researchers and law enforcement agencies understand the dark web ecosystem and identify emerging threats.
However, DarkBERT’s intimate knowledge of illicit tradecraft and darknet content could be weaponized. If similar models were fine-tuned or replicated by bad actors, the consequences could be severe. Potential abuses in the crypto space include:
- Targeted Social Engineering: Collecting personal information about crypto users or project teams to craft convincing scams.
- Imitating Criminal Techniques: Reproducing well-documented tactics for crypto theft and money laundering found on darknet forums.
FraudGPT: The Swiss Army Knife of Cybercrime
Touted as an upgrade to WormGPT, FraudGPT offers a broader set of malicious capabilities. Sold via darknet marketplaces and hacker forums, subscriptions range from $200 to $1,700 per month.
In the crypto context, FraudGPT is used for:
- Fake Project Fabrication: Generating realistic whitepapers, websites, roadmaps, and marketing materials for fraudulent ICOs or IDOs.
- Phishing Page Generation: Rapidly creating clones of popular exchange login pages or wallet connection interfaces (a favicon-fingerprint clone check is sketched after this list).
- Social Media Bot Campaigns: Producing fake comments and hype to promote scam tokens or smear competitors.
- Social Engineering Attacks: Mimicking human dialogue to establish trust and manipulate users into disclosing sensitive information or performing risky actions.
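One lightweight way defenders hunt for such clones is favicon fingerprinting: cloned login pages frequently reuse the original site’s favicon byte-for-byte, so a matching hash on an unrecognized domain is a strong clone signal. In the sketch below, the fingerprint table and domain names are hypothetical placeholders.

```python
# Conceptual clone check via favicon hashing. The fingerprint value and the
# domain names below are placeholders, not real data.
import hashlib
import urllib.request

# Hypothetical mapping: SHA-256 of a legitimate site's favicon -> its domain.
KNOWN_FAVICONS = {
    "<sha256-of-legitimate-favicon>": "example-exchange.com",
}

def favicon_hash(domain: str) -> str:
    """Fetch a domain's favicon over HTTPS and return its hex SHA-256."""
    with urllib.request.urlopen(f"https://{domain}/favicon.ico", timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def looks_like_clone(domain: str) -> str | None:
    """Return the imitated brand if `domain` serves a known favicon but is
    not the legitimate host, else None."""
    brand = KNOWN_FAVICONS.get(favicon_hash(domain))
    return brand if brand and domain != brand else None

# Usage (performs a live HTTPS fetch):
# print(looks_like_clone("suspicious-login-page.example"))
```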
GhostGPT: An Amoral AI Assistant
GhostGPT is another chatbot specifically advertised as having no moral constraints. In the crypto space, it enables:
- Advanced Phishing Attacks: Generating highly realistic phishing emails that mimic KYC alerts, security warnings, or account suspension notices from major exchanges.
- Smart Contract Exploit Code: Assisting attackers with no programming background in creating contracts containing hidden backdoors or malicious logic for rug pulls or DeFi attacks.
- Polymorphic Crypto Stealers: Writing self-modifying malware to steal wallet files, private keys, and seed phrases. Its polymorphic nature makes it hard to detect using signature-based security tools (a simple behavioral countermeasure is sketched after this list).
- Social Engineering: Equipping bots on Discord or Telegram with realistic scam dialogue for fake NFT mints, airdrops, or investment schemes.
- Deepfake Scams: When combined with voice cloning tools, GhostGPT can help produce fake audio impersonations of project founders, investors, or exchange executives for phone-based fraud or BEC (business email compromise) attacks.
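Since signature-based tools struggle against polymorphic stealers, detection tends to shift toward behavior. The sketch below illustrates one behavioral check against clipboard-hijacking “clippers”: it alerts when one crypto address in the clipboard is silently replaced by another. It assumes the third-party pyperclip package, and the naive rule also fires when a user legitimately copies two addresses in a row, so treat it as a starting point rather than a finished tool.

```python
# Behavioral "clipper" alarm: warn when a copied crypto address silently
# changes into a different address. The polling interval, address regex, and
# the overall rule are simplifying assumptions for this sketch.
import re
import time

import pyperclip  # third-party: pip install pyperclip

# Matches EVM-style 0x addresses and legacy base58 Bitcoin addresses.
ADDRESS_RE = re.compile(r"^(0x[0-9a-fA-F]{40}|[13][a-km-zA-HJ-NP-Z1-9]{25,34})$")

def watch_clipboard(poll_seconds: float = 0.5) -> None:
    last = pyperclip.paste()
    while True:
        time.sleep(poll_seconds)
        current = pyperclip.paste()
        # Caveat: this also triggers when a user copies two addresses in a
        # row; a real tool would correlate with keyboard/mouse input events.
        if (current != last
                and ADDRESS_RE.match(last or "")
                and ADDRESS_RE.match(current or "")):
            print(f"WARNING: clipboard address changed from {last} to {current}")
        last = current

if __name__ == "__main__":
    watch_clipboard()
```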
Venice.ai: A Gateway to Unfiltered AI Access
Venice.ai provides access to multiple LLMs, including some with minimal safety restrictions, and markets itself as an open gateway to the “full power” of cutting-edge, unmoderated models. That same openness, however, creates avenues for abuse:
- Unfiltered Content Generation: Attackers can use less-restricted models to generate phishing templates, disinformation, or attack vectors.
- Lowered Prompt Engineering Barriers: Even users unfamiliar with jailbreak techniques can easily obtain outputs that mainstream models would refuse to generate.
- Accelerated Fraud Scripting: Attackers can rapidly test how different models respond to malicious prompts, optimizing scam strategies and narratives.
Final Thoughts
The emergence of unrestricted LLMs marks a paradigm shift in cybersecurity. These models drastically lower the barrier to launching sophisticated, large-scale, and highly deceptive attacks.
In this evolving landscape, stakeholders across the security ecosystem must collaborate to address the new risks:
- Advance Detection Capabilities: Invest in tools that can identify and block phishing content, contract exploits, and malware generated by LLMs.
- Improve Jailbreak Resistance: Strengthen model defenses against prompt injection and misuse while developing watermarking and provenance tools to trace harmful outputs in critical sectors like finance and code generation (a conceptual watermark-detection sketch follows this list).
- Establish Ethical and Regulatory Safeguards: Promote responsible development practices and implement oversight mechanisms to curb the creation and abuse of malicious models at the source.
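As one concrete illustration of the provenance direction, the sketch below shows “green list” statistical watermark detection in the spirit of published LLM watermarking schemes: if a cooperating generator biased its sampling toward a pseudo-random green list seeded by the preceding token, a detector that shares the seeding scheme can test whether the observed green fraction is improbably high. The whitespace tokenizer, hash-based seeding, and thresholds here are toy assumptions.

```python
# Toy "green list" watermark detector. Assumes the generator and detector
# share this exact seeding scheme; tokenizer and constants are illustrative.
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked green per step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(text: str) -> float:
    """z-score of the observed green count against the unwatermarked null."""
    tokens = text.split()
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (greens - mean) / math.sqrt(var)

# A z-score well above ~4 suggests watermarked output; ordinary text stays near 0.
print(round(watermark_z_score("example text to score for a watermark signal"), 2))
```

Such detection only works for outputs from generators that embed a watermark; it does nothing against locally run, unwatermarked malicious models, which is why it complements rather than replaces the other measures above.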
About SlowMist
SlowMist is a blockchain security firm established in January 2018. Founded by a team with over ten years of network security experience, the firm has grown into a globally recognized blockchain security company. Our goal is to make the blockchain ecosystem as secure as possible for everyone. We have worked with well-known projects such as HashKey Exchange, OSL, MEEX, BGE, BTCBOX, Bitget, BHEX.SG, OKX, Binance, HTX, Amber Group, Crypto.com, and others.
SlowMist offers a variety of services that include, but are not limited to, security audits, threat intelligence, defense deployment, and security consulting. We also offer AML (anti-money laundering) software, MistEye (security monitoring), SlowMist Hacked (crypto hack archives), FireWall.x (smart contract firewall), and other SaaS products. We have partnerships with domestic and international firms such as Akamai, BitDefender, RC², TianJi Partners, and IPIP. Our extensive work in cryptocurrency crime investigations has been cited by international organizations and government bodies, including the United Nations Security Council and the United Nations Office on Drugs and Crime.
By delivering comprehensive security solutions tailored to individual projects, we identify risks before they can be exploited. Our team has discovered and published several high-risk blockchain security vulnerabilities, raising awareness and helping improve security standards across the blockchain ecosystem.