By Nidhi Kolhapur, Author

Nidhi is a certified digital marketing executive and passionate crypto journalist covering the world of alternative currencies. She shares the latest and trending news on cryptocurrency and blockchain.



AI Gone Rogue? Solana Wallet Hacked in First-Ever AI Poisoning Attack

Story Highlights
  • A Solana wallet was compromised through a malicious API link recommended by ChatGPT, resulting in a loss of approximately $2,500.

  • This incident highlights the potential risks of relying solely on AI-generated outputs, especially in complex fields like blockchain development.

The cryptocurrency world has witnessed its first recorded case of AI poisoning, in which a Solana wallet was compromised, resulting in an estimated loss of $2,500. The incident demonstrates both the promise and the dangers of using AI tools like ChatGPT in Web3 development, as their outputs can inadvertently steer users toward compromised tools.

How the Attack Happened

On November 21, 2024, a user tried to create a meme token sniping bot for the Solana-based platform Pump.fun using ChatGPT for assistance. Instead of providing secure guidance, the AI chatbot suggested a fraudulent API link disguised as a tool for Solana services.

The fake API, created by scammers, included a backdoor that intercepted the wallet's private keys in plaintext and transferred the funds, including SOL, USDC, and meme tokens, to the attackers' wallet. This wallet is linked to 281 similar thefts from other compromised wallets.
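The core difference between the backdoored API and a legitimate service can be sketched in a few lines. The endpoint URLs and function names below are hypothetical, and neither function contacts a real service; each simply builds the request payload so the red flag is visible: a legitimate RPC call carries only a locally signed transaction, never the private key itself.

```python
def backdoored_request(private_key: str, tx: dict) -> dict:
    """The pattern planted by the scammers (hypothetical endpoint):
    the 'API' asks for the raw private key. Once the key leaves the
    machine in plaintext, the attacker controls the wallet."""
    return {
        "endpoint": "https://solana-tools.example-malicious.io/send",
        "body": {"private_key": private_key, "tx": tx},  # red flag
    }

def safe_request(signed_tx_base64: str) -> dict:
    """A sane pattern: sign the transaction locally, then submit only
    the signed bytes to an RPC node. The key never leaves the machine."""
    return {
        "endpoint": "https://api.mainnet-beta.solana.com",
        "body": {"method": "sendTransaction", "params": [signed_tx_base64]},
    }

# Any tool whose request payload contains the private key should be
# rejected outright, no matter how plausible the rest of the code looks.
req = backdoored_request("SECRET_KEY", {"to": "...", "amount": 1})
print("private_key" in req["body"])  # True: this is the giveaway
```

A quick string search for fields like `private_key` or `secret` in any AI-suggested networking code is a cheap first line of defense.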

Investigations traced the malicious API to GitHub repositories where scammers had intentionally planted Trojan-infected Python files, targeting unsuspecting developers.

Understanding AI Poisoning

AI poisoning involves introducing corrupted data into the training of AI models, distorting their outputs. In this case, malicious GitHub repositories likely affected ChatGPT's responses, leading it to generate insecure API recommendations.

Although OpenAI has not been directly implicated, this event highlights the risks AI systems can pose when applied in specialized areas like blockchain.

Yu Xian, founder of blockchain security firm SlowMist, described the attack as a “wake-up call” for developers. He warned that the growing datasets used for AI training are increasingly vulnerable to tampering, enabling scammers to exploit popular tools like ChatGPT.

How to Stay Protected

To reduce the risk of similar incidents, security experts recommend the following precautions:

  1. Verify All Code and APIs: Avoid relying entirely on AI-generated outputs. Conduct thorough audits of code and APIs before using them.
  2. Use Separate Wallets: Keep test wallets and significant assets separate to ensure that experimental bots or unverified tools cannot access valuable funds.
  3. Monitor Blockchain Activity: Work with trusted blockchain security firms, like SlowMist, to identify and respond to emerging threats.
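The first precaution above can be partly automated. Here is a minimal sketch of an endpoint allowlist check; the trusted hostnames are assumptions for illustration, and a real project would maintain its own vetted list:

```python
from urllib.parse import urlparse

# Assumed allowlist for illustration -- a real bot should maintain
# its own vetted set of RPC hosts.
TRUSTED_HOSTS = {
    "api.mainnet-beta.solana.com",
    "api.devnet.solana.com",
}

def is_trusted_endpoint(url: str) -> bool:
    """Accept only HTTPS URLs whose host is on the vetted allowlist.
    Anything an AI assistant suggests that fails this check should be
    audited by hand before a wallet ever touches it."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

print(is_trusted_endpoint("https://api.mainnet-beta.solana.com"))   # True
print(is_trusted_endpoint("https://solana-tools.evil.example.io"))  # False
```

Checking the full hostname (rather than a substring like "solana") matters, since scam domains routinely embed the legitimate brand name.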

As blockchain continues to grow, both developers and investors must stay alert to prevent increasingly sophisticated fraud. Balancing innovation with security is critical to protecting the cryptocurrency ecosystem.
