A malicious AI model repository impersonating OpenAI's Privacy Filter topped the Hugging Face trending charts, executing a sophisticated six-stage attack to steal developer credentials.

A fraudulent repository on the Hugging Face AI platform impersonating an OpenAI privacy tool was downloaded 244,000 times in under 18 hours, delivering information-stealing malware that compromised developer credentials and crypto wallets.
"The repo itself typosquatted OpenAI's legitimate Privacy Filter release, copying its model card almost verbatim," AI security firm HiddenLayer, which discovered the campaign, said in a report.
The fake repository, named Open-OSS/privacy-filter, used hundreds of automated bot accounts to inflate its "like" count to 667, helping it reach the #1 trending spot. The included loader.py script initiated a six-stage attack, ultimately deploying a Rust-based infostealer that harvested browser passwords, Discord tokens, and SSH keys.
The incident highlights a critical vulnerability in the AI supply chain, where attackers can exploit the trust-based nature of open-source platforms. By impersonating popular models and manipulating social proof, they can turn the developer community itself into a distribution network for malware, threatening to embed security risks deep within corporate and personal projects.
The attack was a multi-stage process designed for stealth. Once a user ran the initial Python script, the remaining stages executed with no visible signs of compromise: the script first displayed fake model-loading output to appear legitimate while it disabled security checks in the background.
It then pulled an encoded command from a public JSON paste site—a method that lets attackers swap payloads without altering the repository itself. This command was passed to PowerShell, which downloaded a second script from api.eth-fastscan.org, a domain mimicking a blockchain analytics service. That script fetched the final payload: a custom infostealer written in Rust. To avoid detection, the malware added itself to Windows Defender's exclusion list before running with elevated privileges via a scheduled task that deleted itself immediately after execution.
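The staging pattern described above—a benign-looking loader that decodes remote commands and hands them to a shell—leaves static traces a defender can look for. The following is a minimal, hypothetical Python sketch of such a check; the indicator list and the two-hit threshold are illustrative assumptions, not HiddenLayer's actual detection logic:

```python
import re
from pathlib import Path

# Illustrative indicators only; a real detection would be far more robust.
SUSPICIOUS_PATTERNS = {
    "encoded-exec": re.compile(r"base64\.b64decode|codecs\.decode"),
    "shell-spawn": re.compile(r"subprocess\.|os\.system|powershell", re.IGNORECASE),
    "remote-fetch": re.compile(r"urllib\.request|requests\.get|paste", re.IGNORECASE),
}

def scan_source(text: str) -> set:
    """Return the names of indicators that match a script's source text."""
    return {name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(text)}

def scan_repo(repo_dir: str) -> dict:
    """Scan every .py file in a downloaded repo, flagging files with 2+ hits.

    A single match is common in legitimate code; combinations (decode +
    shell + remote fetch) are the loader pattern described in the article.
    """
    findings = {}
    for path in Path(repo_dir).rglob("*.py"):
        hits = scan_source(path.read_text(errors="ignore"))
        if len(hits) >= 2:
            findings[str(path)] = hits
    return findings
```

Running `scan_repo` over a freshly cloned model repository before executing any of its scripts would flag a loader like the one in this campaign, at the cost of occasional false positives on legitimate tooling.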
The infostealer was designed to be thorough. It exfiltrated saved passwords, session cookies, and encryption keys from Chrome and Firefox browsers. It also targeted Discord tokens, cryptocurrency wallet seed phrases, SSH keys, and FTP credentials, packaging the stolen data into a compressed JSON file sent to attacker-controlled servers.
This was not an isolated event. HiddenLayer researchers identified at least six other malicious repositories uploaded by a separate Hugging Face account named "anthfu." These repositories impersonated other popular AI models, including Qwen3, DeepSeek, and Bonsai, and used the same malicious loader script pointing to the same command-and-control infrastructure.
The campaign demonstrates a clear playbook for supply chain attacks against the AI developer community. Instead of breaching a platform directly, attackers can publish a convincing lookalike, use bots to game trending algorithms, and wait for unsuspecting developers to download the malware.
If you cloned the Open-OSS/privacy-filter repository and ran any file from it on a Windows machine, security experts advise treating the device as fully compromised. All credentials stored in browsers should be changed, crypto funds should be moved to new wallets, and any SSH or FTP keys should be considered stolen and rotated immediately.
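One quick triage step is to check whether a machine ever pulled the fraudulent model into the local Hugging Face cache. A minimal sketch, assuming the Hub's default cache layout (`models--{org}--{name}` directories under `~/.cache/huggingface/hub`); the helper names are illustrative:

```python
from pathlib import Path

def cache_dir_name(repo_id: str) -> str:
    """Map a repo id like 'Open-OSS/privacy-filter' to its cache directory name."""
    return "models--" + repo_id.replace("/", "--")

def was_downloaded(repo_id: str, cache_root=None) -> bool:
    """Return True if the repo appears in the local Hugging Face hub cache."""
    root = Path(cache_root) if cache_root else Path.home() / ".cache" / "huggingface" / "hub"
    return (root / cache_dir_name(repo_id)).exists()

# Presence of the directory means the repo was fetched at some point and the
# host should be treated as compromised, per the guidance above.
# e.g. was_downloaded("Open-OSS/privacy-filter")
```

Absence of the directory is not an all-clear—the cache may have been moved or cleared—but its presence is a strong signal that the remediation steps above apply.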