Nation-state hackers deliver malware from “bulletproof” blockchains - Ars Technica
arstechnica.com/security/2025/10/hackers-bullet…
Some excerpts:
Since February, Google researchers have observed two groups turning to a newer technique to infect targets with credential stealers and other forms of malware. The method, known as EtherHiding, embeds the malware in smart contracts, which are essentially apps that reside on blockchains for Ethereum and other cryptocurrencies. Two or more parties then enter into an agreement spelled out in the contract. When certain conditions are met, the apps enforce the contract terms in a way that, at least theoretically, is immutable and independent of any central authority.
- Decentralization prevents takedowns of the malicious smart contracts, because the blockchains' own rules bar the removal of any contract once deployed.
- Similarly, the immutability of the contracts prevents anyone from removing or tampering with the malware.
- Transactions on Ethereum and several other blockchains are effectively anonymous, protecting the hackers’ identities.
- Retrieval of malware from the contracts leaves no trace of the access in event logs, providing stealth (see the code sketch below).
- The attackers can update malicious payloads at any time.
Creating or modifying smart contracts typically costs less than $2 per transaction, a huge savings in funds and labor over more traditional malware-delivery methods.
Layered on top of the EtherHiding Google observed was a social-engineering campaign that used fake job recruiting to lure targets, many of whom were developers of cryptocurrency apps or other online services. During the screening process, candidates must complete a test demonstrating their coding or code-review skills. The files required to complete the tests are embedded with malicious code.
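For the curious, here is a minimal sketch, in TypeScript with ethers v6, of the retrieval mechanism the excerpts describe. The contract address, the getPayload() function name, and the RPC endpoint are illustrative assumptions, not details from the article:

```typescript
// Sketch of the victim-side read path (hypothetical contract and function).
import { JsonRpcProvider, Contract } from "ethers";

// Any public JSON-RPC endpoint works; Cloudflare's Ethereum gateway is one example.
const provider = new JsonRpcProvider("https://cloudflare-eth.com");

// Hypothetical getter on the attacker's contract (placeholder address).
const abi = ["function getPayload() view returns (bytes)"];
const contract = new Contract("0x0000000000000000000000000000000000000000", abi, provider);

// A view call goes out as eth_call: read-only, no gas spent, and it creates
// no transaction or event log on the chain -- hence the "no trace" bullet above.
const payload = await contract.getPayload();
console.log("fetched payload bytes:", payload);
```

Because the fetch is a read-only call rather than a transaction, nothing is written to the chain, which is what makes retrieval traceless on-chain.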
Comments from other communities
Web 3.0 has always been a joke. AI has more actual uses
They are both useful and both jokes.
Depends what exactly we are talking about.
AI has no use. It only subtracts value and creates liabilities.
AI != chatbots
Just saying.
I think there's a point where you have to realize the topic of discussion is LLMs like ChatGPT, and that point was around the time we compared it to Web 3.0, something people hate and associate with tech bros and evil corporations.
The meaning of words change based on context.
There is a point when one can just admit they are wrong, or twist words to convince themselves they were right.
Y’all need to understand that AI is coming and is going to replace a lot of things. I don’t know why some of you keep pretending it has no use case. This tech is going to leave you behind if you don’t use it.
LLMs have been at a standstill since 2021. I would argue the current models' underlying designs were around in the late '80s; they're just using more compute time now. But it's being marketed as the future to confuse a billion dopes like you who don't understand technology. It's the ultimate Ponzi scheme: the companies are making no money, but their valuations keep rising.
To clarify: OpenAI wrote a paper arguing their models would never reach human output accuracy. They showed that gaining as much from GPT-3 to GPT-4 as from GPT-2 to GPT-3 would cost an exponentially larger amount of resources, which was proven again in practice when they actually did it a couple of years later. Improving it again would cost more power than mankind currently produces in total, but the end result would still be hallucinating, liability-riddled garbage, because in 2022 DeepMind showed that even with LITERALLY INFINITE POWER AND TRAINING DATA it would not reach human output, and that the hard limit didn't even reach the mid-90s.
You are arguing with the AI companies and researchers. Y'all need to understand that AI, as it is, is a fucking scam.
The paper from OpenAI: https://arxiv.org/pdf/2001.08361
The followup paper from DeepMind: https://arxiv.org/pdf/2203.15556
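For reference, the first link is Kaplan et al.'s "Scaling Laws for Neural Language Models" and the second is DeepMind's "Chinchilla" paper (Hoffmann et al., 2022). The Chinchilla paper fits pretraining loss with a parametric form that includes an irreducible term, which is presumably the "hard limit" referred to above:

L(N, D) = E + A/N^α + B/D^β, with fitted values E ≈ 1.69, A ≈ 406.4, B ≈ 410.7, α ≈ 0.34, β ≈ 0.28, where N is parameter count and D is training tokens.

The power-law decay in N and D means each equal improvement in loss needs multiplicatively more parameters and data, and the constant E is a floor no amount of either removes; note, though, that the papers frame this as diminishing returns on loss, not as a proof about human-level output.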
I’m no AI fan by any means, but it’s really good at pointing directions, or rather, introducing you to topics that you didn’t know how to start researching.
I often find myself asking: “Hey AI, I want to do this very specific thing but I don’t really know what it is called, can you help me?”. And sure enough I get the starting point, so I can close that down and search on my own.
Otherwise, trying to learn anything in depth there is just a footgun.
I’m seconding this and adding to it. AI is terrible for factual information but great at relative knowledge and reframing.
I use it as a starting point in writing research when I can't get relevant search results.
Most recently, I asked it about urban legends in modern-day Louisiana and got a list to drive more in-depth searches; most entries were accurate.
It’s good at mocking up accents and patterns of speech relative to a location/time as well.
Unfortunately, an LLM lies about 1 time in 5 to 1 in 10, i.e. 80% to 90% accuracy, with a hard limit proven by the OpenAI and DeepMind research papers, which state that even with infinite power and resources it would never approach human language accuracy. Add on top of that the fact that the model is trained on human inputs, which are themselves flawed, so you multiply in an average person's rate of being wrong.
In other words, you're better off browsing forums and asking people, or finding books on the subject, because the AI is full of shit, and you're going to be one of those idiot sloppers everybody makes fun of: you won't know jack shit and you'll be confidently incorrect.
They just explained how to use AI in a way where “truth” isn’t relevant.
No way the vast majority of people are getting things right more than 80% of the time. On their own trained tasks, sure, but random knowledge? Nope. The AI holds a more intelligent conversation than most of humanity. That says a lot about humanity.
You literally don’t understand.
The human statements are the baseline, right or wrong, and the AI struggles to stay above 80% of that baseline.
Take however often a person is wrong and multiply it: that's AI. They like to call it "hallucination," and it will never, ever, go away; in fact it will get worse, because it has already polluted its own training data, which it will pull from to produce ever-worse output, like noise from an amp in a feedback loop.
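As a worked version of that multiplication argument, with assumed numbers rather than measured ones: if the human-written baseline is right 90% of the time and the model reproduces that baseline at 85% fidelity, the compounded accuracy is 0.90 × 0.85 ≈ 0.77, i.e. roughly one wrong answer in four.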
I was about to say don’t insult web 3.0, but you’re actually right. AI at least has useful applications to begin with
PieFed DK
To clarify a common misconception here: the blockchain itself isn't compromised; it's simply that people can store any data or information they like on a blockchain, which is a permanent, immutable, distributed ledger.
I didn't quite understand what happens after the malware is up on the blockchain. Do I get infected if something sends me currency? Or would it take some action from me, like willingly entering a contract?
No, it's nowhere near that scary. The advantage this offers is that a piece of malware already running on a victim's machine can dynamically pull parts of its code down from a place that is difficult to block, and where the code cannot be changed or removed by anyone else.
Right now, malware devs typically do this by spinning up a hosted web server using a stolen credit card or cryptocurrency and hosting the code there until the web host takes it offline. This development ensures that those bits of code stay up and accessible forever.
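To make the "update at any time" part concrete, here is a minimal sketch, again in TypeScript with ethers v6, of the write side; the setPayload() function, addresses, and key are hypothetical placeholders. The apparent conflict with "immutability" resolves as: the contract code is immutable, but its storage can still be rewritten through whatever functions that code exposes.

```typescript
// Sketch of the attacker-side update path (hypothetical contract and function).
import { JsonRpcProvider, Wallet, Contract } from "ethers";

const provider = new JsonRpcProvider("https://cloudflare-eth.com"); // any public RPC
const wallet = new Wallet("0x" + "11".repeat(32), provider);        // dummy key for illustration

// Hypothetical setter the attacker's contract exposes (placeholder address).
const abi = ["function setPayload(bytes newPayload)"];
const contract = new Contract("0x0000000000000000000000000000000000000000", abi, wallet);

// One state-changing transaction swaps the hosted payload. Per the article,
// such writes typically cost under $2 in gas, and there is no web host that
// could take the data down afterward.
const tx = await contract.setPayload("0xdeadbeef");
await tx.wait(); // wait for the transaction to be mined
```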
In addition to Godort's answer: in theory it could be used against systems that process transactions and ingest the information stored on a blockchain without sanitizing inputs. But sanitizing inputs is one of the most basic tasks, and at that point it becomes the bank's or brokerage's problem, not yours.
This capability has never been demonstrated, as it would require a lot of convoluted prerequisites to work in this manner.
Most of these tokens which store data are NFTs to begin with.
lol right, and sanitizing inputs is still a top-10 OWASP item, plus the smart contracts are compiled instead of interpreted, like they should be.
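A generic sketch of the sanitization point both comments are making: anything read back off a chain is attacker-controlled input, so escape it before it touches HTML. The escapeHtml helper below is illustrative, not anything from the article:

```typescript
// Treat on-chain data like any other untrusted input: escape before rendering.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// e.g. a token name fetched from a contract could carry a script tag
const onChainName = '<script>alert("pwned")</script>';
console.log(escapeHtml(onChainName)); // now safe to interpolate into HTML
```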
Very cool. Thanks for sharing the link.
Malware has been a problem on Ethereum for years. Folding Ideas talked about it in his pivotal NFT video.