
Topic: What would it take to make a 51% attack on the whole bitcoin network? (Read 2612 times)

newbie
Activity: 39
Merit: 0
1) OK, it is useful to make ASICs less effective, but it is not enough. An attacker could still use 20,000-40,000 GPUs to mount the 51% attack. We need one more solution for that case.
Modern GPUs are memory-limited too. My HD6990 has >3000 cores and only 4 GB of RAM, which is roughly 1 MB per core. And that is only a starting point for further thought.
Also, as I mentioned earlier: if Bitcoin ever reaches popularity comparable to BitTorrent, a 51% attack will take many more GPUs, especially if the POW is memory-bound.
But there is one more issue. Favoring GPUs or any other highly specialized mining hardware doesn't encourage wider Bitcoin adoption. We should keep the doors open for newcomers, and on average they don't have top-level GPUs.

2) What new POW do you suggest? Your scheme with N = 32M/32 = 1M? IMHO it is not good, because calculating 2^20 SHA-256 hashes is too time-consuming. Every node (not only miners) has to verify the POW each time it receives a new block. Do you have another idea? We could, for example, store the results of intermediate steps 0, 8, 16, 24, 32, 40, 48, 63 of the SHA-256 calculation. That is 8 times more information, but 2^17 SHA-256 calculations is still very time-consuming.
2^20 SHA-256 computations over 32-byte vectors are equivalent to one SHA-256 computation over a 32 MB file. That takes less than a second on my box, so I consider it acceptable.
Moreover, I don't insist on hashing something a million times; that was only an example. Digging a little, I found an interesting paper: "Exponential Memory-Bound Functions for Proof of Work Protocols". The scheme described there is also two-dimensional (memory amount plus the usual successive-POW criterion), and I have no idea yet how to adjust the memory requirement and the "leading zeros count" together with difficulty.
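As a sanity check on that equivalence, here is a tiny Python sketch (my own illustration, not part of the proposal) that times both sides on whatever machine runs it; a pure-Python loop adds interpreter overhead that a C implementation would not have:

Code:
import hashlib, os, time

data = os.urandom(32)

# 2^20 independent SHA-256 hashes of 32-byte inputs
t0 = time.time()
for _ in range(1 << 20):
    hashlib.sha256(data).digest()
t1 = time.time()

# one SHA-256 hash of a 32 MB buffer
big = os.urandom(32 * 1024 * 1024)
t2 = time.time()
hashlib.sha256(big).digest()
t3 = time.time()

print("2^20 small hashes: %.2f s" % (t1 - t0))
print("one 32 MB hash:    %.2f s" % (t3 - t2))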

Yes, if we build a "net of trust", then an attacker with more than 50% of the computational power can still create an "alternative reality" in Bitcoin, I mean an alternative block chain that begins with the same genesis block but then diverges from the "real" block chain. In fact, in that case more than two block chains can exist, and a node connecting to the network would have no way to tell which of them to connect to and believe. IMHO, this problem can be solved only by out-of-Bitcoin methods. All sites that work with Bitcoin may have to publish which reality/block chain they operate in; I believe they can detect which block chain is the real one. And ordinary users would check from time to time that they live in the same reality. It is not ideal, but I don't see any better solution. At least this approach would eliminate the other problems of a 50%+ attack:

Reverse transactions that the attacker sends while he's in control
Prevent some or all transactions from gaining any confirmations
Prevent some or all other generators from getting any generations
That would be something like Ripple, not like Bitcoin.
newbie
Activity: 13
Merit: 0
I disagree. The idea isn't just to add memory requirements to the POW but to make the POW seriously memory-constrained. Say one POW attempt needs 1 million sequential computations and 32 MB of RAM in total, and that 32 MB can't be shared between attempts because each works on different data. Then the only way to implement such a POW in an ASIC is to add 32 MB of RAM to the chip, wasting die area, dramatically increasing its cost and reducing its efficiency. And you can't make 32 MB of on-ASIC RAM cheaper than 32 MB of RAM on a DDR module.

1) OK, it is useful to make ASICs less effective, but it is not enough. An attacker could still use 20,000-40,000 GPUs to mount the 51% attack. We need one more solution for that case.

2) What new POW do you suggest? Your scheme with N = 32M/32 = 1M? IMHO it is not good, because calculating 2^20 SHA-256 hashes is too time-consuming. Every node (not only miners) has to verify the POW each time it receives a new block. Do you have another idea? We could, for example, store the results of intermediate steps 0, 8, 16, 24, 32, 40, 48, 63 of the SHA-256 calculation. That is 8 times more information, but 2^17 SHA-256 calculations is still very time-consuming.

Bad idea. No one can prevent me from creating millions of nodes that all trust each other. Newly connected nodes would end up trusting my malicious sub-network with high probability simply because of its size.
If you propose to dedicate one bootstrap node and make it trusted by default (hard-coding its certificate into the client, for example), you have just reinvented PKI in its traditional form, and that trusted-by-default node becomes a central authority performing central-bank-like functions. That's not what any of us wants to happen to Bitcoin.

Yes, if we build a "net of trust", then an attacker with more than 50% of the computational power can still create an "alternative reality" in Bitcoin, I mean an alternative block chain that begins with the same genesis block but then diverges from the "real" block chain. In fact, in that case more than two block chains can exist, and a node connecting to the network would have no way to tell which of them to connect to and believe. IMHO, this problem can be solved only by out-of-Bitcoin methods. All sites that work with Bitcoin may have to publish which reality/block chain they operate in; I believe they can detect which block chain is the real one. And ordinary users would check from time to time that they live in the same reality. It is not ideal, but I don't see any better solution. At least this approach would eliminate the other problems of a 50%+ attack:

Reverse transactions that the attacker sends while he's in control
Prevent some or all transactions from gaining any confirmations
Prevent some or all other generators from getting any generations
newbie
Activity: 39
Merit: 0
I worked for 5 years at a company that designs ASIC chips, and I understand the process well enough. My estimate is that it is possible to create an ASIC that draws the same power as a Radeon 5870 (for example) while computing 20-100 times more hashes. That is not surprising, because the chip is special-purpose while a GPU is general-purpose. I suppose such an ASIC could be created in 6-12 months. The hash power of the network is now approximately 12 Thash/s, which is equivalent to roughly 20,000-40,000 GPUs. That means 2,000 such chips would be enough to attain more than 50% of the computing power. I think the cost of creating that many chips is approximately 5 million dollars; indeed, the major part of that money would be spent on designing the chip, not on manufacturing it. Therefore, to create 10,000 such chips one would need approximately 10 million dollars, and 10,000 chips would be 80% of the network's computing power.

So I think an ASIC is the most likely way to accomplish a 51% attack.
Agreed.

What can we invent to prevent the network's destruction? I have thought about it but have not found any solution. IMHO, changing the POW method helps only the first time: we can update our miners quickly, but an ASIC chip cannot be changed, so the attacker would need to build another one (+3-6 months). Adding memory requirements to the POW method is a good idea, but it doesn't change much. Let's say it becomes not 10 but 20 million dollars per 80% of the network's computing power.
I disagree. The idea isn't just to add memory requirements to the POW but to make the POW seriously memory-constrained. Say one POW attempt needs 1 million sequential computations and 32 MB of RAM in total, and that 32 MB can't be shared between attempts because each works on different data. Then the only way to implement such a POW in an ASIC is to add 32 MB of RAM to the chip, wasting die area, dramatically increasing its cost and reducing its efficiency. And you can't make 32 MB of on-ASIC RAM cheaper than 32 MB of RAM on a DDR module.
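Purely as an illustration (not a concrete proposal), a POW attempt with roughly this shape can be sketched in Python using scrypt as a stand-in memory-hard function; the parameters below are assumptions chosen so each attempt needs about 32 MiB of scratch RAM, not a tuning of the exact figures above:

Code:
import hashlib
import struct

# scrypt with n = 2^15, r = 8 needs about 128 * r * n = 32 MiB of
# scratch RAM per call, and that scratch data depends on the
# (header, nonce) pair, so it cannot be shared between attempts.
N, R, P = 32768, 8, 1
TARGET = 1 << 248            # deliberately easy so the demo finishes fast

def attempt(header: bytes, nonce: int) -> int:
    salt = struct.pack("<I", nonce)
    digest = hashlib.scrypt(header, salt=salt, n=N, r=R, p=P,
                            maxmem=64 * 1024 * 1024, dklen=32)
    return int.from_bytes(digest, "big")

header = b"\x00" * 80        # placeholder 80-byte block header
nonce = 0
while attempt(header, nonce) >= TARGET:
    nonce += 1
print("found nonce", nonce)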

I think a better way to find a solution is to think about changing how nodes cooperate. For example, it is possible to create a system of trust between nodes. If a node does something suspicious (distributes a new block that omits most pending transactions, or a new block chain that removes the last 10 blocks), its "rank" decreases. If a node distributes good information, its "rank" increases. Information from nodes with too low a "rank" is ignored. It's only a rough idea, I know; I just want to show what other directions are possible.
Bad idea. No one can prevent me from creating millions of nodes that all trust each other. Newly connected nodes would end up trusting my malicious sub-network with high probability simply because of its size.
If you propose to dedicate one bootstrap node and make it trusted by default (hard-coding its certificate into the client, for example), you have just reinvented PKI in its traditional form, and that trusted-by-default node becomes a central authority performing central-bank-like functions. That's not what any of us wants to happen to Bitcoin.

Anyway, what we can do right now (besides looking for a solution) is to acknowledge the problem, and change the status of this vulnerability in the wiki from "Probably not a problem" to "Might be a problem".
You're damn right. It's "Might be a problem".
newbie
Activity: 13
Merit: 0
I worked for 5 years at a company that designs ASIC chips, and I understand the process well enough. My estimate is that it is possible to create an ASIC that draws the same power as a Radeon 5870 (for example) while computing 20-100 times more hashes. That is not surprising, because the chip is special-purpose while a GPU is general-purpose. I suppose such an ASIC could be created in 6-12 months. The hash power of the network is now approximately 12 Thash/s, which is equivalent to roughly 20,000-40,000 GPUs. That means 2,000 such chips would be enough to attain more than 50% of the computing power. I think the cost of creating that many chips is approximately 5 million dollars; indeed, the major part of that money would be spent on designing the chip, not on manufacturing it. Therefore, to create 10,000 such chips one would need approximately 10 million dollars, and 10,000 chips would be 80% of the network's computing power.

So I think an ASIC is the most likely way to accomplish a 51% attack.

What can we invent to prevent the network's destruction? I have thought about it but have not found any solution. IMHO, changing the POW method helps only the first time: we can update our miners quickly, but an ASIC chip cannot be changed, so the attacker would need to build another one (+3-6 months). Adding memory requirements to the POW method is a good idea, but it doesn't change much. Let's say it becomes not 10 but 20 million dollars per 80% of the network's computing power.

I think a better way to find a solution is to think about changing how nodes cooperate. For example, it is possible to create a system of trust between nodes. If a node does something suspicious (distributes a new block that omits most pending transactions, or a new block chain that removes the last 10 blocks), its "rank" decreases. If a node distributes good information, its "rank" increases. Information from nodes with too low a "rank" is ignored. It's only a rough idea, I know; I just want to show what other directions are possible.

Anyway, what we can do right now (besides looking for a solution) is to acknowledge the problem, and change the status of this vulnerability in the wiki from "Probably not a problem" to "Might be a problem".
What do you think?


newbie
Activity: 39
Merit: 0
Personally though, I think the biggest threat is actually botnets, so the dominance of GPUs is actually preferable to me.
For now that's true. But if you look at the size of the BitTorrent network, you'll see more than 100 million peers. I think if Bitcoin ever reached such popularity, botnets wouldn't be a problem.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
Quote
And in general, GPUs aren't as memory limited as you are thinking.  The typical highish end video card that we use has more memory on it than a generic server from a couple of years ago.  Any change that makes hashing memory intensive enough to stop GPU mining will kill the network by forcing nodes to drop out.

I don't agree. I don't see how GPUs can be "not as memory limited as you are thinking." We know the total RAM on the card, and we can count on a little bit extra for local memory and registers, but not on the order of hundreds of MB. I program CUDA on NVIDIA GPUs, and you're pretty much limited to the RAM amount stated on the box (and OpenCL isn't much different). You can use CPU RAM if you want, but there are massive latencies associated with transferring data over the PCIe bus. Perhaps those latencies aren't all that important in this application, but the point is to make the computation memory-limited, not computation-limited.

Right now the computation requires virtually no RAM beyond a few round constants and the original 80-byte header. A GPU with 1600 shaders might have 2 GB of RAM -- that's 1-2 MB per core. But a CPU typically has 250-1000 MB per core. If each thread required 100 MB of RAM, then even semi-old computers could apply all their cores to the computation without impacting system usability dramatically (if at all), but it would limit the 1600 shaders on a GPU to only 20 simultaneous threads (about 1-2% of GPU capacity).
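The arithmetic behind those numbers, spelled out (the 100 MB per-thread figure is the hypothetical requirement from this paragraph, not a property of any real scheme):

Code:
gpu_ram_mb    = 2048      # e.g. a 1600-shader card with 2 GB
shaders       = 1600
per_thread_mb = 100       # hypothetical per-thread requirement

threads = gpu_ram_mb // per_thread_mb
print(threads, "simultaneous threads")                             # -> 20
print("%.1f%% of shader capacity" % (100.0 * threads / shaders))   # -> 1.2%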

As for GZIP... the point was to apply a process that needs the entirety of the data in order to run and can't be broken into smaller pieces.  I'm sure there's a billion existing algorithms that can do this, I just guessed that gzip was one of them (incorrectly).

Personally though, I think the biggest threat is actually botnets, so the dominance of GPUs is actually preferable to me.  You don't have to agree, though.
newbie
Activity: 18
Merit: 0
Nope, sorry.  Gzip is a streaming compressor.

And in general, GPUs aren't as memory limited as you are thinking.  The typical highish end video card that we use has more memory on it than a generic server from a couple of years ago.  Any change that makes hashing memory intensive enough to stop GPU mining will kill the network by forcing nodes to drop out.

+1 and +1
kjj
legendary
Activity: 1302
Merit: 1026
Nope, sorry.  Gzip is a streaming compressor.

And in general, GPUs aren't as memory limited as you are thinking.  The typical highish end video card that we use has more memory on it than a generic server from a couple of years ago.  Any change that makes hashing memory intensive enough to stop GPU mining will kill the network by forcing nodes to drop out.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
That's a problem. Is there any existing, well-known hashing algorithm that favors generic CPUs and has properties preventing it from being implemented in a highly effective GPU/FPGA/ASIC at acceptable cost?

You don't need a raw hashing algorithm to impose these kinds of requirements on the system. You just need an unparallelizable process that ultimately leads to a random string of bits--i.e. only the last step needs to be hashing. This is a silly, off-the-top-of-my-head idea, but it should illustrate the point:

The current process for POW looks like this:

Code:
a = Header
POW = Hash256(a)

Instead you could make it:

Code:

a = Header
b = [Hash(a) Hash^2(a) Hash^3(a) ... Hash^N(a)]
c = Gzip(b)
POW = Hash256(c)

The idea here is that each thread must compute N sequential hashes and save all the intermediate results so it can make one huge string and gzip it (I believe gzip needs the entire string in memory to operate... if not, then pick a different operation that does require it).  Then the result is hashed to 32 bytes to get the final proof-of-work.  

The reason something like this works is that N can be chosen to enforce any kind of per-thread memory requirement.   If N is 10,000,000, then each thread has to hold 320 MB of data at once to do the GZIP.  That means that a 6990 would only be able to run 10 threads at once:  possibly faster than your 4-core CPU, maybe not.  Most GPUs with 1 GB of mem would only run 2 or 3 threads.  At best, your GPU would be like an extra CPU, and a modern quad-core computer with 4 GB of RAM could probably run fine with this in the background (or the user could scale it down to 1-2 cores when they need the resources).
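Purely as a sketch of that off-the-top-of-the-head idea (Python, with zlib standing in for gzip and a deliberately small N so it runs in a moment):

Code:
import hashlib
import zlib

def hash256(data: bytes) -> bytes:
    # Bitcoin-style double SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def pow_candidate(header: bytes, n_chain: int) -> bytes:
    # b = [Hash(a), Hash^2(a), ..., Hash^N(a)], all kept in memory
    chunks = []
    h = header
    for _ in range(n_chain):
        h = hashlib.sha256(h).digest()
        chunks.append(h)
    b = b"".join(chunks)       # n_chain * 32 bytes
    # zlib stands in for "Gzip(b)".  As noted elsewhere in the thread,
    # a streaming compressor doesn't actually force the whole chain to
    # sit in memory, so a real scheme would need a genuinely
    # non-streaming final step here.
    c = zlib.compress(b)
    return hash256(c)

# Small demo chain (3.2 MB); the N = 10,000,000 above would make each
# candidate hold about 320 MB.
header = b"\x00" * 80
print(pow_candidate(header, 100000).hex())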
sr. member
Activity: 252
Merit: 250
We've seen what happens to the BTC market when non-security breaches happen (ones not related to the functional security of BTC itself); what do you think is going to happen when someone pulls off a massive double-spend that actually does reflect on the security of BTC?

I don't know what would happen. The market as it is is crazy enough that I don't even dare to guess :). What I do know is that (1) there is no 100% security, (2) bad things that can happen will happen sooner or later, and (3) problems get fixed.

Obviously, we should try to make the system better *before* attacks can destroy it. But even if it does get destroyed (I don't believe this will happen; I think Bitcoin being declared illegal is much more probable), I think it's already proven that people want such a system. And what people want, people get. It might not be called Bitcoin, but in the end, it doesn't even matter.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
Botnet owners will not do this, because they can use their power for much more profitable "services".

This is a bad argument. It assumes that people are rational and/or have some kind of nominal, predictable intentions. People don't always do things for good reasons, or maybe they have reasons others haven't thought about.

  • Perhaps a government/company pays a botnet owner to do it
  • Perhaps the government takes over a botnet and decides to kill two birds with one stone.
  • Perhaps a botnet owner thinks that BTC has no future, so he finds someone who is willing to let him short-sell $1 million in BTC and then he crashes the network for an immediate payout.

They don't need to persist this attack forever to do damage. It only takes one time to permanently scar BTC (maybe even deliver the final KO), and it wouldn't make me sufficiently happy to say "see, I was right!" I want to see it not happen, and not be possible to happen. We've seen what happens to the BTC market when non-security breaches happen (ones not related to the functional security of BTC itself); what do you think is going to happen when someone pulls off a massive double-spend that actually does reflect on the security of BTC?
newbie
Activity: 39
Merit: 0
My suggestion was to have two chains verifying all transactions.
The one we have today + another with a different hashing algorithm that favors generic CPUs.

That's a problem. Is there any existing, well-known hashing algorithm that favors generic CPUs and has properties preventing it from being implemented in a highly effective GPU/FPGA/ASIC at acceptable cost?
member
Activity: 89
Merit: 10

My suggestion was to have two chains verifying all transactions.
The one we have today + another with a different hashing algorithm that favors generic CPUs.

This way you'd need both a ton of GPUs AND a huge botnet to attack the system.

As Bitcoin grows large globally, there would eventually be no government or botnet in the world that could catch up with it.

sr. member
Activity: 252
Merit: 250
This type of attack is considered "probably not a problem": https://en.bitcoin.it/wiki/Weaknesses#Attacker_has_a_lot_of_computing_power . For someone to do it, they would need not only to have such processing power but also to maintain the attack indefinitely. Running costs of a few million a month will only be justified if the attacker gains more than that AND has no other means of achieving the same result.

Botnet owners will not do this, because they can use their power for much more profitable "services". Governments can filter Bitcoin or make it illegal with less effort. So that leaves us with a very rich private person who really, really hates Bitcoin. Doesn't seem to be a problem.
mrb
legendary
Activity: 1512
Merit: 1028
Not sure if someone could get their hands on that many GPUs.

Why? Tianhe-1A is made of 7168 GPUs. 21k high-end GPUs may take months to be provisioned by a manufacturer, but it is certainly within reach with adequate budget.
hero member
Activity: 980
Merit: 506
Not sure if someone could get their hands on that many GPUs.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer

Let's see...

Take over a couple of the mining pools.

or

start a new mining pool with a zero fee and get people to switch over (hey, even lie about how many GHashes you have to get more people on board), then use this to attack


This brings up a really good point.  The miners who are computing for the pools should be aware of what the "current" block really is (they are regular nodes, too, aren't they?).  If so, they could be programmed to pay attention to the current longest-chain, and reject doing work for the pool if it sends headers to be hashed that use a different previous-block hash.  This assumes that the only type of attack someone would want to execute is a blockchain rewrite/double-spend attack. 

This would seem to shut down this kind of attack very quickly. And as you pointed out... some pools already have a "dangerous" amount of computing power. Though now that I think about it, my miners are using Phoenix, which doesn't actually know about the blockchain. Perhaps this is functionality worth trying to fit into the GPU-miner code (such as Phoenix).

(Yes, yes, I know, there's no good reason for a pool operator to attack the network.  But people don't always do things for good reasons.  I'd rather just avoid the possibility entirely)
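A minimal sketch of the kind of check a pool miner could add, assuming it also runs (or can ask) an ordinary node for the hash of the current chain tip; the helper names here are made up for illustration:

Code:
import binascii

def prev_hash_from_header(header80: bytes) -> str:
    # Bitcoin block header layout: version (4 bytes) | previous block
    # hash (32) | merkle root (32) | time (4) | bits (4) | nonce (4).
    # The previous-block hash is stored little-endian, so reverse it
    # to get the usual hex form.
    return binascii.hexlify(header80[4:36][::-1]).decode()

def should_work_on(header80: bytes, local_best_hash_hex: str) -> bool:
    # Refuse pool work that doesn't build on the block this miner's own
    # node currently considers the chain tip, i.e. a possible rewrite.
    return prev_hash_from_header(header80) == local_best_hash_hex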
newbie
Activity: 15
Merit: 0

Let's see...

Take over a couple of the mining pools.

or

start a new mining pool with a zero fee and get people to switch over (hey, even lie about how many GHashes you have to get more people on board), then use this to attack

legendary
Activity: 1428
Merit: 1093
Core Armory Developer
First of all, this is exactly the question I was asking in my post here (Anyone else concerned about the global hashrate?).  This scares me because you're right, it's prohibitive, but still completely feasible for an individual/corporation/mafia/government with enough motivation. 

One thing that should offer some comfort is that running 20,000 GPUs is a lot harder in practice than in theory. You're going to have to buy, at minimum, a warehouse, and probably have your own power plant, as you would need about 10 megawatts of constant output to run those GPUs. Not to mention hiring lots of people to build and maintain all those systems. Alternatively, you could buy 100,000 FPGAs and get away with 1 MW of power, but at substantially higher cost. Of course, you don't need to maintain it very long to execute the attack, but it's still a pretty substantial investment of time and money, well in excess of the cost of the raw hardware.
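The back-of-the-envelope power numbers implied by that paragraph (the per-unit wattages are simply the figures those totals work out to):

Code:
gpus, gpu_watts   = 20000, 500       # 20,000 GPUs   * ~500 W = 10 MW
fpgas, fpga_watts = 100000, 10       # 100,000 FPGAs * ~10 W  =  1 MW

print("GPU farm:  %.1f MW" % (gpus * gpu_watts / 1e6))
print("FPGA farm: %.1f MW" % (fpgas * fpga_watts / 1e6))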

My concern was more about botnets.  It would seem that the biggest botnet is capable of generating the same order of magnitude as the current global hashrate, though it's tough to tell if they could achieve 10% or 51%.  Either way, it concerns me, and it's not going to get any better, unless BTC goes up to $100 and then people start buying more hardware.  In times like now, it appears to be profitable to run existing hardware, but too risky/long-term to justify buying new hardware.  If BTC floats this way until the generation rate drops from 50 BTC/block to 25 BTC/block, I think we're "screwed."  I put it in quotes, because the feasibility of an attack does not guarantee it.  But certainly, the fact that it's feasible will have an impact on general acceptance of the currency.

In regards to botnets, I think the current POW is actually better.  If users were solely using their CPUs for such calculations, then even a "small" botnet would have no problem overpowering the base of 20,000 BTC users using 1-2 CPUs.  The computers controlled by botnets typically don't have big GPUs, and thus are at a significant disadvantage relative to the base of BTC nerds with 20+ GPUs in their basement.  Of course this may not hold when talking about the threat of governments that want to attack the network, but I would guess their existing hardware is similarly limited-- at least right now, most "computing clusters" still consist mainly of CPUs and not GPUs.  Getting up to 15 THash/s on CPUs is a lot harder.
newbie
Activity: 39
Merit: 0
A Radeon 6990 has 4 gigabytes of RAM.

If the task is "find a number that bcrypts/scrypts to less than a given hash target," I don't see anything that would stop a GPU programmer from implementing bcrypt/scrypt on the GPU and parallelizing at the try-different-nonces level.

Maybe I'm missing something; I'm probably biased because I worked at SGI from 1988 to 1996 and saw first-hand the evolution of GPUs from very-special-purpose chips with very limited memory to very-general-purpose vector-processing pipelines with very fast access to lots of memory.
The HD6990 has more than 3,000 processing units, so bcrypt/scrypt/whatevercrypt can be parallelized relatively easily.
My idea is to make each processing unit require a lot of RAM for intermediate data, thus adding a "GByte" dimension to the "GHash" race, and then somehow keeping the GByte/GHash profile similar to that of general-purpose PCs.