
Topic: [ANSWERED] Why is Bitcoin proof of work parallelizable? - page 3.

legendary
Activity: 1246
Merit: 1016
Strength in numbers

The point is: the current proof of work scheme makes it possible to parallelize and to form pools. A pool could thus become a very strong adversary, which is not what we want - right? A non-parallelizable proof of work scheme has the consequence that nobody can become stronger than, say, a 4.5 GHz overclocked single-core Pentium. This is what we want.


If I can get X power by doing Y, how am I not going to be able to get 2X power by doing Y twice? How could you possibly tell the difference between two people doing Y and one person doing Y twice? Not to mention (again) that the pool is actually just a bunch of people separately doing Y and sharing results.
full member
Activity: 195
Merit: 100
IMHO, the short answer is that Bitcoin is a distributed network. If you want lots of people around the world to work on the same problem, you need to parallelize it one way or another.

You are aware that they are not working on the same problem? Every miner is working on a different problem (with a different bounty transaction, a different timestamp, etc.). This cannot be the reason.

This question has already been asked in many ways. If you can limit mining to a single network node (CPU), dedicated miners will just set up multiple nodes for themselves.

Exactly. This improves the probabilistic convergence speed of the algorithm. That is just my claim for now; hopefully I can get around to producing a more formal proof of this in the next few days.

Moreover, limiting generation per person throws away all incentives for developing the network. In its current form, the network is stronger against attacks, because people are rewarded for spending computational power on it. In other words, who is going to do any work, if you just give everyone a big chunk of money?

The point is: the current proof of work scheme makes it possible to parallelize and to form pools. A pool could thus become a very strong adversary, which is not what we want - right? A non-parallelizable proof of work scheme has the consequence that nobody can become stronger than, say, a 4.5 GHz overclocked single-core Pentium. This is what we want.
full member
Activity: 195
Merit: 100
A parallelizable problem also makes it easy to tune the difficulty.

Why would parallelizability and tunability be related to each other? In the Rivest-Shamir paper I cited above there is a nice non-parallelizable puzzle which explicitly contains the "number of computational seconds" as a tuning parameter.

If you were able to claim a block reward by finding large primes, the block interval would be a lot less predictable.

Finding large primes would be a very bad non-parallelizable proof of work, because most prime-finding algorithms parallelize very nicely - especially the probabilistic ones. Thus: non-parallelizable proofs of work are usually not based on finding large primes.
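
A minimal sketch (my own illustration, not from the post; the function names are placeholders) of why prime hunting parallelizes so well: each candidate is tested independently with a probabilistic Miller-Rabin test, so doubling the number of worker processes roughly doubles the search rate.

Code:
import random
from concurrent.futures import ProcessPoolExecutor

def probably_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin: probabilistic, and completely independent per candidate."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

if __name__ == "__main__":
    candidates = range(10 ** 12 + 1, 10 ** 12 + 20_000, 2)
    with ProcessPoolExecutor() as pool:   # embarrassingly parallel: one candidate per task
        hits = [n for n, ok in zip(candidates, pool.map(probably_prime, candidates)) if ok]
    print(len(hits), "probable primes found")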

Also, the chosen method (cryptographic hashing with a result below a target value) is a problem that is difficult to solve, but easy to verify. The hash serves a dual purpose: proof-of-work and integrity verification. Combining the two makes pre-computing the work ahead of time nearly impossible (exception: block-chain fork).

Which is true for all proof-of-work concepts by definition, and also for the non-parallelizable ones we find in the literature.
full member
Activity: 195
Merit: 100
How is this whine about pools related to parallelization? 1 CPU or 1 GPU, nothing would change; we would still have pools with CPUs, because otherwise finding a block solo would take us years.

The incentive aspect can be solved by adapting the difficulty, as written in my original post.
full member
Activity: 195
Merit: 100
If you used a non-parallelizable test, the same person would win every time: the person with the fastest single serial processor.  That would be an even dumber distribution than the existing one.

No. This is not specific to non-parallelizable schemes.

The current Bitcoin proof of work leads to a stochastic situation, since the solution algorithm is stochastic in nature and every miner (rather: every pool) works on a different task.

Both aspects must, of course, be built into the non-parallelizable proof of work. This can be done, for example, by extending the algorithm in the Rivest-Shamir-Wagner paper (see above) with nonce aspects. I have not yet worked out the details or implemented this, but I assume it is straightforward.
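
A minimal sketch of such a construction, assuming the Rivest-Shamir-Wagner repeated-squaring puzzle; seeding it from a SHA-256 hash of the block header is my own placeholder for the "nonce aspect", not a worked-out scheme, and the parameters and function names are illustrative only.

Code:
import hashlib

def solve_timelock(block_header: bytes, n: int, t: int) -> int:
    """Compute seed^(2^t) mod n by t sequential squarings.

    Every squaring needs the previous result, so extra cores do not help;
    only knowledge of the factorization of n gives a shortcut.
    """
    x = int.from_bytes(hashlib.sha256(block_header).digest(), "big") % n
    for _ in range(t):                 # inherently serial loop
        x = (x * x) % n
    return x

def fast_verify(block_header: bytes, p: int, q: int, t: int) -> int:
    """The trapdoor: knowing p and q, reduce the exponent modulo phi(n) first."""
    n, phi = p * q, (p - 1) * (q - 1)
    x = int.from_bytes(hashlib.sha256(block_header).digest(), "big") % n
    return pow(x, pow(2, t, phi), n)   # assumes gcd(x, n) == 1

# toy parameters; a real puzzle needs a large RSA-size modulus and a huge t
p, q, t = 1000003, 1000033, 100_000
header = b"previous hash || merkle root || timestamp"
assert solve_timelock(header, p * q, t) == fast_verify(header, p, q, t)
print("puzzle solved and verified")

The t squarings have to be done one after another, while anyone holding the factorization can check the result almost instantly.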
 
full member
Activity: 195
Merit: 100
Provide any example of
Quote
"A non-parallelizable proof of work would still serve the same goals as a parallelizable proof of work but would solve these problems."

Very few things can't be parallelized. A mining pool is itself an example of that kind of parallelization. Provide an example of a problem that an individual can solve but a group of individuals can't solve more quickly.

The problem to solve must, of course, be inherently serial. Searching for (guessing) a nonce that satisfies a single hash condition is not. Thus we should think more along the lines of hash chains.
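
A minimal sketch (my own illustration, not from the post) of why a hash chain is inherently serial: each step hashes the previous output, so step i cannot start before step i-1 has finished, and adding cores does not shorten the chain.

Code:
import hashlib

def hash_chain(seed: bytes, steps: int) -> bytes:
    h = seed
    for _ in range(steps):        # strictly sequential: each step needs the previous digest
        h = hashlib.sha256(h).digest()
    return h

print(hash_chain(b"block header || nonce", 1_000_000).hex())

The obvious catch is that naive verification is just as serial as the work itself, which is why schemes like the RSW puzzle add a trapdoor for fast checking.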

There is some literature on this although it is not so widely known.

Colin Boyd presented a talk at the CANS 2007 conference on "Toward Non-Parallelizable Client Puzzles".

The classical reference is Rivest, Shamir, Wagner: "Time-lock Puzzles and Timed-release Crypto".

Green, Juen, Fatemieh, Shankesi et al. discuss the GPU issue in "Reconstructing Hash Reversal Based Proof of Work Schemes".

All in all, there are some 20 papers on this and the math is straightforward.
sr. member
Activity: 520
Merit: 253
555
IMHO, the short answer is that Bitcoin is a distributed network. If you want lots of people around the world to work on the same problem, you need to parallelize it one way or another.

This question has already been asked in many ways. If you can limit mining to a single network node (CPU), dedicated miners will just set up multiple nodes for themselves. If you want to limit mining power per person, you've just thrown away the whole idea of an anonymous distributed network. Also, do you want to ban people from working together towards a common goal?

Moreover, limiting generation per person throws away all incentives for developing the network. In its current form, the network is stronger against attacks, because people are rewarded for spending computational power on it. In other words, who is going to do any work, if you just give everyone a big chunk of money?
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
A parallelizable problem also makes it easy to tune the difficulty. If you were able to claim a block reward by finding large primes, the block interval would be a lot less predictable.

Also, the chosen method (cryptographic hashing with a result below a target value) is a problem that is difficult to solve, but easy to verify. The hash serves a dual purpose: proof-of-work and integrity verification. Combining the two makes pre-computing the work ahead of time nearly impossible (exception: block-chain fork).
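
To make the tuning point concrete, here is a minimal sketch (my own illustration, not from the post) of the kind of retargeting a hash-below-target puzzle allows; Bitcoin adjusts every 2016 blocks, aiming at 600 seconds per block, and clamps the change to a factor of four.

Code:
def retarget(old_target: int, actual_seconds: int,
             expected_seconds: int = 2016 * 600) -> int:
    """Rescale the target by how far the last window was from the desired pace."""
    ratio = actual_seconds / expected_seconds
    ratio = max(0.25, min(4.0, ratio))      # clamp the per-window adjustment
    return int(old_target * ratio)

# blocks came in 20% too fast -> target shrinks, i.e. difficulty rises
max_target = 0xFFFF << 208                  # Bitcoin's maximum (easiest) target
print(hex(retarget(max_target, int(2016 * 600 * 0.8))))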
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
Quote
* Larger pools have a significant share of the total hash capacity. This increases their chances of a 50% attack.
* The speed with which the block chain "converges" (or, in the case of a fork, "heals") is higher when many competitors with small computing power are in competition than when there are fewer competitors, or competitors with higher (aggregated) computing power.
* Bitcoin was meant to be peer-to-peer; pooling is in contradiction to this. The pools are the banks of tomorrow.
How is this whine about pools related to parallelization? 1 CPU or 1 GPU, nothing would change; we would still have pools with CPUs, because otherwise finding a block solo would take us years.

And the "50% attack from pool" thing can be easily solved, there can still be pool but in a way that they CANNOT control the work
full member
Activity: 372
Merit: 114
The aim is to distribute coins via lottery -- every nonce you test is a lottery ticket you bought.

If you used a non-parallelizable test, the same person would win every time: the person with the fastest single serial processor.  That would be an even dumber distribution than the existing one.
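
A minimal sketch of that lottery view (my own illustration, with header and byte-order details simplified and the function names invented): every nonce is an independent guess, so workers scanning disjoint ranges buy tickets at a rate proportional to how many of them there are.

Code:
import hashlib

def meets_target(header76: bytes, nonce: int, target: int) -> bool:
    """One lottery ticket: double SHA-256 of the 80-byte block header vs. the target."""
    block = header76 + nonce.to_bytes(4, "little")
    digest = hashlib.sha256(hashlib.sha256(block).digest()).digest()
    return int.from_bytes(digest, "big") < target

def scan_range(header76: bytes, start: int, count: int, target: int):
    """A worker scans its own range; no guess depends on any other guess."""
    for nonce in range(start, start + count):
        if meets_target(header76, nonce, target):
            return nonce
    return None

# toy target (about one winning ticket per 65,536 guesses); may print None
# if this particular range happens to hold no winner
print(scan_range(b"\x00" * 76, 0, 500_000, 2 ** 240))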
donator
Activity: 1218
Merit: 1079
Gerald Davis
Provide any example of
Quote
"A non-parallelizable proof of work would still serve the same goals as a parallelizable proof of work but would solve these problems."

Very few things can't be parallelized. A mining pool is itself an example of that kind of parallelization. Provide an example of a problem that an individual can solve but a group of individuals can't solve more quickly.

Remember, multiple GPUs (or multiple people in a pool) aren't working together; they are working independently and simply sharing the reward. You could design a work unit so that it requires so many resources that one computer can only work on one solution at a time; you simply shift the bottleneck from computational power to memory or disk access. Still, even if you did so, you wouldn't eliminate pools.

If one computer can work on a problem, then two computers can work independently and share the reward. That is all pools are doing; they aren't working together. Shares are worthless and don't help the pool; they are merely a metric to avoid cheating. The only hash that is worth anything is the solution, and it is worth 50 BTC (plus fees). So every share produced is worthless except the one which is the solution, and that one is worth everything. A pool is simply an agreement that everyone works and then you split the 50 BTC. Shares, RPC, and long polling are simply mechanisms to enforce that sharing.
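
A minimal sketch of that share mechanics (my own illustration; the two targets and the function name are made up for the example): a share is the same independent guess checked against an easier pool target, so it measures work done but pays nothing by itself, and only the guess that also meets the network target earns the reward the pool then splits.

Code:
import hashlib

NETWORK_TARGET = 2 ** 224    # made-up values, purely for illustration
POOL_TARGET    = 2 ** 240    # easier target, so shares arrive often enough to measure work

def classify_guess(header76: bytes, nonce: int) -> str:
    """Each guess is still independent; the pool merely records it against two targets."""
    block = header76 + nonce.to_bytes(4, "little")
    value = int.from_bytes(hashlib.sha256(hashlib.sha256(block).digest()).digest(), "big")
    if value < NETWORK_TARGET:
        return "block"    # the one guess that is worth the full reward plus fees
    if value < POOL_TARGET:
        return "share"    # worthless by itself, counted only to split the reward fairly
    return "miss"

# a member's contribution is measured by how many shares it submitted, nothing more
labels = [classify_guess(b"\x00" * 76, n) for n in range(300_000)]
print(labels.count("share"), "shares,", labels.count("block"), "blocks")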

How exactly would you outlaw sharing?
full member
Activity: 195
Merit: 100

I am trying to understand why Bitcoin has a parallelizable proof of work.

The task of the proof of work in Bitcoin is to make it increasingly difficult for an attacker to "change" the "past", and to ensure some kind of convergence via the "longest chain" rule in the block structure. Also, the block bounty provides an incentive to the miner, which keeps the system running.

The currently employed proof of work schemes are parallelizable.

As a result of this, pooled mining is faster than GPU mining (250 cores), which is faster than Core i7 mining (8 cores), which is faster than Celeron mining (1 core). In my opinion, this is a disadvantage, for a number of reasons:

* Larger pools have a significant share of the total hash capacity. This increases their chances of a 50% attack.
* The speed with which the block chain "converges" (or, in the case of a fork, "heals") is higher when many competitors with small computing power are in competition than when there are fewer competitors, or competitors with higher (aggregated) computing power.
* Bitcoin was meant to be peer-to-peer; pooling is in contradiction to this. The pools are the banks of tomorrow.

A non-parallelizable proof of work would still serve the same goals as a parallelizable proof of work but would solve these problems.

Of course, at the current difficulty, the chances to "win" a block as a single miner would be small. This issue, however, can be solved easily:

* By increasing the block rate and appropriately reducing the bounty, the economics could be kept the same in the long run, while the granularity of the block bounty would be adapted (a small worked example follows this list).
* If we want to keep the block rate and bounty, we could still have miners operate several computers or GPUs working on blocks, but every core would work on its own block variant. This would still increase the miner's return on investment linearly, but it would not show the problems outlined above.
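
A small worked example of the first bullet, with illustrative numbers only: ten times as many blocks, each paying a tenth of the bounty, leaves the expected income per unit of time unchanged; only the payout granularity changes.

Code:
blocks_per_hour, bounty = 6, 50.0              # roughly the current scheme
fast_blocks_per_hour, small_bounty = 60, 5.0   # ten times the blocks, a tenth the bounty

assert blocks_per_hour * bounty == fast_blocks_per_hour * small_bounty   # 300 BTC/h either way
my_share_of_total_power = 0.001                # hypothetical small solo miner
print("expected BTC/hour:", fast_blocks_per_hour * small_bounty * my_share_of_total_power)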

Therefore my question: why do we have a parallelizable proof of work?

Is there a good reason which I do not see, or is it merely a historical accident (possibly waiting to be cured)?





