Topic: Mining pools cooperation: Global block work partitioning! (Read 1084 times)

legendary
Activity: 1442
Merit: 1000
Also:  why is everyone always trying to find some way that centralizing everything would make it "better"?  All of bitcoin's strengths are in it being decentralized and peer-to-peer.
So then mining pools are a bad idea, right?
newbie
Activity: 27
Merit: 0
I feel like an idiot responding to this at all as this has been addressed at least dozens, if not hundreds of times before on these forums but...

There is NO duplication of work.  None at all. Every single worker is already working on its own unique little piece of the puzzle.

Also:  why is everyone always trying to find some way that centralizing everything would make it "better"?  All of bitcoin's strengths are in it being decentralized and peer-to-peer.
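The "unique little piece" point can be sketched concretely. A pool gives each worker a coinbase transaction containing a different extranonce; since the coinbase is a leaf of the merkle tree, each worker ends up hashing a different block header even for the same nonce. A minimal Python sketch (toy structures and made-up data, not real Bitcoin serialization):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Simplified merkle root over a list of txids."""
    level = txids[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last hash when the count is odd
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def header_hash(merkle: bytes, nonce: int) -> bytes:
    # Toy header: a real header also has version, prev-hash, time, and bits.
    return dsha256(merkle + nonce.to_bytes(4, "little"))

# Two workers in the same pool get coinbase txs with different extranonces:
tx = dsha256(b"some transaction")
coinbase_a = dsha256(b"coinbase" + (1).to_bytes(4, "little"))  # extranonce 1
coinbase_b = dsha256(b"coinbase" + (2).to_bytes(4, "little"))  # extranonce 2

root_a = merkle_root([coinbase_a, tx])
root_b = merkle_root([coinbase_b, tx])

# Even at the same nonce, the two workers are hashing different headers,
# so no hash trial is ever repeated between them:
assert root_a != root_b
assert header_hash(root_a, 0) != header_hash(root_b, 0)
```

The same mechanism separates pools from each other: each pool builds its own candidate block (different coinbase payout, different transaction selection), so a global work controller would add nothing.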
legendary
Activity: 1442
Merit: 1000
Or is it just that every worker attempts to create a block with a different set of included transactions, so there are actually no duplicate hash calculations across the network?

This.

No worker anywhere is trying to do the exact same problem as anybody else.

So in reality there are workers that duplicate parts of another worker's work?
sr. member
Activity: 406
Merit: 250
Or is it just that every worker attempts to create a block with a different set of included transactions, so there are actually no duplicate hash calculations across the network?

This.

No worker anywhere is trying to do the exact same problem as anybody else.
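There is also a second reason duplication does not matter even in principle: each SHA-256 trial is an independent lottery ticket. Because of the avalanche effect, headers that differ in a single bit (e.g. a different merkle root) produce completely unrelated hashes, so two workers grinding the same nonce range over different data are not "overlapping" in any meaningful sense. A small illustration (arbitrary example inputs):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Two headers that differ only in their (toy) merkle root:
h1 = dsha256(b"block header with merkle root A")
h2 = dsha256(b"block header with merkle root B")

# Count how many of the 256 output bits differ; for unrelated
# inputs this is typically around half.
diff = bin(int.from_bytes(h1, "big") ^ int.from_bytes(h2, "big")).count("1")
assert h1 != h2
```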
legendary
Activity: 1442
Merit: 1000
Seeing that 80% of all hashing power rests within ~5-6 pools, I am pondering the issue of waste. Every pool basically takes the next block and attempts to partition the work required to solve it among its connected workers, so there is no overlapping redundancy between those workers.

Would the pool operators agree to have a common master getwork() controller that partitions all available work between the pools, which would then spread it across all their workers?

Is it possible? Or is it just that every worker attempts to create a block with a different set of included transactions, so there are actually no duplicate hash calculations across the network?