
Topic: Blocks are [not] full. What's the plan? - page 2. (Read 14343 times)

legendary
Activity: 1792
Merit: 1111
February 07, 2014, 06:12:06 AM
I still believe counting the total bitcoin-days-destroyed is the most practical way to address this issue. In that case, empty blocks are at a disadvantage. I guess Satoshi thought the same? How could this be exploited?
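(A minimal sketch of what scoring a block by total bitcoin-days-destroyed might look like; the data structures and values here are hypothetical, purely for illustration, and a real implementation would look up each spent output in the UTXO set.)

Code:
from dataclasses import dataclass, field

@dataclass
class TxInput:
    value_btc: float   # value of the spent output, in BTC
    age_days: float    # how long that output sat unspent, in days

@dataclass
class Transaction:
    inputs: list = field(default_factory=list)   # empty for a coinbase

def bitcoin_days_destroyed(transactions):
    """Total bitcoin-days destroyed by the inputs spent in a block.

    Each spent output contributes value * age. A block padded with junk,
    or stuffed with freshly self-churned coins, scores near zero, while a
    block carrying genuine aged transactions scores high.
    """
    return sum(inp.value_btc * inp.age_days
               for tx in transactions
               for inp in tx.inputs)

# Example: spending a 2 BTC output that sat unspent for 30 days
# destroys 60 bitcoin-days.
block_txs = [Transaction(inputs=[TxInput(value_btc=2.0, age_days=30.0)])]
print(bitcoin_days_destroyed(block_txs))   # 60.0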
legendary
Activity: 1120
Merit: 1152
February 07, 2014, 05:40:42 AM
Interesting. However, even if you have 100% certainty about who defrauded you, that is not conclusive evidence: it is equally probable that you are the one who stole the bond. Of course, you could make it a 2-of-2 multisig scheme so that the other pool's signature is needed to steal the bond. However, that would defeat the original purpose of the fidelity bond, which should allow ANYONE to steal the bond simply by knowing the secret key.

Note: when I said "I", that was to mean "I, the pool operator" - hopefully they know what they themselves are doing! Of course, it's obviously not really 100% once you take into account that maybe my server has actually been compromised, but it's certainly a high enough percentage that the idea of fidelity-bonded padding doesn't work in practice between largish pools.
legendary
Activity: 1792
Merit: 1111
February 07, 2014, 04:28:34 AM
The whole idea of padding a block with junk is flawed. It cannot be fixed.

It actually raises a really interesting theoretical cryptography question: is it possible to create a long string (the padding bytes) such that you can prove it could not have been derived from a smaller secondary string (i.e. a trapdoor)?

You can probably come up with a scheme where the second string is some kind of secret, such that knowledge of it could be exploited, perhaps to steal the value of some fidelity bond. For instance, you could compute the padding bytes as H(secret || i) with i in (0, n) and secret being some ECC privkey for a valuable txout; giving the privkey to other miners is obviously risky.

The problem is that, if anything, I think this actually encourages centralization: I can safely give a small number of other mining pools that privkey if we have a legal agreement to only use it for the intended purpose. If my funds go missing, I have a pretty good idea who did it and can get the lawyers involved. The smaller the number of pools, the more powerful and enforceable this mechanism is; with two pools I have 100% certainty of who defrauded me. Unfortunately, that's the exact opposite of what the padding idea is trying to accomplish...

Interesting. However, even if you have 100% certainty about who defrauded you, that is not conclusive evidence: it is equally probable that you are the one who stole the bond. Of course, you could make it a 2-of-2 multisig scheme so that the other pool's signature is needed to steal the bond. However, that would defeat the original purpose of the fidelity bond, which should allow ANYONE to steal the bond simply by knowing the secret key.
legendary
Activity: 1120
Merit: 1152
February 07, 2014, 12:00:44 AM
The whole idea of padding a block with junk is flawed. It cannot be fixed.

It actually raises a really interesting theoretical cryptography question: is it possible to create a long string (the padding bytes) such that you can prove it could not have been derived from a smaller secondary string (i.e. a trapdoor)?

You can probably come up with a scheme where the second string is some kind of secret, such that knowledge of it could be exploited, perhaps to steal the value of some fidelity bond. For instance, you could compute the padding bytes as H(secret || i) with i in (0, n) and secret being some ECC privkey for a valuable txout; giving the privkey to other miners is obviously risky.
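(A minimal sketch of the padding construction just described, using SHA-256 as H; the key sizes and names are illustrative assumptions, not a concrete proposal.)

Code:
import hashlib

def make_padding(secret: bytes, n: int, chunk: int = 32) -> bytes:
    """Derive n chunks of padding as H(secret || i).

    Anyone holding `secret` (e.g. the ECC privkey controlling the
    fidelity-bond txout) can regenerate or verify the padding; to
    everyone else it is just pseudorandom bytes.
    """
    out = bytearray()
    for i in range(n):
        out += hashlib.sha256(secret + i.to_bytes(4, "big")).digest()[:chunk]
    return bytes(out)

# Hypothetical example: ~1 KB of padding derived from a 32-byte secret.
secret = b"\x01" * 32          # stand-in for the bond's private key
padding = make_padding(secret, n=32)
assert len(padding) == 32 * 32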

The problem is that, if anything, I think this actually encourages centralization: I can safely give a small number of other mining pools that privkey if we have a legal agreement to only use it for the intended purpose. If my funds go missing, I have a pretty good idea who did it and can get the lawyers involved. The smaller the number of pools, the more powerful and enforceable this mechanism is; with two pools I have 100% certainty of who defrauded me. Unfortunately, that's the exact opposite of what the padding idea is trying to accomplish...
legendary
Activity: 1792
Merit: 1111
February 06, 2014, 10:09:26 PM

It sounds like you are suggesting that miners will collude with each other to fix each other's invalid blocks. If that didn't happen, then any compression that a miner did to get his block under the minimum size would just result in his block being rejected.

It must happen, because everyone wants to save bandwidth. Those miners who refuse to collude will simply have a higher chance of being orphaned, so they will collude eventually.


Here's one potential argument against that: individuals running full nodes who just want everyone to play by the rules have no incentive to help miners cheat. So when their Bitcoin-QT software sees a small block get broadcast, even if it is technically possible for them to 'fix' the block to make it larger, they don't want to, and they continue to wait for a larger block. Any cheating miners who start working off of the too-small block won't get any blocks they find accepted by users, because they are building on a too-small block. So some corrupt miners can create their own fork of the blockchain if they want, one which doesn't respect the min-blocksize protocol change, but no one will care, because users will be using the chain that adheres to the protocol.

And here's a counter-argument suggesting that you and Cryddit are right: a group of cheating miners could do two things when they find a block: first, immediately broadcast the small/cheating version of the block. Then immediately broadcast the larger version of the block. Any miners who receive a cheating block will know that a valid block is probably going to follow very soon, so they fix the cheating block to make it valid, and start working off of it (otherwise they'd be worse off than other miners who did this). When they finally get the valid block, they think "yeah, I knew this block was going to come, I'm already working on it." When users of Bitcoin-QT get the cheating block, they will reject it, but they'll soon get the corresponding large block and accept it.

Did I miss anything?

Have the core developers thought much about whether there is some clever trick that would make it not work to broadcast an initial cheating block and then later broadcast a valid block?


Non-mining full nodes have no stake in this issue. All you need is one single full node to fix cheating blocks and broadcast the junk-padded versions; then all full nodes will accept the chain. Eventually, non-mining full nodes will also accept cheating blocks because, again, no one wants to waste bandwidth. Non-mining full nodes will also keep the cheating blocks because, yet again, no one wants to waste disk space. Cheating blocks will become the new standard block.

The whole idea of padding a block with junk is flawed. It cannot be fixed.
full member
Activity: 187
Merit: 162
February 06, 2014, 06:04:53 PM
Minimum block sizes don't work as miners can pad them with transactions between their own addresses.

That's fine though -- if a miner wants to hit the minimum size value by including transactions to themselves instead of junk, then that still neutralizes any advantage they could have gotten from broadcasting a small block.

A 1MB block does not mean you need 1MB of bandwidth to transmit it. If the junk is deterministic (and it always will be), no one will ever need to waste bandwidth transmitting the junk. So we are back to the current system (i.e. bandwidth usage is proportional to the total transaction size; you save bandwidth by including fewer transactions).

It sounds like you are suggesting that miners will collude with each other to fix each other's invalid blocks. If that didn't happen, then any compression that a miner did to get his block under the minimum size would just result in his block being rejected.

Here's one potential argument against that: individuals running full nodes who just want everyone to play by the rules have no incentive to help miners cheat. So when their Bitcoin-QT software sees a small block get broadcast, even if it is technically possible for them to 'fix' the block to make it larger, they don't want to, and they continue to wait for a larger block. Any cheating miners who start working off of the too-small block won't get any blocks they find accepted by users, because they are building on a too-small block. So some corrupt miners can create their own fork of the blockchain if they want, one which doesn't respect the min-blocksize protocol change, but no one will care, because users will be using the chain that adheres to the protocol.

And here's a counter-argument suggesting that you and Cryddit are right: a group of cheating miners could do two things when they find a block: first, immediately broadcast the small/cheating version of the block. Then immediately broadcast the larger version of the block. Any miners who receive a cheating block will know that a valid block is probably going to follow very soon, so they fix the cheating block to make it valid, and start working off of it (otherwise they'd be worse off than other miners who did this). When they finally get the valid block, they think "yeah, I knew this block was going to come, I'm already working on it." When users of Bitcoin-QT get the cheating block, they will reject it, but they'll soon get the corresponding large block and accept it.

Did I miss anything?

Have the core developers thought much about whether there is some clever trick that would make it not work to broadcast an initial cheating block and then later broadcast a valid block?



legendary
Activity: 924
Merit: 1132
February 06, 2014, 04:53:52 PM

The point is that any block with size less than the minimum size would be disallowed by the protocol. So it wouldn't matter if all the other nodes knew what the junk values would be.

Doesn't matter.  If all the other nodes know what the junk values will be, then the other nodes will reconstruct the block (at the right size, with junk values) right after the block is transmitted to them (at the wrong size, without junk values). 

legendary
Activity: 1792
Merit: 1111
February 06, 2014, 05:33:26 AM
This is TOTALLY useless. If the content of the junk is dynamic but deterministic (e.g. repeatedly hashing the last block), miners don't need to transfer the junk, because everyone knows the content. If the content is unspecified, all miners will fill it with 0s. So, again, they don't need to transfer the junk, because everyone knows the content.

The point is that any block with size less than the minimum size would be disallowed by the protocol. So it wouldn't matter if all the other nodes knew what the junk values would be.

A 1MB block does not mean you need 1MB of bandwidth to transmit it. If the junk is deterministic (and it always will be), no one will ever need to waste bandwidth transmitting the junk. So we are back to the current system (i.e. bandwidth usage is proportional to the total transaction size; you save bandwidth by including fewer transactions).

Think carefully before you reply.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
February 06, 2014, 05:21:52 AM
This is TOTALLY useless. If the content of the junk is dynamic but deterministic (e.g. repeatedly hashing the last block), miners don't need to transfer the junk, because everyone knows the content. If the content is unspecified, all miners will fill it with 0s. So, again, they don't need to transfer the junk, because everyone knows the content.

The point is that any block with size less than the minimum size would be disallowed by the protocol. So it wouldn't matter if all the other nodes knew what the junk values would be.

Minimum block sizes don't work as miners can pad them with transactions between their own addresses.
full member
Activity: 187
Merit: 162
February 06, 2014, 05:18:57 AM
This is TOTALLY useless. If the content of the junk is dynamic but deterministic (e.g. repeatedly hashing the last block), miners don't need to transfer the junk, because everyone knows the content. If the content is unspecified, all miners will fill it with 0s. So, again, they don't need to transfer the junk, because everyone knows the content.

The point is that any block with size less than the minimum size would be disallowed by the protocol. So it wouldn't matter if all the other nodes knew what the junk values would be.
legendary
Activity: 1792
Merit: 1111
February 06, 2014, 04:35:50 AM
This is exactly what I was about to post. It seems like an elegant solution with good incentives (if someone includes a reasonable fee with their transaction, the miner would rather have it in their block than some junk).

I haven't thought that deeply about this, but it may not be necessary to have the minimum block size be equal to the maximum block size. Could the minimum size of the next block be calculated by every node based on some property of the N previous blocks? The intuition is that we'd want the minimum size to be maybe 10% larger than the block size that we predict we'll need to include all transactions that pay a reasonable fee. So perhaps if this code were working right now, the min block size would be 250KB instead of 1MB, but the max block size would still be 1 MB.



This is TOTALLY useless. If the content of the junk is dynamic but deterministic (e.g. repeatedly hashing the last block), miners don't need to transfer the junk, because everyone knows the content. If the content is unspecified, all miners will fill it with 0s. So, again, they don't need to transfer the junk, because everyone knows the content. If you require different junk for different blocks, all miners will simply fill it with the current block height. If you require "random" junk, you must have a public algorithm to determine "randomness", so it's no longer random and miners will make it deterministic again. No miner will break this consensus, because everyone wants to save bandwidth.
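(A minimal sketch of the point above, with hypothetical names: if the junk is derived deterministically, here by repeatedly hashing the previous block hash, every node can regenerate it locally, so only the real transactions ever cross the wire.)

Code:
import hashlib

def deterministic_junk(prev_block_hash: bytes, junk_len: int) -> bytes:
    """Regenerate the padding from public data alone.

    Because every node can compute this, a "padded" 1 MB block costs no
    more bandwidth than its real transactions: peers send only the
    transactions, and each node reattaches the junk itself.
    """
    out, h = bytearray(), prev_block_hash
    while len(out) < junk_len:
        h = hashlib.sha256(h).digest()   # repeatedly hash the last block hash
        out += h
    return bytes(out[:junk_len])

# Hypothetical relay: only the real transaction bytes are transmitted.
prev_hash = hashlib.sha256(b"previous block header").digest()
real_txs = b"..."              # whatever the miner actually included
target = 1_000_000             # nominal minimum block size
padded = real_txs + deterministic_junk(prev_hash, target - len(real_txs))
assert len(padded) == target   # bandwidth used is ~ len(real_txs) only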

full member
Activity: 187
Merit: 162
February 06, 2014, 02:23:41 AM
Here is my idea about this issue:

Miners include few transactions in blocks because more transactions mean a higher probability of an orphaned block, so let's make all blocks equal:

Miners should craft a block normally; let's imagine they generate a 250KB block. Before they send it to other nodes, they concatenate junk bytes (random?) to the block data, so that all blocks are 1MB.

When a node sees this block, it relays it, and once it has finished it deletes the junk bytes and keeps only the block.


Pros:
- All blocks "are" 1MB in terms of relaying them.
- We avoid other, more technical mechanisms.
Cons:
- Bitcoin QT needs somewhat more bandwidth, because now all blocks are 1MB.


If one day we need to raise the 1MB block limit, the process will be the same, but all blocks will be required to be 10MB (for example). We only need to concatenate junk to them.


How to perform this hard fork?
Bitcoin Core developers can release an update that includes this fix but only enforces it when the blockchain reaches block 277000 (30 days later), so we give people and miners some time to update their software.
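(A rough sketch of the pad-and-strip relay flow proposed above; the 250KB/1MB figures follow the example in the quote, and the helper names are hypothetical.)

Code:
import os

TARGET_SIZE = 1_000_000   # every relayed block is padded to 1 MB

def pad_for_relay(block_bytes: bytes) -> bytes:
    """Append random junk so the relayed block is exactly TARGET_SIZE."""
    return block_bytes + os.urandom(TARGET_SIZE - len(block_bytes))

def strip_after_relay(padded: bytes, real_len: int) -> bytes:
    """The receiver drops the junk and stores only the real block."""
    return padded[:real_len]

block = b"\x00" * 250_000            # a 250KB block, as in the example
wire_bytes = pad_for_relay(block)    # 1MB actually crosses the network
stored = strip_after_relay(wire_bytes, len(block))
assert len(wire_bytes) == TARGET_SIZE and stored == block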





This is exactly what I was about to post. It seems like an elegant solution with good incentives (if someone includes a reasonable fee with their transaction, the miner would rather have it in their block than some junk).

I haven't thought that deeply about this, but it may not be necessary to have the minimum block size be equal to the maximum block size. Could the minimum size of the next block be calculated by every node based on some property of the N previous blocks? The intuition is that we'd want the minimum size to be maybe 10% larger than the block size that we predict we'll need to include all transactions that pay a reasonable fee. So perhaps if this code were working right now, the min block size would be 250KB instead of 1MB, but the max block size would still be 1 MB.
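(One way the adaptive minimum just sketched might look, as a rough illustration: take a summary statistic over the last N block sizes, add ~10% headroom, and cap it at the existing 1MB maximum. The use of the median and the constants are assumptions for the example only.)

Code:
MAX_BLOCK_SIZE = 1_000_000   # existing 1 MB cap, in bytes

def next_min_block_size(recent_sizes, n=2016, headroom=1.10):
    """Adaptive minimum block size computed from the last n blocks.

    Takes the median size of the previous n blocks and adds ~10%
    headroom, never exceeding the maximum block size. With today's
    ~250KB blocks this yields roughly a 275KB minimum rather than
    forcing every block up to a full 1MB.
    """
    window = sorted(recent_sizes[-n:])
    median = window[len(window) // 2]
    return min(int(median * headroom), MAX_BLOCK_SIZE)

print(next_min_block_size([250_000] * 2016))   # 275000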

legendary
Activity: 2576
Merit: 1186
February 06, 2014, 01:48:00 AM
Rumour has it the Chinese pools can't get a decent internet connection in China and are too cheap to bother setting up a remote block-making node, so they just make tiny blocks and push the workload onto the other miners.
I don't have first-hand knowledge if this is true or not, so research it before acting on it...
Feel free to suggest they get in touch with me if it turns out to just be a technical problem of some sort (these pools aren't participating in the regular pool-operator communications networks, so I'm not sure how to reach them off-hand).
full member
Activity: 140
Merit: 100
February 06, 2014, 01:04:05 AM
When the max block size is 1MB and pools are using less than 200kB (and in Discus Fish's case 48kB), this is something that doesn't need to happen, I'm guessing, as I'm no tech wizard unlike you.
full member
Activity: 140
Merit: 100
February 06, 2014, 01:00:40 AM
But it's not equal and bitcoin's transactions have slowed considerably.  Cry
legendary
Activity: 2576
Merit: 1186
February 06, 2014, 12:58:23 AM
Note: the equivalent Ltc transfer took less than 4 minutes. (I know it's meant to be faster, but still...)
Obviously an unused network is going to find room for your transaction with lower fees (although is it really "equivalent" in that case?).
The reality is that, all things being equal, Litecoin is not any faster, though!
full member
Activity: 140
Merit: 100
February 06, 2014, 12:34:13 AM
Note: the equivalent Ltc transfer took less than 4 minutes. (I know it's meant to be faster, but still...)
full member
Activity: 140
Merit: 100
February 06, 2014, 12:23:13 AM
I don't disagree, and I'm still super bullish on bitcoin, but I think the Discus Fish pool is doing block sizes of 48 kB. Would be good if someone did something clever about that.  Smiley
legendary
Activity: 2576
Merit: 1186
February 06, 2014, 12:14:36 AM
I just tried to send BTC6.5 with a 0.0001 fee (per software recommendation) and it took 2.5 hours! Yes, this issue is impacting users like me now.
Sends are almost always instant with Bitcoin.

Confirmation may have taken 2.5 hours, but that's still relatively fast.
Outside of Bitcoin, it's typically 6+ months with expensive credit card fees or at best half a day with even more expensive wiring fees.
full member
Activity: 140
Merit: 100
February 06, 2014, 12:04:44 AM
I just tried to send BTC6.5 with a 0.0001 fee (per software recommendation) and it took 2.5 hours! Yes, this issue is impacting users like me now.