
Topic: Reasons to keep 10 min target blocktime? (Read 4384 times)

legendary
Activity: 1232
Merit: 1094
August 03, 2013, 04:32:23 AM
#31
Let's say you create blocks every 10 seconds, and append them to a P2Pool-like share-chain, which must follow the same rules as the Bitcoin blockchain in the sense that no transaction in a share may conflict with a transaction in a previous share. The effect of this is that fees become irrelevant, or at least that you can only choose which transactions to include from the previous 10 seconds of transactions. If a block becomes full after 5 minutes, you would be forced to mine the share chain and not be able to remove low-fee transactions even if new high-fee transactions come in. You only have a window of 10 seconds within which you can choose which transactions to include. Once that share is calculated you cannot remove transactions from the share chain.

This isn't necessarily true; the rule could simply be that you can't create a double spend.

If TX1 is in the fast-chain, then transactions that spend any of the inputs into TX1 cannot be added to main chain blocks.

There would be no problem with leaving TX1 out of your block as long as you don't include a double spend of any of TX1's inputs.

The fast-chain would have to be fast, signature checks could be skipped.  The owner of the UTXO would be the only one who knew the public key.  The only thing that would need to be checked would be that the public key provided hashes to the address.

The fast chain could restrict itself to only standard transactions.  Using more complex transactions would be slower.

There is a risk that when a transaction is broadcast someone changes the transaction, since they know the public key (and signature checks are skipped).  Some kind of fast signature would be useful.

A new "fast transaction" standard transaction could be added.  This would include a 4 byte nonce.

The rule could be that fast transactions must have a number of leading zeros in their hash.  If someone modified the transaction, then they would have to re-do the nonce updates.

The number of zeros could be dynamically controlled to try to keep spam low.  It should be low enough that it takes < 10 seconds to solve on a mobile device.
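A minimal sketch of the proposed "fast transaction" proof-of-work, assuming double-SHA256 and an illustrative 16-bit difficulty (the function name and nonce placement are hypothetical; the dynamic difficulty control described above is out of scope):

```python
import hashlib

def grind_fast_tx(tx_bytes: bytes, zero_bits: int = 16) -> bytes:
    """Append a 4-byte nonce and grind until the double-SHA256 of the
    transaction has the required number of leading zero bits.
    16 bits is illustrative; the real value would be tuned so a mobile
    device solves it in < 10 seconds, as suggested above."""
    for nonce in range(2**32):
        candidate = tx_bytes + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(candidate).digest()).digest()
        # Leading zero bits == the digest, read as a big-endian integer,
        # is below 2^(256 - zero_bits).
        if int.from_bytes(digest, "big") < 1 << (256 - zero_bits):
            return candidate
    raise RuntimeError("nonce space exhausted")

stamped = grind_fast_tx(b"example-raw-transaction")
```

Anyone modifying the transaction invalidates the hash and has to redo the grinding, which is the anti-tampering property the post is after.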

Is there an actual signature algorithm which gets high speed at the expense of lower security?  I guess any algorithm would work if the key size was lowered? 

Could an ECDSA 64 bit key give 5-10 mins of security and be very fast?
hero member
Activity: 572
Merit: 506
August 03, 2013, 02:39:16 AM
#30
If miners use software that verifies that the blocks it finds are reported by the pool, and the pool publishes the headers of its shares, then no time is needed to verify that the pool is working honestly.
Good solution, surprisingly simple.
hero member
Activity: 572
Merit: 506
August 03, 2013, 01:09:50 AM
#29
It seems to me that the main idea is to use a p2pool-like fast blockchain to provide faster confirmations.
Let's compare that to a hardfork:

P2pool-like chain:
Pros: no hardfork needed
Cons: all major pools need to be convinced to use it. Bitcoin infrastructure needs to be upgraded to take advantage of that p2pool-like chain. At any moment significant mining power can switch back to the good old mining style, which would reduce the usefulness of this solution. Any number of p2pool mini-confirmations might turn to nothing if the next block is found by a non-participating party.

Hardfork:
Pros: all bitcoin infrastructure gets upgraded at once.
Cons: It's a hardfork. There is a theoretical risk of an increased orphan rate.

I don't mention cost of running an SPV node, because that cost is very low anyway.

Btw, recently I sold several btc locally to several different people. Not all of them understand how bitcoin works. A common bitcoin user just wants bitcoins to appear in his wallet. It was not very comfortable for me to leave them before they saw their btc received, even though I was trying to explain how everything works and convince them that there is nothing to worry about.
legendary
Activity: 2053
Merit: 1356
aka tonikt
The main reason why the 10 min doesn't change is that the network does not agree on it.
If you could convince 50+% of miners to change the protocol, I have no doubt that they'd be able to handle a forked client which does it for them; whoever had stayed on the satoshi branch would have been doomed. Or on any other branch.
If they had enough hashing power nobody would be able to stop it in that case, but the same applies to you trying to change the protocol.
But miners don't seek new rules in this protocol: they like the money as it is.
At least they are sane :)
So no worries
legendary
Activity: 1232
Merit: 1094
Good point. If the 1 MB limit is reached then it might reach 1 MB shortly after a new block is found, and transactions are just replaced with others that have higher fees after that.

However, miners could target old transaction "groups" based on the merkle tree.  If 4 transactions were overwritten but part of the same branch, then you could use that branch root instead of the individual hashes.

It is basically a compression algorithm problem.

In fact, it could be implemented as exactly that on a peer to peer connection basis.

You send the entire block header + hashes and the compression algorithm compresses repeats.
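The branch-root idea can be illustrated with a toy merkle tree (plain double-SHA256, ignoring Bitcoin's byte-order details): when every leaf in a power-of-two branch changes together, one 32-byte subtree root stands in for the whole group.

```python
import hashlib

def h(b: bytes) -> bytes:
    """Double-SHA256, as used for Bitcoin tx and merkle hashing."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Pairwise merkle root, duplicating the last node on odd levels."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Eight transaction hashes; suppose the four in the left branch were
# all overwritten.  One subtree root can reference all four at once.
txs = [h(bytes([i])) for i in range(8)]
left_root = merkle_root(txs[:4])    # one 32-byte hash stands in for 4
right_root = merkle_root(txs[4:])
assert h(left_root + right_root) == merkle_root(txs)
```

This is the sense in which it is "a compression algorithm problem": repeated subtrees compress to a single hash.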
legendary
Activity: 980
Merit: 1008
Publishing the hash, 32 bytes each, of 3500 transactions is 109 KB. But the block is not 1 MB right after a new block is found. It starts out being very small, so that the full block templates are small right after a new block is found, and get larger and larger until the next block is found. That's why I assume they are, on average, only 50% of 109 KB.

It depends.  It might be full size, but change as paying transactions overwrite free ones.

I think trying to have all miners target the same set of transactions would be a good thing.  Obviously, they would each have their own coinbase.

OTOH, the more complex you make it, the less willing miners might be to bother.
Good point. If the 1 MB limit is reached then it might reach 1 MB shortly after a new block is found, and transactions are just replaced with others that have higher fees after that.

I think miners should be free to choose whichever transactions they wish. I don't want to change which transactions they mine; that should be up to them completely. It might be worthwhile to allow diffs of diffs, to reduce network traffic further. I need to implement something first to find out how much processing power and RAM this will consume. I think the real constraint is bandwidth; calculating and reassembling diffs shouldn't take much processing power, even if it's multiple levels of diffs.

Complexity shouldn't be a problem, as it's hidden from the miners. I imagine they just connect to my program instead of bitcoind, and I say that the difficulty is 1/600th of what it really is. When I receive a share that is within the partial confirmation range I process it and send it into the partial confirmation P2P network, and when it's a valid block I send it on to bitcoind for it to be published. Bear in mind that only solo miners and pools would need to publish these partial confirmations. Pool miners send their shares to the pools anyway, and then the pool just needs to be connected to the partial confirmation P2P network.
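A sketch of that proxy's share classification, under stated assumptions (the function and the plain-integer targets are hypothetical; real targets are 256-bit values):

```python
def classify_share(share_hash: int, block_target: int) -> str:
    """Classify a submitted share against the real block target.
    The proxy advertises a target 600x easier than the real one, so a
    qualifying share arrives roughly every second; about one in 600 of
    those also meets the real target and is a full block."""
    partial_target = block_target * 600  # 600x easier (higher) target
    if share_hash <= block_target:
        return "block"      # forward to bitcoind for publication
    if share_hash <= partial_target:
        return "partial"    # broadcast on the partial-confirmation P2P net
    return "reject"

assert classify_share(5, 10) == "block"
assert classify_share(500, 10) == "partial"
assert classify_share(60001, 100) == "reject"
```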
legendary
Activity: 1232
Merit: 1094
Publishing the hash, 32 bytes each, of 3500 transactions is 109 KB. But the block is not 1 MB right after a new block is found. It starts out being very small, so that the full block templates are small right after a new block is found, and get larger and larger until the next block is found. That's why I assume they are, on average, only 50% of 109 KB.

It depends.  It might be full size, but change as paying transactions overwrite free ones.

I think trying to have all miners target the same set of transactions would be a good thing.  Obviously, they would each have their own coinbase.

OTOH, the more complex you make it, the less willing miners might be to bother.
legendary
Activity: 980
Merit: 1008
Code:

ref_block = <hash>
add_txs = <tx hashes>
rm_txs = <tx hashes>

Pretty much the same as I suggested.  However, it is smaller just to have

<old hash>, <new hash>

since any new hash will either replace an old one or extend the chain.

You could have

<hash>, <000....000>

to mean delete.
Right. The concept holds. How it's encoded is less important.

Nodes would then cache the last <cache time> seconds of blocks, and they would be able to reconstitute the complete block from blocks in the cache by adding the <add_txs> transactions and removing the <rm_txs> transactions from <ref_block>.

That assumes the new miner is the same as the old one.
No that's not necessary as far as I can see. You simply have a module which gets blocks above or equal to 1/600th of the difficulty submitted to it. This module simply keeps the last full block template or set of block templates in memory, and figures out which of the previous templates to publish a diff against. So it quickly calculates which of the templates from the last 60 seconds would give the smallest message size if a diff was produced against it, and publishes this to the network.
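A sketch of that template selection, assuming templates are just sets of transaction hashes (the names and the add/remove cost model are hypothetical):

```python
def diff_size(ref_txs: set, new_txs: set, hash_len: int = 32) -> int:
    """Byte size of an add/remove diff against a cached template:
    each added or removed transaction costs one hash."""
    return hash_len * len(new_txs ^ ref_txs)  # symmetric difference

def best_reference(cached_templates: dict, new_txs: set) -> str:
    """Pick the cached template whose diff against the new block
    template would produce the smallest message."""
    return min(cached_templates,
               key=lambda k: diff_size(cached_templates[k], new_txs))

cache = {"t0": {1, 2, 3}, "t1": {1, 2, 3, 4, 5}}
# New template {1..6}: diff vs t0 needs 3 hashes, vs t1 only 1.
assert best_reference(cache, {1, 2, 3, 4, 5, 6}) == "t1"
```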

But as you mention, the 60 second cache time isn't really necessary if nodes can just ask for the last n block templates when they connect to the network.

If we assume a 1 MB block size

That's 1.7kB/s

Quote
and 300 bytes per transaction, that's around 3500 transaction hashes of 32 bytes each, every 60 seconds.

That's double relative to just downloading the chain and people are already complaining about it.
I think you are misunderstanding me.

If each block can be no larger than 1 MB, then if we assume each transaction is 300 bytes, then it contains around 3500 transactions.

Publishing the hash, 32 bytes each, of 3500 transactions is 109 KB. But the block is not 1 MB right after a new block is found. It starts out being very small, so that the full block templates are small right after a new block is found, and get larger and larger until the next block is found. That's why I assume they are, on average, only 50% of 109 KB.

Quote
I don't think clearing the cache every minute is a good plan.  Better to keep it for at least 1-2 blocks length.
Yeah that makes sense. If new nodes can just connect and request full block templates, then there's no need to have a short cache time.

I was originally thinking it would be a "broadcast only"-protocol, so that miners just broadcast partial confirmations and they cascade throughout the network through the other peers. This keeps traffic down, but it means that new nodes need to wait until their cache contains the relevant full block templates in order to verify blocks.

Quote
A miner sees a header and notices it has a hash that the miner doesn't have, so it asks his peers for it.  It would also show evidence of double spending.
Yes this is interesting too. It means that unless miners are deliberately mining double spends, it will be easier for the miners to find and resolve double spends.
legendary
Activity: 1232
Merit: 1094
Code:

ref_block = <hash>
add_txs = <tx hashes>
rm_txs = <tx hashes>

Pretty much the same as I suggested.  However, it is smaller just to have

<old hash>, <new hash>

since any new hash will either replace an old one or extend the chain.

You could have

<hash>, <000....000>

to mean delete.

Quote
Nodes would then cache the last <cache time> seconds of blocks, and they would be able to reconstitute the complete block from blocks in the cache by adding the <add_txs> transactions and removing the <rm_txs> transactions from <ref_block>.

That assumes the new miner is the same as the old one.

There needs to be a rule that miners try to keep their blocks similar to the template block.

For example, you could require that blocks have their transactions sorted according to their hash.

Quote
If we assume a 1 MB block size

That's 1.7kB/s

Quote
and 300 bytes per transaction, that's around 3500 transaction hashes of 32 bytes each, every 60 seconds.

That's double relative to just downloading the chain and people are already complaining about it.

I don't think clearing the cache every minute is a good plan.  Better to keep it for at least 1-2 blocks length.

Quote
Perhaps a Merkle Tree solution would be more efficient, since nodes really only care about their own transactions. I haven't done the calculations on this.

The problem is you need to give the full path.  I think a full template and diffs is at least as efficient.

Another way to "template" is to allow grouping of transactions directly.  Transactions are already sent anyway.

Encouraging miners to use mostly the same transactions is a good plan anyway.

A miner sees a header and notices it has a hash that the miner doesn't have, so it asks his peers for it.  It would also show evidence of double spending.
legendary
Activity: 980
Merit: 1008
Of course this would also require that the partial confirmations be broadcast along with the hash of all the transactions in the block, in order for nodes to know whether their transaction is being worked on, and for them to be able to verify the block (that its difficulty is greater than or equal to 1/600th of the block chain difficulty).

This could be split.  Miners could broadcast "template" blocks.

POW > 1/600: Just the header (every second)
POW > 1/60: Header + block hashes (every 10 seconds)

You could also make it more merkle based.

So, I send

header + all hashes

Later I just have to send

header + merkle root

If I change the block, I could send the updated hashes since the last time.

Clients that heard the first transmission could build up the new block.

Other miners could send

previous: <block hash>
new hash: <block hash>
transactions: 0, <changed tx hashes>

Having said that, inherently, miners need to update the coinbase transaction for the "extra nonce".

At 500 transactions, sending all the hashes would be 16 kB, so it is not insignificant.

Another option would be to pay miners to include transactions.  If only 1% of transactions need to be included and there are 512 transactions, then you only need to send the path.  This gives 320 bytes per transaction and 5 transactions, so 1.6 kB.  You couldn't send that every second to every node.

Nodes could flag themselves as "HEADER_MONITOR" nodes, and support lots of headers.
I had thought of a "diff"-like approach to this.

We have a cache time, which basically determines how far back a miner can reference a previous block. So if the miner then changes the transactions in a block he or someone else has published, he would simply send:

Code:

ref_block = <hash>
add_txs = <tx hashes>
rm_txs = <tx hashes>

Nodes would then cache the last <cache time> seconds of blocks, and they would be able to reconstitute the complete block from blocks in the cache by adding the <add_txs> transactions and removing the <rm_txs> transactions from <ref_block>.

So if we use 60 seconds as a cache time it means we only need to send out "full blocks" (block header plus all transaction hashes needed to verify the block header) every 60 seconds.

If we assume a 1 MB block size and 300 bytes per transaction, that's around 3500 transaction hashes of 32 bytes each. But blocks are only 0.5 MB on average, because they start out with a size of 0 and end up with a size of 1 MB (if we assume the maximum block size is used, as a worst case scenario). So each full block message is 3500*32*0.5 bytes = 55 KB on average, every 60 seconds. That's 0.9 KB/s for the complete blocks. Add to that the block headers every second, which may contain insertions or deletions. It should be possible to keep the data rate at around 10 KB/s for 10 peers.
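The arithmetic above, spelled out with the post's round figures:

```python
# 1 MB blocks at ~300 bytes/tx give roughly 3500 transactions.
txs_per_block = 1_000_000 // 300        # ~3333, rounded to ~3500 in the post
full_template = 3500 * 32               # 112,000 B ~= 109 KB of tx hashes
avg_template = full_template // 2       # templates average half-full: ~55 KB
rate = avg_template / 60                # full templates every 60 s
assert abs(rate - 933) < 50             # ~0.9 KB/s per peer, as stated
```

Headers (and their insertions/deletions) every second come on top of this, which is where the ~10 KB/s for 10 peers estimate comes from.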

When a new node joins the network it can ask for the latest full block, so it's able to reconstitute blocks from "diff messages".

Perhaps a Merkle Tree solution would be more efficient, since nodes really only care about their own transactions. I haven't done the calculations on this.
legendary
Activity: 1232
Merit: 1094
Of course this would also require that the partial confirmations be broadcast along with the hash of all the transactions in the block, in order for nodes to know whether their transaction is being worked on, and for them to be able to verify the block (that its difficulty is greater than or equal to 1/600th of the block chain difficulty).

This could be split.  Miners could broadcast "template" blocks.

POW > 1/600: Just the header (every second)
POW > 1/60: Header + block hashes (every 10 seconds)

You could also make it more merkle based.

So, I send

header + all hashes

Later I just have to send

header + merkle root

If I change the block, I could send the updated hashes since the last time.

Clients that heard the first transmission could build up the new block.

Other miners could send

previous: <block hash>
new hash: <block hash>
transactions: 0, <changed tx hashes>

Having said that, inherently, miners need to update the coinbase transaction for the "extra nonce".

At 500 transactions, sending all the hashes would be 16 kB, so it is not insignificant.

Another option would be to pay miners to include transactions.  If only 1% of transactions need to be included and there are 512 transactions, then you only need to send the path.  This gives 320 bytes per transaction and 5 transactions, so 1.6 kB.  You couldn't send that every second to every node.

Nodes could flag themselves as "HEADER_MONITOR" nodes, and support lots of headers.
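The path-size arithmetic checks out (hypothetical helper; 32-byte hashes, one sibling hash per tree level plus the leaf itself):

```python
import math

def merkle_proof_size(n_leaves: int, hash_len: int = 32) -> int:
    """Bytes needed to prove one leaf's inclusion: the leaf hash plus
    one sibling hash per level of the tree."""
    depth = math.ceil(math.log2(n_leaves))
    return hash_len * (depth + 1)

assert merkle_proof_size(512) == 320        # depth 9: 9 siblings + leaf
assert 5 * merkle_proof_size(512) == 1600   # ~1% of 512 txs -> 1.6 kB
```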
legendary
Activity: 980
Merit: 1008
This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become an issue at some point.
Wherever did I say it was optional?
I don't think I understand your proposal then. How can a soft fork not be optional? Isn't the fact that it's optional what makes it a soft fork?

This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become an issue at some point.

Even as evidence it would be helpful.  Each miner broadcasts all headers with 1/64 of the standard difficulty.  This allows ties to be broken more quickly, so random reversals are much less likely.

It helps even with malicious reversals, as long as honest nodes are willing to mine against a slightly shorter chain.

Forks could be compared using total number of low POW headers on each block in the chain.

So, if there were 2 possible chains which fork at B/B', then the first chain would still win.

A(63) <- B(72) <- C(37) <- D(58)

and

A(63) <- B'(6) <- C'(9) <- D'(4) <- E(7) <- F(6)

Miners would know that almost all of the hashing power is against the first fork, so it will eventually overpower the 2nd fork.

Even if only 75% of the miners have that rule, the top fork will win.
I agree. It's an interesting concept.

I've been thinking of writing a proof-of-concept using a 1-second "block time", i.e. miners would publish blocks with 1/600th of the difficulty into a P2P network. This could be useful both for other miners, to see which chain is being worked on the most, and for non-mining nodes, to see if their transactions have been picked up by the network.

The latter benefit was the cause of my initial interest. I thought it would be nice to be able to see if one's transaction has been picked up by the network and is being worked on. As it is now you wait in ignorance for ~10 minutes, and at some point you get a single confirmation. With this system you'd get a "partial confirmation" every ~1 second on average, and you'd get good feedback on, first of all, whether your transaction has been picked up, but also on when it's discarded in favor of higher-fee transactions. If you receive partial confirmations for your transaction for 2 minutes and they stop all of a sudden, then you'd want to resend your transaction with a higher fee if you want it included in the next block. This would increase feedback between miners and non-mining nodes and make fee discovery more efficient.

Of course this would also require that the partial confirmations be broadcast along with the hash of all the transactions in the block, in order for nodes to know whether their transaction is being worked on, and for them to be able to verify the block (that its difficulty is greater than or equal to 1/600th of the block chain difficulty).
legendary
Activity: 1232
Merit: 1094
This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become an issue at some point.

Even as evidence it would be helpful.  Each miner broadcasts all headers with 1/64 of the standard difficulty.  This allows ties to be broken more quickly, so random reversals are much less likely.

It helps even with malicious reversals, as long as honest nodes are willing to mine against a slightly shorter chain.

Forks could be compared using total number of low POW headers on each block in the chain.

So, if there were 2 possible chains which fork at B/B', then the first chain would still win.

A(63) <- B(72) <- C(37) <- D(58)

and

A(63) <- B'(6) <- C'(9) <- D'(4) <- E(7) <- F(6)

Miners would know that almost all of the hashing power is against the first fork, so it will eventually overpower the 2nd fork.

Even if only 75% of the miners have that rule, the top fork will win.
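A sketch of the fork-comparison rule, using the counts from the example above (each count is the number of low-POW headers seen on that block, i.e. direct evidence of hashrate on that branch):

```python
def fork_weight(chain) -> int:
    """Total low-POW headers observed across every block of a fork."""
    return sum(count for _block, count in chain)

fork_a = [("A", 63), ("B", 72), ("C", 37), ("D", 58)]
fork_b = [("A", 63), ("B'", 6), ("C'", 9), ("D'", 4), ("E", 7), ("F", 6)]

# Fork B is longer, but almost all observed hashing power is on fork A,
# so miners following this rule stay on A.
assert fork_weight(fork_a) > fork_weight(fork_b)
```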
staff
Activity: 4242
Merit: 8672
This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become an issue at some point.
Wherever did I say it was optional? I'm sorry if I was unclear. You can achieve equivalent behavior without a hard fork and without bloating up the headers with a higher block rate. I didn't say that you could achieve actual early consensus without imposing _any_ new rules (although if you only want evidence of intention that's another matter). You simply enforce the sharechain as a soft-forking rule at the tip of the chain, and forget about it once it's buried; non-miners who don't care about this activity could ignore it.

You're over-constraining it with assumptions. 10 seconds? The poster referenced 5 minutes.
legendary
Activity: 980
Merit: 1008
I am concerned that a possible practical improvement is getting lost in the theory discussion.
You have studiously ignored what was probably the most important point I made.  There is no improvement possible here which requires a rule change. You could happily publish half-difficulty shares and share-chain like P2Pool if faster evidence of confirmation was needed.  In light of that alone, regardless of any speculation about perhaps 5 minutes might actually also be safe (speculation I consider highly risky because it's only safe under the current network load and topology), I believe your advocacy here has no chance of forward progress.
This is not equivalent to a shorter block time. The issue with this approach is that the miners who do not participate in this system have greater freedom in choosing which transactions to include in a block, and can thus make more money from fees. Perhaps this isn't an issue now, but it will become an issue at some point.

Let's say you create blocks every 10 seconds, and append them to a P2Pool-like share-chain, which must follow the same rules as the Bitcoin blockchain in the sense that no transaction in a share may conflict with a transaction in a previous share. The effect of this is that fees become irrelevant, or at least that you can only choose which transactions to include from the previous 10 seconds of transactions. If a block becomes full after 5 minutes, you would be forced to mine the share chain and not be able to remove low-fee transactions even if new high-fee transactions come in. You only have a window of 10 seconds within which you can choose which transactions to include. Once that share is calculated you cannot remove transactions from the share chain.

This creates an incentive for miners to not participate in this faster share chain, unless we can find some elegant way of compensating miners for their effort that doesn't bloat the block chain (I haven't been able to figure out how to do this yet).
donator
Activity: 2058
Merit: 1054
Meni's paper is a very good read.  Here is a table from his paper and the two notes that followed.
These two notes are actually part of a list of 7 notes, with some figures in the middle.
staff
Activity: 4242
Merit: 8672
I am concerned that a possible practical improvement is getting lost in the theory discussion.
You have studiously ignored what was probably the most important point I made.  There is no improvement possible here which requires a rule change. You could happily publish half-difficulty shares and share-chain like P2Pool if faster evidence of confirmation was needed.  In light of that alone, regardless of any speculation about perhaps 5 minutes might actually also be safe (speculation I consider highly risky because it's only safe under the current network load and topology), I believe your advocacy here has no chance of forward progress.
legendary
Activity: 1176
Merit: 1020
In the case where an attacker is purchasing his hashing power on-demand, wouldn't halving the block period also halve the cost of any n-block chain reversal, since on average the attacker would need to rent the same fraction q of total hashing power, but for only half the time?

Meni's paper is a very good read.  Here is a table from his paper and the two notes that followed.

•If the attacker controls more hashrate than the honest network, no amount of confirmations will reduce the success rate below 100%.

•There is nothing special about the default, often-cited figure of 6 confirmations. It was chosen based on the assumption that an attacker is unlikely to amass more than 10% of the hashrate, and that a negligible risk of less than 0.1% is acceptable. Both these figures are arbitrary, however; 6 confirmations are overkill for casual attackers, and at the same time powerless against more dedicated attackers with much more than 10% hashrate.

donator
Activity: 2058
Merit: 1054
Seems like Meni is taking you up on the security side.  Statistics aside, let's not forget the importance of that first confirmation!  I've met dozens of people in coffee shops helping them to get some bitcoins.  In purely human terms, there is a big difference between 5 and 10 minutes.  5 minutes is fast food, 10 is not.  You expect putting gas in a car to take 5 minutes - 10 would be a drag.  5 minutes just feels a lot faster than 10.  And since that first confirmation is way, way (maybe 100x?) more secure than an unconfirmed one, everything happening twice as fast would be very convenient for the bitcoin community as it is today.  I agree that in the future there will be lots of solutions to these issues, but to get there from here we need to keep people happy and things convenient so that the community lives on.
On the other hand, in practice even 0-confirmation transactions are reasonably secure. There is no need to wait for confirmations in a coffee shop.

That's false, which is the main point I was trying to explain in https://bitcoil.co.il/Doublespend.pdf.
I'm speaking specifically about confirm count without regard to latency.
Under the assumption that orphaning isn't an issue, with a lower mean block time you need to wait less on average for a given level of security.

For example, let's say that with either 5 min or 10 min mean time, orphaning isn't an issue. Let's say also that you want a 10%-hashrate attacker to have less than 2% chance of successfully double spending, so you wait for 3 confirms.

If 10 min is the time constant, you have to wait 30 mins on average. If it's 5 min, you have to wait 15 mins. This is an advantage of a lower time constant, as suggested by the OP.

In the case where an attacker is purchasing his hashing power on-demand, wouldn't halving the block period also halve the cost of any n-block chain reversal, since on average the attacker would need to rent the same fraction q of total hashing power, but for only half the time?
Yes, but that's negligible since for <50% hashrate, success probability decreases exponentially with confirmations. In my example, if the block time is 5 min, the merchant can wait for 1 more confirmation (20 mins, still much less than 30), cutting success probability by a factor of 3 thus tripling the average cost of a successful attack.
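The factor-of-3 claim can be checked with the attacker catch-up probability from the Bitcoin whitepaper's Poisson approximation (a sketch; this is not Meni's exact model in the linked paper, but close enough to illustrate):

```python
import math

def attack_success(q: float, n: int) -> float:
    """Probability that a q-hashrate attacker catches up from n
    confirmations behind (whitepaper Poisson approximation)."""
    p = 1.0 - q
    lam = n * q / p
    s = 1.0
    for k in range(n + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (n - k))
    return s

p3 = attack_success(0.1, 3)   # ~1.3%: under the 2% threshold above
p4 = attack_success(0.1, 4)   # one extra confirm
assert p3 < 0.02
assert 2 < p3 / p4 < 5        # roughly a 3-4x drop per confirmation
```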
sr. member
Activity: 461
Merit: 251
That's false, which is the main point I was trying to explain in https://bitcoil.co.il/Doublespend.pdf.
I'm speaking specifically about confirm count without regard to latency.
Under the assumption that orphaning isn't an issue, with a lower mean block time you need to wait less on average for a given level of security.

For example, let's say that with either 5 min or 10 min mean time, orphaning isn't an issue. Let's say also that you want a 10%-hashrate attacker to have less than 2% chance of successfully double spending, so you wait for 3 confirms.

If 10 min is the time constant, you have to wait 30 mins on average. If it's 5 min, you have to wait 15 mins. This is an advantage of a lower time constant, as suggested by the OP.

In the case where an attacker is purchasing his hashing power on-demand, wouldn't halving the block period also halve the cost of any n-block chain reversal, since on average the attacker would need to rent the same fraction q of total hashing power, but for only half the time?