
Topic: Blocks are [not] full. What's the plan? - page 6. (Read 14343 times)

newbie
Activity: 28
Merit: 0
November 29, 2013, 01:39:14 PM
#90
Personally (and I really believe block size is a completely different issue) I think the best option would be to raise the block limit to a higher static limit once it becomes a bottleneck.  The reason is that Bitcoin is very hard to undo, and going from a 1 MB static limit to a 10 MB static limit is a simple and well-understood change.  Other systems, while they may be more future-proof, are more complex and need more time for discussion, analysis, and testing.  I project the 1 MB limit will become an issue within a year (or 18 months on the outside), and I wouldn't be confident in any radical change to the block system made in such a short period of time.
I think the issue has to be solved ASAP. When it becomes a bottleneck it is already too late, and because any change will have an impact on Bitcoin, it has to be tested over a longer period.
But AFAIK it's already very high on the devs' priority list.
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 29, 2013, 12:56:46 PM
#89
Personally (and I really believe block size is a completely different issue) I think the best option would be to raise the block limit to a higher static limit once it becomes a bottleneck.  The reason is that Bitcoin is very hard to undo, and going from a 1 MB static limit to a 10 MB static limit is a simple and well-understood change.  Other systems, while they may be more future-proof, are more complex and need more time for discussion, analysis, and testing.  I project the 1 MB limit will become an issue within a year (or 18 months on the outside), and I wouldn't be confident in any radical change to the block system made in such a short period of time.
sr. member
Activity: 336
Merit: 250
Cuddling, censored, unicorn-shaped troll.
November 29, 2013, 12:50:55 PM
#88
Well, larger blocks are never going to be faster than, or as fast as, smaller blocks.  The goal is to reduce the latency per kB.  The faster a block can be broadcast, the lower the "orphan cost" per tx.  Larger blocks will still always have a higher orphan rate, but they also have higher gross revenue.
...snip...

This looks promising.

As adoption rises, the number of txs processed by the network per day will need to increase greatly.
I'm coming back here with the idea of a dynamic minimum block size... Couldn't we just index that size (or a minimum tx count per block) requirement on the current difficulty?

Miners could then reject small blocks that don't meet the requirement (so, basically, miners trying to send small blocks to get lower latency would have their blocks orphaned, which looks like a good way to reduce the orphan cost?).

Combined with your proposal to reduce the block message size dramatically, and keeping in mind that the network will also become faster and faster, wouldn't that give us a better chance that the tx throughput keeps pace with global demand over time?
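A minimal sketch of the dynamic-minimum idea above, with made-up constants and a hypothetical min_block_size() helper; nothing here is an actual proposal or existing client code, it only illustrates indexing a minimum block size on the current difficulty:

Code:
# Hypothetical illustration only: index a minimum block size on difficulty.
# The constants and the scaling rule are invented for this sketch.

BASE_MIN_SIZE = 1_000          # bytes required at the reference difficulty
REFERENCE_DIFFICULTY = 1.0e6   # difficulty at which BASE_MIN_SIZE applies
MAX_MIN_SIZE = 500_000         # never require more than ~half the 1 MB cap

def min_block_size(current_difficulty: float) -> int:
    """Minimum block size (in bytes) a block must meet at the given difficulty."""
    scaled = BASE_MIN_SIZE * (current_difficulty / REFERENCE_DIFFICULTY)
    return int(min(max(scaled, BASE_MIN_SIZE), MAX_MIN_SIZE))

def block_meets_minimum(block_size_bytes: int, current_difficulty: float) -> bool:
    """Under the idea above, blocks below the minimum would be rejected (orphaned)."""
    return block_size_bytes >= min_block_size(current_difficulty)

# At 10x the reference difficulty, a 5 kB block would be rejected, a 20 kB block accepted.
print(block_meets_minimum(5_000, 1.0e7))   # False
print(block_meets_minimum(20_000, 1.0e7))  # True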
legendary
Activity: 2646
Merit: 1137
All paid signature campaigns should be banned.
November 29, 2013, 12:16:54 PM
#87
Is there any chance of just storing blocks as a list of txids, a list of destroyed txouts, and a list of new txouts, without telling which txins/txouts go with which tx?

No, or as a miner I could simply modify your txs, keeping your tx inputs and redirecting the outputs to myself.  The security model of Bitcoin involves three layers.

1) Senders digitally sign the entire* tx to ensure it is immutable.
2) All nodes (not just miners) verify that all txs and blocks are valid.
3) Miners place txs in blocks to create a consensus history.

*well, a simplified form


Point and match. ;)
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 29, 2013, 12:10:40 PM
#86
Is there any chance of just storing blocks as a list of txids, a list of destroyed txouts, and a list of new txouts, without telling which txins/txouts go with which tx?

No, or as a miner I could simply modify your txs, keeping your tx inputs and redirecting the outputs to myself.  The security model of Bitcoin involves three layers.

1) Senders digitally sign the entire* tx to ensure it is immutable.
2) All nodes (not just miners) verify that all txs and blocks are valid.
3) Miners place txs in blocks to create a consensus history.

*well, a simplified form

legendary
Activity: 2646
Merit: 1137
All paid signature campaigns should be banned.
November 29, 2013, 12:05:48 PM
#85

If you're gonna be elbows-deep in that code anyway, there's a major privacy upgrade you can do with block formats.

Is there any chance of just storing blocks as a list of txids, a list of destroyed txouts, and a list of new txouts, without telling which txins/txouts go with which tx?

After all, if we're assuming that the clients have already *seen* the individual txs, they can check them anyway.  And they can look at the list of txids to find out which ones they're missing, get those, and check them.

This doesn't make privacy absolute in any sense; it just makes it so that, in order to trace txs and do data mining, you have to be listening in real time to collect individual transactions instead of looking at the blockchain after the fact.  But that would still be a big improvement.


I think this is a great idea.  One question: would the fees then be the difference between these two large sums?  Would that work?
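To make the fee question concrete: with a block stored only as the three lists, the aggregate fee would be the sum of the destroyed outputs minus the sum of the newly created (non-coinbase) outputs; per-tx fees would not be recoverable from the block alone. A rough sketch, with hypothetical field names rather than the real block format:

Code:
# Rough sketch of the proposed three-list block layout and the fee arithmetic.
# Field names are hypothetical; this is not the real serialization.
from dataclasses import dataclass

@dataclass
class TxOut:
    value: int      # amount in satoshis
    script: bytes   # locking script

@dataclass
class CompactBlock:
    txids: list               # 32-byte txid hashes, order unspecified
    destroyed_outputs: list   # TxOuts spent by this block's txs
    new_outputs: list         # TxOuts created by this block's txs (coinbase excluded)

def total_fees(block: CompactBlock) -> int:
    """Aggregate fees = value destroyed minus value created (excluding the coinbase)."""
    spent = sum(out.value for out in block.destroyed_outputs)
    created = sum(out.value for out in block.new_outputs)
    return spent - created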
hero member
Activity: 826
Merit: 501
in defi we trust
November 29, 2013, 12:01:02 PM
#84
Briefly hitting 100k transactions/day.
Now, if there are some selfish miners who include just a few txs in their blocks and we continue to grow, we might hit the limits and the backlog will start to grow.
I'm grabbing some popcorn and waiting for the decision on the block size.
legendary
Activity: 924
Merit: 1132
November 29, 2013, 11:57:54 AM
#83

If you're gonna be elbows-deep in that code anyway, there's a major privacy upgrade you can do with block formats.

Is there any chance of just storing blocks as a list of txids, a list of destroyed txouts, and a list of new txouts, without telling which txins/txouts go with which tx?

After all, if we're assuming that the clients have already *seen* the individual txs, they can check them anyway.  And they can look at the list of txids to find out which ones they're missing, get those, and check them.

This doesn't make privacy absolute in any sense; it just makes it so that, in order to trace txs and do data mining, you have to be listening in real time to collect individual transactions instead of looking at the blockchain after the fact.  But that would still be a big improvement.

donator
Activity: 1218
Merit: 1079
Gerald Davis
November 29, 2013, 11:48:32 AM
#82
So, a couple of points of clarification.  The idea of including tx hashes in the block MESSAGE wouldn't change the size of the actual block; it would simply be a new message type.  Remember that bootstrapping nodes also require older blocks, and they won't have the txs either, so the existing block message would still be useful in cases like that.

Simplified version:
For the newest block, when latency is important: use a header + tx hash message.
For older blocks, when latency isn't an issue: use a header + full tx message.

As for the block limit, well, that is another, more complex issue.  Honestly, I don't see it as very important right now.  The number of full blocks since the genesis block is exactly 0.0%.  Even today, with a backlog of txs, the average block size is ~200 KB and the largest block is ~600 KB.  Raising the limit doesn't change the "orphan economics".
newbie
Activity: 28
Merit: 0
November 29, 2013, 10:30:46 AM
#81
Why not a dynamic blocksize?
I think that an automatic dynamic block size is difficult to design and implement and could lead to disparities among peers. The community should be in charge of deciding what the max block size should look like through planned increases.

Best regards,
Ilpirata79

Badly typed on my iPad

I understand your point, but I think that if it is possible it would be the best solution. In fact, a dynamic blocksize would be something like a "planned increase" (and decrease).
What I would like is that, if implemented after careful testing, it gives us a chance of not having to hard-fork the system again. That would make Bitcoin highly stable.
sr. member
Activity: 353
Merit: 253
November 29, 2013, 10:26:10 AM
#80
Why not a dynamic blocksize?
I think that an automatic dynamic block size is difficult to design and implement and could lead to disparities among peers. The community should be in charge of deciding what the max block size should look like through planned increases.

Best regards,
Ilpirata79

Badly typed on my iPad
legendary
Activity: 2646
Merit: 1137
All paid signature campaigns should be banned.
November 29, 2013, 10:23:53 AM
#79
Just do this:

For example, let's look at this recent block:
https://blockchain.info/block-index/443364/0000000000000003b90c99433d07078d5498910442489383f18e250db0a843e2

301 txs and 480 KB.  If the block message were changed to carry just tx hashes, it would drop from 480 KB to ~10 KB, a ~98% reduction in size.

then you do not need to do this:

we should schedule a change to the max block size to be 10 MB in a year or two

and we can easily support more TPS than PayPal.
newbie
Activity: 28
Merit: 0
November 29, 2013, 10:07:21 AM
#78
Why not a dynamic blocksize?
sr. member
Activity: 353
Merit: 253
November 29, 2013, 09:51:40 AM
#77
Hi guys,
My opinion is that we should schedule a change to the max block size to be 10 MB in a year or two,
so we match the tps of PayPal. Would the world fall if we made such a change? In any case, miners could be free to set a lower threshold if they believe it's appropriate.
Micro transactions, though, would still be made off-chain, using a network of micropayment channels, as they require fast confirmation.

Best regards,
Ilpirata79

Badly typed on my iPad
sr. member
Activity: 406
Merit: 250
November 29, 2013, 07:32:30 AM
#76
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 29, 2013, 12:32:41 AM
#75
Well, larger blocks are never going to be faster than, or as fast as, smaller blocks.  The goal is to reduce the latency per kB.  The faster a block can be broadcast, the lower the "orphan cost" per tx.  Larger blocks will still always have a higher orphan rate, but they also have higher gross revenue.

The good news is that the current method is about the slowest, most bloated method possible for broadcasting a block.  Any change is an improvement.

One proposal is to include only tx hashes in the block message.  Currently a block message consists of a block header and a list of all txs in the block.  Most nodes already know of most or all of these txs; hell, they have already verified them and included them in their memory pool.  This simply makes the block message larger than it needs to be.  Instead, the block message can consist of the block header and a list of tx hashes.  The average tx is ~400 bytes and a SHA-256 hash is 32 bytes, so we are talking about a 90%+ reduction in block message size, and thus propagation time, and thus orphan costs.

For example, let's look at this recent block:
https://blockchain.info/block-index/443364/0000000000000003b90c99433d07078d5498910442489383f18e250db0a843e2

301 txs and 480 KB.  If the block message were changed to carry just tx hashes, it would drop from 480 KB to ~10 KB, a ~98% reduction in size.
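The arithmetic behind that figure, as a quick back-of-envelope (an 80-byte header plus one 32-byte hash per tx; the exact savings would depend on how the hash list is encoded):

Code:
# Back-of-envelope for the example block above: 301 txs, ~480 KB as a full block message.
HEADER_BYTES = 80   # serialized block header
TXID_BYTES = 32     # one SHA-256 hash per tx

n_tx = 301
full_block_message = 480_000                           # observed size quoted above, in bytes
hash_only_message = HEADER_BYTES + n_tx * TXID_BYTES   # 9,712 bytes, i.e. roughly 10 KB

reduction = 1 - hash_only_message / full_block_message
print(hash_only_message, round(reduction * 100, 1))    # 9712 bytes, ~98.0% smaller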

Another client-side change would be removing the double verification of txs.  This may have already been done; I haven't looked at that code since before v0.7.

Improving the efficiency of mining benefits all users, not just miners, as orphaned blocks are simply wasted energy; they lower the effective security of the network.  Further size reduction is possible but requires more significant changes to the protocol.
legendary
Activity: 924
Merit: 1132
November 29, 2013, 12:00:07 AM
#74
I thought the idea was to reduce the speed advantage of a smaller block, making it less expensive (in terms of block rewards) to create a larger one; i.e., subsidize the guy who finds a block *FIRST* regardless of the size of the competing blocks.

I see your point though; it's a 'prisoner's dilemma' for the miners.  In the current environment, a miner would lose revenue by switching behavior, even though the network would benefit, with absolutely no change to mining revenues, if all the miners switched at once.  Conversely, in an environment where the protocol were already deployed, a miner would lose revenue if he switched to issuing blocks with no preceding announcement.  In both cases, whichever miner doesn't go with the majority loses due to an increased rate of orphaned blocks compared to other miners with the same hashing power.

If you've got a good way to make the blocks propagate faster, though, that will largely accomplish the same thing: get the first block distributed before the smaller block it would otherwise be competing with is even found.  A first strategy, I suppose, would be transmitting to the neighbors with the most peers and greatest bandwidth first, if a peer knows which neighbors those are.
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 28, 2013, 11:13:43 PM
#73
No, miners aren't going to switch to a new block until they receive and verify it, so, pre-announcement or not, the faster block will be received and verified first and miners will switch to that.  Doing anything else would be subsidizing the owner of the slower block at the expense of their own revenue.

There are real solutions which can reduce the broadcast time by 90% or more.
legendary
Activity: 924
Merit: 1132
November 28, 2013, 11:06:55 PM
#72
The advantage is that the first block announced will win an even race (two blocks at the same level), depending on the speed with which the lightweight message propagates rather than the speed at which the block propagates.  When miners who haven't yet found a block revert to the first block they heard about, that block gets an overwhelming advantage in terms of being the block that the next one will be built on.

The idea is to eliminate, if possible, the cost of block size in terms of orphaned blocks.  Miners are limiting block size specifically because a large block can take about 15 seconds to propagate across the network, while a smaller one can propagate faster.  The risk to them is that somebody else puts out a smaller block that propagates faster than theirs.

A lightweight message will cross the network in about 8 seconds, meaning no advantage for smaller blocks released even as little as one second later.  The only advantage a smaller block still has is that it can reach miners a few seconds sooner and get them to mine on it for just a few seconds, until the block they heard about first actually gets there, even if that block is bigger.

If not for the "announce" messages, all the miners that received the later, smaller block first would mine on it until the next block was found (ten minutes on average) instead of until the earlier, larger block reaches them (< 15 seconds).

The difference between "under 15 seconds of mining effort" and "ten minutes of mining effort on average" gives the first block found an overwhelming advantage, even if it's a full megabyte.
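A rough way to quantify the orphan-risk numbers above, assuming block discovery is a Poisson process with a 600-second mean interval (a standard modelling assumption, not something stated in this thread): the chance that a competing block even appears during the propagation window is small, and a faster announce message shrinks it further.

Code:
# Probability that some other miner finds a competing block during the propagation window,
# assuming exponentially distributed block intervals with a 600 s mean.
import math

BLOCK_INTERVAL = 600.0  # mean seconds between blocks

def p_competitor(window_seconds: float) -> float:
    return 1 - math.exp(-window_seconds / BLOCK_INTERVAL)

print(round(p_competitor(15.0) * 100, 2))  # ~2.47% with ~15 s full-block propagation
print(round(p_competitor(8.0) * 100, 2))   # ~1.32% with an ~8 s announce message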

legendary
Activity: 1512
Merit: 1012
Still wild and free
November 28, 2013, 10:26:31 PM
#71
Nope.  The miners would still get to work on the first one they received.  Then, if they get the announced block within 15 seconds, they'll switch to the first block they heard about.  But if they don't get the announced block (and if it's a fakeout message they won't) then they don't change what they're doing at all.  Even if that includes *FINDING* a new block before the announced block arrives, in which case the announced block *will* become an orphan.


So if I understand properly, the rationale behind your proposal is to have majority acceptance be decided by the "light announcements" rather than by full blocks (while still being resilient to fake light announcements).  Is there any advantage to doing this?