
Topic: Luke Jr's 300kb blocks (Read 907 times)

legendary
Activity: 2674
Merit: 1029
April 03, 2019, 08:25:33 AM
#36
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in the form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

S-curve: why not?

hero member
Activity: 718
Merit: 545
April 03, 2019, 05:35:13 AM
#35
Can I be cheeky and say - This problem has already been fixed. We just need to survive until the fix is implemented.

( I'm a programmer - not a cryptographer. Slap me down on error. )

It's all to do with the new zk-STARKs. Like zk-SNARKs, but a faster, smaller, better, hash-based, quantum-secure version?

They still take hours to compute and are still far too large, but that'll be 'fixed'. The pace of improvement is just so fast at the moment Smiley

When it is - in the next 10/20 years - we'll be able to use a recursive fixed-size zero-knowledge proof that proves the latest block is valid, that it is linked to its parent, that the parent's proof is valid, and the cumulative PoW..  Roll Eyes

Some of this tech is already out there in various PoPoW (Proofs of Proof-of-Work) forms using the original SNARKs.

So - in 51 years - you'll have a zk-proof that the first 50 years of the blockchain are valid, with the cumulative total PoW, in about 10-20 MB, and then the normal chain for the last year.
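For illustration, a minimal Python sketch of the recursive proof structure described above. Everything here (the Proof type, prove_block, verify) is a hypothetical stand-in, not any real zk-STARK/SNARK library; it only shows the shape of the recursion: each proof attests to block validity, linkage to the parent, validity of the parent's proof, and the cumulative PoW.

Code:
import hashlib
from dataclasses import dataclass

def h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

@dataclass
class Proof:
    tip_hash: str          # hash of the block this proof covers up to
    cumulative_work: int   # total PoW attested so far
    # a real recursive proof stays roughly fixed-size no matter how long the history is

def verify(proof: Proof) -> bool:
    # Placeholder: a real verifier would check the proof cryptographically.
    return proof is not None

def prove_block(block: dict, parent_proof: Proof) -> Proof:
    """Extend the chain-validity proof by one block (conceptual only)."""
    assert block["prev_hash"] == parent_proof.tip_hash   # linked to its parent
    assert verify(parent_proof)                          # parent proof is valid
    # (block validity rules would be enforced inside the proving circuit in reality)
    return Proof(tip_hash=block["hash"],
                 cumulative_work=parent_proof.cumulative_work + block["work"])

# Toy usage: fold two blocks into one constant-size proof.
genesis_proof = Proof(tip_hash=h("genesis"), cumulative_work=0)
b1 = {"prev_hash": h("genesis"), "hash": h("b1"), "work": 100}
b2 = {"prev_hash": h("b1"), "hash": h("b2"), "work": 120}
print(prove_block(b2, prove_block(b1, genesis_proof)).cumulative_work)  # 220

A new node would then verify one such proof instead of replaying the full history, and only download the recent chain normally.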
legendary
Activity: 3430
Merit: 3083
April 02, 2019, 03:43:55 AM
#34
I don't see how hard forks are possible anymore at all

There are a lot of other hardfork changes that are totally non-controversial, so you're exaggerating
legendary
Activity: 1372
Merit: 1252
April 01, 2019, 10:19:17 PM
#33
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in the form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function). If I recall, you put me to rights on the demand-side issue, and sort of said yeah maybe ... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?



I don't think anything but linear is safe... you don't really know how hardware will progress over time, how much it will cost and so on. I don't see any solution to so-called "on-chain scaling"; that's why Bitcoin has become de facto digital gold and not something that can be used realistically at scale (as in global usage) on-chain. This doesn't mean research in the field should stop, you never know... however, what's clear is that most of the effort in Bitcoin should be spent reviewing already-existing code rather than on more exotic stuff. I mean, Core had that inflation bug recently while other clients weren't affected. So Luke should be reviewing code instead of attempting stuff that probably will never have any consensus anyway.

Would you agree that if usage is up, HD space cost goes down, and bandwidth cost goes down, then we are effectively seeing the 1MB becoming smaller? For no reason?

I.e. we can afford larger blocks, at least to the extent that bandwidth and HD space costs fall and CPU power per $ goes up?





Yes, I agree that we could afford doubling the blocksize right now and it would be far from the end of the world. However, the main point being discussed here by those who consider all the game theory involved is: HOW do you make a blocksize increase without ending up in a clusterfuck of 2 competing "Bitcoins", with all the drama that always carries? (exchanges listing one or the other, price crashing, miners speculating with hashrate, everyone claiming they own the real bitcoin....) Because of that, I don't see how hard forks are possible anymore at all, no matter what the hardfork is about; there wouldn't be enough consensus, so you would end up with 2 competing coins.
legendary
Activity: 2646
Merit: 1722
https://youtu.be/DsAVx0u9Cw4 ... Dr. WHO < KLF
February 24, 2019, 05:40:08 AM
#32
My final take on this thread:

Do something, anything, about fast sync, not with Luke's approach but in his spirit: no SPVs, more full nodes.

Since the OP started this thread I've been banging my head over and over again




... snip ...

And apparently shrinking the blocksize is a solution now  Cheesy

Whereas the alternative solution is to make money (a digital cash!) 'heavier'? ...

- https://news.mlh.io/i-hacked-the-middle-out-compression-from-silicon-valley-06-16-2015

"... Please let me know if I overlooked anything that could make me a member of the Three Comma Club. I want a boat. And doors that open vertically..."

- https://www.hoover.org/research/middle-out-economics

"... in which he advanced a middle-out thesis for economic growth: “The fundamental law of capitalism is, if workers don’t have any money, businesses . . . don’t have any customers.” ..."

  Roll Eyes

   Cheesy
legendary
Activity: 2898
Merit: 1823
February 24, 2019, 03:23:08 AM
#31
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in the form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function). If I recall, you put me to rights on the demand-side issue, and sort of said yeah maybe ... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?



I don't think anything but linear is safe... you don't really know how hardware will progress over time, how much it will cost and so on. I don't see any solution to so-called "on-chain scaling"; that's why Bitcoin has become de facto digital gold and not something that can be used realistically at scale (as in global usage) on-chain. This doesn't mean research in the field should stop, you never know... however, what's clear is that most of the effort in Bitcoin should be spent reviewing already-existing code rather than on more exotic stuff. I mean, Core had that inflation bug recently while other clients weren't affected. So Luke should be reviewing code instead of attempting stuff that probably will never have any consensus anyway.

Would you agree that if usage is up, HD space cost goes down, and bandwidth cost goes down, then we are effectively seeing the 1MB becoming smaller? For no reason?

I.e. we can afford larger blocks, at least to the extent that bandwidth and HD space costs fall and CPU power per $ goes up?


Have you recently tried doing the 200GB initial blockchain download? It is a pain. It might be easy with your bandwidth, but not all Bitcoin users will have access to high bandwidth, or be able to upgrade to higher bandwidth. I believe they would quit.
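As a rough back-of-the-envelope check of that point (200GB is the figure from the post; the connection speeds are just examples, and real IBD is often limited by validation rather than bandwidth, as noted later in the thread):

Code:
# Download time alone for a ~200 GB initial blockchain download.
chain_gb = 200
for mbit_per_s in (5, 10, 50, 100):
    seconds = chain_gb * 8_000 / mbit_per_s   # GB -> megabits (x8000), then / (Mbit/s)
    print(f"{mbit_per_s:>4} Mbit/s: ~{seconds / 3600:.1f} hours")
# 5 Mbit/s ~89h, 10 Mbit/s ~44h, 50 Mbit/s ~9h, 100 Mbit/s ~4.4h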
legendary
Activity: 2674
Merit: 1029
February 23, 2019, 09:06:05 PM
#30
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in the form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function). If I recall, you put me to rights on the demand-side issue, and sort of said yeah maybe ... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?



I don't think anything but linear is safe... you don't really know how hardware will progress over time, how much it will cost and so on. I don't see any solution to so-called "on-chain scaling"; that's why Bitcoin has become de facto digital gold and not something that can be used realistically at scale (as in global usage) on-chain. This doesn't mean research in the field should stop, you never know... however, what's clear is that most of the effort in Bitcoin should be spent reviewing already-existing code rather than on more exotic stuff. I mean, Core had that inflation bug recently while other clients weren't affected. So Luke should be reviewing code instead of attempting stuff that probably will never have any consensus anyway.

Would you agree that if usage is up, HD space cost goes down, and bandwidth cost goes down, then we are effectively seeing the 1MB becoming smaller? For no reason?

I.e. we can afford larger blocks, at least to the extent that bandwidth and HD space costs fall and CPU power per $ goes up?



legendary
Activity: 1372
Merit: 1252
February 17, 2019, 10:43:17 PM
#29
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in the form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function). If I recall, you put me to rights on the demand-side issue, and sort of said yeah maybe ... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?



I don't think anything but linear is safe... you don't really know how hardware will progress over time, how much it will cost and so on. I don't see any solution to so-called "on-chain scaling"; that's why Bitcoin has become de facto digital gold and not something that can be used realistically at scale (as in global usage) on-chain. This doesn't mean research in the field should stop, you never know... however, what's clear is that most of the effort in Bitcoin should be spent reviewing already-existing code rather than on more exotic stuff. I mean, Core had that inflation bug recently while other clients weren't affected. So Luke should be reviewing code instead of attempting stuff that probably will never have any consensus anyway.
legendary
Activity: 2674
Merit: 1029
February 17, 2019, 04:34:23 AM
#28
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in the form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.

I have asked this before, and not really had a straight answer: why should block size not be increased with an S-curve function? (https://en.wikipedia.org/wiki/Logistic_function). If I recall, you put me to rights on the demand-side issue, and sort of said yeah maybe ... but my memory is sketchy and it was a long time ago.

I would like a square answer as to why not?

It could end the whole blocksize issue forever and a day, with almost nil impact on the current core philosophy.

What is the negative technical argument?

Or any negative argument at all to this?

legendary
Activity: 1372
Merit: 1252
February 17, 2019, 12:15:56 AM
#27
I saw this as well.

Is his argument that "full nodes are dropping" and he does not want centralisation, in order to keep the network strong?

Where can we get a figure on how many full nodes there are? I saw Coin Dance had node counts, but could not see full nodes specifically.





I don't see how his proposal would make people suddenly make the effort to run full nodes. The current growth of the blockchain is not that big of a deal within the current settings. My folder is around 235 GB. A 4TB drive is pretty cheap these days, so it should have you covered for years, and during these years I assume that disk sizes will keep growing as well.

How realistic is it that it becomes impossible to host the blockchain on a single drive? I don't see it happening at the current linear growth rate.

Maybe it would be cool to have a way to host the blockchain across different HDDs. I don't see why this isn't possible; the client just needs to know where it left off in the last file to keep downloading and validating onto the next HDD. The entire blockchain is still hosted, so it counts as a full node.

Anyway, my point was that his 300kB idea will not change people's minds about making the effort to run a full node. It's a matter of mentality, not whether we have 1MB or 300kB. The difference is not that big of a deal imo. People without the right mentality to run a full node will stay on Electrum or whatever.
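For what it's worth, a quick sketch of that "covered for years" claim; the 235 GB and 4TB figures are from the post above, and the average block sizes are assumptions for illustration:

Code:
# When would a 4 TB drive fill up, starting from a ~235 GB chain?
current_gb, drive_gb = 235, 4000
blocks_per_year = 144 * 365                 # ~52,560 blocks/year
for avg_block_mb in (1.0, 1.5, 4.0):        # today-ish averages vs. theoretical segwit worst case
    growth = blocks_per_year * avg_block_mb / 1000      # GB added per year
    years = (drive_gb - current_gb) / growth
    print(f"avg {avg_block_mb} MB blocks: +{growth:.0f} GB/yr, ~{years:.0f} years to fill")
# ~72 years at 1 MB, ~48 years at 1.5 MB, ~18 years even at a constant 4 MB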
legendary
Activity: 2674
Merit: 1029
February 16, 2019, 07:13:42 PM
#26
It's not as bad as it seems. If you're running a decent setup, the sync time is pretty reasonable actually. I set up a second node myself earlier this year, and it synced in like 11-12 hours, and that's while I expected it to take a day at least. RPis are a different story, but then again, run a decent setup and you don't have these problems.

In the end, the average person won't run a node even with a very small block size. Why? They just don't give a fuck. People who do give a fuck, and merchants, will continue to spec out their hardware to run their node in the most stable possible manner.
Are you arguing something like "it is just fine, people should sit on the backbone (like me) and boot in half a day if they've got real incentives, being a bitcoin whale (like me)"? And you expect Luke to appreciate your argument and back off?  Grin

Average users have incentives to join, like you and other bitcoin whales Tongue they just can't, and it is getting worse as time passes. A UTXO commitment/reconciliation protocol could change the scene radically, imo.



What LJr really wants to do is be PeerCoin
legendary
Activity: 2674
Merit: 1029
February 16, 2019, 07:10:20 PM
#25
It's a really old proposal. I thought it was trying to be too smart/subtle.


The idea was to reduce to a 300kB base size, but also set a graduated increase schedule based on absolute block heights (the 300kB step was set to take place at a blockheight back in 2017 IIRC). It finally reaches 1MB base size again in 2024, and continues at a percentage rate (also IIRC). In other words, if the proposal were adopted today, we'd be past the 300kB stage already.

This was partly a psychologically based proposal, which is why people reacted badly, lol. I think Luke knew that a 300kB base size would get laughed off, but he figured that, since the blockchain grows constantly, the closer we get to 2024 (when the 1MB base would be reached again), the more people might begin to realise that reducing from the 1MB base size was smarter than it sounded back when the blockchain was a more manageable size.

Note that all of this is using base block figures; the real possible block size would be 2-4x the base block (so 300kB would in fact be 600-1200kB if all transactions in a given block were segwit txs).

Bear in mind that as we're still not in 2024, Luke's plan may actually work, and reducing the base from 1MB to whatever the schedule would be stepped to now (which has of course increased beyond 300kB) might look good to some people. It would probably still take some convincing, but there's still 5 years left on the clock.


OK, if this is the case, it's identical to my argument for some sort of increase over time at some rate ax^n where n is 0.05 or some such; you could probably work n out as a function of blockspace, network load, usage, user base (estimates of course) and tech improvement curves for HD space and bandwidth.

Edit

Sorry, I mean f(x) = L / (1 + e^(-k(x - x0))), the logistic function (S-curve), for block size, and I have been saying that now for about 2 years???
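For concreteness, a minimal sketch of what such a logistic schedule could look like if it were keyed on calendar year (a real schedule would use block height); the ceiling, midpoint and steepness values are made up purely for illustration, not a proposal:

Code:
import math

def s_curve_limit_mb(year: float,
                     ceiling_mb: float = 4.0,    # L: eventual cap (illustrative)
                     midpoint: float = 2025.0,   # x0: year of fastest growth (illustrative)
                     steepness: float = 0.4):    # k: sharpness of the transition (illustrative)
    """Logistic block size schedule: f(x) = L / (1 + e^(-k*(x - x0)))."""
    return ceiling_mb / (1.0 + math.exp(-steepness * (year - midpoint)))

for y in (2019, 2022, 2025, 2030, 2040):
    print(y, round(s_curve_limit_mb(y), 2), "MB")
# 2019 0.33, 2022 0.93, 2025 2.0, 2030 3.52, 2040 3.99 -- levels off near the 4 MB cap

The point of the curve is exactly the argument above: growth is slow at first, fastest in the middle, and flattens out at a hard ceiling instead of growing forever.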
legendary
Activity: 2674
Merit: 1029
February 16, 2019, 07:06:11 PM
#24
I saw this as well.

Is his argument that "full nodes are dropping" and he does not want centralisation, in order to keep the network strong?

Where can we get a figure on how many full nodes there are? I saw Coin Dance had node counts, but could not see full nodes specifically.



legendary
Activity: 1456
Merit: 1177
Always remember the cause!
February 13, 2019, 11:31:19 AM
#23
My final take on this thread:

Do something, anything, about fast sync, not with Luke's approach but in his spirit: no SPVs, more full nodes.

Since the OP started this thread I've been banging my head over and over again


legendary
Activity: 2170
Merit: 1427
February 13, 2019, 10:53:33 AM
#22
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.
Definitely agree that more competition results in lower fees, but it comes down to transactional volume in the end. Many hops add up to a pretty penny (pretty enough to continue running a node) after a month or so. I strongly believe that Lightning is capable of that with enough adoption.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.
That's a valid concern. These physical nodes indeed have a shelf life, which I seem to have ignored. Thanks for pointing that out.
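To put toy numbers on the "pretty penny" question: Lightning routing fees are a base fee per forwarded payment plus a proportional part in ppm, so income is easy to sketch. All the rates and volumes below are invented for illustration only:

Code:
base_fee_sat = 1             # base fee per forwarded HTLC (assumed)
fee_rate_ppm = 100           # proportional fee: 100 ppm = 0.01% (assumed)

def routing_fee_sat(amount_sat: int) -> float:
    return base_fee_sat + amount_sat * fee_rate_ppm / 1_000_000

forwards_per_day = 50        # hypothetical routing volume
avg_amount_sat = 100_000     # hypothetical average forward (0.001 BTC)

monthly_sat = 30 * forwards_per_day * routing_fee_sat(avg_amount_sat)
print(f"~{monthly_sat:,.0f} sat/month")   # ~16,500 sat/month at these rates

Whether that counts as a pretty penny depends entirely on volume and on how far fee competition pushes the ppm rate down.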
staff
Activity: 4326
Merit: 8951
February 13, 2019, 10:06:11 AM
#21
and later to earn a pretty penny by scooping up routing fees
That's almost certainly not the case. The only time when fees could at all be high is when there aren't many people doing it.  What we've seen so far in lightning (and previously in joinmarket) is that fees rapidly race to pretty low values in competition.

Quote
and Lightning provides that incentive, especially in the form of these plug-n-play physical nodes.
At the moment, but eliminating any need to run a node is a major focus of development effort for lightning developers.

There is an inherent incentive: radically improved security and privacy.  But it's only enough to overcome a certain (low) level of cost... thus the concern about managing that cost.

As an aside, a lot of that "node hardware" being sold won't keep up for that long due to limited memory/storage/speed.
legendary
Activity: 2170
Merit: 1427
February 13, 2019, 09:54:24 AM
#20
Careful with those assumptions,  if you filter out nodes from a couple popular VPS providers and nodes that have obvious behavioral tells that they're fake... the picture looks much less rosy.  A lot of "nodes" are sybils set up-- presumably-- to track transaction origins.

That makes sense, but still, even taking that into consideration: given that more and more individuals are running a lightning node at this stage for geeky purposes (hundreds of physical nodes have been sold, and there is plenty of demand for more, but little to no supply to meet it), and later to earn a pretty penny by scooping up routing fees, we'll see the ratio between 'fake' nodes and legit ones become completely different.

People need an incentive to run a node at home, and Lightning provides that incentive, especially in the form of these plug-n-play physical nodes.
staff
Activity: 4326
Merit: 8951
February 13, 2019, 09:50:44 AM
#19
would disappear if the base size (1MB) was higher than the weight limit (600kWU).
There isn't any _size_ limit in the protocol or implementation at all anymore, the weight limit completely replaced the blocksize limit. There isn't any "base size" in the protocol.  So a limit to 600k weight would result in blocks roughly 15% of current sizes given current usage patterns (though probably more since usage would change).
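A quick sketch of the arithmetic behind that ~15% figure. The weight formula is the BIP141 one (weight = 4 x non-witness bytes + witness bytes, limit 4,000,000 WU); the example transaction byte splits are illustrative only:

Code:
WEIGHT_LIMIT = 4_000_000     # consensus limit that replaced the old 1 MB size limit

def tx_weight(non_witness_bytes: int, witness_bytes: int) -> int:
    return 4 * non_witness_bytes + witness_bytes

legacy = tx_weight(250, 0)    # ~250-byte legacy tx -> 1000 WU
segwit = tx_weight(140, 110)  # ~250-byte segwit tx with much of its data in the witness -> 670 WU

print(WEIGHT_LIMIT // legacy)   # ~4000 such legacy txs per block
print(WEIGHT_LIMIT // segwit)   # ~5970 such segwit txs per block
print(600_000 / WEIGHT_LIMIT)   # 0.15 -- a 600k WU limit is 15% of today's limit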

legendary
Activity: 3430
Merit: 3083
February 13, 2019, 09:37:00 AM
#18
So Luke's new proposal is actually a way to soft-fork to 600k weight units, which is not at all the same as 300kB.

That's truly bizarre as an idea, apologies to Luke. That would mean keeping the base size at 1MB, and only being able to use 600kB within that base limit. The segwit discount would still reduce fees for segwit tx's, but the incentive to use segwit tx's as a way to boost capacity would disappear if the base size (1MB) was higher than the weight limit (600kWU). Don't see the rationale for that at all.


I'm disappointed with the press circus Luke has contributed to here, -- it's not the first time he's set things up perfectly for his words to be taken out of context and then been so so surprised at what happened. But he does make useful contributions, and in the fullness of time drawing more attention to the initial sync problem may be one too, even though I disagree with the approach.

It seems like Luke has a fascination for exploring possibilities without much reasoning as to why the ends are desirable. In the case of the actual BIP141 segwit soft fork, that approach was great, as Luke was motivated to figure out a way to implement segwit. Someone with an "it'll never work" attitude would never have done so.


Blockstream has unpublished code that implements an alternative serialization that reduces tx sizes by around 25%.   I don't think it would actually improve IBD time except for very fast computers on fairly slow internet connections... initial sync is more utxo-update bound than bandwidth bound for most users. It might even slow it down, since the compact serialization is slower to decode. On a ludicrously fast machine (24 core 3GHz, nvme storage, syncing from local hosts over 10gbe) sync currently only proceeds at about 50mbit/sec.  I've been nagging them to publish it.  Their interest is in using it to increase capacity on the sat signal, but it's more generally useful.

50Mbit/s is high-end validation performance? Interesting.


I expect and hope that all the IBD activity will move into the background. After that happens, then the time it takes is less important than the resources-- and at that point a 25% bandwidth improvement would look pretty good.

Are you referring to the hybrid SPV concept? (SPV synchronisation finishes first, IBD continues in the background). Or new UTXO set tech?
staff
Activity: 4326
Merit: 8951
February 13, 2019, 09:27:20 AM
#17
"The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."

That said ... there are degrees. Smiley

I'm disappointed with the press circus Luke has contributed to here, -- it's not the first time he's set things up perfectly for his words to be taken out of context and then been so so surprised at what happened. But he does make useful contributions, and in the fullness of time drawing more attention to the initial sync problem may be one too, even though I disagree with the approach.

As you can see, there is steady/healthy growth.
Careful with those assumptions,  if you filter out nodes from a couple popular VPS providers and nodes that have obvious behavioral tells that they're fake... the picture looks much less rosy.  A lot of "nodes" are sybils set up-- presumably-- to track transaction origins.

Wasn't there an idea from sipa to change how transactions are serialized that reduced the entire chain's size? If one cares about IBD, one ought to be most interested in proposals that remediate the historic chain size as well as those that improve it forwards in time.
Blockstream has unpublished code that implements an alternative serialization that reduces tx sizes by around 25%.   I don't think it would actually improve IBD time except for very fast computers on fairly slow internet connections... initial sync is more utxo-update bound than bandwidth bound for most users. It might even slow it down, since the compact serialization is slower to decode. On a ludicrously fast machine (24 core 3GHz, nvme storage, syncing from local hosts over 10gbe) sync currently only proceeds at about 50mbit/sec.  I've been nagging them to publish it.  Their interest is in using it to increase capacity on the sat signal, but it's more generally useful.

I expect and hope that all the IBD activity will move into the background. After that happens, then the time it takes is less important than the resources-- and at that point a 25% bandwidth improvement would look pretty good.