Topic: Post your SegWit questions here - open discussion - big week for Bitcoin! - page 24.

copper member
Activity: 2856
Merit: 3071
https://bit.ly/387FXHi lightning theory
If we aren't continually filling blocks then that is a disaster.

Do you mean like this, this, and several other blocks before them?

These were just released and are NOT full (as I understand it). If the limit is 1,000 KB and some of these are 1 KB short, then there's a problem, isn't there? Most transactions, if simply sent from one sending address to one receiving address (which is the most likely case), are less than 500 bytes (most less than 300 bytes), so these blocks aren't being filled, since there is space for at least another TWO transactions to fit in that block.
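
A quick sanity check of that arithmetic, as a minimal sketch; the observed block size and the per-transaction byte counts are illustrative assumptions, not measured values:

Code:
# Back-of-the-envelope check: a block ~1 KB short of the 1 MB limit
# still has room for multiple simple transactions.
# All figures below are illustrative assumptions, not measured values.
BLOCK_LIMIT = 1_000_000   # bytes (the pre-segwit 1 MB limit)
observed_block = 999_000  # hypothetical block, ~1 KB below the limit
typical_tx = 300          # bytes for a simple 1-input, 1-output spend

free_space = BLOCK_LIMIT - observed_block
print(free_space)                # 1000 bytes left unused
print(free_space // typical_tx)  # 3 more typical transactions would fit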

Maybe improving the network to not do this is a better place to start than segwit? (Although I'll accept segwit when it goes live, after 95% adoption.)
staff
Activity: 4326
Merit: 8951
If we aren't continually filling blocks then that is a disaster.
legendary
Activity: 1372
Merit: 1252
IIRC most are in favor of a well-designed dynamic block size algorithm. However, all current dynamic block size proposals can be relatively easily gamed by miners or others to either push the limit to something large and undesirable or to something small and still undesirable.

I'm not convinced the design problem can be solved, although Satoshi famously solved a supposedly insolvable problem to get us to where we are, so never say never.

But the issue with dynamic sizing is this: there will always be some practical absolute maximum blocksize, above which block validation would take too long for some critical proportion of the network to handle. So it will always be necessary to have some margin of safety below that absolute maximum as a de facto maximum.

And what, in practice, is the difference between that outcome and the current approach of making the blocksize a consensus rule? Miners can already choose less than the practical maximum, and they do. Increasingly less so, but occasionally blocks far less than 1MB make it into the chain. I'm failing to see how that state of affairs differs from having a capped dynamic size TBH, other than it being simpler. I would be happy to be proved wrong (dynamic resizing was my initial preference when the debate about blocksize in the community began).


Indeed, a dynamic blocksize sounds so elegant, since anything that doesn't require consensus and just acts as dictated by an algorithm would be ideal in a system like bitcoin, but as you said, it is easily exploitable. I've heard Monero addresses the spam trolls with a dynamic fee, but I'm not sure how it works... as far as I know, we already have a dynamic fee (the higher the transaction demand, the higher the fee), so I don't see how they are solving the problem.
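
For what it's worth, Monero's mechanism (inherited from CryptoNote) is, as I understand it, less a dynamic fee than a dynamic block size with a built-in reward penalty: blocks may grow past the median size of recent blocks, up to 2x, but the miner's block reward is cut quadratically for doing so, so a spammer must outbid the reward the miner forfeits. A minimal, simplified sketch of that penalty rule, with illustrative numbers:

Code:
def penalized_reward(base_reward, block_size, median_size):
    """CryptoNote-style block reward penalty (simplified sketch).

    Blocks up to the median of recent block sizes pay the full
    reward; larger blocks lose reward quadratically; anything
    over 2x the median is invalid outright.
    """
    if block_size <= median_size:
        return base_reward
    if block_size > 2 * median_size:
        raise ValueError("invalid: block exceeds 2x the median size")
    excess = block_size / median_size - 1   # in (0, 1]
    return base_reward * (1 - excess ** 2)

# Illustrative: padding a block to 1.5x the median forfeits 25%
# of the reward, so the extra fees must cover that loss.
print(penalized_reward(0.6, 450_000, 300_000))  # ~0.45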
legendary
Activity: 3430
Merit: 3083
What if the cost of the solution is bigger than the cost of the armies?

You're off topic, but your analogy is, in fact, a direct mapping of actual reality: actual armies (and other instruments of force/violence) are what prop up central banking hegemony. And if securing the Bitcoin network were more expensive than mustering an army that could take on every global superpower simultaneously, then it might be a better idea to "simply" do the latter. Would you like to start a new thread for this bizarre tangent you're leading us into? In the appropriate sub, maybe?
legendary
Activity: 1260
Merit: 1019
The so-called "Byzantine Generals" problem, where a message is sent securely over an insecure transmission channel; the cost of the solution is the energy used for proof-of-work hashing.
You say the words without thinking about their meaning.
The original problem has several subjects: generals, armies, and messengers.
There are no "miners-who-try-hashes-for-a-profit" in this mathematical scheme.
What if the cost of the solution is bigger than the cost of the armies?

legendary
Activity: 3430
Merit: 3083
...although Satoshi famously solved a supposedly insolvable problem...
Sorry, what problem?
(Bonus question: what is the cost of the solution?)

The so-called "Byzantine Generals" problem, where a message is sent securely over an insecure transmission channel; the cost of the solution is the energy used for proof-of-work hashing.

Why am I being quizzed about facts that we're both aware of?
legendary
Activity: 1260
Merit: 1019
...although Satoshi famously solved a supposedly insolvable problem...
Sorry, what problem?
(Bonus question: what is the cost of the solution?)
legendary
Activity: 3430
Merit: 3083
IIRC most are in favor of a well-designed dynamic block size algorithm. However, all current dynamic block size proposals can be relatively easily gamed by miners or others to either push the limit to something large and undesirable or to something small and still undesirable.

I'm not convinced the design problem can be solved, although Satoshi famously solved a supposedly insolvable problem to get us to where we are, so never say never.

But the issue with dynamic sizing is this: there will always be some practical absolute maximum blocksize, above which block validation would take too long for some critical proportion of the network to handle. So it will always be necessary to have some margin of safety below that absolute maximum as a de facto maximum.

And what, in practice, is the difference between that outcome and the current approach of making the blocksize a consensus rule? Miners can already choose less than the practical maximum, and they do. Increasingly less so, but occasionally blocks far less than 1MB make it into the chain. I'm failing to see how that state of affairs differs from having a capped dynamic size TBH, other than it being simpler. I would be happy to be proved wrong (dynamic resizing was my initial preference when the debate about blocksize in the community began).
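
To make the "easily gamed" concern above concrete, here is a toy simulation of one naive dynamic rule (a hypothetical rule, not any specific proposal): the next limit is twice the median size of the last 11 blocks. A mining cartel with a majority of blocks that simply pads its own blocks to the current limit ratchets the limit upward with no real demand at all:

Code:
import statistics

WINDOW = 11                    # look-back window (hypothetical rule)
history = [300_000] * WINDOW   # honest blocks of ~300 KB

def next_limit(hist):
    # Naive dynamic rule: limit = 2x the median of recent block sizes.
    return 2 * statistics.median(hist[-WINDOW:])

limit = next_limit(history)
for height in range(66):           # ~11 hours' worth of blocks
    cartel_turn = height % 10 < 6  # cartel mines ~60% of blocks (illustrative)
    size = limit if cartel_turn else 300_000  # cartel pads to the limit
    history.append(size)
    limit = next_limit(history)

# Each cartel burst drags the median up to the old limit,
# roughly doubling the limit every 10 blocks.
print(f"{limit / 1e6:.1f} MB")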
staff
Activity: 3458
Merit: 6793
Just writing some code
Thank you for your detailed answer and explanations... If you don't mind... so two years from now, with segwit, we somehow start filling blocks again... now what? Is the problem not compounded?

I'm not just asking about the tech here, but socially as well: in two years, if segwit is overwhelmed, the naysayers will start the "I told you so" carnival.

Please, I actually have no idea how we could truly scale bitcoin, hence my choice of the best of both worlds. The worst case would ruin us, but perhaps a self-adjusting max block weight would serve us better?
By that point in time, there should be multiple things available: 1) a well-liked hard fork proposal that contains a block weight increase as well as several other things, and 2) second layer solutions such as LN or sidechains. Right now, all available solutions are essentially just "kicking the can down the road," meaning that nothing will truly fix the problem, just delay the inevitable. The Bitcoin network cannot scale to VISA levels by block size alone; it requires second layer solutions such as LN and sidechains in order to scale up that high. Hopefully, by the time the block size becomes a problem again, these second layer solutions will be available.

Core devs WANT to raise the blocksize to 2MB, contrary to the popular btc reddit FUD, but they want to do it right, and right means segwit goes first; it's as simple as that. You have neutral people like Andreas.A advocating for segwit first before hardforking too. I don't know what else those tools need to realize they are wrong. If only we could all cooperate and get segwit going as soon as possible, we could potentially fuel the current rocket into another solar system, since with segwit a lot of cool features will be possible.

I think there was only one Core dev who wanted to stay at 1MB (or even make it smaller). The rest want 2MB.
Many of the Core devs are in favor of even larger block sizes (segwit has a max of 4 MB). IIRC most are in favor of a well-designed dynamic block size algorithm. However, all current dynamic block size proposals can be relatively easily gamed by miners or others to either push the limit to something large and undesirable or to something small and still undesirable.
legendary
Activity: 1372
Merit: 1252
legendary
Activity: 3430
Merit: 3083
perhaps a self-adjusting max block weight would serve us better?

You're demonstrating the same problem with your presentation here.

achow101 said nothing that could lead you to this conclusion, and you've said nothing to qualify the statement, so I'm going to ask you a question again.


What reasoning can you provide for wanting an algorithm that adjusts the size of witness blocks?
full member
Activity: 380
Merit: 103
Developer and Consultant
Quote
current transactions have a quadratic scaling problem; it would be possible to create 32MB of transaction(s) in one block that would take half an hour to process with current bitcoind on the current network protocol, bringing the network to a standstill. By the way, segwit transactions don't have this scaling problem; they scale linearly.

I am only just beginning to spread my wings in the deeper aspects of the protocol; you have just given me an interesting tidbit to pursue.

But does this not reinforce my statement? Segwit + 32 MB blocks...?

No. Why would it?

Ask yourself this question: why was the 32 MB limit abandoned in favour of a 1MB limit?

I don't need to ask myself; I know.

Patronizing attitude aside, you did not address the issue.

I think you may be having problems interpreting my intentions.


There was no content in what you or ck said that supported the idea that a 32 MB limit would be feasible. And yet you made a positive statement to the contrary (in bold above).

And so, I was asking you the most helpful question I could, in order to help you understand. If you're more interested in losing control of your ego/emotions, then you're definitely asking yourself the wrong questions (and asking in the wrong forum/website as well; we don't help people with their emotional outburst problems here).


Sorry if my statement put you off; your response was pretty curt and lacked some, shall I say, finesse. I wanted to say that since we already know we will fill up the blocks even with segwit, why not shift the conversation from "blockweight == 4 MB" and make it really about true scaling while maintaining functionality? A 32 MB weight is large enough to make "purists" like myself stop making noise and really think.

Again, I'm sorry; in hindsight perhaps I did misinterpret.
full member
Activity: 380
Merit: 103
Developer and Consultant
legendary
Activity: 3430
Merit: 3083
Quote
current transactions have a quadratic scaling problem; it would be possible to create 32MB of transaction(s) in one block that would take half an hour to process with current bitcoind on the current network protocol, bringing the network to a standstill. By the way, segwit transactions don't have this scaling problem; they scale linearly.

I am only just beginning to spread my wings in the deeper aspects of the protocol; you have just given me an interesting tidbit to pursue.

But does this not reinforce my statement? Segwit + 32 MB blocks...?

No. Why would it?

Ask yourself this question: why was the 32 MB limit abandoned in favour of a 1MB limit?

I don't need to ask myself; I know.

Patronizing attitude aside, you did not address the issue.

I think you may be having problems interpreting my intentions.


There was no content in what you or ck said that supported the idea that a 32 MB limit would be feasible. And yet you made a positive statement to the contrary (in bold above).

And so, I was asking you the most helpful question I could, in order to help you understand. If you're more interested in losing control of your ego/emotions, then you're definitely asking yourself the wrong questions (and asking in the wrong forum/website as well; we don't help people with their emotional outburst problems here).
staff
Activity: 3458
Merit: 6793
Just writing some code
Patronizing attitude aside, you did not address the issue. I am not campaigning for 32 MB blocks, simply making my position known and asking... "why not?" It's a hell of a lot more expensive now to spam transactions; even with a script just to troll, you pay a significant amount, and nothing short of a bored billionaire or a state can sustain that... now let's be honest, if a billionaire seriously decided to put the screws to bitcoin, we'd all feel it. They were willing to increase the overall size to just below 4 MB, so why not just REVERT to 32 MB and have segwit?
First of all, the old 32 MB limit was not actually a maximum block size but rather the maximum message size. This effectively sets the upper limit of any maximum, so with segwit and the largest possible blocks, that would be an 8 MB max block size but a 32 MB max block weight.
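
For reference, the 8 MB / 32 MB figures follow from segwit's weight accounting (BIP141): weight = 3 x base size + total size, so a witness-free block hits the weight cap at one quarter of it. A sketch of the arithmetic:

Code:
def block_weight(base_size, total_size):
    # BIP141: non-witness bytes count 4x, witness bytes count 1x,
    # which is equivalent to 3 * base_size + total_size.
    return 3 * base_size + total_size

# Current rule: the 4,000,000-weight cap limits a witness-free
# block (total_size == base_size) to 1 MB of base data.
assert block_weight(1_000_000, 1_000_000) == 4_000_000

# Scaled 32x, as discussed above: a 32 MB weight cap would allow
# at most an 8 MB base block.
assert block_weight(8_000_000, 8_000_000) == 32_000_000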

While segwit makes sighashing linear, having 32 times the maximum means that it will take at most 32 times longer to verify the worst-case block. That can take up to several minutes.

That aside, making the maximum block size (with segwit and all, so actually max block weight) 32 MB puts a significant strain on all full nodes. That is a theoretical maximum of 32 MB every ten minutes, which amounts to ~4.6 GB every day. This means a few things: the blockchain will grow at a maximum rate of ~4.6 GB every single day, a lot of download bandwidth will be eaten up, and even more upload bandwidth will be eaten up. This means that it will become very difficult for regular users to maintain proper full nodes (i.e. default settings as people normally use, no bandwidth limiting). This will hurt decentralization, as full nodes will be centralized to high-bandwidth, high-powered servers likely located in data centers. At the very least, it becomes very costly to maintain a full node.
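
The growth figures above are straightforward arithmetic from 144 blocks per day at the 32 MB worst case, and the same numbers give the ~140 GB-per-month figure cited elsewhere in the thread:

Code:
BLOCKS_PER_DAY = 24 * 60 // 10   # one block per ~10 minutes = 144
MAX_BLOCK_MB = 32                # worst-case block size discussed here

per_day_gb = BLOCKS_PER_DAY * MAX_BLOCK_MB / 1000   # ~4.6 GB/day
per_month_gb = per_day_gb * 30                      # ~138 GB/month

print(f"{per_day_gb:.1f} GB per day, {per_month_gb:.0f} GB per month")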

Besides the cost of operating a full node, having such a large maximum makes starting up a new full node even more expensive than it already is. The full node first has to download the entire blockchain. Right now it is at 100 GB. Should the blockchain grow at 4.6 GB per day, that would become very large, very quickly. People would be spending hours, probably days, to download and verify the entire thing.

Now you might say that this won't happen as this is the worst case scenario. However, with these proposals you always need to think of the worst case scenario. If the worst case scenario cannot be handled, then the proposal needs to be changed such that the worst case scenario can be handled. You can't just say that the worst case scenario probably won't happen because there is still a chance that the worst case can happen, and that is not good, especially with changing consensus being so difficult now.
full member
Activity: 380
Merit: 103
Developer and Consultant
Quote
current transactions have a quadratic scaling problem; it would be possible to create 32MB of transaction(s) in one block that would take half an hour to process with current bitcoind on the current network protocol, bringing the network to a standstill. By the way, segwit transactions don't have this scaling problem; they scale linearly.

I am only just beginning to spread my wings in the deeper aspects of the protocol; you have just given me an interesting tidbit to pursue.

But does this not reinforce my statement? Segwit + 32 MB blocks...?

No. Why would it?

Ask yourself this question: why was the 32 MB limit abandoned in favour of a 1MB limit?

I don't need to ask myself; I know.

Patronizing attitude aside, you did not address the issue. I am not campaigning for 32 MB blocks, simply making my position known and asking... "why not?" It's a hell of a lot more expensive now to spam transactions; even with a script just to troll, you pay a significant amount, and nothing short of a bored billionaire or a state can sustain that... now let's be honest, if a billionaire seriously decided to put the screws to bitcoin, we'd all feel it. They were willing to increase the overall size to just below 4 MB, so why not just REVERT to 32 MB and have segwit?
legendary
Activity: 2674
Merit: 3000
Terminated.
-snip-
But does this not reinforce my statement? Segwit + 32 MB blocks...?

I hope their idea works out; I am planning to change most of my monetary transactions to bitcoin-based ones... would be a shame if it dies.
That would be the *right* path towards decentralization. Good luck with a decentralized coin in which malicious entities can spam up to ~140 GB worth of data per month (and this is just counting the 32 MB blocks).

-snip-
Unfortunately, fast, cheap, global, decentralized, onchain, massive-volume transactions all in one don't exist. If someone came up with that idea, then that someone would release a new coin and this new coin would go from zero to hero, but as of right now, it's a pipe dream.

So the best we've got is a "small-ish" or conservative block size, then building on top. I don't see any other method that would remain as decentralized. I'm OK with an increase to 2MB, but we must do it right, and right means we must get segwit activated before doing so.
If you take a look in r/btc you will find that some of these *people* tend to quote Satoshi often to suit their agenda. They seem to be fine with using SPV wallets (which is an absurd trade-off). You're right about the non-existence of the mentioned combination. Either it's a decentralized coin with low TPS (on layer 1) or it's a centralized coin with high TPS.
legendary
Activity: 1372
Merit: 1252
Quote
current transactions have a quadratic scaling problem; it would be possible to create 32MB of transaction(s) in one block that would take half an hour to process with current bitcoind on the current network protocol, bringing the network to a standstill. By the way, segwit transactions don't have this scaling problem; they scale linearly.

I am only just beginning to spread my wings in the deeper aspects of the protocol; you have just given me an interesting tidbit to pursue.

But does this not reinforce my statement? Segwit + 32 MB blocks...?

No. Why would it?

Ask yourself this question: why was the 32 MB limit abandoned in favour of a 1MB limit?

Most people who want massive blocks don't usually think... they just want stuff to be fast and cheap, but they don't think about what the consequences of fast and cheap are. Unfortunately, fast, cheap, global, decentralized, onchain, massive-volume transactions all in one don't exist. If someone came up with that idea, then that someone would release a new coin and this new coin would go from zero to hero, but as of right now, it's a pipe dream.

So the best we've got is a "small-ish" or conservative block size, then building on top. I don't see any other method that would remain as decentralized. I'm OK with an increase to 2MB, but we must do it right, and right means we must get segwit activated before doing so.
legendary
Activity: 3430
Merit: 3083
Quote
current transactions have a quadratic scaling problem; it would be possible to create 32MB of transaction(s) in one block that would take half an hour to process with current bitcoind on the current network protocol, bringing the network to a standstill. By the way, segwit transactions don't have this scaling problem; they scale linearly.

I am only just beginning to spread my wings in the deeper aspects of the protocol; you have just given me an interesting tidbit to pursue.

But does this not reinforce my statement? Segwit + 32 MB blocks...?

No. Why would it?

Ask yourself this question: why was the 32 MB limit abandoned in favour of a 1MB limit?
full member
Activity: 380
Merit: 103
Developer and Consultant
So if their hard work is not activated, are they willing to remove the code?

I think the fact that they could release a series of successive updates that all support Segwit kinda sways things in favor of it being adopted, unless the biggest miners adamantly stick to v0.12.

Myself, I'm not totally convinced this is the best solution, but it is progress... I would have said segwit + a revert to the original block size limit; that would have caused less argument while allowing a lot of room for scaling.
There is no precedent on that front in terms of what happens should a soft/hard fork not activate, so no one knows yet if/how the core devs would respond. On the other hand, segwit has one year to activate, and it's way too early to pass any kind of judgement on that front.

Changing to the original block limit would mean 32MB, and since current transactions have a quadratic scaling problem, it would be possible to create 32MB of transaction(s) in one block that would take half an hour to process with current bitcoind on the current network protocol, bringing the network to a standstill. By the way, segwit transactions don't have this scaling problem; they scale linearly.
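
A toy model of the asymptotics being described: under legacy signature hashing, each input's sighash re-serializes (roughly) the whole transaction, so total hashing work grows with inputs x size, i.e. quadratically, while segwit's BIP143 scheme hashes shared components once plus a small constant-size preimage per input, so work grows linearly. The byte counts below are illustrative, not real serialization sizes:

Code:
def legacy_sighash_work(n_inputs, tx_size):
    # Legacy scheme: every input re-hashes ~the whole transaction.
    return n_inputs * tx_size

def segwit_sighash_work(n_inputs, tx_size):
    # BIP143: shared midstates (hashPrevouts, hashSequence,
    # hashOutputs) are computed once over the transaction, then
    # each input hashes a small fixed-size preimage (~200 bytes,
    # an illustrative figure).
    return tx_size + n_inputs * 200

# A transaction's size grows with its input count (~150 bytes per
# input, illustrative), so legacy work is quadratic in n, segwit linear:
for n in (100, 1_000, 10_000):
    size = 150 * n
    print(n, legacy_sighash_work(n, size), segwit_sighash_work(n, size))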

I guess we'll see. I already knew about the 32 MB, which is what I actually prefer; that plus the highlighted advantages of Segwit would be awesome.


Quote
current transactions have a quadratic scaling problem; it would be possible to create 32MB of transaction(s) in one block that would take half an hour to process with current bitcoind on the current network protocol, bringing the network to a standstill. By the way, segwit transactions don't have this scaling problem; they scale linearly.

I am only just beginning to spread my wings in the deeper aspects of the protocol; you have just given me an interesting tidbit to pursue.

But does this not reinforce my statement? Segwit + 32 MB blocks...?

I hope their idea works out; I am planning to change most of my monetary transactions to bitcoin-based ones... would be a shame if it dies.