
Topic: [POLL] Possible scaling compromise: BIP 141 + BIP 102 (Segwit + 2MB)

legendary
Activity: 3724
Merit: 3063
Leave no FUD unchallenged
People only need to read the posts to see that you were being evasive and squirming, and I was being direct and clearly replying to you.

Said the weaseliest weasel that ever weaseled in all of weaseldom without a hint of irony.   Roll Eyes

So, Carlton, do you think there should be a cut-off point where coins not moved to a quantum-proof key are frozen by the network?  Cheesy

All anyone has to do is read, to see you're the one using dishonest playground tactics. You will be exposed again and again, if you continue in this way. You can't convince people with dishonesty.

Looked in a mirror lately?  I mean... it's like you don't even see it.
legendary
Activity: 4270
Merit: 4534
You scream: decentralisation!  But in any case you have only a single data source: your single pool. No other entity is making a block chain.  So you can just as well connect to your single source of data.
no
i'd scream centralisation..
one source one brand.. is still centralisation.

If you only have 1 pool (in your example) then there is only one source of block chain data.  So whether you get it DIRECTLY from them, or from someone who copied it from them, there's no "decentralization" in that story.
agreed. never said the opposite

i would rather there be 10 different codebases of a node. not 2 and not 1

That doesn't change the fact that if you have only one pool,
Huh
...
i see you're using my simple relay-timing example..
arbitrary numbers..
but yea a 1*8*8 vs 1*8*8*8*8*8*8 would still be centralised.. but distributed.
whereas 20*8*8 vs 20*8*8*8*8*8*8 would be more decentralised.. especially if those 20 pools had different code bases and the different nodes (layers of 8) had differing code bases.

which was mentioned before:
diversity (different code bases, even written in different languages (go, ruby, java, c#)) and more than one option is decentralisation..
hero member
Activity: 770
Merit: 629
You scream: decentralisation!  But in any case you have only a single data source: your single pool. No other entity is making a block chain.  So you can just as well connect to your single source of data.
no
i'd scream centralisation..
one source one brand.. is still centralisation.

If you only have 1 pool (in your example) then there is only one source of block chain data.  So whether you get it DIRECTLY from them, or from someone who copied it from them, there's no "decentralization" in that story.

Quote
i would rather there be 10 different codebases of a node. not 2 and not 1

That doesn't change the fact that if you have only one pool, making one single block chain, you either accept it, or you don't have a block chain.

Even if you have 10 pools, connected together and agreeing (building on one another's blocks), there is STILL only one block chain, but now, 10 different pools have it and can send it to you.  All the others can simply copy from one of them, and send it to you too.
legendary
Activity: 3430
Merit: 3074
d5000, you have the same problem as all dishonest debaters trying to throw up smokescreens and diversions to cover their tracks:


People only need to read the posts to see that you were being evasive and squirming, and I was being direct and clearly replying to you. All anyone has to do is read, to see you're the one using dishonest playground tactics. You will be exposed again and again, if you continue in this way. You can't convince people with dishonesty.



It's pretty extremist to want to do something extremely stupid in the light of contrary evidence, if you ask me.
legendary
Activity: 4270
Merit: 4534
You scream: decentralisation!  But in any case you have only a single data source: your single pool. No other entity is making a block chain.  So you can just as well connect to your single source of data.
no
i'd scream centralisation..
one source one brand.. is still centralisation.
in other topics and for many months and years i have said.. distributed centralisation is NOT decentralisation..
a network of only core, whether it be 3000 nodes, 8000 or 80000, is meaningless if all the code is the same. especially if a bug popped up, they would all be affected.
however.
diversity (different code bases, even written in different languages (go, ruby, java, c#)) and more than one option is decentralisation..

i would rather there be 10 different codebases for node users to freely choose from. not 2 and not 1
core wanting domination and also wanting FIBRE ring-fencing the pools as the upper upstream filter.. is not about decentralisation.. just distributing.. but still centralising..
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
@Carlton Banks: You have just demonstrated that you are the wrong person to discuss a "compromise" with. You haven't brought anything constructive into the discussion, only destructive ad hominem, so I will ignore your input from here on. You never proved ANYTHING.

So don't blame me and others trying to find an intermediate position if the fork comes and Bitcoin is eternally split. This thread is an attempt to avoid a fork. I know that some in the hardcore-Core (Wink ) faction are totally OK with a fork, but it could harm Bitcoin seriously - if you don't see that, it's not my problem. I can also switch to an altcoin then, as I'm currency-agnostic.


hero member
Activity: 770
Merit: 629
That testnet already exists: it's called Litecoin, at 2½-minute blocks, plus multiple other alts.
G.Maxwell is blowing smoke; if BTC can't do your 5-minute block speed then the whole thing should be scrapped and rewritten from scratch.
(maybe copy the litecoin source code, which has 4X the transaction capacity of BTC without deadwit.)

What lie did he make up to explain why BTC sucks more than every other coin in existence in regard to block speed?

 Cool

FYI:
Have you noticed that, if you talk to G.Maxwell, BTC is unable to do anything to fix transaction capacity issues except segwit?
The answer is simple: talk to someone that won't lie to you.  Wink

i do have to say this...

though litecoin and other coins may have lower block times.. they also have lower node counts (~700ish active nodes).

once you start thinking about the time to download, verify and then send out a block.. the more nodes there are, the more 'hops' each relay layer of nodes needs to do, and the more total time needed for the entire network to get the block before it should be happily ready to get the next block

EG
imagine a coin with just 8 nodes..
one node can send a block to all 8 nodes in 1 hop

but imagine there were say over 4000 nodes

Consider the pool installing a node that can have 20 000 outgoing connections.  Like a small data center.

All non-mining nodes connect directly to this one.  Like you connect to Facebook's servers.

You scream: decentralisation!  But in any case you have only a single data source: your single pool. No other entity is making a block chain.  So you can just as well connect to your single source of data.

In your story, replace 1 pool by 10 pools.  Well, any of these pools is a reliable source of data, and ONLY these 10 pools are the source of their (common) block chain.  There's no other source.  If they hold it back, they are hurting themselves.  EACH of these pools can serve all the other nodes.  Nodes can choose which node(s) they want to connect to.  No need to connect to proxy nodes.
legendary
Activity: 3430
Merit: 3074
I don't want to hear another word from you or anyone else about blocksizes until you can demonstrate that you've investigated anything else but.

If you don't want to hear it then you're probably in the wrong thread.  Feel free to join another discussion more befitting your hardline sensitivities.

Or better yet, maybe learn how human interaction works and think before you bark.  Most people are on board with SegWit, but many are still concerned about what comes after that.  Even if you can't accept the fact that not everyone wants to be pressured into using Lightning, or whatever off-chain solution ends up being proposed, you should still get used to the fact that they're going to keep talking about the blocksize, because it's not going away.  I think it's highly unlikely that the pressure will dissipate in that regard.  People see this arbitrary and temporary bottleneck which wasn't part of the original design, so it's hardly unreasonable to question it (despite your obvious opinion to the contrary).

So what you have to consider is, the more you tell people not to discuss it, the more you isolate yourself and make people think there's no point engaging with you, when all you're going to do is tell them how wrong they are because they don't see things as you do.  I'll phrase it as politely as I can here, but you don't exactly have a winning personality.  Maybe you feel like you've justified your view enough in the past and you're just repeating yourself at this point.  But whatever your reasons are, if you can't even be bothered to explain why you think anyone who wants to look at the blocksize is basically the devil, and just scream at people that they're wrong, you're not going to win many people over.


There is one situation in particular where I reserve the right to be belligerent: when people are doing something dangerous.

Play the ball, not the man. Any personal criticism coming from me relates specifically to your behaviour in this thread; it is not a wholesale attack on your character.


And you STILL have yet to provide actual reasoning for your x=y linear blocksize growth proposal, and you have yet to provide logical refutations of my insistence that blocksize should be the last resort, not the first.

You're the bad actor here: you cannot accept that you are wrong, and you have only unbacked assertions to prosecute your position. What you're suggesting is bad engineering practice, and actual engineers know this full well. You have presented zero evidence to the contrary, and are only continuing because your pride is hurt; otherwise you would accept reasonable reasoning. Instead you try to discuss anything but the points I have raised.

You're campaigning on blocksize for its own sake only, deaf and blind to the alternatives, which exist. You are the bad actor, the poor debater, the intransigent. To attack me because of your own panoply of shortcomings is a total disgrace; you have no place debating any technical matters whatsoever.
legendary
Activity: 4270
Merit: 4534
That testnet already exists: it's called Litecoin, at 2½-minute blocks, plus multiple other alts.
G.Maxwell is blowing smoke; if BTC can't do your 5-minute block speed then the whole thing should be scrapped and rewritten from scratch.
(maybe copy the litecoin source code, which has 4X the transaction capacity of BTC without deadwit.)

What lie did he make up to explain why BTC sucks more than every other coin in existence in regard to block speed?

 Cool

FYI:
Have you noticed that, if you talk to G.Maxwell, BTC is unable to do anything to fix transaction capacity issues except segwit?
The answer is simple: talk to someone that won't lie to you.  Wink

i do have to say this...

though litecoin and other coins may have lower block times.. they also have lower node counts (~700ish active nodes).

once you start thinking about the time to download, verify and then send out a block.. the more nodes there are, the more 'hops' each relay layer of nodes needs to do, and the more total time needed for the entire network to get the block before it should be happily ready to get the next block

EG
imagine a coin with just 8 nodes..
one node can send a block to all 8 nodes in 1 hop

but imagine there were say over 4000 nodes

= 1*8*8*8*8 (a fan-out of 8 per hop)
= 1+8+64+512+4096 = 4681
that's 4 hops to get to all 4000+ nodes (bitcoin needs about 5 relays/hops due to having ~7000ish)

now that's 4 hops' worth of time for everyone on the network to get and verify the block before the next block height gets sent.

reducing the time between block heights means less time for the previous block to get around and for everyone to agree on it. which can lead to more orphans if some nodes are being asked to agree on block X+2 when they have not even got X+1 yet


based on, say, a download, verify and relay out to all 8 peers taking less than 30 seconds... EVER
in a network of say 9 nodes (1 pool node and 8 verifying nodes) you could get away with blocks being built every 30 seconds and have nice wiggle room
in a network of say 73 nodes (1 pool node and 72 verifying nodes) you could get away with blocks being built every 60 seconds and have nice wiggle room
in a network of say 585 nodes (1 pool node and 584 verifying nodes) you could get away with blocks being built every 1 min 30 seconds and have nice wiggle room
in a network of say 4681 nodes (1 pool node and 4680 verifying nodes) you could get away with blocks being built every 2 min and have nice wiggle room
in a network of say 37449 nodes (1 pool node and 37448 verifying nodes) you could get away with blocks being built every 2 min 30 seconds and have nice wiggle room

but that's on the basis that download, verify and relay out to all 8 peers takes less than 30 seconds... EVER
if the average propagation was say 1 minute, then suddenly bitcoin wouldn't cope with 5 min blocks..
if the average propagation was say 2 minutes, then suddenly bitcoin wouldn't cope with 10 min blocks..

so it's a cat and mouse game between propagation times and node counts.

yep, it's not just blocksize harming node counts; node counts can also cause delays and orphan risks for blocks (but at just 1mb it would take a hell of a lot of nodes to do that)
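
to make that arithmetic concrete, here's a minimal sketch (an illustration only: it assumes a perfect fan-out of 8 fresh peers per hop and a flat 30-second hop time, which a real gossip network won't have):

Code:
# minimal sketch of the fan-out arithmetic above.
# assumptions (not real network behaviour): every node relays to
# exactly 8 peers that haven't seen the block yet, and every
# download-verify-relay hop takes a flat 30 seconds.

def hops_to_reach(node_count, fanout=8):
    """Hops for one origin node's block to reach node_count nodes."""
    reached, layer, hops = 1, 1, 0
    while reached < node_count:
        layer *= fanout      # next relay layer: 8, 64, 512, 4096, ...
        reached += layer     # cumulative: 9, 73, 585, 4681, 37449, ...
        hops += 1
    return hops

for nodes in (9, 73, 585, 4681, 37449):
    h = hops_to_reach(nodes)
    print(f"{nodes:>6} nodes -> {h} hops -> {h * 30:>3} s to full propagation")

under those assumptions the output reproduces the wiggle-room table above; double the per-hop time and every safe block interval doubles with it, which is the cat-and-mouse point.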
legendary
Activity: 3724
Merit: 3063
Leave no FUD unchallenged
I don't want to hear another word from you or anyone else about blocksizes until you can demonstrate that you've investigated anything else but.

If you don't want to hear it then you're probably in the wrong thread.  Feel free to join another discussion more befitting your hardline sensitivities.

Or better yet, maybe learn how human interaction works and think before you bark.  Most people are on board with SegWit, but many are still concerned about what comes after that.  Even if you can't accept the fact that not everyone wants to be pressured into using Lightning, or whatever off-chain solution ends up being proposed, you should still get used to the fact that they're going to keep talking about the blocksize, because it's not going away.  I think it's highly unlikely that the pressure will dissipate in that regard.  People see this arbitrary and temporary bottleneck which wasn't part of the original design, so it's hardly unreasonable to question it (despite your obvious opinion to the contrary).

So what you have to consider is, the more you tell people not to discuss it, the more you isolate yourself and make people think there's no point engaging with you, when all you're going to do is tell them how wrong they are because they don't see things as you do.  I'll phrase it as politely as I can here, but you don't exactly have a winning personality.  Maybe you feel like you've justified your view enough in the past and you're just repeating yourself at this point.  But whatever your reasons are, if you can't even be bothered to explain why you think anyone who wants to look at the blocksize is basically the devil, and just scream at people that they're wrong, you're not going to win many people over.

Just a suggestion.  But if you like being seen as the mindless attack dog, keep up the sterling work!   Roll Eyes
legendary
Activity: 3430
Merit: 3074
So your position is basically: The miners are totally wrong. So let's ignore the miners. Not very constructive.

You're a dirty little exaggerator when you have no real argument, huh

I said nothing of the sort. SOME of the miners are behaving irresponsibly, as they are not respecting what their actual responsibilities are. Bitcoin 0.13.1 was the Segwit soft fork implementation only, and the users overwhelmingly demonstrated they are ready for it and supportive of it; even now 0.13.1 is actually more numerous on the network than the release that came immediately after it.


DON"T YOU DARE SUGGEST I'M DOING SOMETHING WRONG BY POINTING THAT OUT. You're a disgrace, the sort of fool who will not back down in an argument beacuse of your precious little ego.

You're wrong, you're suggesting something staggeringly foolish and I will not be brow-beaten by someone who cannot swallow their misplaced pride on such an important issue



I'm not saying that block size is the only way, and not even the best way, to go; that is an invention of yours, so your last phrase is totally dishonest and wrong. I've only said that under current conditions (~3 tps at 1 MB blocks) a Segwit 1 MB base / 4 MB weight capacity very probably won't be enough in five years, even with LN and sidechains, if we really want mass adoption (> 50 million users).


Dedicating a huge thread to something labelled as BIP 102, but which is actually BIP 102 on Schwarzenegger doses of steroids, SURE AS HELL DOES LOOK LIKE SOMEONE PUSHING BIGGER BLOCKS AT ANY COST.

If you're not dismissing on-chain scaling improvements in favour of just flat-out increasing the resources the Bitcoin network needs to run, then why are you:

  • Dismissing on-chain scaling improvements
  • Heavily promoting increases in the resources the Bitcoin network needs to run



I don't care if a transaction capacity increase is achieved with bigger blocks or other measures.


Well, you could've fooled me.



So I'll investigate Gmaxwell's transaction encoding proposal, but it's not easy to find. So really, if you are interested in continuing a constructive discussion, please provide me with a source.

NO.

I don't want to hear another word from you or anyone else about blocksizes until you can demonstrate that you've investigated anything else but, and that you understand it's the worst and most dangerous way to increase capacity, and most of all that it DOES NOT CONSTITUTE A SCALING PARADIGM AT ALL. There's not much point in trying to argue otherwise; it's an incontrovertible fact.
legendary
Activity: 1092
Merit: 1000
-snip-
Regarding your recent mention of reducing the block generation time, I asked on IRC. Apparently, Maxwell claimed, even with all the relay improvements they are not adequate to safely reduce the generation time to e.g. 5 minutes. There were a few more people giving their opinions at the time, but I forgot to save a copy of the chat. This is quite unfortunate though; I'd definitely like to see 2017 data for this kind of experiment. Maybe a testnet with constantly full 1 MB blocks and a 5 min block interval.

That testnet already exists: it's called Litecoin, at 2½-minute blocks, plus multiple other alts.
G.Maxwell is blowing smoke; if BTC can't do your 5-minute block speed then the whole thing should be scrapped and rewritten from scratch.
(maybe copy the litecoin source code, which has 4X the transaction capacity of BTC without deadwit.)

What lie did he make up to explain why BTC sucks more than every other coin in existence in regard to block speed?

 Cool

FYI:
Have you noticed that, if you talk to G.Maxwell, BTC is unable to do anything to fix transaction capacity issues except segwit?
The answer is simple: talk to someone that won't lie to you.  Wink
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
So your position is basically: The miners are totally wrong. So let's ignore the miners. Not very constructive.

I'm not saying that block size is the only way, and not even the best way, to go; that is an invention of yours, so your last phrase is totally dishonest and wrong. I've only said that under current conditions (~3 tps at 1 MB blocks) a Segwit 1 MB base / 4 MB weight capacity very probably won't be enough in five years, even with LN and sidechains, if we really want mass adoption (> 50 million users).

I don't care if a transaction capacity increase is achieved with bigger blocks or other measures. So I'll investigate Gmaxwell's transaction encoding proposal, but it's not easy to find. So really, if you are interested in continuing a constructive discussion, please provide me with a source.

The "smaller block interval" proposals (with coinbase blocks separated or not), anyway, only would improve one side of the scaling problems: block propagation. IBD and storage space would not be affected.
legendary
Activity: 3430
Merit: 3074
And you seem to have STILL not read the following sentence:

Don't forget that the original intention of the proposals we're discussing here is to achieve approval for Segwit by miners of the "big blockers" faction, to get rid of the stalemate. I, for myself, would for now be perfectly happy with Segwit alone, and would then let the Core devs decide further scaling measures.

I have. Have you?

I don't think you understand the politics here: the inconsistencies in the position of the miners blocking Segwit/promoting BU are obvious; they're saying "but we want big blocks!" when they're being offered big blocks. It's a bit like the way you're arguing, really Roll Eyes

The obstructionist miners are doing so to add to the conflict; they're not interested in being constructive. It's incredibly transparent that that's their intention, to those that aren't completely naive of course.


there are several ways (and likely several more we haven't thought of) that could be employed to get that kind of transaction density with 4MB Segwit blocks; bigger than that is unnecessary when there are other options.

Again: More precision please. We aren't advancing here. I, for myself, would be perfectly happy with a solution like this:

- Segwit
- 50% TX density improvement by better encoding
- Block time reduced to 5 minutes (again, I would be in favour, but I don't think it will come)
- 1 MB base / 4 MB weight.

Quote
Please, respond to the actual argument I'm making, not derailing back into defending x=y linear growth in blocksize. It's indefensible when the same rate of growth could be achieved in a less dangerous way, using actual scaling paradigms that multiply the utility of EXISTING CAPACITY, not adding extra burden to the capacity at the same scale.

I'm interested in LN and sidechains like Rootstock, but I have already pointed out that even with a well-functioning LN we need more on-chain capacity. If the solutions you mention (TX encoding, block time) provide it, then why don't you link me to the relevant discussions?

gmaxwell posted on Bitcointalk about the tx encoding efficiency hard fork a few weeks ago. He mentioned a factor of improvement; why aren't you motivated to find out for yourself what it is, instead of taking a demonstrably controversial, fruitless and very naive route?

Again, if you're really interested in actual scaling paradigms and not dangerous non-scaling resource use increases, you would be sufficiently motivated to look for yourself.

I've read it and don't need to read it again. You sound interested, so what's the problem? Are you interested and motivated, or not?



And AGAIN please, respond to the actual argument I'm making, not derailing back into arguments defending x=y linear growth in blocksize. Using actual scaling paradigms that multiply the utility of EXISTING CAPACITY is far more valuable and sensible than your idea of adding extra burden to the capacity at the same scale.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
Don't forget that the original intention of the proposals we're discussing here is to achieve approval for Segwit by miners of the "big blockers" faction, to get rid of the stalemate. I, for myself, would for now be perfectly happy with Segwit alone, and would then let the Core devs decide further scaling measures.

Quote
I'm suggesting, without stating opinions as fact, that Segwit's 4MB IS the compromise solution.

That's called maximalism. It's one of the two extreme positions in the current debate (if we exclude Luke-Jr with his 300 kB blocks). Even many Core devs are open to block size increases after Segwit. Addition: Remember the Hong Kong Consensus?

Quote
there are several ways (and likely several more we haven't thought of) that could be employed to get that kind of transaction density with 4MB Segwit blocks; bigger than that is unnecessary when there are other options.

Again: More precision please. We aren't advancing here. I, for myself, would be perfectly happy with a solution like this (a rough combined estimate is sketched after the list):

- Segwit
- 50% TX density improvement by better encoding
- Block time reduced to 5 minutes (again, I would be in favour, but I don't think it will come)
- 1 MB base / 4 MB weight.
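
Back-of-envelope - with every factor below an assumption taken from this thread, not a measurement - those four points would combine roughly like this:

Code:
# Rough combined-capacity estimate for the four-point list above.
# Every factor is an assumption from this thread, not a measurement.

segwit_factor   = 2.0  # ~1.7-2x effective capacity under the BIP 141
                       # weight rule (weight = 3*base + total <= 4M),
                       # depending on the witness share of typical txs
encoding_factor = 1.5  # the "50% TX density improvement" by better encoding
interval_factor = 2.0  # 10 min -> 5 min blocks doubles blocks per day

baseline_tps = 3.0     # the ~3 tps at 1 MB blocks cited in this thread
combined = segwit_factor * encoding_factor * interval_factor
print(f"~{combined:.0f}x combined -> ~{baseline_tps * combined:.0f} tps")

That would be roughly a 6x multiplier over today's ~3 tps, if all four points held in practice.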

Quote
Please, respond to the actual argument I'm making, not derailing back into defending x=y linear growth in blocksize. It's indefensible when the same rate of growth could be achieved in a less dangerous way, using actual scaling paradigms that multiply the utility of EXISTING CAPACITY, not adding extra burden to the capacity at the same scale.

I'm interested in LN and sidechains like Rootstock, but I have already pointed out that even with a well-functioning LN we need more on-chain capacity. If the solutions you mention (TX encoding, block time) provide it, then why don't you link me to the relevant discussions?

But again, you're missing the point (read the bold sentence again!). The goal was to achieve a solution that could be supported by the hardcore BU miners AND Segwit supporters. I think we've already failed anyway. A fork will come, because of maximalists on both sides (you are one of them, it seems), and it won't be good for Bitcoin. (It won't kill it, though.)
legendary
Activity: 3430
Merit: 3074
My argument is that diplomatic tension is rising between the states that adjoin the South China Sea.

OK. I don't believe it will happen

You're shifting the argument, and it's dishonest

My point is not to argue the finer details of whether this scenario or that scenario is sufficiently problematic to cause impact X on the network.

What I'm saying (and which you did not address) more broadly is that there are multiple possible situations like the scenario I presented that are becoming more likely because of the rising trend of polarised diplomatic relationships between strategically important states, all over the world.


Lie to us, tell us that polarised diplomacy between major states doesn't carry the serious threat of conflicts that could disrupt the internet very badly


Why not improve the efficiency of transaction encoding FIRST, which is also a hard fork
Quote
Why not explore the possibility of separating tx and coinbase blocks (and making the tx block interval more frequent) FIRST, which is also a hard fork

These two look interesting. What is the combined potential, in transaction capacity per MB, of these two measures? (Is the second one Bitcoin-NG? Why was it not explored?)

Where is your answer to this question? You're trying to trick me into a rhetorical labyrinth to sustain your maximalist position.


I am not sure how much can be achieved using those on-chain scaling methods, but that wasn't the point. I'm proving that different ways to scale exist that do not carry the dangers that blocksize increases do, and that also constitute actual scaling, instead of just using more resources at the same scale.


Again: the sound engineering approach for a peer-to-peer network is to use sensible margins of safety and precaution, IN PARTICULAR when it comes to the resources the network needs to operate


Don't forget that the original intention of the proposals we're discussing here is to achieve approval for Segwit by miners of the "big blockers" fraction to get rid of the stalemate. I for myself, for now, would be perfectly happy with Segwit alone for now and then let the Core devs decide further scaling measures.

The "dogmatic positions" are actually BU and Segwit/1MB maximalism, not the exploration of a compromise!

Prove it.

I'm suggesting, without stating opinions as fact, that Segwit's 4MB IS the compromise solution. You're just using circular reasoning, yet again.


Quote
Segwit itself is a blocksize increase, of 4x. A 4x increase is NOT to be underestimated in impact; increasing resource use by a factor of 4 is very large for a peer 2 peer network.

That's why the last proposal is gradual and has a grace period during which the base size would not be changed for one year, so if there was an actual danger the code change could be reverted. The 12 MB would only be possible after 4 years - 4 years during which the developers could at any time revert the change if a catastrophic scenario made such a high block size impossible.

And it's NOT a final proposal; it's only the result of the discussion of some users in this thread. It can be changed.


You're not getting the broader point here: there are several ways (and likely several more we haven't thought of) that could be employed to get that kind of transaction density with 4MB Segwit blocks; bigger than that is unnecessary when there are other options.

You're just desperate to keep the conversation on the track of "bigger blocks = only method to increase tx rate possible", when it's demonstrably provable that it's the WORST WAY TO ACHIEVE THAT because it carries the MOST SIGNIFICANT RISKS WHEN DEALING WITH PEER TO PEER NETWORKS.



Please, respond to the actual argument I'm making, not derailing back into defending x=y linear growth in blocksize. It's indefensible when the same rate of growth could be achieved in a less dangerous way, using actual scaling paradigms that multiply the utility of EXISTING CAPACITY, not adding extra burden to the capacity at the same scale.
hv_
legendary
Activity: 2520
Merit: 1055
Clean Code and Scale
-snip-
Regarding your recent mention of reducing the block generation time, I asked on IRC. Apparently, Maxwell claimed, even with all the relay improvements they are not adequate to safely reduce the generation time to e.g. 5 minutes. There were a few more people giving their opinions at the time, but I forgot to save a copy of the chat. This is quite unfortunate though; I'd definitely like to see 2017 data for this kind of experiment. Maybe a testnet with constantly full 1 MB blocks and a 5 min block interval.

Thx for doing this. To me a 5 min interval (halving the reward) would be a sign of flexibility and a step in the right direction. It would also reduce waiting time - win win.

I do not really understand why this is a real issue, since we see some bigger shitcoins work mostly OK with shorter block times.... Hmmm
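
For what it's worth, a toy check of that reward arithmetic (the first call uses the real mainnet schedule; the 5-min variant is hypothetical): halving the interval while halving the subsidy and doubling the halving height keeps the ~21M BTC emission cap intact.

Code:
# Toy check: halving the block interval (10 min -> 5 min) while
# halving the subsidy and doubling the halving height preserves the
# ~21M BTC cap. The first call is the real mainnet schedule; the
# 5-minute variant is hypothetical.

def total_emission(initial_subsidy_btc, blocks_per_halving):
    total, subsidy = 0.0, initial_subsidy_btc
    while subsidy >= 1e-8:            # stop below one satoshi
        total += subsidy * blocks_per_halving
        subsidy /= 2
    return total

print(total_emission(50.0, 210_000))  # ~21,000,000 (10-min blocks)
print(total_emission(25.0, 420_000))  # ~21,000,000 (5-min blocks)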
legendary
Activity: 2674
Merit: 2965
Terminated.
-snip-
Regarding your recent mention of reducing the block generation time, I asked on IRC. Apparently, Maxwell claimed, even with all the relay improvements they are not adequate to safely reduce the generation time to e.g. 5 minutes. There were a few more people giving their opinions at the time, but I forgot to save a copy of the chat. This is quite unfortunate though; I'd definitely like to see 2017 data for this kind of experiment. Maybe a testnet with constantly full 1 MB blocks and a 5 min block interval.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
My argument is that diplomatic tension is rising between the states that adjoin the South China Sea.

OK. I don't believe it will happen, but let's explore the possibility. Which cables - apart from some regional connections - are threatened by that possible conflict? The rest of the world is pretty well connected. If the Philippines, because of the "intelligence" of their president, are temporarily disconnected, the rest of the world would not even notice. Seriously threatening the connectivity of China and the rest of East Asia will be a LOT more difficult; there would have to be a >WW2-like scenario for it. And even then the rest of the world could continue to use Bitcoin.

And: What problem do you see arising? Block propagation or IBD? If it's the first one - I seriously don't believe that a 12 MB data packet every 10 minutes or so is too much even for a low-bandwidth connection by today's standards. And we are talking about >4 years from now and about the absolute worst case - if a large part of the transactions were multisig, for example.

Quote
Why not improve the efficiency of transaction encoding FIRST, which is also a hard fork
Quote
Why not explore the possibility of separating tx and coinbase blocks (and making the tx block interval more frequent) FIRST, which is also a hard fork

These two look interesting. What is the combined potential, in transaction capacity per MB, of these two measures? (Is the second one Bitcoin-NG? Why was it not explored?)

Where is your answer to this question? You're trying to trick me into a rhetorical labyrinth to sustain your maximalist position.

Don't forget that the original intention of the proposals we're discussing here is to achieve approval for Segwit by miners of the "big blockers" faction, to get rid of the stalemate. I, for myself, would for now be perfectly happy with Segwit alone, and would then let the Core devs decide further scaling measures.

The "dogmatic positions" are actually BU and Segwit/1MB maximalism, not the exploration of a compromise!

Quote
Segwit itself is a blocksize increase, of 4x. A 4x increase is NOT to be underestimated in impact; increasing resource use by a factor of 4 is very large for a peer 2 peer network.

That's why the last proposal is gradual and has a grace period during which the base size would not be changed for one year, so if there was an actual danger the code change could be reverted. The 12 MB would only be possible after 4 years - 4 years during which the developers could at any time revert the change if a catastrophic scenario made such a high block size impossible.

And it's NOT a final proposal; it's only the result of the discussion of some users in this thread. It can be changed.
legendary
Activity: 1092
Merit: 1000
And maybe you have something to remember: Segwit itself is a blocksize increase, of 4x. A 4x increase is NOT to be underestimated in impact; increasing resource use by a factor of 4 is very large for a peer 2 peer network.

Segwit => DeadWIT

Miners will never allow it to activate.

Do you know anyone dumb enough to work and then send their paycheck to someone else?
Because that is what you are asking the miners to do.


 Cool

