Topic: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. - page 3. (Read 6499 times)

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Yes. Anyone who wants to be a central element of a multibillion-dollar system is going to have to pony up for the requisite (and rather trivially valued, in the scope of things) hardware to do so.

Bitcoin's dirty little secret is that non-mining nodes provide zero benefit to the network at large. Sure, operating a node allows that particular node operator to transact directly on the chain, so it provides value to that person or persons. But it provides zero utility to the network itself. Miners can always route around the nodes that do not accept their transactions. Miners don't care whether non-mining nodes accept their blocks - only whether other miners will build atop their blocks.

And the number will not be ten - it will be many more. Again, anyone who wants to be able to transact directly upon the chain in a trustless manner will need to pony up to meet the hardware demands.
Thanks. If anyone wants to know what BU'ers think of what the system is and should be, I think I can now refer them to your post.

I rest my case.
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
Miners employing parallel validation do not fall victim to extended validation times on blocks containing aberrant, large, quadratic-hashing-time transactions. Instead, they orphan such blocks, by continuing to mine and validate on other threads while validation of the aberrant block runs on its own thread. Miners who keep producing blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.

What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to quadratic scaling, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines, and even normal servers, will likely run out of memory and kill bitcoind.

Which implementation has had out of memory issues already? Oh yeah... BU did.

You don't think the significant mining pools can afford one large server each?
And that's your solution? Have only 10 nodes worldwide that can stay online during that parallel-validation period, and crash the remaining 6000+ nodes at the same time?

Yes. Anyone who wants to be a central element of a multibillion-dollar system is going to have to pony up for the requisite (and rather trivially valued, in the scope of things) hardware to do so.

Bitcoin's dirty little secret is that non-mining nodes provide zero benefit to the network at large. Sure, operating a node allows that particular node operator to transact directly on the chain, so it provides value to that person or persons. But it provides zero utility to the network itself. Miners can always route around the nodes that do not accept their transactions. Miners don't care whether non-mining nodes accept their blocks - only whether other miners will build atop their blocks.

And the number will not be ten - it will be many more. Again, anyone who wants to be able to transact directly upon the chain in a trustless manner will need to pony up to meet the hardware demands.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
Miners employing parallel validation do not fall victim to extended validation times on blocks containing aberrant, large, quadratic-hashing-time transactions. Instead, they orphan such blocks, by continuing to mine and validate on other threads while validation of the aberrant block runs on its own thread. Miners who keep producing blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.

What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to quadratic scaling, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines, and even normal servers, will likely run out of memory and kill bitcoind.

Which implementation has had out of memory issues already? Oh yeah... BU did.

You don't think the significant mining pools can afford one large server each?
And that's your solution? Have only 10 nodes worldwide that can stay online during that parallel-validation period, and crash the remaining 6000+ nodes at the same time?

Surely that won't happen with a simple 2MB HF. So if you are sincere about a capacity increase, why not do that now and maybe SegWit later?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Miners employing parallel validation do not fall victim to extended validation times on blocks containing aberrant, large, quadratic-hashing-time transactions. Instead, they orphan such blocks, by continuing to mine and validate on other threads while validation of the aberrant block runs on its own thread. Miners who keep producing blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.

What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to quadratic scaling, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines, and even normal servers, will likely run out of memory and kill bitcoind.

Which implementation has had out of memory issues already? Oh yeah... BU did.

You don't think the significant mining pools can afford one large server each?
And that's your solution? Have only 10 nodes worldwide that can stay online during that parallel-validation period, and crash the remaining 6000+ nodes at the same time?
legendary
Activity: 4424
Merit: 4794
Miners employing parallel validation do not fall victim to extended validation times on blocks containing aberrant, large, quadratic-hashing-time transactions. Instead, they orphan such blocks, by continuing to mine and validate on other threads while validation of the aberrant block runs on its own thread. Miners who keep producing blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.

What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to quadratic scaling, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines, and even normal servers, will likely run out of memory and kill bitcoind.

Which implementation has had out of memory issues already? Oh yeah... BU did.

You don't think the significant mining pools can afford one large server each?

This is why you don't let TXs get MORE bloated when block sizes increase.
The best option is to keep TXs at or below 4k sigops; the quadratics are then manageable on normal machines.

E.g. limits like:
< 4k tx sigops
< 100k tx max bytes

That way, for instance, a spam attack needs (the arithmetic is worked through in the sketch below):
1MB block: 5 TXs of sigop spam, or 10 TXs of bloat-data spam
2MB block: 10 TXs of sigop spam, or 20 TXs of bloat-data spam
4MB block: 20 TXs of sigop spam, or 40 TXs of bloat-data spam

Some people think going up is OK (facepalm) (where sigops per TX and bytes per TX go up with block size):
1MB block: 5 TXs of sigop spam, or 10 TXs of bloat-data spam
2MB block: 5 TXs of sigop spam, or 10 TXs of bloat-data spam
4MB block: 5 TXs of sigop spam, or 10 TXs of bloat-data spam

Some people think going down is bad (facepalm), yet if tx sigops went to, say, 1k and tx max bytes to 50k:
1MB block: 20 TXs of sigop spam, or 20 TXs of bloat-data spam
2MB block: 40 TXs of sigop spam, or 40 TXs of bloat-data spam
4MB block: 80 TXs of sigop spam, or 80 TXs of bloat-data spam

whereby at a 4MB block, for instance, even using the max TX sigops, the time to process is seconds, not minutes.
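For illustration, the arithmetic above can be run as a minimal Python sketch. The 20,000-sigops-per-MB and 1,000,000-bytes-per-MB budgets are assumptions taken from the figures in this post, not actual consensus constants, and txs_to_fill is a hypothetical helper:

[code]
# Hypothetical sketch of the spam-attack arithmetic above.
# Assumed budgets: 20,000 sigops and 1,000,000 bytes per MB of block space.

def txs_to_fill(block_mb, tx_sigop_cap, tx_byte_cap):
    block_sigops = 20_000 * block_mb
    block_bytes = 1_000_000 * block_mb
    sigop_spam_txs = block_sigops // tx_sigop_cap   # txs to exhaust the sigop budget
    bloat_spam_txs = block_bytes // tx_byte_cap     # txs to exhaust the byte budget
    return sigop_spam_txs, bloat_spam_txs

for mb in (1, 2, 4):
    print(mb, "MB  caps 4k/100k:", txs_to_fill(mb, 4_000, 100_000),
          "  caps 1k/50k:", txs_to_fill(mb, 1_000, 50_000))
# caps 4k/100k -> (5, 10), (10, 20), (20, 40)
# caps 1k/50k  -> (20, 20), (40, 40), (80, 80)
[/code]

Lower per-tx caps force an attacker to split a block-filling spam run across many more transactions, which is the whole point being argued here.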
sr. member
Activity: 476
Merit: 501
4. There are two possible ways to deploy/implement SegWit: as a softfork or as a hardfork. SegWit as a hardfork would allow a slightly cleaner implementation but would also require replay protection (as the exchanges have specifically asked for lately). SWSF does not require replay protection, assuming a hashrate majority. Replay protection is difficult, thus SegWit as a hardfork would altogether cause more technical debt than SWSF. Also, a hardfork is generally considered higher risk and would take a longer preparation time.

Sorry, it seems people have had their heads FOHK'ed with (Fear of Hard Fork).

There is little difference between the dangers of a soft fork and a hard fork.

In the event of a soft fork we have:
1.) The old chain exists with a more permissive set of rules.
2.) The new chain exists with a more restrictive set of rules.

In a hard fork we have:
1.) The old chain exists with a more restrictive set of rules.
2.) The new chain exists with a more permissive set of rules.

So they look exactly the same during a chain split.

The only difference is that a soft fork is backwards compatible because of its more restrictive set of rules.

In the event of a successful soft fork, older nodes continue to operate as normal.
In the event of a successful hard fork, older nodes become unsynced and have to upgrade.

In the event of a contentious fork, hard or soft, it becomes an economically damaging clusterfuck until the winning fork is determined (the longest chain) or a bilateral split occurs (the minority chain implements replay protection)*.

* Strictly speaking, the software forking away from the existing protocol (hard or soft) should be the version that implements replay protection, as you cannot demand that the existing protocol chain change its behaviour. In practice, though, the aim is to achieve consensus rather than create a permanent chain split, so the minority chain should end up orphaned, and any transactions that occur during any temporary chain split should end up confirmed on the main chain.
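To make the symmetry concrete, here is a minimal Python sketch of the rule-set relationship, with a block-size rule standing in for the whole consensus rule set (the predicates and constants are illustrative assumptions, not real client code):

[code]
# Illustrative only: block size stands in for the whole consensus rule set.

def valid_old(size):            # existing protocol: accept up to 1,000,000 bytes
    return size <= 1_000_000

def valid_softfork(size):       # soft fork: MORE restrictive (a subset of old)
    return size <= 500_000

def valid_hardfork(size):       # hard fork: MORE permissive (a superset of old)
    return size <= 2_000_000

for size in (400_000, 900_000, 1_500_000):
    print(size, "old:", valid_old(size),
                "soft:", valid_softfork(size),
                "hard:", valid_hardfork(size))
# Every soft-fork-valid block is also old-valid, so old nodes keep following
# the chain (backwards compatible). Some hard-fork-valid blocks are
# old-invalid, so unupgraded nodes fall out of sync and must upgrade.
[/code]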

legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active and only for SegWit transactions.

False. Parallel validation routes around quadratic hash-time issues by naturally orphaning blocks that take an inordinate time to verify.
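For readers unfamiliar with the idea, a minimal Python sketch of parallel validation follows (hypothetical structure and names, not BU's actual implementation):

[code]
import threading, time

# Each candidate block validates on its own thread; the node accepts whichever
# finishes first and keeps mining on it, so a block stuffed with
# quadratic-hash-time transactions simply loses the race and gets orphaned.

def parallel_validate(blocks):
    finished = {}
    def run(block):
        block["work"]()                     # stand-in for real script/sig checks
        finished[block["name"]] = time.time()
    for b in blocks:
        threading.Thread(target=run, args=(b,), daemon=True).start()
    while not finished:                     # first block to complete wins
        time.sleep(0.01)
    return next(iter(finished))

normal = {"name": "normal", "work": lambda: time.sleep(0.1)}    # typical block
aberrant = {"name": "aberrant", "work": lambda: time.sleep(5)}  # attack block
print("building on:", parallel_validate([normal, aberrant]))    # -> normal
[/code]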
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
Miners employing parallel validation do not fall victim to extended validation times on blocks containing aberrant, large, quadratic-hashing-time transactions. Instead, they orphan such blocks, by continuing to mine and validate on other threads while validation of the aberrant block runs on its own thread. Miners who keep producing blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.

What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to quadratic scaling, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines, and even normal servers, will likely run out of memory and kill bitcoind.

Which implementation has had out of memory issues already? Oh yeah... BU did.

You don't think the significant mining pools can afford one large server each?
legendary
Activity: 2674
Merit: 3000
Terminated.
I'll let you argue with yourself.
There is nothing to argue about. You don't understand English.

One minute you say pools should and will censor TXs that can spam, but then you argue that pools shouldn't censor transactions that can spam.
Which is not what I said. I used the word prioritize, which is very different from censoring.
You HOPE pools will prioritise SegWit keys out of some faith-and-dream reasoning,
It is not hope, it is reason. Stop trolling already.

Dishonest shills can't even keep their own story straight in the same day.
Said the baboon working for BU. Ironic. Roll Eyes
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
I'll let you argue with yourself.

will be strongly weakened by the prioritization of native->SegWit and SegWit->SegWit transactions.
There may be use cases which require this. Who are you to censor such transactions?

One minute you say pools should and will censor TXs that can spam, but then you argue that pools shouldn't censor transactions that can spam.
Dishonest shills can't even keep their own story straight in the same day.

Another contradiction recently has been: high fees are good... and then: bigger blocks won't fix high fees.
legendary
Activity: 4424
Merit: 4794
I'll let you argue with yourself.

will be strongly weakened by the prioritization of native->SegWit and SegWit->SegWit transactions.
There may be use cases which require this. Who are you to censor such transactions?

One minute you say pools should and will censor TXs that can spam, but then you argue that pools shouldn't censor transactions that can spam.

You HOPE pools will prioritise SegWit keys out of some faith-and-dream reasoning,
but you hate the idea of code prioritising transactions.
legendary
Activity: 2674
Merit: 3000
Terminated.
1. It does. Because having, say, 1k tx sigops and 80k block sigops vs 4k (mathematically twisted to be treated as 16k) means you cannot use up all the block sigops with 5-7 TXs, but instead need 80+ TXs if you're malicious.
also
That is nonsensical. It does not allow for more throughput. All it does is make it a little bit harder to abuse sigops to fill up blocks.

Ask yourself why anyone should have the ability to make 1 TX that uses up 14%-20% of a block's limit.
There may be use cases which require this. Who are you to censor such transactions?

2) Quadratics of 4k mean a few seconds per TX, vs 1k that's only a few milliseconds per TX.
Irrelevant. It is still quadratic validation time.

A HOPE of priority for SegWit users
No. It is going to happen as long as there are reasonable pools/miners, which we know there are (e.g. Bitfury).
legendary
Activity: 4424
Merit: 4794
I did not forget anything and have already told you the answer to your nonsense. A malicious actor will be strongly weakened by the prioritization of native->SegWit and SegWit->SegWit transactions.
A HOPE of priority for SegWit users

CODE should mean more than HOPE.
legendary
Activity: 4424
Merit: 4794
By lowering the tx sigops (not faking the maths) you can both allow more TXs in and reduce the CPU demand of native TXs
Both points are wrong. This:
1) Does not allow for more TXs. All it does is disable some use-cases which require more sigops.
2) It does not reduce CPU demand at all. Those 1k sigops still have quadratic validation time.

1. It does. Because having, say, 1k tx sigops and 80k block sigops vs 4k (mathematically twisted to be treated as 16k) means you cannot use up all the block sigops with 5-7 TXs, but instead need 80+ TXs if you're malicious.
also
By having 1k sigops, for instance, it helps keep people making lean TXs. Ask yourself why anyone should have the ability to make 1 TX that uses up 14%-20% of a block's limit.

2) Quadratics of 4k mean a few seconds per TX, vs 1k that's only a few milliseconds per TX.

E.g. 80x 1k tx sigops with an 80k block sigop limit = under 2 seconds CPU time per block.

E.g. 5x 4k tx sigops with a 20k block sigop limit = under 50 seconds CPU time per block.
E.g. 5x 4k tx sigops (math-manipulated to 16k) with an 80k block sigop limit = under 50 seconds CPU time per block.

E.g. 5x 16k tx sigops = under 50 minutes CPU time per block.

So 80x 1k tx sigops with an 80k block sigop limit = under 2 seconds CPU time... which is better than
SFSW: 5x 4k tx sigops (math-manipulated to 16k) with an 80k block sigop limit = under 50 seconds CPU time...
and better than removing the kludgy math to get a HFSW:
HFSW: 5x 16k tx sigops = under 50 minutes CPU time...

Do the maths: 1 TX of 80k sigops vs 80 TXs of 1k sigops both total 80k sigops, but because the load is broken up into different TXs the CPU time changes, and 80 TXs of 1k sigops is much, much better for all reasons (see the sketch below).
legendary
Activity: 2674
Merit: 3000
Terminated.
You're still thinking from the HOPE of a 2-merkle soft activation where people move to SegWit TXs.
No. You are confused again and need to re-read what I was talking about. You brought SegWit into a statement that had nothing to do with it, and you are lost again.

By lowering the tx sigops (not faking the maths) you can both allow more TXs in and reduce the CPU demand of native TXs
Both points are wrong. This:
1) Does not allow for more TXs. All it does is disable some use-cases which require more sigops.
2) It does not reduce CPU demand at all. Those 1k sigops still have quadratic validation time.

P.S. You forget to remind yourself that SegWit's linear time applies ONLY IF people move to SegWit keys (which malicious pools/spam users won't do), so stop assuming SegWit will help, because pools/users that want to be malicious won't use SegWit keys.
I did not forget anything and have already told you the answer to your nonsense. A malicious actor will be strongly weakened by the prioritization of native->SegWit and SegWit->SegWit transactions.
legendary
Activity: 4424
Merit: 4794
If you didn't mine the block, you are going to validate it. If a malicious miner starts deploying quadratic-intensive blocks at a higher MB (e.g. 2 MB), they could make you constantly lag behind them (hence DDoS).
Now you're starting to see why SegWit hasn't fixed it!!
There is no risk at 1 MB, and with >1MB for SegWit you'd have linear time, so it has been fixed in this context.

You're still thinking from the HOPE of a 2-merkle soft activation where people move to SegWit TXs.
Your question was:
"If a malicious miner starts deploying quadratic-intensive blocks at a higher MB (e.g. 2 MB), they could make you constantly lag behind them (hence DDoS)."

Stop flip-flopping to hide the risks of a 1-merkle SegWit by then circling back round to a 2-merkle*.
Stop flip-flopping to hide the non-fixes of a 2-merkle SegWit by then circling back round to a 1-merkle.

By lowering the tx sigops (not faking the maths) you can both allow more TXs in and reduce the CPU demand of native TXs, no matter whether people are using SegWit or not.
P.S.
* You forget to remind yourself that SegWit's linear time applies ONLY IF people move to SegWit keys (which malicious pools/spam users won't do), so stop assuming SegWit will help, because pools/users that want to be malicious won't use SegWit keys.
legendary
Activity: 2674
Merit: 3000
Terminated.
If you didn't mine the block, you are going to validate it. If a malicious miner starts deploying quadratic-intensive blocks at a higher MB (e.g. 2 MB), they could make you constantly lag behind them (hence DDoS).
Now you're starting to see why SegWit hasn't fixed it!!
There is no risk at 1 MB, and with >1MB for SegWit you'd have linear time, so it has been fixed in this context.

Core have already removed the FEE calculation features, such as priority and reactive fees; nothing stops them removing the 4x witness scale factor as soon as SegWit is activated, after duping people into activating it.
Maybe you need to read the documentation and code and then think of the long term, not the temporary sales pitch.
The fee calculation is entirely irrelevant and priority has been mostly unused for ages. You still don't understand why the scale factor was included. Go back to SegWit 101.

Because of the tiered network preventing old nodes connecting directly to pools, I did * that to say I was baiting you. I was hoping you would have the honesty/integrity to explain why it's not an issue, but you love to hide the bad bits under the rug.
It is still a non-issue.

Actually, you need to think deeper: by reducing tx sigops to, say, 1k and then having 80k block sigops, without any kludgy maths of pretend counting,
it changes it from being just 5-7 TXs to being 80 TXs to fill a block.
Exactly what would that change? Nothing. You'd disable a lot of use-cases in which these sigops may be needed, in order to make it <20x more expensive to attack the network this way.

My disclaimer was to await your reply and see how practical, critical, and honest you would be, but you stayed silent, just saying "it does not matter" without explaining why, knowing you would dig yourself a hole should you explain.
Ironically, you don't explain anything yourself. All you write is "it is x y z". Roll Eyes
full member
Activity: 128
Merit: 107
I guess the thread title has not helped... it isn't going to be the last time, and we'll never be able to continue in small words. :)
I'll give it another try:

1. There are certain structural oversights in Bitcoin that need to be fixed. Without fixing these, altcoins will probably overtake Bitcoin in the long run.

2. SegWit has several benefits, including short-term higher transaction capacity, long-term much higher transaction capacity through second-level transactions, and also safe (!) increasing of the block size. If Satoshi were to design Bitcoin from scratch today, he would probably do it somewhat similarly to SWHF.

3. SegWit is a good solution, ready for action and well tested. Even some of its strongest opponents secretly admit it is "good" ('verified chatlogs').

4. There are two possible ways to deploy/implement SegWit: as a softfork or as a hardfork. SegWit as a hardfork would allow a slightly cleaner implementation but would also require replay protection (as the exchanges have specifically asked for lately). SWSF does not require replay protection, assuming a hashrate majority. Replay protection is difficult, thus SegWit as a hardfork would altogether cause more technical debt than SWSF. Also, a hardfork is generally considered higher risk and would take a longer preparation time.

5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active and only for SegWit transactions.

6. Any alternative to SegWit SF would take at least half a year longer in implementation and testing.

7. A mining hardware manufacturer and a rich guy are trying to prevent SegWit from being activated, probably because of financial incentives and power-political reasons ('verified chatlogs').

8. Watching altcoins with SWSF flourish, pressure from the users will become so high that Bitcoin will finally get SegWit SF, probably by the miners accepting it after all.
legendary
Activity: 4424
Merit: 4794
Emphasis: the quadratic/CPU-intensive time only happens once for a pool, when it first gets relayed a TX and validates it to add it to the mempool. The creation of a raw block is just collating data minutes later, not revalidating TXs again.
If you didn't mine the block, you are going to validate it. If a malicious miner starts deploying quadratic-intensive blocks at a higher MB (e.g. 2 MB), they could make you constantly lag behind them (hence DDoS).
Now you're starting to see why SegWit hasn't fixed it!!

Also, SegWit is "supposedly" 75% cheaper, which means pools get 4x less bonus from a SegWit TX.
There is a reason for that. You need to re-read what Segwit is about.
Core have already removed the FEE calculation features, such as priority and reactive fees; nothing stops them removing the 4x witness scale factor as soon as SegWit is activated, after duping people into activating it.
Maybe you need to read the documentation and code and then think of the long term, not the temporary sales pitch.

There are also issues: if they add SegWit TXs they have to form the 2-merkle, and then have some peers request the pool to strip it down to just the base block (old nodes connected to pools)*.
That's not an issue.
Because of the tiered network preventing old nodes connecting directly to pools, I did * that to say I was baiting you. I was hoping you would have the honesty/integrity to explain why it's not an issue, but you love to hide the bad bits under the rug with empty replies of 'wrong', 'irrelevant', or 'not an issue'.

Very simple: keep sigops at a REAL 4k, or below 4k, per TX.
Which also makes it easier to clutter up blocks to hit the max-sigops-per-block limit. As you'd put it, this is no fix.
Actually, you need to think deeper: by reducing tx sigops to, say, 1k and then having 80k block sigops, without any kludgy maths of pretend counting,
it changes it from being just 5-7 TXs to being 80 TXs to fill a block.

P.S. If SegWit went soft first and then removed the kludge to go to 1 merkle after, that means removing the 'witness discount', which would then bring back the quadratics risk of REAL 16k sigops (8 min native validation time).
(Disclaimer: there is bait in my last sentence; I wonder if you will bite.)
Your disclaimer is full of nonsense and proof that you don't understand Segwit. Go back to school.
My disclaimer was to await your reply and see how practical, critical, and honest you would be, but you stayed silent, just saying "it does not matter" without explaining why, knowing you would dig yourself a hole should you explain.

But at least in a few areas you are starting to think beyond the temporary promotion. Now you really need to start wearing the critical hat more often and look past the Blockstream defence you keep trying to promote.
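The "pay the quadratic cost once" point above is essentially a validation cache; a minimal Python sketch follows (hypothetical structure; real nodes keep comparable script/signature caches, details differ). It also shows the counterpoint from the reply: transactions a pool never relayed, e.g. a malicious miner's own secret TXs, still hit the expensive path at block time:

[code]
validated = set()                         # txids whose scripts were already checked

def expensive_script_checks(tx):
    pass                                  # stand-in for (possibly quadratic) sig hashing

def accept_to_mempool(tx):
    expensive_script_checks(tx)           # cost paid once, when the tx is relayed
    validated.add(tx["txid"])

def connect_block(block):
    for tx in block["txs"]:
        if tx["txid"] not in validated:   # only never-seen txs cost anything here;
            expensive_script_checks(tx)   # a miner's own secret txs land on this
            validated.add(tx["txid"])     # path, which is the counterargument above

tx = {"txid": "ab" * 16}
accept_to_mempool(tx)
connect_block({"txs": [tx]})              # cached: no expensive re-check at block time
[/code]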
legendary
Activity: 2674
Merit: 3000
Terminated.
Emphasis: the quadratic/CPU-intensive time only happens once for a pool, when it first gets relayed a TX and validates it to add it to the mempool. The creation of a raw block is just collating data minutes later, not revalidating TXs again.
If you didn't mine the block, you are going to validate it. If a malicious miner starts deploying quadratic-intensive blocks at a higher MB (e.g. 2 MB), they could make you constantly lag behind them (hence DDoS).

Also, SegWit is "supposedly" 75% cheaper, which means pools get 4x less bonus from a SegWit TX.
There is a reason for that. You need to re-read what Segwit is about.

There are also issues: if they add SegWit TXs they have to form the 2-merkle, and then have some peers request the pool to strip it down to just the base block (old nodes connected to pools)*.
That's not an issue.

Very simple: keep sigops at a REAL 4k, or below 4k, per TX.
Which also makes it easier to clutter up blocks to hit the max-sigops-per-block limit. As you'd put it, this is no fix.

P.S. If SegWit went soft first and then removed the kludge to go to 1 merkle after, that means removing the 'witness discount', which would then bring back the quadratics risk of REAL 16k sigops (8 min native validation time).
(Disclaimer: there is bait in my last sentence; I wonder if you will bite.)
Your disclaimer is full of nonsense and proof that you don't understand Segwit. Go back to school.

Does anyone from either side (I see the same posters) feel this will ever meet in some middle ground?
Why would you compromise, when you've delivered an actually proven and working solution, for something that has no benefits aside from a capacity increase? Roll Eyes