
Topic: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. - page 2

legendary
Activity: 4424
Merit: 4794
Nope, that is completely wrong. The 80k limit is post-SegWit, and legacy sigops count as 4x more. Therefore, for legacy transactions the limit does not change at all. It is 20k and will continue to remain 20k.

that's a maths kludge OUTSIDE of the network consensus rules.

but from a network consensus-rule standpoint it's not what you think.

there needs to be a network consensus tx-sigops rule of <4k to solve the native (legacy) risks.


after a year you have preferred to just defend Core rather than the network.
anyway, you're only wasting your own time with your games.
have a nice year kissing ass
legendary
Activity: 2674
Merit: 3000
Terminated.
from a network overview, if pools used the 80k CONSENSUS but were not following Core's KLUDGE maths, then it does make things worse
Nope, that is completely wrong. The 80k limit is post-SegWit, and legacy sigops count as 4x more. Therefore, for legacy transactions the limit does not change at all. It is 20k and will continue to remain 20k.
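For reference, here is a minimal sketch of that accounting, loosely modelled on GetTransactionSigOpCost in Bitcoin Core's validation.cpp; the two constants match the real code, but the body is heavily simplified:

Code:
#include <cstdint>

// Simplified post-SegWit sigop accounting. The two constants match
// Bitcoin Core's; everything else here is illustrative only.
static const int64_t WITNESS_SCALE_FACTOR = 4;
static const int64_t MAX_BLOCK_SIGOPS_COST = 80000;

// Legacy (and P2SH) sigops are weighted 4x, witness sigops 1x.
// A block of purely legacy transactions therefore still fits at most
// 80000 / 4 = 20000 legacy sigops: the same as the old 20k limit.
int64_t SigOpCost(int64_t legacySigOps, int64_t witnessSigOps)
{
    return legacySigOps * WITNESS_SCALE_FACTOR + witnessSigOps;
}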

there needs to be a proper RULE of 4k sigops that does not change.
No.

See how I baited and double debunked you? You don't even understand the ELI5 explanations from sipa & luke-jr, let alone C++. Cheesy
legendary
Activity: 4424
Merit: 4794

using v0.12 rules you're right,
but check out the 0.14 rules:
80k per block, 16k per tx

SegWit makes things worse for the 1MB block


Quote
questioner2: sigops in legacy scripts count 4x toward the 80k limit
this is in validation.cpp:GetTransactionSigOpCost
-snip-
a legacy sigop counts as 4 segwit sigops
so 20k legacy sigops would fill a block

a tx-sigops limit of <4k in the consensus header file solves the native quadratics!
No. You didn't even admit that you were wrong about the existence of the 4k per-TX limit, as that's a policy rule. How sad.

that maths kludge is CORE-centric, not NETWORK consensus

from a network overview, if pools used the 80k CONSENSUS but were not following Core's KLUDGE maths, then it does make things worse
there needs to be a proper RULE of 4k sigops that does not change.
a REAL RULE: not maths kludge, not implementation-defined, but a real NETWORK consensus RULE

other implementations also have the 80k block-sigops for NETWORK CONSENSUS, meaning that due to SegWit it can make things worse.
ESPECIALLY when Core removes the kludge to make a 1-merkle version, which they promise (but won't uphold) after the soft-fork activation
legendary
Activity: 2674
Merit: 3000
Terminated.
I don't even know why I'm interacting with someone that can't even read C++
I don't even do C++ and it seems rather obvious that I understand more of it than someone claiming that he knows it (you). That is just sad.

you have not debunked crap.
You should write a book about lying and shilling. Roll Eyes Here is a simple example of easily debunked bullshit:

using v0.12 rules you're right,
but check out the 0.14 rules:
80k per block, 16k per tx

SegWit makes things worse for the 1MB block

Quote
questioner2: sigops in legacy scripts count 4x toward the 80k limit
this is in validation.cpp:GetTransactionSigOpCost
-snip-
a legacy sigop counts as 4 segwit sigops
so 20k legacy sigops would fill a block

a tx-sigops limit of <4k in the consensus header file solves the native quadratics!
No. You didn't even admit that you were wrong about the existence of the 4k per-TX limit, as that's a policy rule. How sad.
legendary
Activity: 4424
Merit: 4794


you have not debunked crap.
you have just not seen the whole picture.
you have not seen things from the whole network point of view, you just love the word games

I don't even know why I'm interacting with someone that can't even read C++
lauda, same advice I gave you a year ago:
learn C++
learn to read past the one-paragraph sales pitch.

it's hard enough trying to explain things in such short posts before you lose concentration and just shout
"nonsensical", "wrong because shill", "are they paying you enough" as your failsafe reply when you can't understand things.

but now you have gone beyond even trying to learn anything.

you have become a hypocrite by making arguments that actually debunk your own earlier arguments

saying nodes can bypass the fee maths kludge is correct.
but that's why real rules need to be placed in the consensus header file, rather than the kludge

P.S. the block-sigop limit is in the consensus. but from a network-wide overview, where Core's maths kludge can be bypassed, my initial arguments still stand.

i tried entering your narrow mindset by pretending that everyone was following Core code, and even said i was wrong when looking at Core's kludge specifically (rather than the network overview), and still showed how it can be abused, just to try getting you to understand the risks. but then you go and play semantics.

you're not trying to see the network risks, you're just playing word games.
WAKE UP

a tx-sigops limit of <4k in the consensus header file solves the native quadratics!
wake up
legendary
Activity: 2674
Merit: 3000
Terminated.
you're not understanding it
Some people think that you aren't completely uneducated. That is the only misunderstanding here.

1. the 5x is about the blocksigoplimit / 5 = txsigop limit
There is NO such thing as a TX sigops limit as a consensus rule. It is a RELAY policy. Any miner can create and include a transaction consisting of more than 4k sigops.
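For context, a rough sketch of the relay policy in question, modelled on MAX_STANDARD_TX_SIGOPS_COST in Bitcoin Core 0.14's policy.h; the names follow the real code, but the check shown here is simplified:

Code:
#include <cstdint>

static const int64_t MAX_BLOCK_SIGOPS_COST = 80000;  // consensus limit
// One fifth of the block limit: 80000 / 5 = 16000 cost units,
// i.e. 4000 legacy sigops at the 4x weighting.
static const int64_t MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST / 5;

// Standardness (relay) check only: a node refuses to relay or accept
// such a transaction into its mempool, but a miner can still place it
// directly into a block, since the only consensus rule is the
// per-block 80k cost limit.
bool IsStandardSigOps(int64_t txSigOpCost)
{
    return txSigOpCost <= MAX_STANDARD_TX_SIGOPS_COST;
}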

this is why REAL rules, real code, should be used, not bait-and-switch hope-and-faith kludgy maths crap.
You don't understand the code behind a simple calculator, let alone "real rules & real code". Get another job franky, seriously. I debunked you 20 times in 1 week.
legendary
Activity: 4424
Merit: 4794
certain multisig, pfft. and why do they deserve to have 20% of a block without paying 20% of the price?
It looks like someone hasn't been reading the Bitcoin Core code again. Smiley

This PR also negates any concern about your "easy to spam via 5 max-sigops TXs" nonsense:
Quote
Treat high-sigop transactions as larger rather than rejecting them
When a transaction's sigops * bytespersigop exceeds its size, use that value as its size instead (for fee purposes and mempool sorting). This means that high-sigop transactions are no longer prevented, but they need to pay a fee corresponding to the maximally-used resource.

All currently acceptable transactions should remain acceptable and there should be no effect on their fee/sorting/prioritization.
https://github.com/bitcoin/bitcoin/pull/8365

you're not understanding it
1. the 5x is about the blocksigoplimit / 5 = txsigop limit (consensus + policy)

2. the 'treat as larger rather than reject' is more about exceeding the 100kb of data that SOME txs would accumulate while trying to reach 4k sigops.

which is where i knew you would nitpick, so i pre-empted your obvious crap.
screw it, i know there are many nitpickers:
c) 1 input : 2856 outputs = 97252 bytes (~2.857k sigops)
7 txs of (c) = 680764 bytes (~20k sigops)

with a TX that stays below:
the bloat limit of the 1MB block
the bloat of the 100kb 'treat as larger' limit (of REAL BYTES)

while filling the block-sigop limit to prevent any other transactions getting in.
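(Checking the arithmetic on (c), assuming one sigop per standard P2PKH output: 7 x 2856 = 19992 sigops, just under the 20k legacy cap, which is 80k cost units at the 4x weighting; and 7 x 97252 = 680764 bytes, comfortably under the 1MB block limit, with each tx also under the 100kb standardness size.)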

P.S. the kludgy maths that Core has in 'pull/8365' is just about trying to assume a fee for the tx. but ultimately, if a pool was not using that kludgy maths / Core implementation for the FEE, the pool would add the TX at a cheaper rate that doesn't have the kludgy maths.

this is why REAL rules, real code, should be used, not bait-and-switch hope-and-faith kludgy maths crap.
legendary
Activity: 2674
Merit: 3000
Terminated.
certain multisig, pfft. and why do they deserve to have 20% of a block without paying 20% of the price?
It looks like someone hasn't been reading the Bitcoin Core code again. Smiley

This PR also negates any concern about your "easy to spam via 5 max-sigops TXs" nonsense:
Quote
Treat high-sigop transactions as larger rather than rejecting them
When a transaction's sigops * bytespersigop exceeds its size, use that value as its size instead (for fee purposes and mempool sorting). This means that high-sigop transactions are no longer prevented, but they need to pay a fee corresponding to the maximally-used resource.

All currently acceptable transactions should remain acceptable and there should be no effect on their fee/sorting/prioritization.
https://github.com/bitcoin/bitcoin/pull/8365
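For reference, a minimal sketch of the sizing rule that PR describes, modelled on GetVirtualTransactionSize in Bitcoin Core's policy code; the 20 bytes-per-sigop default (DEFAULT_BYTES_PER_SIGOP) is assumed, and the body is simplified:

Code:
#include <algorithm>
#include <cstdint>

static const int64_t WITNESS_SCALE_FACTOR = 4;
static const int64_t BYTES_PER_SIGOP = 20;  // assumed default

// If the transaction's sigop cost, priced at BYTES_PER_SIGOP, exceeds
// its weight, the larger figure is used as its virtual size. Sigop-heavy
// transactions are thus no longer rejected, but they pay fees (and are
// mempool-sorted) as if they were bigger.
int64_t VirtualSize(int64_t weight, int64_t sigOpCost)
{
    return (std::max<int64_t>(weight, sigOpCost * BYTES_PER_SIGOP)
            + WITNESS_SCALE_FACTOR - 1) / WITNESS_SCALE_FACTOR;
}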
legendary
Activity: 4424
Merit: 4794
the solution is to limit the sigops.
No, that is no solution whatsoever. All that does is kill use-cases which require higher sigops (see certain multisig).
certain multisig, pfft. and why do they deserve to have 20% of a block without paying 20% of the price?

and develop a new priority fee formula that actually works, which charges more for people that bloat and want to spend too often.
Each wallet can and has developed its own fee calculations.
yep, but Core removed some irrational ones, and also removed some rational ones. the network as a whole should have at least some agreed (consensus) limits to make users leaner and less easy to spam.

I'm still laughing at how you want to prioritise X but then you don't want to prioritise Y.

things like hope and faith that pools will do the right thing are not enough. i actually laugh that the Blockstream (Core) devs actually removed code mechanisms and then went for banker economics: 'just pay more'
Statements like these make you look like an idiot. Mining pools had primarily stopped using priority long before Bitcoin Core removed it from their code (which is also the reason for its removal).

because that fee formula was just a rich-vs-poor mechanism. Core didn't even bother being devs and developing a better formula in code.

yep, developers didn't develop
yep, coders didn't code

instead they shouted "just pay more"
which is still the failed rich-vs-poor mechanism
legendary
Activity: 2674
Merit: 3000
Terminated.
the solution is to limit the sigops.
No, that is no solution whatsoever. All that does is kill use-cases which require higher sigops (see certain multisig).

and develop a new priority fee formula that actually works, which charges more for people that bloat and want to spend too often.
Each wallet can and has developed its own fee calculations.

things like hope and faith that pools will do the right thing are not enough. i actually laugh that the Blockstream (Core) devs actually removed code mechanisms and then went for banker economics: 'just pay more'
Statements like these make you look like an idiot. Mining pools had primarily stopped using priority long before Bitcoin Core removed it from their code (which is also the reason for its removal).
legendary
Activity: 4424
Merit: 4794
You know about flextrans right?

Wouldn't Flextrans have the exact same problem? I haven't studied Flextrans in detail, but from what I remember it would enable a new "version" of transactions without malleability. But wouldn't legacy transactions ("v1", as they call it here) continue to be allowed in this proposal, too? In this case it could lead to the exact same situation where a malicious miner or pool could try to spam the network with legacy transactions to "take out" some competitors.

yep, Flextrans is a new tx type just like SegWit, requiring people to choose to use it, but it doesn't solve the issues with the old native (legacy) transactions

the solution is to limit the sigops. and develop a new priority fee formula that actually works, which charges more for people that bloat and want to spend too often.

things like hope and faith that pools will do the right thing are not enough. i actually laugh that the Blockstream (Core) devs actually removed code mechanisms and then went for banker economics: 'just pay more'

i laughed more when reading how their half-gesture hopes and promoted, utopian, half-baked promises meant more to them than actual clean code
member
Activity: 101
Merit: 10
Hi OP, don't spread such naive words.

Roger and the miners have already agreed to activate SW first, but only if the BS devs show some sincerity on a further block-limit increase. But what is the BS devs' response? They refuse every proposal and continue to spread lies and personal attacks against Roger and the miners.


Bitcoin is not your enemy. Wake UP. Let's fight against BS.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
You know about flextrans right?

Wouldn't Flextrans have the exact same problem? I haven't studied Flextrans in detail, but from what I remember it would enable a new "version" of transactions without malleability. But wouldn't legacy transactions ("v1", as they call it here) continue to be allowed in this proposal, too? In this case it could lead to the exact same situation where a malicious miner or pool could try to spam the network with legacy transactions to "take out" some competitors.
legendary
Activity: 2674
Merit: 3000
Terminated.
Thanks. If anyone wants to know what BU'ers think of what the system is and should be, I think I can now refer them to your post.

I rest my case.
Yup. This is exactly the nonsense that they are preaching. Let's make Bitcoin a very centralized system in which you can't achieve financial sovereignty unless you buy server-grade hardware costing thousands of USD. Roll Eyes

Ok, I think I have understood the quadratic scaling problem now (thanks to @Lauda, @franky1, @jbreher and @-ck), my error was to think that only miners were affected, but as it affects mainly validation, all full nodes are affected and a malicious miner/pool could try to "kill small full nodes" or even smaller mining pools via a spam attack. So my opinion is reinforced that in the case of a block size increase, legacy transactions would have to be restricted by the protocol in some way.
Correct. Everyone is affected and the "parallel validation" BUIP that attempts to solve it is a joke. It does not solve anything.
full member
Activity: 142
Merit: 100
You know about flextrans right?

Makes it better, but doesn't fix it. It still doesn't scale linearly.
It is possible to fix it in the near future. The question is how to spread the information once it already works.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Do you care to argue the facts above?
No, I think I'm quite done here, thanks.
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
Yes. Anyone who wants to be a central element of a multibillion dollar system is going to have to buck up for the requisite (and rather trivially-valued, in the scope of things) hardware to do so.

Bitcoin's dirty little secret is that non-mining nodes provide zero benefit to the network at large. Sure, operating a node allows that particular node operator to transact directly on the chain, so provides value to that person or persons. But it provides zero utility to the network itself. Miners can always route around the nodes that do not accept their transactions. Miners don't care whether non-mining nodes accept their blocks - only whether other miners will build atop their blocks.

And the number will not be ten - it will be many more. As again, anyone who wants to be able to transact directly upon the chain in a trustless manner will need to buck up to the hardware demands.
Thanks. If anyone wants to know what BU'ers think of what the system is and should be, I think I can now refer them to your post.

No, you may not. If you want to have a handy reference to what one BU'er -- namely myself -- thinks, then you can refer them to my post. I do not speak for others.

Do you care to argue the facts above? Or shall you just rely on crowd sentiment as sufficient to escape any reasoned discussion?
full member
Activity: 196
Merit: 101
You know about flextrans right?

Makes it better, but doesn't fix it. It still doesn't scale linearly.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
Ok, I think I have understood the quadratic scaling problem now (thanks to @Lauda, @franky1, @jbreher and @-ck), my error was to think that only miners were affected, but as it affects mainly validation, all full nodes are affected and a malicious miner/pool could try to "kill small full nodes" or even smaller mining pools via a spam attack. So my opinion is reinforced that in the case of a block size increase, legacy transactions would have to be restricted by the protocol in some way.

You know about flextrans right?
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
Ok, I think I have understood the quadratic scaling problem now (thanks to @Lauda, @franky1, @jbreher and @-ck), my error was to think that only miners were affected, but as it affects mainly validation, all full nodes are affected and a malicious miner/pool could try to "kill small full nodes" or even smaller mining pools via a spam attack. So my opinion is reinforced that in the case of a block size increase, legacy transactions would have to be restricted by the protocol in some way.
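For anyone still unsure why it is quadratic: with the legacy sighash, each input's signature commits to a hash of roughly the whole transaction, so n inputs each hash O(n) bytes. A toy model of my own (not code from any implementation):

Code:
#include <cstdint>

// Toy model of legacy sighash cost: each of the n inputs is verified
// against a hash of (nearly) the entire transaction, so total bytes
// hashed grow quadratically with the number of inputs.
int64_t LegacyBytesHashed(int64_t numInputs, int64_t bytesPerInput)
{
    int64_t txSize = numInputs * bytesPerInput;  // outputs ignored for simplicity
    return numInputs * txSize;                   // n * O(n) = O(n^2)
}
// e.g. doubling the inputs roughly quadruples the hashing work, which is
// why one huge legacy transaction can stall validation on every node.

SegWit's BIP143 sighash instead hashes each input's data once, which is what makes the new transaction type scale linearly.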