
Topic: ToominCoin aka "Bitcoin_Classic" #R3KT

staff
Activity: 4284
Merit: 8808
January 22, 2016, 02:47:06 PM
Quote
SOME interactions between new and old clients POSSIBLE
What are you saying isn't possible? Did you suddenly lose functionality when CLTV was released?

Quote
but will be left passing the parcel of unknown content
No they won't-- but even if they were, why would that be bad?

Quote
95% consensus
95% of what? And where are you getting that figure from? The code in the Bitcoin Classic repository doesn't appear to attempt to do that.

Quote
when 95% of people are happy
Oh, of people? How the heck do you intend to measure that?  ... if we could measure that we likely wouldn't need the blockchain.
legendary
Activity: 4424
Merit: 4794
January 22, 2016, 02:30:53 PM
-snip-
Let us review it from another angle (summary):

SegWit soft fork
  • Clients that do not upgrade are unable to validate signatures
  • SOME interactions between new and old clients POSSIBLE
  • Services that do not upgrade function kind of normally but will be left passing the parcel of unknown content

2MB hard fork
  • Clients that do not upgrade are OK until there is 95% consensus; then and only then will those not upgraded be completely cut off
  • Services that do not upgrade stop functioning only when there is overwhelming consensus for miners to risk increasing the size
  • Interaction between new and old clients impossible when 95% of people are happy with >1MB blocks


remember: miners that want to actually mine won't want to have their blocks orphaned, and so they won't make bigger blocks until they are sure that the network is ready. if the network is not ready, then even nefarious large-blockers will have their blocks orphaned, because non-nefarious small-blockers won't have that nefarious block in their chain.
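for reference, a minimal sketch of how this kind of supermajority trigger is usually counted (purely illustrative: the window, threshold and version number are assumptions for the example, and as noted above the Classic code may not implement a 95% check at all):

Code:
# illustrative sketch of supermajority signalling; WINDOW, THRESHOLD and
# fork_version are assumed parameters, not Bitcoin Classic's actual code
WINDOW = 1000        # look back over the last 1000 blocks
THRESHOLD = 0.95     # the "95%" figure under debate

def bigger_blocks_active(block_versions, fork_version=5):
    """True once enough recent blocks signal support for the fork."""
    recent = block_versions[-WINDOW:]
    signalling = sum(1 for v in recent if v >= fork_version)
    return signalling / len(recent) >= THRESHOLD

on this logic a miner would only dare produce a >1mb block once a check like this passes, because earlier big blocks would simply be orphaned.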
legendary
Activity: 2674
Merit: 3000
Terminated.
January 22, 2016, 02:15:04 PM
-snip-
Let us review it from another angle (summary):

SegWit soft fork
  • Clients that do not upgrade are unable to validate 'new format' signatures
  • All interactions between new and old clients possible
  • Services that do not upgrade function normally

2MB hard fork
  • Clients that do not upgrade are completely cut off
  • Services that do not upgrade stop functioning
  • Interaction between new and old clients impossible

Keep in mind that this is a summary, not a technical explanation of any points (in detail).
There may be scenarios in which some point proves to be false (e.g. services operating on the wrong chain).
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
January 22, 2016, 02:06:35 PM

Well yes, rushed HFs in general, even to 1.001 MB, can cause problems. This is why Wang Chun of F2Pool agreed with the devs that we need at least 3 months for soft forks and 1 year for hard forks, and the next planned HF, to 2 MB, is in Feb/March '17. As Pieter suggests, I think we can safely do one within 6 months with a high degree of consensus, but 1 year would be better.

Alright then, see, I'm fair enough to admit that I'm not technically experienced enough to call the shots on this.

However, many trolls who also don't have any idea about the issue now seem to be trying to dictate policy. It's very arrogant.

So yes, people need to really look in the mirror and think before they make ridiculous proposals.
legendary
Activity: 994
Merit: 1035
January 22, 2016, 01:38:39 PM

I won't waste my time going through the basics again. There's a security risk, i.e. a new attack vector. Whether you believe that someone is going to try abusing it or not does not change the fact of its existence.

What is the difference between a 1 MB block size and a 1.001 MB block size? Is there any logical reason why 1 MB is the block limit, or is it arbitrary?

If it is arbitrary, would raising the block size from 1 MB to 1.001 MB cause any harm? Or is the mere fact of raising the block size the issue here (due to panic and division among users)?


I'm just trying to understand things here.

Well yes, rushed HFs in general, even to 1.001 MB, can cause problems. This is why Wang Chun of F2Pool agreed with the devs that we need at least 3 months for soft forks and 1 year for hard forks, and the next planned HF, to 2 MB, is in Feb/March '17. As Pieter suggests, I think we can safely do one within 6 months with a high degree of consensus, but 1 year would be better.
legendary
Activity: 4424
Merit: 4794
January 22, 2016, 01:35:24 PM

Almost all of franky's posts in regards to SegWit are at least partially wrong though, and some are completely wrong.


-snip-
I won't waste my time going through the basics again. There's a security risk, i.e. a new attack vector. Whether you believe that someone is going to try abusing it or not does not change the fact of its existence.

the reason I said franky's posts were interesting was because I haven't verified them ;) he might have good points, he might not!
He's starting to look like a hopeless case though.


so my ongoing rant has been that segwit will cut off other implementations from fully validating unless they too upgrade to be segwit supporters.
your rant is that everything will be fine...
you even said that the devs said everything would be fine.. but:

Quote
[01:03] sipa what about a client that does not support segwit?
[01:03] Lauda: why would you care to?
[01:03] Just out of curiosity.
[01:04] they won't see the witness data
[01:04] but they also don't care about it

[01:04] Someone mentioned it. So it is not possible for a client that does not support Segwit to see the witness data?
[01:04] Lauda: it is certainly possible
[01:04] Lauda: but it's meaningless to do.
[01:05] of course it is "possible"... but that "possible" just means supporting segwit

[01:05] imagine people wanted to stick with bitcoin-core 0.11 and not upgrade, will they be cut off from getting witness data, by default if segwit gets consensus?
[01:06] Chiwawa_: they could certainly code up their wallet to get it, but again what's the point? are they going to check the witness themselves?

so unless other implementations add more code just to be able to fully validate again, they are going to get cut off, just passing the parcel of data they don't understand.. which in itself is a risk if a non-segwit miner adds data it can't check into a block.
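to picture that 'passing the parcel' point, here is a toy sketch (not real consensus code; run_script stands in for the script interpreter, and scripts are modelled as plain lists of pushed items): to an old node a segwit output is just a couple of pushes that evaluate as true, so it 'validates' the spend without ever seeing a signature..

Code:
# toy model: a script is a list of items that get pushed onto the stack
def run_script(script, stack):
    for item in script:
        stack.append(item)

def old_node_verify(script_sig, script_pubkey, witness):
    # a pre-segwit node never receives `witness`: peers strip witness
    # data from the blocks they send to non-segwit nodes
    stack = []
    run_script(script_sig, stack)      # empty for a native segwit spend
    run_script(script_pubkey, stack)   # OP_0 then the 20-byte program
    return bool(stack) and stack[-1] != b""   # truthy top item => "valid"

# a P2WPKH-style output: no signature is ever checked by the old node
print(old_node_verify([], [b"", b"\x11" * 20], witness=None))  # True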

basically
bitcoin-core v0.1
bitcoin-core v0.11
bitcoin-core v0.12
bitcoin classic
bitcoin unlimited
bitcoin xt
bitcoin .. whatever the other dozen implementations are
will be cut off from seeing signatures if segwit gets consensus..
and that makes bitcoin-core v0.13SW the dictator

have a nice day.. as you are becoming a hopeless case
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
January 22, 2016, 01:32:20 PM

I won't waste my time going through the basics again. There's a security risk, i.e. a new attack vector. Whether you believe that someone is going to try abusing it or not does not change the fact of its existence.

What is the difference between a 1 MB block size and a 1.001 MB block size? Is there any logical reason why 1 MB is the block limit, or is it arbitrary?

If it is arbitrary, would raising the block size from 1 MB to 1.001 MB cause any harm? Or is the mere fact of raising the block size the issue here (due to panic and division among users)?


I'm just trying to understand things here.
legendary
Activity: 2674
Merit: 3000
Terminated.
January 22, 2016, 12:55:49 PM
-snip-
I won't waste my time going through the basics again. There's a security risk, i.e. a new attack vector. Whether you believe that someone is going to try abusing it or not does not change the fact of its existence.

Sorry, I meant full blocks, i.e. do you think RBF and/or the fee market mitigates any risk there? Do you even think that full blocks are a problem?
I don't think that there's a problem when some blocks are full (e.g. right now). However, I think that if the percentage of full blocks (full blocks/total blocks) is high, a problem might occur. The fees might become too high for Bitcoin to make sense, especially for people transacting smaller amounts. This could also be worsened if we do not have off-chain solutions by then. I'm in favor of raising the block size limit once the needed improvements are implemented; I'm also in favor of a dynamic block size.
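For what it's worth, one common shape for a dynamic limit looks like this (an illustrative sketch only, not any concrete proposal; the floor and multiplier are made-up parameters):

Code:
# illustrative median-based dynamic block size limit; the floor and
# multiplier here are assumptions, not parameters from any real BIP
from statistics import median

def next_block_limit(recent_sizes_bytes, floor=1_000_000, multiplier=2):
    """Cap the next block at `multiplier` times the median recent size,
    never dropping below `floor`."""
    return max(floor, int(multiplier * median(recent_sizes_bytes)))

print(next_block_limit([400_000] * 100))    # quiet network: stays at 1 MB
print(next_block_limit([1_000_000] * 100))  # sustained full blocks: 2 MB

The appeal is that the limit only grows when blocks are actually full, rather than by a one-off jump.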

the reason I said franky's posts were interesting was because I haven't verified them ;) he might have good points, he might not!
He's starting to look like a hopeless case though.
sr. member
Activity: 689
Merit: 269
January 22, 2016, 11:08:43 AM
I think it is Americans who should be worrying. It would be a real shame if some power lines failed in the storm and you were not able to mine your smaller-than-1MB blocks.
legendary
Activity: 2576
Merit: 1087
January 22, 2016, 11:05:12 AM
I totally accept that raising the blocksize limit is a workaround; I don't agree that it's a case of blind hope. Nobody knows the outcome for sure, but I think it would be difficult to argue against the most likely scenario being that nothing drastic will happen. I can see how the risk profile might not be acceptable for some though; I am a bit more risk tolerant, so that will colour my opinion. Though interestingly, I wonder how you profile the risk of full transactions - is it that the 'fee market' mitigates that? Are you pro RBF? Genuinely interested.
What do you mean by 'profile the risk of full transactions'? I don't have a particular opinion on RBF; I'm still in the 'grey zone' when it comes to that. However, I can see the use cases for it, and making it opt-in seems acceptable.

Sorry, I meant full blocks, i.e. do you think RBF and/or the fee market mitigates any risk there? Do you even think that full blocks are a problem?

the reason I said franky's posts were interesting was because I haven't verified them ;) he might have good points, he might not!
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
January 22, 2016, 10:21:37 AM
This is 'not a fix' but a bad workaround that limits the system.
Imagine if you applied that logic to the blocksize! ;)
I can. The block size limit does indeed limit the system. Just raising it (bad workaround) and hoping everything will be alright would be a bad move. This is why the infrastructure around it needs to be improved, after which the block size limit can be safely increased.

your doomsday scaremongering of nefarious people creating 2mb blocks filled with a single transaction of 1.99mb of data won't happen..
for 2 reasons

1. as you said yourself, it takes 10 minutes+ just to validate the transaction before even working on hashing the block itself.. so that's 25btc lost while not even mining.. then if they start mining and create a block, because it's 1.99mb of data it will take longer to hash than a block of 1.025mb or 0.99mb would. so it won't get solved the fastest either


2. because miners know that being nefarious risks losing out on 3 blocks' worth of time.. that's a potential 75btc loss (3 × 25btc, roughly $30,000) so they would need to be bribed with $30k just to attempt it, with still no guarantee it would work.

The construed troll txs don't have to all be in a single 0.99MB (BTC) or 1.99MB (Classic SW or whatever they are calling that shade of bikeshed today) transaction.

That's just the worst-case scenario, for maximum quadratically superlinear sigop trolling fun. The less-than-worst-case outcomes aren't OK either.


Your textbook hand-waving "won't happen" ignores 4 key facts:

1. "10 minutes+" varies depending on the CPU power used to construe it (I'll just assume you are right about construal taking time equal to validation given equal CPU/IO).

2. Only one miner needs to defect and mine a block for it to be included. If there are huge >75BTC or whatever (in-band) tx fees and/or huge (out-of-band) bribes at stake, you can bet it will happen. Also, renting hashpower is a thing. And it's easy to set up a pool with subsidized 101% or 200% payouts to summon any amount of ASICs as easily as turning on a faucet firehose.

3.  MP and other small block militia types have staggering sums of BTC in their war chests; the coffers of La Serenissima are second to none but The Creator.

4.  Unlike the attacking greedy VC ratfucks, the defenders in 3. are willing to spend every last BTC to defend the walls of La Serenissima, as they consider the BTC worthless if the walls fall.
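For a rough sense of the quadratic sighash cost in question (a back-of-the-envelope sketch: 180 bytes per input is an assumed average, and legacy signing rehashes roughly the whole tx once per input):

Code:
# back-of-the-envelope sketch of quadratic legacy sighash cost;
# bytes_per_input is an assumed average, not exact serialization math
def total_sighash_bytes(num_inputs, bytes_per_input=180):
    tx_size = num_inputs * bytes_per_input
    # each input's signature check hashes ~the whole transaction,
    # so total hashed bytes grow with num_inputs squared
    return num_inputs * tx_size

for n in (1_000, 5_000, 10_000):
    print(f"{n:>6} inputs -> ~{total_sighash_bytes(n) / 1e9:.2f} GB hashed")

Doubling the construal roughly quadruples the hashing work, which is why the single maximum-size tx is the worst case but the smaller ones still sting.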
legendary
Activity: 4424
Merit: 4794
January 22, 2016, 09:02:49 AM
This is 'not a fix' but a bad workaround that limits the system.
Imagine if you applied that logic to the blocksize! ;)
I can. The block size limit does indeed limit the system. Just raising it (bad workaround) and hoping everything will be alright would be a bad move. This is why the infrastructure around it needs to be improved, after which the block size limit can be safely increased.

your doomsday scaremongering of nefarious people creating 2mb blocks filled with a single transaction of 1.99mb of data won't happen..
for 2 reasons

1. as you said yourself, it takes 10 minutes+ just to validate the transaction before even working on hashing the block itself.. so that's 25btc lost while not even mining.. then if they start mining and create a block, because it's 1.99mb of data it will take longer to hash than a block of 1.025mb or 0.99mb would. so it won't get solved the fastest either

so.. with that said.. imagine blockheight was 400,000... the nefarious miner would need to start validating the transaction, taking 10 minutes..
simultaneously, other miners validate AND hash out height 400,001.

it's been 10 minutes and now the nefarious miner finally gets around to hashing the block, but because 1.99mb of data is more than 1.025mb it takes longer.. so again the non-nefarious miners hash out 400,002 while the nefarious miner is still working..

and when the nefarious miner finally gets a solution.. its blockheight will be 400,001, because its header links back to 400,000 while the rest of the network is at 400,003.. making the nefarious block instantly out of sync and rejected.. not due to blocksize rules.. but due to being behind in the chain.

2. because miners know that being nefarious risks losing out on 3 blocks' worth of time.. that's a potential 75btc loss (3 × 25btc, roughly $30,000) so they would need to be bribed with $30k just to attempt it, with still no guarantee it would work.

yes $30k is a small bribe, but it's not like they would successfully get a large tx into the chain on the first attempt, so the $30k payments will mount up..
and there is nothing stopping the devs from adding other rules that:
a. reject blocks with fewer than 200 transactions.. to force miners to actually put multiple transactions into a block instead of creating near-empty blocks, which not only solves the doomsday problem but also helps ensure transactions are not held in the mempool for hours while blocks are being solved without txs
b. reject and not relay transactions where a single tx has more than 500k of data, to teach people how to create transactions properly and more lean
e.g. instead of:
single TX {
1originfunds
2originfunds
3originfunds
4originfunds
5originfunds
6originfunds
7originfunds
8originfunds
9originfunds
10originfunds  -> 1destination
}

use:
single TX {
1originfunds
2originfunds  -> 1destination
}
single TX {
3originfunds
4originfunds  -> 1destination
}
single TX {
5originfunds
6originfunds  -> 1destination
}
single TX {
7originfunds
8originfunds  -> 1destination
}
single TX {
9originfunds
10originfunds  -> 1destination
}
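the same splitting idea as a quick sketch (names are illustrative, mirroring the example above):

Code:
# sketch of splitting a many-input spend into lean 2-input transactions
def split_into_lean_txs(origin_funds, destination, inputs_per_tx=2):
    txs = []
    for i in range(0, len(origin_funds), inputs_per_tx):
        txs.append({"inputs": origin_funds[i:i + inputs_per_tx],
                    "outputs": [destination]})
    return txs

origins = [f"{k}originfunds" for k in range(1, 11)]
print(len(split_into_lean_txs(origins, "1destination")))  # 5 small txs

because legacy sighash cost grows with the square of a single transaction's size, five small txs cost far less to validate than one big one.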

oh and by the way, segwit doesn't prevent the doomsday 1 tx with 4000 dust inputs, because although segwit separates the signatures it still needs to check the signatures.. so it would still be 10 minutes of validation time for nefarious segwit miners or normal miners.. before even getting to the hashing part

in fact, segwit makes it easier at the block-hashing stage for such a doomsday block to become part of the chain faster, because after the 10-minute validation the actual block data won't be 1.99mb, it would be 1mb-1.2mb because it's not holding the signatures.. so the time to actually hash the block can result in a solution sooner than for non-segwit miners..
legendary
Activity: 2674
Merit: 3000
Terminated.
January 22, 2016, 08:36:07 AM
I totally accept that raising the blocksize limit is a workaround; I don't agree that it's a case of blind hope. Nobody knows the outcome for sure, but I think it would be difficult to argue against the most likely scenario being that nothing drastic will happen. I can see how the risk profile might not be acceptable for some though; I am a bit more risk tolerant, so that will colour my opinion. Though interestingly, I wonder how you profile the risk of full transactions - is it that the 'fee market' mitigates that? Are you pro RBF? Genuinely interested.
What do you mean by 'profile the risk of full transactions'? I don't have a particular opinion on RBF; I'm still in the 'grey zone' when it comes to that. However, I can see the use cases for it, and making it opt-in seems acceptable.

When you say infrastructure, do you mean hardware or protocol improvements? If the latter, I would 100% agree that other solutions need to be developed. My favourite is IBLT; I'm just a guy that does a bit of web dev, but the elegance of that solution really stands out to me. I like segwit (believe it or not, I'd dreamed up some similar solution in my head about partitioning off data, but anyone can *imagine* a hoverboard!). Some of franky's recent posts about the specific implementation are interesting though; I note that wuille's transaction fix also looked to depend on segwit.
I'm talking about improvements to the protocol. Almost all of franky's posts in regards to SegWit are at least partially wrong though, and some are completely wrong.

So yes, 2MB is not a great workaround, but imho it's a necessary one for now to buy some time. I think core's resistance is no longer about technical merit, but about having become entrenched in a position and not wanting to set a new precedent. That's just opinion though and I can't possibly prove it; nevertheless I think it's worth considering as a possibility.
2 MB is problematic because of the validation time ATM. Besides, trying to push a hard fork so quickly is even more dangerous. You're essentially cutting off everyone who does not update. Because of this, the update/consensus window needs to be greater (i.e. longer). SegWit gives enough capacity 'short term'.
legendary
Activity: 3430
Merit: 3080
January 22, 2016, 08:32:56 AM
Some of franky's recent posts about the specific implementation are interesting though; I note that wuille's transaction fix also looked to depend on segwit.



HI FRANKY YOU ARE VERY INTERESTING AND LEVEL HEADED

WHY THANK YOU BETT, YOU ARE WELL ROUNDED AND GREAT TO TALK WITH
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
January 22, 2016, 08:17:23 AM

To be honest, I think Roger Ver agrees with Classic so that when he is asked in interviews why he agrees with Classic, he can subtly drop an advertisement for his website. "By the way, Classic is censored on the official reddit, that's why we created Bitcoin.com..." It's not that subtle, actually.

Yes, the censorship sucks, and they will use that against us to gain credibility. I think censorship should not be done even if we are right; it is just a bad thing to do.

it was in no way censorship, you've bought into the double-speak of the sock-puppets and shills who were over-running the legitimate bitcoin forums before better moderation showed up (notice how they still jealously covet getting a voice on these "hated censored forums") ... it was totally out of control, there was no debate before, just a loud cacophony of propaganda ... once they got their "own" forums they had nothing to say and just started tearing strips off each other.

Wow, that's amusing. There are certainly dozens of sockpuppets on this forum, given the way you can easily buy 20-30 legendary accounts for a price.

So what, it costs 10-15 BTC for the enemy to sow discontent with legendary or hero accounts. Clever and cheap: 15 BTC is really nothing if you want to sabotage BTC.

A few legendary accounts leading the propaganda, and a few more member/jr. member sockpuppets to confirm their position, and done, you have changed public opinion.

Now I see why the moderators have to clean up this mess :D
legendary
Activity: 2576
Merit: 1087
January 22, 2016, 08:11:37 AM
This is 'not a fix' but a bad workaround that limits the system.
Imagine if you applied that logic to the blocksize! ;)
I can. The block size limit does indeed limit the system. Just raising it (bad workaround) and hoping everything will be alright would be a bad move. This is why the infrastructure around it needs to be improved, after which the block size limit can be safely increased.

Sorry, it was a cheap shot ;)

I totally accept that raising the blocksize limit is a workaround; I don't agree that it's a case of blind hope. Nobody knows the outcome for sure, but I think it would be difficult to argue against the most likely scenario being that nothing drastic will happen. I can see how the risk profile might not be acceptable for some though; I am a bit more risk tolerant, so that will colour my opinion. Though interestingly, I wonder how you profile the risk of full transactions - is it that the 'fee market' mitigates that? Are you pro RBF? Genuinely interested.

When you say infrastructure, do you mean hardware or protocol improvements? If the latter, I would 100% agree that other solutions need to be developed. My favourite is IBLT; I'm just a guy that does a bit of web dev, but the elegance of that solution really stands out to me. I like segwit (believe it or not, I'd dreamed up some similar solution in my head about partitioning off data, but anyone can *imagine* a hoverboard!). Some of franky's recent posts about the specific implementation are interesting though; I note that wuille's transaction fix also looked to depend on segwit.

If it's hardware, then I'm not sure how we can measure that - I think there will always be a natural resistance to growth based on the economics of running a full node and what resources that requires. I accept that doubling the blocksize limit now does push up against the current boundary, and as a result (the BitFury paper even models this) there would be some node attrition. Too much? I think this is subjective. Dr Back already expresses great concern about mining centralisation and feels that as a result we have to be super careful about node centralisation, and now more than ever I feel in accord with that.

So yes, 2MB is not a great workaround, but imho it's a necessary one for now to buy some time. I think core's resistance is no longer about technical merit, but about having become entrenched in a position and not wanting to set a new precedent. That's just opinion though and I can't possibly prove it; nevertheless I think it's worth considering as a possibility.

sr. member
Activity: 689
Merit: 269
January 22, 2016, 07:53:38 AM
They think they'll raise the limit and crash the system using blocks full of dust.

LOL
legendary
Activity: 2674
Merit: 3000
Terminated.
January 22, 2016, 07:51:20 AM
This is 'not a fix' but a bad workaround that limits the system.
Imagine if you applied that logic to the blocksize! ;)
I can. The block size limit does indeed limit the system. Just raising it (bad workaround) and hoping everything will be alright would be a bad move. This is why the infrastructure around it needs to be improved, after which the block size limit can be safely increased.
legendary
Activity: 2576
Merit: 1087
January 22, 2016, 07:48:29 AM
There is a commit by Gavin which is not a fix, but rather a per-transaction limitation (IRC). This is 'not a fix' but a bad workaround that limits the system.

Imagine if you applied that logic to the blocksize! ;)
sr. member
Activity: 689
Merit: 269
January 22, 2016, 07:43:48 AM
and we thought that the XT train was lulzy... ;D

the ride never ends ;D