
Topic: Really no talk of segwit / big blocks.. (Read 2129 times)

legendary
Activity: 4410
Merit: 4766
January 15, 2017, 08:45:28 AM
#43

So you want to turn the miners into some sort of pseudo-government? Really?

That has to be the dumbest idea I have ever seen. No, miners should not have too much power, and the nodes should have all the authority.

When you create this quasi-government structure, you will end up just like all other "limited governments".


What will it then take for the miners to collude and start increasing the block reward for themselves, or do all sorts of weird hard forks?

Nope. You must keep it decentralized; the miners are only clients of the network, not its governors.


The nodes must always have the final authority. Otherwise you have just created a new form of government, and we all know here how governments behave.

now you're twisting things again.
the pools are not the government; the pools are the ones that collate the data for the nodes to validate, to confirm or reject. the pools need to know what the acceptable format is and flag their acceptance of that format.

you're twisting consensus. please learn consensus.

yes, consensus of 5000+ nodes (and yes, pools are nodes too) decides what format is acceptable.
but it's the nodes that decide the general agreement on what format is acceptable.. then the pools flag to agree on producing that format, and the nodes then validate it, accepting/rejecting what the pools produce. of course if 5000+ want one thing the pools will follow, or risk losing their income while wasting electricity for nothing. but the pools need to wave their hand in the air to show when they will begin, so that the nodes are ready.

again, think about consensus.

kind of strange how you desired 12 devs and 90 interns to be the government, but then twist consensus into myths, ifs, and maybes of non-dev collusion.
please put your rational hat back on. you had a good 24 hours of thinking rationally, don't go putting your fanboy hat on again.

moving from 1mb to the scenario of 1.3mb over a few days is a natural, risk-averting, logical thing. jumping from 1mb straight to 1.3mb in minutes is not logical, rational or risk-averting.
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
January 15, 2017, 06:37:05 AM
#42

my personal number that i think is safe differs from blockstream's 95%..
but i only mentioned 95% because blockstream would send in the usual intern centralist bandwagon if i said anything different. so to avoid argument i just used their numbers so they can't argue..

anyway,
when user nodes set their settings, the consensus is measured and the pools are the ones that decide when to push out blocks with the least risk.
yes, pools choose when, as it's in their interest not to lose $12k in 10 minutes.

logically, even if there is a clear majority (pick any majority number you like).. pools will then do their own flagging of intent.

EG imagine the nodes' new limit will be 1.3mb by large majority consensus.. then the pools' flag also has majority consensus to say yes too..
but when activating it.. they are going to be smart.

first block after activation... 1.001mb, then slowly get to the 1.3mb over time.
they are not irrationally going to push out a 1.3mb block the very next block after activation. they will test the water.
it might take 2 days and 2 hours of 0.001mb increments per block (2 days * 144 blocks per day = 288, + 12 blocks in 2 hours = 300 adjustments) to see the orphan risk as it climbs to the new 1.3mb limit.

that's the logical and safe way.

So you want to turn the miners into some sort of pseudo-government? Really?

That has to be the dumbest idea I have ever seen. No, miners should not have too much power, and the nodes should have all the authority.

When you create this quasi-government structure, you will end up just like all other "limited governments".


What will it then take for the miners to collude and start increasing the block reward for themselves, or do all sorts of weird hard forks?

Nope. You must keep it decentralized; the miners are only clients of the network, not its governors.


The nodes must always have the final authority. Otherwise you have just created a new form of government, and we all know here how governments behave.
legendary
Activity: 4410
Merit: 4766
January 14, 2017, 10:08:09 PM
#41
We really need satoshi now because if he somehow shows up and tells us what he thinks we should do, then we might actually agree on one thing and do it. I don't think anyone would refuse him, if only he could speak out somehow.
It is very unlikely that he/she/they will appear now to say what the best procedure is,

It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.
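
A minimal sketch, in Python rather than the C-style pseudocode quoted above, of how a height-gated limit works; the 115000 activation height comes from the quote, while the byte values are purely illustrative assumptions:
Code:
# hypothetical constants for illustration only
OLD_LIMIT_BYTES = 1_000_000
LARGER_LIMIT_BYTES = 2_000_000
ACTIVATION_HEIGHT = 115_000

def max_block_size(block_height: int) -> int:
    """Return the limit that applies at a given block height."""
    if block_height > ACTIVATION_HEIGHT:
        return LARGER_LIMIT_BYTES
    return OLD_LIMIT_BYTES

# a node validating a block past the cutoff allows the larger limit
assert max_block_size(115_001) == LARGER_LIMIT_BYTES
assert max_block_size(115_000) == OLD_LIMIT_BYTES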
hero member
Activity: 490
Merit: 500
January 14, 2017, 09:57:29 PM
#40
We really need satoshi now because if he somehow shows up and tells us what he thinks we should do, then we might actually agree on one thing and do it. I don't think anyone would refuse him, if only he could speak out somehow.
It is very unlikely that he/she/they will appear now to say what the best procedure is, but there are many competent developers working on it, and from what I know, it seems that segwit is not the only thing they have proposed... There is also another solution called FlexTrans, but I'm not sure if it would be a better solution.
legendary
Activity: 4410
Merit: 4766
January 14, 2017, 09:57:17 PM
#39
Wouldn't it also still be doubling what is added to the blockchain in new blocks? Instead of 2mb every 10 min you have 1mb every 5 min, for a total, still, of 2mb every 10 minutes, and still be adding the same, if not more, data transmission between nodes, which is the bandwidth concern..

the 10min rule is an average resulting from an indirect rule.... not a direct rule in itself

some blocks can take an hour+, some can take <2 minutes.
changing the difficulty and rewards and halvings and everything else still does not guarantee a 5min expectation.. only an average

this may still be <2min to 1hour+, just with a higher majority happening in a shorter period.
while screwing with so many bitcoin features that should not be touched.

pools already have issues filling blocks with new tx's if a block is solved in 2 minutes, causing empty blocks.. while they're still checking the last block. so this 'empty block' result will occur more often.
thus it's not actually helping to get more transactions confirmed long-term. in fact it can make fewer transactions get confirmed.
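
A rough Python sketch of why the 10-minute figure is only an average: with steady hashrate, block intervals are approximately exponentially distributed, so sub-2-minute and hour-plus blocks both keep happening whatever the target is (the numbers below are simulated, not measured from the chain):
Code:
import random

random.seed(1)
TARGET_MINUTES = 10.0
intervals = [random.expovariate(1.0 / TARGET_MINUTES) for _ in range(100_000)]

print("average interval:", sum(intervals) / len(intervals))                   # ~10 minutes
print("under 2 minutes :", sum(i < 2 for i in intervals) / len(intervals))    # roughly 18%
print("over 60 minutes :", sum(i > 60 for i in intervals) / len(intervals))   # roughly 0.2%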
legendary
Activity: 2296
Merit: 2262
BTC or BUST
January 14, 2017, 09:47:31 PM
#38
If they cut the transaction time from 10 minutes to 5 minutes and kept the blocksize the same for the five-minute transactions, that would, effectively, double the total throughput.

(facepalm)

i understand the human utility of the desire for faster confirms... but
to do such has many ramifications and complexity, having to change many things that can affect and risk many things.
EG it messes with the coin creation metric, difficulty rate, and also difficulty retarget time, and many other things.

to implement it is not only far more tweaking of bitcoin away from bitcoin's main rules that should not be changed; to implement and activate it
is also more of a hard, controversial fork that could more easily lead to an intentional split.

Wouldn't it also still be doubling what is added to the blockchain in new blocks? Instead of 2mb every 10 min you have 1mb every 5 min, for a total, still, of 2mb every 10 minutes, and still be adding the same, if not more, data transmission between nodes, which is the bandwidth concern..
legendary
Activity: 4410
Merit: 4766
January 14, 2017, 09:39:56 PM
#37
If they cut the transaction time from 10 minutes to 5 minutes and kept the blocksize the same for the five-minute transactions, that would, effectively, double the total throughput.

(facepalm)

i understand the human utility of the desire for faster confirms... but
to do such has many ramifications and complexity, having to change many things that can affect and risk many other things.
EG it messes with the coin creation metric, difficulty rate, and also difficulty retarget time, and many other things.

for instance, the difficulty retarget (2016 blocks in 2 weeks) would become 2016 blocks a week
for instance, the block reward halving (210,000 blocks, every 4 years) would become every 2 years

so then the decision becomes:
let the reward halvings happen every 2 years, meaning the 21mill cap stays but the cap is reached in ~66yrs instead of ~132yrs
or move the goalposts and end up having more than 21mill coins

to implement it is not only far more tweaking of bitcoin away from bitcoin's main rules that should not be changed; to implement and activate it
is also more of a hard, controversial fork that could more easily lead to an intentional split.
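
A back-of-envelope check of those schedule changes, assuming the block-count rules stay fixed (2016-block retarget, 210,000-block halving, ~33 halvings until the subsidy rounds to zero) and only the target block time is halved:
Code:
RETARGET_BLOCKS = 2016
HALVING_BLOCKS = 210_000

for minutes_per_block in (10, 5):
    retarget_days = RETARGET_BLOCKS * minutes_per_block / 60 / 24
    halving_years = HALVING_BLOCKS * minutes_per_block / 60 / 24 / 365.25
    emission_years = 33 * halving_years   # ~33 halvings before the subsidy hits zero
    print(f"{minutes_per_block} min blocks: retarget every {retarget_days:.0f} days, "
          f"halving every {halving_years:.1f} years, "
          f"emission finished in roughly {emission_years:.0f} years")
# 10 min: retarget every 14 days, halving every ~4 years, ~132 years of emission
# 5 min : retarget every 7 days, halving every ~2 years, ~66 years of emission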
full member
Activity: 178
Merit: 100
January 14, 2017, 09:28:14 PM
#36
If they cut the transaction time from 10 minutes to 5 minutes and kept the blocksize the same for the five-minute transactions, that would, effectively, double the total throughput.

It would also achieve confirmations in half the time.


Jump ahead ten years and ask if a ten-minute transaction time will still work.

Why not, then, have options at every halving?

We already have scheduled halvings every four years, so why not just have soft forks added at the same time to select transaction time, blocksize, or combinations of the two. The "halving" already comes with a certain degree of consternation, but it is in the open and people prepare. Adding decisions about blocksize at the same time would seem to be a trivial consideration; plus, many miners would be using the halving as an exit opportunity once it makes their equipment no longer competitive.
legendary
Activity: 4410
Merit: 4766
January 14, 2017, 08:14:31 PM
#35
You also made up the 5% arbitrarily.

seriously??

you've been fanboying core for a year and you don't know where the 95% i mentioned originated..
it was core that set the bar so high.

but have a nice day

blah

my personal number that i think is safe differs from blockstream's 95%..
but i only mentioned 95% because blockstream would send in the usual intern centralist bandwagon if i said anything different. so to avoid argument i just used their numbers so they can't argue..

anyway,
when user nodes set their settings, the consensus is measured and the pools are the ones that decide when to push out blocks with the least risk.
yes, pools choose when, as it's in their interest not to lose $12k in 10 minutes.

logically, even if there is a clear majority (pick any majority number you like).. pools will then do their own flagging of intent.

EG imagine the nodes' new limit will be 1.3mb by large majority consensus.. then the pools' flag also has majority consensus to say yes too..
but when activating it.. they are going to be smart.

first block after activation... 1.001mb, then slowly get to the 1.3mb over time.
they are not irrationally going to push out a 1.3mb block the very next block after activation. they will test the water.
it might take 2 days and 2 hours of 0.001mb increments per block (2 days * 144 blocks per day = 288, + 12 blocks in 2 hours = 300 adjustments) to see the orphan risk as it climbs to the new 1.3mb limit.

that's the logical and safe way.
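
A small sketch of that ramp arithmetic, using the assumed numbers from the post (0.001mb steps from 1.001mb up to a flagged 1.3mb limit, at an average of 144 blocks a day):
Code:
START_MB = 1.001
TARGET_MB = 1.3
STEP_MB = 0.001
BLOCKS_PER_DAY = 144   # 10-minute average

steps = round((TARGET_MB - START_MB) / STEP_MB) + 1   # block-by-block adjustments
days = steps / BLOCKS_PER_DAY

print(steps, "adjustments, taking about", round(days, 2), "days")
# -> 300 adjustments, taking about 2.08 days (2 days and 2 hours)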
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
January 14, 2017, 07:38:49 PM
#34
You also made up the 5% arbitrarily.

seriously??

you've been fanboying core for a year and you don't know where the 95% i mentioned originated..
it was core that set the bar so high.

but have a nice day

Yes, but do you really want to dilute the concept so much? I would personally set it to 90%, which is almost as close. But then it could just as well be 89% and so on; you can always set the bar lower and lower, and then you risk fragmenting the consensus system.

Isn't this how democracy got corrupted? It used to be 50%+1 of votes, but since only about 15-20% of the population votes, it's actually only 7.5%+1 of the population that is needed to win an election.

So you slowly turn democracy into authoritarian tyranny.


I think it should definitely not be below 80%, but you could debate what number is sufficient between 80 and 99%.


Maybe the standard deviation needs to be calculated to see how probable a consensus is at different percentage levels. Obviously the lower the bar, the easier, but then you also give up the security of the system.


And since there are fewer and fewer nodes, the bar should be really high.



You know, 1000 nodes with 45% consensus is exactly the same as 500 nodes with 90% consensus: 450 agreeing nodes either way.



MAYBE THE CONSENSUS THRESHOLD SHOULD BE PEGGED TO HOW MANY NODES THERE ARE ACTIVE?

And then any subjectivity is removed from the system.
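
A purely hypothetical sketch of what "pegging the threshold to the active node count" could look like, keeping the absolute number of agreeing nodes constant; the 450-node constant simply mirrors the 1000*45% = 500*90% example above and is not a figure anyone in the thread proposed:
Code:
REQUIRED_AGREEING_NODES = 450   # illustrative assumption

def required_fraction(active_nodes: int) -> float:
    """Fraction of active nodes that must agree, capped at 100%."""
    return min(1.0, REQUIRED_AGREEING_NODES / active_nodes)

for n in (1000, 500, 450, 300):
    print(n, "active nodes ->", f"{required_fraction(n):.0%} threshold")
# 1000 -> 45%, 500 -> 90%, 450 and below -> 100%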
hero member
Activity: 588
Merit: 541
January 14, 2017, 07:22:55 PM
#33
We really need satoshi now because if he somehow shows up and tells us what he thinks we should do, then we might actually agree on one thing and do it. I don't think anyone would refuse him, if only he could speak out somehow.
legendary
Activity: 4410
Merit: 4766
January 14, 2017, 07:10:34 PM
#32
You also made up the 5% arbitrarily.

seriously??

you've been fanboying core for a year and you don't know where the 95% i mentioned originated..
it was core that set the bar so high.

but have a nice day
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
January 14, 2017, 06:26:36 PM
#31

also where you said 1 node can hold it up: i said 5%, so based on 5000 nodes, more than 250 nodes would need to hold at 1mb to hold it up, not just one node.

I think you have no clue about basic math. Do you know what a Percentile is?

https://en.wikipedia.org/wiki/Percentile

It is exactly what you are talking about, but a more formal calculation of it.


You also made up the 5% arbitrarily. But a percentile is easier to calculate and sync across the network. The threshold of course can be debated.

But yes, let's say we make a 10% threshold just for the sake of it. That is 90% consensus. I think that is pretty solid approval for any feature. You will never get 100% either way.


But also, in the example above we have a small sample size. Over a large sample size it's more manageable.

So if you have 5000 nodes, and let's say they all randomly choose a limit between 1 MB and 10 MB.




Ok, so if the threshold is 10%, then we take the 10th percentile value, and we will have 90% consensus: 1.887393 MB in my experiment.

Ok, so you have 90% consensus between nodes; 10% will be forced to accept the new rules, and forced to upgrade their hardware if they can't run a node higher than 1 MB.


It's that simple, easy to calculate.
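
A rough reproduction of that experiment, assuming the limits are drawn uniformly at random between 1 MB and 10 MB; the exact 1.887393 figure depends on the particular draw, but the 10th percentile of a uniform 1-10 spread sits near 1.9 MB:
Code:
import random

random.seed(42)
limits = [random.uniform(1.0, 10.0) for _ in range(5000)]

def percentile(values, pct):
    """Percentile with linear interpolation between ranks."""
    ordered = sorted(values)
    pos = (len(ordered) - 1) * pct / 100.0
    lo, hi = int(pos), min(int(pos) + 1, len(ordered) - 1)
    return ordered[lo] + (ordered[hi] - ordered[lo]) * (pos - lo)

cutoff = percentile(limits, 10)   # 10th percentile = the "90% consensus" value
print(f"10th percentile limit: {cutoff:.3f} MB")   # close to 1.9 MB for a uniform draw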






people can have a PhD in anything.. bitcoin physics and theoretics is not covered in their syllabus. also some have had PhDs since before the millennium, so don't expect the tech they learned about to be the same tech available today.. IT PhDs get outdated faster than most people's wives.

don't blindly trust someone because of qualifications.. without understanding what that qualification actually taught or didn't teach.
do you even know when these PhD guys got their qualifications and what technologies were available at the time?

That is true, however they have been actively working since. It's not like you get your PhD and then let it rust on your shelf. You probably get hired by IT firms to do work for them, or work in academia.



don't blindly trust someone because of qualifications.. without understanding what that qualification actually taught or didn't teach.
do you even know when these PhD guys got their qualifications and what technologies were available at the time?

Me neither; a piece of paper doesn't really prove anything, and somebody who has no piece of paper might be an equally good programmer.

However, the Core team is so far still the most trustworthy and professional.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
January 14, 2017, 05:48:47 PM
#30
does litecoin really need segwit? they have no block limit problem; their number of transactions per day is very small, not comparable with bitcoin

As this seems not to have been answered:

Litecoin wants Segwit because it allows atomic cross-chain trading with a simple mechanism very similar to the one that would be used in the Lightning Network (see: https://en.bitcoin.it/wiki/Atomic_cross-chain_trading). Atomic cross-chain trading would be a major advantage for all altcoins, because there would be a simple and trustless way to move value from one blockchain to another. So to buy altcoins with BTC you wouldn't have to trust an altcoin exchange (remember Cryptsy?).

Obviously they also want it to make it appear that they are technically at the forefront of the cryptocurrency movement, and to have an excuse for a pump.
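
A toy Python illustration of the hash-lock idea behind atomic cross-chain trading: both chains lock funds against the same hash, so redeeming on one chain reveals the secret needed to redeem on the other. This omits the refund timelocks and is not actual Bitcoin/Litecoin script, just the core concept:
Code:
import hashlib
import os

secret = os.urandom(32)                      # chosen by the initiating party
lock_hash = hashlib.sha256(secret).digest()  # published and used in the contracts on both chains

def can_redeem(candidate_preimage: bytes) -> bool:
    """A contract on either chain releases funds only for the matching preimage."""
    return hashlib.sha256(candidate_preimage).digest() == lock_hash

# when the initiator redeems on chain A, the secret becomes public,
# letting the counterparty redeem on chain B with the same preimage
assert can_redeem(secret)
assert not can_redeem(os.urandom(32))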
legendary
Activity: 4410
Merit: 4766
January 14, 2017, 03:19:46 PM
#29
for instance, if there were more results than just 10:
1.2       2        1.2        5        17        3        6        8        1.3        1.6
1          2        1.2        5        17        3        6        8        1.3        1.6

where 95% wanted a 1.2 minimum but there was 5% holding back.
then pools would weigh up the need for more buffer vs orphan risk (the 5% lagger) and then decide to push on for more and leave the lagger behind, having to tweak their setting up to be part of the network or be left unsyncing (the standard 95% consensus even core/blockstream think is acceptable)
No, if it's the least common denominator then it takes only 1 person to fuck it up.
If 3999 nodes agree on 2mb, but 1 node doesn't, then it's still 1mb. And if you add some weight to it, then it's too arbitrary.

Median is a good choice.
In this example of yours:
1.2       2        1.2        5        17        3        6        8        1.3        1.6
The median is 2.5

Which means that it's the consensus of 60% of the nodes.
Or you can use percentiles.
Where the 50th percentile is the median, but if you think the median is too high, then use the 40th percentile: 1.84 MB, or the 25th percentile: 1.375 MB.
The 25th percentile is literally a consensus of 75% in this case; rounded up due to the small sample size, it's 80%.

using the median and then picking a random size after that is foolish
imagine your numbers..
randomly saying that 25% = 1.375mb.. no.. 1.2    1.2    1.3 are all excluded, meaning it's a 30% node drop/orphan risk based on the 1.375mb figure
randomly saying that 40% = 1.84mb..  yes.. 1.2   1.2    1.3  1.6 are all excluded, meaning it's a 40% node drop/orphan risk based on the 1.84mb figure

where and why you would choose a random number of 1.375 or 1.84 is another variable of debate.. after all, a 2mb node (the next number after 1.6mb) will be wondering why halt at 1.84mb if no one is saying they can't cope with 1.85-1.99.

there is too much iffiness and orphan risk with medians.
especially the way you played around afterwards to get your magic numbers.

now i think about it, it's not "least common figure" i was thinking of. it's: sort the amounts into ascending order.. take off 5% of results from the smallest end.. and then whatever the lowest remaining number is, that is the acceptable size (the lowest of the remaining 95% is the new buffer size)
EG

1     1.2     1.2     1.2     1.3     1.3     1.6     1.6     2     2     3      3      5     5      6     6      8      8      17     17

also where you said 1 node can hold it up: i said 5%, so based on 5000 nodes, more than 250 nodes would need to hold at 1mb to hold it up, not just one node.
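
A short sketch of that rule (sort the flagged limits, drop the lowest 5% of results, take the smallest remaining value), run against the 20-value example row above:
Code:
def consensus_buffer(flagged_limits, drop_fraction=0.05):
    ordered = sorted(flagged_limits)
    drop = int(len(ordered) * drop_fraction)   # e.g. 250 of 5000 nodes
    return ordered[drop]                       # lowest value of the remaining 95%

example = [1, 1.2, 1.2, 1.2, 1.3, 1.3, 1.6, 1.6, 2, 2, 3, 3, 5, 5, 6, 6, 8, 8, 17, 17]
print(consensus_buffer(example))   # -> 1.2 (the lone 1mb holdout is dropped)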

Quote
the benefits of segwit are exaggerated. the most foolish thing is letting someone make a TX with 20,000 sigops and then cry that tx's using many sigops take longer to process.. the logical solution is to restrict sigops so that bloated tx's don't take up too much blockspace, which also cuts down on processing time.

Yeah, but we have PhDs working on this. So I kind of trust them more, to be better experts on this. The miners can flip-flop, but the devs are experts in IT or network engineering.

people can have a PhD in anything.. bitcoin physics and theoretics is not covered in their syllabus. also some have had PhDs since before the millennium, so don't expect the tech they learned about to be the same tech available today.. IT PhDs get outdated faster than most people's wives.

don't blindly trust someone because of qualifications.. without understanding what that qualification actually taught or didn't teach.
do you even know when these PhD guys got their qualifications and what technologies were available at the time?
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
January 14, 2017, 12:26:13 PM
#28

median.. hell no..

for instance, if it was the median, 4 out of 10 would be cut off and not syncing.

what it would be is as it already is.. the least common denominator
meaning, using your numbers, it would stick to 1.. (like today we already have a few nodes at 2->16)

but imagine people slightly adjusted numbers as time went on
EG
1.2       2        1.2        5        17        3        6        8        1.3        1.6
pools will now make blocks at 1.2
then
EG
1.5       2        1.5        5        17        3        6        8        1.5        1.6
pools will now make blocks at 1.5
then
EG
1.7       2        1.7        5        17        3        6        8        1.7        1.7

and each time.. EVERYONE is happy
or for instance if there were more results than just 10:
1.2       2        1.2        5        17        3        6        8        1.3        1.6
1          2        1.2        5        17        3        6        8        1.3        1.6

where 95% wanted a 1.2 minimum but there was 5% holding back.
then pools would weigh up the need for more buffer vs orphan risk (the 5% lagger) and then decide to push on for more and leave the lagger behind, having to tweak their setting up to be part of the network or be left unsyncing (the standard 95% consensus even core/blockstream think is acceptable)


No, if it's the least common denominator then it takes only 1 person to fuck it up.

If 3999 nodes agree on 2mb, but 1 node doesn't, then it's still 1mb. And if you add some weight to it, then it's too arbitrary.

Median is a good choice.

In this example of yours:

1.2       2        1.2        5        17        3        6        8        1.3        1.6

The median is 2.5

Which means that it's the consensus of 60% of the nodes.


Or you can use percentiles.

Where the 50th percentile is the median, but if you think the median is too high, then use the 40th percentile: 1.84 MB, or the 25th percentile: 1.375 MB.


The 25th percentile is literally a consensus of 75% in this case; rounded up due to the small sample size, it's 80%.
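
For what it's worth, those figures check out under the usual linear-interpolation percentile definition (numpy's default):
Code:
import numpy as np

limits = [1.2, 2, 1.2, 5, 17, 3, 6, 8, 1.3, 1.6]

print(np.median(limits))           # 2.5
print(np.percentile(limits, 40))   # ~1.84
print(np.percentile(limits, 25))   # ~1.375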


Quote

the benefits of segwit are exaggerated. the most foolish thing is letting someone make a TX with 20,000 sigops and then cry that tx's using many sigops take longer to process.. the logical solution is to restrict sigops so that bloated tx's don't take up too much blockspace, which also cuts down on processing time.


Yeah, but we have PhDs working on this. So I kind of trust them more, to be better experts on this. The miners can flip-flop, but the devs are experts in IT or network engineering.







legendary
Activity: 4410
Merit: 4766
January 14, 2017, 10:58:51 AM
#27
I have to say this, I'm starting to agree with you more and more.

For example, every node sets the maximum block size they can handle, and then the lowest common denominator, or the median, will be used as the block size.

So if there are 10 nodes for example and they set respectively: 1   2   1   5   17   3   6   8   1   1

Then the median, a 2.5 mb block, can be set, which will satisfy most nodes.
median.. hell no..

for instance, if it was the median, 4 out of 10 would be cut off and not syncing.

what it would be is as it already is.. the least common denominator
meaning, using your numbers, it would stick to 1.. (like today we already have a few nodes at 2->16)

but imagine people slightly adjusted numbers as time went on
EG
1.2       2        1.2        5        17        3        6        8        1.3        1.6
pools will now make blocks at 1.2
then
EG
1.5       2        1.5        5        17        3        6        8        1.5        1.6
pools will now make blocks at 1.5
then
EG
1.7       2        1.7        5        17        3        6        8        1.7        1.7
and each time.. EVERYONE is happy


or for instance if there were more results than just 10 (eg over 5000):
1.2       2        1.2        5        17        3        6        8        1.3        1.6
1          2        1.2        5        17        3        6        8        1.3        1.6

where 95% wanted a 1.2 minimum but there was 5% holding back at 1.
then pools would weigh up the need for more buffer vs orphan risk (the 5% lagger) and then decide to push on for more and leave the lagger behind, having to tweak their setting up to be part of the network or be left unsyncing (the standard 95% consensus even core/blockstream think is acceptable)

However, Segwit should still be implemented. Segwit has other features that are important. And after that passes, we could try to implement this one.

the benefits of segwit are exaggerated. the most foolish thing is letting someone make a TX with 20,000 sigops and then cry that tx's using many sigops take longer to process.. the logical solution is to restrict sigops so that bloated tx's don't take up too much blockspace and don't use as many sigops, which also cuts down on processing time.
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
January 14, 2017, 08:04:53 AM
#26
I suppose the network bandwidth is the weakest link here.

In that case, what if the block size increase were pegged to half of the global average bandwidth increase?

So if bandwidth increases by 5% yearly, then we can increase the block size by 2.5%. How about that?

what if i told you that using dynamic rules AND consensus.. nodes only flag a desire to increase when they can handle it, and it only increases if the majority can handle it. they all set their own max buffer flag, and blocksizes of larger amounts only grow to the scale the majority can happily cope with.

meaning it will not surpass what people can cope with, because if larger sizes can't be coped with by nodes, they won't flag a desire for them..

we don't need devs to spoon-feed what they feel/desire, when the network itself can do it.

devs have already said 8mb is safe but they prefer their 4mb weight (compared to their old fake doomsday rhetoric that 2mb was bad)
so there is no reason to keep the base block at 1mb

I have to say this, I'm starting to agree with you more and more.

For example, every node sets the maximum block size they can handle, and then the lowest common denominator, or the median, will be used as the block size.

So if there are 10 nodes for example and they set respectively: 1   2   1   5   17   3   6   8   1   1

Then the median, a 2.5 mb block, can be set, which will satisfy most nodes.


However, Segwit should still be implemented. Segwit has other features that are important. And after that passes, we could try to implement this one.

legendary
Activity: 4410
Merit: 4766
January 14, 2017, 07:37:04 AM
#25
I suppose the network bandwidth is the weakest link here.

In that case, what if the block size increase were pegged to half of the global average bandwidth increase?

So if bandwidth increases by 5% yearly, then we can increase the block size by 2.5%. How about that?

what if i told you that using dynamic rules AND consensus.. nodes only flag a desire to increase when they can handle it, and it only increases if the majority can handle it. they all set their own max buffer flag, and blocksizes of larger amounts only grow to the scale the majority can happily cope with.

meaning it will not surpass what people can cope with, because if larger sizes can't be coped with by nodes, they won't flag a desire for them..

we don't need devs to spoon-feed what they feel/desire, when the network itself can do it.

devs have already said 8mb is safe but they prefer their 4mb weight (compared to their old fake doomsday rhetoric that 2mb was bad)
so there is no reason to keep the base block at 1mb
hero member
Activity: 854
Merit: 1009
JAYCE DESIGNS - http://bit.ly/1tmgIwK
January 14, 2017, 06:50:11 AM
#24
There is still quite a bit of discussion on segwit and block size increase hard forks. None of it is being instantly deleted. Those threads have just not been posted as much because there are already threads discussing segwit and other proposals. All of the arguments for and against all of the proposals have basically been discussed to death already.

Any thread on this.. or is it still an instant-delete subject... I used to be very anti big block but after buying a 4tb hard drive for $120... I am changing my tune.. I can't see how the blocksize scaling at half of Moore's law causes any issues..
There's more to it than just disk space. You also have to consider network bandwidth usage and processing power for processing blocks.

Also, segwit is a block size increase, as the data per block being sent over the wire will be larger than it is now.

I suppose the network bandwidth is the weakest link here.

In that case, what if the block size increase were pegged to half of the global average bandwidth increase?


So if bandwidth increases by 5% yearly, then we can increase the block size by 2.5%. How about that?
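
A quick compounding sketch of that suggestion, assuming a constant 5% yearly bandwidth growth (and therefore 2.5% yearly block size growth) from a 1 MB base:
Code:
base_block_mb = 1.0
bandwidth_growth = 0.05
blocksize_growth = bandwidth_growth / 2

for year in (5, 10, 20):
    size = base_block_mb * (1 + blocksize_growth) ** year
    print(f"after {year} years: {size:.2f} MB")
# after 5 years: 1.13 MB, after 10 years: 1.28 MB, after 20 years: 1.64 MB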