Topic: Analysis and list of top big blocks shills (XT #REKT ignorers) - page 9.

legendary
Activity: 1988
Merit: 1012
Beyond Imagination
It also shows a strong philosophy when it comes to future scalability: simple solutions tend to survive long term. It is very easy to expand the capacity of a simple system by simply adding more of it, because you don't change the behavior of the existing system, and all the systems that depend on it can keep working as usual
It doesn't change the behavior of the system? This can only be said of segwit until everyone is using it; after that, it is the system. However, I would argue that this 'simple' solution has a broader effect than many claim. Does it not push away some existing nodes and potential new nodes due to the increased requirements?

The thinking behind RAID 0 is to split data between two disks so that it can transfer in parallel, thus reaching double the throughput; that is a change of behavior. But as a means of reaching higher throughput, it was short-lived. We all know that the rise of SSDs quickly replaced most RAID 0 setups, since an SSD works exactly like a single hard drive but provides magnitudes higher throughput
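As a toy illustration of the striping idea (a minimal sketch, not how a real RAID controller works; the chunk size and disk count are arbitrary assumptions):

Code:
# Toy RAID 0 striping: split a byte stream across two "disks" in
# fixed-size chunks so both can be read or written in parallel.
CHUNK = 4  # stripe unit in bytes (arbitrary, for illustration)

def stripe(data: bytes, disks: int = 2):
    """Distribute consecutive chunks round-robin across the disks."""
    out = [bytearray() for _ in range(disks)]
    for i in range(0, len(data), CHUNK):
        out[(i // CHUNK) % disks] += data[i:i + CHUNK]
    return out

def unstripe(parts, total_len: int) -> bytes:
    """Reassemble the original stream by reading chunks round-robin."""
    data, idx, d = bytearray(), [0] * len(parts), 0
    while len(data) < total_len:
        data += parts[d][idx[d]:idx[d] + CHUNK]
        idx[d] += CHUNK
        d = (d + 1) % len(parts)
    return bytes(data)

parts = stripe(b"ABCDEFGHIJKLMNOP")
print(parts)                 # disk 0: ABCDIJKL, disk 1: EFGHMNOP
print(unstripe(parts, 16))   # b'ABCDEFGHIJKLMNOP'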

Similarly, hard drives will move to an SSD-based route and reach PB-level storage with ease; cost is not a problem once they are mass produced. As for the CPU limitation, special ASICs can be developed to accelerate the verification process, so that a 1 GB block can be processed in seconds

The only thing left is network bandwidth, which is the current bottleneck. Segwit will not help in that regard, since it brings more data for the same number of transactions. But we are far from reaching the theoretical limits of fiber-optic communication yet; it is already at Pb/s level in the lab. The new trend of 4K content streaming, especially 4K live broadcast, requires an enormous amount of bandwidth. If the average household wants to watch 4K live broadcasts, they must get 1 Gbps-level optical fiber, and I think that will happen within 10 years at the latest
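For scale, a back-of-the-envelope on what shipping 1 GB blocks would demand of the network (a rough sketch; the link speeds are illustrative assumptions):

Code:
# Time to ship one 1 GB block at various link speeds, ignoring
# protocol overhead and block-propagation tricks.
BLOCK_BITS = 1e9 * 8  # 1 GB in bits

for name, bps in [("10 Mbps DSL", 10e6),
                  ("100 Mbps cable", 100e6),
                  ("1 Gbps fiber", 1e9)]:
    print(f"{name:15s}: {BLOCK_BITS / bps:6.0f} s per block")
# 10 Mbps DSL:  800 s -- longer than the 600 s block interval
# 100 Mbps cable: 80 s; 1 Gbps fiber: 8 s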

All these things seem to be able to scale indefinitely because they simply add more to their existing capacity, but the way they interoperate never changes, so component manufacturers can focus on increasing capacity instead of worrying about losing backward compatibility
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
... An eight terabyte hard drive will be able to store the entire Bitcoin blockchain for the next six years easily, and that is with a two megabyte blocksize limit.

http://www.amazon.com/Seagate-Archive-Internal-Hard-Drive/dp/B00XS423SC
Assuming Bitcoin adoption is poor & we don't need to bump blocksize again. 2MB gives us what, ~6tps, and we're ~2 - 3tps now?
Yeah, it essentially doubles throughput. This will make a huge practical difference in terms of adoption over the next few years.

Oh, I agree that it's better than nothing, but in a futile sort of way. I mean, if you don't count on Bitcoin's userbase >doubling in 6 years...

The Tooministas are confronting the same problem the Gavinistas did, which is that multiplying a tiny number such as 3tps by another tiny (i.e. sane) number such as 2 or 4 or even 8 still only produces another tiny number such as 6tps, 12tps, or 24tps.

You can't get to Visa tps from here.  Our only realistic path to Visa is orthogonal scaling, where each tx does the maximum economic work possible.
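The arithmetic is easy to check (a quick sketch; the ~500-byte average transaction size and the oft-cited ~2,000 tps Visa average are assumptions, not measurements):

Code:
# tps = block size / average tx size / block interval
AVG_TX_BYTES = 500    # assumed average transaction size
BLOCK_INTERVAL = 600  # seconds per block, on average
VISA_AVG_TPS = 2000   # commonly cited ballpark, not a measurement

for mb in (1, 2, 4, 8):
    tps = mb * 1e6 / AVG_TX_BYTES / BLOCK_INTERVAL
    print(f"{mb} MB blocks: {tps:4.1f} tps = {tps / VISA_AVG_TPS:.2%} of Visa")
# 1 MB:  3.3 tps (0.17%)   2 MB:  6.7 tps   8 MB: 26.7 tps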

Core and Blockstream are carefully and meticulously preparing Bitcoin for such scaling, via sidechains, Lightning, CLTV, RBF, etc.

Meanwhile Tooministas fret and moan about "Are We There Yet?" and "But Daddy I Want It Now."  As if a "not much testing needed" 2MB jump is not worse than useless...   Roll Eyes
legendary
Activity: 1526
Merit: 1013
Make Bitcoin glow with ENIAC
To be clear, I do not think we should restrict the throughput of the network so that people can continue to run full nodes on Raspberry Pis. I do not think this was ever the intention for Bitcoin either. It is true that there is a balancing act here, and that decentralization is affected in different ways. However, everything considered, I think that increasing the blocksize will be better for decentralization overall compared to leaving it at one megabyte.
You guys obviously have no clue as to what an 'example' is. I couldn't care less about the Raspberry Pi itself, but I know a lot of people using those to either mine or run nodes.

Sure. I have literally hundreds of Raspberry Pis and BeagleBoards in my mining operation. But do you know what they are for, and why they're completely irrelevant to this debate? Or are you just writing stuff to shut us up?

...Who cares about the Raspberry in particular...

You do.

And you seem to think everyone should run a full node, in the name of decentralization. I really think you misunderstand how decentralization works. You cannot expect every user to actively maintain the network. It's much more important that those who do maintain it understand what it takes and accept that burden. A person who can't set up their network properly, who can't really tell the difference between a full node and a light client, and who gets a crappy experience because the node is using too much bandwidth or degrading the Wi-Fi, should never run a full node. You need quality hardware and at least semi-qualified people. And to really add to decentralization, they should keep an eye on the behavior of the node. Those weirdos who just sit there with a beer watching transactions add more to decentralization than 10 "dumb" nodes.
sr. member
Activity: 423
Merit: 250
2. I have 2 TB of storage, but I'd guess the average might be well above 1 TB today. At least a 5x increase is reasonable in 6 years
Unless new technology comes up, no it isn't.

If you can prove we will not see 10+ TB HDDs in mass production within the next 6 years, then you might make some big blockers more sceptical about scaling above 2 or 4 MB. So go ahead.


3. Raspberry Pis are a dead end for the most successful crypto, imo. You don't release the best games with minimum system requirements so low that they please everybody. You target something like the top 75% of the PC market, and only that way can you release a top product played by most. Releasing a game aimed at the Indian market with basically no minimum system requirements will not give you a bestseller...
You guys seem to lack some logic though. Who cares about the Raspberry Pi in particular; take any model of processor that you want. Where is the limit? This was discussed at a workshop IIRC: new nodes being unable to ever catch up to the network. You can't deny it either way; essentially you would push out X amount of people with systems that are unable to handle the network anymore -> smaller number of nodes -> decentralization harmed.
As previously said, this is definitely going to happen; the question is whether it is going to be negligible.

Nodes with average system specs (a 4-core 2 GHz CPU) catch up pretty nicely; up to the checkpoint it is never an issue, as network speed is the biggest bottleneck there. Only roughly the last half year of blocks takes most of the time, because only then are all transaction signatures checked. Fortunately, a standard 4-core rig today can use all 4 cores to catch up in a few days at most. Obviously the number of cores in an average rig is going to increase in the future, and Bitcoin devs could put new checkpoints in every minor version (compared to every major version today), so you would really only spend most of the time on the last ~2 months of syncing instead of roughly half a year as of now... a pretty easy fix that helps scalability for new nodes syncing from zero
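To make that concrete (a purely illustrative sketch; every number here is an assumption, not a benchmark):

Code:
# Rough initial-sync model: blocks before the checkpoint are limited
# by download speed; blocks after it also need signature checks.
CHAIN_GB = 50                 # assumed total chain size
DOWN_MBPS = 50                # assumed download speed, megabits/s
SIGS_TO_CHECK = 200e6         # assumed signatures after the checkpoint
SIGS_PER_SEC_PER_CORE = 2000  # assumed verification rate
CORES = 4

download_h = CHAIN_GB * 8e3 / DOWN_MBPS / 3600
verify_h = SIGS_TO_CHECK / (SIGS_PER_SEC_PER_CORE * CORES) / 3600
print(f"download ~{download_h:.1f} h, signature checks ~{verify_h:.1f} h")
# Moving the checkpoint forward shrinks SIGS_TO_CHECK; more cores
# divide the verify time -- both help new nodes catch up.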
legendary
Activity: 2576
Merit: 1087
Anecdotally, I just switched plans with my ISP. Signed up for their new "Terabyte" service.

Not a bad service for some backwater village in the UK...
legendary
Activity: 2674
Merit: 2965
Terminated.
It also shows a strong philosophy when it comes to future scalability: simple solutions tend to survive long term. It is very easy to expand the capacity of a simple system by simply adding more of it, because you don't change the behavior of the existing system, and all the systems that depend on it can keep working as usual
It doesn't change the behavior of the system? This can only be said of segwit until everyone is using it; after that, it is the system. However, I would argue that this 'simple' solution has a broader effect than many claim. Does it not push away some existing nodes and potential new nodes due to the increased requirements?
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
I think any normal person will just go for the first option; only geeks and technically interested guys will try the second approach, and eventually many of them will give up on that setup because it is just too complex to implement and maintain, and a RAID 0 has a higher risk of failure; it is not worth the effort
RAID 0 does not have a high risk of failure, and it is usually used for much better performance, not endurance/storage, so this analogy is wrong. Let's move on.


I use this example because it has a proven history: after so many years of RAID technology being available, most people are still not running RAID, and when they do, they typically run RAID 1 or RAID 10 in data centers for more data safety. This is all because the added complexity caused so many compatibility/driver problems, which do not exist for a single-drive setup, that the benefit of increased speed is not worth the effort of setting up and maintaining a RAID

It also shows a strong philosophy when it comes to future scalability: simple solutions tend to survive long term. It is very easy to expand the capacity of a simple system by simply adding more of it, because you don't change the behavior of the existing system, and all the systems that depend on it can keep working as usual
legendary
Activity: 2674
Merit: 2965
Terminated.
To be clear, I do not think we should restrict the throughput of the network so that people can continue to run full nodes on Raspberry Pis. I do not think this was ever the intention for Bitcoin either. It is true that there is a balancing act here, and that decentralization is affected in different ways. However, everything considered, I think that increasing the blocksize will be better for decentralization overall compared to leaving it at one megabyte.
You guys obviously have no clue as to what an 'example' is. I couldn't care less about the Raspberry Pi itself, but I know a lot of people using those to either mine or run nodes.

2. I have 2 TB of storage, but I'd guess the average might be well above 1 TB today. At least a 5x increase is reasonable in 6 years
Unless new technology comes up, no it isn't.

3. Raspberry Pis are a dead end for the most successful crypto, imo. You don't release the best games with minimum system requirements so low that they please everybody. You target something like the top 75% of the PC market, and only that way can you release a top product played by most. Releasing a game aimed at the Indian market with basically no minimum system requirements will not give you a bestseller...
You guys seem to lack some logic though. Who cares about the Raspberry Pi in particular; take any model of processor that you want. Where is the limit? This was discussed at a workshop IIRC: new nodes being unable to ever catch up to the network. You can't deny it either way; essentially you would push out X amount of people with systems that are unable to handle the network anymore -> smaller number of nodes -> decentralization harmed.
As previously said, this is definitely going to happen; the question is whether it is going to be negligible.
sr. member
Activity: 423
Merit: 250
Miners don't currently mine empty blocks in hopes of keeping the blockchain trim & saving some HD space. Care to guess why they do it?
Care to guess how likely that shit is to become pandemic as the blocksize is raised?

When miners receive a new block, they start working on an empty block so they are guaranteed to have a valid block. Once they finally validate the received block, they can add transactions and mine non-empty blocks. It might take about 30 seconds to validate, which fits the ~5% rate of empty blocks.
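To spell out that arithmetic (a minimal sketch; the 30 s validation time is just my estimate, and the larger values are hypothetical):

Code:
# Expected share of empty blocks if miners mine an empty template
# while validating: roughly validation_time / block_interval.
BLOCK_INTERVAL = 600  # seconds, on average

for validation_s in (30, 60, 120):  # 30 s from above; the rest hypothetical
    print(f"{validation_s:3d} s validation -> "
          f"~{validation_s / BLOCK_INTERVAL:.0%} empty blocks")
# 30 s -> ~5%, 60 s -> ~10%, 120 s -> ~20%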

I see where you're aiming: if bigger blocks are received, they take longer to validate, thus a higher % of empty blocks. But there is work in progress where miners could announce the txs they are mining on along with the block, or add their own txs that no one else can know about (like payments to their users); there is also naturally increasing CPU speed over time, or, as another possibility, dedicated ASIC hardware for faster validation if fees become really important and the other options fail.

But these are worries only for miners, who are expected to have better hardware than regular nodes; regular nodes are fine even with medium hardware specs when the block size limit is increased.
full member
Activity: 126
Merit: 100

The 21 Bitcoin PiTato is the first PiTato with native hardware and software support for the Bitcoin protocol.
Cool

I don't know what it is, but I feel that a PiTato should really be a thing.

That's what somebody called the 21inc Bitcoin Computer, and now I can't think of it as anything else Cheesy
hero member
Activity: 546
Merit: 500
Warning: Confrmed Gavinista

The 21 Bitcoin PiTato is the first PiTato with native hardware and software support for the Bitcoin protocol.
Cool

I don't know what it is, but I feel that a PiTato should really be a thing.
sr. member
Activity: 423
Merit: 250
Because not all blocks will be full, we are looking at ~0.5 TB over the next 6 years, when users will probably have 10-100 TB HDDs. If someone really thinks a bit technically about it and doesn't spread FUD about decentralization and how Bitcoin can't scale with on-chain transactions anyway, he would come to the conclusion: let's scale on-chain transactions as much as possible while still keeping decentralization - 0.5 TB of HDD storage in the next 6 years will not break decentralization, and even 1 TB or 2 TB will not break it either Smiley.
This is "FUD". You're working under the assumption that: 1. The block size will remain at 2 MB for 6 years (while saying that 1 MB hurts adoption). 2. The HDD capacity is going to increase tenfold in 6 years? 3. This doesn't hurt decentralization. You obviously are not thinking properly because there are a lot of factors to consider.
What about new nodes? Good luck catching up with a 0.63 TB network on a raspberry PI. This is something that can not be left out (among other things). In other words, this does hurt decentralization and that is a fact. The question is just how much and is it negligible?

1. It can safely be adjusted up to 8 MB when needed, which is still at most ~2.5 TB of data every 6 years. The point was: as long as users' natural HDD capacity increases over time, the blocksize limit can be increased along with it to keep the same level of decentralization.

2. I have 2 TB of storage, but I'd guess the average might be well above 1 TB today. At least a 5x increase is reasonable in 6 years

3. Raspberry Pis are a dead end for the most successful crypto, imo. You don't release the best games with minimum system requirements so low that they please everybody. You target something like the top 75% of the PC market, and only that way can you release a top product played by most. Releasing a game aimed at the Indian market with basically no minimum system requirements will not give you a bestseller...
full member
Activity: 126
Merit: 100
Because not all blocks will be full, we are looking at ~0.5 TB over the next 6 years, when users will probably have 10-100 TB HDDs. If someone really thinks a bit technically about it and doesn't spread FUD about decentralization and how Bitcoin can't scale with on-chain transactions anyway, he would come to the conclusion: let's scale on-chain transactions as much as possible while still keeping decentralization - 0.5 TB of HDD storage in the next 6 years will not break decentralization, and even 1 TB or 2 TB will not break it either Smiley.
This is "FUD". You're working under the assumption that: 1. The block size will remain at 2 MB for 6 years (while saying that 1 MB hurts adoption). 2. The HDD capacity is going to increase tenfold in 6 years? 3. This doesn't hurt decentralization. You obviously are not thinking properly because there are a lot of factors to consider.
What about new nodes? Good luck catching up with a 0.63 TB network on a raspberry PI. This is something that can not be left out (among other things). In other words, this does hurt decentralization and that is a fact. The question is just how much and is it negligible?

Is this what you're fighting for?

Being able to run a node on a Raspberry Pi?

[Serious facepalm]

The 21 Bitcoin PiTato is the first PiTato with native hardware and software support for the Bitcoin protocol.
Cool
legendary
Activity: 1526
Merit: 1013
Make Bitcoin glow with ENIAC
Because not all blocks will be full, we are looking at ~0.5 TB over the next 6 years, when users will probably have 10-100 TB HDDs. If someone really thinks a bit technically about it and doesn't spread FUD about decentralization and how Bitcoin can't scale with on-chain transactions anyway, he would come to the conclusion: let's scale on-chain transactions as much as possible while still keeping decentralization - 0.5 TB of HDD storage in the next 6 years will not break decentralization, and even 1 TB or 2 TB will not break it either Smiley.
This is "FUD". You're working under the assumption that: 1. The block size will remain at 2 MB for 6 years (while saying that 1 MB hurts adoption). 2. The HDD capacity is going to increase tenfold in 6 years? 3. This doesn't hurt decentralization. You obviously are not thinking properly because there are a lot of factors to consider.
What about new nodes? Good luck catching up with a 0.63 TB network on a raspberry PI. This is something that can not be left out (among other things). In other words, this does hurt decentralization and that is a fact. The question is just how much and is it negligible?

Is this what you're fighting for?

Being able to run a node on a Raspberry Pi?

[Serious facepalm]
newbie
Activity: 42
Merit: 0
... An eight terabyte hard drive will be able to store the entire Bitcoin blockchain for the next six years easily, and that is with a two megabyte blocksize limit.

http://www.amazon.com/Seagate-Archive-Internal-Hard-Drive/dp/B00XS423SC
Assuming Bitcoin adoption is poor & we don't need to bump blocksize again. 2MB gives us what, ~6tps, and we're ~2 - 3tps now?
Yeah, it essentially doubles throughput. This will make a huge practical difference in terms of adoption over the next few years.

Oh, I agree that it's better than nothing, but in a futile sort of way. I mean, if you don't count on Bitcoin's userbase >doubling in 6 years...

2MB * 144 blocks per day * 365 days * 6 years = 0.63 TB
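Spelled out (a trivial check of the arithmetic, also covering the 8 MB case discussed above):

Code:
# Worst-case chain growth: every block full, one block per 10 minutes.
BLOCKS_PER_DAY = 144

for mb in (1, 2, 8):
    tb = mb * BLOCKS_PER_DAY * 365 * 6 / 1e6  # MB over 6 years -> TB
    print(f"{mb} MB blocks: {tb:.2f} TB in 6 years")
# 1 MB: 0.32 TB   2 MB: 0.63 TB   8 MB: 2.52 TB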

Because not all blocks will be full, we are looking at ~0.5 TB over the next 6 years, when users will probably have 10-100 TB HDDs. If someone really thinks a bit technically about it and doesn't spread FUD about decentralization and how Bitcoin can't scale with on-chain transactions anyway, he would come to the conclusion: let's scale on-chain transactions as much as possible while still keeping decentralization - 0.5 TB of HDD storage in the next 6 years will not break decentralization, and even 1 TB or 2 TB will not break it either Smiley.

But keep posting FUD, smallblockers, in an attempt to hurt Bitcoin adoption as much as possible to help your goals, whatever those are...

I think keeping the 1M block cap is suicidal. Which is not to say that doubling it is a solution. And not to imply that "smallblockers" do not have some valid points.
The price of storage devices is neither here nor there.
Miners don't currently mine empty blocks in hopes of keeping the blockchain trim & saving some HD space. Care to guess why they do it?
Care to guess how likely that shit is to become pandemic as the blocksize is raised?
hero member
Activity: 546
Merit: 500
Because not all blocks will be full, we are looking at ~0.5 TB over the next 6 years, when users will probably have 10-100 TB HDDs. If someone really thinks a bit technically about it and doesn't spread FUD about decentralization and how Bitcoin can't scale with on-chain transactions anyway, he would come to the conclusion: let's scale on-chain transactions as much as possible while still keeping decentralization - 0.5 TB of HDD storage in the next 6 years will not break decentralization, and even 1 TB or 2 TB will not break it either Smiley.
This is "FUD". You're working under the assumption that: 1. The block size will remain at 2 MB for 6 years (while saying that 1 MB hurts adoption). 2. The HDD capacity is going to increase tenfold in 6 years? 3. This doesn't hurt decentralization. You obviously are not thinking properly because there are a lot of factors to consider.
What about new nodes? Good luck catching up with a 0.63 TB network on a raspberry PI. This is something that can not be left out (among other things). In other words, this does hurt decentralization and that is a fact. The question is just how much and is it negligible?
To be clear, I do not think we should restrict the throughput of the network so that people can continue to run full nodes on Raspberry Pis. I do not think this was ever the intention for Bitcoin either. It is true that there is a balancing act here, and that decentralization is affected in different ways. However, everything considered, I think that increasing the blocksize will be better for decentralization overall compared to leaving it at one megabyte.
hero member
Activity: 546
Merit: 500
... An eight terabyte hard drive will be able to store the entire Bitcoin blockchain for the next six years easily, and that is with a two megabyte blocksize limit.

http://www.amazon.com/Seagate-Archive-Internal-Hard-Drive/dp/B00XS423SC
Assuming Bitcoin adoption is poor & we don't need to bump blocksize again. 2MB gives us what, ~6tps, and we're ~2 - 3tps now?
Yeah, it essentially doubles throughput. This will make a huge practical difference in terms of adoption over the next few years.

Oh, I agree that it's better than nothing, but in a futile sort of way. I mean, if you don't count on Bitcoin's userbase >doubling in 6 years...

2MB * 144 blocks per day * 365 days * 6 years = 0.63 TB

Because not all blocks will be full, we are looking at ~0.5 TB over the next 6 years, when users will probably have 10-100 TB HDDs. If someone really thinks a bit technically about it and doesn't spread FUD about decentralization and how Bitcoin can't scale with on-chain transactions anyway, he would come to the conclusion: let's scale on-chain transactions as much as possible while still keeping decentralization - 0.5 TB of HDD storage in the next 6 years will not break decentralization, and even 1 TB or 2 TB will not break it either Smiley.

But keep posting FUD, smallblockers, in an attempt to hurt Bitcoin adoption as much as possible to help your goals, whatever those are...
True, my math was a bit off there, must have missed a zero somewhere. Admittedly mathematics is not my strong point. Smiley
legendary
Activity: 2674
Merit: 2965
Terminated.
Because not all blocks will be full, we are looking at ~0.5 TB over the next 6 years, when users will probably have 10-100 TB HDDs. If someone really thinks a bit technically about it and doesn't spread FUD about decentralization and how Bitcoin can't scale with on-chain transactions anyway, he would come to the conclusion: let's scale on-chain transactions as much as possible while still keeping decentralization - 0.5 TB of HDD storage in the next 6 years will not break decentralization, and even 1 TB or 2 TB will not break it either Smiley.
This is "FUD". You're working under the assumption that: 1. The block size will remain at 2 MB for 6 years (while saying that 1 MB hurts adoption). 2. The HDD capacity is going to increase tenfold in 6 years? 3. This doesn't hurt decentralization. You obviously are not thinking properly because there are a lot of factors to consider.
What about new nodes? Good luck catching up with a 0.63 TB network on a raspberry PI. This is something that can not be left out (among other things). In other words, this does hurt decentralization and that is a fact. The question is just how much and is it negligible?
sr. member
Activity: 423
Merit: 250
... An eight terabyte hard drive will be able to store the entire Bitcoin blockchain for the next six years easily, and that is with a two megabyte blocksize limit.

http://www.amazon.com/Seagate-Archive-Internal-Hard-Drive/dp/B00XS423SC
Assuming Bitcoin adoption is poor & we don't need to bump blocksize again. 2MB gives us what, ~6tps, and we're ~2 - 3tps now?
Yeah, it essentially doubles throughput. This will make a huge practical difference in terms of adoption over the next few years.

Oh, I agree that it's better than nothing, but in a futile sort of way. I mean, if you don't count on Bitcoin's userbase >doubling in 6 years...

2MB * 144 blocks per day * 365 days * 6 years = 0.63 TB

Because not all blocks will be full, we are looking at ~0.5 TB over the next 6 years, when users will probably have 10-100 TB HDDs. If someone really thinks a bit technically about it and doesn't spread FUD about decentralization and how Bitcoin can't scale with on-chain transactions anyway, he would come to the conclusion: let's scale on-chain transactions as much as possible while still keeping decentralization - 0.5 TB of HDD storage in the next 6 years will not break decentralization, and even 1 TB or 2 TB will not break it either Smiley.

But keep posting FUD, smallblockers, in an attempt to hurt Bitcoin adoption as much as possible to help your goals, whatever those are...
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political

No, he's not. He made a great point and your counterargument is 'you're starting to crack under pressure'.  Roll Eyes

Didn't seem that great to me.  (the spoon/fork comment was apropos).

He started with a false premise about the "block maximalists" and what their motives are.
I already stated my own opinions about what "big blockers" really want,
and that was the counterargument.

The 'cracking under pressure' was a tangential remark, not my argument,
you knucklehead.  Tongue

Half joking anyway about 'cracking' but his recent post history has been uncharacteristic.