
Topic: AntPool, BTCChina Pool, 21 Inc., BW Pool and KNCMiner support 8MB (Read 1984 times)

legendary
Activity: 1274
Merit: 1000
I support larger blocks, but I do not support XT.  I see it as an attempt at a hostile takeover of the blockchain.  If XT "wins", then the firm of Hearn and Andresen basically takes ownership of the entire blockchain, as they retain the sole power to say what's in and what's out of XT.  That doesn't sit well with me at all.

they will suddenly have the power to make miners run whatever they want as soon as XT is implemented? really?? come on....

Yes, really.  If XT were to win out, those two would have ultimate say in every single aspect of Bitcoin, with one of them able to override the other.

Quote
Decisions are made through agreement between Mike and Gavin, with Mike making the final call if a serious dispute were to arise.
source: https://bitcoinxt.software/faq.html

I do not see how giving 1 or 2 people ultimate control of the network decentralizes anything.
legendary
Activity: 2786
Merit: 1031

Satoshi never intended a limit to exist, so what was his solution?
His solution was the 1MB blocks that we have today.  He didn't have an ultimate solution, or else this discussion wouldn't be happening.

I support larger blocks, but I do not support XT.  I see it as an attempt at a hostile takeover of the blockchain.  If XT "wins", then the firm of Hearn and Andresen basically takes ownership of the entire blockchain, as they retain the sole power to say what's in and what's out of XT.  That doesn't sit well with me at all.

It's that or Blockstream Core, and those guys have their own network to sell to the entire community...
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner

Satoshi never intended a limit to exist, so what was his solution?
His solution was the 1MB blocks that we have today.  He didn't have an ultimate solution, or else this discussion wouldn't be happening.

I support larger blocks, but I do not support XT.  I see it as an attempt at a hostile takeover of the blockchain.  If XT "wins", then the firm of Hearn and Andresen basically takes ownership of the entire blockchain, as they retain the sole power to say what's in and what's out of XT.  That doesn't sit well with me at all.

they will suddenly have the power to make miners run whatever they want as soon as XT is implemented? really?? come on....

I support larger blocks, and I'd definitely buy some hashing power to mine big blocks if that option were available.


legendary
Activity: 1274
Merit: 1000

Satoshi never intended a limit to exist, so what was his solution?
His solution was the 1MB blocks that we have today.  He didn't have an ultimate solution, or else this discussion wouldn't be happening.

I support larger blocks, but I do not support XT.  I see it as an attempt at a hostile takeover of the blockchain.  If XT "wins", then the firm of Hearn and Andresen basically takes ownership of the entire blockchain, as they retain the sole power to say what's in and what's out of XT.  That doesn't sit well with me at all.
legendary
Activity: 2674
Merit: 3000
Terminated.
Please stop spreading misinformation. Today I have 2 Mbps upload, and 10 years ago I had 1 Mbps. What is your point? As I have stated, the problem is severe in developing countries (and if you factor in the Great Firewall in China, it gets worse).

Today I have 200 Mbps upload speed. 10 years ago I had 56 kbps.
That's interesting: dial-up internet in 2005. Where do you live, if I may ask? Anyhow, mentioning 5G is pointless. Where I currently reside (information not available to the public) I barely ever get a 3G connection (rarely, usually in the center of a city).


Even though it is a bit outdated, it shows that developing countries are far behind and will continue to be so.


I didn't have time to perform a more extensive search.
legendary
Activity: 994
Merit: 1035
And what about the race for high-speed satellite Internet? SpaceX, Airbus...

I'm sure they could help the other parts of the world.

Satellite internet has horrible latency. There is a huge discrepancy between real-life numbers and proposed projections, peak numbers, or hypothetical rates. Even your 200 Mbps is likely burst speed and not continuous, with possible undisclosed soft caps. As a child I used to read magazines like Popular Science and Popular Mechanics, but as I matured I realized that many "tech" journalists know little about the technology they write about, and many of their projections were exaggerated for journalistic effect as well.

I could understand your case if it were presented in a way that took into account past historical trends.
Why isn't it sensible to use past trends as a conservative projection, and then make adjustments if these technologies actually become commonplace?
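
For context, the latency floor on a geostationary satellite link is set by physics alone, before any network equipment adds overhead. A back-of-the-envelope sketch (the altitude is the standard geostationary figure; proposed low-earth-orbit constellations such as SpaceX's would sit far lower and do better):

Code:
# Minimum round-trip latency over a geostationary satellite link,
# assuming the path user -> satellite -> ground station and back.
GEO_ALTITUDE_KM = 35_786          # standard geostationary orbit altitude
SPEED_OF_LIGHT_KM_S = 299_792     # speed of light in vacuum

one_way_km = 2 * GEO_ALTITUDE_KM           # up to the satellite, back down
round_trip_km = 2 * one_way_km             # and the same path in reverse
latency_ms = round_trip_km / SPEED_OF_LIGHT_KM_S * 1000
print(f"Best-case round trip: {latency_ms:.0f} ms")   # ~477 ms

That is nearly half a second before a single byte of block data moves, which is what "horrible latency" means in practice for block propagation.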
sr. member
Activity: 422
Merit: 250
I really do have 200 Mbps upload speed. And it's just a commercial ISP with a cheap, flat price, and I'm not from the USA. And I have no doubt we are soon going to see much faster speeds with new technologies: http://www.androidauthority.com/5g-network-speed-20-gbps-618192/

5G will be defined as a network “capable of transmitting data at up to 20 gigabits-per-second”

The exception that breaks the rule being applied to an international decentralized currency, huh?

When and if that reality arrives, we can revisit its implications for Bitcoin. Until then, it doesn't follow the past historical trends outlined in BIP 103, and thus we should be skeptical about the advertised possibilities of the future. That is what is sensible and rational. I have no doubt there will be certain cities with high-speed fiber optic connections and 5G, but the concern is the node count shrinking to only those locations instead of increasing everywhere. This infrastructure may roll out within the next 4-6 years in large metropolitan cities but will take much longer elsewhere.



And what about the race for high-speed satellite Internet? SpaceX, Airbus...

I'm sure they could help the other parts of the world.
legendary
Activity: 994
Merit: 1035
I really do have 200 Mbps upload speed. And it's just a commercial ISP with a cheap, flat price, and I'm not from the USA. And I have no doubt we are soon going to see much faster speeds with new technologies: http://www.androidauthority.com/5g-network-speed-20-gbps-618192/

5G will be defined as a network “capable of transmitting data at up to 20 gigabits-per-second”

The exception that breaks the rule being applied to an international decentralized currency, huh?

When and if that reality arrives, we can revisit its implications for Bitcoin. Until then, it doesn't follow the past historical trends outlined in BIP 103, and thus we should be skeptical about the advertised possibilities of the future. That is what is sensible and rational. I have no doubt there will be certain cities with high-speed fiber optic connections and 5G, but the concern is the node count shrinking to only those locations instead of increasing everywhere. This infrastructure may roll out within the next 4-6 years in large metropolitan cities but will take much longer elsewhere.
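
For reference, BIP 103 ("Block size following technological growth") proposes growth of roughly 17.7% per year, on the argument that this tracks historical bandwidth trends. A quick sketch of what that compounds to, assuming the 1 MB starting point in 2017 as in the BIP (the code itself is only illustrative):

Code:
# Projected maximum block size under BIP 103's ~17.7%/year growth.
GROWTH_PER_YEAR = 0.177
START_YEAR, START_MB = 2017, 1.0

for year in (2020, 2025, 2035, 2045, 2063):
    size_mb = START_MB * (1 + GROWTH_PER_YEAR) ** (year - START_YEAR)
    print(f"{year}: {size_mb:8.1f} MB")
# 2025 lands near 4 MB -- close to the Szabo figure cited elsewhere in
# this thread -- and the schedule only reaches the gigabyte range in the 2060s.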

sr. member
Activity: 422
Merit: 250
Today I have 200 Mbps upload speed. 10 years ago I had 56 kbps.

First of all, it isn't just about you but about the rest of the world as well, and about the diversity of the node count.

Secondly, I seriously doubt you have 200 Mbps persistent upload speed. The US typically averages about ~8 Mbps upload, and in many parts of the world people are limited to 0.5 to 1 Mbps upload. A couple of the devs would have to drop their own nodes, for example, under Gavin's proposal.

Advertised peak upload speed does not equal reality.



I really do have 200 Mbps upload speed. And it's just a commercial ISP with a cheap, flat price, and I'm not from the USA. And I have no doubt we are soon going to see much faster speeds with new technologies: http://www.androidauthority.com/5g-network-speed-20-gbps-618192/

5G will be defined as a network “capable of transmitting data at up to 20 gigabits-per-second”
legendary
Activity: 994
Merit: 1035
Today I have 200 Mbps upload speed. 10 years ago I had 56 kbps.

First of all, it isn't just about you but about the rest of the world as well, and about the diversity of the node count.

Secondly, I seriously doubt you have 200 Mbps persistent upload speed. The US typically averages about ~8 Mbps upload, and in many parts of the world people are limited to 0.5 to 1 Mbps upload. A couple of the devs would have to drop their own nodes, for example, under Gavin's proposal.

Advertised peak upload speed does not equal reality.
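
The upload constraint is easy to put numbers on. A rough sketch of how long a node would need to send one full block to its peers (the 8-peer count and the naive send-everything relay model are simplifying assumptions; real relaying is messier):

Code:
# Time for a node to upload one new block to all of its peers, assuming
# it naively sends the full block to each peer.
def relay_time_s(block_mb: float, peers: int, upload_mbps: float) -> float:
    megabits = block_mb * 8 * peers       # total data to push out
    return megabits / upload_mbps         # seconds at the given link rate

# Low-end links, the ~8 Mbps US average, and the disputed 200 Mbps claim.
for upload_mbps in (0.5, 1.0, 8.0, 200.0):
    t = relay_time_s(block_mb=8, peers=8, upload_mbps=upload_mbps)
    print(f"{upload_mbps:6.1f} Mbps upload -> {t:7.1f} s for an 8 MB block")

At 0.5-1 Mbps the relay time (512-1024 s) approaches or exceeds the 600-second average block interval, which is exactly the node-dropout concern raised above.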
sr. member
Activity: 422
Merit: 250
Wrong. It is more about the network bandwidth than about storage (even though we are reaching the maximum with the current technology). The increase has started to slow down over the last few years. The problem is persistent in developing countries.

Mmmm.... No. That's the problem: they foresee a scarce future, and I don't agree at all with those predictions.

http://www.v3.co.uk/v3-uk/news/2396249/exclusive-university-of-surrey-achieves-5g-speeds-of-1tbps
sr. member
Activity: 422
Merit: 250
10 years ago, the average HDD was 1GB in capacity; today it is close to 1TB! How can he propose a 4MB limit in 10 years? Where is the logic in that?

The concerns of most devs have nothing to do with hard drive space but with propagation time, bandwidth limits (especially upload), and mempool problems created by larger blocks.

Today I have 200 Mbps upload speed. 10 years ago I had 56 kbps.
legendary
Activity: 1148
Merit: 1014
In Satoshi I Trust
And sidechains/Lightning need bigger blocks too, so where is the point? Maybe XT is too much, but 1 MB is too little.

95%+ of the devs are fine with increasing the block size above 1MB; they just want more testing done.

This is precisely what is being attempted here:

https://scalingbitcoin.org/montreal2015/



Yes, I know about that. Hopefully it will bring more clarity.

Warren, a leading Litecoin dev, is on the Montreal Workshop Planning Committee.
legendary
Activity: 994
Merit: 1035
And sidechains/Lightning need bigger blocks too, so where is the point? Maybe XT is too much, but 1 MB is too little.

95%+ of the devs are fine with increasing the block size above 1MB; they just want more testing done.

This is precisely what is being attempted here:

https://scalingbitcoin.org/montreal2015/

legendary
Activity: 2674
Merit: 3000
Terminated.
What is wrong with that?!  That company could steer Bitcoin development to its own advantage!!
Everyone would probably do the same in their place. (Almost) everyone strives for money/profit. They can't force anything into the software; people would be aware of it and reject the client.

"They will increase the block size limit"

Let's see. I have readed LUKEJR saying that the first priority is Lightning, not the block limit. ADAM BACK saying that not every transaction should be on the blockchain or it will collapse the whole Internet. PETER TODD saying that blockchains don't scale well. And SZABO proposing 4MB blocks for 2025!
You mean you have read; 'readed' is not a word. You will definitely have to do a lot of reading on the mailing list and Reddit. The only ones who are strongly against hard forking (right now) are Maxwell and Luke.
You are only looking at one side of the picture, and thus your view is biased. What about Jeff Garzik, who proposed both BIP 100 and BIP 102? Is he not a Core developer?

10 years ago, the average HDD was 1GB in capacity; today it is close to 1TB! How can he propose a 4MB limit in 10 years? Where is the logic in that?
Wrong. It is more about the network bandwidth than about storage (even though we are reaching the maximum with the current technology). The increase has started to slow down over the last few years. The problem is persistent in developing countries.
legendary
Activity: 1148
Merit: 1014
In Satoshi I Trust
And sidechains/Lightning need bigger blocks too, so where is the point? Maybe XT is too much, but 1 MB is too little.
legendary
Activity: 2786
Merit: 1031
10 years ago, the average HDD was 1GB in capacity; today it is close to 1TB! How can he propose a 4MB limit in 10 years? Where is the logic in that?

The concerns of most devs have nothing to do with hard drive space but with propagation time, bandwidth limits (especially upload), and mempool problems created by larger blocks.

http://wallstreettechnologist.com/2015/08/19/bitcoin-xt-vs-core-blocksize-limit-the-schism-that-divides-us-all/

Quote
Selfish miners

Selfish mining is one such attack, clearly explained in Satoshi's paper as a possible weakness of Bitcoin.  It entails a miner with a significant amount of hashing power mining blocks but not publishing them, thereby creating a secret longer chain that the rest of the network does not know about, with the intent of broadcasting it later and thereby reversing some transactions that may have already been confirmed on the public (but shorter) chain.  This is the infamous double-spend attack.  Normally it can only be accomplished reliably when one possesses over 51% of the hashing power of the network.  What most people don't know is that network propagation is also a factor here.  Satoshi admitted that his calculation of the percentage of hashing power needed to pull off a 51% attack reliably assumes no significant network propagation delays.  Indeed, the danger of letting the block size grow to the point where propagation delays through the network become significant has been discussed ad infinitum in the past, and it is the reason why the block size debate has been ongoing since at least 2011.

In this regard, I can understand why Gavin feels that he must do something drastic to force the issue.  The attack goes as follows: if blocks were allowed to be 'too big' (big enough to add appreciable delays in propagating to all nodes), then a miner would be incentivized to stuff the block they are mining full of transactions that pay themselves (or a cohort), up to the allowable block limit.  They do not broadcast these transactions to the network unless they solve the block themselves, removing the possibility of their fees going to some other miner.  If they manage to solve the block, they immediately broadcast all their spam transactions along with the block solution.  The other miners would have to drop what they are mining, start downloading the new (very large) block (which may take some time), and verify it, which involves checking the validity of all contained transactions (which takes some more time).  All this results in an appreciable head start that the attacking miner enjoys in mining the next block.  So what they have successfully done is increase their 'effective' hashing power, giving them a slight edge over their competitors.  Of course this is a game-theoretic problem, so we can assume that once one miner starts doing this, either all miners will start doing it as well (making orphan blocks and double spends a lot more common) or they will band together to share high-bandwidth connections/nodes (pushing the system towards a more centralized one); both outcomes are bad for Bitcoin.  So everyone can agree that too big a block size would open Bitcoin up to a type of fragility that has, up until now, not been a problem.

Satoshi never intended a limit to exist, so what was his solution?
hero member
Activity: 1582
Merit: 502
10 years ago, the average HDD was 1GB in capacity; today it is close to 1TB! How can he propose a 4MB limit in 10 years? Where is the logic in that?

The concerns of most devs have nothing to do with hard drive space but with propagation time, bandwidth limits (especially upload), and mempool problems created by larger blocks.

http://wallstreettechnologist.com/2015/08/19/bitcoin-xt-vs-core-blocksize-limit-the-schism-that-divides-us-all/

Quote
Selfish miners

Selfish mining is one such attack, clearly explained in Satoshi's paper as a possible weakness of Bitcoin.  It entails a miner with a significant amount of hashing power mining blocks but not publishing them, thereby creating a secret longer chain that the rest of the network does not know about, with the intent of broadcasting it later and thereby reversing some transactions that may have already been confirmed on the public (but shorter) chain.  This is the infamous double-spend attack.  Normally it can only be accomplished reliably when one possesses over 51% of the hashing power of the network.  What most people don't know is that network propagation is also a factor here.  Satoshi admitted that his calculation of the percentage of hashing power needed to pull off a 51% attack reliably assumes no significant network propagation delays.  Indeed, the danger of letting the block size grow to the point where propagation delays through the network become significant has been discussed ad infinitum in the past, and it is the reason why the block size debate has been ongoing since at least 2011.

In this regard, I can understand why Gavin feels that he must do something drastic to force the issue.  The attack goes as follows: if blocks were allowed to be 'too big' (big enough to add appreciable delays in propagating to all nodes), then a miner would be incentivized to stuff the block they are mining full of transactions that pay themselves (or a cohort), up to the allowable block limit.  They do not broadcast these transactions to the network unless they solve the block themselves, removing the possibility of their fees going to some other miner.  If they manage to solve the block, they immediately broadcast all their spam transactions along with the block solution.  The other miners would have to drop what they are mining, start downloading the new (very large) block (which may take some time), and verify it, which involves checking the validity of all contained transactions (which takes some more time).  All this results in an appreciable head start that the attacking miner enjoys in mining the next block.  So what they have successfully done is increase their 'effective' hashing power, giving them a slight edge over their competitors.  Of course this is a game-theoretic problem, so we can assume that once one miner starts doing this, either all miners will start doing it as well (making orphan blocks and double spends a lot more common) or they will band together to share high-bandwidth connections/nodes (pushing the system towards a more centralized one); both outcomes are bad for Bitcoin.  So everyone can agree that too big a block size would open Bitcoin up to a type of fragility that has, up until now, not been a problem.
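
A minimal Monte Carlo sketch of the head-start effect the quote describes (every number here is an illustrative assumption, and for simplicity the model applies the honest miners' delay every round rather than only after attacker blocks):

Code:
import random

def attacker_win_rate(share=0.3, interval_s=600.0, delay_s=30.0, rounds=200_000):
    """Fraction of blocks won by a miner whose oversized blocks cost the
    rest of the network `delay_s` seconds of download/validation time
    before it can start mining the next block."""
    wins = 0
    for _ in range(rounds):
        # Time to the next block on each side: exponential, with rate
        # proportional to each side's hash-power share.
        t_attacker = random.expovariate(share / interval_s)
        t_honest = delay_s + random.expovariate((1 - share) / interval_s)
        if t_attacker < t_honest:
            wins += 1
    return wins / rounds

print(f"30% hash share, no delay:   {attacker_win_rate(delay_s=0.0):.3f}")   # ~0.300
print(f"30% hash share, 30 s delay: {attacker_win_rate(delay_s=30.0):.3f}")  # ~0.310

Even a 30-second handicap on everyone else lifts a 30% miner to roughly a 31% effective share, and the effect grows as blocks (and therefore delays) grow.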
legendary
Activity: 994
Merit: 1035
10 years ago, the average HDD was 1GB in capacity; today it is close to 1TB! How can he propose a 4MB limit in 10 years? Where is the logic in that?

The concerns of most devs have nothing to do with hard drive space but with propagation time, bandwidth limits (especially upload), and mempool problems created by larger blocks.
sr. member
Activity: 422
Merit: 250
Exactly what is wrong with that? The developers decided to form a company to profit from something they're going to build on top of Bitcoin. Are you trying to tell me that if you were in their position, you would not try to do the same?  Roll Eyes
They will increase the block size limit; they just have to reach consensus on how much and when.

What is wrong with that?!  That company could steer Bitcoin development to its own advantage!!

"They will increase the block size limit"

Let's see. I have readed LUKEJR saying that the first priority is Lightning, not the block limit. ADAM BACK saying that not every transaction should be on the blockchain or it will collapse the whole Internet. PETER TODD saying that blockchains don't scale well. And SZABO proposing 4MB blocks for 2025!

10 years ago, the average HDD was 1GB in capacity; today it is close to 1TB! How can he propose a 4MB limit in 10 years? Where is the logic in that?
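
As an aside, it is worth checking what growth rate that 1GB-to-1TB comparison actually implies, since the replies argue that bandwidth, not storage, is the binding constraint. A quick arithmetic check:

Code:
# Annual growth implied by a 1000x capacity increase over 10 years.
factor, years = 1000, 10
annual = factor ** (1 / years)
print(f"{annual:.2f}x per year, i.e. ~{(annual - 1) * 100:.0f}% annual growth")
# ~2.00x per year: storage roughly doubled annually, an order of magnitude
# faster than the ~17.7%/year bandwidth trend that BIP 103 is built around.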
