Topic: Bitcoin XT - Officially #REKT (also goes for BIP101 fraud)

legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
Bitfury's paper here:

http://bitfury.com/content/4-white-papers-research/block-size-1.1.1.pdf

"The table contains an estimate of how many full nodes would no longer function without hard-ware upgrades as average block size is increased. These estimates are based on the assumption that many users run full nodes on consumer-grade hardware, whether on personal computers or in the
cloud. Characteristics of node hardware are based on a survey performed by Steam [19]; we assume PC gamers and Bitcoin enthusiasts have a similar amount of resources dedicated to their hardware.

The exception is RAM: we assume that a typical computer supporting a node has no less than 3 GB RAM as a node requires at least 2GB RAM to run with margin[15]. For example,if block size increases to 2 MB, a node would need to dedicate 8 GB RAM to the Bitcoin client, while more than a half of PCs
in the survey have less RAM."

Based on this estimate, raising the block size to 4MB would drop 75% of the nodes from the network

Thanks for the link. I'll read it in its entirety.

However, from my perspective, that latter statement is nothing but a rather weak assumption. According to bitnodes, there are currently about 5650 nodes in operation. Do you really think that a significant percentage of them have less than 8 GiB RAM? Even if they do today (which I do not for a minute believe), it currently costs well under 0.1 BTC to purchase 8 GiB of memory. Your assumption that requiring this much RAM would knock a significant percentage of nodes off the network is simply incredible, in the literal sense of the word.
hero member
Activity: 546
Merit: 500
Quote from: Satoshi Nakamoto
Just a friendly word of advice... It isn't a good idea to create a cult of personality and make persistent appeals to authority. Satoshi was/is a genius, and while it is easy to revere his opinion, he also made multiple mistakes, and judging from his programming history he wasn't the most competent developer either, so he could easily have overlooked many technical nuances. Many of the current core developers are far more competent and technically proficient than Satoshi... and we shouldn't view them as infallible or make appeals to their authority either.
I agree, and in that spirit, let me state clearly here that just because Satoshi said these things does not mean he was right; he could have been wrong, as some of the Core developers are now saying. I might agree with the original vision of Satoshi, but we do have to be critical and always question ourselves. I like to believe that is what Satoshi would have wanted us to do. It is true that I often fall into the trap of viewing him as a mythological figure, which is fun, but I do need to check myself on that sometimes. Grin
legendary
Activity: 994
Merit: 1035
Quote from: Satoshi Nakamoto

Just a friendly word of advice... It isn't a good idea to create a cult of personality and make persistent appeals to authority. Satoshi was/is a genius, and while it is easy to revere his opinion, he also made multiple mistakes, and judging from his programming history he wasn't the most competent developer either. Many of the current core developers are far more competent and technically proficient than Satoshi... and we shouldn't view them as infallible or make appeals to their authority either. This is yet another reason why we need to support multiple implementations: the core developers could learn from the others or correct a mistake as well.

I will read this paper and respond later; I'm going to head off now to celebrate the new year. I wish everyone here a happy new year. Smiley

Happy New Year to everyone. Smiley Don't drink and drive, and if you can avoid the roads by staying over with the host/hostess of the party, then take the opportunity.
hero member
Activity: 546
Merit: 500
Bitfury's paper here:

http://bitfury.com/content/4-white-papers-research/block-size-1.1.1.pdf

"The table contains an estimate of how many full nodes would no longer function without hard-ware upgrades as average block size is increased. These estimates are based on the assumption that many users run full nodes on consumer-grade hardware, whether on personal computers or in the
cloud. Characteristics of node hardware are based on a survey performed by Steam [19]; we assume PC gamers and Bitcoin enthusiasts have a similar amount of resources dedicated to their hardware.

The exception is RAM: we assume that a typical computer supporting a node has no less than 3 GB RAM as a node requires at least 2GB RAM to run with margin[15]. For example,if block size increases to 2 MB, a node would need to dedicate 8 GB RAM to the Bitcoin client, while more than a half of PCs
in the survey have less RAM."

Based on this estimate, raising the block size to 4MB would drop 75% of the nodes from the network
I will read this paper and respond later; I'm going to head off now to celebrate the new year. I wish everyone here a happy new year. Smiley
hero member
Activity: 546
Merit: 500
Quote from: Satoshi Nakamoto
While I don't think Bitcoin is practical for smaller micropayments right now, it will eventually be as storage and bandwidth costs continue to fall.  If Bitcoin catches on on a big scale, it may already be the case by that time.  Another way they can become more practical is if I implement client-only mode and the number of network nodes consolidates into a smaller number of professional server farms.  Whatever size micropayments you need will eventually be practical.  I think in 5 or 10 years, the bandwidth and storage will seem trivial.
Quote from: Satoshi Nakamoto
Long before the network gets anywhere near as large as that, it would be safe for users to use Simplified Payment Verification (section 8) to check for double spending, which only requires having the chain of block headers, or about 12KB per day.  Only people trying to create new coins would need to run network nodes.  At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware.
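(As a sanity check on the 12KB figure: block headers are 80 bytes and blocks arrive roughly every ten minutes, so an SPV client needs about 144 blocks/day × 80 bytes ≈ 11.5 KB of headers per day, on the order of 4 MB per year.)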
Quote from: Satoshi Nakamoto
The eventual solution will be to not care how big it gets.
Quote from: Satoshi Nakamoto
But for now, while it’s still small, it’s nice to keep it small so new users can get going faster. When I eventually implement client-only mode, that won’t matter much anymore.
Quote from: Satoshi Nakamoto
The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user runs their own NNTP server. The design supports letting users just be users.
Quote from: Satoshi Nakamoto
It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.
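For illustration only, here is a minimal C++-style sketch of how such a height-gated rule might look; the constants, values, and function name are hypothetical, not Satoshi's actual patch:

    // Hypothetical sketch of a phased-in block size increase. Nodes ship
    // with this rule well before the trigger height, so versions without
    // it are already obsolete by the time it activates.
    static const int TRIGGER_HEIGHT = 115000;       // height from the quote above
    static const unsigned int OLD_LIMIT = 1000000;  // assumed 1 MB limit
    static const unsigned int NEW_LIMIT = 8000000;  // assumed larger limit

    unsigned int GetMaxBlockSize(int nBlockHeight)
    {
        // Before the trigger height the old consensus limit applies;
        // afterwards every upgraded node enforces the larger one.
        if (nBlockHeight > TRIGGER_HEIGHT)
            return NEW_LIMIT;
        return OLD_LIMIT;
    }

Since every upgraded node enforces the same rule at the same height, the change activates as a coordinated hard fork rather than a per-node option.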
Quote from: Satoshi Nakamoto
The threshold can easily be changed in the future.  We can decide to increase it when the time comes.  It's a good idea to keep it lower as a circuit breaker and increase it as needed.  If we hit the threshold now, it would almost certainly be some kind of flood and not actual use.  Keeping the threshold lower would help limit the amount of wasted disk space in that event.
Quote from: Satoshi Nakamoto
Bitcoin users might get increasingly tyrannical about limiting the size of the chain so it's easy for lots of users and small devices.
hero member
Activity: 546
Merit: 500
Can someone explain to me how not raising the block size limit is a good thing? All transactions should be able to go through in a semi-timely manner, and if the limit is reached in a block then that transaction is just cancelled? That doesn't make sense to me; I don't see a good reason NOT to switch to Bitcoin XT. Secondly, why wasn't this thought of originally in the making of Bitcoin? Seems odd.
From the beginning the block size limit has worked as a spam filter (even this year it successfully resisted the spam attacks by coinwallet.eu during July and September). And now people have realized it can also work as a means to prevent centralization: as long as blocks are small, average people with a little IT knowledge can run a full node at home, thus increasing the level of decentralization.
In the schedule outlined in BIP101 the majority of people would still be able to run full nodes out of their homes. We now also have Bitcoin Unlimited, which can be seen as a more conservative approach with regard to the blocksize.

This has nothing to do with transaction capacity; it is a life or death question.
The way you are phrasing this is hyperbole. You claim that this is a life or death question. Can you seriously claim that an increase to two megabytes would destroy Bitcoin? Especially as our technology improves, this limit becomes more and more arbitrary. Just increasing the limit does not actually increase the blocksize, as the history of Bitcoin and the altcoins proves. It would also make these types of spam attacks much more expensive and less effective.

This has everything to do with transaction capacity. The blocksize limit as an anti-spam measure was meant to be set at a much higher level than the actual level of transactions; those were the conditions under which it was set up. If we decide to use the blocksize limit to block the stream of transactions, then it becomes a tool of economic policy, and in the case of Core it is centralized economic planning. It would be better to allow the free market to determine the size of the blocks instead.

If the blocks are huge and average people cannot run a node, so that nodes all run in large data centers, then a couple of phone calls to ISPs could disable the Bitcoin network. By making blocks small and portable, you can run a node on almost any device, making it unlikely that anyone can disable the Bitcoin network unless they shut down the whole internet
This was never the intention or the security model that was intended for Bitcoin. Full nodes are destined to be run on servers and desktop computers, and I do not see anything wrong with that. Furthermore, if we had billions of people using Bitcoin, which such large blocksizes would imply, then we would also have hundreds of thousands of nodes running in data centers across the world in different jurisdictions. I would consider this to be highly decentralized and desirable; this was always the intention and design plan of Bitcoin. Security through mass adoption, not obscurity.

The original vision of Satoshi was that the majority of users would not be running a full node; he had thought about the problems of scaling and what the solutions should be. He said that we should allow it to grow as big as it needed to be, since doing otherwise would effectively block the stream of transactions, hurting adoption. Pushing transactions off chain is not a solution to scaling up the main Bitcoin blockchain itself, not to mention the terrible user experience it would offer compared to just using Bitcoin directly. Bitcoin needs a high volume of transactions to pay for its security while maintaining the low fees which promote adoption, which in turn increases its security.

Quote from: Satoshi Nakamoto
I’m sure that in 20 years there will either be very large transaction volume or no volume.
legendary
Activity: 994
Merit: 1035
Thank you, I appreciate your criticism. I am sorry if I came across a bit strong; I am too used to dealing with some of the trolls on this thread. Your observations are accurate, and we can assume shortcuts for profit and mistakes will be made despite the miners' best intentions. I even suspect that some of these problems we will need to learn the hard way. The nice thing about BU is that we could all decide on a two megabyte limit, for instance, and increase it again when required, allowing us more time to understand and further analyze its effects.

I am really sincere and appreciate your hard work.

I am working on a project now that will back up my words of support with real action, showing I am truly sincere about supporting both the core devs and other implementations. Let us all rise above attacking each other and bikeshedding.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
But the simulation by Bitfury already indicated that we would have severe performance problems with 4MB blocks on an average home computer

I'd like to see that report. Got a link?

With only several thousand full nodes running now, it seems to me that the 'average home computer' is not the limiting issue.
I don't remember exactly, but just google it; there was also another statistic presented by Mark at the Montreal conference showing that even a 1MB block can take over 30s to verify on a mining node
Today we have the mining relay network; blocks are relayed in milliseconds. What you are saying is factually incorrect.
As I understand it, Matt Corallo's bitcoin relay network is a private company, with a similar "a phone call could shut it down" risk
That is the free market coming up with solutions on its own; if it did get shut down, another one could simply be set up.
Not when you are already heavily dependent on that one; you cannot come up with a replacement overnight
The Bitcoin relay network is also open source. Smiley

It does not help if you don't have servers at the major internet backbones. And if you do, those servers can be censored in a similar way
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
Bitfury's paper here:

http://bitfury.com/content/4-white-papers-research/block-size-1.1.1.pdf

"The table contains an estimate of how many full nodes would no longer function without hard-ware upgrades as average block size is increased. These estimates are based on the assumption that many users run full nodes on consumer-grade hardware, whether on personal computers or in the
cloud. Characteristics of node hardware are based on a survey performed by Steam [19]; we assume PC gamers and Bitcoin enthusiasts have a similar amount of resources dedicated to their hardware.

The exception is RAM: we assume that a typical computer supporting a node has no less than 3 GB RAM as a node requires at least 2GB RAM to run with margin[15]. For example,if block size increases to 2 MB, a node would need to dedicate 8 GB RAM to the Bitcoin client, while more than a half of PCs
in the survey have less RAM."

Based on this estimate, raising the block size to 4MB would drop 75% of the nodes from the network

hero member
Activity: 546
Merit: 500
But the simulation by Bitfury already indicated that we would have severe performance problems with 4MB blocks on an average home computer

I'd like to see that report. Got a link?

With only several thousand full nodes running now, it seems to me that the 'average home computer' is not the limiting issue.
I don't remember exactly, but just google it; there was also another statistic presented by Mark at the Montreal conference showing that even a 1MB block can take over 30s to verify on a mining node
Today we have the mining relay network; blocks are relayed in milliseconds. What you are saying is factually incorrect.
As I understand it, Matt Corallo's bitcoin relay network is a private company, with a similar "a phone call could shut it down" risk
That is the free market coming up with solutions on its own; if it did get shut down, another one could simply be set up.
Not when you are already heavily dependent on that one; you cannot come up with a replacement overnight
The Bitcoin relay network is also open source. Smiley
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
But the simulation by Bitfury already indicated that we would have severe performance problems with 4MB blocks on an average home computer

I'd like to see that report. Got a link?

With only several thousand full nodes running now, it seems to me that the 'average home computer' is not the limiting issue.
I don't remember exactly, but just google it; there was also another statistic presented by Mark at the Montreal conference showing that even a 1MB block can take over 30s to verify on a mining node
Today we have the mining relay network; blocks are relayed in milliseconds. What you are saying is factually incorrect.
As I understand it, Matt Corallo's bitcoin relay network is a private company, with a similar "a phone call could shut it down" risk
That is the free market coming up with solutions on its own; if it did get shut down, another one could simply be set up.

Not when you are already heavily dependent on that one; you cannot come up with a replacement overnight
hero member
Activity: 546
Merit: 500
What you are saying here is completely factually inaccurate; the number of transactions does not increase hashing time. Furthermore, you are assuming that miners are irrational and malicious, which is flawed. If the miners collectively wanted to destroy the Bitcoin network they could already do that now. It is the underlying game theory and economics of Bitcoin that prevents this from happening in the first place.

It's the number of inputs that can lead to long verification times. I cited an example.

Furthermore, you are assuming that miners are irrational and malicious, which is flawed. If the miners collectively wanted to destroy the Bitcoin network they could already do that now. It is the underlying game theory and economics of Bitcoin that prevents this from happening in the first place.

I am not assuming miners are irrational or malicious either. First, we should prepare and secure ourselves for this possibility, but let's assume for a moment that a majority of the hashing power has and will continue to have the best of intentions. We can still expect shortcuts for profit, mistakes, and ignorance from miners despite their intentions. Case in point -- SPV mining.

P.S... I really appreciate your efforts with Bitcoin Unlimited and hope that more developers test and review the code. Having multiple implementations is extremely valuable to our ecosystem.
Thank you, I appreciate your criticism. I am sorry if I came across a bit strong; I am too used to dealing with some of the trolls on this thread. Your observations are accurate, and we can assume shortcuts for profit and mistakes will be made despite the miners' best intentions. I even suspect that some of these problems we will need to learn the hard way. The nice thing about Bitcoin Unlimited is that we could all decide on a two megabyte limit, for instance, and increase it again when required, allowing us more time to understand and further analyze its effects.
hero member
Activity: 546
Merit: 500
Warning: Confirmed Gavinista

 
Quote
code to expect the worst and hostile intent, especially for bitcoin which has many extremely powerful adversaries

You are correct on that one. Look at the DDoS attacks on XT. I suppose it's down to who you think your adversaries are, eh?
hero member
Activity: 546
Merit: 500
But the simulation by Bitfury already indicated that we would have severe performance problems with 4MB blocks on an average home computer

I'd like to see that report. Got a link?

With only several thousand full nodes running now, it seems to me that the 'average home computer' is not the limiting issue.
I don't remember exactly, but just google it; there was also another statistic presented by Mark at the Montreal conference showing that even a 1MB block can take over 30s to verify on a mining node
Today we have the mining relay network; blocks are relayed in milliseconds. What you are saying is factually incorrect.
As I understand it, Matt Corallo's bitcoin relay network is a private company, with a similar "a phone call could shut it down" risk
That is the free market coming up with solutions on its own; if it did get shut down, another one could simply be set up.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
But the simulation by Bitfury already indicated that we would have severe performance problems with 4MB blocks on an average home computer

I'd like to see that report. Got a link?

With only several thousand full nodes running now, it seems to me that the 'average home computer' is not the limiting issue.
I don't remember exactly, but just google it; there was also another statistic presented by Mark at the Montreal conference showing that even a 1MB block can take over 30s to verify on a mining node
Today we have the mining relay network; blocks are relayed in milliseconds. What you are saying is factually incorrect.

As I understand it, Matt Corallo's bitcoin relay network is a private company, with a similar "a phone call could shut it down" risk
legendary
Activity: 994
Merit: 1035
What you are saying here is completely factually inaccurate; the number of transactions does not increase hashing time. Furthermore, you are assuming that miners are irrational and malicious, which is flawed. If the miners collectively wanted to destroy the Bitcoin network they could already do that now. It is the underlying game theory and economics of Bitcoin that prevents this from happening in the first place.

It's the number of inputs that can lead to long verification times. I cited an example.

Furthermore, you are assuming that miners are irrational and malicious, which is flawed. If the miners collectively wanted to destroy the Bitcoin network they could already do that now. It is the underlying game theory and economics of Bitcoin that prevents this from happening in the first place.

I am not assuming miners are irrational or malicious either. First, we should prepare and secure ourselves for this possibility, but let's assume for a moment that a majority of the hashing power has and will continue to have the best of intentions. We can still expect shortcuts for profit, mistakes, and ignorance from miners despite their intentions. Case in point -- SPV mining.


P.S... I really appreciate your efforts with Bitcoin Unlimited and hope that more developers test and review the code. Having multiple implementations is extremely valuable to our ecosystem.
hero member
Activity: 546
Merit: 500
Quote from: rusty
This problem is far worse if blocks were 8MB: an 8MB transaction with 22,500 inputs and 3.95MB of outputs takes over 11 minutes to hash. If you can mine one of those, you can keep competitors off your heels forever, and own the bitcoin network… Well, probably not.  But there’d be a lot of emergency patching, forking and screaming…
And this is with the initial optimizations to speed up verification already completed.
This means that if we hardforked a 2MB MaxBlockSize increase on the main tree and we softforked/hardforked in SepSig, we would essentially have up to an 8MB limit (3.5MB to 8MB), in which an attack vector could be opened up with heavy-input and multisig tx which would crash nodes.
What you are saying here is completely factually inaccurate; the number of transactions does not increase hashing time. Furthermore, you are assuming that miners are irrational and malicious, which is flawed. If the miners collectively wanted to destroy the Bitcoin network they could already do that now. It is the underlying game theory and economics of Bitcoin that prevents this from happening in the first place.

If a miner decided to attack the network by creating excessively big blocks, then other miners can simply choose to orphan those blocks and not build on them. Bitcoin Unlimited already has this built into the client as a preventative measure for such a scenario, which is a highly unlikely attack vector that is furthermore easily countered (a rough sketch of the mechanism follows after the links). I have written more extensively about this issue here:

https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-203#post-7395
https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-208#post-7550
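A rough sketch of the mechanism, in the spirit of Bitcoin Unlimited's excessive-block and acceptance-depth settings (all names and values here are illustrative, not the actual BU code):

    // Hypothetical sketch of an excessive-block policy. A node refuses to
    // extend a chain tip containing an oversized block until enough work
    // has been piled on top of it, at which point it concedes and follows
    // the longest chain.
    static const unsigned int EXCESSIVE_BLOCK_SIZE = 2000000; // assumed 2 MB
    static const int ACCEPT_DEPTH = 4; // confirmations required on top

    bool ShouldBuildOnBlock(unsigned int nBlockSize, int nDepthSinceBlock)
    {
        if (nBlockSize <= EXCESSIVE_BLOCK_SIZE)
            return true; // normal-sized block, accept immediately
        // Oversized: only accept once it is buried ACCEPT_DEPTH deep.
        return nDepthSinceBlock >= ACCEPT_DEPTH;
    }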
legendary
Activity: 994
Merit: 1035
Quote from: rusty
This problem is far worse if blocks were 8MB: an 8MB transaction with 22,500 inputs and 3.95MB of outputs takes over 11 minutes to hash. If you can mine one of those, you can keep competitors off your heels forever, and own the bitcoin network… Well, probably not.  But there’d be a lot of emergency patching, forking and screaming…

And this is assuming the initial optimizations to speed up verification are completed!
This means that if we hardforked a 2MB MaxBlockSize increase on the main tree and we softforked/hardforked in SepSig, we would essentially have up to an 8MB limit (3.5MB to 8MB), in which an attack vector could be opened up with heavy-input and multisig tx which would crash nodes.

These are edge cases... but edge cases are what attackers use to disrupt the network.

Remember we have to design code to expect the worst and hostile intent, especially for bitcoin which has many extremely powerful adversaries. This is why I have a nuanced view of simultaneously supporting multiple implementations, the conservative approach from the core devs, and eventually increasing the block limit.  
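These edge cases blow up because, with the legacy signature-hashing scheme, each input re-hashes roughly the whole transaction, so total hashing work grows quadratically with transaction size. A back-of-envelope sketch with an assumed hash throughput reproduces rusty's figure:

    #include <cstdio>

    int main()
    {
        // Rough model of legacy (pre-segwit) sighash cost: every input
        // hashes approximately the full serialized transaction.
        const double tx_bytes   = 8e6;    // 8 MB transaction (rusty's example)
        const double num_inputs = 22500;  // inputs in that transaction
        const double hash_rate  = 250e6;  // assumed ~250 MB/s double-SHA256

        const double bytes_hashed = tx_bytes * num_inputs; // ~180 GB total
        const double seconds = bytes_hashed / hash_rate;   // ~720 s
        printf("~%.0f GB hashed, ~%.0f minutes\n",
               bytes_hashed / 1e9, seconds / 60.0);
        return 0;
    }

Capping transaction size, or changing the signature-hashing algorithm so each input only hashes a bounded amount of data, removes the quadratic term; that is the class of fix being discussed.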
hero member
Activity: 546
Merit: 500
But the simulation by Bitfury already indicated that we would have severe performance problems with 4MB blocks on an average home computer
I'd like to see that report. Got a link?

With only several thousand full nodes running now, it seems to me that the 'average home computer' is not the limiting issue.
The primary limiting technological factor for blocksize today is bandwidth and latency.
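(For a sense of scale: even 8MB blocks average only about 8 MB / 600 s ≈ 13 kB/s of sustained bandwidth; the binding constraint is the burst of relay and validation work when a block arrives, since propagation delay drives orphan rates.)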
hero member
Activity: 546
Merit: 500
Warning: Confirmed Gavinista
But the simulation by Bitfury already indicated that we would have severe performance problems with 4MB blocks on an average home computer

I'd like to see that report. Got a link?

With only several thousand full nodes running now, it seems to me that the 'average home computer' is not the limiting issue.

This is the data you are looking for -
https://rusty.ozlabs.org/?p=522

Here is the simulation test code -
https://gist.github.com/rustyrussell/9c3c4bf3127419bd3f1d



This is an example of a tx that can push validation times to their limit and potentially crash nodes--

https://www.blocktrail.com/BTC/tx/bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08


There are solutions to this problem, and that is principally why the core devs want to roll those out first ... before increasing the blocksize limit on the main tree.


Wasn't that F2Pool's own transaction to hoover up the brainwallet dust? It's a pretty extreme boundary case, but surely the number of inputs (and the associated resources tied up) is the issue, not the size of the transaction per se. That's why it got processed with no fees - normally these wouldn't be touched.

edit: was brainwallet, not coinwallet