
Topic: Are we stress testing again? - page 2. (Read 33190 times)

legendary
Activity: 1001
Merit: 1005
January 07, 2016, 10:32:09 PM
We have been seeing around 8-10k unconfirmed transactions for the last few days. It seems this is the new normal.
legendary
Activity: 1442
Merit: 1001
November 17, 2015, 06:23:00 PM
The stress testing led to my old core client crashing multiple times, so I replaced it with XT, then 11.1, then adjusted some parameters, and now 11.2 -- life is good.

The 1MB block size is artificially small.  Raising it immediately (or rather, as soon as is reasonable) to the maximum handled by the API (32MB?) and then getting to work on exceeding that with segmented blocks is really the only thing to do.  Propagating headers fast and the data afterward seems a viable approach to me.  If any node can't keep up, then cut it loose.  Just one man's opinion.

Until Bitcoin is run on Google's datacenters?

There is no particular negative to bitcoin nodes running on Google, Microsoft, AWS, Digital Ocean, Mom & Pop Hosting Company XYZ, etc. Already today, those without decent resources are incapable of running a full bitcoin node - at least 50% of the planet does not have a recent computer, reliable electricity, or a broadband connection. Even if they do, they can't dedicate their resources to running a node, and they'll need to use an SPV wallet or a Coinbase-like entity.

Many users will contribute by using hosted services, of which there are thousands of companies worldwide. While many people seem to think that there's a tremendous difference between hosting a bitcoin node at their residence vs. in the cloud, the truth is that the difference is minimal. All users ultimately rely upon their ISPs, few use Tor or proxies, and the greatest decentralization weapon will be mass adoption. Regulators have a much harder time banning something that has taken hold and is widely used.

If Google wants to host a node or provide a CDN for the initial seed, bring it on.

That was not my point.

More like: until Bitcoin can only be run from Google's datacenters and every other individual relies on SPV.

Sure - but literally everyone agrees that this isn't the goal. Even the large blockers state that the minimum spec target for running a Bitcoin node is readily available consumer-grade hardware on reasonable residential bandwidth. Not everyone can meet this goal today, and the same will hold true in years to come - running a full bitcoin node is not for everyone.
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
November 17, 2015, 05:00:01 PM
The stress testing led to my old core client crashing multiple times, so I replaced it with XT, then 11.1, then adjusted some parameters, and now 11.2 -- life is good.

The 1MB block size is artificially small.  Raising it immediately (or rather, as soon as is reasonable) to the maximum handled by the API (32MB?) and then getting to work on exceeding that with segmented blocks is really the only thing to do.  Propagating headers fast and the data afterward seems a viable approach to me.  If any node can't keep up, then cut it loose.  Just one man's opinion.

Until Bitcoin is run on Google's datacenters?

There is no particular negative to bitcoin nodes running on Google, Microsoft, AWS, Digital Ocean, Mom & Pop Hosting Company XYZ, etc. Already today, those without decent resources are incapable of running a full bitcoin node - at least 50% of the planet does not have a recent computer, reliable electricity, or a broadband connection. Even if they do, they can't dedicate their resources to running a node, and they'll need to use an SPV wallet or a Coinbase-like entity.

Many users will contribute by using hosted services, of which there are thousands of companies worldwide. While many people seem to think that there's a tremendous difference between hosting a bitcoin node at their residence vs. in the cloud, the truth is that the difference is minimal. All users ultimately rely upon their ISPs, few use Tor or proxies, and the greatest decentralization weapon will be mass adoption. Regulators have a much harder time banning something that has taken hold and is widely used.

If Google wants to host a node or provide a CDN for the initial seed, bring it on.

That was not my point.

More like: until Bitcoin can only be run from Google's datacenters and every other individual relies on SPV.
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
November 17, 2015, 04:58:53 PM
32MB/10min = 447Kb/s.

If only it were all so simple  Undecided

I see you run a node, so surely you are aware that nodes typically have more than one peer.
legendary
Activity: 1442
Merit: 1001
November 17, 2015, 04:44:31 PM
The stress testing led to my old core client crashing multiple times, so I replaced it with XT, then 11.1, then adjusted some parameters, and now 11.2 -- life is good.

The 1MB block size is artificially small.  Raising it immediately (or rather, as soon as is reasonable) to the maximum handled by the API (32MB?) and then getting to work on exceeding that with segmented blocks is really the only thing to do.  Propagating headers fast and the data afterward seems a viable approach to me.  If any node can't keep up, then cut it loose.  Just one man's opinion.

Until Bitcoin is run on Google's datacenters?

There is no particular negative to bitcoin nodes running on Google, Microsoft, AWS, Digital Ocean, Mom & Pop Hosting Company XYZ, etc. Already today, those without decent resources are incapable of running a full bitcoin node - at least 50% of the planet does not have a recent computer, reliable electricity, or a broadband connection. Even if they do, they can't dedicate their resources to running a node, and they'll need to use an SPV wallet or a Coinbase-like entity.

Many users will contribute by using hosted services, of which there are thousands of companies worldwide. While many people seem to think that there's a tremendous difference between hosting a bitcoin node at their residence vs. in the cloud, the truth is that the difference is minimal. All users ultimately rely upon their ISPs, few use Tor or proxies, and the greatest decentralization weapon will be mass adoption. Regulators have a much harder time banning something that has taken hold and is widely used.

If Google wants to host a node or provide a CDN for the initial seed, bring it on.
hero member
Activity: 709
Merit: 503
November 17, 2015, 04:25:53 PM
The stress testing led to my old core client crashing multiple times, so I replaced it with XT, then 11.1, then adjusted some parameters, and now 11.2 -- life is good.

The 1MB block size is artificially small.  Raising it immediately (or rather, as soon as is reasonable) to the maximum handled by the API (32MB?) and then getting to work on exceeding that with segmented blocks is really the only thing to do.  Propagating headers fast and the data afterward seems a viable approach to me.  If any node can't keep up, then cut it loose.  Just one man's opinion.
Until Bitcoin is run on Google's datacenters?
Increasing the block size will *not* cause a flood of full blocks; there aren't enough transactions in the memory pool.  Heck, plenty of 1MB blocks aren't even full right now.  Even if *every* block were full, that's only 1MB/10min = 8.4Mb/10min ≈ 839Kb/min ≈ 14Kb/s; compare that to the 50Mb/s that plenty of folks have right now.  Even if every 32MB block were full (that'd be at least 32,000 transactions per block), that's *only* 32MB/10min ≈ 447Kb/s.  I fail to see how cutting loose any (if any) modem-connected full nodes is a serious issue.

Or are you worried about disk space?  144 (one day's worth) of 32MB blocks is *only* 4.5GB/day, or about 1.6TB/year.

It will *not* take a large system and Internet link to run a full node even at 32MB blocks (even when they are all full).
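
Spelling out that arithmetic (a rough sketch in Python; note it assumes binary megabytes and a single peer -- upload scales with the number of peers you relay to):

Code:
# Back-of-the-envelope bandwidth/storage for full 32MB blocks.
# Assumes binary units (1MB = 2**20 bytes) and a SINGLE peer; relaying
# to N peers multiplies the upload figure by roughly N.
BLOCK_MB = 32
BLOCK_INTERVAL_S = 600                  # one block per ~10 minutes
BLOCKS_PER_DAY = 24 * 6                 # 144

bits_per_block = BLOCK_MB * 2**20 * 8
print("bandwidth: %.0f Kb/s" % (bits_per_block / BLOCK_INTERVAL_S / 1000.0))  # ~447
print("disk/day:  %.1f GB" % (BLOCK_MB * BLOCKS_PER_DAY / 1024.0))            # ~4.5
print("disk/year: %.2f TB" % (BLOCK_MB * BLOCKS_PER_DAY * 365 / 1024.0**2))   # ~1.60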

People are scared of what they don't understand.
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
November 17, 2015, 03:54:29 PM
The stress testing led to my old core client crashing multiple times, so I replaced it with XT, then 11.1, then adjusted some parameters, and now 11.2 -- life is good.

The 1MB block size is artificially small.  Raising it immediately (or rather, as soon as is reasonable) to the maximum handled by the API (32MB?) and then getting to work on exceeding that with segmented blocks is really the only thing to do.  Propagating headers fast and the data afterward seems a viable approach to me.  If any node can't keep up, then cut it loose.  Just one man's opinion.

Until Bitcoin is run on Google's datacenters?
hero member
Activity: 709
Merit: 503
November 17, 2015, 03:46:31 PM
The stress testing led to my old core client crashing multiple times, so I replaced it with XT, then 11.1, then adjusted some parameters, and now 11.2 -- life is good.

The 1MB block size is artificially small.  Raising it immediately (or rather, as soon as is reasonable) to the maximum handled by the API (32MB?) and then getting to work on exceeding that with segmented blocks is really the only thing to do.  Propagating headers fast and the data afterward seems a viable approach to me.  If any node can't keep up, then cut it loose.  Just one man's opinion.
legendary
Activity: 1386
Merit: 1009
November 17, 2015, 02:33:46 AM
So I lost some money yesterday because my legit transactions, despite sufficient fees, hung unconfirmed: there were too many legit transactions for them to get included. Great outlook on the future if the 1-megabyte blocks remain.
How on earth do you manage to run into this? Blocks are regularly underfilled, the transaction rate is ~2 tps, and the average block size is 0.5 MB. Your transaction should've been included quite fast. I guess you simply have wallet problems, e.g. improper fees, a non-standard tx, or something of that kind that prevents your txs from being included in a block.

Then you probably did not observe the number of legit transactions at the time when the bitcoin price shot up. The same goes for when it crashed again.

And no, transactions work fine again now that the price has stabilized. The fees were the same all along.

I mean, do you really expect the normal number of transactions to stay the same when the value of bitcoin is rising many percent per day? No, you can practically count on the number of transactions becoming a lot larger.
I wonder what fees you paid then. I did notice the explosion of tx volume, but it was still manageable. I did a couple of transactions with the help of Core's fee estimation during the run-up; they all got confirmed in the next block.
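
Core's estimator is more elaborate, but the basic idea is easy to sketch: look at the feerates of recently confirmed transactions and suggest a high percentile (toy code with made-up numbers, not Core's actual algorithm):

Code:
# Toy fee estimator -- NOT Core's actual algorithm, just the basic idea:
# sample the feerates (say, satoshis/kB) of txs confirmed in recent
# blocks and suggest a high percentile for next-block confirmation.
def estimate_feerate(recent_blocks, percentile=0.9):
    rates = sorted(rate for block in recent_blocks for rate in block)
    if not rates:
        return None                     # no data, no estimate
    return rates[int(percentile * (len(rates) - 1))]

blocks = [[1000, 5000, 12000], [3000, 8000], [2000, 2000, 40000]]  # made up
print(estimate_feerate(blocks))         # -> 12000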
hero member
Activity: 910
Merit: 1003
November 16, 2015, 09:13:41 PM
During the multiple stress tests, I made many transactions with low (delayed 1-2 blocks), medium (delayed 0-1 blocks), and high fees (no delay).

No problem at all.

That's why I suspect that many members send transactions without any fees all the time.

The stress tests of recent months were not designed to block transactions, but only to create huge backlogs of unconfirmed transactions.  Ostensibly, their main goal was to stress the memory pools and other software that depended on them (such as statistics sites, businesses that accept 0-conf payments by checking the queues, etc.).  We can tell because the fee F0 paid by the tester's transactions generally was the minimum fee, or slightly above it.  Thus, most of the time, only transactions that paid F0 or less in fees were affected.  Transactions that paid slightly more than F0 went through as if the backlogs weren't there.

For that reason, the stress tests were not realistic previews of the "fee market".  In the latter, instead of competing with a stingy stress tester, you will be competing with other clients like you, who are using the same "smart" wallet that you use, many of them hoping to get their transactions ahead of yours in the queue.    

The stress tests were also not realistic previews of malicious "spam attacks", in which the attacker keeps raising his fees so as to keep some fraction of the legitimate traffic (say, 50%) perpetually off the blockchain.
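
A toy model of the F0 effect described above (made-up numbers; real miners sort by fee *rate* and account for transaction sizes): if miners fill each block highest-fee-first, a flood of transactions all paying a flat F0 only displaces transactions paying F0 or less.

Code:
# Toy mempool: miners take the highest-fee transactions first, so spam
# paying a flat fee F0 only displaces transactions paying F0 or less.
import heapq

F0 = 10                                 # the stress tester's flat fee
spam = [(F0, "spam%d" % i) for i in range(5000)]
legit = [(5, "below_F0"), (F0, "at_F0"), (11, "just_above_F0"), (20, "well_above_F0")]
BLOCK_CAPACITY = 2000                   # txs per block (toy number)

# Max-heap by fee; ties broken by arrival order (the spam arrived first).
pool = [(-fee, i, label) for i, (fee, label) in enumerate(spam + legit)]
heapq.heapify(pool)
block = {heapq.heappop(pool)[2] for _ in range(BLOCK_CAPACITY)}

for fee, label in legit:
    print(label, "included" if label in block else "waiting")
# just_above_F0 and well_above_F0 get in; at_F0 and below_F0 are stuck
# behind the flood until it clears.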
legendary
Activity: 1512
Merit: 1012
November 16, 2015, 08:06:14 PM
During the multiple stress tests, I made many transactions with low (delayed 1-2 blocks), medium (delayed 0-1 blocks), and high fees (no delay).

No problem at all.

That's why I suspect that many members send transactions without any fees all the time.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
November 16, 2015, 07:42:58 PM
That's why you have 3 levels of fees in an Android wallet ... and 10 levels of fees in the Bitcoin Core client.


You cannot send a transaction WITHOUT the right fees ... unless you are a complete noob!
But you want to send the transaction without fees ... and you're fucked.

Deal with it.
Every transaction will get confirmed eventually; the question is just whether you are willing to wait 3 months for a confirmation. But I think so as well: if you send a transaction without fees, it's completely your own fault. Cryptocurrency has a small learning curve, and if you aren't willing to go through that, well, don't use cryptos then. However, the next generation will get this shit with ease.

Wasn't there an update that makes transactions that were not confirmed within a certain timeframe vanish from the mempool? That would mean such transactions would at least no longer be in limbo. But they would not confirm after months either.

And if we keep 1MB blocks, then unconfirmed transactions will be normal, even with fees, since other users might simply have chosen a slightly higher fee than you, and you can already lose out. Jorge described it well.
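
(For reference: Core was adding a configurable mempool expiry around that time. If I recall correctly, the setting looks roughly like this in bitcoin.conf, with the value in hours -- the option name and the default should be checked against your version's release notes rather than trusted from this sketch:)

Code:
# bitcoin.conf -- sketch of Bitcoin Core's mempool expiry setting; verify
# the option name and the default (hours) for your version.
mempoolexpiry=72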
sr. member
Activity: 574
Merit: 250
In XEM we trust
November 16, 2015, 07:34:30 PM
That's why you have 3 levels of fees in an Android wallet ... and 10 levels of fees in the Bitcoin Core client.


You cannot send a transaction WITHOUT the right fees ... unless you are a complete noob!
But you want to send the transaction without fees ... and you're fucked.

Deal with it.
Every transaction will get confirmed eventually; the question is just whether you are willing to wait 3 months for a confirmation. But I think so as well: if you send a transaction without fees, it's completely your own fault. Cryptocurrency has a small learning curve, and if you aren't willing to go through that, well, don't use cryptos then. However, the next generation will get this shit with ease.
legendary
Activity: 1512
Merit: 1012
November 16, 2015, 07:10:56 PM
That's why you have 3 levels of fees in an Android wallet ... and 10 levels of fees in the Bitcoin Core client.


You cannot send a transaction WITHOUT the right fees ... unless you are a complete noob!
But you want to send the transaction without fees ... and you're fucked.

Deal with it.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
November 16, 2015, 06:56:50 PM
If your wallet is producing valid standard transactions with proper fees, they can be expected to be confirmed with a high degree of certainty for all practical purposes. That changes when blocks are saturated, there's an ever-growing tx backlog, and fee estimation becomes difficult; but we're still far from that point on average, though we may have already experienced it over short periods. I have guessed it's a problem caused by the wallet, but I don't have enough information.

Bitcoin has a fair number of probabilistic variables built into it, and the wallet's job is to account for that fact.

This has been the stock answer of small-blockian core devs to the issue of usability under saturated conditions: "the wallet can be programmed to compute the right fee, and adjust it (with RBF) if needed."

Their faith in computers is moving, but the wallet cannot compute an answer if it does not have the necessary data.  What is the proper fee to get my transaction confirmed in less than 1 hour?  Well, it depends on the fees paid by the transactions that are already in the queue, and by the transactions that will be issued by other clients in the next hour.  Since the latter are running "smart wallets" too --  perhaps the exact same wallet that I am running -- the problem that the wallet has to solve is basically "choose a number that is very likely to be greater than the number that you are going to choose".
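
The self-referential nature of that problem is easy to illustrate (toy dynamics, made-up numbers): if every wallet runs the same estimator and bids slightly above the rate it observes, the going rate simply ratchets upward and nobody gains an edge.

Code:
# The self-referential fee game: every "smart" wallet bids just above
# the going rate it observes, so the observed rate ratchets upward.
going_rate = 10                         # observed queue feerate, made up
for step in range(5):
    bids = [going_rate + 1] * 100       # 100 wallets, same estimator
    going_rate = max(bids)              # which is what they observe next
    print("step %d: going rate %d" % (step, going_rate))
# Prints 11, 12, 13, 14, 15 -- fees climb, relative position unchanged.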

If the network is well below saturation, computing the fee is trivial. If you pay just the minimum fee, your transaction will get confirmed in the next few blocks.  A larger fee will have an effect only if the backlog of unconfirmed transactions becomes greater than 1 MB. This condition may be caused by an extra-long delay since the previous block, or by the miners solving one or more empty blocks due to "SPV mining" and network delays.  As long as the network is not saturated, these backlogs are short-lived (lasting a couple of hours at most) and have a predictable distribution of size, frequency, and duration; and many users will not mind the delays that they cause, so they will use minimum fees anyway. Therefore, if I need to have my transaction confirmed as soon as possible, I need to pay only a few times the minimum fee.  Those who believe in "smart wallets" seem to be thinking about this situation only.

However, if the network becomes saturated, the situation will be very different.  If the average demand T (transactions issued per second) is greater than the effective capacity C of the network (2.4 tx/s), there will be a long-lasting and constantly increasing backlog.  If the daily average T is close to C but still less than C, such persistent "traffic jams" will occur during the part of the day when the traffic is well above average.  In that case the backlog may last for half a day or more.  If the daily average T itself is greater than C, the traffic jam will last forever -- until enough users give up on bitcoin and the daily average T drops below C again.

In both cases, while the current traffic T is greater than C, the backlog will continue growing at the rate T - C.  If and when T drops below C again, the backlog will still persist for a while, and will be cleared at the rate C - T.   In those situations, the frequency and duration of the traffic jams will be highly variable: a slightly larger demand during the peak hours may cause the jam to last several days longer than usual.  
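
That arithmetic is easy to simulate (a sketch with made-up demand figures and C = 2.4 tx/s):

Code:
# The backlog grows at rate T - C while demand T exceeds capacity C,
# and drains at rate C - T afterwards.  Demand figures are made up.
C = 2.4                                 # effective capacity, tx/s

def backlog_over_time(demand_per_hour):
    backlog, history = 0.0, []
    for T in demand_per_hour:           # T: average demand (tx/s) that hour
        backlog = max(0.0, backlog + (T - C) * 3600)
        history.append(backlog)
    return history

day = [3.0] * 6 + [2.0] * 6             # 6 peak hours, then 6 off-peak hours
for hour, b in enumerate(backlog_over_time(day)):
    print("hour %2d: %6.0f txs queued" % (hour, b))
# The jam peaks at 12960 txs; draining at 0.4 tx/s it needs 9 off-peak
# hours to clear, so it spills into the next day.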

In those conditions, choosing the right fee will be impossible.  As explained above, the "fee market" that is expected to develop when the network saturates will be a running semi-blind auction for the N places at the front of the queue, where new bidders come in all the time and those already in the hall may raise their bids unpredictably.  There cannot be an algorithm to compute the fee that will ensure service within X hours, for the same reason that there is no algorithm to pick a winning bid in an auction.

But the small-blockian Core devs obviously do not understand that.

Not to mention that the "fee market" would be a radical change in the way that users are expected to interact with the system.  As bitcoin was designed, and has operated until recently, the user was supposed to prepare the transaction off-line, then connect to a few relay nodes (even just one), send them the transaction, and disconnect from the network again.  That will not be possible once the network gets saturated, or close to saturation.  The wallet will have to connect to several relay nodes before assembling the transaction, in order to get information about the state of the queue.  Since nodes can have very different "spam filters", the wallet cannot trust just one node, but will have to check a few of them and merge the data it gets.  After sending the transaction, the wallet must remain connected to the network until the transaction is confirmed, periodically checking its progress in the queue and replacing it with a higher fee as needed.  The client will have to provide the wallet in advance with parameters for that process (the desired max delay X and the max fee F), and/or be ready to authorize further fee increases.  From the user's viewpoint, this will be a completely different way of using bitcoin.
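
In code, the new workflow would look roughly like this (Python-flavored pseudocode; every helper name here is a hypothetical placeholder, not any real wallet or node API):

Code:
# Sketch of the "smart wallet" workflow described above.
# ALL helper names (merge_views, choose_fee, broadcast, confirmed, bump)
# are hypothetical placeholders.
import time

def send_with_monitoring(tx, relay_nodes, max_fee_F, max_delay_X, poll_s=600):
    # Query several nodes and merge their mempool views, since any one
    # node's "spam filter" may give a skewed picture of the queue.
    queue = merge_views(node.get_queue_state() for node in relay_nodes)
    fee = choose_fee(tx, queue)                     # initial bid
    broadcast(tx.with_fee(fee), relay_nodes)

    deadline = time.time() + max_delay_X
    while not confirmed(tx):                        # must stay online
        time.sleep(poll_s)
        if time.time() > deadline and fee < max_fee_F:
            fee = min(max_fee_F, bump(fee, queue))  # RBF replacement
            broadcast(tx.with_fee(fee), relay_nodes)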

The small-blockian Core devs do not seem to see this as a significant change.  Or even realize that the "fee market", from the client's perspective, will be the most radical change in the system since it was created.

So, Adam, where is the "fee market" BIP?

And they do not seem to be aware of the fact that the fee market will cause a large jump in the internet traffic load on the relay nodes. Once "smart wallets" become the norm, each transaction will require at least one additional client-node access (to get the queue state), possibly several, plus more accesses to monitor its progress.  So the fee market will certainly harm the nodes a lot more than a size limit increase would.

In fact, it seems that the small-blockian Core devs do not want to understand those problems. I have pointed them out several times to several of them, and they just ignored them.

Hence the theory that they want bitcoin to become unusable as a payment system, so that all users are forced to use off-chain solutions...

I did not read everything, but your main point is important. You can't be sure your transaction will be included anymore with too-small blocks, because others compete with you for the same block, and you will never know if your fee has suddenly become too low. The result would be a completely unreliable transaction system. I mean, would you use a bank account where you send money to your grandma and there's maybe a chance it arrives, and a good chance that simply nothing happens? That would be a real failure as a transaction system.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
November 16, 2015, 06:52:36 PM
So I lost some money yesterday because my legit transactions, despite sufficient fees, hung unconfirmed: there were too many legit transactions for them to get included. Great outlook on the future if the 1-megabyte blocks remain.
How on earth do you manage to run into this? Blocks are regularly underfilled, the transaction rate is ~2 tps, and the average block size is 0.5 MB. Your transaction should've been included quite fast. I guess you simply have wallet problems, e.g. improper fees, a non-standard tx, or something of that kind that prevents your txs from being included in a block.

Then you probably did not observe the number of legit transactions at the time when the bitcoin price shot up. The same goes for when it crashed again.

And no, transactions work fine again now that the price has stabilized. The fees were the same all along.

I mean, do you really expect the normal number of transactions to stay the same when the value of bitcoin is rising many percent per day? No, you can practically count on the number of transactions becoming a lot larger.
legendary
Activity: 1386
Merit: 1009
November 15, 2015, 07:43:31 PM
So I lost some money yesterday because my legit transactions, despite sufficient fees, hung unconfirmed: there were too many legit transactions for them to get included. Great outlook on the future if the 1-megabyte blocks remain.
How on earth do you manage to run into this? Blocks are regularly underfilled, the transaction rate is ~2 tps, and the average block size is 0.5 MB. Your transaction should've been included quite fast. I guess you simply have wallet problems, e.g. improper fees, a non-standard tx, or something of that kind that prevents your txs from being included in a block.

It seems like you're blaming the user.  If bitcoin is to be successful it needs to work quickly and reliably for all users, especially non-expert users such as newbies.

It also seems that you don't understand computer systems performance.  Performance begins to degrade noticeably well before capacity limits are reached.  Noticeable degradation and unwanted performance artifacts can appear with loads under 20%.  This is especially true with a system like bitcoin where the service rate depends on the random timing of blocks and where there is priority queuing rather than simple FCFS.
It seems like you're blaming the protocol while I'm blaming the wallet software for this particular problem. I'm not blaming the user.

The latter part, while correct, is not entirely relevant. If your wallet is producing valid standard transactions with proper fees, they can be expected to be confirmed with a high degree of certainty for all practical purposes. That changes when blocks are saturated, there's an ever-growing tx backlog, and fee estimation becomes difficult; but we're still far from that point on average, though we may have already experienced it over short periods. I have guessed it's a problem caused by the wallet, but I don't have enough information.

Bitcoin has a fair number of probabilistic variables built into it, and the wallet's job is to account for that fact. Can the protocol be designed better? Yes, it can, but it's hard to change what we have now. There's still room to improve UX by improving the wallet software, though.

PS. Please stop with your didactic tone. Arrogance is counter-productive.

I thank Jorge for ably demonstrating that my didactic tone was appropriate, given the level of understanding expressed in your posts.
You are apparently not able to comprehend what I'm trying to convey, as is evident from most of your replies to me in this and other topics. Nothing of what you write is new to me (I'm a software engineer, if that matters), nor is it of much value to the discussion. I'd be happy if you put more effort into understanding my posts, or stopped replying to them altogether, if your need to demonstrate your superiority allows that. Still, it's your choice.
sr. member
Activity: 278
Merit: 254
November 14, 2015, 01:08:29 PM
So I lost some money yesterday because my legit transactions, despite sufficient fees, hung unconfirmed: there were too many legit transactions for them to get included. Great outlook on the future if the 1-megabyte blocks remain.
How on earth do you manage to run into this? Blocks are regularly underfilled, the transaction rate is ~2 tps, and the average block size is 0.5 MB. Your transaction should've been included quite fast. I guess you simply have wallet problems, e.g. improper fees, a non-standard tx, or something of that kind that prevents your txs from being included in a block.

It seems like you're blaming the user.  If bitcoin is to be successful it needs to work quickly and reliably for all users, especially non-expert users such as newbies.

It also seems that you don't understand computer systems performance.  Performance begins to degrade noticeably well before capacity limits are reached.  Noticeable degradation and unwanted performance artifacts can appear with loads under 20%.  This is especially true with a system like bitcoin where the service rate depends on the random timing of blocks and where there is priority queuing rather than simple FCFS.
It seems like you're blaming the protocol while I'm blaming the wallet software for this particular problem. I'm not blaming the user.

The latter part, while correct, is not entirely relevant. If your wallet is producing valid standard transactions with proper fees, they can be expected to be confirmed with a high degree of certainty for all practical purposes. That changes when blocks are saturated, there's an ever-growing tx backlog, and fee estimation becomes difficult; but we're still far from that point on average, though we may have already experienced it over short periods. I have guessed it's a problem caused by the wallet, but I don't have enough information.

Bitcoin has a fair number of probabilistic variables built into it, and the wallet's job is to account for that fact. Can the protocol be designed better? Yes, it can, but it's hard to change what we have now. There's still room to improve UX by improving the wallet software, though.

PS. Please stop with your didactic tone. Arrogance is counter-productive.

I thank Jorge for ably demonstrating that my didactic tone was appropriate, given the level of understanding expressed in your posts.
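
To spell out the queueing point with the simplest possible model: in an M/M/1 queue the mean time in the system is W = 1/(mu - lambda), which grows smoothly long before utilization reaches 100%. Bitcoin's batch service at random block times and fee-priority scheduling only make the degradation worse. A quick sketch:

Code:
# Mean time in system for an M/M/1 queue: W = 1 / (mu - lam).
# Bitcoin is not M/M/1 (batch service at random block times, priority
# queuing by fee), but the shape of the curve makes the point.
mu = 2.4                                # service rate, tx/s
for load in (0.1, 0.2, 0.5, 0.8, 0.9, 0.99):
    lam = load * mu
    print("utilization %4.0f%%: mean time %6.1f s" % (load * 100, 1 / (mu - lam)))
# At 20% load the mean time is already 25% above the near-idle value;
# at 90% it is 10x.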

hero member
Activity: 910
Merit: 1003
November 13, 2015, 04:58:05 PM
Jorge, I certainly agree with what you wrote. Note that my answer was shorter and less detailed, but contained essentially the same idea.

OK, but my point is that smart wallets will not be able to cope with the variable fees.   Even if a client pays the fee recommended by the wallet, his transaction may be delayed as much as if he had paid the minimum fee -- and he will not get a refund.

There are several common-sense reasons why no business on this side of the Galaxy sets its prices the way the "fee market" is supposed to work.   I can believe that some of the Core devs do not have that common sense.  But others should see the problems.  It is obvious that they just don't care: they want the network to become unusable.

Quote
It seems to me that Adam Back's proposal of 2-4-8 will be chosen quite soon (early next year), maybe with some modifications.

Is there a BIP for it?  Is there code to test?

Excuse the cynicism, but I see that vague "proposal" as a mere demagogical ruse: it lets Adam pretend that he is a reasonable person, open to increasing the limit to 2 MB before saturation hits -- but in fact he has no intention at all of doing so.

Quote
It's gonna buy us some time to try to get something better than simply increasing the blocksize limit.

I agree that the scalability problem will not be solved by merely lifting the block size limit.  Right now, there is no solution for that problem, not even a believable sketch of one.  

The Lightning Network, even if it were viable, would not be a solution to bitcoin's scaling problem.  It would just be a new payment network that could use bitcoin and/or any other cryptocoin as the occasional settlement layer, but would be totally unlike bitcoin -- in its goals, premises, and design.  After going through a few transactions, the "virtual bitcoins" that LN clients will own and send will be quite remote from their supposed counterparts on the blockchain.  They will come to resemble more and more what libertarians and ancaps call "debt money" (and normal people call just "money").   Saying that LN will let bitcoin scale is like saying that paper checks allowed gold to scale.

An increase to 2 MB in Q1/2016 would be better than nothing, but it would accommodate another year of traffic growth, at best.

Raising the limit to 2 MB or to 20 MB would make no difference until 2017 (except that 20 MB would reduce the risk of a spam attack).  With 2 MB, the blocksize limit issue would return in late 2016.

A large block size limit does not create any significant risk.  When the block size limit was implemented, it was more than 100x the traffic at the time.  Not once in all these 6 years was there an attempt by a miner to harm the system by posting a full 1 MB block.  

As I pointed out, the onset of the fee market itself will immediately create a large extra internet load on the relay nodes, much larger and sudden than the extra load that would result from natural traffic growth beyond 1 MB.  But the Core devs do not seem to care.  (Not surprisingly, because their alleged concern about the load of those nodes was just a bogus excuse.)

But hey, my investment in bitcoin cannot lose value, no matter what happens.  Only investors who hold a positive amount of bitcoin should be concerned...  Grin
legendary
Activity: 1386
Merit: 1009
November 13, 2015, 07:38:23 AM
Jorge, I certainly agree with what you wrote. Note that my answer was shorter and less detailed, but contained essentially the same idea.

It seems to me that Adam Back's proposal of 2-4-8 will be chosen quite soon (early next year), maybe with some modifications. It's gonna buy us some time to try to get something better than simply increasing the blocksize limit.