Topic: Gold collapsing. Bitcoin UP. - page 168. (Read 2032248 times)

legendary
Activity: 1764
Merit: 1002
June 30, 2015, 12:55:39 PM

They cannot handle 11.6 MB of unconfirmed transactions, so how will they handle 20 MB blocks with 200 MB of unconfirmed transactions?

It would be dramatically more expensive to spam the network with 200 MB of transactions.

It will be more expensive with a 1 MB block. The bigger the block, the cheaper the spam.

no, look at the current spam attacks.

they are small tx's paying the minimum fee.  if that spam wants to actually get included in the next block or two, it has to increase the fees it pays, which would cause the attack budget to skyrocket.
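
As a back-of-envelope illustration of that point (my own numbers, not anything measured from the current attack): here is roughly how the attack budget scales once spam has to pay a competitive next-block fee rate instead of the bare relay minimum. Both fee rates below are assumptions.

Code:
# Illustrative only: assumed fee rates, not measured values.
SPAM_BYTES = 200 * 1000 * 1000    # 200 MB of spam transactions
MIN_RELAY_FEE = 0.00001           # BTC per kB, assumed bare-minimum relay fee
NEXT_BLOCK_FEE = 0.0005           # BTC per kB, assumed rate needed to confirm in a block or two

def spam_cost_btc(total_bytes, fee_per_kb):
    """Total BTC spent in fees to broadcast total_bytes of spam at a flat fee rate."""
    return (total_bytes / 1000.0) * fee_per_kb

print(spam_cost_btc(SPAM_BYTES, MIN_RELAY_FEE))   # ~2 BTC just to get relayed
print(spam_cost_btc(SPAM_BYTES, NEXT_BLOCK_FEE))  # ~100 BTC to actually confirm quickly
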
legendary
Activity: 1764
Merit: 1002
June 30, 2015, 12:46:01 PM

If this is a true demand situation based on the recent price movement, then if we have a real bubble move the network is simply going to not be able to handle demand. Increasing fees is not an option because there isn't enough space for everyone regardless of the fees they offer.

What is happening right now shows that P2P nodes can in fact handle larger blocks, they are processing the transaction volume fine and have enough BW to forward transactions. In fact by not clearing transactions in blocks and causing the memory pool to increase beyond what it should, the 1MB limit is probably more stressful on nodes than simply letting larger blocks get processed....

Fantastic!  Some incentive to deal with memory exhaustion (which is hardly an unheard of problem in computer-land.)

If I were doing a system from scratch, one would have to do several things (with analogs taken from the real world.)

  • pay to get in (e.g., a ticket at the festival gate.  Some sort of real infrastructure support as opposed to useless SPV wallet 'support'.)
  • pay to get in the queue (e.g., to get a _chance_ to buy your cheese-dog.)
  • pay at least enough for the raw materials and labor for your cheese-dog when you get through the queue.
  • periodically cleanse the queue of those who, for whatever reason, are not making sufficient progress.

Of course if Bitcoin had been designed like that from the get-go, it would not have gotten to first base.  It needed a pretty face to attract the following of Utopians such as fap.doc and the multitude of other mouth-breathers that got the system going.

Now we are at a point where there can safely be free cheese-dogs for all in the form of sidechains.  That means that the backing store can safely eject the welfare bums and they won't starve to death.  Very humanitarian.



you being a socialist, of course you come up with these bizarre conclusions that disadvantage the masses.

the ideal situation would be that there is NO LIMIT.  this would give miners the incentive to filter spam based on their own internal analyses that the core devs can't possibly ever factor in, and then construct block sizes accordingly with the assistance of user tx fee feedback.  in other words, a free market that gets the core dev apparatchiks out of the way.
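
For what I mean by miners constructing blocks from their own internal analyses, here is a minimal sketch (my own toy policy, not any real miner's software): each miner picks its own fee floor and its own soft size cap, and fills the block greedily by fee rate.

Code:
# Toy per-miner filtering policy; the fee floor and soft cap are assumptions.
def build_block(mempool, fee_floor_sat_per_byte=5, soft_cap_bytes=900000):
    """mempool: list of dicts like {'txid': ..., 'size': bytes, 'fee': satoshis}."""
    # Drop anything this miner considers spam under its own policy.
    candidates = [tx for tx in mempool
                  if tx['fee'] / tx['size'] >= fee_floor_sat_per_byte]
    # Highest fee rate first.
    candidates.sort(key=lambda tx: tx['fee'] / tx['size'], reverse=True)
    block, used = [], 0
    for tx in candidates:
        if used + tx['size'] > soft_cap_bytes:
            continue
        block.append(tx)
        used += tx['size']
    return block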

legendary
Activity: 1764
Merit: 1002
June 30, 2015, 12:42:15 PM

They cannot handle 11.6 MB of unconfirmed transactions, so how will they handle 20 MB blocks with 200 MB of unconfirmed transactions?

It would be dramatically more expensive to spam the network with 200 MB of transactions.

It will be more expensive with a 1 MB block. The bigger the block, the cheaper the spam.

but also way more expensive for Bitcoin's growth prospects, as new users get turned away or fed up.  Cripplecoin, in the long run, fails to the benefit of Blockstream's SC project.

you keep ignoring the fact that miners can filter spam if they want.
legendary
Activity: 1153
Merit: 1000
June 30, 2015, 12:41:30 PM

They cannot handle 11.6 MB of unconfirmed transactions, so how will they handle 20 MB blocks with 200 MB of unconfirmed transactions?

It would be dramatically more expensive to spam the network with 200 MB of transactions.

Those were 1.2GB DRAM nodes, which are very limited VM instances with only a couple hundred MB available for the memory pool.

The poster said that his 2GB and 4GB DRAM nodes were fine, and they should stay fine even through a 200MB backlog of unconfirmed transactions.
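
A rough sketch of why the small VPS instances fall over while the 2-4GB nodes don't (the 3x in-memory overhead factor is my assumption for indexes and bookkeeping, not a measured figure):

Code:
def mempool_ram_mb(backlog_mb, overhead_factor=3.0):
    """Approximate RAM consumed by an unconfirmed-transaction backlog (assumed overhead)."""
    return backlog_mb * overhead_factor

for backlog in (11.6, 50.0, 200.0):
    print("%6.1f MB backlog -> ~%.0f MB of RAM" % (backlog, mempool_ram_mb(backlog)))
# A ~200 MB backlog lands around ~600 MB of RAM under this assumption:
# uncomfortable on a 1.2 GB VPS that also holds the UTXO cache, fine on 2-4 GB boxes.
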
legendary
Activity: 1764
Merit: 1002
June 30, 2015, 12:39:53 PM

They cannot handle 11.6 MB of unconfirmed transactions, so how will they handle 20 MB blocks with 200 MB of unconfirmed transactions?

It would be dramatically more expensive to spam the network with 200 MB of transactions.

plus, miners have the power to choose to process the spam or not.  process it and make lotsa fee income, thus strengthening themselves (which is the last thing a spammer wants), but also risk getting orphaned given today's connectivity rates.  or choose not to, protecting themselves from an orphan, and continue processing small, efficient blocks for maximum propagation.
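
Here is a toy expected-value sketch of that trade-off (illustrative numbers only; the 25 BTC subsidy is current, the delay and fee figures are assumptions): including extra spam earns extra fees, but the added propagation delay raises the orphan risk.

Code:
import math

BLOCK_REWARD = 25.0          # BTC subsidy as of mid-2015
AVG_BLOCK_INTERVAL = 600.0   # seconds between blocks on average

def orphan_prob(extra_delay_s):
    # Chance a competing block is found during the extra propagation delay.
    return 1.0 - math.exp(-extra_delay_s / AVG_BLOCK_INTERVAL)

def marginal_profit(extra_fees_btc, extra_delay_s, base_fees_btc=0.5):
    # Gain from the extra txs minus the expected loss from the added orphan risk
    # (assumes the orphan risk without the extra delay is negligible).
    total_if_won = BLOCK_REWARD + base_fees_btc + extra_fees_btc
    return extra_fees_btc - orphan_prob(extra_delay_s) * total_if_won

print(marginal_profit(extra_fees_btc=0.2, extra_delay_s=2))    # ~ +0.11 BTC: worth including
print(marginal_profit(extra_fees_btc=0.2, extra_delay_s=60))   # ~ -2.2 BTC: not worth it
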
legendary
Activity: 1414
Merit: 1000
June 30, 2015, 12:38:25 PM

They cannot handle 11.6 MB of unconfirmed transactions, so how will they handle 20 MB blocks with 200 MB of unconfirmed transactions?

It would be dramatically more expensive to spam the network with 200 MB of transactions.

It will be more expensive with a 1 MB block. The bigger the block, the cheaper the spam.
legendary
Activity: 1008
Merit: 1000
June 30, 2015, 12:33:37 PM

They cannot handle 11.6 MB of unconfirmed transactions, so how will they handle 20 MB blocks with 200 MB of unconfirmed transactions?

It would be dramatically more expensive to spam the network with 200 MB of transactions.
legendary
Activity: 4690
Merit: 1276
June 30, 2015, 12:32:54 PM

If this is a true demand situation based on the recent price movement, then if we have a real bubble move the network is simply going to not be able to handle demand. Increasing fees is not an option because there isn't enough space for everyone regardless of the fees they offer.

What is happening right now shows that P2P nodes can in fact handle larger blocks, they are processing the transaction volume fine and have enough BW to forward transactions. In fact by not clearing transactions in blocks and causing the memory pool to increase beyond what it should, the 1MB limit is probably more stressful on nodes than simply letting larger blocks get processed....

Fantastic!  Some incentive to deal with memory exhaustion (which is hardly an unheard of problem in computer-land.)

If I were doing a system from scratch, one would have to do several things (with analogs taken from the real world.)

  • pay to get in (e.g., a ticket at the festival gate.  Some sort of real infrastructure support as opposed to useless SPV wallet 'support'.)
  • pay to get in the queue (e.g., to get a _chance_ to buy your cheese-dog.)
  • pay at least enough for the raw materials and labor for your cheese-dog when you get through the queue.
  • periodically cleanse the queue of those who, for whatever reason, are not making sufficient progress.

Of course if Bitcoin had been designed like that from the get-go, it would not have gotten to first base.  It needed a pretty face to attract the following of Utopians such as fap.doc and the multitude of other mouth-breathers that got the system going.

Now we are at a point where there can safely be free cheese-dogs for all in the form of sidechains.  That means that the backing store can safely eject the welfare bums and they won't starve to death.  Very humanitarian.
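A toy sketch of the queue scheme described above (entirely my own illustration; nothing like it exists in Bitcoin today): charge to get in, order entries by what they pay per byte, and periodically cleanse anything too old or paying below the floor.

Code:
import heapq, time

class PaidQueue:
    def __init__(self, admission_fee, min_fee_rate):
        self.admission_fee = admission_fee      # "ticket at the festival gate"
        self.min_fee_rate = min_fee_rate        # floor to stay in the queue
        self.heap = []                          # entries: (-fee_rate, entry_time, item)

    def admit(self, item, fee_paid, size):
        if fee_paid < self.admission_fee:
            return False                        # didn't pay to get in
        heapq.heappush(self.heap, (-(fee_paid / size), time.time(), item))
        return True

    def cleanse(self, max_age_s):
        # Periodically drop entries that are too old or below the fee floor.
        now = time.time()
        self.heap = [e for e in self.heap
                     if (now - e[1]) < max_age_s and -e[0] >= self.min_fee_rate]
        heapq.heapify(self.heap)

    def pop_best(self):
        # Serve whoever is paying the highest rate for their cheese-dog.
        return heapq.heappop(self.heap)[2] if self.heap else None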

legendary
Activity: 1414
Merit: 1000
June 30, 2015, 12:32:27 PM
what's interesting to me is that all these full blocks that have been coming more frequently and consecutively have not caused any block delays.  that is good news b/c there are some who thought that as we filled the 1MB limit, there might be delays and have pointed to just this mechanism in the past when we've had such delays.  so we know 1MB blocks don't slow down the network.  so just how high can we push this limit w/o breaking it?

what's also really interesting is that currently, i'm not seeing any 0 tx defensive blocks being mined.  maybe the Chinese miners are figuring out that it's not necessary.  that's more good news b/c we want them munching as many tx's as possible in a consistent manner.  

and that's good news b/c they are probably figuring out that a block size increase can't hurt them if done in a "safe" way, whatever that means.

I think that if we see blocks fill up and the network starts functioning poorly, we are going to see a change pushed out far quicker than any of us ever imagined.

As of 12:03pm eastern time blockchain.info is reporting 11.6MB unconfirmed transactions and 1.95BTC in fees (mostly from minimum fees).

Is there another stress test going on? Or did a bunch of guys decide to flood the network to push for larger blocks...

Not sure, but as I understand it an increase in value is new information, and new information results in market players adjusting, and I imagine that means moving their BTC around as a result of changes in behavior.

I'm not expecting a pop just yet but I don't think maintaining as we are with relatively full blocks is an indication that we'll cope with an increase of tx's if we see the growth the market is anticipating.

If this is a true demand situation based on the recent price movement, then if we have a real bubble move the network is simply going to not be able to handle demand. Increasing fees is not an option because there isn't enough space for everyone regardless of the fees they offer.

What is happening right now shows that P2P nodes can in fact handle larger blocks, they are processing the transaction volume fine and have enough BW to forward transactions. In fact by not clearing transactions in blocks and causing the memory pool to increase beyond what it should, the 1MB limit is probably more stressful on nodes than simply letting larger blocks get processed....

absolutely.  look here.  guys are reporting their bitcoind shutting down from memory overload.  that sucks:

https://www.reddit.com/r/Bitcoin/comments/3bmb5r/stress_test_in_full_effect/csniofb

They cannot handle 11.6 MB of unconfirmed transactions, so how will they handle 20 MB blocks with 200 MB of unconfirmed transactions?
legendary
Activity: 1512
Merit: 1005
June 30, 2015, 12:15:44 PM

I think replace-by-fee is good. It is a way to unstick a transaction that is stuck with too low a fee, when you are in a hurry.  It does not change the protocol in any way; the miner just chooses the transaction with the higher fee, making the old one invalid. It does not fill up blocks, only the network.

On the other hand, I don't think it is critically important, necessary or even specially useful. I think the market will sort out fees, making normal high-paying transactions fill up only half the space available in blocks.
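
A minimal sketch of that selection logic (my own simplification with one spent input per transaction, not Bitcoin Core's actual replacement rules): when two transactions spend the same input, a fee-maximizing miner keeps the higher-fee one, and confirming it turns the old one into a double-spend and therefore invalid.

Code:
def resolve_conflicts(mempool):
    """mempool: list of dicts like {'txid': ..., 'spends': input_id, 'fee': satoshis}."""
    best_by_input = {}
    for tx in mempool:
        current = best_by_input.get(tx['spends'])
        if current is None or tx['fee'] > current['fee']:
            best_by_input[tx['spends']] = tx   # the replacement outbids the original
    return list(best_by_input.values())

# Example: a stuck 1000-satoshi tx is replaced by a 5000-satoshi bump of the same input.
pool = [{'txid': 'a', 'spends': 'utxo1', 'fee': 1000},
        {'txid': 'b', 'spends': 'utxo1', 'fee': 5000}]
print(resolve_conflicts(pool))   # keeps txid 'b'
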
legendary
Activity: 1764
Merit: 1002
June 30, 2015, 12:08:47 PM
dammit.  here we go again upon just checking.  unconf tx's over 7000:


any chance of an ongoing stress test?

i doubt it, as yesterday's scheduled test didn't conform to its stated plan.

and really, does it matter?  how can you tell in most circumstances unless spam tx's are coming from a single address?  and if the tx's are paying the regular fees, which they are, who cares?  the miners will make hay and be more profitable.  actually, i do care b/c that means ordinary users that might square the network effect (Metcalfe) wanting in from places like Greece might not be able to get in.  yes, the 1MB'ers will say that they're just coming in thru places like Coinbase, which they are, but i'd counter argue that in the medium to long run Coinbase has to buy supply from the mainchain to keep their reserves high.  so it does feed through.

I agree. I asked only because I know you're active on reddit, which usually is a better source for this kind of event.

that said, I was wondering if another thing we could do is limit the max tx size. Do you remember that no more than one or two months ago Peter Todd said that someone put an entire book on the blockchain?

maybe not an absolute limit but a progressive fee per kB... just a thought.

at least those books aren't getting into the UTXO set.  it's simply a blockchain storage problem, which is easily addressed with storage growth.

https://github.com/bitcoin/bitcoin/blob/master/doc/release-notes/release-notes-0.9.0.md
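
To illustrate the progressive-fee-per-kB idea floated above (purely hypothetical; no such policy exists in Bitcoin, and both parameters are assumptions): make the required fee grow super-linearly with transaction size, so a normal payment stays cheap while a 100 kB "book" transaction costs orders of magnitude more per byte.

Code:
def required_fee_btc(tx_size_bytes, base_fee_per_kb=0.00001, exponent=2.0):
    """Assumed schedule: fee = base * (size_in_kB ** exponent), i.e. super-linear in size."""
    kb = tx_size_bytes / 1000.0
    return base_fee_per_kb * (kb ** exponent)

print(required_fee_btc(250))      # ~0.000000625 BTC for a typical ~250-byte payment
print(required_fee_btc(100000))   # 0.1 BTC for a 100 kB "book" transaction
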
hero member
Activity: 544
Merit: 500
June 30, 2015, 12:02:40 PM
In fact by not clearing transactions in blocks and causing the memory pool to increase beyond what it should, the 1MB limit is probably more stressful on nodes than simply letting larger blocks get processed....

This is a very valid point that not many have talked about yet. I'd like to see what the 'nodemongers' have to say about this.
legendary
Activity: 1764
Merit: 1002
June 30, 2015, 12:00:22 PM
what's interesting to me is that all these full blocks that have been coming more frequently and consecutively have not caused any block delays.  that is good news b/c there are some who thought that as we filled the 1MB limit, there might be delays and have pointed to just this mechanism in the past when we've had such delays.  so we know 1MB blocks don't slow down the network.  so just how high can we push this limit w/o breaking it?

what's also really interesting is that currently, i'm not seeing any 0 tx defensive blocks being mined.  maybe the Chinese miners are figuring out that it's not necessary.  that's more good news b/c we want them munching as many tx's as possible in a consistent manner.  

and that's good news b/c they are probably figuring out that a block size increase can't hurt them if done in a "safe" way, whatever that means.

I think that if we see blocks fill up and the network starts functioning poorly, we are going to see a change pushed out far quicker than any of us ever imagined.

As of 12:03pm eastern time blockchain.info is reporting 11.6MB unconfirmed transactions and 1.95BTC in fees (mostly from minimum fees).

Is there another stress test going on? Or did a bunch of guys decide to flood the network to push for larger blocks...

Not sure, but as I understand it an increase in value is new information, and new information results in market players adjusting, and I imagine that means moving their BTC around as a result of changes in behavior.

I'm not expecting a pop just yet but I don't think maintaining as we are with relatively full blocks is an indication that we'll cope with an increase of tx's if we see the growth the market is anticipating.

If this is a true demand situation based on the recent price movement, then if we have a real bubble move the network is simply going to not be able to handle demand. Increasing fees is not an option because there isn't enough space for everyone regardless of the fees they offer.

What is happening right now shows that P2P nodes can in fact handle larger blocks, they are processing the transaction volume fine and have enough BW to forward transactions. In fact by not clearing transactions in blocks and causing the memory pool to increase beyond what it should, the 1MB limit is probably more stressful on nodes than simply letting larger blocks get processed....

absolutely.  look here.  guys are reporting their bitcoind shutting down from memory overload.  that sucks:

https://www.reddit.com/r/Bitcoin/comments/3bmb5r/stress_test_in_full_effect/csniofb
legendary
Activity: 1764
Merit: 1002
June 30, 2015, 11:58:22 AM
that said, I was wondering if another thing we could do is limit the max tx size. Do you remember that no more than one or two months ago Peter Todd said that someone put an entire book on the blockchain?

maybe not an absolute limit but a progressive fee per kB... just a thought.

can you explain the technicalities of that attack?

was it only possible b/c of p2sh?  also, did it only involve a non-standard tx?  if so, isn't that only something that a miner could include in a block he self-generates?  and why haven't we seen widespread usage of that attack if it's so effective that we have to worry about it?
legendary
Activity: 1153
Merit: 1000
June 30, 2015, 11:55:22 AM
what's interesting to me is that all these full blocks that have been coming more frequently and consecutively have not caused any block delays.  that is good news b/c there are some who thought that as we filled the 1MB limit, there might be delays and have pointed to just this mechanism in the past when we've had such delays.  so we know 1MB blocks don't slow down the network.  so just how high can we push this limit w/o breaking it?

what's also really interesting is that currently, i'm not seeing any 0 tx defensive blocks being mined.  maybe the Chinese miners are figuring out that it's not necessary.  that's more good news b/c we want them munching as many tx's as possible in a consistent manner.  

and that's good news b/c they are probably figuring out that a block size increase can't hurt them if done in a "safe" way, whatever that means.

I think that if we see blocks fill up and the network starts functioning poorly, we are going to see a change pushed out far quicker than any of us ever imagined.

As of 12:03pm eastern time blockchain.info is reporting 11.6MB unconfirmed transactions and 1.95BTC in fees (mostly from minimum fees).

Is there another stress test going on? Or did a bunch of guys decide to flood the network to push for larger blocks...

Not sure, but as I understand it an increase in value is new information, and new information results in market players adjusting, and I imagine that means moving their BTC around as a result of changes in behavior.

I'm not expecting a pop just yet but I don't think maintaining as we are with relatively full blocks is an indication that we'll cope with an increase of tx's if we see the growth the market is anticipating.

If this is a true demand situation based on the recent price movement, then if we have a real bubble move the network is simply going to not be able to handle demand. Increasing fees is not an option because there isn't enough space for everyone regardless of the fees they offer.

What is happening right now shows that P2P nodes can in fact handle larger blocks, they are processing the transaction volume fine and have enough BW to forward transactions. In fact by not clearing transactions in blocks and causing the memory pool to increase beyond what it should, the 1MB limit is probably more stressful on nodes than simply letting larger blocks get processed....
legendary
Activity: 1764
Merit: 1002
June 30, 2015, 11:53:44 AM
this was just too compelling a post from a business merchant not to link to:

https://www.reddit.com/r/Bitcoin/comments/3bmb5r/stress_test_in_full_effect/csnj9op
legendary
Activity: 4690
Merit: 1276
June 30, 2015, 11:45:32 AM

I'm going to say the majority of Europeans have been successfully brainwashed, so as to not even consider Gold as an option. In the fiat we trust.

If I were under capital controls and the mainstream media were constantly crowing about one solution while not mentioning another solution at all, that would act as a powerful incentive for me to not trust the promoted solution and look to the non-promoted solution with more interest.  But I'm an odd-ball.

I'm guessing that a fraction of Greeks long ago diversified into PM's, Bitcoin, and a plethora of other options.  Relatively few Greeks who have significant wealth to protect kept the bulk of it in demand accounts (or other slightly-less-easy-to-appropriate forms), I would think.  Maybe I'm wrong about this though.

legendary
Activity: 1260
Merit: 1008
June 30, 2015, 11:44:11 AM
dammit.  here we go again upon just checking.  unconf tx's over 7000:



any chance of an ongoing stress test?

i doubt it, as yesterday's scheduled test didn't conform to its stated plan.

and really, does it matter?  how can you tell in most circumstances unless spam tx's are coming from a single address?  and if the tx's are paying the regular fees, which they are, who cares?  the miners will make hay and be more profitable.  actually, i do care b/c that means ordinary users that might square the network effect (Metcalfe) wanting in from places like Greece might not be able to get in.  yes, the 1MB'ers will say that they're just coming in thru places like Coinbase, which they are, but i'd counter argue that in the medium to long run Coinbase has to buy supply from the mainchain to keep their reserves high.  so it does feed through.

I agree. I asked only because I know you're active on reddit, which usually is a better source for this kind of event.

that said, I was wondering if another thing we could do is limit the max tx size. Do you remember that no more than one or two months ago Peter Todd said that someone put an entire book on the blockchain?

maybe not an absolute limit but a progressive fee per kB... just a thought.