
Topic: Now that we've reached the 250kb soft limit... - page 2. (Read 3711 times)

legendary
Activity: 1764
Merit: 1002
Clients should re-broadcast transactions or assume they are lost, if they fail to be included after X * 4 [blocks | seconds]

The current behavior of clients is fine:  rebroadcast continually, while your transaction is not in a block.

Optionally, in the future, clients may elect to not rebroadcast.  That is fine too, and works within the current or future system.


Yes, clients should be allowed to revise a previous unconfirmed tx.
legendary
Activity: 1764
Merit: 1002
...you can see how any long-running node will eventually accumulate a lot of dead weight.

Wow...tightrope walking with no net. If blocks are always filled and fees go up, the SatoshiDICE transactions (low fee) will clog the memory pool and I guess eventually there will need to be a patch.

Correct.  It's not needed right now, thus we are able to avoid the techno-political question of what to delete from the mempool when it becomes necessary to cull.

Quote
Quote
The mempool only stores provably spendable transactions, so it is DoS'able, but you must do so with relay-able standard transactions.

Why aren't mempool transactions purged after some fixed amount of time? This way someone could determine with certainty that their transaction will never make it into a block. Apologies if this has already been asked many times (it probably has).

As a matter of fact, that is my current proposal on the table, which has met with general agreement:

   Purge transactions from the memory pool, if they do not make it into a block within X [blocks | seconds].  

Once this logic is deployed widely, it has several benefits:

  • TX behavior is a bit more deterministic.
  • Makes it possible (but not 100% certain) that a transaction may be revised or double-spent-to-recover, if it fails to make it into a block.
  • mempool is capped by a politically-neutral technological limit

Patches welcome!  I haven't had time to implement the proposal, and nobody else has stepped up.



I like this idea.
sr. member
Activity: 364
Merit: 250
Well if you think about it,

You could decide, in an arbitrary way, how many bitcoins per block should reward the miners in fees; for the purposes of this, say we define it as (50 BTC - $current_block_reward).

Then, in the same operation that adjusts the difficulty, you could also adjust the block size using an algorithm that attempts to keep fees at the level needed to maintain 50 BTC/block.  The block size could be adjusted on the same schedule as the difficulty.

You could design the algorithm to inversely increase as the mining reward decreases to try and keep the same balance of fees that we are seeing now.  It just seems like a simple way to keep everything running status quo.

The other cool thing is that it would still allow free transactions to go through if there happened to be a temporary decrease in transaction volume.  I'm sure there is something I'm missing here; I'm sure it's not that simple.  But the concept is there.
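A rough sketch of that idea: at each retarget, nudge the cap so average fees drift toward (50 BTC - subsidy). The damping clamp and all names below are invented for illustration, not part of the poster's proposal:

```python
# Illustrative sketch: at each difficulty retarget, adjust the max block size so
# that average fees per block drift toward (50 BTC - current subsidy). The
# +/-25%-per-retarget clamp is an arbitrary assumption to damp swings.
def next_max_block_size(current_size: int, avg_fees_btc: float,
                        subsidy_btc: float) -> int:
    target_fees = 50.0 - subsidy_btc      # fees meant to top the reward up to 50 BTC
    if target_fees <= 0:
        return current_size               # no fee target while subsidy covers it all
    ratio = avg_fees_btc / target_fees    # fees too high -> grow blocks, and vice versa
    ratio = max(0.75, min(1.25, ratio))   # damp swings per retarget period
    return int(current_size * ratio)
```

So with a 25 BTC subsidy and fees averaging 50 BTC per block, a 1 MB cap would grow by the full clamped 25% in one retarget; with fees at half target it would shrink by 25%.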

legendary
Activity: 1120
Merit: 1152
It seems to me the most fair way to decide block size is to have it be proportional to the hashing speed of the network.

Over Bitcoin's relatively short period, the graph of the hashing speed of the network looks like this:

[chart: Bitcoin network hash rate over time, linear scale]

...and you want to link block size to that wild roller coaster, when we haven't even seen what ASICs will do?

Even on a log scale the hashing power has been wild, and it still could be if anyone finds a partial shortcut in SHA256:

[chart: Bitcoin network hash rate over time, log scale]

That said, since a shortcut is rather unlikely, and probably would break SHA256 entirely anyway, not to mention enable a 51% attack, I could consider supporting a max block size calculated as something like 1MB * log10(hashes/second * k). But really, I'm mostly saying that because log10 of anything doesn't grow very fast...
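That log10 cap grows slowly enough to see in a couple of lines. The constant k is left unspecified in the post, so the value below is purely a made-up placeholder:

```python
import math

# Sketch of the suggested cap: 1 MB * log10(hash rate * k).
# K is unspecified in the original post; this value is purely illustrative.
K = 1e-9

def max_block_size_bytes(hashes_per_second: float, k: float = K) -> int:
    """Block-size cap in bytes; grows only logarithmically with hash rate."""
    return int(1_000_000 * math.log10(hashes_per_second * k))
```

With these numbers, a thousandfold jump in network hash rate adds only 3 MB to the cap, which is exactly the point of reaching for a logarithm.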
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
It seems to me the most fair way to decide block size is to have it be proportional to the hashing speed of the network.

Hashing speed is not relevant.

This is a good discussion of the critical factors:

https://bitcointalksearch.org/topic/m.1568633
sr. member
Activity: 364
Merit: 250
It seems to me the most fair way to decide block size is to have it be proportional to the hashing speed of the network.

1) More transactions per second than can fit in the current block size results in more fees paid to miners.
2) The more fees paid to miners, the more miners exist.
3) The more miners exist, the higher the hashing speed.
4) The higher hashing speed results in a larger block size (because that's how we're determining it).
5) The larger block size results in lower transaction fees.
6) Lower transaction fees result in less fee revenue for miners.
7) Less fee revenue results in fewer miners.
8) Fewer miners results in a lower hashing speed.
9) A lower hashing speed results in a smaller block size.

And so on.  This creates an equilibrium that would basically allow supply and demand to dictate the block size.

I'm sorry if this has already been discussed.  Thoughts?
legendary
Activity: 1120
Merit: 1152
I'm not saying that Bitcoin will shrink, I'm saying that it will reach an equilibrium where there is no more growth in terms of new users operating directly on the block chain. Instead, there will be an industry of companies that do things off the block chain and settle up once a day or every few hours. Of course miners would love that, since the fees will be maximized. And anyone using Bitcoin as a store of value will be happy with it as well, since the network hash rate will be maximized.

This.

I'd consider myself a pretty knowledgeable guy about Bitcoin - I've even gotten a few lines of code into the reference client and once found a (minor) bug in code written by Satoshi dating back to almost the very first version.

You wanna know where I keep my coins for day-to-day spending? Easywallet. So it's centralized - so what? If it goes under I'm not going to cry about the $100 I have there, and I do care about how it ensures that every transaction I make is unlinkable, and since I access it over Tor, even Easywallet has a hard time figuring out who I am.

My savings though? Absolutely they're on the blockchain, with rock-solid security that I can trust - I've read most of the source-code myself. Sure, transactions won't be free, or even cheap, but you get what you pay for, and I know I'm getting the hashing security I paid for.

With cryptography we can create ways to audit services like Easywallet and make it impossible for them to lie about how many coins back the balances in the ledgers they maintain. Eventually we can even create ways to shut down those services instantly if they commit fraud. In fact, with some changes to Bitcoin scripting, I think we can even make it possible for users of those services to automatically get refunds when fraud is proven, although I and others are still working on that idea - don't quote me on that yet.

The point is, we have options, and those options don't have to destroy the truly decentralized and censorship-proof blockchain we have now, just so people can make cheap bets on some silly gambling site.
legendary
Activity: 1064
Merit: 1001
At step 13, transactions (and the fees they would have paid to miners) are fleeing bitcoin in droves.

I'm not saying that Bitcoin will shrink, I'm saying that it will reach an equilibrium where there is no more growth in terms of new users operating directly on the block chain. Instead, there will be an industry of companies that do things off the block chain and settle up once a day or every few hours. Of course miners would love that, since the fees will be maximized. And anyone using Bitcoin as a store of value will be happy with it as well, since the network hash rate will be maximized.
full member
Activity: 236
Merit: 100

13. Spurred by the profitability of Bitcoin transactions, alternate chains appear to capture the users that Bitcoin lost.
14. Pleased with their profitability, miners refuse to accept any hard fork to block size.


I'm sorry, I don't get it.

At step 13, transactions (and the fees they would have paid to miners) are fleeing bitcoin in droves.  And at step 14, the bitcoin miners are *pleased* with this?  Why?

It makes no sense to me at all to impose a permanent hard limit of 1mb.  Whatever reasons are given for keeping it could be used as reasons to *lower* it.  And no one thinks we should lower it.  I don't agree with this "artificial scarcity" business, unless the point of it is to help level the playing field in terms of hardware requirements.  In that sense, it's not really artificial scarcity, is it?  It's scarcity of real resources: bandwidth, storage and CPU speed.

I mean, we all agree that if everyone had 10 gigabit ethernet, 256 cores and 100tb of storage, the 1mb limit would seem laughable, right?  Well, soon we'll all have that.  And a few years after that we'll have it all in the palm of our hand.

Here's my modest (and likely naive) proposal.

1) See if a scheme to reduce resource consumption in the protocol can be worked out (I think storage requirements are already being addressed, but not sure about bandwidth)
2) Whatever comes of that, plot historical hardware capability progress, project the curve into the future.
3) Hard fork the client to follow the curve projection
4) If hardware doesn't end up matching predictions, fork again as necessary.

I doubt a second fork would be needed for decades. 
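Steps 2 and 3 of that proposal amount to baking a growth curve into the protocol. A sketch, assuming an exponential fit with a two-year doubling time; the doubling period, base size, and base year are all illustrative assumptions, not numbers from the thread:

```python
# Sketch of a projected block-size schedule: fit historical hardware capability
# to an exponential curve and hard-code the projection. The doubling period,
# base size, and base year are illustrative assumptions.
DOUBLING_YEARS = 2.0
BASE_SIZE_MB = 1.0
BASE_YEAR = 2013

def projected_cap_mb(year: int) -> float:
    """Max block size (in MB) the hard-coded schedule would allow that year."""
    return BASE_SIZE_MB * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)
```

Step 4, the corrective fork, would only be needed if real hardware diverges badly from whatever curve was fitted.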

full member
Activity: 188
Merit: 100
100% agree with the prediction; this can be seen as a major flaw in bitcoin.

Maybe it is time to present BTC's little brother, LTC, to the world.

Is there any limit on LTC's block size?

If the answer is yes, maybe it is a good time for the devs of both coins to collaborate.

We can have the gold and the silver, like someone said.

Better now than when it is too late.
legendary
Activity: 1064
Merit: 1001
...
75%  Train wreck, emergency block size increase. misterbigg: "Sorry, my next prediction will be better!"
...

heh...well, remember that the negative consequences for leaving the block size alone are far less severe than if we implement a faulty system for making it adjustable. Because if we do nothing, we can always change it later. The worst that happens is we have a period of time where transaction fees are higher than normal and take longer to confirm. Certainly not the end of the world by any stretch.

Compare this with adjusting the block size and then discovering that, well, yeah, it seems retep was right about losing some decentralization due to bandwidth.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
Let's be generous and assume an average 90% probability that each step in the predicted chain of events occurs as described...

Event  Probability
1      100%
2       90%
3       81%
4       73%
5       66%
6       59%
7       53%
8       48%
9       43%
10      39%
11      35%
12      31%
13      28%
14      25%

End result:

25%  Smooth transition.  All: "Hail misterbigg"
75%  Train wreck, emergency block size increase. misterbigg: "Sorry, my next prediction will be better!"
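The table is just a compounding calculation, easy to reproduce: step 1 is taken as certain and each later step has an independent 90% chance, so step n holds with probability 0.9^(n-1).

```python
# Reproduce the table above: step 1 is certain, and each subsequent step has an
# independent 90% chance, so step n survives with probability 0.9**(n-1).
def chain_probability(step: int, p: float = 0.9) -> float:
    return p ** (step - 1)

for n in (1, 7, 14):
    print(f"step {n}: {chain_probability(n):.0%}")  # 100%, 53%, 25%
```

The 25% "smooth transition" figure at the end is just 0.9 raised to the 13th power.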
legendary
Activity: 1596
Merit: 1100
Clients should re-broadcast transactions or assume they are lost, if they fail to be included after X * 4 [blocks | seconds]

The current behavior of clients is fine:  rebroadcast continually, while your transaction is not in a block.

Optionally, in the future, clients may elect to not rebroadcast.  That is fine too, and works within the current or future system.
legendary
Activity: 1232
Merit: 1094
As a matter of fact, that is my current proposal on the table, which has met with general agreement:

   Purge transactions from the memory pool, if they do not make it into a block within X [blocks | seconds].  

Once this logic is deployed widely, it has several benefits:

  • TX behavior is a bit more deterministic.
  • Makes it possible (but not 100% certain) that a transaction may be revised or double-spent-to-recover, if it fails to make it into a block.
  • mempool is capped by a politically-neutral technological limit

Patches welcome!  I haven't had time to implement the proposal, and nobody else has stepped up.

Clients should re-broadcast transactions or assume they are lost, if they fail to be included after X * 4 [blocks | seconds]

I would also add a rule: a tx which is identical to a transaction already in the pool, except that it carries a fee at least double the current version's, should replace the current version and be relayed.

The client could tell the user the transaction failed to be sent and ask if the user wants to increase the fee.
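The at-least-double test itself is a one-liner; the names below are illustrative, not from any client:

```python
# Sketch of the suggested replacement rule: an otherwise-identical tx replaces
# the pooled version only if its fee is at least double. Names are illustrative.
def should_replace(pooled_fee_satoshi: int, new_fee_satoshi: int) -> bool:
    """Accept and relay the replacement only when the new fee >= 2x the old."""
    return new_fee_satoshi >= 2 * pooled_fee_satoshi
```

Requiring a doubling, rather than any increase at all, means an attacker can't get an endless stream of tiny fee bumps relayed for free: each relayed replacement costs exponentially more.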
legendary
Activity: 1596
Merit: 1100
...you can see how any long-running node will eventually accumulate a lot of dead weight.

Wow...tightrope walking with no net. If blocks are always filled and fees go up, the SatoshiDICE transactions (low fee) will clog the memory pool and I guess eventually there will need to be a patch.

Correct.  It's not needed right now, thus we are able to avoid the techno-political question of what to delete from the mempool when it becomes necessary to cull.

Quote
Quote
The mempool only stores provably spendable transactions, so it is DoS'able, but you must do so with relay-able standard transactions.

Why aren't mempool transactions purged after some fixed amount of time? This way someone could determine with certainty that their transaction will never make it into a block. Apologies if this has already been asked many times (it probably has).

As a matter of fact, that is my current proposal on the table, which has met with general agreement:

   Purge transactions from the memory pool, if they do not make it into a block within X [blocks | seconds].  

Once this logic is deployed widely, it has several benefits:

  • TX behavior is a bit more deterministic.
  • Makes it possible (but not 100% certain) that a transaction may be revised or double-spent-to-recover, if it fails to make it into a block.
  • mempool is capped by a politically-neutral technological limit

Patches welcome!  I haven't had time to implement the proposal, and nobody else has stepped up.

sr. member
Activity: 310
Merit: 250

14. Pleased with their profitability, miners refuse to accept any hard fork to block size.


Because why sell 1000 apples for $0.75 each when you can instead sell 10 for $1.00 each. Especially when your variable cost for additional apples is effectively zero. Makes perfect fucking sense.

Even better, turns out there are enough oranges for everyone to have one, and nobody gives a shit about apples at all anymore.
legendary
Activity: 1064
Merit: 1001
...you can see how any long-running node will eventually accumulate a lot of dead weight.

Wow...tightrope walking with no net. If blocks are always filled and fees go up, the SatoshiDICE transactions (low fee) will clog the memory pool and I guess eventually there will need to be a patch.

Quote
The mempool only stores provably spendable transactions, so it is DoS'able, but you must do so with relay-able standard transactions.

Why aren't mempool transactions purged after some fixed amount of time? This way someone could determine with certainty that their transaction will never make it into a block. Apologies if this has already been asked many times (it probably has).
legendary
Activity: 1596
Merit: 1100
Is there a place that describes how the reference client deals with the memory pool? Like, what happens when it fills up (which transactions get purged, if any, and after how long)?

The only way transactions are purged is by appearing in a block.  At present it cannot "fill up" except by using all available memory, and getting OOM-killed.  Therefore, you can see how any long-running node will eventually accumulate a lot of dead weight.

The mempool only stores provably spendable transactions, so it is DoS'able, but you must do so with relay-able standard transactions.
legendary
Activity: 1064
Merit: 1001
Is there a place that describes how the reference client deals with the memory pool? Like, what happens when it fills up (which transactions get purged, if any, and after how long)?
legendary
Activity: 1596
Merit: 1100
Just reiterating my prediction so we can see how it plays out. We are currently on #2, a lot of unconfirmed transactions and starting to see #3. We should see transaction fees increase and also more and more blocks larger than 250kb as miners uncap the soft limit.

The number of unconfirmed transactions is not larger than average over a 24-hour period.

A snapshot of the mempool -- like the blockchain.info link above -- does not fit the thesis for two reasons:

  • Never-will-confirm and low-priority transactions bloat the mempool
  • Some miners already sweep far more than 250k worth of transactions into their blocks

This situation has been ongoing for months now.
