
Topic: WARNING! Bitcoin will soon block small transaction outputs - page 12. (Read 58546 times)

sr. member
Activity: 448
Merit: 254
why hasn't the price of BTC dropped substantially as a result of this announcement?

Because nobody really cares that they'll have a hard time sending less than $0.0062445 (5430 satoshis @ $115/BTC).  There's always been a lower limit on divisibility.
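For reference, the 5430-satoshi figure appears to fall out of the reference client's rule of thumb that an output is "dust" if spending it would cost more than a third of its value at the minimum relay fee. A minimal sketch of that arithmetic (the 181-byte spend size and the 0.0001 BTC/kB relay fee are assumptions based on the commonly cited figures, not the actual client code):

Code:
# Rough sketch of where the ~5430 satoshi dust threshold comes from.
# Assumes 181 bytes to spend a typical output and a 0.0001 BTC/kB
# minimum relay fee; illustrative, not the reference-client source.

MIN_RELAY_FEE_SAT_PER_KB = 10_000   # 0.0001 BTC per 1000 bytes
TYPICAL_SPEND_SIZE_BYTES = 181      # approx. bytes needed to spend a typical output

def dust_threshold_satoshis():
    # Dust = an output worth less than 3x the fee needed to spend it.
    fee_to_spend = TYPICAL_SPEND_SIZE_BYTES * MIN_RELAY_FEE_SAT_PER_KB / 1000
    return 3 * fee_to_spend

def dust_threshold_usd(btc_price_usd):
    return dust_threshold_satoshis() * 1e-8 * btc_price_usd

print(dust_threshold_satoshis())   # 5430.0 satoshis
print(dust_threshold_usd(115))     # ~0.0062 USD at $115/BTC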
member
Activity: 74
Merit: 10
Relevant question:

why hasn't the price of BTC dropped substantially as a result of this announcement?
legendary
Activity: 1596
Merit: 1100
We can't design the network assuming miners will mine at a loss.

No one has ever suggested doing so.  A situation that can be realistic for one miner might not be realistic for another -- free market, free choice.



legendary
Activity: 1246
Merit: 1077
One realistic scenario is that some players mine at a loss, simply because they find other value in mining -- keeping bitcoin decentralized, keeping bitcoin secure, processing non-standard transactions, etc.

We can't design the network assuming miners will mine at a loss. That's why so much stuff will have to eventually be changed.

Although microtransactions constitute little of the transaction volume, they do constitute a significant amount of the fees. As that fee competition goes down, miner revenue will decrease, and as mining revenue decreases, increased centralization will result. Although it's good to reduce spam while block subsidies form the majority of miner revenue, we should be open to revoking these changes in the future.

I believe that there should be a certain "network health" index that reports on how much miners earn in fees. If this index drops too precipitously, the Bitcoin network should band together to stimulate miner revenue. Luckily, we still have many untapped sources of fees (a rough sketch of such an index follows the list below):

  • The block size limit is at 1 MB. Assuming continued Bitcoin growth, this will eventually stall miner revenue. Increasing the limit will increase transaction volume and therefore miner revenue. This is probably the most powerful method of strengthening the network, but it must be done cautiously as it requires a hardfork.
  • Non-standard transactions, including these microtransactions, are plentiful and can eventually be made standard. This will increase fee-paying transaction volume and help stimulate network health. These changes don't require hardforks, but they do require many people to upgrade to have an effect.
  • If Bitcoin is eventually integrated into the financial system, governments can transparently offer subsidies to Bitcoin miners through otherwise meaningless but high-fee transactions. This would be similar to the stimulus plans we have now for economic downturns. This is probably the best solution to intermittent problems, as no end-user action is required.
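For what it's worth, the "network health" index described above could be as simple as the share of miner revenue that comes from fees rather than the block subsidy. A minimal sketch under that assumption (the function, numbers, and alert threshold are purely illustrative):

Code:
# Hypothetical "network health" index: the fraction of miner revenue
# over some period that came from fees rather than the block subsidy.
# All numbers and the alert threshold below are illustrative.

def fee_share(total_fees_btc, blocks, subsidy_btc=25.0):
    """Fraction of miner revenue that came from transaction fees."""
    total_revenue = blocks * subsidy_btc + total_fees_btc
    return total_fees_btc / total_revenue

# Example: one day of blocks (about 144) with 15 BTC of fees in total.
index = fee_share(total_fees_btc=15.0, blocks=144)
print(f"fee share of miner revenue: {index:.2%}")   # roughly 0.41%

if index < 0.01:   # arbitrary threshold for illustration
    print("fees are a tiny share of mining income; revenue still depends on the subsidy")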
legendary
Activity: 1596
Merit: 1100
One realistic scenario is that some players mine at a loss, simply because they find other value in mining -- keeping bitcoin decentralized, keeping bitcoin secure, processing non-standard transactions, etc.
hero member
Activity: 826
Merit: 1000
Who gets to decide how slow is too slow?
BTC Guild right now.
Okey dokey.

Have you contributed any patches to p2pool to make it more efficient / easier to install / etc? If not, why not if you're so worried about centralization?

(honest question, I don't keep up with p2pool development because I'm personally not terribly worried about mining centralization)


Bitcoin mining for profit will be like farming profitably IRL. Farms get larger and the number of farmers gets smaller.

Key word : for profit
legendary
Activity: 1652
Merit: 2311
Chief Scientist
Who gets to decide how slow is too slow?
BTC Guild right now.
Okey dokey.

Have you contributed any patches to p2pool to make it more efficient / easier to install / etc? If not, why not if you're so worried about centralization?

(honest question, I don't keep up with p2pool development because I'm personally not terribly worried about mining centralization)
legendary
Activity: 1120
Merit: 1160
The fact that usually it works because most of the outputs were recently created is incredibly dangerous. If the ratio of best case to worst case performance gets bad enough the attacker just has to come along with a block spending outputs that weren't recently created, or otherwise picked in a way where retrieval happens to be slow, to knock slower miners offline.

Who gets to decide how slow is too slow?

Mining these days requires investing in ASIC hardware. Solo mining or running a pool will very soon require investing in a reasonably fast network connection and a machine with at least a few gigabytes of memory.

Knocking the slowest N% of solo miners/pools off the network every year (where N is less than 20 or so) is not a crisis. That is the way free-market competition works.

BTC Guild right now.
member
Activity: 84
Merit: 10
MEC - MFLtNSHN6GSxxP3VL8jmL786tVa9yfRS4p
Is this why a tx of 0.025 BTC with a 0.001 transaction fee that I sent 3 hours ago is still only seen by 1 peer?
member
Activity: 106
Merit: 10
Now, we will be restricted to sending only transactions above a certain size.  No free-market choice here...

Shame on them for limiting the amount to one satoshi! It should be 1/100000 of a satoshi.....

What kind of argument is that?  Roll Eyes
You have a point, but Bitcoin started with an understanding that 1 satoshi was the minimum.  Now, we're being told that the limit is 5430 satoshis, with no free-market input on the matter.  It's rather disappointing.  Individuals should be able to decide what size of transaction is too small - we shouldn't all be forced to suddenly abide by the same arbitrary rule.

5430 satoshis is negligible, less than a US or euro cent, and a very sensible minimum. This cutoff is a needed arbitrary rule which mirrors the real world, where fiat sub-cent transactions are also unwelcome.  The 5430-satoshi threshold will be reduced as the BTC value increases.

This whole thread is a fuss over a benefit that has been interpreted wrongly.

The Achilles' heel of Bitcoin is being swamped by transactions worth less than a cent because, unlike fiat coinage transactions, Bitcoin transactions are stored on thousands of servers for years or forever.



This...


It increases the costs of that dataset that cannot be pruned

There's no real reason the dataset cannot be pruned - I've been playing with a DB copy of the blockchain, looking at ways of "removing" the records for accounts with a nil balance (amount out = total amounts in) where the date is > 30 days ago.

I think you misunderstand.  Nobody is saying the blockchain can't be pruned.  IT CAN be pruned; however, the UTXO set (the set of unspent outputs which can still be inputs for future txs) CAN'T be pruned.  That is fine because generally the UTXO set is going to grow more slowly than the blockchain (people tend to spend unspent outputs, creating roughly the same number of new unspent outputs).  There is one exception: UNECONOMICAL outputs.

If you have a 0.0000001 BTC output but it would cost 100x as much in fees to spend it, would you spend it?  Of course not.  It's like mailing a penny (at a cost of $0.46) to your bank to apply to your mortgage principal: nobody does that because it doesn't make economic sense.  So these uneconomical outputs are likely NEVER going to be spent.  Each one that is produced won't be spent, thus won't be pruned, and will remain in the UTXO set forever (or a very long time on average).  This is causing the UTXO set to bloat, and it will continue to bloat, as there is no reason for anyone to ever spend these outputs (and spending is what allows an output to be pruned).

The UTXO set is the critical resource.  In order to validate txs quickly, the UTXO set needs to be in memory.  So what happens when the UTXO set is 32GB? 64GB? 200GB?  If those are "valid" outputs likely to be used in future txs, well, that is just the cost of being a full node.  But when 50%, 70%, 95%+ of the outputs are just unspendable garbage, it greatly increases the processing requirements of full nodes without any benefit to anyone.

Quote
As a _user_ of bitcoins I don't *need* the whole blockchain, if I could get "balances at a point in time" and the journal entries after that.
Of course you don't; that is the whole point of pruning the blockchain. However, you do need to retain a copy of every unspent output, otherwise when you receive a tx or block containing that output as an input in a new tx you can't validate the tx or block.  If the input is coming to you, you can't even know if the tx/block is valid or just some nonsense garbage that an attacker sent to trick you into thinking you got paid.

This unprunable dataset is a subset of the blockchain; however, txs below the dust threshold simply bloat it.

Absolutely,
 the solution could be to send all dust amounts in a transaction to miners as fees, or back as change, IF the destination address IS empty (or holds under 0.1 BTC).

That would help with spam and spam-like transactions. Example: 1000 million Chinese/Indian users creating new addresses, going to a faucet, and receiving satoshis that will never be used because they will lose their wallet.dat in an HD crash... If they want to use the faucet they should get 0.1 BTC first.
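A wallet-side version of that suggestion might look like the sketch below: when building a transaction, change smaller than the dust threshold is folded into the fee instead of creating a new tiny output. The function name and the 5430-satoshi constant are assumptions for illustration, not actual wallet code.

Code:
# Sketch of the rule suggested above: rather than creating a dust-sized
# change output (which would sit in the UTXO set forever), give it to
# the miners as extra fee. Illustrative only.

DUST_THRESHOLD = 5430  # satoshis

def split_change_and_fee(total_inputs, amount_to_send, desired_fee):
    """Return (change_output, fee); dust-sized change goes to the miners."""
    change = total_inputs - amount_to_send - desired_fee
    if change < 0:
        raise ValueError("inputs do not cover amount + fee")
    if change < DUST_THRESHOLD:
        # Too small to be worth tracking forever: add it to the fee.
        return 0, desired_fee + change
    return change, desired_fee

print(split_change_and_fee(1_000_000, 985_000, 10_000))   # (0, 15000): dust swept into fee
print(split_change_and_fee(1_000_000, 900_000, 10_000))   # (90000, 10000): normal change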

legendary
Activity: 1652
Merit: 2311
Chief Scientist
The fact that usually it works because most of the outputs were recently created is incredibly dangerous. If the ratio of best case to worst case performance gets bad enough the attacker just has to come along with a block spending outputs that weren't recently created, or otherwise picked in a way where retrieval happens to be slow, to knock slower miners offline.

Who gets to decide how slow is too slow?

Mining these days requires investing in ASIC hardware. Solo mining or running a pool will very soon require investing in a reasonably fast network connection and a machine with at least a few gigabytes of memory.

Knocking the slowest N% of solo miners/pools off the network every year (where N is less than 20 or so) is not a crisis. That is the way free-market competition works.
legendary
Activity: 1050
Merit: 1000
You are WRONG!
donator
Activity: 1218
Merit: 1079
Gerald Davis
So is this basically because there is highly illegal shit like CP embedded in the blockchain forever via nano-transactions?

Of course this would never be admitted, but it comes right on the heels of rumors of its use for this purpose

No, that has nothing to do with this.  One could still "embed" data in the blockchain by just ensuring the outputs are larger than the dust threshold.
sr. member
Activity: 322
Merit: 250
So is this basically because there is highly illegal shit like CP embedded in the blockchain forever via nano-transactions?

Of course this would never be admitted, but it comes right on the heels of rumors of its use for this purpose
legendary
Activity: 1120
Merit: 1160
It doesn't need to be in _RAM_, it needs to be in fast reliable storage— online storage, not nearline or offline, not on a tape jukebox in the basement or on far-away storage across a WAN— and the validation time depends on how fast that storage is. If you put it on storage with a 10ms random access time and your block has 2000 transactions with 10 inputs each, you're looking at 200 seconds just to fetch the inputs, which is just going to utterly wreck network convergence, cause a ton of hashrate loss due to forks, and make people need more confirmations for security.  But in practice it's not quite that bad, since _hopefully_ a lot of spent outputs were recently created.

The fact that usually it works because most of the outputs were recently created is incredibly dangerous. If the ratio of best case to worst case performance gets bad enough the attacker just has to come along with a block spending outputs that weren't recently created, or otherwise picked in a way where retrieval happens to be slow, to knock slower miners offline. Even worse, if they can come up with two blocks where each block triggers performance problems on one implementation but not the other, they can split the network. They don't even have to mine those blocks themselves if the transactions in them are standard enough that they can get someone else to mine them.

In Bitcoin any performance problem can become a serious security problem. We only get away with it now because computers are so fast in comparison to the transaction volume and 10 minute target, but if we start needing to "optimize" things, including solutions like aggressively passing around transaction hashes rather than transactions themselves when a new block is propagated, we open ourselves up to serious security problems.
staff
Activity: 4284
Merit: 8808
The UTXO set is the critical resource.  In order to validate txs quickly, the UTXO set needs to be in memory.  So what happens when the UTXO set is 32GB? 64GB? 200GB?  If those are "valid" outputs likely to be used in future txs, well, that is just the cost of being a full node.  But when 50%, 70%, 95%+ of the outputs are just unspendable garbage, it greatly increases the processing requirements of full nodes without any benefit to anyone.
It doesn't need to be in _RAM_, it needs to be in fast reliable storage— online storage, not nearline or offline, not on a tape jukebox in the basement or on far-away storage across a WAN— and the validation time depends on how fast that storage is. If you put it on storage with a 10ms random access time and your block has 2000 transactions with 10 inputs each, you're looking at 200 seconds just to fetch the inputs, which is just going to utterly wreck network convergence, cause a ton of hashrate loss due to forks, and make people need more confirmations for security.  But in practice it's not quite that bad, since _hopefully_ a lot of spent outputs were recently created.

The 'memory' stuff is mostly a tangent; the issue is that the utxo data can't be pruned. All full validators must have access to it— bloat in this dataset pressures people to run SPV nodes instead of full validators... which risks a loss of decentralization, loss of motivation for miners to behave honestly, etc.
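The 200-second figure in the post above is just (transactions per block) x (inputs per transaction) x (random access time). A quick sketch of that back-of-the-envelope calculation, using the numbers from the post plus an illustrative faster-storage comparison:

Code:
# Back-of-the-envelope input-fetch time while validating a block:
# serial random reads against storage with a given access latency.

def input_fetch_seconds(txs_per_block, inputs_per_tx, access_time_ms):
    lookups = txs_per_block * inputs_per_tx
    return lookups * access_time_ms / 1000.0

# Numbers from the post: 2000 txs, 10 inputs each, 10 ms per random read.
print(input_fetch_seconds(2000, 10, 10))    # 200.0 seconds

# Illustrative comparison: the same lookups at ~0.1 ms (fast SSD-class storage).
print(input_fetch_seconds(2000, 10, 0.1))   # 2.0 seconds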
legendary
Activity: 1400
Merit: 1013
In order to validate txs quickly, the UTXO set needs to be in memory.  So what happens when the UTXO set is 32GB? 64GB? 200GB?  If those are "valid" outputs likely to be used in future txs, well, that is just the cost of being a full node.  But when 50%, 70%, 95%+ of the outputs are just unspendable garbage
...they'll get pushed to swap space along with all the other memory pages that haven't been accessed for a while? We expect caching algorithms and virtual memory to still be a thing in the future, right?
donator
Activity: 1218
Merit: 1079
Gerald Davis
It increases the costs of that dataset that cannot be pruned

There's no real reason the dataset cannot be pruned - I've been playing with a DB copy of the blockchain, looking at ways of "removing" the records for accounts with a nil balance (amount out = total amounts in) where the date is > 30 days ago.

I think you misunderstand.  Nobody is saying the blockchain can't be pruned.  IT CAN be pruned; however, the UTXO set (the set of unspent outputs which can still be inputs for future txs) CAN'T be pruned.  That is fine because generally the UTXO set is going to grow more slowly than the blockchain (people tend to spend unspent outputs, creating roughly the same number of new unspent outputs).  There is one exception: UNECONOMICAL outputs.

If you have a 0.0000001 BTC output but it would cost 100x as much in fees to spend it, would you spend it?  Of course not.  It's like mailing a penny (at a cost of $0.46) to your bank to apply to your mortgage principal: nobody does that because it doesn't make economic sense.  So these uneconomical outputs are likely NEVER going to be spent.  Each one that is produced won't be spent, thus won't be pruned, and will remain in the UTXO set forever (or a very long time on average).  This is causing the UTXO set to bloat, and it will continue to bloat, as there is no reason for anyone to ever spend these outputs (and spending is what allows an output to be pruned).

The UTXO set is the critical resource.  In order to validate txs quickly, the UTXO set needs to be in memory.  So what happens when the UTXO set is 32GB? 64GB? 200GB?  If those are "valid" outputs likely to be used in future txs, well, that is just the cost of being a full node.  But when 50%, 70%, 95%+ of the outputs are just unspendable garbage, it greatly increases the processing requirements of full nodes without any benefit to anyone.

Quote
As a _user_ of bitcoins I don't *need* the whole blockchain, if I could get "balances at a point in time" and the journal entries after that.
Of course you don't; that is the whole point of pruning the blockchain. However, you do need to retain a copy of every unspent output, otherwise when you receive a tx or block containing that output as an input in a new tx you can't validate the tx or block.  If the input is coming to you, you can't even know if the tx/block is valid or just some nonsense garbage that an attacker sent to trick you into thinking you got paid.

This unprunable dataset is a subset of the blockchain; however, txs below the dust threshold simply bloat it.
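To make the "uneconomical output" point concrete, here is a minimal sketch of the comparison being described: the fee needed to spend an output versus the output's value. The 148-byte input size and the fee rate are illustrative assumptions.

Code:
# An output is "uneconomical" when the fee required to spend it exceeds
# (or dwarfs) its value. Sizes and fee rate below are illustrative.

INPUT_SIZE_BYTES = 148      # rough size of one P2PKH input
FEE_SAT_PER_KB = 10_000     # 0.0001 BTC/kB

def fee_to_spend(n_inputs=1):
    return n_inputs * INPUT_SIZE_BYTES * FEE_SAT_PER_KB / 1000

def is_economical(output_value_satoshis):
    return output_value_satoshis > fee_to_spend()

print(fee_to_spend())           # 1480.0 satoshis just to reference one input
print(is_economical(10))        # False: a 10-satoshi output costs far more to spend
print(is_economical(100_000))   # True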
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
Quote
Imagine buying a car for £5000 and taking 500x£10 notes to the dealer, but you find they can't sell it to you because the notes came from 702 different amounts of change from your wages, and some are "worth" less when spending than £10 because they're notes only printed that morning, or were made up of 200x5p transactions... In the "real" world £5k is £5k is £5k, not some variable equivalent that might eventually be £5k.

Yes, this is another manifestation of the weak fungibility problem of bitcoin ... that also manifests as pseudo-anonymity and not strong anonymity. I think Satoshi mentions something about lightweight clients only keeping the previous 2 TX records deep for each coin in the DB.