
Topic: Gold collapsing. Bitcoin UP. - page 1040. (Read 2032266 times)

legendary
Activity: 1764
Merit: 1002
August 08, 2014, 01:04:19 PM
iirc, the utxo set takes up about 75% of the RAM required to run a node.  as i run 4 nodes myself, each takes a minimum of 1GB RAM to run smoothly, meaning 750MB could be estimated for the utxo set itself.  that's pretty big to be including in a block...
Not in the block, alongside the block.

Just like when people merge mine Namecoin the nmc blocks aren't included in the btc blocks.

The merged-mine implementations that don't include the memory fix (mutable vs immutable storage, and no auxpow data in CBlockIndex, since block indexes are stored in memory) will eat up a lot of RAM... so far Devcoin and I0coin are the only ones that have this fix, and only Devcoin is running on 0.9.2 (currently in testing) with all the other fixes that bitcoin rolled up along the way included. So most of these incompatible alt-coins that now wish to merge-mine (because I called that they would, and the ones that don't will die) are suffering from either memory bloat or incorrect implementations/bugs, and it's next to impossible for a general programmer to port the fix over because the code is increasingly convoluted to keep track of.

which is exactly why i've been warning everyone about these leech coins and sidechains that wish to piggyback on the Bitcoin mining network to secure themselves.
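
A rough back-of-envelope sketch of why keeping auxpow in every in-memory block index entry hurts. All figures below are assumptions for illustration, not measurements from any client:

Code:
# All figures are assumed for illustration, not measured from any client.
blocks = 300_000        # rough order of magnitude of a 2014-era chain height
auxpow_bytes = 1_000    # assumed: parent header + parent coinbase + merkle branches

# If every in-memory block index entry drags its auxpow along, the overhead
# grows with chain height even though auxpow is never needed after validation.
overhead_mb = blocks * auxpow_bytes / 1e6
print(f"~{overhead_mb:.0f} MB of RAM spent just keeping auxpow in the block index")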
legendary
Activity: 2044
Merit: 1005
August 08, 2014, 01:01:16 PM
iirc, the utxo set takes up about 75% of the RAM required to run a node.  as i run 4 nodes myself, each takes a minimum of 1GB RAM to run smoothly, meaning 750MB could be estimated for the utxo set itself.  that's pretty big to be including in a block...
Not in the block, alongside the block.

Just like when people merge mine Namecoin the nmc blocks aren't included in the btc blocks.

The merged-mine implementations that don't include the memory fix (mutable vs immutable storage, and no auxpow data in CBlockIndex, since block indexes are stored in memory) will eat up a lot of RAM... so far Devcoin and I0coin are the only ones that have this fix, and only Devcoin is running on 0.9.2 (currently in testing) with all the other fixes that bitcoin rolled up along the way included. So most of these incompatible alt-coins that now wish to merge-mine (because I called that they would, and the ones that don't will die) are suffering from either memory bloat or incorrect implementations/bugs, and it's next to impossible for a general programmer to port the fix over because the code is increasingly convoluted to keep track of.
sr. member
Activity: 350
Merit: 251
Dolphie Selfie
August 08, 2014, 12:41:33 PM
i think the UTXO set will have to have a pre-agreed order that will need to be synced amongst all miners in real-time.  then, the successful miner can publish the header hash along with some signal as to how far down in that list he went before he stopped adding tx's.  then, all other miners will know how to verify the published header hash and remove those tx's from the utxo set before moving on to the next block.

Yes, as I understand it, the trick is the deterministic order. The "signal as to how far down in that list" is the thing gavin tweeted about.

how reliable is a synced utxo set across all miners?

I don't think it's the utxo set that has to be in a deterministic order, but the txs in the mempool. The utxo set is a result of the valid blockchain, so it should be synchronized already (except when a node regards a block as valid that later becomes orphaned).

Gavin proposed to use "Invertible Bloom Lookup Tables". The lookup table tells the miner which transactions from its mempool should be considered for the verification of the published block header. Because the miner now knows which txs are used to form the block, and in which order, he can reconstruct the merkle root and compare it against the published block header. However, the proposed Invertible Bloom Lookup Tables are not 100% reliable. From a quick glance at the paper, I found that somewhat less than 1% of keys were lost in an experiment with 10000 keys (keys can be seen as txs in this case).
Now I ask myself: what happens in the case of that remaining ~1%? Maybe the failure rate can be reduced by publishing multiple lookup tables of the same set?
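
To make the mechanism concrete, here is a minimal toy sketch of an invertible Bloom lookup table. The cell count, hash count, and key size are assumed parameters, not the ones from Gavin's proposal or the paper. Subtracting the receiver's table from the sender's leaves only the set difference, which can usually, but not always, be peeled out; the undecodable remainder is exactly that ~1% failure case, and larger (or multiple) tables trade bandwidth for a lower failure rate:

Code:
import hashlib

def h(key, salt):
    # Derive the salt-th cell index source for a key.
    return hashlib.sha256(bytes([salt]) + key).digest()

class IBLT:
    """Toy invertible Bloom lookup table over 32-byte keys (txids).
    Real implementations also keep a hash-sum per cell to reject
    false 'pure' cells; omitted here for brevity."""
    def __init__(self, m=200, k=3):
        self.m, self.k = m, k
        self.count = [0] * m                 # signed key count per cell
        self.keysum = [b'\x00' * 32] * m     # XOR of keys hashed into the cell

    def _cells(self, key):
        return [int.from_bytes(h(key, i)[:4], 'big') % self.m for i in range(self.k)]

    def _toggle(self, key, delta):
        for c in self._cells(key):
            self.count[c] += delta
            self.keysum[c] = bytes(x ^ y for x, y in zip(self.keysum[c], key))

    def insert(self, key):
        self._toggle(key, +1)

    def subtract(self, other):
        # In place: afterwards self encodes only the symmetric difference.
        for c in range(self.m):
            self.count[c] -= other.count[c]
            self.keysum[c] = bytes(x ^ y for x, y in zip(self.keysum[c], other.keysum[c]))

    def peel(self):
        # Repeatedly extract keys from cells holding exactly one key.
        # Returns None when peeling stalls -- the decode-failure case.
        recovered, progress = [], True
        while progress:
            progress = False
            for c in range(self.m):
                if abs(self.count[c]) == 1:
                    key, sign = self.keysum[c], self.count[c]
                    recovered.append(key)
                    self._toggle(key, -sign)
                    progress = True
        return recovered if not any(self.count) else None

# Sender and receiver summarize their mempools; only the difference is decoded,
# so the message size is independent of how many txs the block contains.
sender, receiver = IBLT(), IBLT()
shared = [hashlib.sha256(i.to_bytes(2, 'big')).digest() for i in range(1000)]
for t in shared:
    sender.insert(t)
    receiver.insert(t)
only_sender = hashlib.sha256(b'tx the receiver has not seen').digest()
sender.insert(only_sender)
sender.subtract(receiver)
print(sender.peel() == [only_sender])  # True unless peeling (rarely) stalls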
legendary
Activity: 1135
Merit: 1166
August 08, 2014, 12:29:13 PM
iirc, the utxo set takes up about 75% of the RAM required to run a node.  as i run 4 nodes myself, each takes a minimum of 1GB RAM to run smoothly, meaning 750MB could be estimated for the utxo set itself.  that's pretty big to be including in a block...
Not in the block, alongside the block.

Just like when people merge mine Namecoin the nmc blocks aren't included in the btc blocks.

why would you do that?

I may be mistaken, but isn't this basically what the "ultimate UTXO pruning" thing is about?  As far as I have heard, it allows new clients to be synced almost immediately (at least a lot faster than when downloading the full chain) with less required trust than SPV clients and the like (or perhaps almost the same trustlessness as a full node?).  I've never really dived into the details, though.
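
A minimal sketch of what such a commitment could look like. The serialization layout and snapshot format here are assumptions for illustration, not the actual "ultimate" pruning design: if miners committed to a digest of the UTXO set, a new client could fetch a snapshot from any untrusted peer and check it against the chain instead of replaying years of history.

Code:
import hashlib

def dsha(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def utxo_commitment(utxos):
    # Merkle root over the UTXO set in a fixed, pre-agreed order.
    # Field layout (txid, vout, value) is assumed for illustration.
    leaves = [dsha(txid + vout.to_bytes(4, 'little') + value.to_bytes(8, 'little'))
              for txid, vout, value in sorted(utxos)]
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves.append(leaves[-1])   # duplicate the odd leaf, bitcoin-style
        leaves = [dsha(leaves[i] + leaves[i + 1]) for i in range(0, len(leaves), 2)]
    return leaves[0]

# A new client downloads a recent snapshot from any untrusted peer, then
# verifies it against the commitment mined into the chain.
snapshot = [(dsha(bytes([i])), 0, 50_000) for i in range(200)]
committed_root = utxo_commitment(snapshot)        # what miners would publish
print(utxo_commitment(snapshot) == committed_root)  # client-side check: True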
legendary
Activity: 1764
Merit: 1002
August 08, 2014, 11:47:11 AM
iirc, the utxo set takes up about 75% of the RAM required to run a node.  as i run 4 nodes myself, each takes a minimum of 1GB RAM to run smoothly, meaning 750MB could be estimated for the utxo set itself.  that's pretty big to be including in a block...
Not in the block, alongside the block.

Just like when people merge mine Namecoin the nmc blocks aren't included in the btc blocks.

why would you do that?
legendary
Activity: 1764
Merit: 1002
August 08, 2014, 11:46:36 AM

BGP hijacking, a huge threat to bitcoin?: http://www.wired.com/2014/08/isp-bitcoin-theft/



nah, miners who aren't paying attention, maybe.  but not the protocol itself.
legendary
Activity: 1764
Merit: 1002
August 08, 2014, 11:45:12 AM
i think the UTXO set will have to have a pre-agreed order that will need to be synced amongst all miners in real-time.  then, the successful miner can publish the header hash along with some signal as to how far down in that list he went before he stopped adding tx's.  then, all other miners will know how to verify the published header hash and remove those tx's from the utxo set before moving on to the next block.

Yes, as I understand it, the trick is the deterministic order. The "signal as to how far down in that list" is the thing gavin tweeted about.

how reliable is a synced utxo set across all miners?
legendary
Activity: 1260
Merit: 1002
August 08, 2014, 11:45:11 AM
i'm not sure it has already been discussed here, but here are two matters that i think might be worth a thought:

NSA's 1996 report talking about minting & cryptocurrencies : http://groups.csail.mit.edu/mac/classes/6.805/articles/money/nsamint/nsamint.htm

BGP hijacking, a huge threat to bitcoin?: http://www.wired.com/2014/08/isp-bitcoin-theft/

legendary
Activity: 1400
Merit: 1013
August 08, 2014, 11:44:49 AM
iirc, the utxo set takes up about 75% of the RAM required to run a node.  as i run 4 nodes myself, each takes a minimum of 1GB RAM to run smoothly, meaning 750MB could be estimated for the utxo set itself.  that's pretty big to be including in a block...
Not in the block, alongside the block.

Just like when people merge mine Namecoin the nmc blocks aren't included in the btc blocks.
legendary
Activity: 1764
Merit: 1002
August 08, 2014, 11:43:35 AM
i think the UTXO set will have to have a pre-agreed order that will need to be synced amongst all miners in real-time.
What if the UTXO set itself is a structure that gets merge-mined with the blocks themselves?

iirc, the utxo set takes up about 75% of the RAM required to run a node.  as i run 4 nodes myself, each takes a minimum of 1GB RAM to run smoothly, meaning 750MB could be estimated for the utxo set itself.  that's pretty big to be including in a block...
sr. member
Activity: 350
Merit: 251
Dolphie Selfie
August 08, 2014, 11:41:09 AM
i think the UTXO set will have to have a pre-agreed order that will need to be synced amongst all miners in real-time.  then, the successful miner can publish the header hash along with some signal as to how far down in that list he went before he stopped adding tx's.  then, all other miners will know how to verify the published header hash and remove those tx's from the utxo set before moving on to the next block.

Yes, as I understand it, the trick is the deterministic order. The "signal as to how far down in that list" is the thing gavin tweeted about.
legendary
Activity: 1400
Merit: 1013
August 08, 2014, 11:39:40 AM
i think the UTXO set will have to have a pre-agreed order that will need to be synced amongst all miners in real-time.
What if the UTXO set itself is a structure that gets merge-mined with the blocks themselves?
legendary
Activity: 1764
Merit: 1002
August 08, 2014, 11:32:54 AM
the question is, where is that point?

This is not a question we can answer. The right approach is to build a machine for constantly discovering the answer to that question in real time, a.k.a. a market.

the other thing i don't understand is doesn't the miner who has received a valid header from another miner still have to wait for the tx's themselves to show up so he can remove them from the UTXO set before constructing the next block?

Presumably you assume the miners have already seen the transactions before they see the Merkle tree, and they've seen the Merkle tree before they see the header.

People who want their transactions processed, and miners who want their headers to propagate, have an incentive to make sure it is so.

i think the UTXO set will have to have a pre-agreed order that will need to be synced amongst all miners in real-time.  then, the successful miner can publish the header hash along with some signal as to how far down in that list he went before he stopped adding tx's.  then, all other miners will know how to verify the published header hash and remove those tx's from the utxo set before moving on to the next block.
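
A sketch of how that could work. The fee-rate ordering rule and function names below are hypothetical illustrations, not a concrete proposal: with a pre-agreed sort order, the successful miner only has to publish the header plus the cutoff, and everyone else can rebuild the identical merkle root from their own mempool.

Code:
import hashlib

def dsha(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    # Bitcoin-style tree: duplicate the last hash when a level is odd.
    layer = list(txids)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [dsha(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def canonical_order(mempool):
    # Hypothetical pre-agreed rule: fee rate descending, txid as tiebreak.
    # Any rule works, as long as every miner applies exactly the same one.
    return sorted(mempool, key=lambda tx: (-tx['feerate'], tx['txid']))

def verify_cutoff(header_root, cutoff, coinbase_txid, my_mempool):
    # Receiver side: rebuild the root from its own mempool plus the
    # announced cutoff, and compare against the published header.
    txids = [coinbase_txid] + [tx['txid'] for tx in canonical_order(my_mempool)[:cutoff]]
    return merkle_root(txids) == header_root

mempool = [{'txid': dsha(bytes([i])), 'feerate': 10 + i % 7} for i in range(50)]
coinbase = dsha(b'coinbase')
cutoff = 30  # "how far down the list" the miner went
published = merkle_root([coinbase] + [t['txid'] for t in canonical_order(mempool)[:cutoff]])
print(verify_cutoff(published, cutoff, coinbase, mempool))  # True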
legendary
Activity: 1400
Merit: 1013
August 08, 2014, 11:23:40 AM
the question is, where is that point?

This is not a question we can answer. The right approach is to build a machine for constantly discovering the answer to that question in real time, a.k.a. a market.

the other thing i don't understand is doesn't the miner who has received a valid header from another miner still have to wait for the tx's themselves to show up so he can remove them from the UTXO set before constructing the next block?

Presumably you assume the miners have already seen the transactions before they see the Merkle tree, and they've seen the Merkle tree before they see the header.

People who want their transactions processed, and miners who want their headers to propagate, have an incentive to make sure it is so.
legendary
Activity: 1764
Merit: 1002
August 08, 2014, 10:42:08 AM
It would also mean that the current incentive to keep block sizes small due to increased orphaning probability would be removed ... and the block size limit debate is back on the table with more urgency.
No matter how much they optimize block creation, somebody (not necessarily the same entity that does the hashing) has to build the Merkle tree, and whoever they are, they can't process an infinite number of transactions per second.

yes, i've been asking about this on Reddit w/o an answer yet.  there must be some point at which the time required to construct a Merkle tree becomes cumbersome.  the question is, where is that point?
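
A quick measurement sketch (random 32-byte stand-ins for txids) suggests that point is very far out: building the tree costs about 2n double-SHA256 calls, so it scales linearly and stays cheap next to transaction validation and propagation.

Code:
import hashlib, os, time

def dsha(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    layer = list(txids)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])   # duplicate the odd leaf, bitcoin-style
        layer = [dsha(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# A tree over n leaves needs roughly 2n double-SHA256 calls, i.e. linear cost.
for n in (1_000, 10_000, 100_000):
    leaves = [os.urandom(32) for _ in range(n)]
    t0 = time.perf_counter()
    merkle_root(leaves)
    print(f"{n:>7} txs: {time.perf_counter() - t0:.3f} s")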

As long as the ability of the network to process transactions isn't infinite, there will be some equilibrium price where the supply curve for transaction processing intersects with the demand curve.

It may just be that the price of transaction processing dropped a few orders of magnitude, which is a good thing.

The ability to provide the same service at a lower price makes the network more useful.

the other thing i don't understand is doesn't the miner who has received a valid header from another miner still have to wait for the tx's themselves to show up so he can remove them from the UTXO set before constructing the next block?
legendary
Activity: 1400
Merit: 1013
August 08, 2014, 10:04:04 AM
It would also mean that the current incentive to keep block sizes small due to increased orphaning probability would be removed ... and the block size limit debate is back on the table with more urgency.
No matter how much they optimize block creation, somebody (not necessarily the same entity that does the hashing) has to build the Merkle tree, and whoever they are, they can't process an infinite number of transactions per second.

As long as the ability of the network to process transactions isn't infinite, there will be some equilibrium price where the supply curve for transaction processing intersects with the demand curve.

It may just be that the price of transaction processing dropped a few orders of magnitude, which is a good thing.

The ability to provide the same service at a lower price makes the network more useful.
full member
Activity: 154
Merit: 100
Is there life on Mars?
August 08, 2014, 09:38:34 AM
Gold is like Bitcoin: it also has those rapid movements up and down. I think they're highly influenced by the state of the western economies! The downside of gold is that its bubbles aren't getting bigger and bigger every time!
legendary
Activity: 3430
Merit: 3080
August 08, 2014, 07:32:30 AM
Headers-only block propagation could also make some room to examine the effects of reducing the block interval. Who knows, we could end up with 1MB blocks @ 2 blocks/min
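
Back-of-envelope, using the post's numbers: that is not a small tweak but a 20x capacity jump over today's 1MB-per-10-minutes schedule.

Code:
# Back-of-envelope throughput from the numbers in the post above.
today = 1 * (60 / 10)       # 1 MB every 10 minutes -> MB per hour
proposed = 1 * 2 * 60       # 1 MB blocks, 2 blocks per minute -> MB per hour
print(today, proposed, proposed / today)  # 6.0 120 20.0 -> a 20x capacity jump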
legendary
Activity: 1764
Merit: 1002
August 08, 2014, 07:19:03 AM

What's he talking about exactly from a  technical standpoint?

We will need to wait for him to write up a blog summary, but my interpretation is that it is a method of propagating "abbreviated" blocks through the network which will always be the same size no matter how much Bitcoin tx volume occurs (within reason!). The full blocks would follow separately to those nodes which want them, perhaps to be known as "archive" nodes. It means that mining can happen fast on the next block without waiting for the slow propagation and verification of larger and larger blocks. It is a more sophisticated extension of the headers-first proposals.
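
A sketch of what such a constant-size announcement might carry. The field choices are guesses for illustration, since the real design had not been published at the time of this post:

Code:
import hashlib
import struct
from dataclasses import dataclass

@dataclass
class AbbreviatedBlock:
    """Hypothetical constant-size block announcement. The variable-size
    part (the tx list) never travels with it; peers reconstruct it from
    their own mempools using the digest."""
    header: bytes          # the usual 80-byte block header
    tx_count: int          # lets peers sanity-check their reconstruction
    mempool_digest: bytes  # fixed-size set digest (stand-in for an IBLT)

    def serialize(self) -> bytes:
        return self.header + struct.pack('<I', self.tx_count) + self.mempool_digest

# Same wire size whether the block holds 25 or 25,000 transactions.
msg = AbbreviatedBlock(b'\x00' * 80, 2500, hashlib.sha256(b'set digest').digest())
print(len(msg.serialize()), 'bytes')  # 116, regardless of tx volume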

Fantastic if it can be done. To the moon and all that...

It would also mean that the current incentive to keep block sizes small due to increased orphaning probability would be removed ... and the block size limit debate is back on the table with more urgency.

Good point.

But doesn't constructing a huge Merkle tree take longer than a small one?
newbie
Activity: 16
Merit: 0
August 08, 2014, 07:15:12 AM
that's not in sync, gold never touches bitcoin.
i think that's all for the moment.