
Topic: Gold collapsing. Bitcoin UP. - page 306.

legendary
Activity: 1764
Merit: 1002
May 29, 2015, 03:42:09 PM
...
The UTXO constraint may never be solved in an acceptable (sub)linear way, or the solution(s) could for political reasons never be implemented in BTC.
...
Almost certainly 'never' by any realistic definition of various things.
...
Solving 'the UTXO problem' would require what is by most definitions 'magic'.  Perhaps some future quantum-effect storage, communications, and processing schemes could 'solve' the problem but I'm not expecting to pick up such technology at Fry's by the next holiday season (Moore's law notwithstanding.)

A comment from chriswilmer got me thinking…

The UTXO set is actually bounded. The total number of satoshis that will ever exist is

   (21x10^6) x (10^8) = 2.1 x 10^15 = 2.1 "peta-sats"
...
...
OK, now let's be reasonable!  Let's assume that 10 billion people on earth each control about 4 unspent outputs on average.  That's a total of 40 billion outputs, or

    (40 x 10^9) x (65 bytes) = 2.6 terabytes

With these assumptions, it now only takes about 20 of those SD cards to store the UTXO set:

    (2.6 x 10^12) / (128 x 10^9) = 20.3,

or, three 1-terabyte SSDs, for a total cost of about $1,500.  
...
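As a sanity check on those figures, here is a small back-of-envelope sketch; the 10-billion-user count, 4 outputs per user, 65-byte output size, and drive capacities are the assumptions from the quote above, not measurements:

Code:
# Back-of-envelope sizing for the scenario above.  The user count, outputs per
# user, 65-byte output size, and drive capacities are assumed figures from the
# quoted post, not measurements.
USERS = 10_000_000_000
OUTPUTS_PER_USER = 4
BYTES_PER_OUTPUT = 65

SD_CARD_BYTES = 128e9   # 128 GB SD card
SSD_BYTES = 1e12        # 1 TB SSD

total_outputs = USERS * OUTPUTS_PER_USER
utxo_bytes = total_outputs * BYTES_PER_OUTPUT

print(f"UTXO entries : {total_outputs:.1e}")               # 4.0e+10
print(f"UTXO size    : {utxo_bytes / 1e12:.1f} TB")        # 2.6 TB
print(f"128 GB cards : {utxo_bytes / SD_CARD_BYTES:.1f}")  # 20.3
print(f"1 TB SSDs    : {utxo_bytes / SSD_BYTES:.1f}")      # 2.6, i.e. three drives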

I have thought about this bounding (mostly in the context of the current rather awkward/deceptive 'unspendable dust' settings).  I think there are currently, and probably will be for quite some time, some big problems with this rosy picture:

 - UTXO is changing in real time through its entire population.  This currently necessitates (as I understand things) a rather more sophisticated data structure than something mineable like the blockchain.  UTXO is in RAM and under the thing that replaced BDB (forgot the name of that database at the moment) because seeks, inserts, and deletes are bus intensive and, again, in constant flux.

Agreed.  The UTXO can be thought of as "hot" storage that's continually being updated, while the blockchain can be thought of as "cold" storage that does not have the same requirements for random memory reads and writes.  However, the UTXO doesn't need to sit entirely in RAM (e.g., the uncompressed UTXO set is, AFAIK, around 2 GB, but bitcoind runs without problem on machines with less RAM).  

Quote
...but would be interested to see a proof-of-concept, simulator, prototype, etc.

Agreed.  What I'm curious about is the extent to which the UTXO database could be optimized algorithmically and with custom hardware.  

Consider the above scenario where 10 billion people control on average 4 unspent outputs (40 billion coins), giving us a UTXO set approximately 2.6 TB in size.  Now, let's assume that we sort these coins perfectly and write them to a database.  Since they're sorted, we can find any coin using binary search in no more than 36 read operations (about 65 bytes each):

   log2(40x10^9) = 36  

Rough numbers: A typical NAND FLASH chip permits random access reads within about 30 us, a typical NOR FLASH chip within about 200 ns, and perhaps less than 20 ns for SDRAM, so it takes about

   36 x 30 us = 1,080 us (NAND FLASH)
   36 x 200 ns = 7.2 us (NOR FLASH)
   36 x 20 ns = 0.72 us (SDRAM)

to find a particular coin if there are 40 billion of them.  If we commit 10% of our time to looking up coins, then to match Visa's average rate of 2,000 TPS we need to be able to find a particular coin in

   (1 sec x 10%) / (2000 /sec) = 50 us.  
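For the curious, the lookup arithmetic above is easy to reproduce; here is a minimal Python sketch, assuming the same rough per-read latencies quoted in the text (ballpark figures, not benchmarks of any particular part):

Code:
import math

# Rough lookup-cost estimate for the sorted-UTXO thought experiment above.
# Per-read latencies are the assumed ballpark figures from the text.
N_COINS = 40e9                                      # 10 billion users x 4 outputs
reads_per_lookup = math.ceil(math.log2(N_COINS))    # binary-search depth, ~36

latency_s = {             # seconds per random read (assumed)
    "NAND FLASH": 30e-6,
    "NOR FLASH": 200e-9,
    "SDRAM": 20e-9,
}

for tech, t in latency_s.items():
    print(f"{tech:10s}: {reads_per_lookup * t * 1e6:8.2f} us per lookup")

# Budget if we spend 10% of each second on lookups at a Visa-like 2,000 TPS:
budget_s = (1.0 * 0.10) / 2000
print(f"{'budget':10s}: {budget_s * 1e6:8.2f} us per lookup")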

My gut tells me that looking up coins isn't too daunting a problem, even if 10 billion people each control 4 coins and, in aggregate, make 2,000 transactions per second.  

...Of course, the UTXO set is constantly evolving.  As coins get spent, we have to mark them as spent and then eventually erase them from the database, and we have to add the newly created coins.  If we assume the typical TX creates 2 new coins, then this means we need to write about

    (65 bytes per coin) x (2 coins per TX) x (2000 TXs per sec) = 0.26 MB/sec

Again, this isn't a demanding rate.  Even an SD card like the SanDisk Extreme Pro has a write speed of up to 95 MB/s.

Of course, this is the speed for sequential writes, and we'll need to do plenty of (slower) random writes and erase operations, but note that 0.26 MB/s means that only  

    0.26 MB / 2.6 TB = 0.00001 %

of our database is modified each second, or about 1% per day.  
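The write-side arithmetic falls out the same way.  A minimal sketch, again assuming 65-byte coins, 2 new coins per transaction, 2,000 TPS, and the 2.6 TB set from above:

Code:
# Rough write-load estimate for the same scenario, using the assumed figures
# from the text: 65-byte coins, 2 new coins per TX, 2,000 TPS, 2.6 TB UTXO set.
BYTES_PER_COIN = 65
NEW_COINS_PER_TX = 2
TPS = 2000
UTXO_BYTES = 2.6e12

write_rate = BYTES_PER_COIN * NEW_COINS_PER_TX * TPS   # bytes per second
frac_per_second = write_rate / UTXO_BYTES

print(f"write rate       : {write_rate / 1e6:.2f} MB/s")            # 0.26 MB/s
print(f"modified per sec : {frac_per_second * 100:.5f} %")          # 0.00001 %
print(f"modified per day : {frac_per_second * 86400 * 100:.1f} %")  # ~0.9 %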


My questions, then, are:

  - To what extent can we develop algorithms to optimize the UTXO database problem?  
  - What is the best hardware design for such a database?

Mike Hearn on reddit:

If the block creates the outputs that it itself spends, all those outputs will be in the RAM cache, so they will be effectively free to check.

LevelDB is very fast, even on a spinning hard disk. You can't assume 1 UTXO = 1 seek.

https://github.com/google/leveldb

readrandom   :      16.677 micros/op;  (approximately 60,000 reads per second)

60,000 reads per second, for random access.

This isn't a problem. Gavin is being highly conservative and I think panicked a little - LevelDB effectively sorts keys by hotness (due to the leveled SSTable structure). So old/dormant coins will eventually fall to the bottom of the pile and be a bit slower to access, with the benefit that recent coins are near the top and are cheaper to access. People who try and create then spend coins over and over again to waste IOPs are going to be very frustrated by their lack of success.
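For anyone who would rather reproduce a readrandom-style figure than take db_bench's word for it, here is a rough sketch using the third-party plyvel binding for LevelDB; the database path, key layout, and record size are made up for illustration, and the numbers will of course differ from the C++ db_bench results quoted above:

Code:
# Rough readrandom-style micro-benchmark of LevelDB from Python, in the spirit
# of the db_bench figure quoted above.  Assumes the third-party plyvel binding
# (pip install plyvel); path, key layout, and record size are made up.
import os
import random
import time

import plyvel

DB_PATH = "/tmp/utxo_bench_db"   # hypothetical location
N = 100_000                      # keep it small so the sketch runs in seconds

db = plyvel.DB(DB_PATH, create_if_missing=True)

# Load N dummy "outpoint -> coin" records of roughly UTXO-entry size.
with db.write_batch() as batch:
    for i in range(N):
        batch.put(b"txid:%032d" % i, os.urandom(65))

# Random point reads, loosely analogous to db_bench's readrandom workload.
keys = [b"txid:%032d" % random.randrange(N) for _ in range(N)]
start = time.perf_counter()
for key in keys:
    db.get(key)
elapsed = time.perf_counter() - start

print(f"{elapsed / N * 1e6:.3f} micros/op; ~{N / elapsed:,.0f} reads per second")
db.close()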

And now I'm off to the airport for a two week holiday. Have fun guys! Smiley
legendary
Activity: 1260
Merit: 1008
May 29, 2015, 03:35:19 PM
Blockstream hired Rusty Russell to work on a lightning network implementation. This is going to be big.

Now it appears that the development efforts for sidechains and lightning networks are coming together. Russell, who joined Blockstream a few weeks ago, is working on lightning networks, and one of his first actions was to set up a Blockstream-hosted mailing list for “Discussion of the development of the Lightning Network, a caching layer for bitcoin.” The new mailing list archives are freely accessible.

“They hired me,” Russell said on Reddit. “We agreed I’d be working on developing lightning. I set up a mailing list and am developing a toy prototype to explore the ideas. Will put on github once that’s ready (two weeks?) but it’s a long long way from anything someone could use. I’m excited about lightning, but it’s a marathon, not a sprint.”

Explain to me why requiring centralized lightning nodes to be up 24/7 is a good thing?

To buy your coffee? /sarcasm

I think that increasing on-chain bitcoin network capacity is necessary (even for a lightning network implementation, 3 tps average isn't enough(*)); that said, I believe that a mainly-offline, sync-once-in-a-while solution to increase system capacity even more is desirable.

This solution would be the result of a trade-off between different aspects: centralization, capacity, privacy, etc. etc.

As I already said I won't use LN to buy a house, but I will definitely use it for my daily expenses.

Sure, if we were able to find some kind of mechanism that did not require a softfork, it would be amazing.
Unfortunately, like sidechains, LN also needs a new opcode (and a solution to tx malleability) to be properly implemented.

(*) on this podcast:

https://letstalkbitcoin.com/blog/post/epicenter-bitcoin-joseph-poon-and-tadge-dryja-scalability-and-the-lightning-network

Joseph Poon & Tadge Dryja, authors of the LN white paper, explicitly state that the max block size has to be increased to implement what they envision.

legendary
Activity: 1764
Merit: 1002
May 29, 2015, 03:09:44 PM
Blockstream hired Rusty Russell to work on a lightning network implementation. This is going to be big.

Now it appears that the development efforts for sidechains and lightning networks are coming together. Russell, who joined Blockstream a few weeks ago, is working on lightning networks, and one of his first actions was to set up a Blockstream-hosted mailing list for “Discussion of the development of the Lightning Network, a caching layer for bitcoin.” The new mailing list archives are freely accessible.

“They hired me,” Russell said on Reddit. “We agreed I’d be working on developing lightning. I set up a mailing list and am developing a toy prototype to explore the ideas. Will put on github once that’s ready (two weeks?) but it’s a long long way from anything someone could use. I’m excited about lightning, but it’s a marathon, not a sprint.”

Explain to me why requiring centralized lightning nodes to be up 24/7 is a good thing?
legendary
Activity: 1764
Merit: 1002
May 29, 2015, 03:05:28 PM
Dow down - 115.24.

All hands on deck.
legendary
Activity: 1260
Merit: 1008
May 29, 2015, 02:58:33 PM
Blockstream hired Rusty Russell to work on a lightning network implementation. This is going to be big.

Now it appears that the development efforts for sidechains and lightning networks are coming together. Russell, who joined Blockstream a few weeks ago, is working on lightning networks, and one of his first actions was to set up a Blockstream-hosted mailing list for “Discussion of the development of the Lightning Network, a caching layer for bitcoin.” The new mailing list archives are freely accessible.

“They hired me,” Russell said on Reddit. “We agreed I’d be working on developing lightning. I set up a mailing list and am developing a toy prototype to explore the ideas. Will put on github once that’s ready (two weeks?) but it’s a long long way from anything someone could use. I’m excited about lightning, but it’s a marathon, not a sprint.”
hero member
Activity: 574
Merit: 500
May 29, 2015, 01:35:08 PM
With the upcoming collapse of mainstream Western fiat currencies, including the U.S. dollar, cryptocurrencies could very well save us.
legendary
Activity: 1372
Merit: 1000
May 29, 2015, 01:24:24 PM
Some call it altruistic, but I prefer to think they will use greed for the greater good.

 

Nash was a great man.  Watch the video:

https://twitter.com/cypherdoc2/status/602533856290349056
Thanks,
I had seen that, but watched it again; we are interdependent, it's great. I also revised my post as it didn't express my thought as intended. (I love my predictive text ;-)
legendary
Activity: 3108
Merit: 1531
yes
May 29, 2015, 01:08:10 PM
Gold preparing to take the next dive.

maybe the last one

but probably not.

I firmly believe we will see <$900 gold and eventually <$500 gold


yes, gold is useless in the new digital age.

The same goes for paper. Gold has had its uses and its day; eventually....
hero member
Activity: 574
Merit: 500
May 29, 2015, 01:05:41 PM
Bitgold looks good,

I'm saving my $$ in gold,

The only "issue" is the time of "transactions", are a bit slow, imho.

(24 - 48 hrs)
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
May 29, 2015, 12:24:27 PM
Gold preparing to take the next dive.

maybe the last one

but probably not.

I firmly believe we will see <$900 gold and eventually <$500 gold


yes, gold is useless in the new digital age.

Kindly tell me, when gold goes down in price, how does bitcoin benefit from this?

Thank you.

People try to figure out why gold is collapsing, then they learn about bitcoin and how it's like gold but better...

Bitcoin: Divisible.  Gold: ever try to take $42.00 worth of gold off of a 1 oz gold coin? Fuck no! That's just crazy!

Bitcoin: Impossible to counterfeit.  Gold: fake gold-plated coins and bars are known to be on the market, and it's nearly impossible to tell without drilling a hole in your bar / coin.

Bitcoin: Scarce.  Gold: who the fuck knows how many megatons of gold are hidden in the next asteroid?

Bitcoin: Portable.  Gold: ever try to cross a border with $10,000 of gold in your ass??

Bitcoin: Fungible, 1 BTC = 1 BTC always.  Gold: one gold coin doesn't necessarily have the same value as another gold coin; in fact they all have slightly different values.

Bitcoin: Shiny as fuck.  Gold: only shiny because it's been processed and polished, a long and expensive process.  


Side note: gold has a $6,946,890,374,198 market cap.

Gold isn't threatening to displace WU.
Gold can't be used online.
Gold can't be used for micropayments.
Gold can't survive bitcoin, period, the end.
legendary
Activity: 1162
Merit: 1007
May 29, 2015, 12:21:28 PM
...
The UTXO constraint may never be solved in an acceptable (sub)linear way, or the solution(s) could for political reasons never be implemented in BTC.
...
Almost certainly 'never' by any realistic definition of various things.
...
Solving 'the UTXO problem' would require what is by most definitions 'magic'.  Perhaps some future quantum-effect storage, communications, and processing schemes could 'solve' the problem but I'm not expecting to pick up such technology at Fry's by the next holiday season (Moore's law notwithstanding.)

A comment from chriswilmer got me thinking…

The UTXO set is actually bounded. The total number of satoshis that will ever exist is

   (21x10^6) x (10^8) = 2.1 x 10^15 = 2.1 "peta-sats"
...
...
OK, now let's be reasonable!  Let's assume that 10 billion people on earth each control about 4 unspent outputs on average.  That's a total of 40 billion outputs, or

    (40 x 10^9) x (65 bytes) = 2.6 terabytes

With these assumptions, it now only takes about 20 of those SD cards to store the UTXO set:

    (2.6 x 10^12) / (128 x 10^9) = 20.3,

or, three 1-terabyte SSDs, for a total cost of about $1,500.  
...

I have thought about this bounding (mostly in the context of the current rather awkward/deceptive 'unspendable dust' settings).  I think there are currently, and probably will be for quite some time, some big problems with this rosy picture:

 - UTXO is changing in real time through its entire population.  This currently necessitates (as I understand things) a rather more sophisticated data structure than something mineable like the blockchain.  UTXO is in RAM and under the thing that replaced BDB (forgot the name of that database at the moment) because seeks, inserts, and deletes are bus intensive and, again, in constant flux.

Agreed.  The UTXO can be thought of as "hot" storage that's continually being updated, while the blockchain can be thought of as "cold" storage that does not have the same requirements for random memory reads and writes.  However, the UTXO doesn't need to sit entirely in RAM (e.g., the uncompressed UTXO set is, AFAIK, around 2 GB, but bitcoind runs without problem on machines with less RAM).  

Quote
...but would be interested to see a proof-of-concept, simulator, prototype, etc.

Agreed.  What I'm curious about is the extent to which the UTXO database could be optimized algorithmically and with custom hardware.  

Consider the above scenario where 10 billion people control on average 4 unspent outputs (40 billion coins), giving us a UTXO set approximately 2.6 TB in size.  Now, let's assume that we sort these coins perfectly and write them to a database.  Since they're sorted, we can find any coin using binary search in no more than 36 read operations (about 65 bytes each):

   log2(40x10^9) = 36  

Rough numbers: A typical NAND FLASH chip permits random access reads within about 30 us, a typical NOR FLASH chip within about 200 ns, and perhaps less than 20 ns for SDRAM, so it takes about

   36 x 30 us = 1,080 us (NAND FLASH)
   36 x 200 ns = 7.2 us (NOR FLASH)
   36 x 20 ns = 0.72 us (SDRAM)

to find a particular coin if there are 40 billion of them.  If we commit 10% of our time to looking up coins, then to match Visa's average rate of 2,000 TPS we need to be able to find a particular coin in

   (1 sec x 10%) / (2000 /sec) = 50 us.  

My gut tells me that looking up coins isn't too daunting a problem, even if 10 billion people each control 4 coins and, in aggregate, make 2,000 transactions per second.  

...Of course, the UTXO set is constantly evolving.  As coins get spent, we have to mark them as spent and then eventually erase them from the database, and we have to add the newly created coins.  If we assume the typical TX creates 2 new coins, then this means we need to write about

    (65 bytes per coin) x (2 coins per TX) x (2000 TXs per sec) = 0.26 MB/sec

Again, this isn't a demanding rate.  Even an SD card like the SanDisk Extreme Pro has a write speed of up to 95 MB/s.

Of course, this is the speed for sequential writes, and we'll need to do plenty of (slower) random writes and erase operations, but note that 0.26 MB/s means that only  

    0.26 MB / 2.6 TB = 0.00001 %

of our database is modified each second, or about 1% per day.  


My questions, then, are:

  - To what extent can we develop algorithms to optimize the UTXO database problem?  
  - What is the best hardware design for such a database?
legendary
Activity: 1386
Merit: 1027
Permabull Bitcoin Investor
May 29, 2015, 12:12:15 PM
Gold preparing to take the next dive.

maybe the last one

but probably not.

I firmly believe we will see <$900 gold and eventually <$500 gold


yes, gold is useless in the new digital age.

Kindly tell me, when gold goes down in price, how does bitcoin benefit from this?

Thank you.
legendary
Activity: 1764
Merit: 1002
May 29, 2015, 12:07:55 PM
Gold preparing to take the next dive.

maybe the last one

but probably not.

I firmly believe we will see <$900 gold and eventually <$500 gold


yes, gold is useless in the new digital age.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
May 29, 2015, 12:03:14 PM
Gold preparing to take the next dive.

maybe the last one

but probably not.

I firmly believe we will see <$900 gold and eventually <$500 gold
full member
Activity: 280
Merit: 100
May 29, 2015, 11:36:34 AM
Gold preparing to take the next dive.

maybe the last one
legendary
Activity: 1036
Merit: 1000
May 29, 2015, 11:15:57 AM
The mempool is rarely uniform across all nodes.  It would be impossible to reconstruct which unconfirmed tx's a node would be missing.

OK, good point. I thought maybe having a time cutoff where no new tx are added to the first mempool after 10 minutes would help, but I guess there's no way to know for sure. That's the whole point of a consensus network after all. Oh well, there goes that shower thought. Thanks for the quick reply.
legendary
Activity: 1764
Merit: 1002
May 29, 2015, 11:12:39 AM
Perhaps some of the coders here can help me understand something.

Why not have a new "mempool" be created every 10 minutes, so that if it takes 30 minutes to find a block, the winning miner will just take all the valid transactions in the first mempool, no matter how huge the total "blocksize" would be, and put only the hash of those transactions into the block? That way the block itself would be tiny, so propagation wouldn't be an issue. All miners and other full nodes would have the first mempool transactions already(?), those being set in stone, so they would just have to check that the hash matches the set of all valid tx in the first mempool. Then the next winning miner would take all the valid tx in the second mempool, etc.

Of course, if a miner finds the next block in less than 10 minutes and there is no mempool queued up yet, this doesn't work. Perhaps difficulty would have to be adjusted to ensure miners were usually a little bit behind the curve.

This seems to shift the burden from bandwidth to CPU power for checking the hash, but as long as miners are behind the curve it seems to avoid the "race" where lower-bandwidth miners/nodes are at a disadvantage.

Does this, or anything like it, make any sense?

The mempool is rarely uniform across all nodes.  It would be impossible to reconstruct which unconfirmed tx's a node would be missing.

Your idea is a variation on IBLT, but in that case nodes can reconstruct their missing tx's due to the math of the IBLT.

And your idea would totally render SPV clients unusable, as they rely on retrieving the Merkle tree path (together with the block header) for the specific transactions they are interested in.
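To make the "put only the hash of those transactions into the block" idea concrete, here is a minimal, purely hypothetical sketch of committing to a pre-agreed transaction set with a single digest.  As noted above, a bare hash commitment neither lets a node recover transactions it is missing (the gap IBLT-style reconciliation is meant to close) nor gives SPV clients the per-transaction Merkle paths they rely on:

Code:
import hashlib

# Minimal, hypothetical sketch of committing to a pre-agreed transaction set
# with a single digest (the "only the hash of those transactions" idea above).
# This is NOT how Bitcoin blocks work; it only illustrates the proposal.

def txid(raw_tx: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses for transaction ids."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

def commit_to_tx_set(raw_txs: list[bytes]) -> bytes:
    """Hash over the sorted txids: identical sets give identical commitments."""
    ids = sorted(txid(tx) for tx in raw_txs)
    return hashlib.sha256(b"".join(ids)).digest()

# Two nodes holding the same mempool snapshot derive the same commitment,
# regardless of the order they received the transactions in...
mempool_a = [b"tx1-bytes", b"tx2-bytes", b"tx3-bytes"]
mempool_b = [b"tx3-bytes", b"tx1-bytes", b"tx2-bytes"]
assert commit_to_tx_set(mempool_a) == commit_to_tx_set(mempool_b)

# ...but a node missing even one transaction can neither verify nor reconstruct
# the set from the hash alone, which is the gap IBLT-style reconciliation and
# per-tx Merkle paths (for SPV) are meant to fill.
assert commit_to_tx_set(mempool_a[:2]) != commit_to_tx_set(mempool_a)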
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
May 29, 2015, 11:08:44 AM
Perhaps some of the coders here can help me understand something.

Why not have a new "mempool" be created every 10 minutes, so that if it takes 30 minutes to find a block, the winning miner will just take all the valid transactions in the first mempool, no matter how huge the total "blocksize" would be, and put only the hash of those transactions into the block? That way the block itself would be tiny, so propagation wouldn't be an issue. All miners and other full nodes would have the mempool transactions already(?), so they would just have to check that the hash matches the set of all valid tx in the first mempool. Then the next winning miner would take all the valid tx in the second mempool, etc.

Of course, if a miner finds the next block in less than 10 minutes and there is no mempool queued up yet, this doesn't work. Perhaps difficulty would have to be adjusted to ensure miners were usually a little bit behind the curve.

Does this, or anything like it, make any sense?

I think that's more or less how it currently works when there's a backlog of unconfirmed TX.

This is fine for now, but at some point, if there are a lot of TX, the mempool will just grow and grow, and TX will confirm slower and slower.

You're saying miners currently sometimes only put the hash of all the tx in a block, instead of the tx themselves? Huh

I misread; yeah, no, they put the full TX.

I don't see how knowing which TX to include in the next block is going to help, though.
legendary
Activity: 1036
Merit: 1000
May 29, 2015, 11:06:09 AM
Perhaps some of the coders here can help me understand something.

Why not have a new "mempool" be created every 10 minutes, so that if it takes 30 minutes to find a block, the winning miner will just take all the valid transactions in the first mempool, no matter how huge the total "blocksize" would be, and put only the hash of those transactions into the block? That way the block itself would be tiny, so propagation wouldn't be an issue. All miners and other full nodes would have the mempool transactions already(?), so they would just have to check that the hash matches the set of all valid tx in the first mempool. Then the next winning miner would take all the valid tx in the second mempool, etc.

Of course, if a miner finds the next block in less than 10 minutes and there is no mempool queued up yet, this doesn't work. Perhaps difficulty would have to be adjusted to ensure miners were usually a little bit behind the curve.

Does this, or anything like it, make any sense?

I think that's more or less how it currently works when there's a backlog of unconfirmed TX.

This is fine for now, but at some point, if there are a lot of TX, the mempool will just grow and grow, and TX will confirm slower and slower.

You're saying miners currently sometimes only put the hash of all the tx in a block, instead of the tx themselves? Huh