
Topic: Your bets for 2014

sr. member
Activity: 462
Merit: 250
Clown prophet
January 06, 2013, 02:15:43 PM
#27
Why not what? Why aren't a regular home computer's resources enough to handle the load of a widely used payment system? Should I explain this to you? Do you think PayPal handles its system load on the CEO's desktop alone? I think they have a number of full 42U racks of servers, and not all in one DC, with the load balanced between them.

And here, every Bitcoin node replays the full system load: receive, verify, store, broadcast.
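
A minimal sketch of what that replay means (illustrative Python, not the actual client code): every node verifies, stores, and relays every transaction, so adding nodes multiplies the total work instead of dividing it.

Code:
# Illustrative sketch: in Bitcoin's gossip design, every node repeats the
# full receive/verify/store/broadcast work for every transaction.
class Node:
    def __init__(self, name):
        self.name = name
        self.seen = set()       # stands in for the local chain/mempool
        self.peers = []

    def verify(self, tx):
        return True             # placeholder for the full validation work

    def receive(self, tx):
        if tx in self.seen:
            return              # already processed; stops the relay loop
        if self.verify(tx):
            self.seen.add(tx)   # store
            for peer in self.peers:
                peer.receive(tx)  # broadcast: each peer repeats every step

nodes = [Node(f"node{i}") for i in range(5)]
for n in nodes:
    n.peers = [p for p in nodes if p is not n]

nodes[0].receive("tx1")
print(sum("tx1" in n.seen for n in nodes))  # 5 -- all nodes did the full work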

Are you all really so stupid here?
legendary
Activity: 1288
Merit: 1076
January 06, 2013, 11:40:40 AM
#26
Quote
Do you really think a common home computer can serve a worldwide payment system?

Why not?   Computers are quite powerful and humanity is not that large.
sr. member
Activity: 462
Merit: 250
Clown prophet
January 06, 2013, 09:41:17 AM
#25
Quote
I'm not talking about a header-only version of the client here. I'm talking about which part of the index it is useful to load into memory.

Each Bitcoin node does the whole work of serving the system: receiving, verifying, storing, and broadcasting transactions and blocks. This work is not divided between participants; everyone does the same over-redundant work.

So when Bitcoin grows to serve millions of transactions per day, every client will have to serve all of those transactions.

Do you really think a common home computer can serve a worldwide payment system? And that all we need is to reduce the index size loaded into memory?

You don't know what you're talking about. You are all full of illusions and disconnected from reality.

And reality is disappointing: Bitcoin will hit huge bottlenecks in the not-so-distant future while Bitcoin Foundation members kid around with their VIP status.
hero member
Activity: 490
Merit: 500
January 05, 2013, 04:12:41 PM
#24
I'm developing the DIANNA chain to test DHT chain storage there.
sr. member
Activity: 462
Merit: 250
Clown prophet
January 05, 2013, 04:04:21 PM
#23
pent, you always break my pessimism Grin

But as of today this idea doesn't exist even as a draft.
hero member
Activity: 490
Merit: 500
January 05, 2013, 03:57:11 PM
#22
Ah, crutches... All right Smiley Let's see where it goes.

I see the future of the Bitcoin chain in some sort of distributed hash table with good redundancy to avoid losing information.

That way the main load could be put on the network layer instead of the SATA bus.
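
A minimal sketch of that idea (the Kademlia-style XOR metric and the replication factor are illustrative assumptions, not a worked-out design): each block is stored on the k nodes whose IDs hash closest to the block's hash, so every node holds only a slice of the chain and redundancy covers nodes dropping out.

Code:
# Hypothetical DHT-style chain storage: shard blocks across nodes with
# k-fold redundancy, so lookups load the network instead of the local disk.
import hashlib

def h(data: str) -> int:
    return int.from_bytes(hashlib.sha256(data.encode()).digest(), "big")

def replica_nodes(block_hash: str, node_ids: list, k: int = 3) -> list:
    # The k nodes whose hashed IDs are nearest to the block's hash
    # (XOR distance, as in Kademlia-style DHTs) hold this block.
    target = h(block_hash)
    return sorted(node_ids, key=lambda n: h(n) ^ target)[:k]

nodes = [f"node{i}" for i in range(10)]
print(replica_nodes("block-000000abc123", nodes))
# Each block lives on 3 of the 10 nodes, so a node stores ~30% of the
# chain, and losing any two replicas still leaves one copy reachable.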
legendary
Activity: 1288
Merit: 1076
January 05, 2013, 03:51:00 PM
#21
Quote
All right. A client whose DB consists only of block headers receives a transaction. What must it do with it next?

Store it? No. It must be sure the transaction is valid before storing it, but it cannot check validity, since it doesn't have the full chain.

Broadcast it? No, for the same reason: if the transaction isn't valid, the client will be banned as a DoS source by the vanilla clients.

If a huge part of the network consists of such light clients, it will be paralyzed.

I'm not talking about a header-only version of the client here. I'm talking about which part of the index it is useful to load into memory.
hero member
Activity: 490
Merit: 500
January 05, 2013, 03:37:59 PM
#20
All right. A client whose DB consists only of block headers receives a transaction. What must it do with it next?

Store it? No. It must be sure the transaction is valid before storing it, but it cannot check validity, since it doesn't have the full chain.

Broadcast it? No, for the same reason: if the transaction isn't valid, the client will be banned as a DoS source by the vanilla clients.

If a huge part of the network consists of such light clients, it will be paralyzed.
legendary
Activity: 1288
Merit: 1076
January 05, 2013, 03:30:35 PM
#19
Quote
You know that spent transactions can be ignored, don't you?

At the cost of security, don't you know?

If a client receives a broadcast block/transaction from the network, it must check the transaction's previous outputs against the full block chain to know whether it is valid to broadcast further.

If a client burns old transactions, it will not be able to check new transactions for validity.

If it broadcasts an invalid transaction, it will be banned by vanilla clients.

So burning old block bodies makes a client vulnerable to Sybil and DoS attacks.

There is no security flaw in this. New transactions are not supposed to try to spend an already-spent output. So it makes sense to remove spent transactions from the index (only once they are buried under a few blocks, though). They will still be in the database, but accessing them will take more time.

Though I'm no database expert, I'm pretty sure an index does not have to be exhaustive.
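
A minimal sketch of that pruning idea (the data structures are illustrative, not the actual client's): keep only unspent outputs in the fast index, evict an entry once its spend is buried under a few blocks, and fall back to the slow full database for anything else.

Code:
# Illustrative sketch: an index holding only unspent outputs; spent ones
# are evicted once buried, while the full archive stays on disk.
MATURITY = 6  # confirmations before a spend counts as buried (assumption)

class UtxoIndex:
    def __init__(self):
        self.unspent = {}        # (txid, vout) -> value: the hot index
        self.pending_evict = []  # (spend_height, key) awaiting burial
        self.archive = {}        # stands in for the full on-disk database

    def add_output(self, key, value):
        self.unspent[key] = value
        self.archive[key] = value

    def spend(self, key, height):
        if key not in self.unspent:
            raise ValueError("double spend or unknown output")
        self.pending_evict.append((height, key))

    def on_new_block(self, height):
        # Drop spends now buried under MATURITY blocks from the hot index.
        keep = []
        for spend_height, key in self.pending_evict:
            if height - spend_height >= MATURITY:
                self.unspent.pop(key, None)   # gone from the fast path only
            else:
                keep.append((spend_height, key))
        self.pending_evict = keep

idx = UtxoIndex()
idx.add_output(("tx1", 0), 50)
idx.spend(("tx1", 0), height=100)
idx.on_new_block(height=106)
print(("tx1", 0) in idx.unspent, ("tx1", 0) in idx.archive)  # False True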
hero member
Activity: 490
Merit: 500
January 05, 2013, 03:26:56 PM
#18
Quote
You know that spent transactions can be ignored, don't you?

At the cost of security, don't you know?

If a client receives a broadcast block/transaction from the network, it must check the transaction's previous outputs against the full block chain to know whether it is valid to broadcast further.

If a client burns old transactions, it will not be able to check new transactions for validity.

If it broadcasts an invalid transaction, it will be banned by vanilla clients.

So burning old block bodies makes a client vulnerable to Sybil and DoS attacks, because it cannot tell whether what comes from the network is true or false.
legendary
Activity: 1288
Merit: 1076
January 05, 2013, 03:21:55 PM
#17
Quote
I'm no big expert in Berkeley DB, but it's obvious that one index record takes about 300 bytes of space: a SHA-256 hash plus some index data. Remind me of the true number if you please.

So... to get an index larger than 4 GB we need only 13M transactions in the whole chain. That is nothing for a widely used payment system. For Bitcoin, that is 180 days of blocks containing 500 transactions each.

So Bitcoin is about to grow? Hehe.

You know that spent transactions can be ignored, don't you?
sr. member
Activity: 462
Merit: 250
Clown prophet
January 05, 2013, 12:58:55 PM
#16
I'm no big expert in Berkeley DB, but it's obvious that one index record takes about 300 bytes of space: a SHA-256 hash plus some index data. Remind me of the true number if you please.

So... to get an index larger than 4 GB we need only 13M transactions in the whole chain. That is nothing for a widely used payment system. For Bitcoin, that is 180 days of blocks containing 500 transactions each.

So Bitcoin is about to grow? Hehe.
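
The arithmetic behind those figures, taking the 300-byte estimate at face value and assuming one block every ten minutes:

Code:
# Back-of-envelope check of the numbers above.
record_size = 300                  # bytes per index record (estimate above)
index_limit = 4_000_000_000        # "greater than 4 GB"
txs_needed = index_limit // record_size
print(f"{txs_needed:,} transactions")   # 13,333,333 -- the ~13M quoted

blocks_per_day = 24 * 60 // 10     # ~144 blocks at one per 10 minutes
txs_per_day = blocks_per_day * 500 # 500 transactions per block
print(txs_needed / txs_per_day)    # ~185 days, close to the 180 quoted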
legendary
Activity: 1428
Merit: 1021
January 05, 2013, 12:25:16 PM
#15
Quote
above $100/BTC - 17 (15.9%)
between $20 and $100/BTC - 67 (62.6%)

Nice bets for 2014 - you guys should buy those cheap coins now, right? Wink
sr. member
Activity: 462
Merit: 250
Clown prophet
January 05, 2013, 12:02:21 PM
#14
A growing number of unspent transactions means a growing key-lookup load on the database.

And when the index no longer fits in RAM, the software will perform key lookups by reading the index from the HDD (gigabytes of data on a single lookup).

Obviously, such conditions will require a good SCSI or SSD backing array.
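
A rough back-of-envelope on why lookups that miss RAM hurt so much (the latencies are typical order-of-magnitude figures, not measurements):

Code:
# Ballpark lookup throughput once the index no longer fits in RAM.
ram_probe = 100e-9   # ~100 ns for an in-memory index probe
ssd_read  = 100e-6   # ~100 us per random SSD read
hdd_seek  = 10e-3    # ~10 ms per random HDD seek

for name, latency in [("RAM", ram_probe), ("SSD", ssd_read), ("HDD", hdd_seek)]:
    print(f"{name}: ~{1 / latency:,.0f} lookups/s")
# RAM: ~10,000,000/s; SSD: ~10,000/s; HDD: ~100/s.  A transaction with a
# few inputs costs a few lookups, so an HDD-backed index caps a node at a
# few dozen verified transactions per second.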
sr. member
Activity: 462
Merit: 250
Clown prophet
January 05, 2013, 11:49:30 AM
#13
I'm something of an expert in maintaining high-load servers, including high-load databases. I see future bottlenecks before they even exist. Project owners pay me good money for these skills.
legendary
Activity: 1288
Merit: 1076
January 05, 2013, 11:38:09 AM
#12
Quote
The key is the index size, which keeps growing. The other key is the rate of newly broadcast transactions, each of which requires a database lookup to verify.

Once the index grows past roughly double the average RAM of a common computer, clients will start goxxxxxing on every incoming unspent transaction, because no disk-caching mechanism is effective at such data-to-RAM ratios.

So on every incoming unspent transaction, every client will perform an uncached disk lookup through gigabytes of data.

Hehe.

I'm no expert in databases, but it seems to me you don't know what you're talking about. Can someone confirm or deny?
sr. member
Activity: 462
Merit: 250
Clown prophet
January 05, 2013, 11:34:25 AM
#11
I mean no software optimizations will be effective on such conditions.

Gavin will have to publish minimum hardware req to run client. And those req will include good amount of RAM and good ssd disk.
sr. member
Activity: 462
Merit: 250
Clown prophet
January 05, 2013, 11:29:03 AM
#10
It doesn't matter how much data it loads into memory.

The key is the index size, which keeps growing. The other key is the rate of newly broadcast transactions, each of which requires a database lookup to verify.

Once the index grows past roughly double the average RAM of a common computer, clients will start goxxxxxing on every incoming unspent transaction, because no disk-caching mechanism is effective at such data-to-RAM ratios.

So on every incoming unspent transaction, every client will perform an uncached disk lookup through gigabytes of data.

Hehe.
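
A rough sketch of the caching claim (assuming lookups land uniformly at random over the index, which is roughly what hashed transaction IDs give you; the sizes are illustrative):

Code:
# With uniformly random keys, the page-cache hit rate is just RAM / index,
# so at ~double RAM, half of all lookups fall through to the disk.
ram_gb = 4                       # RAM available for caching (assumption)
index_gb = 8                     # index at ~double RAM, as in the post above
hit_rate = min(1.0, ram_gb / index_gb)
print(f"cache hit rate: {hit_rate:.0%}")        # 50%

hdd_seek = 10e-3                 # ~10 ms per uncached random read
lookups_per_sec = 1 / ((1 - hit_rate) * hdd_seek)
print(f"~{lookups_per_sec:.0f} lookups/s")      # ~200/s once misses dominate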
legendary
Activity: 1904
Merit: 1002
January 05, 2013, 05:15:56 AM
#9
Quote
But it causes a total hang of the system I/O scheduler on open, close, and sync.

Do you people have no brains? When the DB reaches the RAM size of a common computer, users would rather shoot themselves than use the vanilla client.

Have you tried the 0.8 prerelease? It reduces the working set (the part needed in memory) to around 150 MB with the current blockchain. It synced on my machine in under four hours without even keeping one core busy, and with the disk mostly idle (it was network-bound, which is the next area targeted for speedup). Someone ran it on an old Pentium 4 machine and it synced in under 6 hours (easily done overnight).

In other words, I know you love Satan, but there are enough truths to strike fear in people's hearts.  You don't need to make up lies.
sr. member
Activity: 462
Merit: 250
Clown prophet
January 04, 2013, 10:00:06 PM
#8
But it causes a total hang of the system I/O scheduler on open, close, and sync.

Do you people have no brains? When the DB reaches the RAM size of a common computer, users would rather shoot themselves than use the vanilla client.