
Topic: [Fundraising] Finish “Ultimate blockchain compression” - page 2. (Read 25515 times)

legendary
Activity: 1120
Merit: 1149
To be clear, I'm not sure that example would have the effect you're imagining. Creating a purposefully time-wasting block to validate would hurt, not help the miner. But presumably you meant broadcasting DoS transactions which, if they got into other miners' blocks, would slow down their block propagation. That's possible but not economical for reasons that I'll explain in a minute.

You can't assume that slowing down block propagation is a bad thing for a miner. First of all, block propagation is uneven - it'd be quite possible for large pools to have much faster propagation than the more decentralized solo/p2pool miners we want in the network. Remember, if you can reliably get your blocks to just over 50% of the hashing power, you're better off than if they reach more than that, because there are more fee-paying transactions for you to pick from, and because in the long run you drive difficulty down due to the work your competitors are wasting. Of course reliably getting to 50% isn't easy, but reliably getting to, say, no more than 80% is probably quite doable, especially if efficient UTXO set handling requires semi-custom setups with GPUs to parallelize it that most full-node operators don't have.

Secondly, if block propagation does become an issue, miners will preserve their income by switching to mining empty or near-empty blocks on top of the most recent block header they know about, until they finally get around to receiving the block in full. Unfortunately even with UTXO commitments they can still do that - just re-use the still-valid UTXO commitment from the previous block.(1) Obviously this has serious issues with regard to consensus... which is why I say over and over again that any UTXO commitment scheme needs to also have UTXO possession proofs as a part of it to make sure miners can't do that.

1) Modulo the fact that the UTXO set must change because a previously unspendable coinbase output became spendable; if we don't do explicit UTXO possession proofs we should think very carefully about whether or not a miner (or group of miners) can extract the info required to calculate the delta to the UTXO set that the coinbase created more efficiently than just verifying the block honestly. I'm not sure yet one way or another if this is in fact true.

But stepping back from that example, block propagation is why sub-second index construction/validation would be ideal. It places index operations in the same ballpark as the other verification tasks, meaning that it won't significantly affect the time it takes for blocks to propagate through the network. If index construction is done in parallel (using GPU acceleration, for example), it might not have any effect at all.

Rather than just quote numbers I'll explain where they come from:

There are two indices: the validating index containing unspent transactions keyed by txid, and the wallet index containing unspent outputs keyed by scriptPubKey.

Let T be the number of transactions with at least one unspent output, N the total number of unspent outputs, X the number of transactions in a new block, I the number of inputs, and O the number of outputs.

The number of hash operations required to perform an insert, delete, or update to the indices is 1, log(T) or log(N), and len(key) for the best, average, and worst case respectively. Since the keys of both indices involve hashed values (txid, and typically also the content of scriptPubKey), creating artificially long pathways through the index structure is a variant of the hash pre-image problem - you are essentially constructing transactions whose hash and output scriptPubKeys have certain values so as to occupy and branch specific nodes in the index structure, for the entire length of a given key. This is computationally expensive to do, and actually growing the pathway requires getting the transactions into the UTXO set, which requires some level of burnt BTC to get past anti-dust protections. So it is not economically viable to construct anything more than a marginally worse pathway; the average case really is the only case worth considering.

re: content of the scriptPubKey, change your algorithm to index by H(scriptPubKey) please - a major part of the value of UTXO commitments is that they allow for compact fraud proofs. If your tree is structured by scriptPubKey, the average-case and worst-case size of a UTXO proof differ by 454x; it's bad engineering that will bite us when someone decides to attack it. In any case the majority of core developers are of the mindset that in the future we should push everything to P2SH addresses anyway; making the UTXO set indexable by crap like "tags" - the rationale for using the scriptPubKey directly - just invites people to come up with more ways of stuffing data into the blockchain/UTXO set.
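A minimal sketch of the difference, under my own assumptions rather than anything in maaku's code: keying the wallet index by H(scriptPubKey) caps every key at 32 bytes (256 tree levels at most), while keying by the raw script lets the key length - and therefore proof depth and update cost - grow with the size of the script.

import hashlib

def wallet_key_raw(script_pubkey):
    # key length tracks the script itself: up to 10,000 bytes under current rules
    return script_pubkey

def wallet_key_hashed(script_pubkey):
    # key length is always 32 bytes, so path depth is bounded at 256 bits
    return hashlib.sha256(script_pubkey).digest()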

For the average case, the number of index update operations are:

validating: (X+I) * log(T)
wallet:     (I+O) * log(N)

Currently, these parameters have the following values:

T = 2,000,000
N = 7,000,000
X =     7,000 (max)
I =    10,000 (max)
O =    15,000 (max)

Those are approximate maximum, worst case values for X, I, and O given a maximum block size of 1MB. You certainly couldn't reach all three maximums simultaneously. If you did though, it'd be about 1,000,000 hash operations (half a "megahash," since we're talking single SHA-2), and more importantly 1,000,000 updates to the leveldb structure, which is about 4-5 seconds on a fast disk. My earlier calculation of a max 200,000 updates (computable in about 0.8s) was based on actual Bitcoin block data, and is somewhat more realistic as a worst case.

Worst case would be if a miner creates a block filled with transactions spending previously created 10,000-byte scriptPubKeys and creating no new ones. A minimal CTxIn is 37 bytes, so you'd get roughly 1,000,000/37=27,027 txouts spent. Each txout requires on the order of 10,000 hash operations to update it, so you've actually got an upper limit of ~270,270,000 updates, or 22 minutes by your metric... There's a good chance something else would break first, somewhere, which just goes to show that indexing by scriptPubKey directly is a bad idea. Sure it'd be tricky and expensive to create such a monster block, but it is possible, and it leads to results literally orders of magnitude different from what is normally expected.
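A back-of-the-envelope check of both figures (my arithmetic only, plugging in the parameters quoted above):

import math

T, N = 2000000, 7000000          # unspent transactions, unspent outputs
X, I, O = 7000, 10000, 15000     # max txs, inputs, outputs in a 1MB block

# average case: keys are hashes, so path length ~ log2(set size)
avg = (X + I) * math.log2(T) + (I + O) * math.log2(N)
print(round(avg))                # ~925,000 -- "about 1,000,000" updates

# pathological case with raw 10,000-byte scriptPubKey keys
spent = 1000000 // 37            # minimal 37-byte CTxIn per spend = 27,027
worst = spent * 10000            # ~270,270,000 updates
print(worst / 1000000 * 5 / 60)  # ~22.5 minutes at 1,000,000 updates per ~5s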

re: single SHA256, I'd recommend SHA256^2 on the basis of length-extension attacks. Whether or not they actually matter here, why spend effort analyzing it? Doubling the SHA256 primitive is a well-accepted belt-and-suspenders countermeasure, and is already done elsewhere in Bitcoin.
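For reference, the doubled primitive being suggested is the same construction Bitcoin already uses for txids and block hashes; a one-line sketch:

import hashlib

def hash256(data):
    # SHA256^2: the outer hash closes off length-extension on the inner one
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()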

EDIT: And again, even if a call to 'getblocktemplate' takes multiple seconds to complete, the *next* call is likely to include mostly the same transactions again, with just a few additions or deletions. So successive getblocktemplate calls should take just milliseconds to compute by basing the index on the last returned tree.

Conceivably, when receiving and validating a block it's also possible to start with a pre-generated index based off the last block + mempool contents, and simply undo transactions that were not included. Assuming, of course, that the mempool is not getting backlogged, in which case it would make more sense to start with the last block. If you collect partial proof-of-works, then you could pre-compute updates from transactions which appear in many of the partial blocks. There's a large design space here to explore, in terms of optimization.

Lots of optimizations, sure, but Bitcoin is a system where failure to update sufficiently fast is a consensus failure. If any of those optimizations have significantly different average-case and worst-case performance it could lead to a fork; that such optimizations exist isn't a good thing.
legendary
Activity: 905
Merit: 1011
This structure is a variant of the current ultraprune index used for block validation in the 0.8 client, which consumes about 250MB of data (you can check this with the 'gettxoutsetinfo' RPC). My work uses that data as the leaf nodes of a binary tree, which adds about 2 x 10^6 internal nodes to the validation index, or about 120MB currently. The wallet index is a little bit larger, so I would expect the total to be somewhere around 1GB as of the current block height. This is small enough to fit in RAM, but needs to be persisted to disk as it is expensive to recreate, hence the dependence on disk speed.
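A rough sizing check of those figures (my arithmetic, not maaku's):

internal_nodes = 2 * 10**6              # roughly one internal node per leaf in a binary tree
overhead_bytes = 120 * 10**6            # the ~120MB of internal nodes quoted above
print(overhead_bytes / internal_nodes)  # ~60 bytes/node: a 32-byte hash plus
                                        # prefix bits and child references
validation_total = 250 * 10**6 + overhead_bytes   # ultraprune leaves + internal nodes, ~370MB
# the wallet index has ~7M leaves (one per output) and is "a little bit larger",
# so both indices together plausibly land in the ~1GB ballpark quoted above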
hero member
Activity: 714
Merit: 500
Thanks. This is the kind of information I was after.

Those are approximate maximum, worst case values for X, I, and O given a maximum block size of 1MB. You certainly couldn't reach all three maximums simultaneously. If you did though, it'd be about 1,000,000 hash operations (half a "megahash," since we're talking single SHA-2), and more importantly 1,000,000 updates to the leveldb structure, which is about 4-5 seconds on a fast disk. My earlier calculation of a max 200,000 updates (computable in about 0.8s) was based on actual Bitcoin block data, and is somewhat more realistic as a worst case.

How big is the data structure that is being manipulated? How does it grow with block count, based upon your best/worst/avg cases?

I guess I could infer that it's probably way too big to fit in RAM, since you suggest performance depends on disk speed.
legendary
Activity: 905
Merit: 1011
To be clear, I'm not sure that example would have the effect you're imagining. Creating a purposefully time-wasting block to validate would hurt, not help the miner. But presumably you meant broadcasting DoS transactions which, if they got into other miners' blocks, would slow down their block propagation. That's possible but not economical for reasons that I'll explain in a minute.

But stepping back from that example, block propagation is why sub-second index construction/validation would be ideal. It places index operations in the same ballpark as the other verification tasks, meaning that it won't significantly affect the time it takes for blocks to propagate through the network. If index construction is done in parallel (using GPU acceleration, for example), it might not have any effect at all.

Rather than just quote numbers I'll explain where they come from:

There are two indices: the validating index containing unspent transactions keyed by txid, and the wallet index containing unspent outputs keyed by scriptPubKey.

Let T be the number of transactions with at least one unspent output, N the total number of unspent outputs, X the number of transactions in a new block, I the number of inputs, and O the number of outputs.

The number of hash operations required to perform an insert, delete, or update to the indices is 1, log(T) or log(N), and len(key) for the best, average, and worst case respectively. Since the keys of both indices involve hashed values (txid, and typically also the content of scriptPubKey), creating artificially long pathways through the index structure is a variant of the hash pre-image problem - you are essentially constructing transactions whose hash and output scriptPubKeys have certain values so as to occupy and branch specific nodes in the index structure, for the entire length of a given key. This is computationally expensive to do, and actually growing the pathway requires getting the transactions into the UTXO set, which requires some level of burnt BTC to get past anti-dust protections. So it is not economically viable to construct anything more than a marginally worse pathway; the average case really is the only case worth considering.

For the average case, the number of index update operations are:

validating: (X+I) * log(T)
wallet:     (I+O) * log(N)

Currently, these parameters have the following values:

T = 2,000,000
N = 7,000,000
X =     7,000 (max)
I =    10,000 (max)
O =    15,000 (max)

Those are approximate maximum, worst case values for X, I, and O given a maximum block size of 1MB. You certainly couldn't reach all three maximums simultaneously. If you did though, it'd be about 1,000,000 hash operations (half a "megahash," since we're talking single SHA-2), and more importantly 1,000,000 updates to the leveldb structure, which is about 4-5 seconds on a fast disk. My earlier calculation of a max 200,000 updates (computable in about 0.8s) was based on actual Bitcoin block data, and is somewhat more realistic as a worst case.

EDIT: And again, even if a call to 'getblocktemplate' takes multiple seconds to complete, the *next* call is likely to include mostly the same transactions again, with just a few additions or deletions. So successive getblocktemplate calls should take just milliseconds to compute by basing the index on the last returned tree.

Conceivably, when receiving and validating a block it's also possible to start with a pre-generated index based off the last block + mempool contents, and simply undo transactions that were not included. Assuming, of course, that the mempool is not getting backlogged, in which case it would make more sense to start with the last block. If you collect partial proof-of-works, then you could pre-compute updates from transactions which appear in many of the partial blocks. There's a large design space here to explore, in terms of optimization.
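A rough sketch of that pre-generation idea - the names speculative_index, undo, and apply are placeholders for illustration, not part of python-bitcoin:

def index_for_block(block, speculative_index, mempool):
    # speculative_index was built ahead of time from the last block + mempool
    included = set(tx.hash for tx in block.vtx)
    prepared = set(tx.hash for tx in mempool)
    index = speculative_index.copy()
    for tx in mempool:
        if tx.hash not in included:
            index.undo(tx)        # roll back mempool txs the block left out
    for tx in block.vtx:
        if tx.hash not in prepared:
            index.apply(tx)       # apply block txs we had not pre-applied
    return index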
legendary
Activity: 1120
Merit: 1149
(Note that a very simple optimization prevents this from being an onerous requirement for miners: cache the indices for the last generated block. There is likely to be a lot of similarity between two consecutively generated blocks (being the same high-priority transactions from the same mempool), and all the miner would have to calculate is the differences. Nonce-rolling is not a problem as inclusion of the coinbase in the index is delayed by a number of blocks anyway.)

What's the worst case vs. best case vs. avg case performance of the UTXO updating process?

Remember that we can't make it possible for a miner to intentionally make a block whose UTXO set modifications take an especially long amount of time to process to get an edge on the competition.
legendary
Activity: 905
Merit: 1011
Re: compression ratios, the naming of this project is unfortunate. When originally proposed, @sipa's ultraprune branch was still being implemented. Ultraprune does basically the same "compression," in the form of maintaining a validation index that enables blockchain data to be pruned. This proposal adds no further compression beyond that, and in fact as proposed doubles disk requirements (from about 250MB for ultraprune, to 500MB for this proposal, currently) because it maintains two authenticated UTXO indices: one for validation, one for wallet operations. (Only the former is in ultraprune, in unauthenticated form.)

I have suggested the title “Authenticated index structures for secure lightweight operation of non-mining bitcoin nodes” which much better describes the benefits. I much prefer this to “Ultimate blockchain compression,” but unfortunately the latter is the name it is known by in this community.

Re: performance cost, I have extrapolations. The current code was written for the purpose of evaluating alternative designs and testing asymptotic performance. Through profiling and extrapolation I have a good idea of what final performance numbers will be, so please don't be shocked when you see the current numbers.

The current Python implementation (available here) can take up to 10^3 seconds to index a full 1MB block, using PostgreSQL as a storage backend. A full 95% of that time is spent in the SQL database performing between 10^4 - 10^5 uncached query operations. Switching to LevelDB or a similar NoSQL datastore and adding caching should reduce time spent in the database by nearly two orders of magnitude. Furthermore, 80% of the remaining time is spent marshaling blockchain and index data into Python objects. Even within Python this could be optimized by using buffer objects or a C extension. Reasonable extrapolation shows that updates on the order of 10 seconds should be possible for a straightforward implementation of the current design in C++ using LevelDB as a storage backend.
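Spelling out that extrapolation with the stated percentages (a sketch of the reasoning, not a measurement):

total_s = 1000.0               # worst case for a 1MB block in Python + PostgreSQL
db_s    = 0.95 * total_s       # ~950s inside the SQL database
other_s = total_s - db_s       # ~50s of Python
db_s   /= 100                  # LevelDB + caching: ~two orders of magnitude
other_s *= 1 - 0.80            # C++ eliminates the ~80% spent marshaling objects
print(db_s + other_s)          # ~19.5s, i.e. on the order of 10 seconds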

You can use asymptotic analysis to look at the bare minimum work that must be done to update an index structure, and you end up with a further order of magnitude reduction: If you assume about 10,000 inputs+outputs for a full block - the far upper end of what is possible - then that is about 200,000 updates to the index structure in the worst case. LevelDB could handle this in under a second on a beefy machine. The real performance limiter is the SHA-2 hashes required for each node update, but hardware SHA-256 -- such as the new Intel CPU extensions, or even a GPU accelerated implementation -- should make the database the bottleneck again.

Sub-second updates for a 1MB block is then the long-term performance goal. Reaching it might require some hardware assumptions or follow-on work. But if this project can at least reach the achievable near-term milestone of less than 10 seconds (for a full block, worst case scenario), then performance would probably be adequate for inclusion in the reference client.

(Note that a very simple optimization prevents this from being an onerous requirement for miners: cache the indices for the last generated block. There is likely to be a lot of similarity between two consecutively generated blocks (being the same high-priority transactions from the same mempool), and all the miner would have to calculate is the differences. Nonce-rolling is not a problem as inclusion of the coinbase in the index is delayed by a number of blocks anyway.)

Re: a technical document / whitepaper, the folks at Invictus Innovations have generously offered to support the writeup of a series of BIPs explaining the data structure and a few example use cases - they are particularly interested in using it for merged mining of BitShares, using a scheme that I developed for Freicoin's eventual merged mining support. This is still being sorted out, but hopefully there will be a technical document soon, and the associated discussion from other Bitcoin developers.
hero member
Activity: 714
Merit: 500
This is looking extremely promising :)

Is there some document somewhere that explains the key contributions that have been made? I am basically looking for a TL;DR

i.e.
  • what level of compression can you achieve
  • and at what performance cost?

And perhaps a doc explaining the solution?  Pseudocode of the algorithms / discussion of your data structures would be great

thanks
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
As you all have probably heard by now, Armory just got a huge injection of funds.  Obviously, we didn't get the money to just "give it away", but I see that what Mark is doing is a huge benefit to Bitcoin, even if it ultimately doesn't work.  And if it does work, Armory would use it pretty much immediately.  As such, I think it makes sense for Armory to commit some funding to this project.  I really want to see this idea explored, for the benefit of Bitcoin (not just because I originally came up with it :) ).

Armory Technologies, Inc. is pledging 50 BTC for this.  The only catch is that we don't actually have 50 BTC available right now (only USD), but we should have it available some time next week and I will send it then.  Please email/PM me to pester me about it in a week if you don't see it.  

Looking forward to seeing what you can do with another few months!
legendary
Activity: 905
Merit: 1011
Thank you justusranvier. Your generous support over the last few months is what has kept me going to hit this milestone, after the original funding would otherwise have run out at the end of August. In case people missed it, here's the ReDonate campaign:

http://redonate.net/dashboard/bitcoin-developer-maaku
legendary
Activity: 1400
Merit: 1009
I'll continue donating monthly through your ReDonate campaign for an extra 3 months.
legendary
Activity: 905
Merit: 1011
Final revision of the index structure has been committed to python-bitcoin. It is now a level-compressed and node-compressed binary prefix tree. This is a pretty big change from the original proposal, but clearly the right decision to make in light of the performance numbers. The purpose of a 256-way radix tree was to reduce the number of hash operations, but that came at the price of greatly increased node sizes. This made index traversal more complex, and resulted in enormous proof sizes. Alternatives such as having a Merkle-ized internal node structure would have resulted in the exact same performance as a binary tree, but with much greater code complexity.
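For readers unfamiliar with the terminology: level- and node-compression mean runs of single-child nodes are collapsed into one node carrying a multi-bit prefix, so path length tracks log2 of the set size rather than the full 256-bit key. A toy lookup over such a structure, purely illustrative and not the PatriciaNode API:

class TrieNode(object):
    def __init__(self, prefix='', value=None):
        self.prefix   = prefix    # compressed run of key bits ending at this node
        self.value    = value     # leaf payload, or None for internal nodes
        self.children = {}        # next key bit, '0' or '1' -> TrieNode

def trie_lookup(node, keybits):
    # keybits: the key as a string of '0'/'1' characters (e.g. a hashed txid)
    while True:
        if not keybits.startswith(node.prefix):
            return None                       # diverged inside a compressed run
        keybits = keybits[len(node.prefix):]
        if not keybits:
            return node.value                 # consumed the whole key
        child = node.children.get(keybits[0])
        if child is None:
            return None
        keybits, node = keybits[1:], child    # consume the branching bit, descend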

There may be one more minor change (eliminating the no-longer-needed coinbase flags from the ultraprune structure), but after that the hash values and proof structure should remain unchanged through deployment to the Satoshi client. The design phase is now over.

I also made some slight but important adjustments to how the utxosync.py program constructs the root hash. Insertion of coinbase transactions into the ledger is delayed by 99 blocks, meaning that an output shows up in the ledger only once it is eligible to be spent in the next block, according to the network rules. The primary reason is that the index of a block cannot contain its own coinbase, if the hash of the index is itself going to be committed to the coinbase. Otherwise you'd have a circular hash dependency (input depends on output), which is obviously impossible to resolve.

The final performance characteristics: average-case proof size of 64 * log2(utxo transactions) for the txid index, which is currently about 1.35kB. Average case for the address index is: 65 * log2(utxo outputs), currently 1.45kB. The worst case size for a txid proof is 16.25kB, although that will never happen because setting it up would require a SHA-256 pre-image attack. Worst case for the address index is 65 bytes * number of bits in the scriptPubKey, which doesn't require a pre-image but can be protected by not reusing addresses.
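Those figures check out against the set sizes quoted earlier in the thread (my arithmetic; 64 and 65 bytes are the per-level proof costs given above):

import math

T, N = 2000000, 7000000    # unspent transactions, unspent outputs (approx.)
print(64 * math.log2(T))   # ~1340 bytes, roughly the quoted ~1.35kB txid proof
print(65 * math.log2(N))   # ~1480 bytes, roughly the quoted ~1.45kB address proof
print(64 * 256, 65 * 256)  # 16384-16640 bytes: a full 256-bit path, i.e. the
                           # quoted ~16.25kB worst-case txid proof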

Syncing with the block chain takes a few seconds per block, or a few dozen for the really big blocks, with 95% of that being spent in the SQL database, and 4 of the remaining 5% spent marshaling data. As mentioned earlier, moving to LevelDB and a compiled language should eliminate most of that, hopefully resulting in two orders of magnitude speedup. So performance is looking good, so far.



I have approximately two weeks left on the first round of funding the members of this forum have so generously provided. I will use that time to finish and push the pruning and proof generation code, and, time permitting, expand the utxosync.py script into a small Django web API for querying the UTXO indices. This could be used for prototyping wallet software that uses the new proofs. However this will provide no increase in security until bitcoin is soft-forked to include UTXO index commitments, or a merged-mined meta-chain is implemented.

This brings me to the next round of funding. I am requesting a final 3 months of time to implement the final revision of the index structure in C++ using LevelDB, optimize it to achieve sub-second update times (or as close to that performance goal as possible), and either (1) submit a documented pull request adding coinbase UTXO hash commitments to the Satoshi client, or (2) write a proxy server for miners to do their own commitments on a merged-mined meta chain. Which outcome depends on whether acceptance of a coinbase commitment pull request is likely to happen, which in turn depends on the achievable performance metrics, which we do not know with certainty at this time.

Because of unexpected costs (insurance) and volatility, I'm forced to increase my monthly cost to 65btc, and will fully cash it out on receipt. I am therefore requesting a total of 195btc to finish the implementation as described above. If this is received by the end of the month, then I can promise the deliverable by the end of the year. Otherwise I will continue work, but at a rate in proportion to funds received, as funds are received ("billing" my time hourly). Please send donations to the same address:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Donation address for implementing phase 2
of "Ultimate blockchain compression"

13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
-----BEGIN PGP SIGNATURE-----

iQIcBAEBAgAGBQJSLkT7AAoJECsa1YSJj/UDJ0IP/26oyR1COsLs/f+rEvz33kFP
0HtGvsjbEoF+7cetJpIV0eyFomngOWpafr0uhy+uO6mGguPHbXPJlmcgywFdKDwB
pQrRVYcT2DQx+Hfwnhn51QNIJoB6LhnykUi9KrDar8FwbNfYOgLaSUDGqKShtDOC
lc/qVkP56cCvalcqs6a6Q8O0D69BLO+TwifMPJmtdzdnn/2Fs9ONdgXpnnNLGwpJ
g/LWPy9Zdjspq7qoH/p3kFWo2S8TmX5EShsMDM8C4oUTnMjXbBvodJQwm6AzC0KY
XWdg+/W82YpMpmAmhSxqw43/VzUrODw9WAn7cXrCA86/IwhihZnNhLsELYP7Cd77
kgBWR9HE+NILWTRn+x8CONfi6+gk8ZqYsKmcj+PcYbf1u2BAknKb1EVpTwNp2ehD
8y6oNFj99vkDfZz8hSmt8HLn7YbU9jnmJcFqXwCwDFZD+nvWi1dHFeqnJppH3lWX
HaIF3V+psYQuMpglk+fFVrbmF+omCzsd7dArCXk4n+txA8rGIVBD2nWz4MUUxB9e
TLIeVr4EkH+yjvEK00VzjryfINE6mG58hetm1/y4Y/1EvoDATOyZhR91UFB6x/21
+pCagBDSVquc7DVYk7e275PnKSxjM4miAcf88jkO6TvcdiUaiYnYGxZQRCBY89/6
WgWf1x6CQvknPrYT6sZv
=hg1Y
-----END PGP SIGNATURE-----

As always, I welcome questions and comments regarding this proposal and my work towards implementing it, particularly from funders or developers of lightweight clients that might use it.
legendary
Activity: 905
Merit: 1011
I have pushed three related code updates, representing work done during the month of August towards implementation of the UTXO index structures for bitcoin. As I've mentioned earlier on my blog, I'm doing the preliminary exploration work in Python, and so far that's paid off. I've gone through three iterations of the prefix-tree design, at each step uncovering weaknesses that require various modifications to the original design. The design, implement, test, benchmark, redesign cycle is averaging about a week, much faster than developing directly in C++ would have been. Python is, however, much slower, chiefly because of the time spent marshaling in and out of the block store or constructing hash preimages, and the resulting dynamic string manipulations. Now that the design is starting to solidify, I will probably do one more iteration before moving to C++ and beginning implementation for the Satoshi client.

The code updates include a new release of python-bitcoin, which contains the UTXO index data structures. Of particular interest to other developers would be the modules patricia.py, ledger.py, and merkle.py. The patricia.py module contains the PatriciaNode class, which implements the node-compressed, level-compressed trie structure and whose API looks very similar to an ordered dictionary. ledger.py contains the supporting structures for TxIdIndex and ContractIndex - the two authenticated indices underlying the “Ultimate blockchain compression” proposal. The merkle.py module contains a similarly constructed, albeit unfinished, implementation of prunable Merkle trees. You can access this code here:

https://github.com/monetizeio/python-bitcoin/

The second project, which I'm releasing now for the first time, is a SQL blockchain storage backend using the absolutely wonderful SQLAlchemy object relational mapper. For this project, using SQL for storage has been beneficial as many profiling tools exist, and it allows arbitrary queries to be constructed after-the-fact to help diagnose bottlenecks. There are, however, obvious performance limitations and nobody should ever consider using a SQL backend for a fully validating node. Some people may find it useful for specialized applications requiring join queries, or perhaps a block explorer-like website. The code is available on github, and interfaces with python-bitcoin:

https://github.com/monetizeio/sqlalchemy-bitcoin/

Finally, in case anyone wants to play around with what I've been working on, there's a tool for synchronizing the SQL storage backend with the block chain via bitcoind's JSON-RPC interface. I *only* recommend using this for testnet data, and preferably the smaller testnet-in-a-box chain. Performance is abysmal, but for known reasons explained above. Back-of-the-envelope calculations based on profiling data show that a compiled implementation with smart memory management and a fast key/value data store should operate at least 100x faster, which would be enough to handle the live bitcoin block chain. Code is available here:

https://github.com/maaku/utxo-ledger/

I will be posting more updates soon.
member
Activity: 61
Merit: 15
cat debug.log | grep 'Verify '
- Verify 130 txins: 29.40ms (0.226ms/txin)
- Verify 345 txins: 76.33ms (0.221ms/txin)
- Verify 171 txins: 38.20ms (0.223ms/txin)
- Verify 201 txins: 44.65ms (0.222ms/txin)
- Verify 212 txins: 51.27ms (0.242ms/txin)
- Verify 385 txins: 83.74ms (0.218ms/txin)

ok, so this is like 4000-5000 /s
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
I know sipa was working on an optimized ECDSA verifier that would outperform OpenSSL by about 4x.  I don't know if he finished it (and got it into Bitcoin-Qt/bitcoind yet).

But I do remember that a single core of my i5-2500K did something like 200-500 ECDSA verifications per sec.  So with 4 cores that would be like 1000-2000/s.  And probably something like 5,000/sec when using sipa's optimization. 

However, it still doesn't change the fact that it's several orders of magnitude slower than hashing, which can execute about 15,000,000/s on the same hardware.  In the grand scheme of things, the address-indexed DB will not consume much time when implemented properly.  Each block only updates a couple MB of DB values, which should take far less than a second, even with synchronous writes.
legendary
Activity: 905
Merit: 1011
fBenchmark (-benchmark) doesn't measure ECDSA performance. Are you sure you're reading the output correctly?
member
Activity: 61
Merit: 15
A modern consumer PC can hash about 100 MB/s, but only do about 100 ECDSA signature verifications per second.

Just wanted to add that this number seems a little bit low to me. I have run bitcoin with the -benchmark flag, and it tells me that I can verify over 10,000 signatures per second on a 4x4GHz CPU.
legendary
Activity: 905
Merit: 1011
Validation of a full block currently takes more than 1 second on most commodity hardware, and that's with the 1MB blocksize limit. There's a reason Satoshi chose a 10 minute interblock time.

But it's a fair question to ask, and we should be concerned about performance. The number of hash operations required to update the UTXO index is approximately:

2 * (I + O) * log(N)
I: # inputs in new block
O: # outputs in new block
N: # outputs in UTXO set

So it scales linearly with the size of the block, and logarithmically with the size of the UTXO set. The constant 2 is because there will be two indices maintained, not the 1 index currently used for block validation, although one should expect the constant factor to be different anyway.

Besides the constant-factor difference, the added cost is that log(N) factor. That's not a super-significant change to scalability, and in my own opinion is definitely a worthwhile cost for secure lightweight clients and quicker sync times.
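To put that log(N) factor in perspective (my arithmetic): the UTXO set has to grow by a large multiple before the per-update cost changes much.

import math

for n in (7e6, 14e6, 112e6):               # today's UTXO set, 2x, 16x
    print(int(n), round(math.log2(n), 1))  # ~22.7 -> 23.7 -> 26.7 levels per update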

However, when it comes to performance and scalability, the elephant in the room is ECDSA, not hashing. A modern consumer PC can hash about 100 MB/s, but only do about 100 ECDSA signature verifications per second. The number of signature operations is related to the number of inputs, and therefore the size of the block. That's a speed difference of 3 to 5 orders of magnitude, depending on natural variation in the size of outputs and the number of signature operations per input.
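A rough sanity check of that gap, assuming the 100 MB/s and 100 verifications/s figures above and 64-byte tree nodes (my numbers, for illustration):

hash_rate  = 100e6 / 64        # ~1.6 million 64-byte node hashes per second
ecdsa_rate = 100.0             # signature verifications per second
print(hash_rate / ecdsa_rate)  # ~15,600x, i.e. roughly 4 orders of magnitude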

EDIT: And thank you for your contribution!
member
Activity: 61
Merit: 15
Does the UTXO root hash have to be recalculated after every block, along with possibly other tree recalculations?

If so, I am a little bit concerned about the performance impact. Let's say bitcoin grows; then the recalculation of this thing could take minutes.

So do you think it is possible to do all the necessary recalculations after every block in, let's say, 1 second, even if bitcoin gets really big and there is a huge database?

(other than this, many thanks for working on the project, I have donated 1BTC)
donator
Activity: 293
Merit: 250
Hi Arcturus, that address should work just fine for freicoin as well. Thanks! I'll see what the RSS problem is about... it's a tumblr blog so I'm not sure what control I have over that, but I will try.

nice!

Freicoins are coming :)

And good luck with your coding efforts!
Arcurus
legendary
Activity: 905
Merit: 1011
Hi Arcturus, that address should work just fine for freicoin as well. Thanks! I'll see what the RSS problem is about... it's a tumblr blog so I'm not sure what control I have over that, but I will try.