
Topic: ANN: Announcing code availability of the bitsofproof supernode - page 6. (Read 35156 times)

hero member
Activity: 836
Merit: 1030
bits of proof
Is there a more recent snapshot than the one at ftp://bitsofproof.com/supernode.zip? It looks like that was last updated about 3 months ago.

Is there a way to configure where it puts the data files for LevelDB? Right now they appear to go to C:\Users\%UserName%\data (Windows 7). It would feel more standard for them to go to C:\Users\%UserName%\AppData\Roaming\BitsOfProof or somewhere inside the folder that contains the exe.
No, that build was a demo; it is long outdated and not supported. I do not currently plan to release a Windows executable build.
member
Activity: 98
Merit: 10
Is there a more recent snapshot than the one at ftp://bitsofproof.com/supernode.zip? It looks like that was last updated about 3 months ago.

Is there a way to configure where it puts the data files for LevelDB? Right now they appear to go to C:\Users\%UserName%\data (Windows 7). It would feel more standard for them to go to C:\Users\%UserName%\AppData\Roaming\BitsOfProof or somewhere inside the folder that contains the exe.
legendary
Activity: 1792
Merit: 1008
/dev/null
It is a modern full validation implementation that loads the chain from scratch in about 8 hours if using the leveldb option.
8 hours is a long time; are the bottlenecks known (i.e. tested and made public)?
I'd say bootstrap duration is on par with the reference client.
You would use this implementation to create functionality not offered by the reference implementation, not to compete with that within its abilities.
OK, then I'll have to rephrase: 8 hours on which HD/CPU/system?
Have you tested it with tmpfs? (It's not about picking the software which syncs the fastest, that would be stupid. I just want to know where the known bottlenecks are in general.)
I use a server with 4 dedicated vCPUs at 2 GHz each, 16 GB RAM and an encrypted virtual file system. Bootstrap time is about 8 hours if using the leveldb option. The bottleneck in bootstrap before the checkpoint (a hard-coded chain hash for height 222222 at the moment) is disk I/O; thereafter, as it does full script validation, the bottleneck becomes CPU. Script validation is fully multithreaded; all processors are at 98%+ during that time.

Once synced, this server is able to cope with current transaction traffic using single-digit CPU percentages. There are very short spikes to 100% as it validates new blocks coming in. Disk I/O is the limit for serving getdata requests and queries during normal use.

The current disk footprint is 12 GB for the chain. Beware that this stores several indices to speed up queries by address to transactions and blocks.
That's exactly what I wanted, ty!
hero member
Activity: 836
Merit: 1030
bits of proof
It is a modern full validation implementation that loads the chain from scratch in about 8 hours if using the leveldb option.
8 hours is a long time; are the bottlenecks known (i.e. tested and made public)?
I'd say bootstrap duration is on par with the reference client.
You would use this implementation to create functionality not offered by the reference implementation, not to compete with that within its abilities.
OK, then I'll have to rephrase: 8 hours on which HD/CPU/system?
Have you tested it with tmpfs? (It's not about picking the software which syncs the fastest, that would be stupid. I just want to know where the known bottlenecks are in general.)
I use a server with 4 dedicated vCPUs at 2 GHz each, 16 GB RAM and an encrypted virtual file system. Bootstrap time is about 8 hours if using the leveldb option. The bottleneck in bootstrap before the checkpoint (a hard-coded chain hash for height 222222 at the moment) is disk I/O; thereafter, as it does full script validation, the bottleneck becomes CPU. Script validation is fully multithreaded; all processors are at 98%+ during that time.

Once synced, this server is able to cope with current transaction traffic using single-digit CPU percentages. There are very short spikes to 100% as it validates new blocks coming in. Disk I/O is the limit for serving getdata requests and queries during normal use.

The current disk footprint is 12 GB for the chain. Beware that this stores several indices to speed up queries by address to transactions and blocks.
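
As a rough illustration of the fan-out described above (a minimal sketch, not the actual bitsofproof code; the InputCheck interface is hypothetical), each input script check becomes an independent task so all processors can be kept busy during validation:

Code:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelScriptValidation
{
    // Stand-in for a single input script evaluation; hypothetical interface.
    public interface InputCheck extends Callable<Boolean> {}

    private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public boolean validateAll(List<InputCheck> checks) throws Exception
    {
        List<Future<Boolean>> results = new ArrayList<>();
        for (InputCheck check : checks)
        {
            results.add(pool.submit(check)); // fan out: one task per input script
        }
        for (Future<Boolean> result : results)
        {
            if (!result.get()) // any failing script makes the whole block invalid
            {
                return false;
            }
        }
        return true;
    }
}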
hero member
Activity: 836
Merit: 1030
bits of proof
The bitsofproof node sailed through the recent chain turbulence.

It accepted the forking block in sync with 0.8 and was also able to re-org the 25 blocks as the other branch became longer.

Kudos to you. Do you happen to be attending the San Jose conference?
Thanks Jan. Yes, I will be there. I will give a lightning session talk on bitsofproof.
legendary
Activity: 1792
Merit: 1008
/dev/null
It is a modern full validation implementation that loads the chain from scratch in about 8 hours if using the leveldb option.
8 hours is a long time; are the bottlenecks known (i.e. tested and made public)?
I'd say bootstrap duration is on par with the reference client.
You would use this implementation to create functionality not offered by the reference implementation, not to compete with that within its abilities.
OK, then I'll have to rephrase: 8 hours on which HD/CPU/system?
Have you tested it with tmpfs? (It's not about picking the software which syncs the fastest, that would be stupid. I just want to know where the known bottlenecks are in general.)
Jan
legendary
Activity: 1043
Merit: 1002
The bitsofproof node sailed through the recent chain turbulence.

It accepted the forking block in sync with 0.8 and was also able to re-org the 25 blocks as the other branch became longer.

Kudos to you. Do you happen to be attending the San Jose conference?
hero member
Activity: 836
Merit: 1030
bits of proof
It is a modern full validation implementation that loads the chain from scratch in about 8 hours if using the leveldb option.
8 hours is a long time; are the bottlenecks known (i.e. tested and made public)?
I'd say bootstrap duration is on par with the reference client.

You would use this implementation to create functionality not offered by the reference implementation, not to compete with that within its abilities.
legendary
Activity: 1792
Merit: 1008
/dev/null
It is a modern full validation implementation that loads the chain from scratch in about 8 hours if using the leveldb option.
8 hours is a long time; are the bottlenecks known (i.e. tested and made public)?
hero member
Activity: 836
Merit: 1030
bits of proof
The bitsofproof node sailed through the recent chain turbulence.

It accepted the forking block in sync with 0.8 and was also able to re-org the 25 blocks as the other branch became longer.
hero member
Activity: 836
Merit: 1030
bits of proof
You may expect great progress on this project now that I work full time on it.

bitsofproof now passes the block tester test cases that are also used to validate the Satoshi client.
Mike Hearn was so kind as to notify me that the error he spotted has in the meantime been fixed.

You can review a continuous integration test of it, similar to the official client's pull tests, at:

https://travis-ci.org/bitsofproof/supernode

I plan to bring this to production quality definitely before the San Jose conference, where I plan to present and launch.

Please experiment with it. It is a modern full validation implementation that loads the chain from scratch in about 8 hours if using the leveldb option. An instance of this server has been working in sync with the network without exceptions for over a month now at bitsofproof.com; feel free to connect and challenge it.

I am working on further tests and a merchant module that will support multiple wallets and payment notifications over its communication bus. The project is also considered a building block for bitcoinx.
legendary
Activity: 1526
Merit: 1134
FYI the bug I mentioned way up thread has been found and fixed.
legendary
Activity: 980
Merit: 1008
The problem is that the rules (as defined by Satoshi's implementation) simply pass data directly to OpenSSL, so the network rule effectively is "whatever cryptographic data OpenSSL accepts", which is bad. OpenSSL has every reason to try to accept as many encodings as possible, but we don't want every client to need to replicate the behaviour of OpenSSL. In particular, if they add another encoding in the future (which, again, is not necessarily bad from their point of view), we might get a block chain fork.
Considering cases like these, it occurs to me that it might be desirable to - at some point - split the Satoshi code into three parts: "legacy", "current" and "next".

The "legacy" part would handle the odd corner cases described in the above quote. It would basically pull in all the relevant OpenSSL code into the legacy module (including the bugs in question), where it would stay untouched. This module would only be used to verify already-existing blocks in the chain; no new blocks would be verified with this code, as pulling in OpenSSL code into Bitcoin and managing future patches is next to impossible. This is the part that should be possible for future clients to not implement. They will miss the part of the block chain that follows the rules defined by this module, but I reckon that we really don't need 5+ year old blocks for more than archival purposes.

The "current" module would handle verifying current blocks, and be compatible with the "legacy" module. It would depend on OpenSSL still, and if changes are made to OpenSSL that break compatibility with "legacy", patches would need to be maintained against OpenSSL to work around this. This module cannot code-freeze OpenSSL, as vulnerabilities can become uncovered in OpenSSL, and no one must be able to produce blocks that exploit the uncovered attack vectors. Newly uncovered attack vectors aren't a problem for the "legacy" module, as it only verifies already-existing blocks, produced before the vulnerability in question was uncovered.

The "next" module would be backwards incompatible with the "legacy" and "current" module. This module changes verification rules to not accept, for example, the otherwise invalid signatures that OpenSSL accepts. The "next" module would have a block chain cut-off point into the future where, from that point on, a Bitcoin transaction would be considered invalid if, for example, it includes an invalid signature (was it a negative S-value?) even though it's accepted by the old OpenSSL code. It's sort of a staging module, where undesirable protocol behavior is weeded out. These protocol changes wouldn't take effect until some point in the future (a block number). The block number cut-off point would be advertised well in advance, and from this point on, the "next" module would become the "current" module, and the old "current" module would move into the "legacy" module (and no new blocks would be verified using this module). The new "next" module would then target fixing undesirable protocol behavior that was uncovered when running the previous "current" module, and would set a new cut-off time into the future, at which point new blocks would need to follow this improved protocol to get accepted.

Could this work? It would mean we could slowly (very slowly) clean up the protocol, while still maintaining backwards compatibility with clients not older than, say, 2 years, or however far into the future we choose the cut-off point for the "next" module's protocol to become mandatory.
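
As a hedged sketch of that staging idea (none of these classes, methods or the activation height come from any real client; they are purely illustrative), the switch from "current" to "next" rules could hinge on a pre-announced block height:

Code:
public class StagedRules
{
    /** The activation height would be advertised well in advance; example value only. */
    static final int NEXT_RULES_HEIGHT = 300000;

    interface RuleSet
    {
        boolean signatureEncodingAccepted(byte[] signature);
    }

    static final RuleSet CURRENT = sig -> true;          // placeholder: whatever OpenSSL accepts today
    static final RuleSet NEXT = sig -> isStrictDER(sig); // placeholder: only one canonical encoding

    static RuleSet rulesForHeight(int height)
    {
        return height >= NEXT_RULES_HEIGHT ? NEXT : CURRENT;
    }

    // Stand-in for a strict encoding check; a real one would verify the full DER structure.
    static boolean isStrictDER(byte[] signature)
    {
        return signature.length >= 8 && (signature[0] & 0xff) == 0x30
                && (signature[1] & 0xff) == signature.length - 2;
    }
}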
legendary
Activity: 1072
Merit: 1181
IIRC, bitcoind has soft (bitcoind's own) and hard (Bitcoin protocol) limits for that. Meaning that it will neither spend nor build a block spending a coinbase less than 120 blocks old, but it will accept a block which spends it a bit earlier (100 blocks, I believe, not sure).

Hard limit: 101 confirmations, i.e. created 100 blocks before (network rule)
Soft limit: 120 confirmations, i.e. created 119 blocks before (client policy, which is probably overly conservative)
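
A minimal sketch of the two limits as stated above (the constants mirror the numbers in this post; the class and method names are illustrative only):

Code:
public class CoinbaseMaturity
{
    // 101 confirmations, i.e. created at least 100 blocks before (network rule)
    static final int HARD_LIMIT_CONFIRMATIONS = 101;
    // 120 confirmations, i.e. created at least 119 blocks before (client policy)
    static final int SOFT_LIMIT_CONFIRMATIONS = 120;

    static int confirmations(int coinbaseHeight, int tipHeight)
    {
        return tipHeight - coinbaseHeight + 1;
    }

    // A block at spendingHeight may spend the coinbase only if the hard limit is met.
    static boolean validByNetworkRule(int coinbaseHeight, int spendingHeight)
    {
        return confirmations(coinbaseHeight, spendingHeight) >= HARD_LIMIT_CONFIRMATIONS;
    }

    // The client itself will not spend (or build a block spending) it before the soft limit.
    static boolean spendableByClientPolicy(int coinbaseHeight, int tipHeight)
    {
        return confirmations(coinbaseHeight, tipHeight) >= SOFT_LIMIT_CONFIRMATIONS;
    }
}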
legendary
Activity: 1106
Merit: 1004
The tests allow spending of a coinbase earlier than allowed by bitcoind.

IIRC, bitcoind has soft (bitcoind's own) and hard (Bitcoin protocol) limits for that. Meaning that it will neither spend nor build a block spending a coinbase less than 120 blocks old, but it will accept a block which spends it a bit earlier (100 blocks, I believe, not sure).
hero member
Activity: 836
Merit: 1030
bits of proof
An update on testing.

The bitsofproof server validates the testnet3 chain, passes its own and bitcoind's script unit tests, and has run in sync for the last week without any exception, connected to the production net with 100 peers.

I was pointed to the "blocktester" that was developed by BlueMatt for bitcoinj and adapted for bitcoind as part of the Jenkins build after each pull.

My findings with the "blocktester" are:
  • It uses the bitcoinj library to generate a series of sophisticated block chain test scenarios.
  • The generator is part of bitcoinj but is packaged such that it cannot be utilized without modification for another product; it serves the bitcoinj Maven test.
  • The chain that it generates is likely dumped into a file and then sourced into bitcoind; I could not find the process that feeds it through P2P into bitcoind.
  • There is a standalone Java class that runs on BlueMatt's Jenkins build server, downloads the test scenario chain from bitcoind and compares it with the expectation (that is simply the residual correct chain).

The above findings explain why the tester could not be used out of the box for bitsofproof, but I made the best of it with the following steps:

  • Modified a copy of bitcoinj such that the block generator could be embedded into another project.
  • Dumped the generated test chain into a JSON file that contains the behavior the tester expects and the raw block data as a wire dump in hex.
  • Wrote a unit test for bitsofproof that sources the JSON-encapsulated wire dump as if it were sent to the validation and storage engine (a sketch of such a test follows below).
  • Compared accept/reject of the blocks as expected by the test vs. as done by bitsofproof.
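
For illustration, a minimal sketch of such a replay test, assuming a hypothetical JSON layout ("hex", "accept" and "name" fields) and a hypothetical BlockStore interface; this is not the actual bitsofproof test code, it only shows the idea of feeding the wire dump to the validation engine and comparing accept/reject:

Code:
import java.io.File;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class BlockTesterReplay
{
    // Stand-in for the validation and storage engine; hypothetical interface.
    public interface BlockStore
    {
        boolean storeAndValidate(byte[] rawBlock); // true if the block was accepted
    }

    public static void replay(File scenario, BlockStore store) throws Exception
    {
        JsonNode cases = new ObjectMapper().readTree(scenario);
        for (JsonNode c : cases)
        {
            byte[] raw = hex(c.get("hex").asText());        // wire dump of the block
            boolean expected = c.get("accept").asBoolean(); // behavior the tester expects
            boolean actual = store.storeAndValidate(raw);
            if (actual != expected)
            {
                throw new AssertionError("case " + c.get("name") + ": expected " + expected + ", got " + actual);
            }
        }
    }

    private static byte[] hex(String s)
    {
        byte[] b = new byte[s.length() / 2];
        for (int i = 0; i < b.length; ++i)
        {
            b[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        }
        return b;
    }
}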


Thereby I made the following findings:
  • The tests allow spending of a coinbase earlier than allowed by bitcoind. The method by which this is fed to bitcoind must bypass that check.
  • The accept/reject expectation assumes that checks for double spends are not performed for branches of the chain until that branch becomes the longest.

Especially the second is a difference from bitsofproof's behavior, which is more eager to do this check. The result for the longest chain (the one that counts) is the same in both cases, but validations happen in a different order. Since the tests as they are depend on the validation order, conversion of the test cases will remain partial.

I believe that the way bitsofproof works is more elegant and easier to follow, but it comes at the cost of eventually performing throw-away validation. The cost is low, since double spend checks are done after cheaper checks like POW, so using this difference to DoS bitsofproof is not feasible.

Edit: The described difference is not observable on the P2P interface.
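
As a tiny illustration of that ordering (not actual bitsofproof code; the Block type parameter and the predicates are stand-ins), the expensive double spend check only runs once the cheap checks have passed:

Code:
import java.util.function.Predicate;

public class OrderedChecks<Block>
{
    private final Predicate<Block> proofOfWork;   // cheap, checked first
    private final Predicate<Block> noDoubleSpend; // expensive, may be "thrown away" work on a side branch

    public OrderedChecks(Predicate<Block> proofOfWork, Predicate<Block> noDoubleSpend)
    {
        this.proofOfWork = proofOfWork;
        this.noDoubleSpend = noDoubleSpend;
    }

    public boolean accept(Block block)
    {
        // short-circuit: the costly check never runs if the cheap one already fails
        return proofOfWork.test(block) && noDoubleSpend.test(block);
    }
}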

Given the unsatisfactory state of test utilities for complex behavior, I began developing a tool similar to, but more advanced than, the block tester. I will explicitly develop it to be suitable for bitcoind and bitcoinj or any other implementation that is P2P compatible, thereby addressing a problem that is not only mine.

Edit: what is unsatisfactory is that test cases are programmed in a particular implementation and are tightly coupled with its build.
I would like to create test cases for complex behavior as editable text files, so they can be extended by the maintainer of any implementation.

 
legendary
Activity: 1526
Merit: 1134
As far as I can tell the issue is still there.
hero member
Activity: 836
Merit: 1030
bits of proof
It's great that you made progress with testing. Unfortunately the first issue I saw is still there. As it's a chain-splitting bug more testing is required.
I just fixed a subtle one that fits your description; maybe this was it. You might admit that, without me claiming that I am done :) I know there is still work to do.

https://github.com/bitsofproof/supernode/commit/96c6ffb00f55d2e8b95e55e3237309f86e6e11f2#src/main/java/com/bitsofproof/supernode/core/CachedBlockStore.java


legendary
Activity: 1526
Merit: 1134
It's great that you made progress with testing. Unfortunately the first issue I saw is still there. As it's a chain-splitting bug more testing is required.
legendary
Activity: 1792
Merit: 1008
/dev/null
Does an RPC interface like bitcoind's exist?
This will offer synchronous remote invocation for a few functions and have an asynchronous message bus for notification and broadcast between known extensions or authenticated clients. The API (BCSAPI) is supposed to isolate components that need the highest level of compatibility with bitcoind from the rest, where innovation should take place. The BCSAPI seeks to serve low-level data, such as validated transactions, block chain events, or outputs by address, eliminating the need to hack into core components or interpret binary storage.

I do not plan to imitate bitcoind's RPC out of the box, but to make it possible to implement extensions/proxies connecting to the above API that create a compatibility layer if you need one.

This is a project in development, so interfaces or the database schema will likely change. Keep watching this space for a ready-to-go release before using it for real or committing serious development on top of the API.

Suggestions and wishes are welcome.
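
Purely as a hypothetical illustration of that split (none of these interface or method names are the real BCSAPI), a client of such an API would mix a synchronous call with a bus subscription roughly like this:

Code:
public interface ChainClient
{
    // synchronous remote invocation for a few functions
    byte[] getBlock(String hash) throws Exception;

    // asynchronous notification delivered over the message bus
    void registerTransactionListener(TransactionListener listener) throws Exception;

    interface TransactionListener
    {
        void validated(byte[] transaction); // called for each validated transaction
    }
}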
Thanks, I'll take a closer look ASAP :)