Author

Topic: 0.96.5 - BLKDATA DB does not have the requested ZC tx (Read 89 times)

legendary
Activity: 3766
Merit: 1364
Armory Developer
ahhh, that's it. I wonder how that might change if the assumevalid milestone is ditched for the pure p2p-based algorithm that went into 24.0? I've not yet looked at any of it, but as things are, for sure this is a pitfall/footgun for Armory users migrating databases to different machines (users will often be shorter on disk space than planned during these types of tasks, and if the data requirements stopped growing then either Bitcoin died or, if we're lucky, some ZKP scheme finally ate the blockchain)

I'm considering a scheme for fetching blocks over P2P. It's meant to work with full nodes, but it "could" be used with pruned nodes too. The idea is to create a sort of reverse bloom filter of all addresses in the chain and hint block scanning from these hits. It would probably reduce the average wallet scan to ~1k blocks instead of some 800k. You'd be able to build this filter map from your own block data, but you could also download it from somewhere else, and unlike block files, the data wouldn't vary. This idea was originally floated by Olaoluwa Osuntokun from Lightning Labs to build some sort of UTXO map as side data per block (a wet dream for lightning devs). It applies quite well to addresses.

In this mode, ArmoryDB could grab the few blocks it needs to scan over P2P, and a node could be bootstrapped by downloading the filter map. It could also scan any address relatively quickly, and work against any Core node. The drawbacks are that it would require more disk space (up to 5% of the chain I assume), and it would leak privacy to the Core node you're getting the blocks from (assuming you're lazy and didn't download the chain/are using a pruned node).
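The filter idea above can be sketched roughly like this (all names here are mine, purely illustrative; the real-world descendant of Osuntokun's proposal is the Golomb-coded block filters of BIP 158, which this plain set of truncated hashes stands in for):

```python
# Rough sketch of the per-block address filter idea (hypothetical names).
# A real implementation would use a compact probabilistic encoding
# (e.g. Golomb-coded sets as in BIP 158); a plain set of truncated
# hashes stands in for it here.
import hashlib

def addr_hint(addr: str, nbytes: int = 4) -> bytes:
    """Truncated hash of an address; short enough to keep filters small."""
    return hashlib.sha256(addr.encode()).digest()[:nbytes]

def build_filter(block_addresses):
    """One filter per block: the set of hints for every address it touches."""
    return {addr_hint(a) for a in block_addresses}

def blocks_to_scan(filters, wallet_addresses):
    """Heights whose filter hits any wallet address. False positives are
    possible (truncated hashes), false negatives are not, so only the hit
    blocks need to be fetched and fully scanned."""
    hints = {addr_hint(a) for a in wallet_addresses}
    return [h for h, f in enumerate(filters) if hints & f]

# toy chain: 4 "blocks", each listing the addresses it pays
chain = [{"addrA", "addrB"}, {"addrC"}, {"addrA", "addrD"}, {"addrE"}]
filters = [build_filter(b) for b in chain]
print(blocks_to_scan(filters, {"addrA"}))  # -> [0, 2]
```

The wallet only ever downloads the blocks its hints matched, which is where the "~1k blocks instead of 800k" estimate comes from.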

It doesn't fix the issue in Core however. Core tracks blocks by offset as well; mess with its block files and it will ask to re-verify the whole thing.

Quote
that sounds pretty neat, so you could lose the watch-only copy somehow, and still have wallet metadata backup coverage in the offline wallet. But it wouldn't work on my current system

You could also sustain that data some other way. The point is that it would be available to export and import.

Quote
just signing the encrypted data as part of the protocol seems like a ready solution?

To sign the data means you're carrying a private key. This is a fair assumption for signing a tx in a multisig setup, but not for sharing metadata across time between a simple online WO and an offline signer.

Quote
and why would it matter, once someone else encrypts data using a public key of yours, what could they possibly do with that?

You can't tell whether you generated the data or not in this setup; the only valid assumption is that you and whoever wrote the message know its content. Encryption without authentication isn't useful all that often. This scheme needs some identification layer as well to work.
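To make the authenticity point concrete, here is a toy ElGamal sketch (insecure parameters, purely illustrative): decryption succeeds identically no matter who produced the ciphertext, so it proves nothing about the sender.

```python
# Toy ElGamal over a Mersenne prime to illustrate: anyone holding the
# public key can produce a valid ciphertext, and decryption carries no
# sender identity. Illustration only -- these parameters are NOT secure.
import secrets

P = 2**127 - 1   # a Mersenne prime; fine for a demo, not for real crypto
G = 3

def keygen():
    x = secrets.randbelow(P - 2) + 1      # private key
    return x, pow(G, x, P)                # (private, public)

def encrypt(pub: int, m: int):
    k = secrets.randbelow(P - 2) + 1      # fresh ephemeral, no sender identity
    return pow(G, k, P), (m * pow(pub, k, P)) % P

def decrypt(priv: int, ct):
    c1, c2 = ct
    return (c2 * pow(c1, P - 1 - priv, P)) % P

sk, pk = keygen()
# two different "senders": the recipient cannot tell them apart
assert decrypt(sk, encrypt(pk, 42)) == 42
assert decrypt(sk, encrypt(pk, 42)) == 42
```

Both ciphertexts decrypt fine, which is exactly the problem: an identification layer has to come from somewhere else.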

Quote
maybe even next release it will play video using the GPU over IOMMU

VFIO passthrough works pretty well with KVM over QEMU, but CPU performance is pretty muted I find. For the most part, cache and RAM timings are quite eroded; I remember messing a lot with cgroups to recoup some performance. It kinda turned me off the whole thing.
legendary
Activity: 3430
Merit: 3080
Headers first, which are handled across multiple threads. Each valid header (where the PoW checks out) is queued to be written by whatever thread checked it, so blocks often end up out of order on disk past the last height milestone (the process is single threaded up until then), which was around 420k? Core has discontinued that milestone since then, AFAIK.

ahhh, that's it. I wonder how that might change if the assumevalid milestone is ditched for the pure p2p-based algorithm that went into 24.0? I've not yet looked at any of it, but as things are, for sure this is a pitfall/footgun for Armory users migrating databases to different machines (users will often be shorter on disk space than planned during these types of tasks, and if the data requirements stopped growing then either Bitcoin died or, if we're lucky, some ZKP scheme finally ate the blockchain)

LMDB is a plain store; unlike SQL databases it doesn't have a concept of tables and records, so it's preferable to split the keys via some prefix scheme. Therefore all keys end up with an extra prefix byte at the I/O layer.

ah, that's pretty simple, good to know


Quote
but it was certainly not 0xFFFF (which is 4 bytes anyway, no?)

2 hexits per byte. Each hexit encodes one of 16 values (4 bits): 0-9, A-F (I use hexit cause I've seen that once a long time ago and found the naming elegant, I don't think anybody else does =D)

better a snappy name than some of this overwrought computer science terminology imo. I always liked "nibble" (which is half a byte IIRC)

and yes, 0xFF is obviously 8 bits, I blame old age Smiley


Comments live in the wallet. Address comments are keyed by address, tx comments are keyed by tx hash. Tx comments are resolved at each ArmoryQt load, by checking for tx hashes in the ledger. As long as your DB works, the wallet will find its comments.

phew, thank satoshi for that (well, i suppose that was Alan's work, etotheipi is up amongst the bitcoin deities these days)

I've added a feature to export/import metadata in the new wallet format. It carries over address chain length and comments, among other things. Thinking of adding a checkbox in the "Send" dialog to carry the structure along an unsigned tx blob to back up the metadata in the offline wallet. This would persist all that stuff in a saner fashion, though none of that data will be covered by paper backups.

that sounds pretty neat, so you could lose the watch-only copy somehow, and still have wallet metadata backup coverage in the offline wallet. But it wouldn't work on my current system, as I have it set up so that the user partition is a non-persistent LVM snapshot; the OS ditches the snapshot upon closing the VM.


Alan was thinking of a system to share metadata. It was to be an online store where you key your data by some hash that is only known to you. The idea here was to synchronize multiple signers over multisig (something like passing the unsigned tx around between a phone and a PC). Maybe something like that could work to persist wallet metadata, by encrypting it to one of the key pairs in the wallet. However there's an issue of authenticity: anyone can encrypt data to your public key. The scheme would need some more thinking to keep the data private. Something to mull over.

just signing the encrypted data as part of the protocol seems like a ready solution? and why would it matter, once someone else encrypts data using a public key of yours, what could they possibly do with that? they can prove the data encrypts to that value, but it's been common knowledge for decades that this is how public-key cryptography works. what am I missing?


The whole Qubes thing still eludes me tbh. I don't know how you put up with it. VFIO over kvm is all I can stomach in terms of virtualization, though I've got some chops with docker now and I plan to make ArmoryDB fully dockerizable at some point.

right, docker is the quick and easy way (using native kernel cleverness from cgroups and namespaces), although there's some kind of closed-source wrinkle to it all that escapes me, maybe that's outmoded though.

Qubes was always like pulling teeth, but it's made for one thing only: overly cautious security. and so it fits Bitcoin usage well (especially for old timers like me with a stash to defend). using Qubes for anything else is not fun, although it's improved over the years (maybe even next release it will play video using the GPU over IOMMU, in just 1 VM across the whole OS!! atm playing video induces the CPU fan to push so hard that the computer levitates a little)
legendary
Activity: 3766
Merit: 1364
Armory Developer
because of orphan blocks, or something else also?

Headers first, which are handled across multiple threads. Each valid header (where the PoW checks out) is queued to be written by whatever thread checked it, so blocks often end up out of order on disk past the last height milestone (the process is single threaded up until then), which was around 420k? Core has discontinued that milestone since then, AFAIK.

Quote
it seems that the first byte is some kind of fixed header?

Transaction keys are 6 bytes long. For a mined tx, the first 4 bytes identify the block, the next 2 identify the tx within the block. For ZC, the first 2 bytes are a marker (0xFFFF), the last 4 are the zcid (internal counter, no relation to the on-chain data). On disk, each key is prefixed with a byte to signify the data type. This avoids mixing keys when traversing a dataset. LMDB is a plain store; unlike SQL databases it doesn't have a concept of tables and records, so it's preferable to split the keys via some prefix scheme. Therefore all keys end up with an extra prefix byte at the I/O layer.
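The key layout described above can be sketched like so (field and helper names are mine, and the prefix byte value is made up; this illustrates the scheme as described, not Armory's actual code):

```python
# Sketch of the 6-byte tx key scheme described above (names are mine,
# not Armory's). Mined tx: 4 bytes block id + 2 bytes tx index.
# Zero-conf: 2-byte 0xFFFF marker + 4-byte internal counter.
# On disk, a 1-byte data-type prefix is prepended to every key.
import struct

ZC_MARKER = 0xFFFF
PREFIX_TX = 0x03          # hypothetical data-type prefix byte

def mined_key(block_id: int, tx_index: int) -> bytes:
    return struct.pack(">IH", block_id, tx_index)       # 6 bytes

def zc_key(zc_id: int) -> bytes:
    return struct.pack(">HI", ZC_MARKER, zc_id)         # 6 bytes

def db_key(key: bytes) -> bytes:
    return bytes([PREFIX_TX]) + key                     # 7 bytes at the I/O layer

def is_zero_conf(key: bytes) -> bool:
    # no real block id comes close to 0xFFFF... so the marker is unambiguous
    return key[:2] == b"\xff\xff"

k = mined_key(790066, 12)
assert len(k) == 6 and not is_zero_conf(k)
z = zc_key(7)
assert is_zero_conf(z) and len(db_key(z)) == 7
```

This also shows why the log check later in the thread works: a key that doesn't start with 0xFFFF cannot be a zero-conf entry.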

Quote
but it was certainly not 0xFFFF (which is 4 bytes anyway, no?)

2 hexits per byte. Each hexit encodes one of 16 values (4 bits): 0-9, A-F (I use hexit cause I've seen that once a long time ago and found the naming elegant, I don't think anybody else does =D)
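In other words, a quick sanity check in Python:

```python
# Two hexits per byte: each hexit (hex digit, a.k.a. nibble) is 4 bits,
# one of 16 values, so a hex string is always twice the byte length.
b = bytes([0xFF, 0x00, 0xAB])
assert b.hex() == "ff00ab"            # 2 hexits per byte
assert len(b.hex()) == 2 * len(b)
assert (0xFFFF).bit_length() == 16    # 0xFFFF is 2 bytes, not 4
```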

Quote
crap. how can I transfer the transaction comments across from the old db to the rebuilt one? you're going to say "write a small lmdb parser that does it manually", right?

Comments live in the wallet. Address comments are keyed by address, tx comments are keyed by tx hash. Tx comments are resolved at each ArmoryQt load, by checking for tx hashes in the ledger. As long as your DB works, the wallet will find its comments. Where you could lose comments is if you were to create a fresh WO from the offline machine and copy that file over to the online machine. However, if you try to restore the wallet via the "Wallet" menu (be it a file or a backup string), it will detect the existing wallet and offer to merge the metadata.
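The resolution step described above amounts to something like this (a simplified sketch with made-up structures, not ArmoryQt's actual code):

```python
# Simplified sketch of tx-comment resolution at load time (made-up
# structures, not ArmoryQt's actual code). Comments live in the wallet,
# keyed by tx hash; the ledger is what the DB rebuilt from chain data.
wallet_tx_comments = {
    b"\x01" * 32: "rent payment",
    b"\x02" * 32: "coffee",
    b"\x03" * 32: "tx the DB no longer knows about",
}
ledger_tx_hashes = {b"\x01" * 32, b"\x02" * 32}   # from a healthy DB scan

# At each load, only comments whose tx hash shows up in the ledger resolve.
# The wallet still holds the rest, so a rebuild & rescan brings them back.
resolved = {h: c for h, c in wallet_tx_comments.items() if h in ledger_tx_hashes}
assert len(resolved) == 2 and b"\x03" * 32 not in resolved
```

Which is why rebuilding the DB is safe for comments: the wallet file, not the DB, is the source of truth.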

I've added a feature to export/import metadata in the new wallet format. It carries over address chain length and comments, among other things. Thinking of adding a checkbox in the "Send" dialog to carry the structure along an unsigned tx blob to back up the metadata in the offline wallet. This would persist all that stuff in a saner fashion, though none of that data will be covered by paper backups.

Alan was thinking of a system to share metadata. It was to be an online store where you key your data by some hash that is only known to you. The idea here was to synchronize multiple signers over multisig (something like passing the unsigned tx around between a phone and a PC). Maybe something like that could work to persist wallet metadata, by encrypting it to one of the key pairs in the wallet. However there's an issue of authenticity: anyone can encrypt data to your public key. The scheme would need some more thinking to keep the data private. Something to mull over.

Quote
still enthusiastic about this, but getting Qubes 4.1 into shape was more challenging than expected, plus I have more excuses Tongue this effort was a part of getting my basic working machine ready to switch permanently, so hopefully I'll start trying out 0.97 again

The whole Qubes thing still eludes me tbh. I don't know how you put up with it. VFIO over kvm is all I can stomach in terms of virtualization, though I've got some chops with docker now and I plan to make ArmoryDB fully dockerizable at some point.
legendary
Activity: 3430
Merit: 3080
If the underlying block data structure were to change past a certain block, swapping in an old clean DB state would eventually lead to an error where the offsets on record mismatch the blocks on disk.

aha, that may well be it. I didn't have juggling space for Armory's very own bitcoin blocks folder, and so re-synced the bitcoin blocks from an older copy that was ~18 months behind the blocks dir that the Armory databases were synced against. And of course, the layout of blocks isn't deterministic (because of orphan blocks, or something else also?), and the Armory database cannot understand the 18 months that was freshly synced (every block before that was from the same blocks dir I've been using with Armory for perhaps 10 years)

Unless it starts with 0xFFFF, it's not a zero conf. I'm referring to the data logged here:

Code:
-ERROR - 18:10:17: (lmdb_wrapper.cpp:2087) (scrubbed_but_valid_hexdata)

yeah, definitely not 0xFFFF. I looked at the code in lmdb_wrapper.cpp, it seems that the first byte is some kind of fixed header? that was consistent for each data line (obviously a different kind of problem if not Cheesy), but it was certainly not 0xFFFF (which is 4 bytes anyway, no?) regardless, the (scrubbed_but_valid_hexdata) bytes 1-4 were not 0xFFFF either


At any rate, you're likely in for a rebuild & rescan.

crap. how can I transfer the transaction comments across from the old db to the rebuilt one? you're going to say "write a small lmdb parser that does it manually", right?


It's hard to tell without a log. I've fixed a few of these bugs on the way to 0.97. Those were not minor changes, I don't think it's reasonable to try and cherry-pick them back into 0.96 for a cheap fix.

still enthusiastic about this, but getting Qubes 4.1 into shape was more challenging than expected, plus I have more excuses Tongue this effort was a part of getting my basic working machine ready to switch permanently, so hopefully I'll start trying out 0.97 again
legendary
Activity: 3766
Merit: 1364
Armory Developer
strange behavior is that the 2 most recent transactions can handle 'View Details', but all others produce this

These symptoms, on top of your explanation of the DB state swaps, are making the picture a bit clearer. Armory, like Core, points to block data on disk using offsets. If the underlying block data structure were to change past a certain block, swapping in an old clean DB state would eventually lead to an error where the offsets on record mismatch the blocks on disk.
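A minimal illustration of that failure mode (hypothetical fixed-size layout, purely illustrative; real block files interleave variable-size blocks, which only makes it worse):

```python
# Minimal illustration of the offset-mismatch failure mode described
# above (hypothetical layout). The DB records byte offsets into the
# block files; if the files are re-synced with a different layout,
# the old offsets point at the wrong data.
import io

def write_blocks(order):
    """Serialize fake blocks back to back; return (blob, {block: offset})."""
    buf, offsets = io.BytesIO(), {}
    for name in order:
        offsets[name] = buf.tell()
        buf.write(name.encode().ljust(16, b"\x00"))   # fixed-size fake block
    return buf.getvalue(), offsets

# First sync: DB records offsets against this layout.
blob_v1, db_offsets = write_blocks(["blk_a", "blk_b", "blk_c"])

# Re-synced files: same blocks, different on-disk order (headers-first
# download is multi-threaded, so the order is not deterministic).
blob_v2, _ = write_blocks(["blk_b", "blk_c", "blk_a"])

def read_block(blob, offset):
    return blob[offset:offset + 16].rstrip(b"\x00").decode()

assert read_block(blob_v1, db_offsets["blk_b"]) == "blk_b"   # old files: fine
assert read_block(blob_v2, db_offsets["blk_b"]) != "blk_b"   # new files: garbage
```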

The DB failure at setup you're experiencing looks like a symptom of that: it is looking for a transaction that it expects to exist. It tries to find the relevant block, fails (because of offset mismatch or something else) and falls back to checking the mempool for the tx, where it fails again and finally gives up. You can confirm this behavior by looking at the tx key logged at failure. Unless it starts with 0xFFFF, it's not a zero conf. I'm referring to the data logged here:

Code:
-ERROR - 18:10:17: (lmdb_wrapper.cpp:2087) (scrubbed_but_valid_hexdata)

At any rate, you're likely in for a rebuild & rescan.

Quote
I took a look at that file before the OP, and thought it strange that it was 2500KB, but then considered that I don't really know what its purpose is, despite the name.

It carries the 0-conf transactions you emitted. It doesn't clean up after itself, but a tx that has been confirmed won't be fetched from there anymore. Deleting this file will wipe out that data. If you have no outgoing ZC at the time, it won't affect you. If you have a ZC going out, it will disappear from the ledger until it gets confirmed. Basically, it's safe to delete. The regenerated file is empty, it's just LMDB pre-assigning some space for later.

Quote
I thought the bug may fix itself during the run-up to that stage. seeing as that's (0.97 RC) not yet on the cards, I figured I'd bring it up now anyway

It's hard to tell without a log. I've fixed a few of these bugs on the way to 0.97. Those were not minor changes, I don't think it's reasonable to try and cherry-pick them back into 0.96 for a cheap fix.
legendary
Activity: 3430
Merit: 3080
Try deleting "zeroconf" in your db folder, that will wipe the mempool.

sadly, no dice. the .armory/databases/zeroconf file is regenerated (size 16384 bytes), and the same volley of "BLKDATA DB does not have the requested ZC tx" messages occurs
legendary
Activity: 3430
Merit: 3080
I assume you're left hanging with no transactions showing in the lobby?

right, there were no unconfirmeds waiting when I last shut Armory down


edit: the transactions window does show and scrolls etc.

strange behavior is that the 2 most recent transactions can handle 'View Details', but all others produce this:
Code:
Traceback (most recent call last):
  File "/usr/share/bin/../lib/armory/ArmoryQt.py", line 3387, in showContextMenuLedger
    self.showLedgerTx()
  File "/usr/share/bin/../lib/armory/ArmoryQt.py", line 3345, in showLedgerTx
    cppTx = TheBDM.bdv().getTxByHash(txHashBin)
  File "/usr/share/lib/armory/CppBlockUtils.py", line 2824, in getTxByHash
    return _CppBlockUtils.BlockDataViewer_getTxByHash(self, txHash)
: >
(ERROR) Traceback (most recent call last):
  File "/usr/share/bin/../lib/armory/ArmoryQt.py", line 3387, in showContextMenuLedger
    self.showLedgerTx()
  File "/usr/share/bin/../lib/armory/ArmoryQt.py", line 3345, in showLedgerTx
    cppTx = TheBDM.bdv().getTxByHash(txHashBin)
  File "/usr/share/lib/armory/CppBlockUtils.py", line 2824, in getTxByHash
    return _CppBlockUtils.BlockDataViewer_getTxByHash(self, txHash)
DbErrorMsg: >



sorry didn't quite read what you said


Try deleting "zeroconf" in your db folder, that will wipe the mempool.

I took a look at that file before the OP, and thought it strange that it was 2500KB, but then considered that I don't really know what its purpose is, despite the name.

a different vm is busy using the external disk I'm using for bitcoin network data, but I'll try this in a short while and report back


aside, this highlights a bug I knew existed but worked around using known-good backups of the .armory/databases dir: occasionally (don't know why), the transaction scan refuses to progress beyond the top block from the previous run of Armory (at which point the known-good backups get brought in). as we were not yet at the RC stage for 0.97, I thought the bug may fix itself during the run-up to that stage. seeing as that's (0.97 RC) not yet on the cards, I figured I'd bring it up now anyway



legendary
Activity: 3766
Merit: 1364
Armory Developer
It looks like it is trying to grab a zeroconf tx that does not exist in Armory DB's mempool. It's hard to think of a situation where this would happen (how would it know of a ZC if it's not in its mempool?). One way of getting there is to chain 2 ZCs and somehow have the parent disappear from the mempool. I assume you're left hanging with no transactions showing in the lobby? Try deleting "zeroconf" in your db folder, that will wipe the mempool.
newbie
Activity: 21
Merit: 7
legendary
Activity: 3430
Merit: 3080
I tried getting Armory 96.5 working on my Qubes 4.1 machine (4.0 is really not supported any more), and all was well until this last part

compiling was really the same as 94-96.x, using debian 10/buster toolchain, no issues

dbLog.txt:
Code:
-INFO  - 18:09:08: (BlockUtils.cpp:915) blkfile dir: /home/user/.bitcoin/blocks
-INFO  - 18:09:08: (BlockUtils.cpp:916) lmdb dir: /home/user/.armory/databases
-INFO  - 18:09:08: (lmdb_wrapper.cpp:388) Opening databases...
-INFO  - 18:09:08: (BDM_Server.h:263) Listening on port 63221
-INFO  - 18:09:08: (nodeRPC.cpp:57) RPC connection established
-INFO  - 18:09:08: (BlockDataManagerConfig.cpp:919) waiting on node sync: 99.9997%
-INFO  - 18:09:08: (nodeRPC.cpp:425) Node is ready
-INFO  - 18:09:08: (BlockUtils.cpp:1108) Executing: doInitialSyncOnLoad
-INFO  - 18:09:08: (BDM_Server.cpp:1121) registered bdv: 4a80c146a3552190f2b4
-INFO  - 18:09:08: (DatabaseBuilder.cpp:199) Reading headers from db
-INFO  - 18:09:25: (DatabaseBuilder.cpp:238) Found 723305 headers in db
-INFO  - 18:09:33: (DatabaseBuilder.cpp:64) Rewinding 100 blocks
-INFO  - 18:09:33: (DatabaseBuilder.cpp:71) updating HEADERS db
-INFO  - 18:09:33: (DatabaseBuilder.cpp:493) Found next block after skipping 1660080bytes
...
-INFO  - 18:09:35: (DatabaseBuilder.cpp:281) parsed block file #3599
-INFO  - 17:53:09: (DatabaseBuilder.cpp:281) parsed block file #3600                                                                                                                                                
-INFO  - 17:53:11: (Blockchain.cpp:248) Organizing chain                                                                                                                                                            
-INFO  - 17:53:18: (Blockchain.cpp:370) Organized chain in 6s                                                                                                                                                      
-INFO  - 17:53:24: (DatabaseBuilder.cpp:76) updated HEADERS db in 908s                                                                                                                                              
-INFO  - 17:53:25: (lmdb_wrapper.cpp:388) Opening databases...                                                                                                                                                      
-INFO  - 17:53:25: (DatabaseBuilder.cpp:1231) verifying txfilters integrity                                                                                                                                        
-INFO  - 17:54:06: (DatabaseBuilder.cpp:1314) done checking txfilters                                                                                                                                              
-INFO  - 17:54:07: (DatabaseBuilder.cpp:134) scanning new blocks from #723305 to #790065
...
-INFO  - 18:06:46: (BlockchainScanner.cpp:852) scanned from block #789756 to #790065
-INFO  - 18:06:46: (BlockchainScanner.cpp:214) scanned transaction history in 759s
-INFO  - 18:06:47: (BlockchainScanner.cpp:1789) resolving txhashes
-INFO  - 18:06:47: (BlockchainScanner.cpp:1848) 0 blocks hit by tx filters
-INFO  - 18:06:47: (BlockchainScanner.cpp:1937) found 0 missing hashes
-INFO  - 18:06:47: (BlockchainScanner.cpp:1982) Resolved missing hashes in 0s
-INFO  - 18:06:47: (lmdb_wrapper.cpp:388) Opening databases...
-INFO  - 18:06:47: (DatabaseBuilder.cpp:186) scanned new blocks in 760s
-INFO  - 18:06:47: (DatabaseBuilder.cpp:190) init db in 1726s
-INFO  - 18:06:47: (BDM_supportClasses.cpp:1891) Enabling zero-conf tracking
-INFO  - 18:06:53: (BlockchainScanner.cpp:857) scanned block #790066
-ERROR - 18:10:17: (lmdb_wrapper.cpp:2086) BLKDATA DB does not have the requested ZC tx
-ERROR - 18:10:17: (lmdb_wrapper.cpp:2087) (scrubbed_but_valid_hexdata)
-WARN  - 18:10:17: (LedgerEntry.cpp:334) failed to get tx for ledger parsing


subsequently, ArmoryDB only seems to perform "Wallet Properties" callbacks, most everything else produces errors in the qt log

the ERROR line from lmdb_wrapper.cpp repeats dozens of times. And the gui reports "node offline"