
Topic: [ANNOUNCE] Abe 0.7: Open Source Block Explorer Knockoff - page 26. (Read 221044 times)

hero member
Activity: 481
Merit: 529
Thanks for the detailed reply John. Once the initial load finishes I'll run some tests.
BTW, the last 50,000 blocks seem to take forever; I guess there are many more TXs.

Yes.  You can get an idea of how far it has to go by examining the datadir table.  "blkfile_number" corresponds to a file in the blocks directory (e.g. 100048 = blk00048.dat) and "blkfile_offset" shows how far in the file Abe has loaded.
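If you want to check it directly, a query like this against the datadir table should show the progress (column names as they appear in Abe's "Failed to catch up" messages; shown here as a sketch):

Code:
SELECT dirname, blkfile_number, blkfile_offset FROM datadir;

Compare blkfile_number and blkfile_offset against the file names and sizes in your blocks directory to estimate how much work remains.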
legendary
Activity: 1102
Merit: 1014
Does Abe work with leveldb ok? I have an instance that was running on Litecoin 0.6.3 and am trying to upgrade to 0.8.3.7 which uses leveldb.

If you can give me a quick idea on what I need to do to rebuild the db with this new client, that would be great.

Edit: Looks good. I just had to make a new database.
member
Activity: 73
Merit: 10
Thanks for the detailed reply John. Once the initial load finishes I'll run some tests.
BTW, the last 50,000 blocks seem to take forever; I guess there are many more TXs.
hero member
Activity: 481
Merit: 529
Hello,

I have a few questions:

- Do I need to wait for bitcoind to download the complete block files or can I run them in parallel?
- Once the initial data load finishes do I need to stop it and run it with python -m Abe.abe --config abe-my.conf?
- Will running python -m Abe.abe --config abe-my.conf insert new incoming blocks?

Thanks

You can run bitcoind and Abe in parallel.

The instructions call for --no-serve on the initial load, and this makes it exit as soon as it loads the last block, so you don't have to stop it.  The instructions say to run in web-server mode after the initial load, but you can run both at the same time, as long as you add "--datadir=[]" to the end of the web server command:

Code:
python -m Abe.abe --config abe-my.conf "--datadir=[]"

The "--datadir=[]" prevents the two processes from competing to insert blocks, which tends to cause trouble.

Unfortunately, Abe does not automatically insert new blocks every so often.  By default (without "--datadir=[]"), it inserts new blocks when you browse the site, but if you haven't browsed for a long time, it can take Abe quite a while to catch up.  If this is a problem, I suggest you schedule a job to trigger loading at regular intervals.  For example, I use this in crontab to check every minute:

Code:
* * * * * /usr/bin/wget -q -O /dev/null http://localhost:2750/

If you want to run a public server, as opposed to one for just personal use, I suggest configuring the public-facing process not to load anything (by adding "--datadir=[]") and having a separate process listening on the local interface, with a job such as the one above to trigger loading new blocks.  Using a local listener keeps the loader single-threaded.  (This information should be in a readme; thanks for asking.)
member
Activity: 73
Merit: 10
Hello,

I have a few questions:

- Do I need to wait for bitcoind to download the complete block files or can I run them in parallel?
- Once the initial data load finishes do I need to stop it and run it with python -m Abe.abe --config abe-my.conf?
- Will running python -m Abe.abe --config abe-my.conf insert new incoming blocks?

Thanks
newbie
Activity: 42
Merit: 0
The latest update seems to have done the trick! Thanks!
hero member
Activity: 481
Merit: 529
I really know nothing about git except how to use it to grab a copy of some code from Github. Running that git command in either my bitcoin-abe directory or its parent isn't showing anything, so I must have moved things around enough that git became confused. I can see in my command history where I fetched Abe using "git clone," so I know that's how I originally got it.

Please pull the latest and see if it gets you past the error.  I would have pushed it earlier but did not have my github keys handy.

Thanks.
newbie
Activity: 42
Merit: 0
I really know nothing about git except how to use it to grab a copy of some code from Github. Running that git command in either my bitcoin-abe directory or its parent isn't showing anything, so I must have moved things around enough that git became confused. I can see in my command history where I fetched Abe using "git clone," so I know that's how I originally got it.

I did think to save a slice of the transaction history. I was running Abe in an infinite loop where it was using RPC to fetch the latest blocks/transactions from bitcoind, sleeping 60 seconds, and doing it again. The last block I got successfully was 251552 ("block_tx 251552 21898522") followed by several minutes of successful "mempool tx" commits, the last one being "mempool tx 21898948", then one of my script's 60-second sleeps, then the error I posted above. There were no successful transactions after that last "mempool tx" one.

I have made no local modifications to the code at all.
hero member
Activity: 481
Merit: 529
After having Abe running for weeks with no issues at all, I suddenly ran into this:

Code:
Failed to catch up {'blkfile_offset': 0, 'blkfile_number': 1, 'chain_id': Decimal('1'), 'loader': u'rpc', 'dirname': u'/home/xxxxxxx/.bitcoin', 'id': 295286L}
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2592, in catch_up
    if not store.catch_up_rpc(dircfg):
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2765, in catch_up_rpc
    store.import_block(block, chain_ids = chain_ids)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 1731, in import_block
    tx['tx_id'] = store.import_and_commit_tx(tx, pos == 0)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2220, in import_and_commit_tx
    tx_id = store.import_tx(tx, is_coinbase)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2154, in import_tx
    pubkey_id = store.script_to_pubkey_id(txout['scriptPubKey'])
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2532, in script_to_pubkey_id
    for opcode, data, i in deserialize.script_GetOp(script):
  File "/usr/local/lib/python2.7/dist-packages/Abe/deserialize.py", line 236, in script_GetOp
    opcode |= ord(bytes[i])
IndexError: string index out of range

Any ideas? It's using MySQL as the backend.

I suspect this comes from new script types.  I have a possible fix here in diff format.  I'll push it to master soon.

I am curious what transaction triggered the error.  I'll ask more about that if the fix doesn't work.

Thanks for the report.  Please mention the git revision (git rev-parse HEAD) and any local code changes.
newbie
Activity: 42
Merit: 0
After having Abe running for weeks with no issues at all, I suddenly ran into this:

Code:
Failed to catch up {'blkfile_offset': 0, 'blkfile_number': 1, 'chain_id': Decimal('1'), 'loader': u'rpc', 'dirname': u'/home/xxxxxxx/.bitcoin', 'id': 295286L}
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2592, in catch_up
    if not store.catch_up_rpc(dircfg):
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2765, in catch_up_rpc
    store.import_block(block, chain_ids = chain_ids)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 1731, in import_block
    tx['tx_id'] = store.import_and_commit_tx(tx, pos == 0)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2220, in import_and_commit_tx
    tx_id = store.import_tx(tx, is_coinbase)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2154, in import_tx
    pubkey_id = store.script_to_pubkey_id(txout['scriptPubKey'])
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2532, in script_to_pubkey_id
    for opcode, data, i in deserialize.script_GetOp(script):
  File "/usr/local/lib/python2.7/dist-packages/Abe/deserialize.py", line 236, in script_GetOp
    opcode |= ord(bytes[i])
IndexError: string index out of range

Any ideas? It's using MySQL as the backend.
hero member
Activity: 481
Merit: 529
Looks good, can't wait for a more complete API. Are you going to match Block Explorer's URLs?

Well, I do not spend much time on new features these days, but if you have a particular function in mind, I'll at least comment on how best to do it.  Several Block Explorer API functions already work more or less compatibly.
newbie
Activity: 14
Merit: 0
Looks good, can't wait for a more complete API. Are you going to match Block Explorer's URLs?
hero member
Activity: 481
Merit: 529
John, so you're telling me to delete the whole DB? Is there any other way to do this?

Sorry for the long delay, but I am just busy with other things.  Try pulling the latest, then run:

Code:
python -m Abe.admin --config abe.conf delete-chain-blocks MegaCoin

replacing abe.conf as appropriate, and show me the output.  If it says that it deleted "from block", run Abe with --rescan to reload the chain.  That ought to wipe just that chain's block headers, but it may be slow due to a MySQL issue.  I suggest first trying it on a copy of the live database.

Also, try to configure your system so that only one process loads chain data at a time.
hero member
Activity: 532
Merit: 500
John, so you're telling me to delete the whole DB? Is there any other way to do this?
hero member
Activity: 481
Merit: 529
Hello again, John, lol. Could you please look at this? I'm going crazy trying to figure out how to fix it (MegaCoin):
http://blockexplorer.coinworld.us/

Do you mean errors like this starting at MegaCoin block 41798?  It puzzles me.  From what I can piece together (without direct access to your database) this block could not figure out its block_value_in and/or block_value_out, yet it knows its parent block.  This is not supposed to happen.  Maybe one of the transactions failed to load because of bugs affecting parallel loading.

You might try deleting the block and its descendants and rescanning the chain.  Perhaps I can write a tool to help with this, but I have another priority at the moment.
sr. member
Activity: 322
Merit: 250
This is cool, good project.
hero member
Activity: 532
Merit: 500
Hello again, John, lol. Could you please look at this? I'm going crazy trying to figure out how to fix it (MegaCoin):
http://blockexplorer.coinworld.us/
hero member
Activity: 481
Merit: 529
Thanks John, that fixed my issue. Now I just need to figure out why Megacoin & Bytecoin aren't updating. They both use the new 0.8 /blocks/ format.

When I manually run it and it hits Megacoin I get

Code:
Opened /home/diatonic/.megacoin/blocks/blk00000.dat
Exception at 15709119118651544644
[...]
OverflowError: Python int too large to convert to C long

This is typical of ppcoin-derived currencies.  Have you tried the "ppcoin" branch?  See: https://github.com/jtobey/bitcoin-abe/issues/4

With Bytecoin it appears to add the new blocks but it's not showing them.

This can result from a block being skipped or incorrectly loaded.  Have you tried rescanning the blockfiles?  Use --rescan or, if you have many coins in one database, rescan just the one by setting blkfile_number=1, blkfile_offset=0 in the appropriate row of the datadir table.
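If you go the single-chain route, the reset is a one-line update against the datadir table (the dirname value here is a hypothetical placeholder; use whatever your row actually shows):

Code:
UPDATE datadir SET blkfile_number=1, blkfile_offset=0
 WHERE dirname='/home/diatonic/.bytecoin';

On the next run, Abe should re-read that datadir's block files from the beginning, skipping blocks it already has.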
sr. member
Activity: 271
Merit: 250
Thanks John, that fixed my issue. Now I just need to figure out why Megacoin & Bytecoin aren't updating. They both use the new 0.8 /blocks/ format.

When I manually run it and it hits Megacoin I get

Code:
Opened /home/diatonic/.megacoin/blocks/blk00000.dat
Exception at 15709119118651544644
Failed to catch up {'blkfile_offset': 25339280, 'blkfile_number': 100000, 'chain_id': 26, 'loader': None, 'dirname': '/home/diatonic/.megacoin', 'id': Decimal('26')}
Traceback (most recent call last):
  File "Abe/DataStore.py", line 2632, in catch_up
    store.catch_up_dir(dircfg)
  File "Abe/DataStore.py", line 2890, in catch_up_dir
    store.import_blkdat(dircfg, ds, blkfile['name'])
  File "Abe/DataStore.py", line 3015, in import_blkdat
    b = store.parse_block(ds, chain_id, magic, length)
  File "Abe/DataStore.py", line 3046, in parse_block
    d['transactions'].append(deserialize.parse_Transaction(ds))
  File "Abe/deserialize.py", line 90, in parse_Transaction
    d['txOut'].append(parse_TxOut(vds))
  File "Abe/deserialize.py", line 65, in parse_TxOut
    d['value'] = vds.read_int64()
  File "Abe/BCDataStream.py", line 72, in read_int64
    def read_int64  (self): return self._read_num('<q')
  File "Abe/BCDataStream.py", line 110, in _read_num
    (i,) = struct.unpack_from(format, self.input, self.read_cursor)
OverflowError: Python int too large to convert to C long

With Bytecoin it appears to add the new blocks but it's not showing them.
hero member
Activity: 481
Merit: 529
Hey John, I scanned a coin in with the wrong address version, and now addresses appear incorrectly. I would reload the whole database, but there are 36 coins and it would take over 24 hours. Is there a way to drop a single chain and reload it?

If you have not enabled firstbits, use SQL to update chain.chain_address_version with the correct value.  That should magically fix all the addresses.  To find the correct value, you could start loading the chain in a new database, identically configured except for that address version.  Then:

SQL> SELECT chain_name, chain_address_version FROM chain;

SQL> UPDATE chain SET chain_address_version='value' WHERE chain_name='chain';

This may require conversion to and from a printable format such as hex, if you use binary_type=binary, which is now the default.  In MySQL, replace chain_address_version with HEX(chain_address_version) in the select statement, and replace 'value' with UNHEX('hex value') in the update.
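Putting that together, a MySQL session might look like this (the chain name and the '30' version byte are placeholders for illustration, not values from your database):

Code:
SELECT chain_name, HEX(chain_address_version) FROM chain;
UPDATE chain SET chain_address_version = UNHEX('30')
 WHERE chain_name = 'SomeCoin';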

Ideally, we would have a command-line and web interface to manage chains.