
Topic: [ANNOUNCE] Abe 0.7: Open Source Block Explorer Knockoff - page 31. (Read 220986 times)

newbie
Activity: 42
Merit: 0
Thanks, that worked perfectly! Now the only weirdness is that this keeps showing up in my Apache error log every time I search by Bitcoin address (address redacted in example below):

Code:
/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py:484: Warning: Truncated incorrect INTEGER value: '1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
  store.cursor.execute(stmt, params)

Everything still works just fine, though. Thanks again!
hero member
Activity: 481
Merit: 529
Is there any way to have Abe not update the database with the latest transactions? When running in FastCGI mode, most requests take a long time to return because Abe is reading the most recent transactions and inserting them into the database. I know I can start in standalone mode, but then what's the point of FastCGI mode at all? Ideally, I'd like a background process that loads new transactions, sleeps for a few seconds or minutes, and loads again, forever. The FastCGI side would then just be a user interface to the back-end database. Is that possible? Am I missing something obvious?

You are on the right track.  I ought to be typing this into a new "readme-server.txt" file.  Try "datadir=[]" in your FastCGI config to disable loading, and "no-serve" in the loader config to disable serving.
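
A minimal sketch of the split, assuming conf files take the same option names as the command line (the dbtype/connect-args values are just the SQLite examples from earlier in the thread):

Code:
# abe-server.conf -- FastCGI front end only; the empty datadir list disables loading.
dbtype=sqlite3
connect-args=abe.sqlite
datadir=[]

# abe-loader.conf -- loader only; no-serve disables the web server.
dbtype=sqlite3
connect-args=abe.sqlite
no-serve

Then run the loader in a loop while FastCGI serves, e.g.:

Code:
$ while true; do python -m Abe.abe --config abe-loader.conf; sleep 60; done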
hero member
Activity: 481
Merit: 529
Can I have an Abe with both SHA-256d chains and scrypt chains? Also, where is the documentation for scrypt-based chains?
Did you try it?  http://explorer.litecoin.net/ appears Abe-based, and doesn't LTC use scrypt?  I'd suggest asking them whether they had to change anything.  They should give you the explorer source.  (If not, I'd like to know...)

Abe does not (yet) contain any special code for scrypt.  That would be nice to add.
newbie
Activity: 42
Merit: 0
Is there any way to have Abe not update the database with the latest transactions? When running in FastCGI mode, most requests take a long time to return because Abe is reading the most recent transactions and inserting them into the database. I know I can start in standalone mode, but then what's the point of FastCGI mode at all? Ideally, I'd like a background process that loads new transactions, sleeps for a few seconds or minutes, and loads again, forever. The FastCGI side would then just be a user interface to the back-end database. Is that possible? Am I missing something obvious?
legendary
Activity: 1792
Merit: 1008
/dev/null
Can I have an Abe with both SHA-256d chains and scrypt chains? Also, where is the documentation for scrypt-based chains?
newbie
Activity: 46
Merit: 0
Getting this error on first import

Code:
> python -m Abe.abe --upgrade --dbtype=sqlite3  --connect-args=abe.sqlite --port 2750 --datadir=../
no chain_id
...
OverflowError: long too big to convert

As suggested at https://github.com/jtobey/bitcoin-abe/issues/16: RTFM. README-SQLITE suggests storing integers as strings (SQLite integers are limited to 64 bits, and some of Abe's values exceed that), and this fixes it:

Code:
python -m Abe.abe --upgrade --dbtype=sqlite3  --connect-args=abe.sqlite --port 2750 --datadir=../ --int-type=str
newbie
Activity: 46
Merit: 0
Getting this error on first import

Code:
> python -m Abe.abe --upgrade --dbtype=sqlite3  --connect-args=abe.sqlite --port 2750 --datadir=../
no chain_id
catch_up_rpc: abort
Opened ../blocks/blk00000.dat
...
...
block_tx 2260 2290
commit
Exception at 521845
Failed to catch up {'blkfile_offset': 521621, 'blkfile_number': 100000, 'chain_id': None, 'loader': None, 'dirname': '../', 'id': 3}
Traceback (most recent call last):
  File "Abe/DataStore.py", line 2596, in catch_up
    store.catch_up_dir(dircfg)
  File "Abe/DataStore.py", line 2850, in catch_up_dir
    store.import_blkdat(dircfg, ds, blkfile['name'])
  File "Abe/DataStore.py", line 2958, in import_blkdat
    store.import_block(b, chain_ids = chain_ids)
  File "Abe/DataStore.py", line 1811, in import_block
    len(b['transactions']), b['search_block_id']))
  File "Abe/DataStore.py", line 507, in sql
    store._execute(cached, params)
  File "Abe/DataStore.py", line 484, in _execute
    store.cursor.execute(stmt, params)
OverflowError: long too big to convert

.... that's what she said
newbie
Activity: 37
Merit: 0
Is there a way to get a list of transactions to/from an address in JSON format? If not, that would be an awesome feature to have.

blockchain.info has an API like this: https://blockchain.info/address/1dice7fUkz5h4z2wPc1wLMPWgB5mDwKDx?format=json
newbie
Activity: 42
Merit: 0
Manually setting the chain_id with sqlite3 seems to have worked! Thanks!
hero member
Activity: 481
Merit: 529
Hi! I'm trying to get Abe working through RPC instead of reading my blockchain files directly and I'm having problems. I believe I've met all the requirements listed at the end of abe.conf regarding RPC, but when I try to start Abe, I get this:

Code:
$ python -m Abe.abe --config abe.conf
no chain_id

Hi!  Thanks for trying, and my apologies for the rough edges.  If using an already loaded database, issue the following SQL before restarting:

Code:
$ sqlite3 abe.sqlite
sqlite> UPDATE datadir SET chain_id = 1 WHERE dirname='/home/user/.bitcoin';
sqlite> .quit

I see almost no references in the documentation to "chain_id", so I'm really not sure what to do. Google has been no help. I tried renaming abe.sqlite, hoping it would just rebuild the entire database from scratch over RPC; in that case I get this:

Code:
$ python -m Abe.abe --config abe.conf
ddl_implicit_commit=true
create_table_epilogue=''
max_varchar=4294967295
clob_type=CLOB
binary_type=buffer
int_type=str
Created silly table abe_dual
sequence_type=update
limit_style=native
commit
Failed to catch up {'blkfile_offset': 0, 'blkfile_number': 1, 'chain_id': 1, 'loader': u'rpc', 'dirname': u'/home/user/.bitcoin', 'id': 1}
Traceback (most recent call last):
  File "Abe/DataStore.py", line 2551, in catch_up
    if not store.catch_up_rpc(dircfg):
  File "Abe/DataStore.py", line 2716, in catch_up_rpc
    format = "binary")
  File "Abe/DataStore.py", line 2254, in export_tx
    'prevout_hash': store.hashout(prevout_hash),
  File "Abe/DataStore.py", line 319, in rev
    return x[::-1]
TypeError: 'NoneType' object has no attribute '__getitem__'
Abe initialized.
Listening on http://192.168.1.2:2750

Any ideas?

Thanks for this; it is a bug that I may be able to fix.  But try the UPDATE statement.
newbie
Activity: 42
Merit: 0
Hi! I'm trying to get Abe working through RPC instead of reading my blockchain files directly and I'm having problems. I believe I've met all the requirements listed at the end of abe.conf regarding RPC, but when I try to start Abe, I get this:

Code:
$ python -m Abe.abe --config abe.conf
no chain_id
Failed to catch up {'blkfile_offset': 77633101, 'blkfile_number': 100060, 'chain_id': None, 'loader': None, 'dirname': u'/home/user/.bitcoin', 'id': 1}
Traceback (most recent call last):
  File "Abe/DataStore.py", line 2553, in catch_up
    raise Exception("RPC load failed")
Exception: RPC load failed
Abe initialized.
Listening on http://192.168.1.2:2750

I see almost no references in the documentation to "chain_id", so I'm really not sure what to do. Google has been no help. I tried renaming abe.sqlite, hoping it would just rebuild the entire database from scratch over RPC; in that case I get this:

Code:
$ python -m Abe.abe --config abe.conf
ddl_implicit_commit=true
create_table_epilogue=''
max_varchar=4294967295
clob_type=CLOB
binary_type=buffer
int_type=str
Created silly table abe_dual
sequence_type=update
limit_style=native
commit
Failed to catch up {'blkfile_offset': 0, 'blkfile_number': 1, 'chain_id': 1, 'loader': u'rpc', 'dirname': u'/home/user/.bitcoin', 'id': 1}
Traceback (most recent call last):
  File "Abe/DataStore.py", line 2551, in catch_up
    if not store.catch_up_rpc(dircfg):
  File "Abe/DataStore.py", line 2716, in catch_up_rpc
    format = "binary")
  File "Abe/DataStore.py", line 2254, in export_tx
    'prevout_hash': store.hashout(prevout_hash),
  File "Abe/DataStore.py", line 319, in rev
    return x[::-1]
TypeError: 'NoneType' object has no attribute '__getitem__'
Abe initialized.
Listening on http://192.168.1.2:2750

Any ideas?
hero member
Activity: 481
Merit: 529
Code:
[2013-05-14 10:42:53] >>   File "Abe/DataStore.py", line 481, in sql
[2013-05-14 10:42:53] >>     store.cursor.execute(cached, params)
[2013-05-14 10:42:53] >> InternalError: invalid page header in block 61022 of relation base/17556/17801

Do you know what's happening? Did my DB get corrupted?

Offhand, it appears to be internal to the database system.  I don't think "block 61022" refers to the blockchain.  I'd try to dump and reload the database, and if that doesn't do it, search the web for "invalid page header in block".
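
Something like the following, assuming a PostgreSQL database named "abe" (adjust names and connection options to your setup):

Code:
$ pg_dump abe > abe-dump.sql   # if this fails on the bad page, the corruption is confirmed
$ createdb abe_restored
$ psql abe_restored < abe-dump.sql
# then point connect-args at abe_restored and restart Abe

If pg_dump itself chokes on the damaged page, PostgreSQL's zero_damaged_pages setting is a last resort: it discards the bad data so the dump can proceed.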
newbie
Activity: 8
Merit: 0
Hi.
I am getting this error:

Code:
[2013-05-14 10:42:53] >> Opened /opt/litecoin/.litecoin/blk0001.dat
[2013-05-14 10:42:53] >> Exception at 1034257218
[2013-05-14 10:42:53] >> Failed to catch up {'blkfile_offset': 1034252127, 'blkfile_number': 1, 'chain_id': 8, 'loader': None, 'dirname': '/opt/litecoin/.litecoin', 'id': Decimal('1')}
[2013-05-14 10:42:53] >> Traceback (most recent call last):
[2013-05-14 10:42:53] >>   File "Abe/DataStore.py", line 2549, in catch_up
[2013-05-14 10:42:53] >>     store.catch_up_dir(dircfg)
[2013-05-14 10:42:53] >>   File "Abe/DataStore.py", line 2809, in catch_up_dir
[2013-05-14 10:42:53] >>     store.import_blkdat(dircfg, ds, blkfile['name'])
[2013-05-14 10:42:53] >>   File "Abe/DataStore.py", line 2917, in import_blkdat
[2013-05-14 10:42:53] >>     store.import_block(b, chain_ids = chain_ids)
[2013-05-14 10:42:53] >>   File "Abe/DataStore.py", line 1692, in import_block
[2013-05-14 10:42:53] >>     tx['tx_id'] = store.import_tx(tx, pos == 0)
[2013-05-14 10:42:53] >>   File "Abe/DataStore.py", line 2146, in import_tx
[2013-05-14 10:42:53] >>     txin['prevout_hash'], txin['prevout_n'])
[2013-05-14 10:42:53] >>   File "Abe/DataStore.py", line 2470, in lookup_txout
[2013-05-14 10:42:53] >>     (store.hashin(tx_hash), txout_pos))
[2013-05-14 10:42:53] >>   File "Abe/DataStore.py", line 605, in selectrow
[2013-05-14 10:42:53] >>     store.sql(stmt, params)
[2013-05-14 10:42:53] >>   File "Abe/DataStore.py", line 481, in sql
[2013-05-14 10:42:53] >>     store.cursor.execute(cached, params)
[2013-05-14 10:42:53] >> InternalError: invalid page header in block 61022 of relation base/17556/17801

Do you know what's happening? Did my DB get corrupted?
hero member
Activity: 481
Merit: 529
The latest commit supports a very crude SVG rendering of the logarithmic hash rate chart.  I do not expect to have time to add basic stuff like grid lines, axis values, and difficulty for a while, but I do hope to see the basic shape of various alt chains someday. :)

Trying to look like Sipa's famous chart: http://bitcoin.sipa.be/speed-ever.png

Snapshot taken just now: http://john-edwin-tobey.org/images/nethash.png

Live location: http://abe.john-edwin-tobey.org/chain/Bitcoin/q/nethash?format=svg

The implementation uses the nethash API code and should not burden the server any more than /q/nethash.  There is also JSONP support via /q/nethash?jsonp=.
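
For example, to fetch the SVG and the JSONP form (the callback name here is arbitrary):

Code:
$ curl -o nethash.svg 'http://abe.john-edwin-tobey.org/chain/Bitcoin/q/nethash?format=svg'
$ curl 'http://abe.john-edwin-tobey.org/chain/Bitcoin/q/nethash?jsonp=drawChart'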
sr. member
Activity: 426
Merit: 250
Cool. If it were possible to connect to multiple nodes, I guess it wouldn't be that hard to build a rudimentary double-spend check. If only I had the time :(
hero member
Activity: 481
Merit: 529
I don't know much about the blockchain format but I wonder if there would be much mileage in merely indexing into the blockchain data and only storing the indexes in the database...
Yes, Abe could store blockfile offsets and read from the files as bitcoind does, or store hashes and fetch raw data by RPC to bitcoind.  A table with just address and transaction identifier (hash or blockfile coordinates), indexed by address, would be useful all alone.  So would a table mapping transaction to block number.  Ideally, the whole database would be built of minimal pieces such as these, and the loader would consult metadata to figure out what it has to do.
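
A rough sketch of those two tables (names and types are illustrative only, not Abe's actual schema; real column types would go through Abe's type abstraction layer):

Code:
-- address -> transaction, indexed by address
CREATE TABLE addr_tx (
    pubkey_hash BINARY(20) NOT NULL,  -- hash160 of the address
    tx_hash     BINARY(32) NOT NULL   -- or blkfile number + offset
);
CREATE INDEX x_addr_tx_pubkey ON addr_tx (pubkey_hash);

-- transaction -> block height
CREATE TABLE tx_block (
    tx_hash      BINARY(32) NOT NULL,
    block_height NUMERIC(14) NOT NULL
);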
legendary
Activity: 2576
Merit: 2267
1RichyTrEwPYjZSeAYxeiFBNnKC9UjC5k
I don't know much about the blockchain format but I wonder if there would be much mileage in merely indexing into the blockchain data and only storing the indexes in the database...
hero member
Activity: 481
Merit: 529
Absolute first thing in my book would be to give up on sqlite. It's really not meant for this kind of abuse. Other than that though...

Edit: That is to say, you could continue to offer sqlite as an option but I wouldn't waste any time trying to optimize it.

Well, if you mean the BTC chain, or any kind of server use, I agree.  It would be nice to add some advice about back-end selection in the docs.

On the other hand, given Abe's SQL abstraction layer, any optimizations tend to benefit all databases, unless we start parallel loading, which SQLite won't support.

I'd like to support a lot more configurable features.  Currently, there is keep-scriptsig and use-firstbits, both of which control the existence of certain columns in the database.  The no-statistics branch also omits some columns and a table.  We could have a bare-bones option that omits transaction inputs and outputs from the database, fetching them as needed from bitcoind.  At the other extreme, we could support denormalization (copying columns into dependent tables) for faster queries at the cost of insertion time and space.  This motivated me to clean up the SQL abstraction layer (not yet merged from no-statistics to master).  I want to abstract over more SQL dialect differences to facilitate configuration changes that modify table structure.
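
As a concrete (purely hypothetical) sketch of the denormalization idea: copying the containing block's height onto each transaction-output row would let common address queries skip a couple of joins, at the cost of space and insert time. Table and column names below are illustrative, and the sketch ignores transactions that appear in more than one (orphaned) block:

Code:
-- hypothetical: duplicate the block height onto each txout row
ALTER TABLE txout ADD block_height NUMERIC(14);
UPDATE txout SET block_height = (
    SELECT b.block_height
      FROM block_tx bt
      JOIN block b ON (b.block_id = bt.block_id)
     WHERE bt.tx_id = txout.tx_id);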

Keeping the front end usable under all configurations would be a challenge, but not to worry: we can declare some configurations unsupported by the HTML front end.  One might even imagine a zero-init mode that starts loading and using new blockchain data before it fills in the old.

Of course, optimization could be easier if we commit to MySQL or PostgreSQL, but I think there is enough low-hanging fruit to postpone that idea for a while.
legendary
Activity: 2576
Merit: 2267
1RichyTrEwPYjZSeAYxeiFBNnKC9UjC5k
I'll have to have a closer look. I'm thinking I'd like something like this, but more for event-based notification than aggregate/historical data.

There may be some way to improve the speed of the inserts too. I'll take a look at that.
I'd appreciate it.  You may want to peruse the GitHub issue.


Absolute first thing in my book would be to give up on sqlite. It's really not meant for this kind of abuse. Other than that though...


Edit: That is to say, you could continue to offer sqlite as an option but I wouldn't waste any time trying to optimize it.
hero member
Activity: 481
Merit: 529
I'll have to have a closer look. I'm thinking I'd like something like this, but more for event-based notification than aggregate/historical data.

There may be some way to improve the speed of the inserts too. I'll take a look at that.
I'd appreciate it.  You may want to peruse the GitHub issue.