
Topic: [ANNOUNCE] Abe 0.7: Open Source Block Explorer Knockoff - page 37. (Read 220986 times)

hero member
Activity: 481
Merit: 529
OverflowError: long too big to convert

Looks like this problem.  See if --int-type=str solves it.  I was unable to reproduce it, IIRC.
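
For reference, a minimal sketch (not Abe's code) of why sqlite raises this, and why binding the oversized values as decimal strings, which is presumably what --int-type=str arranges, avoids it:

Code:
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v)")   # no declared type, so values keep the type they are bound with

big = 1 << 200                        # far beyond SQLite's 64-bit INTEGER range
try:
    conn.execute("INSERT INTO t VALUES (?)", (big,))
except OverflowError as e:
    print("binding as an integer fails:", e)

# Binding the same value as a decimal string works:
conn.execute("INSERT INTO t VALUES (?)", (str(big),))
print(conn.execute("SELECT v FROM t").fetchone()[0])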
mav
full member
Activity: 169
Merit: 107
MySQL is

Server version: 5.5.24-0ubuntu0.12.04.1 (Ubuntu)

Meh, I trust the MySQL docs. Must have just been me being a n00b. I'm using 'emma' as the GUI to MySQL; I've not used emma before.

The update works; sqlite is now parsing the blockchain and storing it in the database.

Awesome... I am super excited by this project; and the support ... what can I say... amazing.

Although, as I typed this post, I got the error OverflowError: long too big to convert while parsing into a sqlite db:
Quote
...
block_tx 2259 2289
commit
block_tx 2260 2290
commit
block_tx 2261 2291
Failed to catch up {'blkfile_number': 1, 'dirname': '/home/ian/.bitcoin', 'chain_id': None, 'id': 1, 'blkfile_offset': 521621}
Traceback (most recent call last):
  File "/home/ian/bitcoin-abe-sqlite/Abe/DataStore.py", line 2241, in catch_up
    store.catch_up_dir(dircfg)
  File "/home/ian/bitcoin-abe-sqlite/Abe/DataStore.py", line 2275, in catch_up_dir
    store.import_blkdat(dircfg, ds)
  File "/home/ian/bitcoin-abe-sqlite/Abe/DataStore.py", line 2397, in import_blkdat
    store.import_block(b, chain_ids = chain_ids)
  File "/home/ian/bitcoin-abe-sqlite/Abe/DataStore.py", line 1696, in import_block
    block_id))
  File "/home/ian/bitcoin-abe-sqlite/Abe/DataStore.py", line 439, in sql
    store.cursor.execute(cached, params)
OverflowError: long too big to convert
hero member
Activity: 481
Merit: 529
I'm trying to run this with sqlite and am getting unicode errors from the "magic" values in CHAIN_CONFIG (magic values set in lines 52-66 of DataStore.py)

Code:
sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings.

I seem to remember this error, perhaps with an older SQLite; 3.7.7 does not give it to me.  Anyway, Abe is supposed to test for binary data types, but the test used only bytes with bit 8 clear.  I've changed that, so the test should now reject bytestrings and use binary_type=hex on your system.  Please retry with this latest fix and post the result.
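
To illustrate the kind of probe meant here (a rough sketch, not Abe's actual test): bind a bytestring whose bytes have the high bit set, like a chain's magic value, and fall back to hex encoding if the driver refuses it.

Code:
import sqlite3

def probe_binary_type(conn):
    # Sketch only: try binding raw bytes that include values with bit 8 set;
    # if the driver rejects them, store binary data hex-encoded instead.
    cur = conn.cursor()
    cur.execute("CREATE TEMP TABLE abe_probe (b)")
    sample = b"\xf9\xbe\xb4\xd9"      # Bitcoin's network magic: bytes with the high bit set
    try:
        cur.execute("INSERT INTO abe_probe VALUES (?)", (sample,))
        return "raw"                  # driver accepts raw bytes as-is (illustrative label)
    except Exception:
        return "hex"                  # fall back to hex encoding (binary_type=hex)
    finally:
        cur.execute("DROP TABLE abe_probe")

print(probe_binary_type(sqlite3.connect(":memory:")))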

In README-MYSQL.txt there are instructions to create a new user abe - I had to run FLUSH PRIVILEGES; after creating the user.

Really?  What mysqld version, please?  The 5.0 docs say:

With CREATE USER, FLUSH PRIVILEGES is unnecessary.
mav
full member
Activity: 169
Merit: 107
I'm trying to run this with sqlite and am getting unicode errors from the "magic" values in CHAIN_CONFIG (magic values set in lines 52-66 of DataStore.py)

Edit: I have it working on MySQL, but still, there's no reason why it shouldn't work on sqlite also.
In README-MYSQL.txt there are instructions to create a new user abe - I had to run FLUSH PRIVILEGES; after creating the user. If that's to be expected for every person who adds the user 'abe', maybe the command can be added to the readme txt file.

Code:
Traceback (most recent call last):
  File "abe.py", line 2015, in
    sys.exit(main(sys.argv[1:]))
  File "abe.py", line 2009, in main
    store = make_store(args)
  File "abe.py", line 115, in make_store
    store = DataStore.new(args)
  File "/home/ian/bitcoin-abe/Abe/DataStore.py", line 2665, in new
    return DataStore(args)
  File "/home/ian/bitcoin-abe/Abe/DataStore.py", line 148, in __init__
    store.initialize()
  File "/home/ian/bitcoin-abe/Abe/DataStore.py", line 1123, in initialize
    store.binin(conf["address_version"])))
  File "/home/ian/bitcoin-abe/Abe/DataStore.py", line 439, in sql
    store.cursor.execute(cached, params)
sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings.
newbie
Activity: 58
Merit: 0
What happened to me was that Abe would act like it was up to date. I saw blkfile_number set to 1 and the last-modified date for blk0001.dat at the same date as blk0002.dat, so I added the row with blkfile_number = 2. Then I got a permissions error from Abe on the second file, changed the permissions, and it worked. I'm checking the datadir now, and it appears that the existing row also now has the right blkfile_number, so adding the row was not necessary. I'll correct my previous post.
hero member
Activity: 481
Merit: 529
My Abe installation stopped updating after block 188421. (2012-07-10 11:26:50) The problem appears to be that Abe only looks at blk0001.dat by default and does not look at blk0002.dat, which my Bitcoin client created after block 188421.

Well, I had the same problem, but it turned out to be a permissions issue.  I had forgotten that I made blk0001.dat world-readable to be shared by different users on my system, and I had to do the same for blk0002.dat.  Once I did, Abe recognized it and continued processing.

The real problem, at least in my case, is Abe's lack of error reporting for the case where blk0002.dat exists but is not readable.  That should be easy to improve.  But if adding a datadir row solves it for you, there must be something else wrong.
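
For what it's worth, a sketch of the kind of explicit check that could produce a clearer message (the helper name and wording are made up, not Abe's code):

Code:
import os

def check_blockfile_readable(path):
    # Sketch only: complain loudly if a block file exists but cannot be read,
    # instead of silently acting as though the chain were up to date.
    if os.path.exists(path) and not os.access(path, os.R_OK):
        raise IOError("block file %s exists but is not readable; check its permissions" % path)

# e.g. check_blockfile_readable("/home/ian/.bitcoin/blk0002.dat")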

Has anyone else seen this?
newbie
Activity: 58
Merit: 0
My Abe installation stopped updating after block 188421. (2012-07-10 11:26:50) The problem appears to be that Abe only looks at blk0001.dat by default and does not look at blk0002.dat, which my Bitcoin client created after block 188421.

I fixed the issue by adding the following row to my datadir table and then running Abe to update the database with the new block file's data.
[row removed]

However, this is not a permanent fix, since Abe will stop updating once the client makes a blk0003.dat, and so on, unless I manually add a row for each block file.


UPDATE: This was a permissions issue on blk0002.dat. Adding a row to datadir for each block file is not necessary.
hero member
Activity: 481
Merit: 529
Before I went to upgrade, I found that Abe had stopped working. My block file just exceeded 1GB in size a couple of hours ago, which is causing this error:

Quote
  File "Abe\BCDataStream.py", line 27, in map_file
    self.input = mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ)
WindowsError: [Error 8] Not enough storage is available to process this command

This is because I'm running a 32-bit system. Apparently this issue will eventually affect 32-bit Linux systems as well because of the way mmap works, except that there it will occur when the block file hits 2 or 3 GB in size. Is Abe particularly tied to mmap? If it is, it seems like it will soon be (or is now) for 64-bit systems only.

Interesting.  I don't think it will affect Linux, because bitcoin starts a new file (blk0002.dat) before 2GB:

Just wanted to add: this mmap() issue is a problem on my 32-bit Linux system as well, with a block file size of 1.8GB. I'll try changing the bitcoind file size limit and then try again. (Or maybe I should just dump the ancient 32-bit machine... Smiley)

Thanks for the report.  I saw the error too on Linux but not consistently.  I guess it did not find enough contiguous process address space or hit some system limit.

I think a reasonable approach would be to implement Python's sequence protocol to replace mmap() in Abe/BCDataStream.py.  The implementation may be read-only and need not support negative indices, but it must support slices.

I would try to keep using mmap() underneath but with length and offset specifying just a chunk of the file.  I would have some bookkeeping around chunk boundaries to keep the interface compatible.  Using read() could work, too.

Such a fix should interest Gavin for use in BitcoinTools if he's not already working on it, but he may need the array writable.  I do not anticipate having time to work on this in the next couple of weeks.
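
To make the idea concrete, here is a rough, untested sketch along those lines (the class name and chunk size are made up, and this is not necessarily what Abe will do): a read-only object that supports len() and slicing, maps only one chunk of the file at a time, and falls back to plain read() when a request straddles the mapped chunk.

Code:
import mmap, os

class ChunkedFile(object):
    CHUNK = 64 * 1024 * 1024          # map 64 MiB at a time

    def __init__(self, file):
        self.file = file
        self.size = os.fstat(file.fileno()).st_size
        self.start = 0
        self.map = None

    def _remap(self, offset):
        # Map the chunk containing offset; the offset passed to mmap must be
        # a multiple of the allocation granularity.
        if self.map is not None:
            self.map.close()
        self.start = offset - offset % mmap.ALLOCATIONGRANULARITY
        length = min(self.CHUNK, self.size - self.start)
        self.map = mmap.mmap(self.file.fileno(), length,
                             offset=self.start, access=mmap.ACCESS_READ)

    def __len__(self):
        return self.size

    def __getitem__(self, key):
        # Read-only; no negative indices; slices supported as required above.
        if isinstance(key, slice):
            start = key.start or 0
            stop = self.size if key.stop is None else key.stop
        else:
            start, stop = key, key + 1
        if self.map is None or start < self.start or stop > self.start + len(self.map):
            self._remap(start)
        if stop > self.start + len(self.map):
            # Request straddles the chunk boundary: fall back to seek()/read().
            self.file.seek(start)
            data = self.file.read(stop - start)
        else:
            data = self.map[start - self.start:stop - self.start]
        return data if isinstance(key, slice) else data[0]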
donator
Activity: 289
Merit: 250
Before I went to upgrade, I found that Abe had stopped working. My block file just exceeded 1GB in size a couple of hours ago, which is causing this error:

Quote
  File "Abe\BCDataStream.py", line 27, in map_file
    self.input = mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ)
WindowsError: [Error 8] Not enough storage is available to process this command

This is because I'm running a 32-bit system. Apparently this issue will eventually affect 32-bit Linux systems as well because of the way mmap works, except that there it will occur when the block file hits 2 or 3 GB in size. Is Abe particularly tied to mmap? If it is, it seems like it will soon be (or is now) for 64-bit systems only.

Interesting.  I don't think it will affect Linux, because bitcoin starts a new file (blk0002.dat) before 2GB:

Just wanted to add: this mmap() issue is a problem on my 32-bit Linux system as well, with a block file size of 1.8GB. I'll try changing the bitcoind file size limit and then try again. (Or maybe I should just dump the ancient 32-bit machine... Smiley)
hero member
Activity: 938
Merit: 1000
Midskes: thanks for the report.  I've fixed the new installation bugs and tested on MySQL.


Tested and confirmed working, thanks for the quick patch.
hero member
Activity: 481
Merit: 529
Midskes: thanks for the report.  I've fixed the new installation bugs and tested on MySQL.
hero member
Activity: 938
Merit: 1000
I think there might be a bug when doing a clean Abe installation using the latest master (with the firstbits code). I don't know enough Python to supply you with a patch, but the following happens.

Code:

~/bin$ ./abe --config ../abe.conf
ddl_implicit_commit=true
create_table_epilogue=''
/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py:424: Warning: Converting column 'a' from VARCHAR to TEXT
  store.cursor.execute(stmt)
/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py:424: Warning: Converting column 'b' from VARCHAR to TEXT
  store.cursor.execute(stmt)
max_varchar=4294967295
clob_type=LONGTEXT
binary_type=hex
int_type=int
Created silly table abe_dual
sequence_type=mysql
limit_style=native
Traceback (most recent call last):
  File "./abe", line 2010, in
    sys.exit(main(sys.argv[1:]))
  File "./abe", line 2004, in main
    store = make_store(args)
  File "./abe", line 115, in make_store
    store = DataStore.new(args)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2600, in new
    return DataStore(args)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 148, in __init__
    store.initialize()
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 1100, in initialize
    config['use_firstbits'] = "false"
NameError: global name 'config' is not defined


When you try running it again you get the following error:

Code:
ddl_implicit_commit=true
create_table_epilogue=''
/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py:424: Warning: Converting column 'a' from VARCHAR to TEXT
  store.cursor.execute(stmt)
/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py:424: Warning: Converting column 'b' from VARCHAR to TEXT
  store.cursor.execute(stmt)
max_varchar=4294967295
clob_type=LONGTEXT
binary_type=hex
int_type=int
sequence_type=mysql
limit_style=native
Failed: CREATE TABLE configvar (
    configvar_name  VARCHAR(100) NOT NULL PRIMARY KEY,
    configvar_value VARCHAR(255)
)
Traceback (most recent call last):
  File "./abe", line 2010, in
    sys.exit(main(sys.argv[1:]))
  File "./abe", line 2004, in main
    store = make_store(args)
  File "./abe", line 115, in make_store
    store = DataStore.new(args)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 2600, in new
    return DataStore(args)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 148, in __init__
    store.initialize()
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 1041, in initialize
    store.ddl(stmt)
  File "/usr/local/lib/python2.7/dist-packages/Abe/DataStore.py", line 424, in ddl
    store.cursor.execute(stmt)
  File "/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
    self.errorhandler(self, exc, value)
  File "/usr/local/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (1050, "Table 'configvar' already exists")

Rolling back to 0b7f464b52454555c69a6b77b4ad0cf453a2cf48 fixed it for me. Using MySQL as the data store.
hero member
Activity: 481
Merit: 529
Update on Firstbits before I go back into hibernation.

I do not trust the code to produce the same results as firstbits.com or blockchain.info.  I trust it to provide nice short links such as http://abe.bit/a/14c59f, at least until I change the algorithm and a few of the links change their redirect target.

If you care about agreement with other implementations, which you should if you want to give people your firstbits address to send coins to, I see two approaches.

1. Get agreement on a standard, and wait for everyone to declare adherence to it.  See the firstbits thread: https://bitcointalksearch.org/topic/m.960077

or, 2. Reverse-engineer the implementation of your choice.  See how it handles all extant script types, document the algorithm, and press implementers to confirm, correct, propose modifications to, or implement it.

The demo site http://abe.john-edwin-tobey.org/ is running the Firstbits code, but I will have to turn it off again if it gets a lot of traffic.
hero member
Activity: 481
Merit: 529
I've committed Firstbits calculation to master.  There is not yet an API or UI, but if you upgrade and run with "use-firstbits=true" in config, Abe will create and maintain the abe_firstbits table as described here: https://github.com/jtobey/bitcoin-abe/blob/master/README-FIRSTBITS.txt

If you run without use-firstbits, Abe will default it to false and will never create the table.  I'd like to have a script that turns firstbits on and off, but for now the best you can do is to stop Abe and run these (UNTESTED) SQL commands, once you have configured use-firstbits=true:

Code:
DELETE FROM configvar WHERE configvar_name = 'use_firstbits' AND configvar_value <> 'true';
UPDATE configvar SET configvar_value = 'Abe29.3' WHERE configvar_name = 'schema_version' AND configvar_value = 'Abe30';

I tried a few dozen addresses; they match firstbits.com.  Please report issues here or by email, PM, or the GitHub issue system, since I will not spend much time testing.

Edit:

It seems the last available 4-character firstbits appeared in block 160686: '1u7c'.  I'd appreciate it if another implementer could confirm this.
Code:
abe=> select max(cc.block_height) from chain_candidate cc join abe_firstbits fb using( block_id) where cc.chain_id = 1 and fb.address_version = '00' and cc.in_longest = 1 and length(fb.firstbits) < 5;
  max
--------
 160686
(1 row)

abe=> select fb.firstbits from chain_candidate cc join abe_firstbits fb using( block_id) where cc.chain_id = 1 and fb.address_version = '00' and cc.in_longest = 1 and length(fb.firstbits) < 5 and cc.block_height = 160686;
 firstbits
-----------
 1u7c
(1 row)

Twelve 4-character combinations can never be firstbits (according to http://firstbits.com/about.php) because they first appeared as a prefix of two different addresses in the same block.  Regardless, firstbits.com shows them, perhaps because the code still follows the original spec, "Ties in a block are by tx ID."  Any preference here?  I don't see a big problem with keeping the extra characters.

It's possible some addresses may never have a firstbits, because an earlier address differs only by upper versus lower case.  I think this would be "cryptographically hard" for two valid public keys, but someone could shadow a valid (or invalid) address with a black-hole address.  In that case, abe_firstbits.firstbits should hold the empty string.  I have not found a real example as of Block 169770, but I'm still finding firstbits on my weak VPS...
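
For anyone following along, a rough sketch of the rule under discussion (shortest prefix, compared case-insensitively, that no earlier confirmed address shares); illustrative only, not Abe's code and not necessarily how firstbits.com breaks ties:

Code:
def firstbits(address, earlier_addresses):
    # Shortest case-insensitive prefix of `address` not shared by any earlier
    # confirmed address.  Returns "" for a fully shadowed address, per the
    # empty-string convention described above.
    target = address.lower()
    earlier = [a.lower() for a in earlier_addresses]
    for n in range(1, len(target) + 1):
        prefix = target[:n]
        if not any(a.startswith(prefix) for a in earlier):
            return prefix
    return ""

# Example with made-up addresses:
# firstbits("1ExampleAddr", ["1Earlier", "1Ex"]) == "1exa"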
hero member
Activity: 481
Merit: 529
I'm very interested in seeing Firstbits support.

what exactly do you have in mind?

an api call "/q/firstbits/15ArtC" that returns "15ArtCgi3wmpQAAfYx4riaFmo4prJA4VsK"?

Yes. I'd prefer some SQL code, but I know that Abe doesn't store addresses in base58 form for now, so that'd be difficult to implement. What you describe is adequate.

I have committed some preliminary work on Git branch "firstbits".  It is not yet usable.  It purposely throws an error to avoid incrementing the schema version after it populates the abe_firstbits table, since it has no logic yet to maintain the data when new blocks arrive.  So don't use it. Smiley  I don't know if or when I will have time to finish this work, so I put it out there for intrepid developers to pick up.

My design would have firstbits support enabled by the --use-firstbits option, which is consulted only at install time or upgrade time.  The data looks like this:

Code:
mysql> select * from abe_firstbits order by pubkey_id limit 10;
+-----------+----------+-----------------+-----------+
| pubkey_id | block_id | address_version | firstbits |
+-----------+----------+-----------------+-----------+
|         1 |        1 | 00              | 1         |
|         3 |        3 | 00              | 12        |
|         4 |        4 | 00              | 1h        |
|         5 |        5 | 00              | 1f        |
|         6 |        6 | 00              | 15        |
|         7 |        7 | 00              | 1j        |
|         8 |        8 | 00              | 1g        |
|         9 |        9 | 00              | 16        |
|        10 |       10 | 00              | 1j6       |
|        11 |       11 | 00              | 12c       |
+-----------+----------+-----------------+-----------+
10 rows in set (0.00 sec)

I expect this to support two-way lookup, /chain/Bitcoin/q/firstbits_to_address and address_to_firstbits.  I plan to use it in address short links where enabled.  I expect the firstbits table to grow quite large; otherwise I'd enable it by default.  Omitting address_version (thus supporting only Bitcoin) would be a nice option, since it would reduce the space requirement.  Since Abe is fundamentally multi-coin, I don't plan to implement this optimization.  Perhaps it could be achieved using views or triggers.
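
As a sketch of what the two-way lookup might look like against that table (hypothetical helper names, exact-match semantics assumed, and sqlite3 used only for illustration; the stored firstbits above are all lower case):

Code:
import sqlite3   # any DB-API driver would do

def firstbits_to_pubkey_id(conn, fb, address_version='00'):
    # Hypothetical: find the pubkey whose recorded firstbits matches fb exactly.
    row = conn.execute(
        "SELECT pubkey_id FROM abe_firstbits"
        " WHERE firstbits = ? AND address_version = ?",
        (fb.lower(), address_version)).fetchone()
    return row[0] if row else None

def pubkey_id_to_firstbits(conn, pubkey_id, address_version='00'):
    # The reverse direction: the firstbits recorded for a given pubkey.
    row = conn.execute(
        "SELECT firstbits FROM abe_firstbits"
        " WHERE pubkey_id = ? AND address_version = ?",
        (pubkey_id, address_version)).fetchone()
    return row[0] if row else None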
legendary
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
So to convert that with the following formula: 1953408036 / 100000000 = 19.53408036 BTC

But when I look at the Blockchain it says there is a slightly higher total received balance: https://blockchain.info/address/1SoMGuYknDgyYypJPVVKE2teHBN4HDAh3
Apparently 19.53408036 was the total received as of Block 180750 and until Block 180779 (8 days ago).  You would see that number if your database were not up to date.  You can check it through the browser interface or with SQL:
Code:
SELECT MAX(block_height) FROM chain_candidate
 WHERE chain_id = 1 AND in_longest = 1;

The output should be the current block number, 181904 as of right now.  Check bitcoind and Abe's log if you get a lower number.


Oh.... yep, that was the issue; my database is only at block 180769.

Thanks mate!
hero member
Activity: 481
Merit: 529
So to convert that with the following formula: 1953408036 / 100000000 = 19.53408036 BTC

But when I look at the Blockchain it says there is a slightly higher total received balance: https://blockchain.info/address/1SoMGuYknDgyYypJPVVKE2teHBN4HDAh3
Apparently 19.53408036 was the total received as of Block 180750 and until Block 180779 (8 days ago).  You would see that number if your database were not up to date.  You can check it through the browser interface or with SQL:
Code:
SELECT MAX(block_height) FROM chain_candidate
 WHERE chain_id = 1 AND in_longest = 1;

The output should be the current block number, 181904 as of right now.  Check bitcoind and Abe's log if you get a lower number.
legendary
Activity: 980
Merit: 1003
I'm not just any shaman, I'm a Sha256man
What is wrong here? When I run the following SQL query on the Bitcoin-Abe database:

Quote
SELECT SUM(txout.txout_value) FROM pubkey
      JOIN txout ON txout.pubkey_id=pubkey.pubkey_id
      JOIN block_tx ON block_tx.tx_id=txout.tx_id
      JOIN block b ON b.block_id=block_tx.block_id
      JOIN chain_candidate cc ON cc.block_id=b.block_id
      WHERE
          pubkey.pubkey_hash = LOWER('04E116F6F1236ED1D0E40F03A20DE85E81D6C6DF') AND
          cc.chain_id = 1 AND
          cc.in_longest = 1

I get the value of: 1953408036

So to convert that with the following formula: 1953408036 / 100000000 = 19.53408036 BTC

But when I look at the Blockchain it says there is a slightly higher total received balance: https://blockchain.info/address/1SoMGuYknDgyYypJPVVKE2teHBN4HDAh3
hero member
Activity: 686
Merit: 500
Who is running Abe, and at which URLs??

Wanna see how it looks now, but can't find a working node.

Thanks

I have an Abe running; it's in my sig. I edited the main page, but you still get the idea of what it looks like.
hero member
Activity: 826
Merit: 500
Who is running Abe, and at which URLs??

Wanna see how it looks now, but can't find a working node.

Thanks