
Topic: [ANNOUNCE] Abe 0.7: Open Source Block Explorer Knockoff - page 39. (Read 220986 times)

sr. member
Activity: 459
Merit: 250
I am curious about Abe's console output.  Have you tried loading the currency to a fresh database with no other datadirs configured?  I'm curious what Abe prints.

I haven't tried that yet but I can...  I'll get a second import running to a new db while the existing one continues.
hero member
Activity: 481
Merit: 529
I don't understand the part about "and then bring abe up to date".  I'm assuming it means import everything, so I'm going ahead with that now.

Yes.  Just to be sure the hash does not refer to a block received since Abe last read the data.

Not sure if this helps or matters but from reading the forums through Google's translator it seems like changes were made recently.  The original client / network (if I read correctly) got hit with a 51% attack.  A new client was released a few days ago with fixes for that.  I don't know if it affects the block chain or not (beyond my knowledge of block chains).

Possible I guess.

Also, if helpful, the client was compiled from sources pulled from https://github.com/alexhz/MMMCoin_public

I don't know if I'll have time to build it, but thanks.

I am curious about Abe's console output.  Have you tried loading the currency to a fresh database with no other datadirs configured?  I'm curious what Abe prints.
sr. member
Activity: 459
Merit: 250
Not sure if this helps or matters but from reading the forums through Google's translator it seems like changes were made recently.  The original client / network (if I read correctly) got hit with a 51% attack.  A new client was released a few days ago with fixes for that.  I don't know if it affects the block chain or not (beyond my knowledge of block chains).

Also, if helpful, the client was compiled from sources pulled from https://github.com/alexhz/MMMCoin_public
sr. member
Activity: 459
Merit: 250
The first occurrence of the best= line (when I first launched the new client):
Code:
Loading block index...
Genesis block time: 1325369107
Genesis block bits: 1e0fffff
Genesis block nonce: 386206
Genesis block hash: 00000f2e9f8ffa3abc79a4dcbff01dcfd10f480e897ebcd50cb0c37afbe57d51
Genesis merkle root: 7dee73b5c259a6e2eb06482b289af5cb7f24476310fc80081667baf8f2f27c5b
SetBestChain: new best=00000f2e9f8ffa3abc79a4dcbff01dcfd10f480e897ebcd50cb0c37afbe57d51  height=0  work=1048577
 block index              83ms

The last (or current) is:
Code:
Flushed wallet.dat 24ms
askfor block 000003b4ef02c872071895b03082326f807ee202bdfc33619c3b3122620c2188   0
sending getdata: block 000003b4ef02c872071895b03082326f807ee202bdfc33619c3b3122620c2188
askfor block 000003b4ef02c872071895b03082326f807ee202bdfc33619c3b3122620c2188   1331918649000000
received block 000003b4ef02c872071895b03082326f807ee202bdfc33619c3b3122620c2188
CalculateChainWork() : New trusted block at height 93363. Last work: 9233139665732, new work: 253336966, combined: 9233393002698
SetBestChain: new best=000003b4ef02c872071895b03082326f807ee202bdfc33619c3b3122620c2188  height=93363  work=9233393002698
ProcessBlock: ACCEPTED
03/16/12 17:24:12 Flushing wallet.dat
Flushed wallet.dat 23ms

I don't understand the part about "and then bring abe up to date".  I'm assuming it means import everything, so I'm going ahead with that now.
sr. member
Activity: 459
Merit: 250
By "runs fine" I mean:

Code:
block_tx 1919268 4979880
commit
block_tx 1919269 4979881
commit
block_tx 1919270 4979882
commit
block_tx 1919271 4979883
commit

I didn't see any errors.  The first line was about the new chain and setting a new chain ID.

I hit Ctrl-C after about 100 blocks or so when I noticed the website wasn't updating properly.

I'll see what I can find for the hash block and will reply.  Give me a few.

Thanks.
hero member
Activity: 481
Merit: 529
I had to set up a new box to run alternate currencies because the existing box is getting too heavily loaded.

Thanks for providing this!

Importing this new chain to the same MySQL db running on a 3rd box looks like it runs fine from the prompt, but when looking at it from the web interface, it shows 0 blocks.  Clicking on block 0 shows a block but no links to the rest...

By "runs fine" do you mean it logs a lot of block_tx records in the new chain?  Are you comfortable issuing SQL queries?  I'd like to know what is in the database.

If you have a block hash that should have been loaded, try finding its block_id.  You can do this with the partial hash shown in ~/.bitcoin/debug.log (or MMMCoin equivalent) as "best=...".  For example, if you see best=000000000661131faf72 and then bring Abe up to date, try:
Code:
SELECT block_id FROM block WHERE block_hash LIKE '000000000661131faf72%';

If this gives a block_id, for example 12165, see whether the block_id is attached to the right chain:
Code:
SELECT cc.*, c.chain_name FROM chain_candidate cc JOIN chain c USING (chain_id) WHERE block_id = 12165;

I may need the first few blocks (or say 10kB) of MMM's blk0001.dat to figure this out.  I don't know anything about the new chain, and it may do something unusual that would prevent Abe from loading it.
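For anyone wanting to send along a sample, a minimal sketch for copying the first 10 kB of a block file (the paths and the `sample_blkfile` helper are hypothetical; point them at your MMMCoin datadir):
Code:
```python
# Sketch: copy the first 10 kB of a chain's block file for inspection.
# The helper name and paths are placeholders, not part of Abe.
import os

def sample_blkfile(src_path, dst_path, nbytes=10240):
    """Copy the first nbytes of a blk0001.dat-style file and
    return how many bytes were written."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        dst.write(src.read(nbytes))
    return os.path.getsize(dst_path)
```
A hex dump of the sample is often enough to spot format differences, since each record starts with the chain's magic number.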
sr. member
Activity: 459
Merit: 250
I have a new question / problem for you.

I had to set up a new box to run alternate currencies because the existing box is getting too heavily loaded.

New box set up with the same OS, with Abe and the new alternate currency MMMCoin loaded.

Importing this new chain to the same MySQL db running on a 3rd box looks like it runs fine from the prompt, but when looking at it from the web interface, it shows 0 blocks.  Clicking on block 0 shows a block but no links to the rest...

Is this a problem with the new chain or something with the new box I just set up?

Abe site - http://blockexplorer.funkymonkey.org
hero member
Activity: 481
Merit: 529
Before I went to upgrade, I found that Abe had stopped working.  My block file just exceeded 1GB in size a couple of hours ago, which causes this error:

Quote
Failed to catch up {'blkfile_number': 1, 'dirname': 'C:\\Documents and Settings\
\Jon\\Application Data\\Bitcoin', 'chain_id': None, 'id': 1L, 'blkfile_offset':
0}
Traceback (most recent call last):
  File "Abe\DataStore.py", line 2169, in catch_up
    store.catch_up_dir(dircfg)
  File "Abe\DataStore.py", line 2184, in catch_up_dir
    ds = open_blkfile()
  File "Abe\DataStore.py", line 2180, in open_blkfile
    ds.map_file(open(filename, "rb"), 0)
  File "Abe\BCDataStream.py", line 27, in map_file
    self.input = mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ)
WindowsError: [Error 8] Not enough storage is available to process this command

It is because I'm running a 32-bit system.  Apparently this issue will eventually affect 32-bit Linux systems as well because of the way mmap works, except that it will occur when the block file hits 2 or 3 GB in size.  Is Abe particularly tied to mmap?  If it is, it seems like it will soon be (or already is) for 64-bit systems only.

Interesting.  I don't think it will affect Linux, because bitcoin starts a new file (blk0002.dat) before 2GB:
Code:
        // FAT32 filesize max 4GB, fseek and ftell max 2GB, so we must stay under 2GB
        if (ftell(file) < 0x7F000000 - MAX_SIZE)
        {
            nFileRet = nCurrentBlockFile;
            return file;
        }

No mention of mmap, but I think a change from 0x7F000000 to 0x3F000000 is worth proposing, or at least making it a compile-time option:
Code:
diff --git a/src/main.cpp b/src/main.cpp
index 9951952..2d838b7 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -1773,6 +1773,10 @@ FILE* OpenBlockFile(unsigned int nFile, unsigned int nBlockPos, const char* pszM
 
 static unsigned int nCurrentBlockFile = 1;
 
+#ifndef BITCOIN_MAX_BLKFILE_SIZE
+#define BITCOIN_MAX_BLKFILE_SIZE 0x7F000000
+#endif
+
 FILE* AppendBlockFile(unsigned int& nFileRet)
 {
     nFileRet = 0;
@@ -1784,7 +1788,7 @@ FILE* AppendBlockFile(unsigned int& nFileRet)
         if (fseek(file, 0, SEEK_END) != 0)
             return NULL;
         // FAT32 filesize max 4GB, fseek and ftell max 2GB, so we must stay under 2GB
-        if (ftell(file) < 0x7F000000 - MAX_SIZE)
+        if (ftell(file) < BITCOIN_MAX_BLKFILE_SIZE - MAX_SIZE)
         {
             nFileRet = nCurrentBlockFile;
             return file;

Abe inherits the use of mmap() from Bitcointools, so if you can verify the bug exists there, we might get Gavin interested in fixing it.
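If mmap'ing the whole file turns out to be the culprit, another direction (a sketch only; `map_window` is a hypothetical helper, not Abe's or Bitcointools' actual code) is to map a bounded window at a time:
Code:
```python
# Sketch: map only a window of a block file instead of the whole
# thing, so a multi-GB blkfile fits in a 32-bit address space.
import mmap
import os

def map_window(f, offset, length):
    """Return (buf, start) where buf[start:start+length] is the
    requested slice.  mmap offsets must be multiples of
    ALLOCATIONGRANULARITY, so round down and compensate."""
    gran = mmap.ALLOCATIONGRANULARITY
    base = (offset // gran) * gran
    span = min((offset - base) + length,
               os.fstat(f.fileno()).st_size - base)
    buf = mmap.mmap(f.fileno(), span, access=mmap.ACCESS_READ,
                    offset=base)
    return buf, offset - base
```
The caller reads buf[start:start+length] and closes the map before moving on, so the mapped region stays small no matter how big the file grows.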
hero member
Activity: 481
Merit: 529
fyi: had this
  https://bitcointalksearch.org/topic/m.716310
problem again, which is explained here
  https://bitcointalksearch.org/topic/m.718444
and fixed it using this
  https://bitcointalksearch.org/topic/m.718481
solution.
Strange.  I'll try to debug it if you give me steps to reproduce it.  (Steps could include "git clone" and building Litecoin.)
donator
Activity: 2772
Merit: 1019
fyi: had this
  https://bitcointalksearch.org/topic/m.716310
problem again, which is explained here
  https://bitcointalksearch.org/topic/m.718444
and fixed it using this
  https://bitcointalksearch.org/topic/m.718481
solution.
sr. member
Activity: 249
Merit: 251
Before I went to upgrade, I found that Abe had stopped working.  My block file just exceeded 1GB in size a couple of hours ago, which causes this error:

Quote
Failed to catch up {'blkfile_number': 1, 'dirname': 'C:\\Documents and Settings\
\Jon\\Application Data\\Bitcoin', 'chain_id': None, 'id': 1L, 'blkfile_offset':
0}
Traceback (most recent call last):
  File "Abe\DataStore.py", line 2169, in catch_up
    store.catch_up_dir(dircfg)
  File "Abe\DataStore.py", line 2184, in catch_up_dir
    ds = open_blkfile()
  File "Abe\DataStore.py", line 2180, in open_blkfile
    ds.map_file(open(filename, "rb"), 0)
  File "Abe\BCDataStream.py", line 27, in map_file
    self.input = mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ)
WindowsError: [Error 8] Not enough storage is available to process this command

It is because I'm running a 32-bit system.  Apparently this issue will eventually affect 32-bit Linux systems as well because of the way mmap works, except that it will occur when the block file hits 2 or 3 GB in size.  Is Abe particularly tied to mmap?  If it is, it seems like it will soon be (or already is) for 64-bit systems only.
hero member
Activity: 481
Merit: 529
Atheros:

Thanks for the report.  I fixed the sort() bug, made the limit configurable and committed the change to github as 945ed46829.

I have only 63938 blocks loaded in my MySQL database, and the longest address history (1XPTgDRhN8...) has 3207 rows, apparently too few for a good test.  That address loaded much faster with a low limit than with none.  However, MySQL "explain" showed it still having to scan and sort the whole history, so as you observed, it did not solve the problem.  Adding an index on block.block_nTime did not help, so I went BlockExplorer's route and now display no history if the limit is exceeded.  To show partial history efficiently might require significant table and index overhead.

Let me know if the latest commit does not solve the load problem.
sr. member
Activity: 249
Merit: 251
Thank you for your reply.
There was one bug in the code: it fails on line 1012, in handle_address:
rows.sort()
AttributeError: 'tuple' object has no attribute 'sort'

I couldn't quite figure out what to do to fix it so I took out the sort line. This obviously screwed up the way transactions for an address are displayed but the program didn't crash. Upon viewing the 1VayNert address, my SQL server has been working as fast as it can for the last ten minutes in a state of "Copying to tmp table". It appears to me that while Abe might do a better job of limiting the amount of information it tries to send to the browser thanks to this patch, the SQL server is still trying to pull it all.

I would like to get Abe working with FastCGI on my Windows machine but I'll have to wait until someone with more experience writes some documentation.
hero member
Activity: 481
Merit: 529
@Atheros

Good question.  This patch is not the most polished piece of code, but you may try it, editing ADDRESS_HISTORY_ROWS_MAX in Abe/abe.py to your liking:
Code:
diff --git a/Abe/abe.py b/Abe/abe.py
index 1982434..4811e01 100755
--- a/Abe/abe.py
+++ b/Abe/abe.py
@@ -92,6 +92,8 @@ HEIGHT_RE = re.compile('(?:0|[1-9][0-9]*)\\Z')
 HASH_PREFIX_RE = re.compile('[0-9a-fA-F]{0,64}\\Z')
 HASH_PREFIX_MIN = 6
 
+ADDRESS_HISTORY_ROWS_MAX = 500  # XXX hardcoded limit
+
 NETHASH_HEADER = """\
 blockNumber:          height of last block in interval + 1
 time:                 block time in seconds since 0h00 1 Jan 1970 UTC
@@ -930,12 +932,13 @@ class Abe:
             count[txpoint['is_in']] += 1
 
         txpoints = []
-        rows = []
-        rows += abe.store.selectall("""
+        max_rows = ADDRESS_HISTORY_ROWS_MAX
+        in_rows = abe.store.selectall("""
             SELECT
                 b.block_nTime,
                 cc.chain_id,
                 b.block_height,
+                block_tx.tx_pos,
                 1,
                 b.block_hash,
                 tx.tx_hash,
@@ -949,13 +952,22 @@ class Abe:
               JOIN txout prevout ON (txin.txout_id = prevout.txout_id)
               JOIN pubkey ON (pubkey.pubkey_id = prevout.pubkey_id)
              WHERE pubkey.pubkey_hash = ?
-               AND cc.in_longest = 1""",
-                      (dbhash,))
-        rows += abe.store.selectall("""
+               AND cc.in_longest = 1
+             ORDER BY
+                   b.block_nTime,
+                   cc.chain_id,
+                   b.block_height,
+                   block_tx.tx_pos,
+                   txin.txin_pos
+             LIMIT ?""",
+                      (dbhash, max_rows + 1))
+        truncated = in_rows[max_rows] if len(in_rows) > max_rows else None
+        out_rows = abe.store.selectall("""
             SELECT
                 b.block_nTime,
                 cc.chain_id,
                 b.block_height,
+                block_tx.tx_pos,
                 0,
                 b.block_hash,
                 tx.tx_hash,
@@ -968,11 +980,41 @@ class Abe:
               JOIN txout ON (txout.tx_id = tx.tx_id)
               JOIN pubkey ON (pubkey.pubkey_id = txout.pubkey_id)
              WHERE pubkey.pubkey_hash = ?
-               AND cc.in_longest = 1""",
-                      (dbhash,))
+               AND cc.in_longest = 1""" + ("""
+               AND (b.block_nTime < ? OR
+                    (b.block_nTime = ? AND
+                     (cc.chain_id < ? OR
+                      (cc.chain_id = ? AND
+                       (b.block_height < ? OR
+                        (b.block_height = ? AND
+                         block_tx.tx_pos <= ?))))))"""
+                                           if truncated else "") + """
+             ORDER BY
+                   b.block_nTime,
+                   cc.chain_id,
+                   b.block_height,
+                   block_tx.tx_pos,
+                   txout.txout_pos
+             LIMIT ?""",
+                      (dbhash,
+                       truncated[0], truncated[0], truncated[1], truncated[1],
+                       truncated[2], truncated[2], truncated[3], max_rows + 1)
+                      if truncated else
+                      (dbhash, max_rows + 1))
+
+        if len(out_rows) > max_rows:
+            if truncated is None or truncated > out_rows[max_rows]:
+                truncated = out_rows[max_rows]
+        if truncated:
+            truncated = truncated[0:3]
+
+        rows = in_rows + out_rows
         rows.sort()
         for row in rows:
-            nTime, chain_id, height, is_in, blk_hash, tx_hash, pos, value = row
+            if truncated and row >= truncated:
+                break
+            (nTime, chain_id, height, tx_pos,
+             is_in, blk_hash, tx_hash, pos, value) = row
             txpoint = {
                     "nTime":    int(nTime),
                     "chain_id": int(chain_id),
@@ -1008,7 +1050,11 @@ class Abe:
 
         body += abe.short_link(page, 'a/' + address[:10])
 
-        body += ['<p>Balance: '] + format_amounts(balance, True)
+        body += ['<p>']
+        if truncated:
+            body += ['Results truncated<p>']
+        else:
+            body += ['Balance: '] + format_amounts(balance, True)
 
         for chain_id in chain_ids:
             balance[chain_id] = 0  # Reset for history traversal.

The maximum applies to sends and receives independently, so for example, an address with 600 receives and 100 spends will show the first 500 receives and whichever of the spends occurred before the 501st receive.  I replace the balance with "Results truncated", since it is probably inaccurate, and it may never have been the right balance (may even be negative) due to out-of-order blocks and transactions.

Switching to FastCGI should give you a bit better parallelism, too, though FCGI mode may need fixes for Windows.  You could try Python's multithreaded socket server with the built-in HTTP server, but I won't be surprised if you hit bugs.

Let us know how it goes.
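The cutoff logic described above can be sketched in isolation (illustrative only; the real patch pushes the cutoff into the second SQL query rather than filtering in Python):
Code:
```python
# Sketch of the truncation strategy above: fetch max_rows+1 rows
# from each sorted stream, treat the overflow row as a cutoff key,
# merge, and stop at the cutoff.  Not Abe's actual code.

def merge_truncated(in_rows, out_rows, max_rows):
    """Merge two sorted row lists, truncating both consistently."""
    cutoff = None
    if len(in_rows) > max_rows:
        cutoff = in_rows[max_rows]
    if len(out_rows) > max_rows:
        if cutoff is None or cutoff > out_rows[max_rows]:
            cutoff = out_rows[max_rows]
    rows = sorted(in_rows + out_rows)
    if cutoff is not None:
        rows = [r for r in rows if r < cutoff]
    return rows, cutoff is not None
```
This is what produces the behavior described: with 600 receives and a limit of 500, only the spends that sort before the 501st receive survive the cutoff.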
sr. member
Activity: 249
Merit: 251
I have a feature request. I think it is simple but important.

(I'm running the newest version of Abe on Windows with MySQL. I'm using the Abe web server.)

While random clicking and exploring, I clicked the address 1VayNert3x1KzbpzMGt2qdqrAThiRovi8 which is used by some kind of bot for transactions and it has hundreds of thousands of transactions. It took the SQL server many minutes to process the query and then Abe spent half an hour processing and sending the data to the browser. During this time, Abe would not respond to other requests and the CPU usage on the server was at 100% which killed other things. I didn't let it finish; I gave up and killed it. On your installation and someone else's I tried, the server spits out an internal server error after 45 seconds or so. Neither of these behaviors seems acceptable.

I would like to tell people I am running Abe and let them use it but I can't tell people about it right now because if they click certain addresses, it effectively kills Abe and my server's other tasks. Could Abe do one of the following:

* Only list the first/last couple (thousand?) transactions and then say that the rest are truncated. Perhaps have a next button to go to the next page.
* Say that there are too many records and that the page cannot be displayed. Blockexplorer.com does this.

This is the only problem I've come across. Thank you for all of your hard work. You have made a great piece of software!

EDIT: I started digging through source code. Doesn't it seem like the 'limit' in this query should be limiting the output? Or am I looking in the wrong place?
Quote
            ret += filter(None, map(process, abe.store.selectall(
                "SELECT pubkey_hash FROM pubkey WHERE pubkey_hash" +
                # XXX hardcoded limit.
                neg + " BETWEEN ? AND ? LIMIT 100", (bl, bh))))
sr. member
Activity: 459
Merit: 250
That worked.  Smiley

Uncommented the line, removed bitcoin so it reads "Bitcoin Testnet" (which is how I've named it here), and started.

First line reads:
Ignored bit8 in version 0x558d394b of chain_id 20

The import blocks are listed after that.

Thanks!   Smiley
hero member
Activity: 481
Merit: 529
So I'm stuck at block 42156 on the testnet for now then?  It doesn't sound like a quick upgrade / change to Abe.  Smiley
I've committed a fix.  By default, only Bitcoin and Testnet are allowed to have any block version number.  If there are more such chains, uncomment ignore-bit8-chains and add them to the list in abe.conf.
hero member
Activity: 481
Merit: 529
The output from the SQL query is as follows:

(snipped)
Wow!  I guess everybody creates their own block version to go along with the magic number and address version.

When you say Namecoin-like merged mining, do you mean any chain that's compatible to be merged with Namecoin?  In this case, it would be namecoin, ixcoin, i0coin, geistgeld, coiledcoin, devcoin and soon groupcoin (not imported yet).
That list is probably right.  In practice, it's whoever started with a copy of Namecoin's merged-mining code and didn't change it too much.  Small chains try to be compatible with tools like Abe.  Abe tries to be compatible with big chains. Smiley

That would be values  1, 1048577, 1048833, 196609, 262145, 262401, 257, 131073, 131329, 196609, 196865, 65537, 65793; and then whatever groupcoin reports and any other future chains.  Is this correct?
That looks about right.

Would commenting out those lines to get the import running again, then un-commenting afterwards, be an option, or will every block from this point onwards have the problem?
I don't know.  It depends whether the Testnet block was a one-time test or that miner solved more blocks.  Strange things happen on Testnet.
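For what it's worth, those version values decode cleanly if you assume the Namecoin-style AuxPoW layout (an assumption: low byte = base version, bit 8 = the auxpow flag Abe's "Ignored bit8" message refers to, bits 16 and up = merged-mining chain ID):
Code:
```python
# Sketch: decode a merged-mining block version, assuming the
# Namecoin-style AuxPoW layout.  E.g. 65793 = 0x10101: base
# version 1, auxpow flag set, chain ID 1 (Namecoin).

def decode_version(v):
    return {
        "base":     v & 0xff,          # low byte: base block version
        "auxpow":   bool(v & 0x100),   # bit 8: AuxPoW flag
        "chain_id": v >> 16,           # high bits: chain ID
    }
```
Under that reading, Ixcoin's 196609 (0x30001) is chain ID 3 with no auxpow, and GeistGeld's 257 (0x101) is an auxpow block on chain ID 0.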
sr. member
Activity: 459
Merit: 250
The output from the SQL query is as follows:

Code:
chain_name	block_version
Bitchip 1
Bitcoin 1
Bitcoin Testnet 1
Coiledcoin 1
Coiledcoin 1048577
Coiledcoin 1048833
Devcoin 1
Devcoin 196609
Devcoin 262145
Devcoin 262401
Fairbrix 1
GeistGeld 1
GeistGeld 257
i0coin 1
i0coin 131073
i0coin 131329
Ixcoin 1
Ixcoin 196609
Ixcoin 196865
Liquidcoin 1
Litecoin 1
Namecoin 1
Namecoin 65537
Namecoin 65793
Rucoin 1
Rucoin 196609
Rucoin 458753
Tenebrix 1

When you say Namecoin-like merged mining, do you mean any chain that's compatible to be merged with Namecoin?  In this case, it would be namecoin, ixcoin, i0coin, geistgeld, coiledcoin, devcoin and soon groupcoin (not imported yet).

That would be values  1, 1048577, 1048833, 196609, 262145, 262401, 257, 131073, 131329, 196609, 196865, 65537, 65793; and then whatever groupcoin reports and any other future chains.  Is this correct?


Would commenting out those lines to get the import running again, then un-commenting afterwards, be an option, or will every block from this point onwards have the problem?
hero member
Activity: 481
Merit: 529
A less fragile workaround would be to run a separate Abe process for loading Testnet using --no-serve and a copy of the software with the lines commented out.  Your server process would not be able to add new testnet blocks at every request the way it does for other chains, and you would have to schedule the Testnet process to run every minute or so.  But you might prefer this, since it should survive any new block versions introduced by the chains.