
Topic: [ANNOUNCE] Abe 0.7: Open Source Block Explorer Knockoff - page 16. (Read 221044 times)

donator
Activity: 2772
Merit: 1019
Anyone with some tips on how to daemonize the Abe webserver process??

You could just use a tool called "screen" (GNU Screen).

  • #> screen
  • #> python abe
  • Close the SSH session; Abe will keep running in the detached screen.
  • Log back in and use "screen -x" to reconnect to the detached screen.

donator
Activity: 2772
Merit: 1019
Thanks John,

Perhaps the HTTP request is triggering a catch-up which takes too long.  Have you tried separating the loader from the server?  One process runs Abe in an infinite loop passing --no-serve, and the web process uses --no-load (or datadir=[]).  Then the web requests will not wait for data to load.

What happens if there is a catch-up triggered by request A, then request B comes in?

That stack trace happens many times in a row, not just once.

I'm trying your suggestion now; it sounds promising.
hero member
Activity: 481
Merit: 529
New Abe feature: Standard Bitcoin multisig and pay-to-script-hash (P2SH) support is in the master branch, thanks to Jouke's generous sponsorship.  This old post describes what it means.  Upgrading could take a few minutes to over an hour on a fully loaded Bitcoin database, as Abe scans for output scripts not yet assigned an address.  Always back up your important data prior to upgrading.

Master also has the beginning of a test suite covering SQLite, MySQL, and PostgreSQL, which you can run by installing pytest and running py.test in the bitcoin-abe directory.  Testing with MySQL and PostgreSQL requires those databases' respective instance-creation tools.  Specify ABE_TEST=quick or ABE_TEST_DB=sqlite in the process environment to test only with a (much faster) SQLite in-memory database.  The tests cover block, tx, and address pages prior to HTML rendering.
hero member
Activity: 481
Merit: 529
I successfully made a working explorer for Trollcoin. One question remains, however:

1: I have a process for loading the blockchain
2: I have a process for serving the HTML pages

Process 1 is now handled by a cron job.
Process 2 only runs as long as I keep an SSH session to the server open.

Anyone with some tips on how to daemonize the Abe webserver process??
I tried to find information with Google, but this seems to be the hard part!

Search for "upstart" or "daemontools", or you could follow Abe's FastCGI instructions and use a regular web server.

Edit: For keeping an SSH tunnel open, I used to use daemontools, but I think upstart is more usable and standard nowadays, at least on Linux.
legendary
Activity: 1080
Merit: 1055
DEV of DeepOnion community pool
I successfully made a working explorer for Trollcoin. One question remains, however:

1: I have a process for loading the blockchain
2: I have a process for serving the HTML pages

Process 1 is now handled by a cron job.
Process 2 only runs as long as I keep an SSH session to the server open.

Anyone with some tips on how to daemonize the Abe webserver process??
I tried to find information with Google, but this seems to be the hard part!
hero member
Activity: 481
Merit: 529
This is the error


Exception happened during processing of request from ('178.63.69.203', 49828)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 704, in finish
    self.wfile.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe


(I think nginx (load-balancing here) or the client times out because it's taking too long)

No Abe stack frames here, so I don't have much to go by.

on AuroraCoind debug.log I see this:


 22478  ThreadRPCServer method=getrawtransaction
 22479  ThreadRPCServer method=getrawtransaction
 22480  ThreadRPCServer method=getrawtransaction
 [... identical lines elided ...]
 22504  ThreadRPCServer method=getrawtransaction


about 500 per second

psql> select * from pg_stat_activity;

reports only one IDLE connection


Perhaps the HTTP request is triggering a catch-up which takes too long.  Have you tried separating the loader from the server?  One process runs Abe in an infinite loop passing --no-serve, and the web process uses --no-load (or datadir=[]).  Then the web requests will not wait for data to load.
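The loader/server split suggested here could be supervised with a small loop along these lines. This is only a sketch: the --no-serve and --no-load flags come from the post above, but the exact Abe invocation is an assumption.

```python
import subprocess
import time

def run_loader_loop(cmd, iterations=None, delay=1.0):
    """Re-run the loader command in a loop (forever when iterations is
    None), so the web process never blocks on a catch-up."""
    count = 0
    while iterations is None or count < iterations:
        subprocess.run(cmd, check=False)  # one catch-up pass
        count += 1
        time.sleep(delay)
    return count

# Hypothetical invocations -- adjust to your own Abe entry point:
#   loader process:  run_loader_loop(["python", "abe", "--no-serve"])
#   web process:     python abe --no-load    (serves HTTP only)
```

With this split, a slow catch-up only delays the loader loop; web requests read whatever the database already holds.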
donator
Activity: 2772
Merit: 1019
I have a question about mempool transactions regarding performance:

So I'm running http://blockexplorer.auroracoin.eu, and because I have allocated quite a machine to the task, everything has been snappy and fine this morning.

However, when I checked back from work an hour later, I saw loads of exceptions saying "error: [Errno 32] Broken pipe", with the nginx I have in front reporting gateway timeouts. I'm hypothesizing the db queries are the bottleneck.

I tried rebuilding the database (dropping it completely and rebuilding)... that didn't help; it started again right away.

There are loads of mempool transactions in Auroracoin because we're being pool-hopping attacked, to the point where there hasn't been a block for six hours or so.

Another instance I run with the same setup, but on quite a weak machine (a VM), was able to cope with the load quite well and didn't suffer broken pipes.

What fixed it was to re-initialize BOTH the blockchain of the AuroraCoind AND the db (just the db didn't help).

Now the question: how are mempool transactions handled, and could the existence of many mempool transactions have a considerable impact on db (or abe.py) performance?

I'm a bit confused... does anyone have an idea what could've been causing this?

It could be transactions being left open, suboptimal SQL, or who knows what.  If you have a collection of Python stack traces or database process lists from during the timeouts, they might point to the offender(s).


This is the error


Exception happened during processing of request from ('178.63.69.203', 49828)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 704, in finish
    self.wfile.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe


(I think nginx (load-balancing here) or the client times out because it's taking too long)

on AuroraCoind debug.log I see this:


 22478  ThreadRPCServer method=getrawtransaction
 22479  ThreadRPCServer method=getrawtransaction
 22480  ThreadRPCServer method=getrawtransaction
 [... identical lines elided ...]
 22504  ThreadRPCServer method=getrawtransaction


about 500 per second

psql> select * from pg_stat_activity;

reports only one IDLE connection
hero member
Activity: 481
Merit: 529
Thanks to Sebastian, Abe has a new public demo featuring Bitcoin and Litecoin: http://bcv.coinwallet.pl/
legendary
Activity: 1120
Merit: 1003
twet.ch/inv/62d7ae96
anyone know the fix to the decimal point being off by 100?  Everything is correct except it says 2301 instead of 23.01, for example.
see https://bitcointalksearch.org/topic/m.4601816

Thank you, fixed problem.
legendary
Activity: 1792
Merit: 1008
/dev/null
anyone know the fix to the decimal point being off by 100?  Everything is correct except it says 2301 instead of 23.01, for example.
see https://bitcointalksearch.org/topic/m.4601816
legendary
Activity: 1120
Merit: 1003
twet.ch/inv/62d7ae96
anyone know the fix to the decimal point being off by 100?  Everything is correct except it says 2301 instead of 23.01, for example.
hero member
Activity: 481
Merit: 529
What is, in your experience, the fastest db to use with Abe?

SQLite with connect-args=":memory:" or (according to K1773R) something entirely in RAM (tmpfs).

Honestly, Abe could be much better optimized for both speed and space.  Data could be further denormalized.  Unindexed data such as scripts, and even the non-initial bytes of hashes, could be read from the blockfile or RPC.  If you want speed, I suggest you compare (the now open source) BlockExplorer or other such projects.
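As a rough illustration of why the in-memory option is fast: ":memory:" keeps the whole SQLite database in RAM, with no disk I/O at all. This is plain stdlib sqlite3, not Abe's own connect-args plumbing; the table here is a made-up example.

```python
import sqlite3

# An entirely in-RAM database -- vanishes when the connection closes.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE block (block_id INTEGER PRIMARY KEY, block_hash TEXT)"
)
conn.executemany(
    "INSERT INTO block (block_hash) VALUES (?)",
    [("hash%d" % i,) for i in range(1000)],
)
count, = conn.execute("SELECT COUNT(*) FROM block").fetchone()
print(count)  # 1000
```

A tmpfs-backed database file gives a similar effect while surviving process restarts (though not reboots).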
donator
Activity: 543
Merit: 500
What is, in your experience, the fastest db to use with Abe?
hero member
Activity: 481
Merit: 529
I have a question about mempool transactions regarding performance:

So I'm running http://blockexplorer.auroracoin.eu, and because I have allocated quite a machine to the task, everything has been snappy and fine this morning.

However, when I checked back from work an hour later, I saw loads of exceptions saying "error: [Errno 32] Broken pipe", with the nginx I have in front reporting gateway timeouts. I'm hypothesizing the db queries are the bottleneck.

I tried rebuilding the database (dropping it completely and rebuilding)... that didn't help; it started again right away.

There are loads of mempool transactions in Auroracoin because we're being pool-hopping attacked, to the point where there hasn't been a block for six hours or so.

Another instance I run with the same setup, but on quite a weak machine (a VM), was able to cope with the load quite well and didn't suffer broken pipes.

What fixed it was to re-initialize BOTH the blockchain of the AuroraCoind AND the db (just the db didn't help).

Now the question: how are mempool transactions handled, and could the existence of many mempool transactions have a considerable impact on db (or abe.py) performance?

I'm a bit confused... does anyone have an idea what could've been causing this?

It could be transactions being left open, suboptimal SQL, or who knows what.  If you have a collection of Python stack traces or database process lists from during the timeouts, they might point to the offender(s).
donator
Activity: 2772
Merit: 1019
I have a question about mempool transactions regarding performance:

So I'm running http://blockexplorer.auroracoin.eu, and because I have allocated quite a machine to the task, everything has been snappy and fine this morning.

However, when I checked back from work an hour later, I saw loads of exceptions saying "error: [Errno 32] Broken pipe", with the nginx I have in front reporting gateway timeouts. I'm hypothesizing the db queries are the bottleneck.

I tried rebuilding the database (dropping it completely and rebuilding)... that didn't help; it started again right away.

There are loads of mempool transactions in Auroracoin because we're being pool-hopping attacked, to the point where there hasn't been a block for six hours or so.

Another instance I run with the same setup, but on quite a weak machine (a VM), was able to cope with the load quite well and didn't suffer broken pipes.

What fixed it was to re-initialize BOTH the blockchain of the AuroraCoind AND the db (just the db didn't help).

Now the question: how are mempool transactions handled, and could the existence of many mempool transactions have a considerable impact on db (or abe.py) performance?

I'm a bit confused... does anyone have an idea what could've been causing this?
hero member
Activity: 481
Merit: 529
This is how I set up the class...

Code:
class HiroCoin(LtcScryptChain):
    def __init__(chain, **kwargs):
        chain.name = 'HiroCoin'
        chain.code3 = 'HIC'
        chain.address_version = "\x28"
        chain.magic = "\xfe\xc3\xb9\xde"
        Chain.__init__(chain, **kwargs)

    datadir_conf_file_name = "hirocoin.conf"
    datadir_rpcport = 9347

EDIT: This may or may not help, but all the blocks past block 0 are in the orphan_block table.
Checking the code suggests this is because it cannot match the previous block's hash correctly.


Please see: https://bitcointalksearch.org/topic/m.5785774 (and note that "policy":"NovaCoin" has an effect similar to the proposed "hashPrev":"scrypt").

If the algorithm is neither Bitcoin's nor NovaCoin's, we have to override block_header_hash in the new class:
Code:
class Sha256Chain(Chain):
    def block_header_hash(chain, header):
        return util.double_sha256(header)
Code:
class LtcScryptChain(Chain):
    def block_header_hash(chain, header):
        import ltc_scrypt
        return ltc_scrypt.getPoWHash(header)
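For reference, util.double_sha256 is just SHA-256 applied twice to the raw header bytes; a minimal standalone version is below (ltc_scrypt.getPoWHash needs the C extension, so it is not reproduced here).

```python
import hashlib

def double_sha256(data):
    """Bitcoin-style block header hash: SHA-256 of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# An 80-byte header hashes to a 32-byte digest; explorers usually
# display it byte-reversed as a hex string.
digest = double_sha256(b"\x00" * 80)
print(len(digest))  # 32
```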
hero member
Activity: 481
Merit: 529
I haven't kept up with the development of Abe, but is P2SH/Multisig supported now?

Apart from an issue affecting Namecoin and the question of whether to include "escrow" outputs in address balances, yes, on the multisig branch, thanks to Jouke's sponsorship.
hero member
Activity: 821
Merit: 1000
This is how I set up the class...

Code:
class HiroCoin(LtcScryptChain):
    def __init__(chain, **kwargs):
        chain.name = 'HiroCoin'
        chain.code3 = 'HIC'
        chain.address_version = "\x28"
        chain.magic = "\xfe\xc3\xb9\xde"
        Chain.__init__(chain, **kwargs)

    datadir_conf_file_name = "hirocoin.conf"
    datadir_rpcport = 9347

EDIT: This may or may not help, but all the blocks past block 0 are in the orphan_block table.
Checking the code suggests this is because it cannot match the previous block's hash correctly.
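A plausible explanation for the everything-orphaned symptom (my assumption, not confirmed by the thread): the loader links each new block to its parent by comparing the hashPrev field in the header against the hash it computed for earlier blocks, so if block_header_hash uses the wrong algorithm, no parent ever matches and every block after genesis is filed as an orphan. A toy sketch of that linkage check, using the standard 80-byte header layout:

```python
import hashlib
import struct

def double_sha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def wrong_hash(data):
    # Stand-in for picking the wrong header-hash algorithm.
    return hashlib.sha1(data).digest() + b"\x00" * 12

def make_header(prev_hash):
    # version(4) + prev_hash(32) + merkle(32) + time(4) + bits(4) + nonce(4)
    return struct.pack("<I", 2) + prev_hash + b"\x00" * 44

genesis = make_header(b"\x00" * 32)
child = make_header(double_sha256(genesis))

# Correct algorithm: the child's hashPrev matches the parent's hash.
print(child[4:36] == double_sha256(genesis))  # True
# Wrong algorithm: never matches, so the child looks like an orphan.
print(wrong_hash(genesis) == double_sha256(genesis))  # False
```

If HiroCoin's header hash is not plain double SHA-256, overriding block_header_hash as shown earlier in the thread would be the fix.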