
Topic: bitHopper: Python Pool Hopper Proxy - page 148. (Read 355689 times)

hero member
Activity: 504
Merit: 502
July 26, 2011, 10:33:19 AM
And if the pool (you) doesn't use a known broken payout algorithm, we (we) will maybe go there as a backup if you have a long round. Much better than getting f***ed every time you find a block!

I actually don't mind the extra hashrate when needed... even if it does cost us a few coins... we are small and can use the help... but it would be nice if you could stick around a little longer... haven't seen any hoppers around on a 5M block...

I don't really believe it's just after a new block; your server seems to simply not handle a lot of traffic.

Right now, even with just the default miner, I can't hold a steady connection for more than 2 minutes before it times out and retries. I've tested this from servers in the UK and the US.

So I hope this is a known issue and gets resolved, as you mentioned some update to the system is coming.
legendary
Activity: 1449
Merit: 1001
July 26, 2011, 10:30:46 AM
And if the pool (you) doesn't use a known broken payout algorithm, we (we) will maybe go there as a backup if you have a long round. Much better than getting f***ed every time you find a block!

I actually don't mind the extra hashrate when needed... even if it does cost us a few coins... we are small and can use the help... but it would be nice if you could stick around a little longer... haven't seen any hoppers around on a 5M block...
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 26, 2011, 10:25:54 AM
Yeh, I didn't mean that either. It must be a pain in the bum to have such a sudden hashrate increase and then drop down again. If you run a copy of bitHopper, though, at least you'll know when to expect us to come and go.

Don't need a program for that... 3 seconds after the block is announced we put on our armor...

Yes, but haven't you noticed us come and go when a bigger pool starts a block and then goes out of range? Didn't you notice your pool hashrate go up and down like a bride's nightie? Either use bitHopper or keep a weather eye out for new blocks started at other pools and when they go out of range.
legendary
Activity: 2618
Merit: 1007
July 26, 2011, 10:22:40 AM
And if the pool (you) doesn't use a known broken payout algorithm, we (we) will maybe go there as a backup if you have a long round. Much better than getting f***ed every time you find a block!
hero member
Activity: 504
Merit: 502
July 26, 2011, 10:14:46 AM
The pool is crap; it simply can't handle 80-100 Ghash/s...

to keep the hoppers away...

They have a 'crap pool' to keep hoppers away? I thought that would keep everyone away  Grin

So how's the infrastructure change coming - can we expect consistent connection soon?

Didn't say it was crap, just not ready for 100 GH... so when the hoppers (you) hop on, the system goes a bit crazy... so maybe it will keep hoppers (you) away, as it makes it not profitable.



We are trying (us) to make your pool bigger (you)
legendary
Activity: 1449
Merit: 1001
July 26, 2011, 10:13:59 AM
Yeh, I didn't mean that either. It must be a pain in the bum to have such a sudden hashrate increase and then drop down again. If you run a copy of bitHopper, though, at least you'll know when to expect us to come and go.

Don't need a program for that... 3 seconds after the block is announced we put on our armor...
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 26, 2011, 10:11:42 AM
Yeh, I didn't mean that either. It must be a pain in the bum to have such a sudden hashrate increase and then drop down again. If you run a copy of bitHopper, though, at least you'll know when to expect us to come and go.
legendary
Activity: 1449
Merit: 1001
July 26, 2011, 10:05:13 AM
The pool is crap; it simply can't handle 80-100 Ghash/s...

to keep the hoppers away...

They have a 'crap pool' to keep hoppers away? I thought that would keep everyone away  Grin

So how's the infrastructure change coming - can we expect consistent connection soon?

Didn't say it was crap, just not ready for 100 GH... so when the hoppers (you) hop on, the system goes a bit crazy... so maybe it will keep hoppers (you) away, as it makes it not profitable.

legendary
Activity: 1428
Merit: 1000
July 26, 2011, 09:53:05 AM
I'll add the stats graph this evening to c00w's version (still unsure if it's gonna be a complete client-side approach [meaning graph data is only collected while the browser is open] or if bitHopper itself sends its data [meaning more traffic, more load]).

(Yes, I changed the location, sorry. The link is http:///info)
sr. member
Activity: 476
Merit: 250
moOo
July 26, 2011, 09:50:56 AM
Quote
could you also add my pool-stats graph

Pic looks cool... did you change the addy to reach the webpage? I've tried all I can think of to reach it.


c00w... I get this when I start up yours... probably something I did... do you have an idea?

[09:50:43] Database Setup
Code:
Unhandled error in Deferred:
Unhandled Error
Traceback (most recent call last):
  File "D:\Users\joulesbeef\Desktop\c00w-bitHopper-july24th\bitHopper.py", line 425, in <module>
    main()
  File "D:\Users\joulesbeef\Desktop\c00w-bitHopper-july24th\bitHopper.py", line 420, in main
    stats_call.start(117*4)
  File "D:\Python27\lib\site-packages\twisted\internet\task.py", line 163, in start
    self()
  File "D:\Python27\lib\site-packages\twisted\internet\task.py", line 194, in __call__
    d = defer.maybeDeferred(self.f, *self.a, **self.kw)
--- <exception caught here> ---
  File "D:\Python27\lib\site-packages\twisted\internet\defer.py", line 133, in maybeDeferred
    result = f(*args, **kw)
  File "D:\Users\joulesbeef\Desktop\c00w-bitHopper-july24th\stats.py", line 106, in update_api_stats
    d = work.get(self.bitHopper.json_agent,info['user_api_address'])
exceptions.NameError: global name 'work' is not defined
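The NameError at the bottom of that traceback means stats.py calls into a `work` module it never imports, so the name lookup fails at call time rather than at startup. A minimal sketch of the failure mode (the names are taken from the traceback; the fix shown is an assumption, not a confirmed patch):

```python
# Minimal reproduction of the failure mode seen above: using a module
# that was never imported raises NameError when the call runs,
# not when the file is loaded.

def update_api_stats():
    # 'work' is never imported in this module's namespace
    return work.get("json_agent", "user_api_address")

try:
    update_api_stats()
except NameError as err:
    print("caught:", err)

# The likely fix is a one-line import at the top of stats.py:
# import work
```

Assuming the `work` module sits alongside stats.py in the bitHopper directory (as the traceback paths suggest), adding that import should resolve it.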
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 26, 2011, 09:32:42 AM
The pool is crap; it simply can't handle 80-100 Ghash/s...

to keep the hoppers away...

They have a 'crap pool' to keep hoppers away? I thought that would keep everyone away  Grin

So how's the infrastructure change coming - can we expect consistent connection soon?
legendary
Activity: 1449
Merit: 1001
July 26, 2011, 09:30:37 AM
The pool is crap; it simply can't handle 80-100 Ghash/s...

to keep the hoppers away...
hero member
Activity: 504
Merit: 502
July 26, 2011, 08:45:12 AM
Looking at their network swing rates, that's probably issues with the pool rather than just hoppers.

The network speed goes up and down as if the actual server is crashing constantly.

I think even regular users are just quitting because of these issues.

EDIT: Mining with the miner only has the same issues as mining with bitHopper. The network resets constantly.
legendary
Activity: 910
Merit: 1000
Quality Printing Services by Federal Reserve Bank
July 26, 2011, 08:42:58 AM
btw, nofee blocks are easily detectable through mail (I always get the block-found mail at most 1 min after they find a block).

What I think is curious about nofee:
With bitHopper I get many more "pool downs" than with my miner. That's the reason I made an API for bitHopper and switch my miner directly (there are still some pool downs and I still have a higher reject rate than usual (nofee is 5% atm for this round), but it's better than with the proxy). Maybe they are trying to detect us and close the connection on a random basis, but I am not sure.

It's definitely related to bitHopper in some way, and not just a user-agent check detecting bitHopper and then dropping the IP.

I've tested with the miner directly and get ~2% stales, which is sort of within the norm, and no dropped miners.

Peh. :/


And the result is this? http://imageshack.us/photo/my-images/38/snapshot28nodeedrop.jpg

donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 26, 2011, 08:42:01 AM
here are some more errors I get

Code:
[15:18:47] bitclockers: 1,529,462
[15:18:56] RPC request [69e7e000] submitted to mtred
[15:19:00] RPC request [] submitted to mtred
[Failure instance: Traceback (failure with no frames): : User timeout caused connection failure.
]
[15:19:06] LP triggered from server mtred
[15:19:07] triple: 4,932,232
[15:19:13] RPC request [59a31000] submitted to mtred
[15:19:14] RPC request [3e23d000] submitted to mtred
[15:19:16] Error in pool api for nofeemining
[15:19:18] RPC request [7e170000] submitted to mtred
[15:19:22] RPC request [0589c000] submitted to mtred
[15:19:23] RPC request [] submitted to mtred
[15:19:24] RPC request [] submitted to mtred
[15:19:30] RPC request [a17f3000] submitted to mtred
[15:19:34] triple: 4,932,555
Error in json decoding, Server probably down

[15:19:46] RPC request [] submitted to mtred
[15:19:47] nofeemining: 198,400
[15:19:48] bitclockers: 1,531,414
[15:19:48] RPC request [] submitted to mtred
[15:19:53] RPC request [ca6f5000] submitted to mtred
[15:20:02] triple: 4,932,918
Caught, jsonrpc_call insides
User timeout caused connection failure.

Triplemining was down a while back. I'm getting normal results atm.
legendary
Activity: 910
Merit: 1000
Quality Printing Services by Federal Reserve Bank
July 26, 2011, 08:36:42 AM
here are some more errors I get

Code:
[15:18:47] bitclockers: 1,529,462
[15:18:56] RPC request [69e7e000] submitted to mtred
[15:19:00] RPC request [] submitted to mtred
[Failure instance: Traceback (failure with no frames): : User timeout caused connection failure.
]
[15:19:06] LP triggered from server mtred
[15:19:07] triple: 4,932,232
[15:19:13] RPC request [59a31000] submitted to mtred
[15:19:14] RPC request [3e23d000] submitted to mtred
[15:19:16] Error in pool api for nofeemining
[15:19:18] RPC request [7e170000] submitted to mtred
[15:19:22] RPC request [0589c000] submitted to mtred
[15:19:23] RPC request [] submitted to mtred
[15:19:24] RPC request [] submitted to mtred
[15:19:30] RPC request [a17f3000] submitted to mtred
[15:19:34] triple: 4,932,555
Error in json decoding, Server probably down

[15:19:46] RPC request [] submitted to mtred
[15:19:47] nofeemining: 198,400
[15:19:48] bitclockers: 1,531,414
[15:19:48] RPC request [] submitted to mtred
[15:19:53] RPC request [ca6f5000] submitted to mtred
[15:20:02] triple: 4,932,918
Caught, jsonrpc_call insides
User timeout caused connection failure.
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 26, 2011, 07:47:56 AM
bitHopper just crashed for me:


Code:
[11:41:23] Updating Difficulty
[11:41:24] 1690906.2047244
[11:41:24] Updating NameCoin Difficulty
[11:41:25] 94037.96
[11:41:25] Checking Database
[11:41:25] DB Verson: 0.1
Traceback (most recent call last):
  File "bitHopper.py", line 220, in <module>
    bithopper_global = BitHopper()
  File "bitHopper.py", line 39, in __init__
    self.db = database.Database(self)
  File "/home/user/bitHopper/database.py", line 21, in __init__
    self.check_database()
  File "/home/user/bitHopper/database.py", line 50, in check_database
    self.curs.execute(sql)
sqlite3.OperationalError: unknown database bitcoin

Any ideas before I delete the db?

edit: Solved. Never use a pool name like [bitcoin.cz] as a database table name; SQLite parses the dot as a database.table qualifier.
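That "unknown database bitcoin" error can be reproduced outside bitHopper: an unquoted dot in a table name is read by SQLite as a database qualifier. A small sketch (the CREATE TABLE shape here is invented for illustration, not bitHopper's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
curs = conn.cursor()

# Unquoted, SQLite reads "bitcoin.cz" as table "cz" in database "bitcoin":
try:
    curs.execute("CREATE TABLE bitcoin.cz (shares INTEGER)")
except sqlite3.OperationalError as err:
    print(err)  # unknown database bitcoin

# Double-quoting the identifier makes the dotted name a plain table name:
curs.execute('CREATE TABLE "bitcoin.cz" (shares INTEGER)')
curs.execute('INSERT INTO "bitcoin.cz" VALUES (1)')
print(curs.execute('SELECT shares FROM "bitcoin.cz"').fetchone())  # (1,)
```

So either rename the pool entry or quote every identifier built from pool names.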
legendary
Activity: 1428
Merit: 1000
July 26, 2011, 06:47:31 AM
With PPLNS he is wrong.
He made the (wrong) assumption that the N in PPLNS means just the N shares back to round start. But with PPLNS some shares get paid twice, which makes hopping useless.

Score could be hoppable - I haven't dived into that yet.
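The double-payment point can be illustrated with a toy payout model (all names and numbers invented; this is not bitHopper code): each found block pays the last N shares, so the window slides across round boundaries and one share can be paid by several blocks.

```python
# Toy PPLNS payout: every block pays the last N submitted shares,
# regardless of where rounds start, so a share can land in the
# payout window of more than one block.
N = 5
REWARD = 1.0

def pplns_payout(shares, block_points):
    """shares: miner ids in submission order;
    block_points: indices into `shares` where a block was found."""
    earned = {}
    for b in block_points:
        window = shares[max(0, b - N + 1): b + 1]  # last N shares
        for miner in window:
            earned[miner] = earned.get(miner, 0.0) + REWARD / len(window)
    return earned

# miner 'a' front-loads its shares, miner 'b' submits steadily:
shares = ['a', 'a', 'b', 'b', 'b', 'a', 'b', 'b', 'b', 'b']
print(pplns_payout(shares, block_points=[4, 9]))
```

In this run both miners earn the same amount per submitted share (0.2) even though 'a' submitted early in the round, which is the sense in which PPLNS removes the hopper's edge.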
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 26, 2011, 06:39:00 AM
I'm looking forward to seeing your pool watch service!

It's a bad idea to use slush as a backup, as many of your shares will go to hell.

I tried an approach of joining slush @300% and staying until they found a block, but it's just not working well, so I disabled it.

If you look at streblo's explanation, he stated that any scoring pool and any PPLNS pool would be hoppable; you just have to leave earlier. His graphs were pretty, but I'm not sure about the math, and he hasn't been back since to explain, except to leave a very mysterious agreement that I didn't follow.

On multipool and multclone I got good slush scores (around 110%), so I know it can be done (or I was very lucky). Either way I want to give it a try. The role:mine_slush is supposed to hop on and leave after diff*0.1.
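The diff*0.1 rule described above can be sketched as a simple predicate. The role name and threshold come from the post; the function shape, share counting, and numbers are invented for illustration:

```python
# Hedged sketch of a "mine slush, leave after diff*0.1" rule:
# stay on the pool only while the current round is still young.
THRESHOLD = 0.1

def should_mine_slush(round_shares, difficulty):
    """Hop on at round start; leave once the round's share count
    exceeds 10% of the current difficulty."""
    return round_shares < difficulty * THRESHOLD

difficulty = 1_690_906  # difficulty value from the log earlier in the thread
print(should_mine_slush(100_000, difficulty))  # early round -> True
print(should_mine_slush(200_000, difficulty))  # past diff*0.1 -> False
```

A real role would also need the round's share count from the pool's API, which is exactly the per-pool stats bitHopper already scrapes.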



legendary
Activity: 1428
Merit: 1000
July 26, 2011, 06:32:20 AM
It's a bad idea to use slush as a backup, as many of your shares will go to hell.

I tried an approach of joining slush @300% and staying until they found a block, but it's just not working well, so I disabled it.