Topic: bitHopper: Python Pool Hopper Proxy - page 187. (Read 355689 times)

newbie
Activity: 55
Merit: 0
July 17, 2011, 10:27:08 AM
I have multiple GPUs, so does it matter if I have one worker for each pool and run multiple Phoenix clients pointed at a single bitHopper instance, or should I have one worker, one bitHopper instance, and one Phoenix client per GPU?

bitHopper is a proxy. Point as many clients as you like at it.

Quote
Also, I get a syntax error and the script won't start when applying nofee's code to pools.py - any clue why that is?

Can you post your code as entered in pools.py and password.py as well as the error?



Error:
Code:
Traceback (most recent call last):
  File "D:\aaaphoenix\bithopper\bitHopper.py", line 7, in <module>
    import work
  File "D:\aaaphoenix\bithopper\work.py", line 12, in <module>
    from bitHopper import *
  File "D:\aaaphoenix\bithopper\bitHopper.py", line 9, in <module>
    import stats
  File "D:\aaaphoenix\bithopper\stats.py", line 6, in <module>
    import pool
  File "D:\aaaphoenix\bithopper\pool.py", line 62
    'nofee':{'shares': default_shares, 'name': 'nofee',
          ^
SyntaxError: invalid syntax
>>>

pools:
Code:
  'nofee':{'shares': default_shares, 'name': 'nofee',
           'mine_address': 'nofeemining.com:8332', 'user': nofee_user,
           'pass': nofee_pass, 'lag': False, 'LP': None,
           'api_address':'http://www.nofeemining.com/api.php?key=' + nofee_user_apikey, 'role':'mine'}
        }
.
.
.
def nofee_sharesResponse(response):
    global servers
    info = json.loads(response)
    round_shares = int(info['poolRoundShares'])
    servers['nofee']['shares'] = round_shares
    bitHopper.log_msg('nofee:' + FormatShares(round_shares))
.
.
.
'nofee':nofee_sharesResponse,

password:
Code:
#nofee
nofee_user = 'my worker id'
nofee_pass= 'my password'
nofee_user_apikey = 'my api'
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 17, 2011, 10:17:31 AM
I have multiple GPUs, so does it matter if I have one worker for each pool and run multiple Phoenix clients pointed at a single bitHopper instance, or should I have one worker, one bitHopper instance, and one Phoenix client per GPU?

bitHopper is a proxy. Point as many clients as you like at it.

Quote
Also, I get a syntax error and the script won't start when applying nofee's code to pools.py - any clue why that is?

Can you post your code as entered in pools.py and password.py as well as the error?

newbie
Activity: 55
Merit: 0
July 17, 2011, 10:13:56 AM
I have multiple GPUs, so does it matter if I have one worker for each pool and run multiple Phoenix clients pointed at a single bitHopper instance, or should I have one worker, one bitHopper instance, and one Phoenix client per GPU?

Also, I get a syntax error and the script won't start when applying nofee's code to pools.py - any clue why that is?
legendary
Activity: 1428
Merit: 1000
July 17, 2011, 10:13:13 AM
Work is being done on identifying who created each block with various heuristics, but it's not very accurate yet, and it may prove difficult or impossible to make reliably accurate.

Some things that might work:
  • Monitoring block announces on bitcoind (especially which node told you about which block first). I assume pools don't switch neighbours too often, so blocks from a given pool might take similar routes to you. Especially great if you manage to connect directly to a pool's bitcoind.
  • LongPoll timing.
  • Somehow measuring server load (getwork response speed?) - after a block is solved, some database load should kick in. This might be delayed though, so it's not that reliable...
  • Monitoring your own getwork submissions for winning blocks and announcing these not just to the pool but also on a 3rd-party website (maybe with some incentives?). If everyone did it, we wouldn't need to guess! Also very hard (current-difficulty hard!) to fake.
  • Combining all of the above measures on a website, ideally submitted automatically by a pool-hopping program. Attached could be a valid share for any pool at a certain difficulty, so it's not easy to submit bogus data, but such shares will emerge anyway during a miner's day, ensuring constant submissions.
  • Monitoring the coinbase hash in the getwork. At least it can be assumed that NO block has been found globally if it stays the same...

Most of these things CANNOT be delayed at all, and most are even time-critical (block announcements especially have to go out as fast as possible, no matter the cost), so pools have to get them out fast regardless of payout scheme.

VERY nice; that could give good data.

could you PM techwtf?
maybe he is able to put some of those on his site: http://fasthoop.appspot.com/

(i guess we need people working at ISPs to watch for winning getwork requests too)
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 17, 2011, 10:10:48 AM

Quote
Every share you submit after 0.4348 * difficulty diminishes your overall expected efficiency, but that overall efficiency will remain greater than 1 as long as you switch out at some point.
OK, that sounds about right. But earlier you wrote:

Quote
The multiplier drops below 1 at approximately 0.4348 * difficulty

These seem like contradictory statements to me, so I'm still missing something! Huh
legendary
Activity: 2618
Merit: 1007
July 17, 2011, 10:08:08 AM
Work is being done on identifying who created each block with various heuristics, but it's not very accurate yet, and it may prove difficult or impossible to make reliably accurate.

Some things that might work:
  • Monitoring block announces on bitcoind (especially which node told you about which block first). I assume pools don't switch neighbours too often, so blocks from a given pool might take similar routes to you. Especially great if you manage to connect directly to a pool's bitcoind.
  • LongPoll timing.
  • Somehow measuring server load (getwork response speed?) - after a block is solved, some database load should kick in. This might be delayed though, so it's not that reliable...
  • Monitoring your own getwork submissions for winning blocks and announcing these not just to the pool but also on a 3rd-party website (maybe with some incentives?). If everyone did it, we wouldn't need to guess! Also very hard (current-difficulty hard!) to fake.
  • Combining all of the above measures on a website, ideally submitted automatically by a pool-hopping program. Attached could be a valid share for any pool at a certain difficulty, so it's not easy to submit bogus data, but such shares will emerge anyway during a miner's day, ensuring constant submissions.
  • Monitoring the coinbase hash in the getwork. At least it can be assumed that NO block has been found globally if it stays the same...

Most of these things CANNOT be delayed at all, and most are even time-critical (block announcements especially have to go out as fast as possible, no matter the cost), so pools have to get them out fast regardless of payout scheme.

Edit:
If this can be done in an at least reasonably reliable way, we wouldn't even need these stupid fakeable APIs any more!
LP timing could already be done by bitHopper; the more difficult part would be to emulate bitcoind traffic and to establish + monitor connections to other "real" bitcoind nodes (as many as possible...)
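That LP-timing guess could start as something as simple as the sketch below. This is a hypothetical helper, not part of bitHopper; the pool names and timestamps are made up. The heuristic: the solving pool learns of its own block first, so its LongPoll should fire slightly before the others relay it.

```python
def likely_block_finder(lp_times):
    """Guess which pool solved the latest block from longpoll push times.

    lp_times: {pool_name: unix timestamp of that pool's LP push}.
    Returns (best_guess, margin) where margin is the gap in seconds to the
    second-earliest LP - a tight margin means the guess is weak (jitter).
    """
    ordered = sorted(lp_times.items(), key=lambda kv: kv[1])
    if not ordered:
        return None, 0.0
    if len(ordered) == 1:
        return ordered[0][0], 0.0
    (first, t1), (_, t2) = ordered[0], ordered[1]
    return first, t2 - t1

# example with invented timestamps
print(likely_block_finder({'btcguild': 100.05, 'mtred': 100.91, 'nofee': 101.30}))
```

A real version would also have to discard LPs caused by blocks from pools we don't monitor (and solo miners), which is exactly the blind spot mentioned above.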
newbie
Activity: 53
Merit: 0
July 17, 2011, 10:04:35 AM
This goes against nearly everything I've read - could you please explain a little more where you got these figures from? If you're referring to expected efficiency, that of course goes to 1.0 at total shares. If you're referring to Raulo's paper, that has a maximum at 0.435 but still provides a 22% hashrate increase at total shares.

So I guess I'm missing something here - can you find a simple way to explain the results you got above?

btw, I've had multipliers of over 120x for a few shares in short rounds. 13x is *not* the maximum.

These are expected values. If the round finishes on the first share, then you receive 50 BTC, which is difficulty * x as far as efficiency goes. However, the average value of that first share is more like 14.2x. (Sometimes it'll be a long round with a much lower payout.)

Sticking around until shares reach the difficulty still has a positive net effect, because you already feasted on tons of early-round shares, which have a higher expected value. Every share you submit after 0.4348 * difficulty diminishes your overall expected efficiency, but that overall efficiency will remain greater than 1 as long as you switch out at some point.

EDIT: Updated the expected value of the first share, since it depends on difficulty, and I was using a figure from an older difficulty.
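For the curious, that first-share figure can be back-of-enveloped: modelling round lengths as exponential with mean D (the difficulty), the expected multiplier of the very first share of a round works out to roughly ln(D) - γ, where γ ≈ 0.577 is the Euler-Mascheroni constant. The difficulty value below is the approximate July 2011 difficulty, used purely for illustration:

```python
import math

EULER_GAMMA = 0.5772156649015329

def first_share_multiplier(difficulty):
    # E[payout of share #1] / (50/difficulty) under the exponential
    # round-length model is approximately ln(D) - gamma
    return math.log(difficulty) - EULER_GAMMA

print(first_share_multiplier(1_563_028))  # roughly 13.7x
```

Whether you call it ~13x or ~14.2x depends on whether you keep the γ correction, which fits the two figures quoted in this thread.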
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 17, 2011, 09:42:13 AM
Interestingly, the 'cheater function' shows that even hopping at shares = difficulty gives a 22% increase in effective hashrate (and thus coinage), which is better than eligius/arsbitcoin at a 0% increase. So it seems like it might pay to have the jump-off proportion of difficulty at 1.0 instead of 0.4.

It's better to think in terms of the expected value of a share. If x is (50 / difficulty), the first share submitted in a round has an expected value of about 13x. The multiplier drops below 1 at approximately 0.4348 * difficulty. In other words, on average there's no good reason to submit a share to a proportional pool after it has crossed that threshold when SMPPS, PPLNS, or geometric alternatives are available. (If for some reason you wanted to use Deepbit PPS as your backup - no idea why you'd do such a thing - you could wait until more like 0.525 * difficulty.)


This goes against nearly everything I've read - could you please explain a little more where you got these figures from? If you're referring to expected efficiency, that of course goes to 1.0 at total shares. If you're referring to Raulo's paper, that has a maximum at 0.435 but still provides a 22% hashrate increase at total shares.

So I guess I'm missing something here - can you find a simple way to explain the results you got above?

btw, I've had multipliers of over 120x for a few shares in short rounds. 13x is *not* the maximum.
newbie
Activity: 53
Merit: 0
July 17, 2011, 09:34:48 AM
Interestingly, the 'cheater function' shows that even hopping at shares = difficulty gives a 22% increase in effective hashrate (and thus coinage), which is better than eligius/arsbitcoin at a 0% increase. So it seems like it might pay to have the jump-off proportion of difficulty at 1.0 instead of 0.4.

It's better to think in terms of the expected value of a share. If x is (50 / difficulty), the first share submitted in a round has an expected value of about 13x. The multiplier drops below 1 at approximately 0.4348 * difficulty. In other words, on average there's no good reason to submit a share to a proportional pool after it has crossed that threshold when SMPPS, PPLNS, or geometric alternatives are available. (If for some reason you wanted to use Deepbit PPS as your backup - no idea why you'd do such a thing - you could wait until more like 0.525 * difficulty.)
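The 0.4348 threshold can be checked numerically. Under the usual exponential round-length model, a share submitted when the pool already has x * difficulty shares has expected value e^x * E1(x) times the fair PPS value, where E1 is the exponential integral. This is a sketch of the standard analysis, not code from bitHopper; E1 is computed from its series expansion so nothing outside the standard library is needed:

```python
import math

EULER_GAMMA = 0.5772156649015329

def exp1(x, terms=80):
    # E1(x) = -gamma - ln(x) + sum_{n>=1} (-1)^(n+1) * x^n / (n * n!)
    # (series form; converges quickly for the small x used here)
    total, power_over_fact = 0.0, 1.0
    for n in range(1, terms + 1):
        power_over_fact *= x / n            # now holds x^n / n!
        total += (-1) ** (n + 1) * power_over_fact / n
    return -EULER_GAMMA - math.log(x) + total

def share_multiplier(x):
    """Expected payout of a share submitted at round progress
    x = shares/difficulty, relative to the fair PPS value 50/difficulty."""
    return math.exp(x) * exp1(x)

print(share_multiplier(0.43))   # just above 1: still worth submitting
print(share_multiplier(0.44))   # just below 1: time to hop
```

The root of e^x * E1(x) = 1 sits at x ≈ 0.4348, matching the figure quoted above; very early shares (x near 0) get the large multipliers discussed elsewhere in the thread.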

Work is being done on identifying who created each block with various heuristics, but it's not very accurate yet, and it may prove difficult or impossible to make reliably accurate.
legendary
Activity: 1428
Merit: 1000
July 17, 2011, 09:13:57 AM

Not much use for deepbit, but at least btcg has a few > 1 hr rounds.

^^ but we still have the problem of knowing when btcg actually found the block.

lp:
i am not sure, but we could try to monitor LPs and guess that the first LP coming in is probably from the pool that found the block (though we still have no clue about solo miners or pools we don't monitor)

still interested in the way multipool did it with deepbit. reading perl is a PITA
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 17, 2011, 09:04:32 AM
I wonder however what we can/shall do about deepbit and BTC Guild...

how is deepbit handled by multipool/multiclone?

the only way i can imagine is watching historic data and, if they had bad luck, assuming they'll get lucky soon.

but this only works with an algorithm that spreads shares and does not stay within one pool (i posted such a variant before, but it has too many rejects and i don't know how to handle them)



Won't work. Each new round's length is still governed by a Poisson process, and one of the properties of a Poisson process is that it has no memory of prior events. A short round won't make a long one more likely, nor vice versa. Sorry about that. I do think long polling is a help, though. Also, using the known hashrate of the pool means we know how long an average round should be, and if it runs over an hour that helps us predict when it is most likely to end. Not much use for deepbit, but at least btcg has a few > 1 hr rounds.
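The memorylessness claim is easy to demonstrate with a quick simulation (illustrative only; the difficulty and trial counts are made up): draw exponential round lengths with mean equal to the difficulty, and measure the expected remaining length given that the round has already lasted a while.

```python
import random

def mean_remaining(elapsed_frac, difficulty=10_000, trials=200_000, seed=1):
    """Average shares still to come, given the round already has
    elapsed_frac * difficulty shares (round length ~ Exp(mean=difficulty))."""
    rng = random.Random(seed)
    elapsed = elapsed_frac * difficulty
    total, survivors = 0.0, 0
    for _ in range(trials):
        length = rng.expovariate(1.0 / difficulty)
        if length > elapsed:        # condition on the round not being over yet
            total += length - elapsed
            survivors += 1
    return total / survivors

# No memory: expected remaining work stays ~difficulty no matter how long
# the round has already run
print(mean_remaining(0.0))
print(mean_remaining(2.0))
```

Both prints come out near the difficulty, which is exactly why "they had bad luck, so they're due" never works as a hopping strategy.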
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 17, 2011, 08:55:22 AM
ok, nofeemining is working, if you want to try it (yes, i know they're new and could scam us). use the code i posted earlier, just change the api address to http://www.nofeemining.com/api.php

The person running nofeemining fixed the api within a few minutes of my request - so i'm definitely donating to them. I hope those of you that add nofeemining do too. a responsive, engaged pool owner is the best sort - it helps get things sorted quickly, one way or the other.

as an aside - i think it's a good idea to donate even a little bit - even 0.5% - to any pool you hop. adding 100 GH/s at round start is probably a pain for the pool operators, so you may wish to sweeten it for them, and they may keep their pools hoppable longer.

I figured out the original issue. You can have him disable http on the api again. This is what it should look like:
Code:
        'nofee':{'shares': default_shares, 'name': 'nofee',
           'mine_address': 'nofeemining.com:8332', 'user': nofee_user,
           'pass': nofee_pass, 'lag': False, 'LP': None,
           'api_address':'https://www.nofeemining.com/api.php', 'role':'mine',
           'user_api_address':'https://www.nofeemining.com/api.php?key=' + nofee_user_apikey},

The earlier example you posted left out 'user_api_address' and combined nofee_user_apikey with 'api_address'.

No.

This is my original:

 
Code:
'nofee':{'shares': default_shares, 'name': 'nofee',
           'mine_address': 'nofeemining.com:8332', 'user': nofee_user,
           'pass': nofee_pass, 'lag': False, 'LP': None,
           'api_address':'https://www.nofeemining.com/api.php?key=' + nofee_user_apikey, 'role':'mine'}

I defined the api as the user api address because i didn't know the pool stats address - but try it, it works fine for pool stats too. Just because you use the user api for user stats doesn't mean you can't use it for pool stats if they're there too. Hope this helps!
newbie
Activity: 55
Merit: 0
July 17, 2011, 08:51:35 AM
Does anyone know how to fix this:

EDIT: The answer was in the readme  Smiley
Code:
[15:49:15] Error in user api for bitp
"[Failure instance: Traceback: : SSL support is not present\nC:\\Python27\\lib\\site-packages\\twisted\\internet\\task.py:194:__call__\nC:\\Python27\\lib\\site-packages\\twisted\\internet\\defer.py:133:maybeDeferred\nD:\\aaaphoenix\\bithopper\\stats.py:93:update_api_stats\nC:\\Python27\\lib\\site-packages\\twisted\\internet\\defer.py:1141:unwindGenerator\n--- ---\nC:\\Python27\\lib\\site-packages\\twisted\\internet\\defer.py:1020:_inlineCallbacks\nD:\\aaaphoenix\\bithopper\\work.py:70:get\nD:\\aaaphoenix\\bithopper\\client.py:748:request\nD:\\aaaphoenix\\bithopper\\client.py:797:_request\nD:\\aaaphoenix\\bithopper\\client.py:698:_connect\nC:\\Python27\\lib\\site-packages\\twisted\\internet\\protocol.py:289:connectSSL\nC:\\Python27\\lib\\site-packages\\twisted\\internet\\protocol.py:239:_connect\nC:\\Python27\\lib\\site-packages\\twisted\\internet\\posixbase.py:434:connectSSL\n]"

Same error for all pools. I have APIs defined in psdws.py and logins in pools.py.
legendary
Activity: 1428
Merit: 1000
July 17, 2011, 05:57:14 AM
I wonder however what we can/shall do about deepbit and BTC Guild...

how is deepbit handled by multipool/multiclone?

the only way i can imagine is watching historic data and, if they had bad luck, assuming they'll get lucky soon.

but this only works with an algorithm that spreads shares and does not stay within one pool (i posted such a variant before, but it has too many rejects and i don't know how to handle them)

legendary
Activity: 2618
Merit: 1007
July 17, 2011, 05:16:43 AM
"WTF man" is a proper expression when I read such stuff. Pool hopping is over 6 months old now, and people STILL don't care about their miners?!

At least a lot of medium sized pools recently are actively working on switching to secure payout systems. On one hand bad for me, because I earn less from there, but on the other hand great for their users and the community!
I wonder however what we can/shall do about deepbit and BTC guild...
hero member
Activity: 504
Merit: 502
July 17, 2011, 04:56:11 AM
I'm getting api errors with btcguild and mtred, and then it doesn't pull the round stats - anyone else?
legendary
Activity: 2618
Merit: 1007
July 17, 2011, 04:54:34 AM
Alright, anyone got ideas on how to break delayed stats yet? Currently it looks like pools would rather rework sensible parts of their code and screw with their users (adding opacity) by delaying statistics than simply integrate an algorithm like PPLNS, which is nearly the same as prop (when the pool is lucky, you get more), just without the chance of ever being hopped.

At least Eligius' blocks can be fairly easily identified, as can the blocks from pools that are NOT delaying stats. The problem is that BTCguild and deepbit now seem to delay, so there's a very high chance for a new block to have no known owner for an hour or more... I have a few ideas already, but I'd like to collect some input from you on the approaches you would take to deal with delayed/faked stats and still detect which pool solved which block.
newbie
Activity: 40
Merit: 0
July 17, 2011, 03:38:53 AM
ok, nofeemining is working, if you want to try it (yes, i know they're new and could scam us). use the code i posted earlier, just change the api address to http://www.nofeemining.com/api.php

The person running nofeemining fixed the api within a few minutes of my request - so i'm definitely donating to them. I hope those of you that add nofeemining do too. a responsive, engaged pool owner is the best sort - it helps get things sorted quickly, one way or the other.

as an aside - i think it's a good idea to donate even a little bit - even 0.5% - to any pool you hop. adding 100 GH/s at round start is probably a pain for the pool operators, so you may wish to sweeten it for them, and they may keep their pools hoppable longer.

I figured out the original issue. You can have him disable http on the api again. This is what it should look like:
Code:
        'nofee':{'shares': default_shares, 'name': 'nofee',
           'mine_address': 'nofeemining.com:8332', 'user': nofee_user,
           'pass': nofee_pass, 'lag': False, 'LP': None,
           'api_address':'https://www.nofeemining.com/api.php', 'role':'mine',
           'user_api_address':'https://www.nofeemining.com/api.php?key=' + nofee_user_apikey},

The earlier example you posted had left out 'user_api_address' and combined nofee_user_apikey with 'api_address'.
donator
Activity: 2058
Merit: 1007
Poor impulse control.
July 17, 2011, 01:33:22 AM
I just do it manually. For btcguild, eclipse, mt red and bitcoins-lc, they publish your block contribution history. Just take your total shares and total coinage; then efficiency = coinage / (shares * 50 / difficulty).
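In code, that manual check is just the one-liner below. The example numbers are made up; a 50 BTC block reward and the approximate July 2011 difficulty are assumed for illustration:

```python
def hopping_efficiency(coinage, shares, difficulty):
    """BTC actually earned, divided by the fair PPS value of the shares
    submitted (each share is 'worth' 50/difficulty BTC on average)."""
    return coinage / (shares * 50.0 / difficulty)

# e.g. 30 BTC earned for 500,000 shares at difficulty 1,563,028
print(hopping_efficiency(30.0, 500_000, 1_563_028))  # ≈ 1.88, i.e. 88% above PPS
```

An efficiency of 1.0 means you did no better than a PPS pool; successful hopping should land comfortably above that.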
