
Topic: bitHopper: Python Pool Hopper Proxy - page 28.

newbie
Activity: 33
Merit: 0
August 14, 2011, 03:48:51 PM
What the graphs show is that, for a large number of alternative pools, what you gain in per-share efficiency by hopping earlier, you lose on the PPS backup. The efficiency remains constant because, as you increase the hop off point, you no longer need to hop to the backup and take 1.0 efficiency there. Did your sim include a PPS backup, or did it only look at per-share efficiency?

I'd love you to post your graphs so we can get a side by side comparison.

My simulator has a PPS backup pool that pays a constant amount of BTC for each share.
My interpretation: the PPS pool is barely touched when we have more proportional pools (so the threshold becomes less meaningful; however, that does not imply a share from a >43% proportional pool has a greater profit expectation than a PPS share).

I think we have the same shaped graphs.
legendary
Activity: 1512
Merit: 1036
August 14, 2011, 03:02:41 PM
Hop off thresholds revisited

Real world simulation

Finally, I ran the simulation with six pools that were hopped at the time I started writing the simulation:

  • polmine, ~ 210 GH/s
  • MtRed, ~ 165 GH/s
  • triple, ~ 110 GH/s
  • ozco, ~ 55 GH/s
  • rfc, ~ 5 GH/s
  • btcmonkey, ~ 2.5 GH/s

In the result, the peak is gone, and from 0.40 up to at least 1.8 there is no visible drop in efficiency!





Conclusion

The results of my simulations agree with the findings of @organofcorti (and others). When using multiple pools for hopping (which is presently the common case), there is no need for a hop off threshold of 0.43. However, regarding efficiency, there does not seem to be any benefit in using a higher threshold either.

Choosing thresholds randomly when pool hopping makes it impossible for pool operators to identify you as a hopper by your hopping off at 0.43. I therefore suggest implementing random threshold selection from the range [0.5, 1.5] when multiple pools are being hopped.

If you disable the backup pool or set the backup threshold really high, you will essentially be hopping at a random threshold anyway: a new round could start at a new pool while your current least-share pool is at 10% or at 300%.

I assume you are giving each pool a table of random block-find times (randomly generate a high-precision round percentile from 0% to 100%, turn that percentile into the number of individual hashes required using the correct math, then turn that into times using the pool's hashrate), and then simulating the switching and the share percentages earned. I'm also curious how much time, in your simulation, is spent mining beyond the "optimal" range, say if the threshold were 1.8. Could you run a histogram with difficulty switch points for your six-pool example, so we can visually see how long we spend in proportional rounds? I'd be interested in how long they actually tend to go.
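
Roughly what I have in mind, as a sketch (not your code; I'm assuming exponentially distributed round lengths and one difficulty-1 share per 2^32 hashes):

import math
import random

HASHES_PER_SHARE = 2 ** 32  # average hashes per difficulty-1 share

def random_round(difficulty, pool_hashrate):
    # Draw a round percentile uniformly, invert the exponential CDF to get
    # the round length in shares, then convert to wall-clock seconds using
    # the pool's hashrate.
    percentile = random.random()
    shares = -difficulty * math.log(1.0 - percentile)
    seconds = shares * HASHES_PER_SHARE / pool_hashrate
    return shares, seconds

# e.g. a ~210 GH/s pool: random_round(current_difficulty, 210e9)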

For more interesting real-world factors to model, the analysis could be biased with a 'pool hop-in' time delay (like jumping in 5 minutes after a block find), and a 'wrong pool hop' error rate reflecting what people are seeing with early block-detection methods: they get an incorrect block find because the info isn't coming from the pool that actually found the block, and jump on a false positive.
member
Activity: 98
Merit: 10
August 14, 2011, 02:26:33 PM
Does the efficiency stabilize because we are not going to any pool that is less than 43%? In your simulations can you say how much time the hopper would spend at the PPS pool (or how much time there was spent with no pool being less than 43%)? If the amount of time spent at PPS is a tiny percentage (because of all the available prop pools) then the efficiency would seem to be high but not because you hop away from a pool, but because you are always hopping TO a pool that has low shares.

Sure, I guess this is simply the reason, and it coincides so far with my observations. My backup pool almost never needs to be used, especially these days, when a huge percentage of the whole network's hashing power is in proportional pools and you can hop deepbit.

With enough pools it doesn't matter whether you set the threshold to 0.43, 1, or infinity (in which case the backup pool will never be used): there is nearly always a pool with less than 43%. Note, though, that efficiency still drops sharply if the threshold is set below 43%.
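
Back-of-envelope (assuming the round progress you see at a random moment is exponentially distributed with mean equal to the difficulty, and that the pools are independent), the fraction of time with no pool under the threshold shrinks quickly as pools are added:

import math

def time_on_backup(threshold, n_pools):
    # Probability that, at a random moment, none of the n proportional pools
    # is below threshold * difficulty shares, i.e. the fraction of time
    # spent on the PPS backup.
    return math.exp(-threshold * n_pools)

print(time_on_backup(0.43, 1))   # ~0.65 -> one pool: on backup ~65% of the time
print(time_on_backup(0.43, 6))   # ~0.08 -> six pools: on backup ~8% of the time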
sr. member
Activity: 302
Merit: 250
August 14, 2011, 02:16:17 PM
Does the efficiency stabilize because we are not going to any pool that is less than 43%? In your simulations can you say how much time the hopper would spend at the PPS pool (or how much time there was spent with no pool being less than 43%)? If the amount of time spent at PPS is a tiny percentage (because of all the available prop pools) then the efficiency would seem to be high but not because you hop away from a pool, but because you are always hopping TO a pool that has low shares.
bb
member
Activity: 84
Merit: 10
August 14, 2011, 01:27:16 PM
Hop off thresholds revisited

A while ago @organofcorti claimed that our beloved hop off threshold of 43% does not make much sense when hopping multiple pools.

Unsure about this claim (like so many others), I implemented a (very crude) pool hopping simulator myself and ran some simulations, the results of which I would like to share with you.

All these simulations run one miner at approximately 1 GH/s for about 1 year on what is now called the OldDefaultScheduler, meaning that the miner always jumps to the pool with the fewest shares. There is also a threshold variable (e.g. t = 0.43): if no pool has fewer than t * difficulty shares, the miner hops to a fair pool.
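
In simplified form, each simulation step picks the next pool roughly like this (a sketch of the logic only, not the actual bitHopper scheduler code):

def pick_pool(prop_pools, threshold, difficulty):
    # Pick the proportional pool with the fewest shares in its current round;
    # if even that one is at or above threshold * difficulty, fall back to
    # the fair (PPS-like) backup pool instead.
    candidate = min(prop_pools, key=lambda p: p.round_shares)
    if candidate.round_shares < threshold * difficulty:
        return candidate
    return None  # None means: mine on the fair backup pool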

The simulation does not use slicing. The pool speeds stay constant.



The standard case: one proportional pool

At first I ran the simulation with one proportional and one fair pool. This is the case @Raulo discussed in his original paper.

The simulated pool was running at about 500 GH/s.

The result shows the expected peak at about 0.43:





Two hoppable pools

Next I added another proportional pool.

The simulated pools were running at about 210 GH/s and 165 GH/s.

You can still see a peak somewhere around 0.40:





Real world simulation

Finally, I ran the simulation with six pools that were hopped at the time I started writing the simulation:

  • polmine, ~ 210 GH/s
  • MtRed, ~ 165 GH/s
  • triple, ~ 110 GH/s
  • ozco, ~ 55 GH/s
  • rfc, ~ 5 GH/s
  • btcmonkey, ~ 2.5 GH/s

In the result, the peak is gone, and from 0.40 up to at least 1.8 there is no visible drop in efficiency!





Conclusion

The results of my simulations agree with the findings of @organofcorti (and others). When using multiple pools for hopping (which is presently the common case), there is no need for a hop off threshold of 0.43. However, regarding efficiency, there does not seem to be any benefit in using a higher threshold either.

Choosing thresholds randomly when pool hopping makes it impossible for pool operators to identify you as a hopper by your hopping off at 0.43. I therefore suggest implementing random threshold selection from the range [0.5, 1.5] when multiple pools are being hopped.
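
A minimal sketch of what I mean (hypothetical names, not actual bitHopper code): re-draw the threshold regularly, for example at every round start, so the hop off point never sits at a fixed, recognisable value.

import random

def new_threshold(low=0.5, high=1.5):
    # Fresh hop off threshold, drawn e.g. at the start of every round, so a
    # pool operator can't fingerprint us by a constant 0.43 * difficulty exit.
    return random.uniform(low, high)

Given the flat efficiency curve above, where exactly the threshold lands inside [0.5, 1.5] should not matter much.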
member
Activity: 84
Merit: 10
August 14, 2011, 01:19:13 PM
They paid out... once their JSON stats are fixed to match reality, I'll begin mining them again.

They aren't going to fix their JSON stats. They are doing this as an anti-hopping measure. I asked about it on their IRC and was told as much.
newbie
Activity: 38
Merit: 0
August 14, 2011, 11:53:22 AM
Urgh, not these scamcoins!

Is mine_ixc really implemented? We need a difficulty for these too, by the way...

IXC difficulty: http://bitcoinx.com/ixcoin/
legendary
Activity: 2618
Merit: 1007
August 14, 2011, 08:35:12 AM
...I meant actually selling the mined coins for Bitcoins, not just determining whether it is worth mining (which is trivial; I only need to figure out which measure to use: 24h average, daily low, or last sell/buy).
hero member
Activity: 504
Merit: 502
August 14, 2011, 05:56:51 AM
If bitparking gets an API, we might even be able to automatically convert mined ___coins into Bitcoins, lowering the arbitrage risk. I'm not sure, however, whether this really should be a feature of a pool hopping client...

You don't need any API for this; it can be done with the magic of mathematics and regex scraping.
full member
Activity: 168
Merit: 100
August 14, 2011, 05:55:06 AM
bcpool has been faking stats all day, I've been watching it.  Here is an example (times are in Mountain):

[22:03:27] RPC request [getwork] submitted to bcpool
[22:03:30] bcpool: 1668  96.2gh/s 27min.
[22:03:30] Setting Block Owner bcpool:298e948fe1d2891ab0c201d0ff41e4e2b6aa6bcf1d1734de000001db00000000
[22:03:31] RPC request [2274f000] submitted to bcpool

bcpool has been reporting between 1000 and 99000 shares on the round all day, but I've been checking bitcoinpool.com and they have not solved a block all day.


Yep. I don't know if it's possible, but all of the stats for individual users are on this page:

http://bitcoinpool.com/index.php?page=1&ipp=All&do=currentround

If we were able to simply add up all of the user shares, we would get the total shares for the current block. Sounds simple enough, but I don't know anything about Python or coding.


Yeah, while I've been watching, the JSON feed has shown them solve 140903, 140904, and 140906, and they're currently on 140910. I thought they weren't going to screw with donors. Might have to rethink my 2% donation.

They paid out... once their JSON stats are fixed to match reality, I'll begin mining them again.
legendary
Activity: 2618
Merit: 1007
August 14, 2011, 04:23:46 AM
If bitparking gets an API, we might even be able to automatically convert mined ___coins into Bitcoins, lowering the arbitrage risk. I'm not sure, however, whether this really should be a feature of a pool hopping client...
legendary
Activity: 1526
Merit: 1002
Waves | 3PHMaGNeTJfqFfD4xuctgKdoxLX188QM8na
August 14, 2011, 04:09:02 AM
Urgh, not these scamcoins!

Is mine_ixc really implemented? We need a difficulty for these too, by the way...

It's only a scam when you can't make money from it.

I'm not complaining. The 2000 were withdrawn from https://ixchange.bitparking.com/main, where I still have 6000 parked.
 
donator
Activity: 2058
Merit: 1007
Poor impulse control.
August 14, 2011, 04:08:20 AM
bcpool has been faking stats all day, I've been watching it.  Here is an example (times are in Mountain):

[22:03:27] RPC request [getwork] submitted to bcpool
[22:03:30] bcpool: 1668  96.2gh/s 27min.
[22:03:30] Setting Block Owner bcpool:298e948fe1d2891ab0c201d0ff41e4e2b6aa6bcf1d1734de000001db00000000
[22:03:31] RPC request [2274f000] submitted to bcpool

bcpool has been reporting between 1000 and 99000 shares on the round all day, but I've been checking bitcoinpool.com and they have not solved a block all day.


Yep. I don't know if it's possible, but all of the stats for individual users are on this page:

http://bitcoinpool.com/index.php?page=1&ipp=All&do=currentround

If we were able to simply add up all of the user shares, we would get the total shares for the current block. Sounds simple enough, but I don't know anything about Python or coding.


Yeah, while I've been watching, the JSON feed has shown them solve 140903, 140904, and 140906, and they're currently on 140910. I thought they weren't going to screw with donors. Might have to rethink my 2% donation.
donator
Activity: 2058
Merit: 1007
Poor impulse control.
August 14, 2011, 04:00:11 AM
I got almost the same graph as @organofcorti two weeks ago (from my own open-source simulator).
However, I 100% agree with @deepceleron.

When we have more proportional pools, efficiency doesn't 'seem' to decay, because we have a better chance of hopping into a <43% pool all the time.
An earlier share is always more profitable, slicing reduces profit in the long run, and a share from a >43% pool is less profitable than a share from a PPS pool.

What the graphs show is that, for a large number of alternative pools, what you gain in per-share efficiency by hopping earlier, you lose on the PPS backup. The efficiency remains constant because, as you increase the hop off point, you no longer need to hop to the backup and take 1.0 efficiency there. Did your sim include a PPS backup, or did it only look at per-share efficiency?

I'd love you to post your graphs so we can get a side by side comparison.

The real problem is variance, which slicing can help with, at a cost to efficiency.

You can hop early if you want (although hopping too early loses you efficiency, since there will be fewer available pools under that level, leaving you with PPS), but your efficiency won't be any better in the long run. Your variance will be better if you hop early, since you'll be using PPS a lot, and if it weren't for hoppers being banned and having to manage extra PPS accounts, I'd keep hopping early.

I've been hopping at 1.0 for only a week, and although it's early (given the extra variance doing it this way), I seem to be getting results in the ballpark I was expecting (150 to 250%).



Edit: While I'm on the subject, 43% only applies to a single prop pool with a single backup. The maximum is slightly different for two prop pools + PPS, and so on, up to about four prop pools + PPS, when it levels out completely and there is no local maximum beyond about 0.2 * difficulty.
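
For anyone who wants to check the single prop + PPS number: a share submitted when the round already has y * difficulty shares pays out over a round whose remaining length is roughly exponential with mean equal to the difficulty, which works out to exp(y) * E1(y) times the value of a PPS share. The break-even point is therefore where exp(y) * E1(y) = 1. A quick numerical check (just a sketch; needs scipy):

import math
from scipy.special import exp1      # E1(x) = integral from x to inf of e^-t / t dt
from scipy.optimize import brentq

# Solve exp(y) * E1(y) == 1 for y = shares_so_far / difficulty.
y_star = brentq(lambda y: math.exp(y) * exp1(y) - 1.0, 0.1, 1.0)
print(y_star)   # ~0.435, the familiar 43.5% hop off point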
hero member
Activity: 504
Merit: 502
August 14, 2011, 02:58:46 AM
Urgh, not these scamcoins!

Is mine_ixc really implemented? We need a difficulty for these too, by the way...

It's only a scam when you can't make money from it.
legendary
Activity: 2618
Merit: 1007
August 14, 2011, 01:32:10 AM
Urgh, not these scamcoins!

Is mine_ixc really implemented? We need a difficulty for these too, by the way...
sr. member
Activity: 434
Merit: 250
August 14, 2011, 01:21:32 AM
TY.
sr. member
Activity: 434
Merit: 250
August 13, 2011, 11:34:10 PM
Bitparking IXC:

[ixparking]
name: IXParking.com
namecoin: Yep
mine_address: ixpool.bitparking.com:10098
api_address: http://ixpool.bitparking.com/user
api_method: re
api_key: Total Pool Shares([ 0-9]+)
api_strip: ' '
url: http://ixpool.bitparking.com/pool/%(user)s

[ixparking]
# IXC: use mine_ixc. Good for hopping if you want some ixcoins.
#http://bitparking.com/
#CHANGE THIS (if you want)
role: mine_ixc
penalty: .25
user: xxxGETYEROWNxxx
pass: x


member
Activity: 84
Merit: 10
August 13, 2011, 11:11:09 PM
bcpool has been faking stats all day, I've been watching it.  Here is an example (times are in Mountain):

[22:03:27] RPC request [getwork] submitted to bcpool
[22:03:30] bcpool: 1668  96.2gh/s 27min.
[22:03:30] Setting Block Owner bcpool:298e948fe1d2891ab0c201d0ff41e4e2b6aa6bcf1d1734de000001db00000000
[22:03:31] RPC request [2274f000] submitted to bcpool

bcpool has been reporting between 1000 and 99000 shares on the round all day, but I've been checking bitcoinpool.com and they have not solved a block all day.


Yep. I don't know if it's possible, but all of the stats for individual users are on this page:

http://bitcoinpool.com/index.php?page=1&ipp=All&do=currentround

If we were able to simply add up all of the user shares, we would get the total shares for the current block. Sounds simple enough, but I don't know anything about Python or coding.
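
In rough terms it should just be: fetch the page, pull out each user's share count, add them up. A hypothetical sketch (the regex is only a guess at the page's table markup and would need adjusting to the real HTML):

import re
import urllib2  # Python 2

URL = 'http://bitcoinpool.com/index.php?page=1&ipp=All&do=currentround'

def total_round_shares():
    # Fetch the current-round page and sum every per-user share count.
    # NOTE: the pattern below is a placeholder guess at the table cells
    # holding share counts; it needs to match the real page layout.
    html = urllib2.urlopen(URL).read()
    share_counts = re.findall(r'<td[^>]*>\s*(\d+)\s*</td>', html)
    return sum(int(count) for count in share_counts)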
sr. member
Activity: 332
Merit: 250
August 13, 2011, 11:08:15 PM
bcpool has been faking stats all day, I've been watching it.  Here is an example (times are in Mountain):

[22:03:27] RPC request [getwork] submitted to bcpool
[22:03:30] bcpool: 1668  96.2gh/s 27min.
[22:03:30] Setting Block Owner bcpool:298e948fe1d2891ab0c201d0ff41e4e2b6aa6bcf1d
1734de000001db00000000
[22:03:31] RPC request [2274f000] submitted to bcpool

bcpool has been reporting between 1000 and 99000 shares on the round all day, but I've been checking bitcoinpool.com and they have not solved a block all day.