Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 313

member
Activity: 109
Merit: 10
I'm running a p2pool for an altcoin, and I'm seeing odd payouts that don't reflect the 1% fee configured. More like 0.2-0.5%. Here is a list of transactions going to the wallet of the daemon the pool is connected to: http://explorer.solcoin.net/address/8SLHAh8LukapttooKzdWocZ41tG5hne1UQ

Each block has a value of 1772 coins currently, so each transaction per found block should be around 17 coins, no? What's causing this?

The fee is 1% of valid shares, not 1% of the total coins mined.

So if a miner submits 100 shares, the expected result is that the miner is credited with 99 and the node address with 1. Because each share is independently assigned to the fee address with 1% probability, the realized fee over a short stretch can easily land well below 1% (or well above it); it only converges to 1% over many shares.

Code in work.py line 161:
Code:
        # With probability worker_fee (percent), credit this share to the
        # node operator instead of the miner:
        if random.uniform(0, 100) < self.worker_fee:
            pubkey_hash = self.my_pubkey_hash
        else:
            try:
                pubkey_hash = bitcoin_data.address_to_pubkey_hash(user, self.node.net.PARENT)
            except: # XXX blah
                # Unparseable payout address: fall back to the operator's.
                pubkey_hash = self.my_pubkey_hash

        return user, pubkey_hash, desired_share_target, desired_pseudoshare_target

Ah, that explains it I guess. Thanks!
legendary
Activity: 1258
Merit: 1027
I'm running a p2pool for an altcoin, and I'm seeing odd payouts that don't reflect the 1% fee configured. More like 0.2-0.5%. Here is a list of transactions going to the wallet of the daemon the pool is connected to: http://explorer.solcoin.net/address/8SLHAh8LukapttooKzdWocZ41tG5hne1UQ

Each block has a value of 1772 coins currently, so each transaction per found block should be around 17 coins, no? What's causing this?

The fee is 1% of valid shares, not 1% of the total coins mined.

So if a miner submits 100 shares, the expected result is that the miner is credited with 99 and the node address with 1. Because each share is independently assigned to the fee address with 1% probability, the realized fee over a short stretch can easily land well below 1% (or well above it); it only converges to 1% over many shares.

Code in work.py line 161:
Code:
        # With probability worker_fee (percent), credit this share to the
        # node operator instead of the miner:
        if random.uniform(0, 100) < self.worker_fee:
            pubkey_hash = self.my_pubkey_hash
        else:
            try:
                pubkey_hash = bitcoin_data.address_to_pubkey_hash(user, self.node.net.PARENT)
            except: # XXX blah
                # Unparseable payout address: fall back to the operator's.
                pubkey_hash = self.my_pubkey_hash

        return user, pubkey_hash, desired_share_target, desired_pseudoshare_target
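
To get a feel for how much that per-share coin flip can wander, here's a quick simulation sketch (the function name and share counts are made up for illustration):
Code:
import random

def realized_fee(n_shares, worker_fee=1.0, trials=1000):
    # Spread of the realized fee when each share independently
    # goes to the operator with probability worker_fee percent.
    rates = []
    for _ in range(trials):
        operator = sum(1 for _ in range(n_shares)
                       if random.uniform(0, 100) < worker_fee)
        rates.append(100.0 * operator / n_shares)
    return min(rates), sum(rates) / trials, max(rates)

# With only a few hundred shares in the window, realized fees of
# 0.2-0.5% (or 2%+) are completely normal:
print(realized_fee(300))

Over thousands of shares the spread tightens up around the configured 1%.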
member
Activity: 109
Merit: 10
I'm running a p2pool for an altcoin, and I'm seeing odd payouts that don't reflect the 1% fee configured. More like 0.2-0.5%. Here is a list of transactions going to the wallet of the daemon the pool is connected to: http://explorer.solcoin.net/address/8SLHAh8LukapttooKzdWocZ41tG5hne1UQ

Each block has a value of 1772 coins currently, so each transaction per found block should be around 17 coins, no? What's causing this?
legendary
Activity: 1540
Merit: 1001

From what folks have told me, the S3s work just fine when entered as S1s.

M

mdude, how long is it before the S3 reboots after discovering an "x"?  I ask because from time to time I've logged on to some of my S3's and seen an "x" displayed - but Antmon hasn't rebooted it. Never had this with the S1's - it's a great app though buddy ;)

Since I don't have an S3, I don't know for sure that the reboot functionality works.

Assuming it does, I'd say it likely already rebooted it, but that didn't clear the X.  I have the same thing with my S2, where I have to power cycle it to get the Xs to go away.  That's why I put the governor on reboots: rather than rebooting every time it sees one, it reboots at most once every X minutes.  Also make sure you have the checkbox in the Alerts->Alert Types tab enabled.
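
For anyone curious, the governor is basically just a cooldown timer per miner. A rough sketch of the idea (this is not Antmon's actual code, just the general pattern; the names and the 15 minute window are made up):
Code:
import time

REBOOT_COOLDOWN = 15 * 60  # example window: at most one reboot per 15 minutes
last_reboot = {}           # miner -> timestamp of the last reboot we issued

def maybe_reboot(miner, reboot_fn):
    # Only reboot if the cooldown window has elapsed; reboot_fn is
    # whatever actually triggers the reboot (web API, ssh, ...).
    now = time.time()
    if now - last_reboot.get(miner, 0) >= REBOOT_COOLDOWN:
        last_reboot[miner] = now
        reboot_fn(miner)
        return True
    return False  # saw an "x" again, but we're still inside the cooldown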

M
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....

From what folks have told me, the S3s work just fine when entered as S1s.

M

mdude, how long is it before the S3 reboots after discovering an "x"?  I ask because from time to time I've logged on to some of my S3's and seen an "x" displayed - but Antmon hasn't rebooted it. Never had this with the S1's - it's a great app though buddy ;)
legendary
Activity: 1540
Merit: 1001
Just to chime in, I've given up on trying to set share difficulty or pseudoshare difficulty on my S1's.  I have also seen no difference in practical use between setting it explicitly and leaving it to auto-adjust.  I am mining on my own node now, however, and I've also split my miners up into groups to keep hashrates similar across the board (about 2.5TH each group, mainly to match the Terraminers).

I am using --queue 1 on the S1's and I've found that I get a much better stale rate with it than at 0.

I expect my batch 5 S3's in before the end of the week, so we'll see what happens with those.

BTW - is anyone using M's Ant Monitor with the S3's?  I'm using it with all my S1's and love it (and plan to donate), but there's only a radio button in there for S1/S2.  Safe to use the S1 button for an S3?  I'm really only using it to monitor the Ant's, I've got custom ssh deploy scripts done for pushing out changes to the S1's in batch, so I don't use any of the other features of the app.

From what folks have told me, the S3s work just fine when entered as S1s.

M
member
Activity: 112
Merit: 10
Just to chime in, I've given up on trying to set share difficulty or pseudoshare difficulty on my S1's.  I have also seen no difference in practical use between setting it explicitly and leaving it to auto-adjust.  I am mining on my own node now, however, and I've also split my miners up into groups to keep hashrates similar across the board (about 2.5TH each group, mainly to match the Terraminers).

I am using --queue 1 on the S1's and I've found that I get a much better stale rate with it than at 0.

I expect my batch 5 S3's in before the end of the week, so we'll see what happens with those.

BTW - is anyone using M's Ant Monitor with the S3's?  I'm using it with all my S1's and love it (and plan to donate), but there's only a radio button in there for S1/S2.  Safe to use the S1 button for an S3?  I'm really only using it to monitor the Ant's, I've got custom ssh deploy scripts done for pushing out changes to the S1's in batch, so I don't use any of the other features of the app.
full member
Activity: 175
Merit: 100
Good to know. I have tried those settings, as well as the Norgz Pool optimized calculated value, and have not noticed any difference. I am on my own node as well.
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
jonnybravo0311, are you on a pool with others? Is that why you are setting your difficulty?
No, I mine on my own node.  Of my miners, I have some with difficulty set to /256, some at /512 and some not set at all.  I have seen absolutely zero impact from setting them or not.
full member
Activity: 175
Merit: 100

That's using stock queue settings (--queue 4096).  Like I said... most stable one I've got - set it and forget it.  Best share is 0 because of what I reported earlier - manually setting share difficulty.

jonnybravo0311, are you on a pool with others? Is that why you are setting your difficulty?
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool

That's using stock queue settings (--queue 4096).  Like I said... most stable one I've got - set it and forget it.  Best share is 0 because of what I reported earlier - manually setting share difficulty.
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
Screenshot using the --queue 0 setting - 0.0033% HW error, 2.14% reject, 0 stale. [screenshot not reproduced]
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
I've ordered from batch 1, batch 4 and batch 5.  Even in batch 5 the issues remain, although I did get "lucky" as one of my batch 5 units actually hashes at 504GH/s stable.

I also have one of those lucky ones at 505GH/s from B1 - it's the most stable one out of the lot!!
I'm noticing that as well on my "lucky" one.  Very low HW errors, low rejects, stable hashing at 504, temps at 40°C... it is the closest to "set and forget" I've found with the S3s.  All of my others need constant babysitting.
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
I've ordered from batch 1, batch 4 and batch 5.  Even in batch 5 the issues remain, although I did get "lucky" as one of my batch 5 units actually hashes at 504GH/s stable.

I also have one of those lucky ones at 505GH/s from B1 - it's the most stable one out of the lot!!
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
Yeah, I read that advice from ck, and noticed a few others were using --queue 1 as well. My HW errors were slightly lower with that setting, but my reject/stale rate was slightly higher; --queue 0 gave the opposite (slightly higher HW errors but slightly lower reject/stale), so I decided to go for --queue 0 and the lower stale/reject rate... still not sure though, TBH...

And yes, finicky is a good description. Now & then I get the odd "x" popping up and Antmonitor doesn't always seem to reboot them, so I have to keep an eye on them myself... ???
Yup, the mysterious "x" that randomly shows up, drops your hash rate by 15GH/s or so, and then disappears.  Changing the miner configuration from the UI, only to find the mining process never actually restarts.  Clocking one unit at 218.75, another at 212.5, another at 225, trying to find the most stable speed.  Some units with unbelievably low HW errors, others with high rates.  Some with high rejects, some with low.

I've ordered from batch 1, batch 4 and batch 5.  Even in batch 5 the issues remain, although I did get "lucky" as one of my batch 5 units actually hashes at 504GH/s stable.
Quote: EDIT: I also get around 10GH/s more with --queue 0
Yet another example of how much these units vary.  I've never noticed any hash rate difference on my units with queue set to 0, 1, or the stock 4096.
member
Activity: 98
Merit: 10
P2pool makes a lot of sense, keep up the good work.
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
Yeah, I read that advice from ck, and noticed a few others were using --queue 1 as well. My HW errors were slightly lower with that setting, but my reject/stale rate was slightly higher; --queue 0 gave the opposite (slightly higher HW errors but slightly lower reject/stale), so I decided to go for --queue 0 and the lower stale/reject rate... still not sure though, TBH...

And yes, finicky is a good description. Now & then I get the odd "x" popping up and Antmonitor doesn't always seem to reboot them, so I have to keep an eye on them myself... ???

EDIT: I also get around 10GH/s more with --queue 0. It just seems slightly more stable...
full member
Activity: 175
Merit: 100
So far I'm finding my S3's work best using "--queue 0 --failover-only" in the /etc/init.d/cgminer settings - I'd be interested in hearing what you guys are using, for comparison.

Ta :)

Edit: Decided to implement the ignore button for the first time ever - what a difference!! Bye bye troll :D
ckolivas recommended using at least --queue 1 rather than --queue 0.  Also, I just use the default failover option (since my backup pools are p2pool nodes, it really doesn't matter on which one I'm mining).

I've tried virtually every incarnation of settings on my S3s: queue, expiry, scan-time, manual share diff, manual pseudo-diff, etc.  I have detected absolutely zero difference in performance.  As I mentioned many pages ago in this thread, the only thing I've ever seen is that the value of "Best Share" is always "0" when I set my miner to use /#+#.

YMMV though.  One thing I have noticed is that the S3 is finicky.  Right when you think it's good to go and stable, something else pops up.  Just because I noticed no difference, somebody else might.

I too use a queue of 1, after testing against the stock setting, following ck's guidance: https://bitcointalksearch.org/topic/m.8036549
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
So far I'm finding my S3's work best using "--queue 0 --failover-only" in the /etc/init.d/cgminer settings - I'd be interested in hearing what you guys are using, for comparison.

Ta :)

Edit: Decided to implement the ignore button for the first time ever - what a difference!! Bye bye troll :D
ckolivas recommended using at least --queue 1 rather than --queue 0.  Also, I just use the default failover option (since my backup pools are p2pool nodes, it really doesn't matter on which one I'm mining).

I've tried virtually every incarnation of settings on my S3s: queue, expiry, scan-time, manual share diff, manual pseudo-diff, etc.  I have detected absolutely zero difference in performance.  As I mentioned many pages ago in this thread, the only thing I've ever seen is that the value of "Best Share" is always "0" when I set my miner to use /#+#.

YMMV though.  One thing I have noticed is that the S3 is finicky.  Right when you think it's good to go and stable, something else pops up.  Just because I noticed no difference, somebody else might.
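
For anyone who hasn't used the suffixes: the /#+# goes on the end of the worker name. As far as I know, /N requests a share difficulty of N and +M a pseudoshare difficulty of M. Something like this in the cgminer line (node host, address, and difficulties are all placeholders):
Code:
cgminer -o stratum+tcp://yournode:9332 -u 1YourPayoutAddress/512+1024 -p x --queue 1 --failover-only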
legendary
Activity: 1258
Merit: 1027
If I have 2 miners, and put one on one node, and the second on a separate one, both with the same bitcoin address, does anything weird happen?

Each node should report the proper hashrate of that particular miner, but the shares shown should be higher than expected, right?

i.e. miner 1 gets a share an hour, miner 2 gets a share an hour, and each node shows that the address is earning 2 shares an hour?

The expected payout will be the sum of both miners' contributions; however, the shares displayed will be unique to each node, as each node only displays its own found shares (this is true for all current front ends I am aware of).
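
To put numbers on it (all counts hypothetical), a quick sketch of what each front end shows versus what the address actually earns:
Code:
# Hypothetical counts: shares found through each node for one payout address.
shares_node_1 = 12
shares_node_2 = 12
total_window_shares = 2400  # all shares currently in the PPLNS window

# Each front end only reports the shares it found itself...
print("node 1 shows:", shares_node_1)  # 12
print("node 2 shows:", shares_node_2)  # 12

# ...but the share chain credits the address with the sum, so the
# expected slice of each block reward is:
payout_fraction = (shares_node_1 + shares_node_2) / float(total_window_shares)
print("expected payout fraction: %.2f%%" % (100 * payout_fraction))  # 1.00%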