Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool

hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
@ Ki773r: PatMan is correct. I did some testing of these settings some time ago & posted the results:

I've ordered from batch 1, batch 4 and batch 5.  Even in batch 5 the issues remain, although I did get "lucky" as one of my batch 5 units actually hashes at 504GH/s stable.

I also have one of those lucky ones at 505GH/s from B1 - it's the most stable one out of the lot!!

Following up from this, I've been experimenting a little more with my settings & here's a screen of the results:



[screenshot: hash rate & DOA graph for the four S3s]

This is 4 x S3's running at various clock speeds. Looking at the graph, I was running --queue 1 up until ~3am (yup, I'm a night time fiddler Cheesy) before changing the setting to --queue 0 & letting them run for the same amount of time. It can clearly be seen that after changing to --queue 0, the DOA rate dropped & smoothed out - this was also confirmed by my node's info page. Average hash rate was slightly higher as a result, so I'll be keeping all my S3's running with --queue 0 from now on. I'm not saying this will work for everyone, but it's definitely good for my setup & worth a try if you're experiencing a higher than expected DOA rate.
The dip in hash rate at the end of the graph was due to a reboot after updating Xubuntu.

Smoke'em if ya got'em  Cool

Edit: It's also worth mentioning that my reject rate was at ~4% with --queue 1 - and ~2% with the 0 setting. This is running on a local node.
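
For anyone wanting to try the same thing, a typical cgminer invocation with these flags might look like the sketch below - the node URL and payout address are placeholders, not my actual setup:

Code:
cgminer -o stratum+tcp://127.0.0.1:9332 -u 1YourPayoutAddressHere -p x --queue 0 --failover-only

On an S3 you'd normally put the extra flags in the miner's startup script instead - see the vim walkthrough further down.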

You are quite knowledgeable of p2pool, but to suggest that someone "doesn't know anything about what he is doing" when they clearly do, belittles you & your "legendary" status. Seems you owe him an apology  Wink
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....
If you have polled upstream and the recommended value for queue is 1, why use 0?
It's a fix for a problem years ago. Everyone who still uses/recommends it doesn't know anything about what he's doing Wink

Attitude?

Use whatever works best for you  Wink

Be nice  Wink

legendary
Activity: 1792
Merit: 1008
/dev/null
If you have polled upstream and the recommended value for queue is 1, why use 0?
It's a fix for a problem years ago. Everyone who still uses/recommends it doesn't know anything about what he's doing Wink
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....
Because

I find I get far fewer DOA/rejects using the zero setting, but it's not a hard & fast rule.....

Use whatever works best for you  Wink
sr. member
Activity: 257
Merit: 250
If you have polled upstream and the recommended value for queue is 1, why use 0?
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....
Thanks & you're welcome  Smiley

These settings will stay after a reboot too, so no need to keep re-applying them. I find I get far fewer DOA/rejects using the zero setting, but it's not a hard & fast rule - it depends on your setup/network. Many use "1" as their queue setting (ck's recommended setting) with good results too - it's a case of suck it & see Wink

Peace  Smiley
member
Activity: 61
Merit: 10
It shouldn't happen very often, rarely in fact. I find my S3's run best using --queue 0 in the cgminer settings, use the latest version also  Wink
How do I input this --queue 0 setting into cgminer?
I have SSH'd into one of the miners and entered vi /etc/config/cgminer
There are about a dozen settings listed there, all starting with the word "option" (no quotes)
Is this where the --queue 0 setting goes?
Cheers.

vim /etc/init.d/cgminer

Go to the end of line 75 (older firmware) or line 70 (newer firmware)

Press "i" (enter insert mode)

Change the settings to --queue 0 --failover-only

Press "esc" (exit insert mode)

Type ":wq" (without quotes)

Press "enter"

Go to the web GUI, miner configuration, click "save & apply"

Done  Smiley

To check, go to the processes tab - you should see the cgminer process running with the new queue value.
Perfect instructions.
Thank you very much.
Very helpful.
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....
It shouldn't happen very often, rarely in fact. I find my S3's run best using --queue 0 in the cgminer settings, use the latest version also  Wink
How do I input this --queue 0 setting into cgminer?
I have SSH'd into one of the miners and entered vi /etc/config/cgminer
There are about a dozen settings listed there, all starting with the word "option" (no quotes)
Is this where the --queue 0 setting goes?
Cheers.

vim /etc/init.d/cgminer

Go to the end of line 75 (older firmware) or line 70 (newer firmware)

Press "i" (enter insert mode)

Change the settings to --queue 0 --failover-only

Press "esc" (exit insert mode)

Type ":wq" (without quotes)

Press "enter"

Go to the web GUI, miner configuration, click "save & apply"

Done  Smiley

To check, go to the processes tab - you should see the cgminer process running with the new queue value.
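
If you'd rather verify over SSH than the web GUI, something like the line below should work (the S3 runs BusyBox, so ps options are limited - adjust if your firmware differs):

Code:
ps w | grep cgminer

The running command line should now include --queue 0 --failover-only.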
member
Activity: 61
Merit: 10
It shouldn't happen very often, rarely in fact. I find my S3's run best using --queue 0 in the cgminer settings, use the latest version also  Wink
How do I input this --queue 0 setting into cgminer?
I have SSH'd into one of the miners and entered vi /etc/config/cgminer
There are about a dozen settings listed there, all starting with the word "option" (no quotes)
Is this where the --queue 0 setting goes?
Cheers.
sr. member
Activity: 252
Merit: 250
Coin Developer - CrunchPool.com operator
No, the hash in that share was completely off - no leading 0 bits at all. I'd guess it's a hardware glitch. Not much of a problem if it happens only rarely, but it should never happen normally.
legendary
Activity: 1540
Merit: 1001
That hash is worthless: it doesn't meet the target - that's an error you're seeing. A miner is submitting crap. This usually happens if they mine using a different hash algorithm, for example.
All my miners are S3 Antminers, I assume they all use the same algorithm. So this is just a one-off glitch or something?
Cheers.

What probably happened was p2pool increased the min pseudo-share size and the S3 hadn't switched yet, so it submitted a share smaller than was allowed.  I saw this regularly when analyzing how well S2s don't perform with p2pool.

It's a pseudo share, so nothing lost.

M
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....
It shouldn't happen very often, rarely in fact. I find my S3's run best using --queue 0 in the cgminer settings, use the latest version also  Wink
member
Activity: 61
Merit: 10
That hash is worthless: it doesn't meet the target - that's an error you're seeing. A miner is submitting crap. This usually happens if they mine using a different hash algorithm, for example.
All my miners are S3 Antminers, I assume they all use the same algorithm. So this is just a one-off glitch or something?
Cheers.
member
Activity: 61
Merit: 10
OK, I automatically assumed that bigger was better - like finding a block - and that a larger hash value would solve a block and/or share, if I understood correctly. I need to read more I guess.
Thank you.
legendary
Activity: 1540
Merit: 1001
I had this today:
Worker 1NBJixrZoXbcUaSkbQE4FxTsADTUAx8Ct6 submitted share with hash > target:
2014-10-22 12:50:57.620751     Hash:     3ff33cca7a322c501a7e754950bf8446009b69e418f94695746291
2014-10-22 12:50:57.620805     Target:   3ff2769861c1a00000000000000000000000000000000000000000

But I didn't get any credit for it - no change in shares - shouldn't I have at least received a share for it?

Cheers.

The short answer is no. Smiley

The medium answer has to do with terminology.  Technically a share's hash needs to be smaller than the current target to count.  But the way we talk about difficulty, and hence shares, everything has to be bigger - a higher difficulty just means a lower target.  p2pool's log uses the technical hash/target terminology ... which is very confusing to say the least.

I don't think I can explain the long answer properly, as I don't fully understand it yet.  It has to do with how the value is used.
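
To make the terminology concrete: p2pool itself is written in Python, and the check boils down to an integer comparison. A rough sketch, using the truncated hex values from the log above:

Code:
# A share counts only if its hash, read as a big number, is <= the target.
# Higher difficulty means a LOWER target, which is why "bigger difficulty"
# talk and "smaller hash" checks describe the same thing.
hash_hex   = "3ff33cca7a322c501a7e754950bf8446009b69e418f94695746291"
target_hex = "3ff2769861c1a00000000000000000000000000000000000000000"
if int(hash_hex, 16) <= int(target_hex, 16):
    print("share accepted")
else:
    print("hash > target: rejected")  # what the log above reported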

M
sr. member
Activity: 252
Merit: 250
Coin Developer - CrunchPool.com operator
That hash is worthless: it doesn't meet the target - that's an error you're seeing. A miner is submitting crap. This usually happens if they mine using a different hash algorithm, for example.
member
Activity: 61
Merit: 10
I had this today:
Worker 1NBJixrZoXbcUaSkbQE4FxTsADTUAx8Ct6 submitted share with hash > target:
2014-10-22 12:50:57.620751     Hash:     3ff33cca7a322c501a7e754950bf8446009b69e418f94695746291
2014-10-22 12:50:57.620805     Target:   3ff2769861c1a00000000000000000000000000000000000000000

But I didn't get any credit for it - no change in shares - shouldn't I have at least received a share for it?

Cheers.
legendary
Activity: 1540
Merit: 1001
6 blocks today and counting.  Loving this new pool hashrate.

Not so thrilled with the 17 million share difficulty.  If this keeps up I'll be squeezed out again, or I'll have to get more hashpower. Sad

M
newbie
Activity: 9
Merit: 0
Hi community, I have an instance of p2pool running on Windows.  I used the latest git from forrestv.  The pool has been running all night, over 12 hours, but the shares I submit don't contain any additional transaction fees.  How does p2pool include transaction fees in shares, and what am I missing to have them included in mine?  I have also followed the p2pool tuning post and have the min/max tx fees in bitcoin.conf, as well as server=1.  Any other tips before I move the node to a Linux install to see if that corrects the issue?  It should also be noted that I'm running the bitcoin-qt GUI and mining against that - perhaps I need to run the daemon?  But there's no information on the web saying the GUI client is limited vs bitcoind.exe.

P2Pool gets its transactions from the bitcoin node it is running on - to see the current tx pool on your node, run "bitcoind getrawmempool".

Setting the min/max tx fees in bitcoin.conf determines which transactions are included in your transaction pool.

When your node finds a share that also meets the minimum bitcoin difficulty, the transactions in your bitcoin node's tx pool are included in the block and broadcast to both the p2pool and bitcoin networks.



Bah, I fixed it.  I had a modified d3.v2.min and/or share.html - resyncing those two files and a Ctrl+F5 on the site brought up all the info I was missing. Yay!
legendary
Activity: 1258
Merit: 1027
Hi community, I have an instance of p2pool running on Windows.  I used the latest git from forrestv.  The pool has been running all night, over 12 hours, but the shares I submit don't contain any additional transaction fees.  How does p2pool include transaction fees in shares, and what am I missing to have them included in mine?  I have also followed the p2pool tuning post and have the min/max tx fees in bitcoin.conf, as well as server=1.  Any other tips before I move the node to a Linux install to see if that corrects the issue?  It should also be noted that I'm running the bitcoin-qt GUI and mining against that - perhaps I need to run the daemon?  But there's no information on the web saying the GUI client is limited vs bitcoind.exe.

P2Pool gets its transactions from the bitcoin node it is running on - to see the current tx pool on your node, run "bitcoind getrawmempool".

Setting the min/max tx fees in bitcoin.conf determines which transactions are included in your transaction pool.

When your node finds a share that also meets the minimum bitcoin difficulty, the transactions in your bitcoin node's tx pool are included in the block and broadcast to both the p2pool and bitcoin networks.
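
As a rough illustration only - option names and semantics vary by bitcoind version, and the values below are placeholders, not recommendations - a bitcoin.conf for a node feeding p2pool might contain:

Code:
server=1
rpcuser=yourrpcuser
rpcpassword=yourrpcpassword
# fee-related knobs - check your bitcoind version's docs
mintxfee=0.00001

And to eyeball what's currently in your node's memory pool:

Code:
bitcoind getrawmempool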

