
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 337

sr. member
Activity: 434
Merit: 250
P2Pool Global Active Miners Stats Now Live!

http://minefast.coincadence.com/p2pool-stats.php

I hope you like it!


Wow. This is beautiful. Any chance you'll release this code? I'd really like to run this on some of our altcoin nodes (with your donation footer, of course!)

Or on a NOMP page: http://eu.centralcavern.uk:8080/stats. Simple and easy to see.

Nice one, windpath!
hero member
Activity: 532
Merit: 500
P2Pool Global Active Miners Stats Now Live!

http://minefast.coincadence.com/p2pool-stats.php

I hope you like it!


Wow. This is beautiful. Any chance you'll release this code? I'd really like to run this on some of our altcoin nodes (with your donation footer, of course!)
legendary
Activity: 1540
Merit: 1001
With walletaddress/10000000, one share is worth ~0.009 BTC, or 500 GH/s/day in the graphs; a bigger hook can give you a bigger fish. And no, it's not high.

OK, I am not sure I understand... is it more beneficial to use this 10000000 setting or not?
Thanks for any info on this... and will it be of any help to the S2s in use?

I'm not sure what help, if any, it would be for the S2.

Setting the difficulty manually like this tells the p2pool node you are hashing on not to dynamically assign you a share difficulty, but to use the value you provide instead.  So, if the current share difficulty for getting a share onto the chain is 2000000, and you set your difficulty to 10000000, any shares you happen to find greater than 2000000 but less than 10000000 are not added to the chain.  It isn't until your miner finds a share at or above your difficulty that it is added to the chain.  Because your share difficulty is set higher than the pool's default, when you find that 10000000 share, it is weighted more heavily (meaning it's worth more BTC).

I honestly don't know what kind of benefit this would offer you other than creating a higher variance.  I mean, unless you've got enough hashing power such that you're finding shares faster than every 30 seconds, what's the point of slowing down the rate at which you find a share?

IMHO it doesn't do a thing.  It doesn't even affect variance, as you still need a share worth 2m to get on the share chain.

You can't find shares faster than every 30 seconds on average.  That's why the difficulty adjusts: to make sure the average time per share across the pool is 30 seconds.  That's also why, if we change the period to 2 minutes, the share difficulty has to increase; otherwise folks would still find shares approximately every 30 seconds.

I never saw any visible result from using /xxxxx, just +xxxx.  Plus, I know p2pool has some odd, arbitrary upper limits on what values you can put there.

M
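To put numbers on the weighting argument above: a minimal sketch (the 500 GH/s and 2000000 figures are taken from this discussion, and it assumes shares pay strictly in proportion to their difficulty), showing that a manual /10000000 setting leaves the expected payout rate unchanged and only stretches out the time between shares:

Code:
# Sketch: expected payout per unit time is unchanged by a manual share
# difficulty, assuming shares are weighted in proportion to difficulty.
hashrate = 500e9             # 500 GH/s miner (figure from this thread)
chain_diff = 2_000_000       # approximate current share-chain difficulty
manual_diff = 10_000_000     # walletaddress/10000000

for diff in (chain_diff, manual_diff):
    shares_per_sec = hashrate / (diff * 2**32)  # expected share-finding rate
    weight = diff                               # payout weight per share
    print(diff, shares_per_sec * weight)        # same product for both rows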
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
With walletaddress/10000000, one share is worth ~0.009 BTC, or 500 GH/s/day in the graphs; a bigger hook can give you a bigger fish. And no, it's not high.

OK, I am not sure I understand... is it more beneficial to use this 10000000 setting or not?
Thanks for any info on this... and will it be of any help to the S2s in use?

I'm not sure what help, if any, it would be for the S2.

Setting the difficulty manually like this tells the p2pool node you are hashing on not to dynamically assign you a share difficulty, but to use the value you provide instead.  So, if the current share difficulty for getting a share onto the chain is 2000000, and you set your difficulty to 10000000, any shares you happen to find greater than 2000000 but less than 10000000 are not added to the chain.  It isn't until your miner finds a share at or above your difficulty that it is added to the chain.  Because your share difficulty is set higher than the pool's default, when you find that 10000000 share, it is weighted more heavily (meaning it's worth more BTC).

I honestly don't know what kind of benefit this would offer you other than creating a higher variance.  I mean, unless you've got enough hashing power such that you're finding shares faster than every 30 seconds, what's the point of slowing down the rate at which you find a share?
member
Activity: 85
Merit: 10
With walletaddress/10000000, one share is worth ~0.009 BTC, or 500 GH/s/day in the graphs; a bigger hook can give you a bigger fish. And no, it's not high.

OK, I am not sure I understand... is it more beneficial to use this 10000000 setting or not?
Thanks for any info on this... and will it be of any help to the S2s in use?
full member
Activity: 932
Merit: 100
arcs-chain.com
With walletaddress/10000000, one share is worth ~0.009 BTC, or 500 GH/s/day in the graphs; a bigger hook can give you a bigger fish.

And no, it's not high. It's nice to get 2 or more of those with 600 GH/s in a day, and it would be very difficult to get 10-15 shares at 2000000 difficulty a day without orphans...

legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool

Don't mine with minimal settings on 1 TH hardware; that's for USB Block Erupters. Use a minimum of walletaddress/10 000000 and see what happens in a week or two...


I'm not sure how the minimum difficulty works, but is there a typo here?

Is it: walletaddress/10000000

or: walletaddress/10 000000?

Doesn't that still seem a little high?
There shouldn't be a space: walletaddress/10000000

All that does is say, "I only want you to count my shares if they're 10000000 or more."  This is instead of the typical ~2000000 that is the current minimum to get a share on the chain.
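As an aside, the /xxxxx and +xxxx worker-name suffixes discussed in this thread can be pictured with a simplified parser. This is a sketch only, not p2pool's actual code; the function name is made up:

Code:
# Sketch of how a worker name like "1YourAddr+512/10000000" might be
# interpreted (simplified illustration, not p2pool's real parser).
def parse_worker(username):
    address, share_diff, pseudo_diff = username, None, None
    if '/' in address:
        address, d = address.split('/', 1)
        share_diff = float(d)    # minimum difficulty for shares you submit
    if '+' in address:
        address, d = address.split('+', 1)
        pseudo_diff = float(d)   # fixed pseudo-share (stats) difficulty
    return address, share_diff, pseudo_diff

print(parse_worker('1YourAddr+512/10000000'))
# -> ('1YourAddr', 10000000.0, 512.0)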
hero member
Activity: 994
Merit: 1000

Don't mine with minimal settings on 1 TH hardware; that's for USB Block Erupters. Use a minimum of walletaddress/10 000000 and see what happens in a week or two...


I'm not sure how the minimum difficulty works, but is there a typo here?

Is it: walletaddress/10000000

or: walletaddress/10 000000?

Doesn't that still seem a little high?
sr. member
Activity: 297
Merit: 250
legendary
Activity: 1258
Merit: 1027
P2Pool Global Active Miners Stats Now Live!



http://minefast.coincadence.com/p2pool-stats.php

I hope you like it!

legendary
Activity: 1540
Merit: 1001
Increase the share period to 2 minutes, and all problems are solved.

 SHARE_PERIOD=30, # seconds

 SHARE_PERIOD=120, # seconds

Why not? What is wrong with that? Then bitcoind's getblocktemplate latency wouldn't mean as much as it does now either; we could accept more transactions -> more income...

Right now it's 30 seconds.  If you make it 120 seconds, the share difficulty increases by a factor of 4.  So instead of being 2.1m, it becomes 8.4m.  Say bye-bye, little miners (including S1s).

M
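That factor of 4 falls straight out of holding the pool's share rate constant: expected share time = difficulty * 2^32 / pool hashrate. A minimal sketch (the implied ~300 TH/s pool rate is back-calculated from the 2.1m figure, not a number quoted in this thread):

Code:
# Share difficulty scales linearly with the share period, because the
# pool must still average one share per period.
pool_hashrate = 2.1e6 * 2**32 / 30   # ~3.0e14 H/s implied by 2.1m at 30 s

for period in (30, 120):
    diff = pool_hashrate * period / 2**32
    print(period, round(diff / 1e6, 1), 'm')  # 30 -> 2.1m, 120 -> 8.4m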
full member
Activity: 932
Merit: 100
arcs-chain.com
You get an orphan share if it's found just before or just after a block is found. Everyone gets those, even with CPU mining!
full member
Activity: 154
Merit: 100

See pull request #187 (https://github.com/forrestv/p2pool/pull/187), created March 3, 2014, against the p2pool main code.

I continue to play a little bit with points 1 and 4 from your initial list and have created 2 more experimental patches:

The first patch requests a work restart only once for each block.
https://github.com/jaketri/p2pool/commit/0b0c3473c0a67f24f4bc9b95de002770c69828ef

The second patch accepts any valid work, even stale or DOA, and lets share verification decide whether the share is valid.
https://github.com/jaketri/p2pool/commit/875c3483b82d50ce77f153f46ce7c20e193f17c0

These 2 patches help a lot with my Bitfury miner (without them I see about 25-30% DOA), but unfortunately I do not have an S2 to test with.

Here is my experimental p2pool node, running on port 19332 with these patches applied, in case anyone wants to do a quick test with an S2 and let me know if they see any improvement.

p2pool.25u.com:19332

Keep in mind that a p2pool node with the above patches will not reject any valid work, but the shares may get rejected later when they are added to the share chain.

Note: at the same address, but on the standard p2pool port 9332, there is a p2pool server without these patches, if you want to compare results with and without them.


I am hooked in there with you... let's see... this is a single S2.

Thank you, bryonp, for testing the patches on my experimental p2pool node.

In about 12 hours the S2 miner was able to find only 3 shares, and 2 of them were orphans, so my patches did not help the S2.
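For illustration, the logic of the first patch (request a work restart only once per block) might look roughly like this. This is not the actual commit; the 'previous_block' field name is an assumption for the sketch:

Code:
# Sketch: flag "clean jobs" (drop all outstanding work) only when the
# previous block changes, not on every ~30 s share-chain update.
last_block = None

def should_restart(new_work):
    global last_block
    restart = new_work['previous_block'] != last_block
    last_block = new_work['previous_block']
    return restart  # True only once per new block

print(should_restart({'previous_block': 'abc'}))  # True: new block seen
print(should_restart({'previous_block': 'abc'}))  # False: keep old jobs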
full member
Activity: 932
Merit: 100
arcs-chain.com
Increase the share period to 2 minutes, and all problems are solved.

 SHARE_PERIOD=30, # seconds

 SHARE_PERIOD=120, # seconds

Why not? What is wrong with that? Then bitcoind's getblocktemplate latency wouldn't mean as much as it does now either; we could accept more transactions -> more income...
full member
Activity: 154
Merit: 100
At the moment my conclusion is that a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  It then rejects all the old work coming in to p2pool, leading to a large DOA rate.

My recommendations for p2pool to become more S2 friendly:

1 - Sending new work every 30 seconds is fine, but it's apparently not necessary to force the miner to restart ("clean jobs" = true).  It also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".

If I understood Python, I'd take a stab at 2 through 4.  1 might be a big deal.

Here are 2 patches for p2pool that address points 2 and 3 from your list.

First, a simple patch that allows a pseudo-difficulty higher than 1000.
https://github.com/jaketri/p2pool/commit/05b630f2c8f93b78093043b28c0c543fafa0a856

And another patch that adds a "--min-difficulty" parameter to p2pool. For my setup I use 256 as the starting pseudo-difficulty.

https://github.com/jaketri/p2pool/commit/5f02f893490f2b9bfa48926184c4b1329c4d1554

Thanks, that's good.  Now why isn't it in the main code?

M
See pull request #187 (https://github.com/forrestv/p2pool/pull/187), created March 3, 2014, against the p2pool main code.

I continue to play a little bit with points 1 and 4 from your initial list and have created 2 more experimental patches:

I don't think #1 is fixable on the p2pool side.  Since I wrote the above, I've learned that p2pool must force a restart for its share chain, as each share found every ~30 seconds is equivalent to a block in the BTC chain.  While I can forcibly override the job ID to get work past p2pool's checks, I'm pretty sure any shares found against the old job would be rejected by the pool because they're using the wrong data.  (I assume that means they would show up as stale.)

M
This is also my understanding. When the p2pool server forces a work restart, there is dead time until the miner is able to find a share based on the new data.
I was thinking that the miner starts using the new work data with or without a work restart, but with the work restart it simply drops the old work. So if the p2pool server does not issue a work restart, it is possible for the miner to find valid shares based on old work until the new work data is picked up.

If my theory is wrong, then with a p2pool server that does not issue a work restart every ~30 seconds, miners should get more orphan and/or dead shares. To test this I started this experimental p2pool server, and so far, out of 13 found shares, I got only 2 orphans and 0 dead. This does not prove anything yet, but so far either my miner was very, very lucky or the results are going somewhere.
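A sketch of what "accept older work and let share verification decide" could look like on the node side. This is illustrative only; GRACE_SECONDS and the helper names are invented for this example:

Code:
# Sketch: keep recent jobs so a submission against a prior job ID is
# checked against the share chain instead of being rejected outright.
import time

GRACE_SECONDS = 60   # hypothetical tolerance window for old job IDs
recent_jobs = {}     # job_id -> (work, issued_at)

def remember(job_id, work):
    recent_jobs[job_id] = (work, time.time())

def lookup(job_id):
    entry = recent_jobs.get(job_id)
    if entry and time.time() - entry[1] < GRACE_SECONDS:
        return entry[0]  # old but tolerated; share verification decides
    return None          # too old: reject as stale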
legendary
Activity: 1540
Merit: 1001
At the moment my conclusion is that a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  It then rejects all the old work coming in to p2pool, leading to a large DOA rate.

My recommendations for p2pool to become more S2 friendly:

1 - Sending new work every 30 seconds is fine, but it's apparently not necessary to force the miner to restart ("clean jobs" = true).  It also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".

If I understood Python, I'd take a stab at 2 through 4.  1 might be a big deal.

Here are 2 patches for p2pool that address points 2 and 3 from your list.

First, a simple patch that allows a pseudo-difficulty higher than 1000.
https://github.com/jaketri/p2pool/commit/05b630f2c8f93b78093043b28c0c543fafa0a856

And another patch that adds a "--min-difficulty" parameter to p2pool. For my setup I use 256 as the starting pseudo-difficulty.

https://github.com/jaketri/p2pool/commit/5f02f893490f2b9bfa48926184c4b1329c4d1554

Thanks, that's good.  Now why isn't it in the main code?

M
See pull request #187 (https://github.com/forrestv/p2pool/pull/187), created March 3, 2014, against the p2pool main code.

I continue to play a little bit with points 1 and 4 from your initial list and have created 2 more experimental patches:

I don't think #1 is fixable on the p2pool side.  Since I wrote the above, I've learned that p2pool must force a restart for its share chain, as each share found every ~30 seconds is equivalent to a block in the BTC chain.  While I can forcibly override the job ID to get work past p2pool's checks, I'm pretty sure any shares found against the old job would be rejected by the pool because they're using the wrong data.  (I assume that means they would show up as stale.)

M
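For readers wondering what points 2 and 3 amount to in code, the shape of the change is roughly a clamp on the pseudo-share difficulty. A sketch with assumed names and values; see jaketri's commits above for the real patches:

Code:
# Sketch: clamp pseudo-share difficulty between a configurable floor
# (point 2, the --min-difficulty idea) and a raised ceiling (point 3).
# MIN_DIFF and MAX_DIFF are illustrative names, not p2pool's.
MIN_DIFF = 256        # e.g. from a --min-difficulty 256 flag
MAX_DIFF = 100_000    # raised from the old cap of ~1000

def clamp_pseudo_difficulty(requested):
    return max(MIN_DIFF, min(requested, MAX_DIFF))

print(clamp_pseudo_difficulty(1))       # -> 256 (no more starting at diff 1)
print(clamp_pseudo_difficulty(50_000))  # -> 50000 (allowed above 1000)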
sr. member
Activity: 308
Merit: 250
Decentralize your hashing - p2pool - Norgz Pool
Why is Azure insane for a project like this? I was suggesting it for individuals to run nodes. All you need is a business, or to be a student. It's a good deal, and I have nodes running in the Singapore DCs with great stats.

Sorry, I meant that running a distributed pool of p2pool servers on an MSDN-credit Azure account is insane.

I have some doubts that Azure would be the right platform for p2pool in general, though; there are better (technically, that is) alternatives out there.  I don't know if you have used EC2, but there is a reason that the "big players" use it.

Neil



My Azure node runs more efficiently than my local VMware one. The whole EC2 vs. Azure argument is moot, since they are both built for the same kind of performance and scalability; it really comes down to the interface and any specific features. I've spent a lot of time with Azure and migrated our company to it. "The big players use it" (EC2) because they were up first and committed to it; Azure is the underdog that has really proven to deliver a very rich feature set and give AWS a run for its money. Even Amazon admits that MS does SQL better on Azure than they can.

Anyway, this is not exactly a hosting thread, so I'll leave it at that and you can decide for yourself.

sr. member
Activity: 543
Merit: 250
It's been an age since the original account, ToRiKaN, was banned
What should be edited in the code to increase a node's minimum share difficulty? It would be much easier than changing it on every miner (walletaddress/100 000000)... 100M.



I think the main reason it's set with your worker name is so that each miner on a node can set their own share value, without the node operator having to change what the node itself mines at just to adjust the default.
legendary
Activity: 896
Merit: 1000
The problem, as I've said on many occasions, is not a lack of nodes - there are literally hundreds of public nodes & hundreds more private nodes.

Agreed, but I mean a set of nodes that carry a "P2Pool Community Approved" stamp.

At the moment it is not even clear that you can use other people's nodes.  Even if you do work it out, there are literally tons of them run by anonymous people of dubious ethics (I mean from an outsider's point of view).  There is also no way to trust them with your hashes; they could be charging a 10% fee and have "tweaked" their code.

Neil
legendary
Activity: 896
Merit: 1000
Why is Azure insane for a project like this? I was suggesting it for individuals to run nodes. All you need is a business, or to be a student. It's a good deal, and I have nodes running in the Singapore DCs with great stats.

Sorry, I meant that running a distributed pool of p2pool servers on an MSDN-credit Azure account is insane.

I have some doubts that Azure would be the right platform for p2pool in general, though; there are better (technically, that is) alternatives out there.  I don't know if you have used EC2, but there is a reason that the "big players" use it.

Neil
