Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 481. (Read 2591928 times)

sr. member
Activity: 454
Merit: 252
Pool rate: 60.9TH/s (15% DOA+orphan) Share difficulty: 69700
Expected time to block: 2.92 hours

wow, haven't seen the network hashrate this high in a long time (ever?)
sr. member
Activity: 344
Merit: 250
Flixxo - Watch, Share, Earn!
We need a ban function for clients on public nodes:

On my node there is an unknown worker which produces >30% DOA and massively spams the log:
2013-09-30 12:41:24.961081 Worker xxx  submitted share with hash > target:
2013-09-30 12:41:24.961407     Hash:   a9b3716dfc6ad5c4e8145dc2b5529e497e8f45e85cd616d3c6119b00
2013-09-30 12:41:24.961601     Target:  fd1f6b8accdf2000000000000000000000000000000000000000000

What can I/we do against such workers?
Can the dev implement some score function to ban them?

Greets

Subo
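For reference, the rejection in that log is a plain big-integer comparison: a share only counts when its hash, read as a number, is at or below the target. This is a minimal illustration of the check, not p2pool's actual code:

```python
# Minimal sketch of the "hash > target" check from the log above.
# A submitted share is valid only when its hash, interpreted as a
# big-endian integer, is less than or equal to the target.
def share_meets_target(hash_hex: str, target_hex: str) -> bool:
    return int(hash_hex, 16) <= int(target_hex, 16)

# The values from the log (the hash is far above the target, so the
# share is rejected and counts toward that worker's DOA rate):
h = "a9b3716dfc6ad5c4e8145dc2b5529e497e8f45e85cd616d3c6119b00"
t = "fd1f6b8accdf2000000000000000000000000000000000000000000"
print(share_meets_target(h, t))  # False - rejected
```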
legendary
Activity: 1066
Merit: 1098
Hello, is it possible to mine on p2pool with a Block Erupter Blade? If yes, then please tell me how to configure it.
Maybe somebody knows an answer to this question?

You will need to use a proxy - either BFGMiner or one of the stratum proxies - but other than that it is pretty straightforward.  Configure the blade to connect to the proxy, and configure the proxy to connect to P2Pool.  My experience is that you will see a very high DOA rate from the blades on P2Pool.
legendary
Activity: 1066
Merit: 1098
I have a problem I don't really understand, and I'm hoping one of you can give me some useful advice...

I was running a P2Pool node for a while, and I seem to remember that my Bitcoind GetBlockTemplate Latency from the graphs page was consistently ~0.2s.

I started mining on a different pool for a couple of weeks, then came back to P2Pool - and now, my Bitcoind GetBlockTemplate Latency is staying up at ~1.1s!  That seems pretty high to me, especially since bitcoind and P2Pool are running on the same machine.

As far as I can tell, the only thing that's different is that I am using 13.3 now, and was using 13.2 before.  I have not tried reverting to 13.2...  Maybe that's an experiment I should try, but I didn't see any change in the release notes that should affect this.

If it makes any difference, the platform is Win7 64-bit.

Anyone have any idea what might be going on here?


Does anyone have any idea what might be the problem here?
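One way to narrow this down is to time the RPC call against bitcoind directly, bypassing P2Pool. A rough sketch follows; the URL, username, and password are placeholders, so substitute the values from your own bitcoin.conf:

```python
# Rough sketch: time bitcoind's getblocktemplate over JSON-RPC directly,
# to compare with the latency shown on P2Pool's graphs page.
# The URL and credentials below are placeholders, not real values.
import base64
import json
import time
import urllib.request

def time_call(fn):
    """Run fn() and return (elapsed_seconds, result)."""
    start = time.monotonic()
    result = fn()
    return time.monotonic() - start, result

def getblocktemplate(url="http://127.0.0.1:8332",
                     user="rpcuser", password="rpcpassword"):
    payload = json.dumps({"method": "getblocktemplate",
                          "params": [{}], "id": 1}).encode()
    req = urllib.request.Request(url, data=payload)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# elapsed, template = time_call(getblocktemplate)
# print(f"getblocktemplate took {elapsed:.2f}s")
```

If the raw RPC is still around 1 second, the slowdown is in bitcoind (or its mempool), not in the P2Pool version change.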
newbie
Activity: 35
Merit: 0
Hello, is it possible to mine on p2pool with a Block Erupter Blade? If yes, then please tell me how to configure it.
Maybe somebody knows an answer to this question?
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
Anybody else seeing this since the latest git pull?



Running p2p Version: 13.3-27-g740e306 on Xubuntu 12.04 64bit.

Normally runs at a constant 350ish - now it's gone superman.....up, up & away

Nope? Just me then.

Rebooted.

Sorted.

Strangeness
newbie
Activity: 21
Merit: 0
180 / 1.4 = 128 (almost exactly)

There's no point in using a lower difficulty with p2pool. You're just wasting your own CPU cycles.

That's about 71% of 180, not 30% of 180.

If you want to get technical, the best difficulty is 32,768 regardless of your local hashrate, because unless about a third of the users dropped out of the network, the difficulty per P2Pool share won't drop that low.  Every share found below the current P2Pool difficulty is useful only for local statistics.  Unless you're implementing a sub-pool that has a different share tracking method, those shares are wasted.

And every hash you produce before you find a share is wasted? This is the basis of "proof of work", isn't it?

With P2Pool, yes, each share below the P2Pool difficulty is wasted.

P2Pool's definition of proof of work is when you create a P2Pool blockchain block.  While using P2Pool, you can create a block on the Bitcoin blockchain without creating a block on the P2Pool blockchain (well, the P2Pool block would be created, but it would be orphaned).

Unless you are part of a sub-pool that uses a P2Pool node for its main income and then redistributes that income to its contributors, the shares below P2Pool's difficulty are only useful for statistical purposes.  This is NOT the same as putting your Bitcoin address as the username when connecting to a P2Pool node.
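The arithmetic behind "only useful for statistics" is easy to make concrete: at difficulty d, one share is expected roughly every d * 2^32 hashes, so a lower pseudo-share difficulty only changes how often your miner reports, not what you earn. A quick sketch:

```python
# Quick sketch of expected pseudo-share rate: at difficulty d, one share
# is expected roughly every d * 2**32 hashes. Lowering d just makes your
# miner report more often; it does not change P2Pool payouts.
def shares_per_minute(hashrate_hs: float, difficulty: float) -> float:
    return hashrate_hs / (difficulty * 2**32) * 60

# A 60 GH/s BFL Single at local difficulty 64:
print(round(shares_per_minute(60e9, 64), 1))  # ~13.1 pseudo-shares/minute
```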
newbie
Activity: 21
Merit: 0

I don't know where you're getting the random 30% number, but the formula I posted is the tried and proven method for keeping your shares per minute at the sweet spot between accurate stat tracking and bandwidth/CPU savings.

I got it from this post by astutiumRob:  https://bitcointalksearch.org/topic/m.3244309

That's the formula that yurtesen was having trouble with.
newbie
Activity: 35
Merit: 0
Hello, is it possible to mine on p2pool with a Block Erupter Blade? If yes, then please tell me how to configure it.
hero member
Activity: 591
Merit: 500
180 / 1.4 = 128 (almost exactly)

There's no point in using a lower difficulty with p2pool. You're just wasting your own CPU cycles.

That's about 71% of 180, not 30% of 180.

If you want to get technical, the best difficulty is 32,768 regardless of your local hashrate, because unless about a third of the users dropped out of the network, the difficulty per P2Pool share won't drop that low.  Every share found below the current P2Pool difficulty is useful only for local statistics.  Unless you're implementing a sub-pool that has a different share tracking method, those shares are wasted.
I don't know where you're getting the random 30% number, but the formula I posted is the tried and proven method for keeping your shares per minute at the sweet spot between accurate stat tracking and bandwidth/CPU savings.
legendary
Activity: 1066
Merit: 1098
I have a problem I don't really understand, and I'm hoping one of you can give me some useful advice...

I was running a P2Pool node for a while, and I seem to remember that my Bitcoind GetBlockTemplate Latency from the graphs page was consistently ~0.2s.

I started mining on a different pool for a couple of weeks, then came back to P2Pool - and now, my Bitcoind GetBlockTemplate Latency is staying up at ~1.1s!  That seems pretty high to me, especially since bitcoind and P2Pool are running on the same machine.

As far as I can tell, the only thing that's different is that I am using 13.3 now, and was using 13.2 before.  I have not tried reverting to 13.2...  Maybe that's an experiment I should try, but I didn't see any change in the release notes that should affect this.

If it makes any difference, the platform is Win7 64-bit.

Anyone have any idea what might be going on here?
donator
Activity: 2058
Merit: 1007
Poor impulse control.
180 / 1.4 = 128 (almost exactly)

There's no point in using a lower difficulty with p2pool. You're just wasting your own CPU cycles.

That's about 71% of 180, not 30% of 180.

If you want to get technical, the best difficulty is 32,768 regardless of your local hashrate, because unless about a third of the users dropped out of the network, the difficulty per P2Pool share won't drop that low.  Every share found below the current P2Pool difficulty is useful only for local statistics.  Unless you're implementing a sub-pool that has a different share tracking method, those shares are wasted.

And every hash you produce before you find a share is wasted? This is the basis of "proof of work", isn't it?

newbie
Activity: 21
Merit: 0
180 / 1.4 = 128 (almost exactly)

There's no point in using a lower difficulty with p2pool. You're just wasting your own CPU cycles.

That's about 71% of 180, not 30% of 180.

If you want to get technical, the best difficulty is 32,768 regardless of your local hashrate, because unless about a third of the users dropped out of the network, the difficulty per P2Pool share won't drop that low.  Every share found below the current P2Pool difficulty is useful only for local statistics.  Unless you're implementing a sub-pool that has a different share tracking method, those shares are wasted.
hero member
Activity: 591
Merit: 500
3 * 60 * .3 == 54
Powers of 2: 1, 2, 4, 8, 16, 32, 64...

64 would probably be a good idea, but you'd get slightly less variance with 32.  Then again, since you don't get a share until around 50,000, there's no point in setting the local difficulty lower than you have to, unless you want the most accurate statistics possible... but then you can just use your mining software to see what your singles are actually doing.

Short answer: 64
180 / 1.4 = 128 (almost exactly)

There's no point in using a lower difficulty with p2pool. You're just wasting your own CPU cycles.
newbie
Activity: 21
Merit: 0
I see a higher number of rejects on p2pool with BFL devices using cgminer compared to other pools. Is that normal, or am I missing some configuration?
Change your difficulty - nearest power of 2 of 30% of your hashrate
(so +8 for a LS, +16 for a Single)

So if I have 3 singles running with cgminer, I should set it to 3x16=48 or 16 only?

3 * 60 * .3 == 54
Powers of 2: 1, 2, 4, 8, 16, 32, 64...

64 would probably be a good idea, but you'd get slightly less variance with 32.  Then again, since you don't get a share until around 50,000, there's no point in setting the local difficulty lower than you have to, unless you want the most accurate statistics possible... but then you can just use your mining software to see what your singles are actually doing.

Short answer: 64
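The rule of thumb above (take 30% of your hashrate in GH/s and round to the nearest power of two) is easy to script; a small sketch:

```python
# Small sketch of the rule of thumb above: take 30% of the hashrate
# (in GH/s) and round to the nearest power of two.
def suggested_difficulty(hashrate_ghs: float, fraction: float = 0.3) -> int:
    target = hashrate_ghs * fraction
    power = 1
    while abs(power * 2 - target) < abs(power - target):
        power *= 2
    return power

print(suggested_difficulty(60))   # 16 - one Single, 30% of 60 is 18
print(suggested_difficulty(180))  # 64 - three Singles, 30% of 180 is 54
```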
newbie
Activity: 43
Merit: 0
Where is this set?
There's a lot of really useful information in the p2pool documentation ...

My brain simply isn't working right.  I have looked at the documentation in the github download, and at the wiki, and cannot find this.  I know I have read it before, so I must have found it previously.  Could you drop me a link to the right doc?
Quote
30% of 60 is not 9 for a start - and those numbers are for a Single and a Little Single :p

Ahem.  See my first point.
legendary
Activity: 1361
Merit: 1003
Don`t panic! Organize!
Total power per miner, not per device.
member
Activity: 83
Merit: 10
I see a higher number of rejects on p2pool with BFL devices using cgminer compared to other pools. Is that normal, or am I missing some configuration?
Change your difficulty - nearest power of 2 of 30% of your hashrate
(so +8 for a LS, +16 for a Single)

So if I have 3 singles running with cgminer, I should set it to 3x16=48 or 16 only?
full member
Activity: 201
Merit: 100
Change your difficulty - nearest power of 2 of 30% of your hashrate
(so +8 for a LS, +16 for a Single)
Where is this set?
There's a lot of really useful information in the p2pool documentation ...

Quote
It doesn't seem to be an option in p2pool or bitcoin
In your mining software by appending +(n) onto the worker name (address)

Quote
And what is the formula?
The one you quoted

Quote
A Single is 60GH/s so 30% of that is 9
30% of 60 is not 9 for a start - and those numbers are for a Single and a Little Single :p

30% of 60 is 18, nearest 'power of 2' to 18 is 16, so you use 16
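The "+(n)" suffix rides along on the worker name, so the receiving side has to split it back out into an address and a requested difficulty. A hypothetical sketch of that parsing (the function name is illustrative, not taken from p2pool's source):

```python
# Hypothetical sketch of splitting the "+(n)" difficulty suffix off a
# worker name, e.g. "ADDRESS+16" -> ("ADDRESS", 16.0). The function name
# is illustrative; it is not p2pool's actual code.
def parse_worker(username: str):
    if "+" in username:
        address, _, suffix = username.partition("+")
        try:
            return address, float(suffix)
        except ValueError:
            return address, None
    return username, None

print(parse_worker("1ExampleAddr+16"))  # ('1ExampleAddr', 16.0)
print(parse_worker("1ExampleAddr"))     # ('1ExampleAddr', None)
```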

legendary
Activity: 1379
Merit: 1003
nec sine labore

No difference; I even increased MAX_LENGTH to 8 MB, more or less.

Code:
2013-09-27 14:46:38,070 INFO proxy client_service.handle_event # New job 34994353376094061241536474908834835780 for prevhash 4b820075, clean_jobs=True
2013-09-27 14:46:38,400 INFO proxy client_service.handle_event # Setting new difficulty: 48.062596497
2013-09-27 14:46:38,693 INFO proxy client_service.handle_event # New job 285701906439778128344577293448536521316 for prevhash 4b820075, clean_jobs=True
2013-09-27 14:46:39,022 INFO proxy client_service.handle_event # Setting new difficulty: 48.062596497
2013-09-27 14:46:39,316 INFO proxy client_service.handle_event # New job 150053800563661665987158017586485075904 for prevhash 4b820075, clean_jobs=True
2013-09-27 14:46:39,916 INFO proxy client_service.handle_event # Setting new difficulty: 46.6031855997
2013-09-27 14:46:40,217 INFO proxy client_service.handle_event # New job 143798284483278434977368641514806560471 for prevhash 4b820075, clean_jobs=True
2013-09-27 14:46:40,417 INFO proxy client_service.handle_event # Setting new difficulty: 46.6031855997
2013-09-27 14:46:40,768 INFO proxy client_service.handle_event # New job 186365317384528894265181723368453710583 for prevhash 4b820075, clean_jobs=True
2013-09-27 14:46:40,911 INFO proxy client_service.handle_event # Setting new difficulty: 46.6031855997
2013-09-27 14:46:41,204 INFO proxy client_service.handle_event # New job 255424085435345721878011713647552276082 for prevhash 4b820075, clean_jobs=True
2013-09-27 14:46:41,532 INFO proxy client_service.handle_event # Setting new difficulty: 46.6031855997
2013-09-27 14:46:41,826 INFO proxy client_service.handle_event # New job 205252725933772297880035452289960841088 for prevhash 4b820075, clean_jobs=True
2013-09-27 14:46:42,155 INFO proxy client_service.handle_event # Setting new difficulty: 46.6031855997
2013-09-27 14:46:42,449 INFO proxy client_service.handle_event # New job 2696826970669673437006


This is what it looks like when mining on HHTT:

Code:
2013-09-27 14:49:37,797 INFO proxy getwork_listener._on_authorized # Worker '1...' asks for new work
2013-09-27 14:49:37,925 INFO proxy getwork_listener._on_authorized # Worker '1...' asks for new work
2013-09-27 14:49:38,025 INFO proxy jobs.submit # Submitting bfc2ccb5
2013-09-27 14:49:37,797 INFO proxy getwork_listener._on_authorized # Worker '1...' asks for new work
2013-09-27 14:49:37,925 INFO proxy getwork_listener._on_authorized # Worker '1...' asks for new work
2013-09-27 14:49:38,250 WARNING proxy getwork_listener._on_submit # [219ms] Share from '1..xX' accepted, diff 128
2013-09-27 14:49:38,279 INFO proxy jobs.submit # Submitting f0be461a
2013-09-27 14:49:37,797 INFO proxy getwork_listener._on_authorized # Worker '1...' asks for new work
2013-09-27 14:49:37,925 INFO proxy getwork_listener._on_authorized # Worker '1...' asks for new work

I even fiddled a little with -rt and --old-target options of slush's proxy but they make no difference.

spiccioli