
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 178. (Read 2591928 times)

legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
I normally get 1380 GH/s on Eligius and GHash. On p2pool I always get ~1230 GH/s.

A P2Pool local node doesn't lie.

A pool ... can lie (network problems, lost/DOA work).
It isn't really about lying. A lot of hardware has issues with p2pool. When they first came out, S2s would lose 10-15% of their hashrate on p2pool. Neptunes just don't work at all. AM Tubes don't work.

chalkboard17, I'm sure you've probably already done the following, so forgive the redundancy if you have:
* update the firmware from Bitmain
* install the ck/kano S5 binaries
* change the queue depth to 0 or 1
* use '+' and '/' at the end of your BTC address to help flatten the graphs and get a more consistent reported hashrate (see the sketch below)
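For anyone unfamiliar with those suffixes, here is a minimal sketch of how the worker name is built, as I understand p2pool's convention (the address and difficulty value below are placeholders, not tuned recommendations): appending '+P' requests pseudoshares at a fixed difficulty P, which smooths the reported-hashrate graph, while '/S' sets a minimum share difficulty of S.

Code:
# Minimal sketch of p2pool's worker-name suffixes (my understanding, not
# official docs): ADDRESS+P fixes pseudoshare difficulty at P, and
# ADDRESS/S sets a minimum share difficulty of S.
address = "1YourBitcoinAddressHere"  # placeholder, not a real address
pseudo_diff = 512                    # assumption: pick a value to suit your hashrate
worker = "%s+%d" % (address, pseudo_diff)
print(worker)  # use the result as the stratum username when pointing at your node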
legendary
Activity: 1512
Merit: 1012
The last couple of weeks have been excellent for p2pool - way above average.

math is hard . . .

I prefer graphs ...

legendary
Activity: 1512
Merit: 1012
I normally get 1380 GH/s on Eligius and GHash. On p2pool I always get ~1230 GH/s.

A P2Pool local node doesn't lie.

A pool ... can lie (network problems, lost/DOA work).
legendary
Activity: 1512
Merit: 1012
Huh? What are you trying to say?

We're talking about merged-mining UNO with p2pool Huh

You are on a BITCOIN thread ... yet you are talking about the UNO cryptocurrency.
legendary
Activity: 1500
Merit: 1002
Mine Mine Mine
@windpath . . . is the site down?

http://minefast.coincadence.com/p2pool-stats.php

Whoops, looks like it's back. My bad, probably.

BTW, a block for today. Looking for more! So far p2pool has been doing pretty well. Fingers crossed.
hero member
Activity: 770
Merit: 500
Somehow I think P2Pool is dying out. The network performance is ridiculous, and the last visitors to my node passed by weeks ago. I think I'll shut my node down.

 Huh

Don't you use your own node then?

The last couple of weeks have been excellent for p2pool - way above average.

I haven't mined since 08/2014. Here in Germany mining is pointless: you pay a lot of money for the electricity. And what is 1.2 TH/s compared to the overall hashrate of the network?
full member
Activity: 312
Merit: 100
Bcnex - The Ultimate Blockchain Trading Platform
I don't know why people attack kano, ck, or forrestv, or anyone else for that matter. They put out a lot for the community, and you guys should have more respect for devs who put out open source, and I'm not just talking about his pool. Unless you're asking a relevant question or contributing relevant info that benefits the topic, you're causing more harm than good to the cause. Please refrain from calling people out, for the whole community's sake.


I could not agree more. kano, ck, forrestv, and many others go above and beyond; I for one have the greatest respect for them.

 Cheesy
full member
Activity: 213
Merit: 100
I don't know why people attack kano, ck, or forrestv, or anyone else for that matter. They put out a lot for the community, and you guys should have more respect for devs who put out open source, and I'm not just talking about his pool. Unless you're asking a relevant question or contributing relevant info that benefits the topic, you're causing more harm than good to the cause. Please refrain from calling people out, for the whole community's sake.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...
How do you store submitted work for your pool? Do you plan to keep all the data in a query-able state for all time?

After thinking about this on and off for over a year now, I'm OK with having an ~estimated~ luck that is consistently calculated from the data we have available.
I have documented the share storage in my pool ... let me find the link ...
https://bitcointalksearch.org/topic/m.10731200

Quote
It serves its intended purpose, which is to reflect how the pool is performing overall, over time.

I view having to use imperfect data as a trade-off for getting completely trust-less decentralization, something I value much more than a luck stat that is 100% accurate all the time.

While I do understand your points, I find it ironic for you to criticize P2Pool's infrastructure while running your own closed-source, centralized pool.
Heh - oops - you got that wrong Smiley
It's open source - here:
https://bitbucket.org/ckolivas/ckpool/src

... oh -ck said it above also Smiley
sr. member
Activity: 484
Merit: 251
Update on my previous post: it just happened again, this time on chain 2. It only took a few seconds to get back to normal.
It seems that sometimes it takes many minutes and sometimes only a few seconds. Still, I am losing money to this and will unfortunately be forced to leave p2pool (which I do not want) if I cannot fix it. I'd really appreciate some help with this. Thanks.
EDIT, two hours after this post: it is happening more frequently.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?
We've both said numerous times that there is no valid solution to its most significant scalability problems. Pooled mining is a solution to the variance of solo mining: the more miners there are, the lower the variance. p2pool is the anti-solution to pooled mining; above a certain size, the more miners there are, the more variance each of them sees. It is just solo mining on a smaller scale. It doesn't matter how much money you throw at us; we can't solve this problem in its current form. Rewriting the p2pool software in a scalable language like C only makes each node able to handle more miners, and that's not really important, since the idea is that each miner runs their own node for their own hardware. It does not solve the design issue.
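To put a rough number on that (a back-of-the-envelope sketch under the assumption that the share chain targets about one share every 30 seconds pool-wide; the pool sizes below are hypothetical), a fixed-size miner finds fewer shares per day as the pool grows, so their daily payout swing widens:

Code:
# Back-of-the-envelope sketch of the variance problem described above.
# Assumption: the share chain targets ~one share per 30 s pool-wide, so a
# fixed-size miner's expected shares per day shrink as the pool grows.
import math

SHARES_PER_DAY = 86400.0 / 30.0  # ~2880 shares/day across the whole pool

def daily_share_stats(miner_ths, pool_ths):
    expected = SHARES_PER_DAY * miner_ths / pool_ths
    rel_std = 1.0 / math.sqrt(expected)  # Poisson: std/mean = 1/sqrt(n)
    return expected, rel_std

for pool_ths in (100, 500, 1500, 5000):      # hypothetical pool sizes, TH/s
    n, v = daily_share_stats(1.0, pool_ths)  # a 1 TH/s miner
    print("pool %5d TH/s: %6.1f shares/day, ~%3.0f%% daily payout swing"
          % (pool_ths, n, 100.0 * v))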

While I do understand your points, I find it ironic for you to criticize P2Pool's infrastructure while running your own closed-source, centralized pool.

Kano's pool runs the ckpool software, which is absolutely all free and open code. Centralised it may be, but closed it most definitely is not.
sr. member
Activity: 484
Merit: 251
I'm facing a problem.
Every so often board 1 stops hashing for a few minutes, decreasing my overall long-term hashrate from ~1385 GH/s to ~1350 GH/s, an over 2.5% loss that would force me to use a centralized pool.

This is happening as I write this post.
http://prntscr.com/7f3hc8
http://prntscr.com/7f2vt3
http://prntscr.com/7f2vmw
The 4 drops in the day chart are all caused by this problem.

I never had this problem before.
I had to do this in order to avoid having a hashrate below 1250 GH/s.
I have a stable internet connection and a good computer. I don't believe it's my node; otherwise the miner would switch pools, or both boards would stop.
I am running bitcoin-xt 0.10.2, and board 1 always runs 8°C hotter than board 2, but neither has ever reached 60°C, and they run in a cool place.

My diff1 and diffa were always nearly equal, but with the cited software modification, diff1 doubles and diffa decreases a little. Is that OK?
Any input is greatly appreciated, and I hope I can fix this, as I really want to run my own node and help decentralization.
Thank you.
legendary
Activity: 1500
Merit: 1002
Mine Mine Mine
Somehow I think P2Pool is dying out. The network performance is ridiculous, and the last visitors to my node passed by weeks ago. I think I'll shut my node down.

 Huh

Don't you use your own node then?

The last couple of weeks have been excellent for p2pool - way above average.

math is hard . . .
sr. member
Activity: 266
Merit: 250
Somehow I think P2Pool is dying out. The network performance is ridiculous, and the last visitors to my node passed by weeks ago. I think I'll shut my node down.

 Huh

Don't you use your own node then?

The last couple of weeks have been excellent for p2pool - way above average.
hero member
Activity: 770
Merit: 500
Somehow I think P2Pool is dying out. The network performance is ridiculous, and the last visitors to my node passed by weeks ago. I think I'll shut my node down.
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool

Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?

We could sure use your expertise Smiley

I've seen posts by others on this thread regarding the scalability thing - to me this is the biggest issue with p2pool.

Has anyone actually conversed with the dev (if he's still around) to see if he has any ideas/solutions?

Was a bounty ever organised with regard to finding a dev who could find a solution?
A ton of ideas have been thrown around, but none of them has proved implementable. The problem lies in the concept of the share chain. In effect, it really is nothing more than a relatively low-difficulty coin that you are solo mining. The share solution, which contains things like payout information, gets added to the share chain. If that share also happens to solve a BTC block, the block reward gets distributed according to the payout information in the share that solves the block.
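As a toy illustration of that construct (a simplified sketch, not p2pool's actual code; the addresses and work values are made up), the payout step amounts to splitting the block reward in proportion to the work each miner has recorded in the recent share chain:

Code:
# Toy sketch of the share-chain payout idea: when a share also meets
# bitcoin's difficulty, the reward is split proportionally to the work
# recorded in the recent share chain. All data below is made up.
from collections import defaultdict

def payout_for_block(recent_shares, block_reward):
    """recent_shares: list of (miner_address, share_work) tuples."""
    work = defaultdict(float)
    for addr, w in recent_shares:
        work[addr] += w
    total = sum(work.values())
    return {addr: block_reward * w / total for addr, w in work.items()}

shares = [("addr_A", 4.0), ("addr_B", 1.0), ("addr_A", 3.0)]
print(payout_for_block(shares, 25.0))  # {'addr_A': 21.875, 'addr_B': 3.125}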

Because of this construct, there's no easily implemented solution to the problem of variance. OgNasty and Nonnakip have come up with a very interesting approach that puts ckpool on top of the p2pool backbone. The trade-off is that by choosing to mine there, you are bound by the same constraints as on a typical centralized pool. For example, when mining on a typical p2pool node, you can fail over to any other p2pool node and none of your work is lost. Not so with their implementation: no other standard p2pool node has any concept of the work you've done, because you don't have any individual shares on the chain. You sacrifice the completely trust-less decentralized nature of p2pool for variance reduction. It's a nice step forward, and I've had a couple of S3s pointed at OgNasty's pool (both standard p2pool and NastyPoP) since November of last year. You can see my long-running thread about it here: https://bitcointalksearch.org/topic/nastypop-vs-standard-p2pool-891298

If there were a viable solution, it could be implemented.
legendary
Activity: 1500
Merit: 1002
Mine Mine Mine

Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?

We could sure use your expertise Smiley

I've seen posts by others on this thread regarding the scalability thing - to me this is the biggest issue with p2pool.

- Yes it is, and sadly it still is ATM. I think it can be scaled, but some of those who have the knowledge might not be willing to help or share Huh

Has anyone actually conversed with the dev (if he's still around) to see if he has any ideas/solutions?

- You mean forrestv? He has not been here for a long time; nowadays it's the community that's supporting p2pool.

Was a bounty ever organised with regard to finding a dev who could find a solution?

- Yes, but it got splashed with cold water ... I'm still a supporter of p2pool and hoping something will happen someday.
sr. member
Activity: 266
Merit: 250

Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?

We could sure use your expertise Smiley

I've seen posts by others on this thread regarding the scalability thing - to me this is the biggest issue with p2pool.

Has anyone actually conversed with the dev (if he's still around) to see if he has any ideas/solutions?

Was a bounty ever organised with regard to finding a dev who could find a solution?
legendary
Activity: 1500
Merit: 1002
Mine Mine Mine

While I do understand your points, I find it ironic for you to criticize P2Pool's infrastructure while running your own closed-source, centralized pool.


+9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999

Well said, VERY well said. Sending you a tip shortly for your effort for p2pool.

Post up an addy.
legendary
Activity: 1258
Merit: 1027
I did not implement a way to track non-share-chain blocks; perhaps I will in the future, but the historical stuff is gone.

I believe it is a much higher percentage than you think. I'd speculate (and yes, it's just speculation) that it is somewhere around 5-7% of found blocks.

I understand how your calculation works now, thank you; however, I don't see how it could be applied to p2pool in practice.

The share difficulty changes with every share, and nodes often disagree on difficulty (due to propagation times).

While the reported hashrate is only an estimate, I believe it's the best number we have to work with, and by using the 1-minute average since the last block was found, I think we get about as accurate a picture as possible without storing every single share for the long term.
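For example (a sketch of the kind of estimate described above, with made-up numbers; this is not the site's actual code), a per-round luck figure can be derived by comparing the expected work for a block at the current difficulty with the work implied by the average hashrate over the round:

Code:
# Sketch of an estimated per-round "luck" stat built from an average
# hashrate, as described above. The example numbers are made up.
def estimated_luck(avg_hashrate_hs, round_seconds, difficulty):
    expected_hashes = difficulty * 2**32    # average hashes needed per block
    actual_hashes = avg_hashrate_hs * round_seconds
    return expected_hashes / actual_hashes  # >1.0 means a lucky (short) round

# e.g. a 1.5 PH/s pool solving a difficulty-47e9 block in 30 hours:
print(estimated_luck(1.5e15, 30 * 3600.0, 47e9))  # ~1.25, i.e. 125% luck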

Storing all the shares would be cool, but I just don't see how to do it in a way that is directly query-able without building an additional enterprise-scale DB on top of what we already have.

How do you store submitted work for your pool? Do you plan to keep all the data in a query-able state for all time?

After thinking about this on and off for over a year now, I'm OK with having an ~estimated~ luck that is consistently calculated from the data we have available.

It serves its intended purpose, which is to reflect how the pool is performing overall, over time.

I view having to use imperfect data as a trade-off for getting completely trust-less decentralization, something I value much more than a luck stat that is 100% accurate all the time.

While I do understand your points, I find it ironic for you to criticize P2Pool's infrastructure while running your own closed-source, centralized pool.

Serious question: What would it take to get you and CK to merge your pool with P2Pool, and focus on its scalability problems?

We could sure use your expertise Smiley