
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 778. (Read 2591916 times)

hero member
Activity: 516
Merit: 643
I created this Google group for notifications about required updates and critical problems: http://groups.google.com/group/p2pool-notifications Subscribe if you wish; it's easier than keeping up to date with the forum thread!
hero member
Activity: 737
Merit: 500
Quote from: twmz

The active users/addr doesn't match blockexplorer?  Why is that? Due to RRD?

Or do I misunderstand blockexplorer?
http://blockexplorer.com/t/93krV7V9FR shows 210 but your stats page has a max of ~140 active users/addr


RRD is not good, it averages away perfectly good data. (But people seem to use it a lot, maybe not knowing this.)

This site does not use RRD.  It takes the output of http://myp2poolserver:9332/users and parses it to determine the number of users.  It does this once every 5 minutes.  That said, it could be broken.  I'll review it tonight to see if it is counting wrong.

I confirmed that everything is working as designed but that the design was flawed.  The number of users in the graph accurately replicates what p2pool reports.  The problem with using that statistic is that the http://localhost:9332/users page only shows people that have submitted at least 1 share in the past 2 hours.  Small miners may not always find a share that often, and so aren't getting counted.  I'll have to consider alternate approaches to approximating the number of users...
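For anyone curious, the counting logic is roughly this (a minimal sketch; the JSON-object response format and field names here are assumptions, not p2pool's documented output, so adjust to whatever your p2pool version actually returns):

```python
import json
from urllib.request import urlopen

def count_active_users(raw_json):
    """Count the addresses p2pool currently reports as active.

    Assumes the /users endpoint returns a JSON object keyed by payout
    address (hypothetical format).  Note the caveat above: p2pool only
    lists addresses that submitted at least one share in the past
    2 hours, so slow miners are systematically undercounted.
    """
    users = json.loads(raw_json)
    return len(users)

def fetch_active_users(url="http://localhost:9332/users"):
    """Poll the local p2pool instance (e.g. once every 5 minutes)."""
    with urlopen(url) as resp:
        return count_active_users(resp.read())
```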

Stay tuned...
hero member
Activity: 742
Merit: 500
Actually how do I use the "--address" option with merged mining?
Do I use a "," to separate them?

Thanks!
You can't specify an address for the merged chain (yet).  The merged mining still needs some work.  The address is currently fetched automatically via RPC.
full member
Activity: 130
Merit: 100
Actually how do I use the "--address" option with merged mining?
Do I use a "," to separate them?

Thanks!
full member
Activity: 130
Merit: 100
@DeathAndTaxes & kano:
Thanks guys for the quick reply!
hero member
Activity: 737
Merit: 500
Quote from: twmz

The active users/addr doesn't match blockexplorer?  Why is that? Due to RRD?

Or do I misunderstand blockexplorer?
http://blockexplorer.com/t/93krV7V9FR shows 210 but your stats page has a max of ~140 active users/addr


RRD is not good, it averages away perfectly good data. (But people seem to use it a lot, maybe not knowing this.)

This site does not use RRD.  It takes the output of http://myp2poolserver:9332/users and parses it to determine the number of users.  It does this once every 5 minutes.  That said, it could be broken.  I'll review it tonight to see if it is counting wrong.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Heh he beat me to it - but I will add, a variance of 8 times is not that uncommon on sha256(sha256())
So maybe after 12 or 13 hours, if you haven't found anything, you might consider getting worried, but at least until then it's not unexpected.
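To put a number on that "8 times" remark: share finding is memoryless, so the chance of a dry spell of length t with mean time-to-share T is exp(-t/T).  A quick sketch (standard exponential-waiting-time math, not anything p2pool-specific):

```python
import math

def prob_no_share(elapsed_hours, expected_hours):
    """Probability of finding zero shares after `elapsed_hours`,
    given a mean time-to-share of `expected_hours` (Poisson process)."""
    return math.exp(-elapsed_hours / expected_hours)

# With a 1.6 hour expected time to share, going 8x that long
# (12.8 hours) with nothing is an e^-8 event -- well under 0.1%,
# which is about where "consider getting worried" kicks in.
```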
donator
Activity: 1218
Merit: 1079
Gerald Davis
Just moved one right to p2pool, but:
- I don't know where the receiving addresses are specified. Is p2pool just getting them through rpc from bitcoind?
- although my miner (diablo) seems fine, I see this in p2pool :
Quote
P2Pool: 17525 shares in chain (9178 verified/17529 total) Peers: 10 (0 incoming)
 Local: 393MH/s in last 10.0 minutes Local dead on arrival: ~3.6% (1-13%) Expected time to share: 1.6 hours
 Shares: 0 (0 orphan, 0 dead) Stale rate: Huh Efficiency: Huh Current payout: 0.0000 BTC
 Pool: 262GH/s Stale rate: 8.9% Expected time to block: 6.3 hours
which makes me wonder if it actually works..

Any idea?
Thanks!


1 p2pool share right now = >500 "normal" pool shares.  When it says the expected time for 1 share is 1.6 hours, it isn't kidding.  Due to variance it could be 2 minutes or it could be 8 hours; 1.6 hours is just the average for 1 share.  So seeing 0 shares isn't unusual.

As for your payment address, yes, p2pool gets it via RPC.  If you want, you can force it to use a specific address with the --address command line option.
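You can sanity-check that 1.6 hour figure yourself: each hash "wins" with probability 1/(difficulty x 2^32), so mean time to a share is difficulty x 2^32 / hashrate.  A sketch (the 2^32 factor is the standard Bitcoin difficulty-1 convention; the ~560 share difficulty is the figure quoted elsewhere in this thread):

```python
def expected_share_time_seconds(share_difficulty, hashrate_hps):
    """Mean time to find one share: each hash succeeds with
    probability 1 / (share_difficulty * 2**32)."""
    return share_difficulty * 2**32 / hashrate_hps

# Sanity check against the status line above: ~560 share difficulty
# at 393 MH/s gives roughly 6100 seconds, i.e. about 1.7 hours --
# in line with the reported "Expected time to share: 1.6 hours".
t = expected_share_time_seconds(560, 393e6)
```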
full member
Activity: 130
Merit: 100
Just moved one right to p2pool, but:
- I don't know where the receiving addresses are specified. Is p2pool just getting them through rpc from bitcoind?
- although my miner (diablo) seems fine, I see this in p2pool :
Quote
P2Pool: 17525 shares in chain (9178 verified/17529 total) Peers: 10 (0 incoming)
 Local: 393MH/s in last 10.0 minutes Local dead on arrival: ~3.6% (1-13%) Expected time to share: 1.6 hours
 Shares: 0 (0 orphan, 0 dead) Stale rate: Huh Efficiency: Huh Current payout: 0.0000 BTC
 Pool: 262GH/s Stale rate: 8.9% Expected time to block: 6.3 hours
which makes me wonder if it actually works..

Any idea?
Thanks!
member
Activity: 70
Merit: 10
Quote from: twmz

The active users/addr doesn't match blockexplorer?  Why is that? Due to RRD?

Or do I misunderstand blockexplorer?
http://blockexplorer.com/t/93krV7V9FR shows 210 but your stats page has a max of ~140 active users/addr


RRD is not good, it averages away perfectly good data. (But people seem to use it a lot, maybe not knowing this.)
legendary
Activity: 1148
Merit: 1008
If you want to walk on water, get out of the boat
270ghash/s now!

Time to update the thread name  Cheesy
donator
Activity: 1218
Merit: 1079
Gerald Davis
I was actually wondering whether it is possible for each P2Pool instance to adjust its own difficulty, so that both weak miners and strong miners take 10 seconds to solve a block. Essentially, have each P2Pool server dynamically adjust the difficulty for its own miners only, as opposed to having one difficulty for the overall pool. The payout would then be (total number of shares submitted) * (hash rate). Since the total number of blocks submitted will be, on average, the same for both large miners and small ones (differing only by the amount of time they were running), the payout will essentially depend only on the hash speed.

It isn't that EACH miner solves a share in 10 seconds (shares, not blocks).  It is that the entire p2pool solves 1 share (collectively) every 10 seconds.

Quote
This will require the chain to carry more data, but my main concern is that hackers could spoof their mining power to be way lower than it actually is, and then steal from everyone else by submitting way more than one share every 10 seconds. I realize this sort of defeats the purpose of a "verifiable chain," too...  Tongue

As I explained above, that isn't a problem.  The difficulty of a share is proof of how much work was completed.  There is no way a miner can fake that.  A difficulty 1000 share is worth 1000x as much as a difficulty 1 share, and someone looking for difficulty 1000 shares will only find 1/1000th as many as someone looking for difficulty 1 shares.  Making variable shares "cheatproof" is trivially easy.  A couple of lines of code is all it takes.


Cheating isn't the issue; the issue is that there is a limit on how quickly the network can efficiently handle shares.  p2pool raises difficulty to ensure the time between shares remains ~10 seconds.  Not 10 seconds per miner, or 10 seconds for the average miner, but 10 seconds FOR THE ENTIRE NETWORK.  Variable difficulty shares would still be constrained by that global "compromise".
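The "couple lines of code" cheatproof weighting really is about this simple.  A minimal sketch (the data shape is illustrative, not p2pool's actual share-chain structures):

```python
def payouts(shares):
    """Difficulty-weighted PPLNS-style split.

    `shares` is a list of (address, difficulty) pairs from the last
    n shares in the chain.  A difficulty-1000 share counts 1000x a
    difficulty-1 share; since difficulty is backed by proof of work,
    a miner cannot inflate its weight without actually doing the work.
    Returns each address's fraction of the reward.
    """
    total = sum(d for _, d in shares)
    out = {}
    for addr, d in shares:
        out[addr] = out.get(addr, 0.0) + d / total
    return out
```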
legendary
Activity: 1379
Merit: 1003
nec sine labore
I just had another thought:
To stay as a single p2pool: How about every payout address has its own, adjusting difficulty?
You would just have to broadcast a lot of data/chains. But since you can compare/convert each miners difficulty to the total p2pool hashing power, payout should be easy to calculate?
Just a thought..

That was the thought above which prompted my "essay". Smiley

The largest constraint is that as the average share time decreases the orphan rate increases.  Eventually you reach a point where the share time is so small that the orphan rate becomes astronomical.

For example, right now share difficulty is ~560.  A share difficulty of ~280 would require a 5 second average share time, which means a significant increase in the pool's orphan rate.

The "concepts" I outlined above (and no, I didn't come up with them) are methods to "compartmentalize" the network so that you can have more shares per second without higher orphan rates.  All 4 "solutions" essentially do the same thing.  They allow greater than 6 shares per minute without an increase in orphan rate.  That is ultimately the problem to solve.

DeathAndTaxes,

first and foremost, p2pool's current code could be modified to not accept new peers when the pool's total hash rate reaches a predefined limit, like 1 TH/s. This would force new peers to create a new instance and would auto-limit the current p2pool.

best regards.

spiccioli.
legendary
Activity: 1680
Merit: 1035
I was actually wondering whether it is possible for each P2Pool instance to adjust its own difficulty, so that both weak miners and strong miners take 10 seconds to solve a block. Essentially, have each P2Pool server dynamically adjust the difficulty for its own miners only, as opposed to having one difficulty for the overall pool. The payout would then be (total number of shares submitted) * (hash rate). Since the total number of blocks submitted will be, on average, the same for both large miners and small ones (differing only by the amount of time they were running), the payout will essentially depend only on the hash speed.
This will require the chain to carry more data, but my main concern is that hackers could spoof their mining power to be way lower than it actually is, and then steal from everyone else by submitting way more than one share every 10 seconds. I realize this sort of defeats the purpose of a "verifiable chain," too...  Tongue
donator
Activity: 1218
Merit: 1079
Gerald Davis
I just had another thought:
To stay as a single p2pool: How about every payout address has its own, adjusting difficulty?
You would just have to broadcast a lot of data/chains. But since you can compare/convert each miners difficulty to the total p2pool hashing power, payout should be easy to calculate?
Just a thought..

That was the thought above which prompted my "essay". Smiley

The largest constraint is that as the average share time decreases the orphan rate increases.  Eventually you reach a point where the share time is so small that the orphan rate becomes astronomical.

For example, right now share difficulty is ~560.  A share difficulty of ~280 would require a 5 second average share time, which means a significant increase in the pool's orphan rate.

The "concepts" I outlined above (and no, I didn't come up with them) are methods to "compartmentalize" the network so that you can have more shares per second without higher orphan rates.  All 4 "solutions" essentially do the same thing.  They allow greater than 6 shares per minute without an increase in orphan rate.  That is ultimately the problem to solve.
legendary
Activity: 2126
Merit: 1001
I am all for sub-p2pools.
Preferably, miners don't have to (and can't) choose exactly where they mine, but are automatically assigned to the right sub-pool according to their own hashing power and the overall sub-pool situation.

I just had another thought:
To stay as a single p2pool: How about every payout address has its own, adjusting difficulty?
You would just have to broadcast a lot of data/chains. But since you can compare/convert each miners difficulty to the total p2pool hashing power, payout should be easy to calculate?
Just a thought..

Ente
legendary
Activity: 1316
Merit: 1005
The way I see it there are three decentralized solutions:  multiple p2pools, dynamic p2pools, sub-p2pools

Excellent explanation. The way it looks, multiple pools & sub-pools would be a natural result of the dynamic approach, so long as it incorporates the ability to communicate laterally, vertically, and internally.

Until then, you're right - manually bootstrapping sub-pools does offer the best path for small miners. This is very similar to multicellular development in biology Smiley
donator
Activity: 1218
Merit: 1079
Gerald Davis
The limitation of Bitcoin is that the block chain is only aware of the total hashing power, not individual miners, and thus can only adjust accordingly. The P2Pool protocol chain is short, and easy to change, and each instance of P2Pool is aware of both the pool's hashing power and its own local hashing power.
Would it be possible to just change the algorithm from adjusting difficulty to make a pool block every ten seconds based on overall pool hashing power, to one that bases it on the fraction of your hashing power compared to the overall pool? Have the difficulty start out at average, and as you mine, every thirty minutes recalculate your local difficulty based on reported hashing power, so that strong miners get increased difficulty and fewer shares and weak miners get more?
Or is this too difficult due to all blocks in the chain needing to be the same, or risky due to being easily hacked?

Currently shares are all the same because (your payout) = (your shares) / (total last n shares).

While you could make shares variable in difficulty and make it (your payout) = (sum of your share difficulties) / (sum of the last n shares' difficulties), it doesn't get around the orphan problem.

Bitcoin rarely has orphaned blocks because the round time is ~600 seconds.  The shorter the round time, the more likely two entities on the network find a solution at roughly the same time and one of them gets orphaned.   P2pool compromises between share difficulty & orphan rate by using a 10 second round time.  It sets difficulty so someone will find a share roughly every 10 seconds (and hopefully, most of the time, that "solution" can be shared with everyone else in time to avoid duplicated work).

So to avoid a higher orphan rate you still need the average share time to be ~10 seconds.  You could, within reason, allow smaller miners to use lower difficulty and larger miners to use higher difficulty, but the average must still work out to ~1 share per 10 seconds.

So that solution has two problems:
a) the amount share difficulty can vary is not much, and if most miners are small it is very little at all.
b) larger miners would be accepting higher variance in order to give smaller miners lower variance.  Something for nothing.  It's unlikely they will do that.
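The constraint in (a) can be made concrete: each miner produces shares at rate hashrate / (difficulty x 2^32), and the network's share interval is the reciprocal of the summed rates.  A sketch (the per-miner pairs are illustrative; the 262 GH/s and ~610 difficulty figures echo numbers quoted elsewhere in the thread):

```python
def network_share_interval(miners):
    """Average seconds between shares for the whole pool, given
    per-miner (hashrate_hps, share_difficulty) pairs.

    Each miner contributes shares at rate hashrate / (difficulty * 2**32);
    the network rate is the sum.  So per-miner difficulties may vary
    only as long as the combined rate stays ~1 share per 10 seconds.
    """
    rate = sum(h / (d * 2**32) for h, d in miners)
    return 1.0 / rate

# One aggregate "miner" at 262 GH/s and difficulty ~610
# yields the familiar ~10 second share interval.
interval = network_share_interval([(262e9, 610)])
```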


The way I see it there are four decentralized solutions:  multiple p2pools, merged share chain, dynamic p2pools, sub-p2pools.

multiple p2pools.
The simplest solution is to simply start a second p2pool.  There is no reason only one has to exist.  Take the source code, modify it so the alternate p2pool can be identified, and start one node.  Nodes can join using the modified client.  Eventually the client could be modified to let the user indicate which instance of the network to join, or even scan all instances and give the user the option.   If the two pools get too large they could also be split.  The disadvantage is that each split requires migration, and that requires people to look out for the good of the network.  For example, 3 p2pools with 10 GH/s, 20 GH/s, and 2.87 TH/s isn't exactly viable.

--------------------------------------

merged share chain
In Bitcoin there can only be "one block" which links to the prior block.  The reason is that this is used to prevent double spends.  Double spends aren't as much of a problem in p2pool.  Sure, one needs to ensure that workers don't get duplicate credit, but that can be solved without a static "only one" block-chain.  Modifying the protocol to allow multiple branches at one level would seem to be possible.  Since this would allow orphans to be counted (within reason), it would be possible to reduce the share time.  For example, a 1 TH/s p2pool with a 2 second share time would have no higher difficulty than a 200 GH/s p2pool with a 10 second share time.  There likely are "gotchas" which need to be resolved, but I believe a share chain which allows "merging" is possible.
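That "no higher difficulty" claim checks out numerically, since per-share difficulty scales with hashrate x share interval.  A two-line check (using the standard difficulty-1 = 2^32 hashes convention):

```python
# Per-share difficulty = hashrate * share_seconds / 2**32
d_big   = 1e12  * 2  / 2**32   # 1 TH/s pool, 2 second shares
d_small = 200e9 * 10 / 2**32   # 200 GH/s pool, 10 second shares
# Both come out to ~466: the faster, bigger chain costs each miner
# nothing extra in share difficulty -- only orphan handling changes.
```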

--------------------------------------

dynamic p2pool.
Building upon the idea of multiple p2pools, the protocol could be expanded so that a new node queries a p2pool information net and gets the status of existing p2pools.  The network assigns new nodes where they best optimize the balance of the network.   If the protocol enforces this pool assignment then there is no human gaming involved and the pools will be relatively balanced.  As pools grow or shrink they can be split or combined with other pools by the protocol.   Some simulation would be needed to find the optimal balance between share variance and block variance.  The network could even intentionally allow variance in pool size and share time: larger pools with high difficulty and long share times to accommodate very large miners, and smaller pools with lower difficulty to provide the optimal solution for smaller individual miners.

--------------------------------------

sub p2pools
Imagine the p2pool forming a "backbone"; for max efficiency the share time would be longer.  Say 1 share per 60 seconds instead of 10 (difficulty goes up by a factor of 6).  At 1 TH/s that is ~12,000 difficulty (which is high, but not as high as the block difficulty of 1.3 million).  Due to the 12K+ difficulty, the only participants on this backbone are a) major hashing farms, b) conventional pools, and c) sub-p2pools.

You notice I said conventional pools.  Conventional pools which submit valid shares to p2pool are less of a security risk to Bitcoin than opaque proprietary pools.

For smaller miners who want a fully decentralized solution, they could form/join "sub-p2pools". These pools could be tuned for different speed miners to provide an optimal balance between block difficulty and share difficulty.  They would maintain a sub-p2pool level share chain and use that to set the reward breakout for the subpool.  When one node in the subpool solves a "master p2pool" difficulty share (12K in the above example) it submits it to the main pool (which updates the ultimate reward split to include the subpool's current split for that share).  Subpools could be created manually (a Rassah small-miner subpool), or eventually dynamically by a protocol similar to the second solution.  This requires a modest change to the existing p2pool (which would form the backbone): currently 1 share can only be assigned to 1 address.  To make sub-p2pools possible it would need to be possible to include an address list and distribution % for 1 share.
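That last change (a share carrying an address list plus distribution percentages instead of a single address) might look something like this sketch.  The function name, arguments, and data shape are all hypothetical, not p2pool's actual wire format:

```python
def expand_subpool_share(share_difficulty, split):
    """Credit one master-chain share to a sub-pool's members.

    `split` maps each member address to its fraction of the
    sub-pool's recent work (fractions must sum to 1).  Returns
    per-address difficulty credit on the master chain, so the
    master chain's payout math stays difficulty-weighted as before.
    """
    assert abs(sum(split.values()) - 1.0) < 1e-9, "split must sum to 1"
    return {addr: share_difficulty * frac for addr, frac in split.items()}
```

Usage: when a sub-pool member finds a 12K-difficulty backbone share, the sub-pool submits it along with its current split, and everyone in the sub-pool gets credited proportionally.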

--------------------------------------


Note: these ideas aren't fleshed out.  Likely someone can point out issues and areas where the explanation is incomplete.  They are designed more as a thought exercise to look at potential avenues for expanding p2pool to handle, potentially, someday 51% of network hashing power (at which point an internal 51% attack becomes impossible).   Obviously these are complex ideas which will take time to implement.  I believe that "front ends" are preferable to small miners going back to deepbit and could act as a bridge to transition p2pool from 250 GH/s to 1 TH/s+ while more decentralized solutions are developed.
legendary
Activity: 1204
Merit: 1000
฿itcoin: Currency of Resistance!
When I find a block, my reward is bigger?!
legendary
Activity: 1680
Merit: 1035
The limitation of Bitcoin is that the block chain is only aware of the total hashing power, not individual miners, and thus can only adjust accordingly. The P2Pool protocol chain is short, and easy to change, and each instance of P2Pool is aware of both the pool's hashing power and its own local hashing power.
Would it be possible to just change the algorithm from adjusting difficulty to make a pool block every ten seconds based on overall pool hashing power, to one that bases it on the fraction of your hashing power compared to the overall pool? Have the difficulty start out at average, and as you mine, every thirty minutes recalculate your local difficulty based on reported hashing power, so that strong miners get increased difficulty and fewer shares and weak miners get more?
Or is this too difficult due to all blocks in the chain needing to be the same, or risky due to being easily hacked?