
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 19. (Read 2591916 times)

newbie
Activity: 32
Merit: 0
Hey all. I recently got p2pool working with Litecoin. The pool worked fine for about 6 hours, but now it won't connect to any p2pool peers. I had been connected to 6 peers since starting; now it's 0. Do I need to manually add other p2pool nodes?
hero member
Activity: 818
Merit: 1006
Looking at joining, but can someone confirm when the last block was found by the pool?
Last block found on mainnet was 8/15/2017. mainnet currently has 0.8 PH/s, and is expected to find one block every 91 days on average.

Last block found on jtoomimnet was 9/18/2017. jtoomimnet currently has 2.6 PH/s, and is expected to find one block every 28 days on average. jtoomimnet will be adding at least 4 PH/s over the next 45 days.

List of all blocks found (both mainnet and jtoomimnet):
https://blockchain.info/address/1Kz5QaUPDtKrj5SqW5tFkn7WZh8LmQaQi4
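The expected-time figures above follow from the standard formula: a block takes on average difficulty * 2^32 hashes, so expected time = difficulty * 2^32 / hashrate. A quick sketch (the difficulty value below is an assumed approximation for late 2017, chosen only to illustrate the arithmetic):

```python
def expected_days_per_block(pool_hashrate_hs, network_difficulty):
    """Expected time for a pool to find a block, in days.

    A block requires on average difficulty * 2**32 hashes.
    """
    seconds = network_difficulty * 2**32 / pool_hashrate_hs
    return seconds / 86400.0

# Assumed bitcoin difficulty of roughly 1.45e12 (approximate for late
# 2017; back-of-envelope, not an official figure).
DIFFICULTY = 1.45e12

print(round(expected_days_per_block(0.8e15, DIFFICULTY)))  # mainnet: ~90 days
print(round(expected_days_per_block(2.6e15, DIFFICULTY)))  # jtoomimnet: ~28 days
```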
newbie
Activity: 7
Merit: 0
Looking at joining, but can someone confirm when the last block was found by the pool?
newbie
Activity: 27
Merit: 0
Sorry for the slow response, Cryptonomist.

1) Is it correct to assume that the instance of the Tracker class in p2pool/util/forrest.py is the actual share chain?
That's one part of the relevant code. The actual tracker for the share chain is the OkayTracker class, which inherits from forest.Tracker and is instantiated as node.tracker. I think OkayTracker's code is more relevant and interesting.

By the way, it's forest.py with one 'r' (as in a bunch of trees), not forrest.py (as in the author's name).

Quote
2) Can someone suggest a way to get the time between the "Time first seen" and the addition to the share chain.
I would suggest printing out the difference between the time first seen and time.time() at the end of data.py:OkayTracker.attempt_verify(). That seems like useful information for everyone. If you put it under the --bench switch and submit it as a PR to 1mb_segwit, I'd likely merge it. Don't worry about it if you're not good with git/github, as it's not a big deal either way.

Quote
3)  the flow of a share between the moment the node detects its existence and the final addition of it to the share chain is not very clear to me.
Yeah, that code is kinda spaghettified. It might help to insert a "raise" somewhere and then run it so you can get a printout of the stack trace at that point.

Quick from-memory version: the stuff in data.py (the BaseShare class and its child classes) gets called during share object instantiation and deserialization. When p2p.py receives a serialized share over the wire, it deserializes it and turns it into an object, then asks the node object in node.py what to do with it. node.py then passes it along to the node.tracker object and asks the tracker if it fits in the share chain; if it does, then node.tracker adds it to node.tracker.verified, and the next time node.tracker.think() is run (which is probably immediately afterward), node.tracker may choose to use that new share for constructing work to be sent to miners. This causes work.py:get_work() to generate a new stratum job (using data.py:*Share.generate_transaction() to make a coinbase transaction stub and block header) which gets passed via bitcoin/worker_interface.py and bitcoin/stratum.py to the mining hardware.

Quote
4a) Under the rules in the main p2pool network the shares are 100kb. So after 300 seconds on average the shares will have a total of 1mb transactions, and after 600 seconds on average the bitcoin blocks would be 2mb. Is this correct?
Sorta. It's a limit of 100 kB of new transactions. Less than 100 kB of new transactions can be added per share. The serialized size of the share is much lower than this, since the transactions are referred to by hash instead of as the full transaction; the serialized size of the candidate block that the share represents is much larger than this, and includes the old (reused) transactions as well as the new ones.

If the transaction that puts it over 100 kB is 50 kB in size, and has 51 kB of new transactions preceding it, then only 51 kB of transactions get added. If some of the old transactions from previous shares have been removed from the block template and replaced with other transactions, then those old transactions don't get included in the new share and your share (and candidate block) size goes down.

In practice, the candidate block sizes grow slower than 100 kB per share. I haven't checked very thoroughly how much slower, but in the one instance that I followed carefully it took around 25 shares to get to 1 MB instead of 10 shares.

Quote
4b) The hash of the header of the bitcoin block contains the merkle tree of the transactions the block contains. ... How can the transactions of several shares be added to get for example after 300 seconds 1 mb of transactions in a bitcoin block.
The hash of a share *is equal to* the hash of the corresponding bitcoin block header. The share structure includes a hash of all p2pool-specific metadata embedded into the coinbase transaction (search for gentx in data.py). The share has two different serializations: the long serialization (which is exactly equal to the block serialization, and which only includes the hash of the share-specific metadata), and the short serialization (which includes the block header plus the share-specific metadata such as the list of hashes of new transactions, the 2-byte or 3-byte reference links for the old transactions, the share difficulty, timestamps, etc.). Any synced p2pool node can recreate the long serialization from the short serialization, data in the last 200 shares of the share chain, and the hash:full_tx map in the node's known_txs_var.

The transactions aren't "added". If a transaction has been included in one of the last 200 shares in the share chain, then a share can reference that share using the number of steps back in the share chain (1 byte) and the index of the transaction within that share (1 or 2 bytes). These transactions -- "old" transactions -- do not count toward the 100 kB limit. If a transaction has not been included before, then the share will reference this transaction using its full 32-byte hash, and counts its full size (e.g. 448 bytes) against the 100 kB limit. Both types of references are committed into the metadata hash in the gentx, so both are immutable and determined at the time the stratum job is sent to the mining hardware.

https://github.com/jtoomim/p2pool/blob/9692d6e8f9980b057ae67e8970353be3411fe0fe/p2pool/data.py#L156

My code currently has a soft limit of 1000 kB (instead of 100 kB or 50 kB) on new transactions per share, but unlike the p2pool master branch, this is not enforced at the consensus layer, so anyone can modify their code to exceed this limit without consequences from should_punish_reason().


Thank you for your reply. It's most helpful.

I will put the code on github once I've implemented and tested it.
hero member
Activity: 818
Merit: 1006
Sorry for the slow response, Cryptonomist.

1) Is it correct to assume that the instance of the Tracker class in p2pool/util/forrest.py is the actual share chain?
That's one part of the relevant code. The actual tracker for the share chain is the OkayTracker class, which inherits from forest.Tracker and is instantiated as node.tracker. I think OkayTracker's code is more relevant and interesting.

By the way, it's forest.py with one 'r' (as in a bunch of trees), not forrest.py (as in the author's name).

Quote
2) Can someone suggest a way to get the time between the "Time first seen" and the addition to the share chain.
I would suggest printing out the difference between the time first seen and time.time() at the end of data.py:OkayTracker.attempt_verify(). That seems like useful information for everyone. If you put it under the --bench switch and submit it as a PR to 1mb_segwit, I'd likely merge it. Don't worry about it if you're not good with git/github, as it's not a big deal either way.
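A minimal sketch of that measurement (the helper name and the FakeShare class are illustrative stand-ins; only share.time_seen, which BaseShare.__init__ sets via time.time(), is from the real code):

```python
import time

def report_share_latency(share):
    # Hypothetical helper: the idea is to call something like this at
    # the end of data.py:OkayTracker.attempt_verify(), guarded by the
    # --bench switch.  share.time_seen is set in BaseShare.__init__.
    latency = time.time() - share.time_seen
    print('share verified %.3f s after first seen' % latency)
    return latency

# Toy stand-in for a share object, for demonstration only:
class FakeShare(object):
    def __init__(self, seen_ago):
        self.time_seen = time.time() - seen_ago

report_share_latency(FakeShare(1.5))  # ~1.5 s
```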

Quote
3)  the flow of a share between the moment the node detects its existence and the final addition of it to the share chain is not very clear to me.
Yeah, that code is kinda spaghettified. It might help to insert a "raise" somewhere and then run it so you can get a printout of the stack trace at that point.

Quick from-memory version: the stuff in data.py (the BaseShare class and its child classes) gets called during share object instantiation and deserialization. When p2p.py receives a serialized share over the wire, it deserializes it and turns it into an object, then asks the node object in node.py what to do with it. node.py then passes it along to the node.tracker object and asks the tracker if it fits in the share chain; if it does, then node.tracker adds it to node.tracker.verified, and the next time node.tracker.think() is run (which is probably immediately afterward), node.tracker may choose to use that new share for constructing work to be sent to miners. This causes work.py:get_work() to generate a new stratum job (using data.py:*Share.generate_transaction() to make a coinbase transaction stub and block header) which gets passed via bitcoin/worker_interface.py and bitcoin/stratum.py to the mining hardware.
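The flow above can be caricatured with stub objects. All names here are simplified stand-ins, not the actual API; the real classes (in p2p.py, node.py, data.py, and work.py) are far more involved:

```python
# Simplified stand-ins for the real p2pool objects, tracing the path:
# p2p receives share -> node -> tracker verification -> think() -> work.

class Tracker(object):
    def __init__(self):
        self.verified = {}

    def attempt_verify(self, share):
        # the real code checks proof of work, parents, difficulty, etc.
        self.verified[share['hash']] = share
        return True

class Node(object):
    def __init__(self):
        self.tracker = Tracker()
        self.best_share = None

    def handle_share(self, share):   # called from p2p.py after deserialization
        if self.tracker.attempt_verify(share):
            self.think()

    def think(self):                 # pick the best verified share to build on
        self.best_share = max(self.tracker.verified.values(),
                              key=lambda s: s['work'])

def get_work(node):                  # work.py: build a new stratum job
    return {'previous_share': node.best_share['hash']}

node = Node()
node.handle_share({'hash': 'abc', 'work': 10})
print(get_work(node))  # {'previous_share': 'abc'}
```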

Quote
4a) Under the rules in the main p2pool network the shares are 100kb. So after 300 seconds on average the shares will have a total of 1mb transactions, and after 600 seconds on average the bitcoin blocks would be 2mb. Is this correct?
Sorta. It's a limit of 100 kB of new transactions. Less than 100 kB of new transactions can be added per share. The serialized size of the share is much lower than this, since the transactions are referred to by hash instead of as the full transaction; the serialized size of the candidate block that the share represents is much larger than this, and includes the old (reused) transactions as well as the new ones.

If the transaction that puts it over 100 kB is 50 kB in size, and has 51 kB of new transactions preceding it, then only 51 kB of transactions get added. If some of the old transactions from previous shares have been removed from the block template and replaced with other transactions, then those old transactions don't get included in the new share and your share (and candidate block) size goes down.
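The 100 kB cutoff described above can be sketched as a greedy fill (a simplification; the sizes and names are illustrative, not p2pool's actual selection code):

```python
NEW_TX_LIMIT = 100 * 1000  # bytes of *new* transactions allowed per share

def select_new_txs(candidate_tx_sizes):
    """Greedily take new transactions until the next one would exceed
    the limit; that transaction and everything after it are left out."""
    chosen, total = [], 0
    for size in candidate_tx_sizes:
        if total + size > NEW_TX_LIMIT:
            break
        chosen.append(size)
        total += size
    return chosen, total

# The example from the text: 51 kB of smaller transactions followed by
# a 50 kB transaction -- only the first 51 kB is added to this share.
sizes = [17000, 17000, 17000, 50000]
chosen, total = select_new_txs(sizes)
print(total)  # 51000
```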

In practice, the candidate block sizes grow slower than 100 kB per share. I haven't checked very thoroughly how much slower, but in the one instance that I followed carefully it took around 25 shares to get to 1 MB instead of 10 shares.

Quote
4b) The hash of the header of the bitcoin block contains the merkle tree of the transactions the block contains. ... How can the transactions of several shares be added to get for example after 300 seconds 1 mb of transactions in a bitcoin block.
The hash of a share *is equal to* the hash of the corresponding bitcoin block header. The share structure includes a hash of all p2pool-specific metadata embedded into the coinbase transaction (search for gentx in data.py). The share has two different serializations: the long serialization (which is exactly equal to the block serialization, and which only includes the hash of the share-specific metadata), and the short serialization (which includes the block header plus the share-specific metadata such as the list of hashes of new transactions, the 2-byte or 3-byte reference links for the old transactions, the share difficulty, timestamps, etc.). Any synced p2pool node can recreate the long serialization from the short serialization, data in the last 200 shares of the share chain, and the hash:full_tx map in the node's known_txs_var.

The transactions aren't "added". If a transaction has been included in one of the last 200 shares in the share chain, then a share can reference that share using the number of steps back in the share chain (1 byte) and the index of the transaction within that share (1 or 2 bytes). These transactions -- "old" transactions -- do not count toward the 100 kB limit. If a transaction has not been included before, then the share will reference this transaction using its full 32-byte hash, and counts its full size (e.g. 448 bytes) against the 100 kB limit. Both types of references are committed into the metadata hash in the gentx, so both are immutable and determined at the time the stratum job is sent to the mining hardware.

https://github.com/jtoomim/p2pool/blob/9692d6e8f9980b057ae67e8970353be3411fe0fe/p2pool/data.py#L156
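The two reference types can be sketched like this (function name and tuple layout are illustrative only; the real serialization in data.py uses compact binary encodings, not tuples):

```python
def make_tx_ref(tx_hash, recent_shares):
    """Reference a transaction either as an 'old' (steps_back, index)
    pair or as a 'new' full hash.  recent_shares is a list of each
    recent share's transaction-hash list, most recent first."""
    for steps_back, share_txs in enumerate(recent_shares, start=1):
        if tx_hash in share_txs:
            # old transaction: ~2-3 bytes on the wire, free of the limit
            return ('old', steps_back, share_txs.index(tx_hash))
    # new transaction: full 32-byte hash, counts against the 100 kB limit
    return ('new', tx_hash)

recent = [['aa', 'bb'], ['cc']]   # the two most recent shares' tx hashes
print(make_tx_ref('cc', recent))  # ('old', 2, 0)
print(make_tx_ref('dd', recent))  # ('new', 'dd')
```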

My code currently has a soft limit of 1000 kB (instead of 100 kB or 50 kB) on new transactions per share, but unlike the p2pool master branch, this is not enforced at the consensus layer, so anyone can modify their code to exceed this limit without consequences from should_punish_reason().
newbie
Activity: 41
Merit: 0
P2Pool is ultimately forrestv's pool, and jtoomimnet is jtoomim's fork of forrestv's pool. forrestv is therefore under no obligation to promote or even acknowledge jtoomimnet, whether in his original post on this thread, on the main P2Pool GitHub, or on the official P2Pool webpage. The onus is on jtoomim to create his own thread for jtoomimnet if he wishes to make jtoomimnet more publicly known.

I was actually referring to forrestv's links on the first page of this thread, and if that had been my first exposure to p2pool, I would never have given it a chance. It actually feels like an abandoned project when going to those links. Maybe everyone's turning off the donation, and he's not getting compensated for his work. I don't know. These are just some thoughts, and not meant to criticize forrestv - I'd just like to see p2pool get a little bigger to help reduce variance and increase decentralization.

This is all the info available to someone who might be interested in mining at p2pool:

P2Pool homepage: http://p2pool.in/ makes it seem as if you have to run your own node in order to mine at p2pool

P2Pool stats page, made by twmz: http://p2pool.info/ shows the last block was mined 2 years ago

Things that are not P2Pool (and just people running P2Pool): p2pool.org This would be good if it were actually a link

Graphs: http://p2pool.info/ http://forre.st:9332/ These are turnoffs, and his own node has no stats

List of all blocks found: http://blockexplorer.com/address/1Kz5QaUPDtKrj5SqW5tFkn7WZh8LmQaQi4 not very promising if it shows the last block March 2016

P2Pool wiki page - this is probably the reason I looked into p2pool further
sr. member
Activity: 351
Merit: 410
I opened up the webpage for the p2pool jtoomim node that I am connected to, and it says a block was found an hour ago, but there is nothing in my wallet. I checked another node, and the addresses listed did not receive anything either. It used to be instant - or do the coins need to mature now? It is a cold storage wallet, not an exchange. That's about $400 (~0.070 BTC) I should have received.

My wallet: https://blockchain.info/address/1AdPjxC3u2K32pyRFYaLAUp59qXXGP4fpd
Block: https://blockchain.info/block/000000000000000000817d56511e1e3f2269250ee49f9fa651fc2c625c360138
Node I'm connected to: https://btc.coinpool.pw/#
Another node that has addresses that show no transactions: http://low-doa.mine.nu:9334/static/
The node you were looking at, and are currently mining at, is running a custom frontend for what seems to be a custom version of jtoomimnet. I assume that it has a bug that caused it to falsely report Antpool's block as being P2Pool's. jtoomimnet's last block was found on September 18, 2017, more than 28 days ago.

There is also currently a bug in the standard P2Pool web frontend that is causing it to not announce new blocks found by P2Pool.

Anyway, the important thing here is not whether a node is correctly announcing blocks found by P2Pool, but whether you are actually mining at P2Pool in the first place. The best guarantee of this is, of course, to mine at your own P2Pool node. But if that is not possible for you, or if you prefer to mine at a third party's node, what you can do to ensure that you are indeed mining at P2Pool is to visit different nodes' web frontends (for the fork that you are on), scroll to the list of P2Pool miners at the bottom, and check to see that you are indeed in the list of P2Pool miners. If you are in the list, then newfound and accepted blocks (from the P2Pool fork that you are on) will always result in your payout being immediately generated to your bitcoin wallet, whether or not the block was correctly announced as P2Pool's block on any P2Pool node's web frontend, Blockchain's explorer, or Blocktrail's explorer.

TL;DR: Check to see if you are indeed mining at your preferred fork of P2Pool. If you are, and if your P2Pool fork finds a block, you will immediately receive your payout in your bitcoin wallet.

Any verifiable nodes? Where to go?
I suggest jtoomim's own jtoomimnet nodes. Their web frontends may be found here and here.

You may also use this P2Pool node scanner to find active P2Pool nodes on both forks.

Nobody seems to care that it is impossible to find this fork if you weren't already following this thread. The info on the main page is outdated. The links are for pages that don't have current info. How is the hashrate gonna increase if it's not readily accessible to anyone? I'm gonna make my own node again, but not everyone can do it
P2Pool is ultimately forrestv's pool, and jtoomimnet is jtoomim's fork of forrestv's pool. forrestv is therefore under no obligation to promote or even acknowledge jtoomimnet, whether in his original post on this thread, on the main P2Pool GitHub, or on the official P2Pool webpage. The onus is on jtoomim to create his own thread for jtoomimnet if he wishes to make jtoomimnet more publicly known.
newbie
Activity: 27
Merit: 0
Hello,

I'm still going through the p2pool code. I have however a few questions about how p2pool works.

1) Is it correct to assume that the instance of the Tracker class in p2pool/util/forrest.py is the actual share chain? It seems to keep track of all the shares in the share chain. Or is this assumption wrong?

2) As jtoomim pointed out, the "Time first seen"  on the p2pool web page comes from self.time_seen = time.time() of the class BaseShare in p2pool/data.py. I wonder how much time passes between this time_seen, and the addition of the share to the share chain. I'm trying to put some timers in the code, but I'm not very successful in my attempts. Can someone suggest a way to get the time between the "Time first seen" and the addition to the share chain. I would like to modify the p2pool client and run it on several computers to get an estimate of the time I need to add to "Time first seen" to get the approximate time it takes to update the share chain.

3) Another problem I'm facing, but which is related to the problem I have in question 2, is that the flow of a share between the moment the node detects its existence and the final addition of it to the share chain is not very clear to me. My image of the process is for the moment the following:
* the main function in p2pool/main.py creates an instance of class P2PNode from p2pool/node.py.
* the instance of P2PNode handles the p2pool shares and the bitcoin blocks, through functions like handle_shares, handle_get_shares, etc... This part I understand reasonable well I think. For example in the method handle_shares it adds the shares that were given as parameter to the method to the tracker if the tracker didn't know them already. I was able to add some lines to the code that return the time it takes to process the new shares.
My problem is that for the moment I can't connect the methods in P2PNode to the time function in BaseShare (and the classes that inherit from it, Share and NewShare). I don't understand how the processes in P2PNode are arranged chronologically relative to the time function in BaseShare. Would someone be able to clarify this for me? It would be a great help.

4) My last question is related to a discussion between jtoomim and veqtrus. From the forum posts I could understand that the share size is relevant to the size of the bitcoin blocks found by p2pool. Apparently the sum of the size of the transactions in the different shares tells us something about the expected size of the bitcoin block if a block is found by the p2pool network. So on average a share is found every 30 seconds. A bitcoin block is found on average every 10 minutes. Under the rules in the main p2pool network the shares are 100kb. So after 300 seconds on average the shares will have a total of 1mb transactions, and after 600 seconds on average the bitcoin blocks would be 2mb. Is this correct?
There is however something I don't understand. The hash of the header of the bitcoin block commits to the merkle root of the transactions the block contains. So these transactions can't be changed without changing the merkle root, and thus the block hash. Now p2pool creates its own shares, which also have a hash that can potentially be one accepted by the bitcoin network. This hash needs to have sufficient zeros in front to be considered a block by the bitcoin network. So the share hash, I suppose, also commits to a merkle root of the transactions, which means those transactions can't be changed either. My question is now: how can this work? How can the transactions of several shares be added to get for example after 300 seconds 1 mb of transactions in a bitcoin block? If transactions are added, then the merkle root changes, and also the share hash, so the work done by the miners would be lost. Of course I know that I'm missing something, but can someone explain to me how it works, and where in the code I can find it?

Thank you in advance
newbie
Activity: 41
Merit: 0
That block was mined by Antpool, not p2pool. It looks like the pool is either falsely reporting a found block, or it is actually mining at Antpool and found a block for Antpool. If the latter is the case and you're not the pool's owner, then you've been scammed.

Any verifiable nodes? Where to go? Nobody seems to care that it is impossible to find this fork if you weren't already following this thread. The info on the main page is outdated. The links are for pages that don't have current info. How is the hashrate gonna increase if it's not readily accessible to anyone? I'm gonna make my own node again, but not everyone can do it
hero member
Activity: 537
Merit: 524
That block was mined by Antpool, not p2pool. It looks like the pool is either falsely reporting a found block, or it is actually mining at Antpool and found a block for Antpool. If the latter is the case and you're not the pool's owner, then you've been scammed.
newbie
Activity: 41
Merit: 0
I opened up the webpage for the p2pool jtoomim node that I am connected to, and it says a block was found an hour ago, but there is nothing in my wallet. I checked another node, and the addresses listed did not receive anything either. It used to be instant - or do the coins need to mature now? It is a cold storage wallet, not an exchange. That's about $400 (~0.070 BTC) I should have received.

My wallet: https://blockchain.info/address/1AdPjxC3u2K32pyRFYaLAUp59qXXGP4fpd
Block: https://blockchain.info/block/000000000000000000817d56511e1e3f2269250ee49f9fa651fc2c625c360138
Node I'm connected to: https://btc.coinpool.pw/#
Another node that has addresses that show no transactions: http://low-doa.mine.nu:9334/static/
sr. member
Activity: 351
Merit: 410
Are the blocks that are mined on jtoomin's fork empty, like they were on the main p2pool?
No.

I think when I was on, a few people were testing jtoomin's fork, but I thought that it was supposed to get merged with the main p2pool. Is that not the case anymore?
Mainnet P2Pool must support altcoins, and jtoomimnet currently does not work properly with altcoins. The merging of jtoomimnet into mainnet is therefore on hold until jtoomimnet is made compatible with altcoins.
newbie
Activity: 43
Merit: 0
Are the blocks that are mined on jtoomin's fork empty, like they were on the main p2pool?

I haven't been on p2pool since around Christmas I think, but I remember them being mostly empty. I think when I was on, a few people were testing jtoomin's fork, but I thought that it was supposed to get merged with the main p2pool. Is that not the case anymore?

I found it hard to find, as there's no info about his fork anywhere. It's not really accessible, you have to know exactly what you're looking for - I had to ask on here. Maybe at least something on the first page, or are you trying to keep it small - I thought it was nice when the main p2pool was around 10PH.

It would be really cool if there was something like p2pool.org if it's not getting merged.

just some of my thoughts.
p2pool has been a strange thing since this fork... but there is no other way. Empty blocks do not happen on the jtoomim fork of p2pool - the blocks are full. Only the pool software itself can be a strange thing, mainly on Windows...
It seems to run better on Linux.
newbie
Activity: 41
Merit: 0
Are the blocks that are mined on jtoomin's fork empty, like they were on the main p2pool?

I haven't been on p2pool since around Christmas I think, but I remember them being mostly empty. I think when I was on, a few people were testing jtoomin's fork, but I thought that it was supposed to get merged with the main p2pool. Is that not the case anymore?

I found it hard to find, as there's no info about his fork anywhere. It's not really accessible, you have to know exactly what you're looking for - I had to ask on here. Maybe at least something on the first page, or are you trying to keep it small - I thought it was nice when the main p2pool was around 10PH.

It would be really cool if there was something like p2pool.org if it's not getting merged.

just some of my thoughts.
hero member
Activity: 818
Merit: 1006
If uncles was applied to bitcoin then special care would have to be taken to make sure a transaction in an uncle doesn't conflict with a transaction in the main chain.
Actually, you'd simply ignore all transactions in uncles except the coinbase transaction, which gets programmatically modified to reduce the amounts.

Quote
If uncles are added then I think the share interval could be reduced even more to (say) 15 seconds.
I'd really rather not, for CPU and RAM performance reasons in addition to orphan rate reasons. It takes a medium-slow CPU on Python2.7 up to 4 seconds to process a share and issue new work with 1 MB blocks. With 4 MB blocks, you would generally be unable to run p2pool on medium-slow CPUs with a 15 second share interval, as it would take 16 seconds to process each share. Share variance is currently insignificant compared to block variance on p2pool. Increasing the share interval to 60 seconds on average is more likely to be a good idea than decreasing it.

While there are a few major changes to p2pool's architecture that would drastically reduce p2pool's performance load, those changes are quite large and may never be done. It would certainly involve more code than just adding uncles.

Quote
There'd be no more orphans at all. This would reduce the difficulty (by a constant factor but still) which would mean if p2pool gets really big then small hashers can still solve shares fairly often.
There would be uncles instead. Uncles have a smaller reward for the person who mined them than normal shares, so there's still a potential fairness penalty to having high uncle rates. The game theory of uncles is complicated, and if you don't set the rules right, it can be in a miner's best interest to not mention other people's shares as uncles and orphan them instead. Giving the uncled shares a hit in revenue is necessary to making sure there's an incentive to include them.

Quote
Another way of solving this apart from being probabilistic...
A probably better solution is to just follow the share with the lowest hash, regardless of its difficulty. This inherently prefers high-diff shares, but is fair in that low-diff shares have a chance to win against a high-diff share if the low-diff share is particularly lucky in its hash. Basically, it is effectively equivalent to the algorithm I proposed in the post that you linked to, but where the source of randomness is the hashes themselves.
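The lowest-hash rule can be stated in a couple of lines. A sketch only - real shares carry 256-bit hashes, represented here as plain integers:

```python
def pick_winner(competing_shares):
    """Among sibling shares competing to extend the chain, follow the
    one with the numerically lowest hash.  High-difficulty shares have
    lower hashes on average, but a lucky low-difficulty share can still
    win -- which is what makes the rule fair."""
    return min(competing_shares, key=lambda s: s['hash'])

shares = [{'name': 'high_diff', 'hash': 0x03f2},
          {'name': 'low_diff_lucky', 'hash': 0x01aa}]
print(pick_winner(shares)['name'])  # low_diff_lucky
```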
newbie
Activity: 43
Merit: 0
How much virtual memory should be set with 16 GB of RAM in Windows 10?
3.6 GB memory free, 450 MB virtual memory (see picture a few posts up)
sr. member
Activity: 261
Merit: 523
If uncles are added then I think the share interval could be reduced even more to (say) 15 seconds.
For what it's worth, the average time between shares on the p2pool share chain was 10 seconds once upon a time.


I suppose it was increased because of orphans and DOA shares. If so then using uncles should allow share intervals to go lower because there wouldn't be a race to publish shares anymore.
sr. member
Activity: 257
Merit: 250
If uncles are added then I think the share interval could be reduced even more to (say) 15 seconds.
For what it's worth, the average time between shares on the p2pool share chain was 10 seconds once upon a time.
sr. member
Activity: 261
Merit: 523
Has anybody thought about in detail how adding the uncles scheme to p2pool would work?

It was mentioned in the original bitcointalk thread about uncles in 2013 here and here. And also by jtoomim a few months ago here

But I haven't been able to find more details (or even mentions) than those. I've got a couple of ideas but it's good to read everything that's been written about it beforehand.


If uncles was applied to bitcoin then special care would have to be taken to make sure a transaction in an uncle doesn't conflict with a transaction in the main chain. This doesn't apply to p2pool's sharechain because the shares can't conflict with each other in the same way, each share just gets added to the total work that the hasher did. This is quite a cool property of applying uncles to p2pool.

If uncles are added then I think the share interval could be reduced even more to (say) 15 seconds. There'd be no more orphans at all. This would reduce the difficulty (by a constant factor but still) which would mean if p2pool gets really big then small hashers can still solve shares fairly often.


Another problem with the existing algorithm for choosing which share to base work off of is that all shares are treated the same, regardless of their difficulty, which makes low-diff shares have an advantage for selfish mining attacks. Large-diff shares contribute more work to p2pool, so they should have a greater chance of winning an orphan race, ceteris paribus. But you can't just pick the highest-diff share of all shares in an orphan race, since that would cause a race-to-the-top scenario where the optimal difficulty to use for your shares is the highest diff that a competitor uses plus one. That would also allow a selfish miner to reliably win orphan races against siblings: instead of working on a child share of the best known share, you could just orphan it by creating a sibling with +1 difficulty. So it needs to be probabilistic.

Another way of solving this apart from being probabilistic is to limit the difficulty that hashers can mine on. Only allow p2pool hashers to mine 1x difficulty, 10x difficulty, 100x difficulty and 1000x difficulty. Then it becomes impossible for a bad hasher to just mine difficulty + 1, they can only mine difficulty x 10.
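The tier idea above can be sketched as snapping any requested difficulty down to the nearest allowed multiplier (function name and exact tiers are illustrative, not an existing p2pool mechanism):

```python
ALLOWED_MULTIPLIERS = [1, 10, 100, 1000]

def quantize_difficulty(base_diff, requested_diff):
    """Snap a requested share difficulty down to the nearest allowed
    tier (1x, 10x, 100x, or 1000x the base difficulty), so 'base + 1'
    games are impossible: the next step up is always a full 10x."""
    ratio = requested_diff / float(base_diff)
    allowed = [m for m in ALLOWED_MULTIPLIERS if m <= ratio] or [1]
    return base_diff * max(allowed)

print(quantize_difficulty(1000, 1001))   # 1000  (can't mine base + 1)
print(quantize_difficulty(1000, 25000))  # 10000 (snaps down to the 10x tier)
```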

Of course, with the scheme of adding uncles to p2pool, orphans stop happening entirely, so these kinds of difficulty games don't lead to extra profit.
hero member
Activity: 818
Merit: 1006
On p2pool.org, it shows it has been 83 days since the last block, so are they not running jtoomin's fork?
Correct. The most recent block found on jtoomimnet was found on September 18th. Jtoomimnet currently has around 2.6 PH/s and an expected time per block of around 22 days. Mainnet has around 0.7 PH/s and an expected time per block of around 90 days.
