
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 21. (Read 2591916 times)

newbie
Activity: 27
Merit: 0
Hello,

I've got some questions related to P2Pool:

1) On the web interface of my node I find the following in relation to the shares: "Best share", "Verified heads", "Heads", "Verified tails", "Tails". I'm not sure I fully understand what they mean.
      *I suppose "best share" is the last share added to the sharechain (am I right about this?).
      *"Verified heads" contains several shares. I see that one of these shares is the one mentioned in "best share". So what are the other shares mentioned? Other heads that are or will be orphaned, or something like that?
      *What is the difference between "Verified heads" and "heads"?
      *What are the "Verified tails" and "Tails"?

2) When I look to the "Best share" share, then I see stuff like "Time first seen", "Peer first received from", "Timestamp".
      *I suppose "Time first seen" gives the time that my node received an inv (or the p2pool equivalent) from another node containing the hash of the share. Is this correct? Or is it the time when my node has completely downloaded the share?
      *"Peer first received from" then probably gives the node that sent my node the inv containing the share.
      *"Timestamp" is the timestamp from the node that found the share, I guess. But this timestamp depends on the accuracy of that node's clock, and can in theory deviate from reality?

3) Say for example that I want to know the time it takes my node to receive the latest share of the sharechain (i.e. the time it takes my node to converge to the consensus sharechain). I can probably use the timestamp of the share generated by the node that found it, as it will be very close to the moment that node sends the share to the rest of the network. So if I can also get the time at which my node has downloaded the share and restarted its mining work on top of it, I can in principle determine the time it took the share to propagate from the node that found it to my node, and as such determine the "dwell time" of my node. Is this correct?

4) If my reasoning in point 3 is correct, I just need to find the time when my node finishes downloading the new share and adds it to the local sharechain. Is there a way to access the sharechain of my node directly, through a log file or something? I know that p2pool creates logs in the directory /p2pool/data/bitcoin/, and occasionally I find references to downloaded shares and the like in the output. But is there another file for the sharechain that contains time data on when a new share is downloaded and added to the share chain?
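To make point 3 concrete, the computation I have in mind is something like the following (the function and field names here are hypothetical, not actual p2pool API names — the open question is only where to get the second timestamp from):

```python
def estimate_dwell_time(share_timestamp, local_added_time):
    """Estimate the propagation ('dwell') time of a share.

    share_timestamp:  Unix time stamped by the node that found the share
                      (subject to that node's clock accuracy, so the
                      result can even be slightly negative with skew).
    local_added_time: Unix time when our node finished downloading the
                      share and added it to its local sharechain.
    """
    return local_added_time - share_timestamp

# Example: a share stamped at t=1500000000 that our node finished
# adding 2.5 seconds later has an estimated dwell time of 2.5 s.
print(estimate_dwell_time(1500000000.0, 1500000002.5))
```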

Thank you in advance
newbie
Activity: 43
Merit: 0
I pushed some more code to https://github.com/jtoomim/p2pool/tree/1mb_segwit that adds more CPU and (maybe) RAM performance improvements for p2pool. I think it should improve CPU usage and latency by about 30% or more. These improvements should reduce DOA and orphan rates on the network a bit. It looks like running p2pool on CPython with medium-slow CPUs should now be viable without huge DOA/orphan costs, although I still recommend using pypy whenever possible. The new code makes fewer memory allocations when serializing objects for network transmission, which might reduce total memory consumption, or it might not. We'll see in a few days.

I also added a performance profiling/benchmark mode. If p2pool is too slow for you, I would find it helpful if you ran python run_p2pool.py --bench and then sent me a snippet of the output, especially if you can get the output near when a share is received over the network.

Okay, thanks. I will take a look in the next few days. It would be nice to get it running on Win10 with Python 2.7 (pypy does not run here ...)
hero member
Activity: 818
Merit: 1006
I pushed some more code to https://github.com/jtoomim/p2pool/tree/1mb_segwit that adds more CPU and (maybe) RAM performance improvements for p2pool. I think it should improve CPU usage and latency by about 30% or more. These improvements should reduce DOA and orphan rates on the network a bit. It looks like running p2pool on CPython with medium-slow CPUs should now be viable without huge DOA/orphan costs, although I still recommend using pypy whenever possible. The new code makes fewer memory allocations when serializing objects for network transmission, which might reduce total memory consumption, or it might not. We'll see in a few days.

I also added a performance profiling/benchmark mode. If p2pool is too slow for you, I would find it helpful if you ran python run_p2pool.py --bench and then sent me a snippet of the output, especially if you can get the output near when a share is received over the network.
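To illustrate the kind of allocation reduction involved (an illustrative sketch, not the actual p2pool serialization code): repeatedly concatenating immutable bytes objects can allocate and copy on every append, while writing chunks into one growable buffer copies each chunk once.

```python
import io

def serialize_concat(chunks):
    # Each += may allocate a fresh bytes object and copy everything
    # written so far: up to O(n^2) total copying for n chunks.
    out = b''
    for c in chunks:
        out += c
    return out

def serialize_buffered(chunks):
    # A single growable buffer: each chunk is written once.
    buf = io.BytesIO()
    for c in chunks:
        buf.write(c)
    return buf.getvalue()

chunks = [b'\x01\x02', b'\x03', b'\x04\x05\x06']
assert serialize_concat(chunks) == serialize_buffered(chunks) == b'\x01\x02\x03\x04\x05\x06'
```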
newbie
Activity: 12
Merit: 0
Hi, everybody!

How can I configure p2pool with another coin? I'm just interested in CANN.
https://github.com/ilsawa/p2pool-cann

 Hmm, I'm trying to set up the pool for solo mining, but it still connects to other nodes.

 I changed this in p2pool/networks.py:

PERSIST = False
BOOTSTRAP_ADDRS = ''.split(' ')

  but it doesn't work.
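Edit: one possible culprit I noticed — `''.split(' ')` is not actually an empty list in Python, and (if I understand the code right) p2pool also caches previously seen peers in an `addrs` file under its data directory, which it will reconnect to regardless of BOOTSTRAP_ADDRS. A sketch of what I mean (exact behavior may depend on the p2pool fork in use):

```python
# ''.split(' ') yields a one-element list containing an empty string,
# not an empty list:
assert ''.split(' ') == ['']
assert len(''.split(' ')) == 1

# In p2pool/networks.py an actual empty list would be:
PERSIST = False
BOOTSTRAP_ADDRS = []
assert len(BOOTSTRAP_ADDRS) == 0
```

Deleting the cached peer file under the data directory (and, on forks that support it, running with zero outgoing connections) may also be needed to keep the node fully solo.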
newbie
Activity: 2
Merit: 0
Hello everybody! First of all, I would like to thank the developers of p2pool for such a wonderful job. We are Raptor Pool, and we just launched a p2pool node in Paraguay (South America): raptorpool.com, or es.raptorpool.com (in Spanish). We invite everyone in South America to join us. One more thing: we would like to ask whoever is in charge of the node scanner whether it's possible to add a Paraguayan flag for 190.52.135.28 and a location pin on the map? That would help everyone nearby find out about us.

Thanks again and best regards,
newbie
Activity: 42
Merit: 0
I just finished a beta version of a merge of veqtrus's SegWit code with my 1mb_hardforked+lowmem code. I've also added a couple of things that should improve the likelihood of it working with altcoins. I'm testing with Bitcoin right now. I'll try to have the testing done and the Bitcoin code ready for others to use within a couple of days, and then I'll work on the altcoin stuff and test it with Litecoin.
legendary
Activity: 1308
Merit: 1011
Hi, everybody!

How can I configure p2pool with another coin? I'm just interested in CANN.
https://github.com/ilsawa/p2pool-cann
newbie
Activity: 12
Merit: 0
Hi, everybody!

How can I configure p2pool with another coin? I'm just interested in CANN.
hero member
Activity: 818
Merit: 1006
It didn't show up on my node's web front-end, though.
Yeah, it seems that might be a bug. I've noticed once before that we got paid without it showing on the front-end. Better that way than the other way around, I suppose.
sr. member
Activity: 351
Merit: 410
Oh, looks like jtoomimnet mined a block two days ago:

https://blockchain.info/block-index/1626545

1004.743 kB, 3999.157 kWU. Thanks to veqtrus for writing the code for p2pool to support Segwit.

15.1513745 BTC block reward.

Very nice!

It didn't show up on my node's web front-end, though.
hero member
Activity: 818
Merit: 1006
Oh, looks like jtoomimnet mined a block two days ago:

https://blockchain.info/block-index/1626545

1004.743 kB, 3999.157 kWU. Thanks to veqtrus for writing the code for p2pool to support Segwit.

15.1513745 BTC block reward.
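For the curious, the block's stripped (non-witness) size can be recovered from those two numbers, since BIP 141 defines block weight as 3 × stripped size + total size:

```python
total_kb = 1004.743    # total serialized size, kB
weight_kwu = 3999.157  # block weight, kWU

# weight = 3 * stripped + total  =>  stripped = (weight - total) / 3
stripped_kb = (weight_kwu - total_kb) / 3
print(round(stripped_kb, 3))  # 998.138 kB of non-witness data
```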
hero member
Activity: 818
Merit: 1006
If China makes sense, by all means go ahead and set up a DC there. Don't worry about it too much. I know how to write networking code to cross the firewall without performance loss. I just need to know when it's necessary so that I have enough time to do it.

That said, USA and Iceland are both reasonable choices. Are you aiming for the Pacific NW? If so, we might be neighbors.
hero member
Activity: 818
Merit: 1006
It's a different password. The default ssh login and password for SP10s is root/root, even though the default web login/pass is admin/admin.

On the other hand, the default ssh login and password for Antminers is root/admin, even though the default web login/pass is root/root.

I've found that quite annoying over the years.
newbie
Activity: 31
Merit: 0
Does anyone here know about Spondoolies SP10s?

I am trying to connect using PuTTY, and it keeps rejecting the password I set in the web interface.

hero member
Activity: 818
Merit: 1006
htle0006, sounds good. Keep me updated on your progress. Quick question: will your farm be inside China? If so, I should do some network optimization work to help with crossing the Great Firewall.

As long as the large miner is nice to everyone else and sets a much higher difficulty ...
Rather, as long as the large miner isn't mean to everyone else by manually overriding the current difficulty code. If that happened, I'd either make it a consensus rule or I'd turn the share chain into a share DAG so that it didn't matter.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
good answer ...

Basically, it would benefit the pool as a whole if we had that much hashing power in the pool: we would find blocks more frequently and would be rewarded based on the percentage of work done. A large miner would face a higher difficulty than a small miner, so the small miner would have a greater chance of getting shares. Payouts would be more frequent, which is where the greatest benefit lies, even if the individual payouts are smaller. In the end it would even out, or exceed what the pool is able to get right now with the hash rate being so low.

BB

As long as the large miner is nice to everyone else and sets a much higher difficulty ...
newbie
Activity: 31
Merit: 0
good answer ...

Basically, it would benefit the pool as a whole if we had that much hashing power in the pool: we would find blocks more frequently and would be rewarded based on the percentage of work done. A large miner would face a higher difficulty than a small miner, so the small miner would have a greater chance of getting shares. Payouts would be more frequent, which is where the greatest benefit lies, even if the individual payouts are smaller. In the end it would even out, or exceed what the pool is able to get right now with the hash rate being so low.

BB
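To put rough numbers on that intuition (a simplified sketch, ignoring PPLNS window effects): under proportional-style payouts, a miner's expected fraction of each block reward depends only on its share of the pool's hashrate, not on its share difficulty — share difficulty only changes how often shares (and thus payout entries) arrive, i.e. the variance.

```python
def expected_payout_fraction(miner_hashrate, pool_hashrate):
    # Expected fraction of each block reward; independent of share difficulty.
    return miner_hashrate / pool_hashrate

def shares_per_hour(miner_hashrate, share_difficulty):
    # A difficulty-1 share takes 2**32 hashes in expectation, so higher
    # difficulty means fewer, larger shares: same mean payout, more variance.
    return miner_hashrate * 3600.0 / (share_difficulty * 2**32)

# A 10 PH/s miner on a 50 PH/s pool expects 20% of each block reward,
# whether it submits many low-difficulty shares or few high-difficulty ones.
assert expected_payout_fraction(10e15, 50e15) == 0.2
```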