
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 43.

legendary
Activity: 1308
Merit: 1011
I just switched over to: p2pool () ‐ node dashboard - SegWit - Eurasian server
                                 82.200.205.30:9332
This is my node. More info and a complete list of nodes are in the first message here: https://bitcointalksearch.org/topic/annp2pool-p2pools-on-the-siberiaminenu-cryptominenu-852083
newbie
Activity: 37
Merit: 0
(apologies to the veteran miners, i'm still new here)

I mentioned earlier that I had understood that ping response time does matter, in the
sense of being able to pick up work and report it back.

I had +/- 100ms to the node I was on before it went dead, so I stayed there.

I found another node at 30ms, but I was advised NOT to mine there.

The next best performer, and with more than 5 users, responds between 300-500ms;
the 2nd and 3rd addresses respond at around 200ms.

I know it doesn't have anything to do with hashing, but I understood that the faster
you can pick up work and report it back, the better.

newbie
Activity: 37
Merit: 0
stratum+tcp://p2pool.org:9332 seems to have been down/dead/frozen for nearly an hour.

I've been posting about what seem to be some issues with the node I was on; or
maybe it's P2Pool itself?

I was connected here: stratum+tcp://p2pool.org:9332

I just switched over to: p2pool () ‐ node dashboard - SegWit - Eurasian server
                                 82.200.205.30:9332

Anyone else experiencing/know anything?



full member
Activity: 175
Merit: 100
Is there something I am missing about the Avalon 741 setup and P2Pool? It is not getting enough shares for its payout to reflect what others with the same hash rate are earning. I know shares and luck come into play, but there is such a difference that I feel the need to ask. My node is not forked, running version 16.0-4-gde1be30. I ran a node in the 2012-2014 timeframe, so I had some catching up to do and read through the pages from where I left off. I see the old relay has now changed, there is a forked P2Pool version running, and some of the configuration options used in the past are not optimal for the current environment.

The Avalon 741 has the new driver and MM updates. If it is one of those issues where the hardware is just not compatible with the quick "start-drop-start" environment of P2Pool, that will be a shame, as I enjoy this rather than conventional pools.

Any suggestions/help would be appreciated. Yes, it is good to be back. I see some of the old players are still around as well.
hero member
Activity: 818
Merit: 1006
Why not attach a smooth, nice frontend to your p2pool fork by default?
I'm open to the idea. The ugly one has additional information that is useful for debugging, so it hasn't been a priority for me. If someone wants to make a PR, I'd strongly consider it.
legendary
Activity: 1258
Merit: 1027
Is something negative happening to the pool?

My hash rate is good, but my expected payout
has been dropping the last couple of days.

My node uptime reset from 45+ days to 10 hours,
and we're down a couple of users.

Efficiency seems to have gone up, from the low 80s
to the low 90s.

Looks like some of the pool stats are down:
TH, expected payout....

I'm looking to switch nodes.

I'm new to mining and would appreciate any advice/insight.

Thanks!


As the overall pool hashrate increases, your expected payout for a given hashrate will decrease, but the pool is also expected to find blocks more frequently, so in the end your payout will be about the same...
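
A rough back-of-envelope sketch of why the two effects cancel (all the figures here are hypothetical, for illustration only, not pool data):

Code:
# Hypothetical figures: block reward, hashrates, and the pool's share of the
# network are assumptions for illustration only.
BLOCK_REWARD = 12.5        # BTC per block at the time of this thread
MY_HASHRATE = 10e12        # 10 TH/s
NETWORK_HASHRATE = 5e18    # assumed total network hashrate

def expected_daily_payout(pool_hashrate):
    # The pool finds a share of the ~144 daily blocks proportional to its hashrate...
    blocks_per_day = 144.0 * pool_hashrate / NETWORK_HASHRATE
    # ...while each block pays you in proportion to your share of the pool.
    payout_per_block = BLOCK_REWARD * MY_HASHRATE / pool_hashrate
    return blocks_per_day * payout_per_block

# Doubling the pool hashrate halves payout-per-block but doubles block
# frequency, so the daily expectation is unchanged:
print(expected_daily_payout(1e15))   # 1 PH/s pool
print(expected_daily_payout(2e15))   # 2 PH/s pool -> same number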
newbie
Activity: 37
Merit: 0
Is something negative happening to the pool?

My hash rate is good, but my expected payout
has been dropping the last couple of days.

My node uptime reset from 45+ days to 10 hours,
and we're down a couple of users.

Efficiency seems to have gone up, from the low 80s
to the low 90s.

Looks like some of the pool stats are down:
TH, expected payout....

I'm looking to switch nodes.

I'm new to mining and would appreciate any advice/insight.

Thanks!
hero member
Activity: 496
Merit: 500

P2pool seems to be alive again!!! Smiley

http://imgur.com/3LBaNYA
legendary
Activity: 2030
Merit: 1076
A humble Siberian miner
https://github.com/jtoomim/p2pool/tree/1mb_hardforked has new code. This should fix the sync issues that people have been having. It also includes a few bootstrap nodes to make it easier for people to connect to the jtoomimnet p2pool.

To set up and run my fork, you will need to do the regular steps for installing p2pool, except that instead of getting the regular github.com/p2pool/p2pool repository, you will do:

Code:
git clone https://github.com/jtoomim/p2pool
cd p2pool
git checkout 1mb_hardforked
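
After checkout, the node is started the same way as stock p2pool; a minimal sketch, assuming bitcoind is already running locally with RPC credentials that p2pool can read from bitcoin.conf (check --help for the full list of flags, such as the payout address option):

Code:
cd p2pool
python run_p2pool.py --help    # list all available options
python run_p2pool.py           # start with defaults, reading bitcoin.conf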

If anyone continues to have trouble connecting to jtoomimnet, please let me know.

Why not attach a smooth, nice frontend to your p2pool fork by default? Everyone who deploys a node for mining will just end up installing some other frontend instead of the current ugly default one...
hero member
Activity: 818
Merit: 1006
About half would be solved by switching to something like C or Rust. We could rewrite the whole codebase to solve those problems, sure. We could also profile the code to find those problems and tweak a few lines or rewrite a module in C using Cython or CFFI. Personally, the latter approach sounds easier and more fun to me, so that's what I've been doing. Switching to pypy makes a big difference, so at this point it might be enough to just publish binaries (or source-based installation instructions) that use pypy.

The other half of the performance problems are from algorithmic/design issues or network bandwidth and latency, and are easier to solve in Python.
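
As an aside, the measure-first approach needs nothing beyond the standard library; a minimal sketch (generic Python, not p2pool code; hot_function is a made-up stand-in for a suspected bottleneck):

Code:
import cProfile
import pstats

def hot_function():
    # Stand-in workload; in practice you would profile a real p2pool entry point.
    return sum(i * i for i in range(10**6))

cProfile.run('hot_function()', 'profile.out')
# Show the ten most expensive call sites, worst first.
pstats.Stats('profile.out').sort_stats('cumulative').print_stats(10)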
member
Activity: 107
Merit: 10
Most of p2pool's performance issues could be solved by switching to a statically typed language...
legendary
Activity: 1258
Merit: 1027
... reduce the CPython memory consumption of this data by about 10-15x, or of the full share size by a little more than 3x, without any loss of functionality.

That's a pretty significant improvement, awesome!!
hero member
Activity: 818
Merit: 1006
I'm looking into the memory consumption issue now. Here's some data when running CPython:

Of the first 100 shares in the share chain, the average memory consumption (using pympler.asizeof) is 57 kB per share, with most shares between 20 kB and 100 kB.

Of this 57.3 kB average, 43.5 kB on average is taken up in share_info.transaction_hash_refs. See the background paragraph at the end if you want to know what that does. Each of these ints takes 24 bytes of RAM in CPython, even though we only need 1 or 2 bytes for each one based on the size of each int. It might be possible to encode these numbers as an array of uint8 and/or uint16 integers instead of regular python integers, and reduce the CPython memory consumption of this data by about 10-15x, or of the full share size by a little more than 3x, without any loss of functionality.

On the other hand, the list of new transaction hashes averaged 7 kB per share. These are 32 byte integers, but they take 64 bytes of Python memory. We might be able to save some memory here by using an array of strings (or just one really long string), but that would be inconvenient, and would only save about 2x memory on that variable, so it's probably not worth it.

Another option is to just forget all of the transaction data after the share is >200 shares away from the head of the chain, and reload it from disk if it's requested. This will probably be more work, and might open up some DoS vulnerabilities if I'm not careful with the code (since someone could mine a share that requires us to reload 200 shares off disk, and share parsing is hella slow right now), but would probably be able to reduce memory consumption by around 20x if done well.

I can't do the same tests on pypy, since sys.getsizeof() and asizeof() don't work on pypy, unfortunately.


Background on how p2pool's share format handles transactions: For each transaction in a share/block, p2pool will check to see if that transaction hash was encoded into one of the 200 previous shares in the share chain. P2pool then encodes that in share_info.transaction_hash_refs by referring to the share number (where 0 is this share, 1 is this share's parent, 2 is the grandparent, etc) and the index of that transaction in that share's share_info.new_transaction_hashes. If the transaction hash wasn't in a previous share, then p2pool also sticks that transaction hash into share_info.new_transaction_hashes. When sent over the network, these numbers are encoded as var_ints, so it's usually 1 byte for the share index, and 1-2 bytes for the transaction index, plus 32 bytes for each of the new hashes.
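
To make the size difference concrete, here is a standalone sketch (illustrative only, not p2pool code; exact byte counts vary by CPython version) of storing small indices as a typed array instead of a list of Python ints:

Code:
import sys
from array import array

refs = list(range(10000))     # indices held as ordinary Python int objects
packed = array('H', refs)     # the same values as unsigned 16-bit integers

# A list holds an 8-byte pointer per entry plus a ~24-28 byte int object each;
# the array stores 2 bytes per value in one contiguous buffer.
list_bytes = sys.getsizeof(refs) + sum(sys.getsizeof(i) for i in refs)
print(list_bytes)             # hundreds of kB
print(sys.getsizeof(packed))  # ~20 kB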

Edit: Yeah, the array of uint16 thing seems to work pretty well, at least on CPython. 2170 MB -> 489 MB = 4.44x reduction.


Edit 2: Seems to benefit pypy (5.7.1) even more. 5310 MB -> 785 MB = 6.76x reduction. Now I just need to make sure the new code can successfully mine shares...

Edit 3: Seems to mine shares just fine. Neat. https://github.com/jtoomim/p2pool/commits/lowmem for now, but I'll merge it into 1mb_hardforked once it's been tested for more than a few minutes.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
The semantic discussion by whalingoutbox was too offtopic for this thread so I've moved it to its own thread:
https://bitcointalksearch.org/topic/--1954341


There wasn't an issue of semantics. The argument was that the block size from 2009 inspired more processing power in computer equipment for Bitcoin, in contrast to the white paper, which argued for decentralization (more than likely with a shared system, like P2Pool). This Satoshi Nakamoto event has had an effect on P2Pool ever since, and I want to see if anyone will admit or deny that, and why.

Not an issue of semantics. Where were the semantics?
I have no problem with you discussing what you want to discuss; it's just not relevant to this p2pool support thread. Hence I've moved it to its own thread. The posts are all still there intact, and others are free to respond if they feel inclined, but any more posts of that nature in this thread will be removed.

Please don't play the free speech card, this is a private forum and you have to obey the rules of the forum primarily. I'm not even censoring your posts, I'm just moving them to their own topic. But if you keep posting unwanted posts in this thread you will be silenced.
newbie
Activity: 10
Merit: 0
The semantic discussion by whalingoutbox was too offtopic for this thread so I've moved it to its own thread:
https://bitcointalksearch.org/topic/--1954341


The issue of semantics was non-existent. I was asserting that I wasn't asking basic questions. I was simply making the implied argument that the poster was a sophist who used the lower parts of his/her brain. Basic questions? Really?

So, I can't argue that they weren't basic questions?

The argument was that the block size from 2009 inspired more processing power in computer equipment for Bitcoin, in contrast to the white paper, which argued for decentralization (more than likely with a shared system, like P2Pool). This Satoshi Nakamoto event has had an effect on P2Pool ever since, and I want to see if anyone will admit or deny that, and why.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
The semantic discussion by whalingoutbox was too offtopic for this thread so I've moved it to its own thread:
https://bitcointalksearch.org/topic/--1954341
member
Activity: 107
Merit: 10
Speaking of BIP148, are there any contingency plans in place for P2Pool handling the fork? As far as I'm aware, both mainnet and jtoomimnet are currently not compatible with segwit. So when BIP148 activates — and in the off-chance that it replaces the legacy non-segwit chain, or if circumstances post-BIP148 manage to trigger the 95% segwit threshold on the legacy chain — we P2Pool miners would be hung out to dry.
BIP148 states that all miners must signal for SegWit after August 1st. Currently, p2pool is capable of signaling for SegWit, so p2pool is compatible with this phase. If a majority of miners are following BIP148, then SegWit (BIP141) will enter into the LOCKED_IN state for two weeks, followed by the ACTIVE period two weeks after that. If 100% of miners follow BIP148, then the signaling phase will last 2 weeks. (If 10% follow, then 20 weeks, etc.) This plus the LOCKED_IN period means that we will have at least 4 weeks after August 1st before p2pool becomes incompatible with the Bitcoin [struck out and replaced with: "some minority alt-coin based on Bitcoin"] chain.

It should take a week or less to merge and test SegWit support into jtoomimnet and/or mainnet's code. We'll be fine.
Corrected a typo there ...
If the majority of miners follow BIP148, there will be no chain split, since it is a soft fork.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Speaking of BIP148, are there any contingency plans in place for P2Pool handling the fork? As far as I'm aware, both mainnet and jtoomimnet are currently not compatible with segwit. So when BIP148 activates — and in the off-chance that it replaces the legacy non-segwit chain, or if circumstances post-BIP148 manage to trigger the 95% segwit threshold on the legacy chain — we P2Pool miners would be hung out to dry.
BIP148 states that all miners must signal for SegWit after August 1st. Currently, p2pool is capable of signaling for SegWit, so p2pool is compatible with this phase. If a majority of miners are following BIP148, then SegWit (BIP141) will enter into the LOCKED_IN state for two weeks, followed by the ACTIVE period two weeks after that. If 100% of miners follow BIP148, then the signaling phase will last 2 weeks. (If 10% follow, then 20 weeks, etc.) This plus the LOCKED_IN period means that we will have at least 4 weeks after August 1st before p2pool becomes incompatible with the Bitcoin [struck out and replaced with: "some minority alt-coin based on Bitcoin"] chain.

It should take a week or less to merge and test SegWit support into jtoomimnet and/or mainnet's code. We'll be fine.
Corrected a typo there ...
hero member
Activity: 818
Merit: 1006
Speaking of BIP148, are there any contingency plans in place for P2Pool handling the fork? As far as I'm aware, both mainnet and jtoomimnet are currently not compatible with segwit. So when BIP148 activates — and in the off-chance that it replaces the legacy non-segwit chain, or if circumstances post-BIP148 manage to trigger the 95% segwit threshold on the legacy chain — we P2Pool miners would be hung out to dry.
BIP148 states that all miners must signal for SegWit after August 1st. Currently, p2pool is capable of signaling for SegWit, so p2pool is compatible with this phase. If a majority of miners are following BIP148, then SegWit (BIP141) will enter into the LOCKED_IN state for two weeks, followed by the ACTIVE period two weeks after that. If 100% of miners follow BIP148, then the signaling phase will last 2 weeks. (If 10% follow, then 20 weeks, etc.) This plus the LOCKED_IN period means that we will have at least 4 weeks after August 1st before p2pool becomes incompatible with the Bitcoin chain.

It should take a week or less to merge and test SegWit support into jtoomimnet and/or mainnet's code. We'll be fine.
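
A quick back-of-envelope check on those week counts (the compliance fractions are hypothetical, and this ignores the difficulty retarget, which would shorten later periods on a minority chain): a signaling period is 2016 blocks at a 10-minute target, so it stretches by 1/f when only a fraction f of the hashrate participates.

Code:
PERIOD_BLOCKS = 2016   # one BIP9 signaling/retarget period
BLOCK_MINUTES = 10     # target block interval at normal difficulty

for fraction in (1.0, 0.5, 0.1):
    minutes = PERIOD_BLOCKS * BLOCK_MINUTES / fraction
    weeks = minutes / (60 * 24 * 7)
    print("%3.0f%% of hashrate -> %4.0f weeks per period" % (fraction * 100, weeks))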
hero member
Activity: 496
Merit: 500
frodocooper, thank you very much for your reply! This was the most useful answer in this topic.

You're welcome, m1n3r. Glad to be of help. Smiley



I found that a UASF (BIP148) SegWit activation is being planned.

Speaking of BIP148, are there any contingency plans in place for P2Pool handling the fork? As far as I'm aware, both mainnet and jtoomimnet are currently not compatible with segwit. So when BIP148 activates — and in the off-chance that it replaces the legacy non-segwit chain, or if circumstances post-BIP148 manage to trigger the 95% segwit threshold on the legacy chain — we P2Pool miners would be hung out to dry.

It's better to be well-prepared as early as possible than to be caught off-guard at the last minute. Just saying. Smiley

We are already hung out to dry with this ambiguous situation. Roll Eyes