Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 57. (Read 2591916 times)

legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
... or just pick 1042
For most miners, that's ok.
hero member
Activity: 818
Merit: 1006
I tracked down and (in my working branch) fixed a bug that's worth mentioning. When assigning new stratum jobs to miners, p2pool will guess what difficulty to use (for pseudoshares) based on the node's hashrate. If the node just started, it doesn't know what hashrate to use, and often ends up assigning insanely low difficulty, usually around 4 GH per share. If a substantial number of miners connect to the node quickly after starting up, the node can get flooded with thousands of shares per second, which will either saturate the CPU or (if you've got less than about 100 kB/s of bandwidth) the network. This can be avoided by making p2pool use the default share difficulty (which is based on the share chain) divided by e.g. 1000 until the node has an estimate for its own hashrate.

When this bug occurs, it looks like this:

Code:
2017-04-14 21:30:07.152590 > Traceback (most recent call last):
2017-04-14 21:30:07.152619 >   File "p2pool/util/variable.py", line 74, in set
2017-04-14 21:30:07.152647 >     self.changed.happened(value)
2017-04-14 21:30:07.152664 >   File "p2pool/util/variable.py", line 42, in happened
2017-04-14 21:30:07.152681 >     func(*event)
2017-04-14 21:30:07.152698 >   File "p2pool/work.py", line 130, in
2017-04-14 21:30:07.152715 >     self.node.best_share_var.changed.watch(lambda _: self.new_work_event.happened())
2017-04-14 21:30:07.152732 >   File "p2pool/util/variable.py", line 42, in happened
2017-04-14 21:30:07.152759 >     func(*event)
2017-04-14 21:30:07.152785 > --- ---
2017-04-14 21:30:07.152801 >   File "p2pool/bitcoin/stratum.py", line 38, in _send_work
2017-04-14 21:30:07.152817 >     x, got_response = self.wb.get_work(*self.wb.preprocess_request('' if self.username is None else self.username))
2017-04-14 21:30:07.152835 >   File "p2pool/work.py", line 212, in preprocess_request
2017-04-14 21:30:07.152852 >     raise jsonrpc.Error_for_code(-12345)(u'lost contact with bitcoind')
2017-04-14 21:30:07.152868 > p2pool.util.jsonrpc.NarrowError: -12345 lost contact with bitcoind

and this:

Code:
2017-04-14 21:28:37.035488 > Watchdog timer went off at:
2017-04-14 21:28:37.035549 >   File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
2017-04-14 21:28:37.035592 >     "__main__", fname, loader, pkg_name)
2017-04-14 21:28:37.035623 >   File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
2017-04-14 21:28:37.035652 >     exec code in run_globals
2017-04-14 21:28:37.035691 >   File "/usr/lib/python2.7/cProfile.py", line 199, in
2017-04-14 21:28:37.035728 >     main()
2017-04-14 21:28:37.035756 >   File "/usr/lib/python2.7/cProfile.py", line 192, in main
2017-04-14 21:28:37.035782 >     runctx(code, globs, None, options.outfile, options.sort)
2017-04-14 21:28:37.035828 >   File "/usr/lib/python2.7/cProfile.py", line 49, in runctx
2017-04-14 21:28:37.035853 >     prof = prof.runctx(statement, globals, locals)
2017-04-14 21:28:37.035879 >   File "/usr/lib/python2.7/cProfile.py", line 140, in runctx
2017-04-14 21:28:37.035923 >     exec cmd in globals, locals
2017-04-14 21:28:37.035948 >   File "run_p2pool.py", line 5, in
2017-04-14 21:28:37.035973 >     main.run()
2017-04-14 21:28:37.035998 >   File "p2pool/main.py", line 669, in run
2017-04-14 21:28:37.036022 >     reactor.run()
2017-04-14 21:28:37.036067 >   File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 1192, in run
2017-04-14 21:28:37.036104 >     self.mainLoop()
2017-04-14 21:28:37.036130 >   File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 1204, in mainLoop
2017-04-14 21:28:37.036156 >     self.doIteration(t)
2017-04-14 21:28:37.036182 >   File "/usr/lib/python2.7/dist-packages/twisted/internet/epollreactor.py", line 396, in doPoll
2017-04-14 21:28:37.036207 >     log.callWithLogger(selectable, _drdw, selectable, fd, event)
2017-04-14 21:28:37.036233 >   File "/usr/lib/python2.7/dist-packages/twisted/python/log.py", line 88, in callWithLogger
2017-04-14 21:28:37.036259 >     return callWithContext({"system": lp}, func, *args, **kw)
2017-04-14 21:28:37.036285 >   File "/usr/lib/python2.7/dist-packages/twisted/python/log.py", line 73, in callWithContext
2017-04-14 21:28:37.036310 >     return context.call({ILogContext: newCtx}, func, *args, **kw)
2017-04-14 21:28:37.036336 >   File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 118, in callWithContext
2017-04-14 21:28:37.036371 >     return self.currentContext().callWithContext(ctx, func, *args, **kw)
2017-04-14 21:28:37.036395 >   File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 81, in callWithContext
2017-04-14 21:28:37.036421 >     return func(*args,**kw)
2017-04-14 21:28:37.036446 >   File "/usr/lib/python2.7/dist-packages/twisted/internet/posixbase.py", line 614, in _doReadOrWrite
2017-04-14 21:28:37.036470 >     why = selectable.doRead()
2017-04-14 21:28:37.036496 >   File "/usr/lib/python2.7/dist-packages/twisted/internet/tcp.py", line 214, in doRead
2017-04-14 21:28:37.036520 >     return self._dataReceived(data)
2017-04-14 21:28:37.036545 >   File "/usr/lib/python2.7/dist-packages/twisted/internet/tcp.py", line 220, in _dataReceived
2017-04-14 21:28:37.036570 >     rval = self.protocol.dataReceived(data)
2017-04-14 21:28:37.036595 >   File "p2pool/util/switchprotocol.py", line 11, in dataReceived
2017-04-14 21:28:37.036619 >     self.p.dataReceived(data)
2017-04-14 21:28:37.036644 >   File "/usr/lib/python2.7/dist-packages/twisted/protocols/basic.py", line 454, in dataReceived
2017-04-14 21:28:37.036667 >     self.lineReceived(line)
2017-04-14 21:28:37.036693 >   File "p2pool/util/jsonrpc.py", line 164, in lineReceived
2017-04-14 21:28:37.036718 >     _handle(line, self, response_handler=self._matcher.got_response).addCallback(lambda line2: self.sendLine(line2) if line2 is not None else None)
2017-04-14 21:28:37.036744 >   File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1237, in unwindGenerator
2017-04-14 21:28:37.036767 >     return _inlineCallbacks(None, gen, Deferred())
2017-04-14 21:28:37.036795 >   File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1099, in _inlineCallbacks
2017-04-14 21:28:37.036814 >     result = g.send(result)
2017-04-14 21:28:37.036831 >   File "p2pool/util/jsonrpc.py", line 85, in _handle
2017-04-14 21:28:37.036847 >     result = yield method_meth(*list(preargs) + list(params))
2017-04-14 21:28:37.036873 >   File "p2pool/bitcoin/stratum.py", line 75, in rpc_submit
2017-04-14 21:28:37.036899 >     return got_response(header, worker_name, coinb_nonce)
2017-04-14 21:28:37.036915 >   File "p2pool/bitcoin/worker_interface.py", line 136, in
2017-04-14 21:28:37.036932 >     lambda header, user, coinbase_nonce: handler(header, user, pack.IntType(self._my_bits).pack(nonce) + coinbase_nonce),
2017-04-14 21:28:37.036949 >   File "p2pool/work.py", line 387, in got_response
2017-04-14 21:28:37.036966 >     new_gentx = bitcoin_data.tx_type.unpack(new_packed_gentx) if coinbase_nonce != '\0'*self.COINBASE_NONCE_LENGTH else gentx
2017-04-14 21:28:37.036984 >   File "p2pool/util/pack.py", line 63, in unpack
2017-04-14 21:28:37.037000 >     obj = self._unpack(data, ignore_trailing)
2017-04-14 21:28:37.037016 >   File "p2pool/util/pack.py", line 42, in _unpack
2017-04-14 21:28:37.037033 >     obj, (data2, pos) = self.read((data, 0))
2017-04-14 21:28:37.037048 >   File "p2pool/util/pack.py", line 295, in read
2017-04-14 21:28:37.037065 >     item[key], file = type_.read(file)
2017-04-14 21:28:37.037081 >   File "p2pool/util/pack.py", line 171, in read
2017-04-14 21:28:37.037097 >     res[i], file = self.type.read(file)
2017-04-14 21:28:37.037113 >   File "p2pool/util/pack.py", line 295, in read
2017-04-14 21:28:37.037129 >     item[key], file = type_.read(file)
2017-04-14 21:28:37.037145 >   File "p2pool/util/pack.py", line 131, in read
2017-04-14 21:28:37.037161 >     length, file = self._inner_size.read(file)
2017-04-14 21:28:37.037177 >   File "p2pool/util/pack.py", line 97, in read
2017-04-14 21:28:37.037193 >     data, file = read(file, 1)
2017-04-14 21:28:37.037209 >   File "p2pool/util/pack.py", line 14, in read
2017-04-14 21:28:37.037225 >     data2 = data[pos:pos + length]
2017-04-14 21:28:37.037241 >   File "p2pool/main.py", line 313, in
2017-04-14 21:28:37.037258 >     sys.stderr.write, 'Watchdog timer went off at:\n' + ''.join(traceback.format_stack())

And you'll see 100% CPU usage and ~100 kB/s of traffic to your miners.

The fix (work.py line 344):
Code:
        if desired_pseudoshare_target is None:
            target = 2**256-1
            local_hash_rate = self._estimate_local_hash_rate()
            if local_hash_rate is not None:
                target = min(target,
                    bitcoin_data.average_attempts_to_target(local_hash_rate * 1)) # limit to 1 share response every second by modulating pseudoshare difficulty
            else:
+                # If we don't yet have an estimated node hashrate, then we still need to not undershoot the difficulty.
+                # Otherwise, we might get 1 PH/s of hashrate on difficulty settings appropriate for 1 GH/s.
+                # 1/100th the difficulty of a full share should be a reasonable upper bound. That way, if
+                # one node has the whole p2pool hashrate, it will still only need to process one pseudoshare
+                # every 0.3 seconds.
+                target = min(target, 100 * bitcoin_data.average_attempts_to_target((bitcoin_data.target_to_average_attempts(
+                    self.node.bitcoind_work.value['bits'].target)*self.node.net.SPREAD)*self.node.net.PARENT.DUST_THRESHOLD/block_subsidy))
        else:
            target = desired_pseudoshare_target
        target = max(target, share_info['bits'].target)
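
For comparison, the simpler fallback I described above (just dividing the default share difficulty by e.g. 1000 until the node has a hashrate estimate) would look roughly like the sketch below. The 1000x factor and the exact placement are illustrative assumptions, not tested code:
Code:
            else:
                # Hypothetical simpler fallback (not the patch above): until we have a
                # local hashrate estimate, start pseudoshares at ~1/1000th of the current
                # share difficulty, i.e. 1000x the share target (still capped at the max target).
                target = min(target, share_info['bits'].target * 1000)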
donator
Activity: 4760
Merit: 4323
Leading Crypto Sports Betting & Casino Platform

Thanks for the info
any site where I can check what % of p2pool hashrate is mining PTC?

No, and I imagine the % is quite low given that the pool operator must go out of their way to enable it and run a PTC client.
member
Activity: 107
Merit: 10
More nodes have upgraded to bitcoind with segwit.

hero member
Activity: 818
Merit: 1006
forrestv: edits made.


Progress update:
I'm getting the following error some of the time on my new fork when my mining node attempts to broadcast a just-mined share to its peers:
Code:
2017-04-14 05:24:01.842353 >     assert tx_hash in known_txs, 'tried to broadcast share without knowing all its new transactions'

The number of transactions that are missing from known_txs varies substantially. The last two shares that had this issue:

Code:
2017-04-14 05:24:01.841583 Missing 701 of 843 transactions for broadcast
...
2017-04-14 05:26:16.618400 Missing 9 of 1183 transactions for broadcast

It shows up for about 80% of newly mined shares, and I think 0% of shares that are requested more than a minute after they've been mined. I'm working on tracking down the cause of this bug, but for now the fork isn't ready for more than one mining node. My guesses are that either transactions aren't being added to known_txs properly when they show up in a block template, or that transactions are getting evicted too quickly, or that send_shares needs to check the eviction caches in addition to known_txs.
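
If the eviction guess turns out to be right, a defensive check along these lines might be enough as a stopgap. This is only a hypothetical sketch; known_txs, evicted_txs, and the helper name are stand-ins, not the actual structures in my branch:
Code:
# Hypothetical stopgap, not actual p2pool code: before broadcasting a freshly
# mined share, pull any of its transactions that only survive in an eviction
# cache back into known_txs, so the broadcast assert can't fire for transactions
# that were evicted moments ago. Returns the hashes that are still truly missing.
def ensure_txs_known_for_broadcast(share_tx_hashes, known_txs, evicted_txs):
    missing = [h for h in share_tx_hashes if h not in known_txs]
    recovered = dict((h, evicted_txs[h]) for h in missing if h in evicted_txs)
    known_txs.update(recovered)
    return [h for h in missing if h not in known_txs]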
hero member
Activity: 516
Merit: 643
At this point, neither of the valid heads (D2 and E2) is block-stale, so neither one gets punished for that. Ultimately, the one that wins is the one with the most work. Not all shares have the same difficulty. If B1 was higher difficulty than C2, and D2 is equal to E2, then the total work behind D2 will be greater than that behind E2, and E2 will get preferentially orphaned.

Shares are compared using the total work of their 5th parent, rather than their own work (data.py line 836). As a result, in practice, forks are chosen based on height, with arrival time breaking ties. Difficulty doesn't come into it.
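
To illustrate (with invented names, not the actual data.py code): for the shallow forks we normally see, two competing heads at the same height have the same 5th parent, so the work comparison ties and whichever head arrived first wins, while a head at a greater height normally has a 5th parent with more total work behind it.
Code:
# Illustration only: ranking heads by the cumulative work of an ancestor a fixed
# number of shares back behaves, for shallow forks, like ranking by
# (height, earliest arrival).
def pick_head(heads, work_of_5th_parent, arrival_time):
    return max(heads, key=lambda h: (work_of_5th_parent[h], -arrival_time[h]))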

Can you correct that in your post? I think most of your arguments are unaffected.
legendary
Activity: 2142
Merit: 1025
hero member
Activity: 818
Merit: 1006
Thanks for the reply, forrestv. It's always good to hear from you.

I didn't think about it from a selfish mining perspective. Interesting. I don't think we're at the Nash equilibrium, though.

I agree that C2 is more helpful to the pool than B1, and ceteris paribus, I think that for the sake of the pool it's better if people follow C2 instead of B1 when the choice is forced; however, I do not agree that a self-interested miner would always choose to mine on top of C2 instead of B1 when given the choice. Let's say both D2 (based on B1) and E2 (based on C2) get mined. Which one gets preferred?

Code:
A1
|  \
B1  C2
|   |
D2  E2

(In this diagram, the letter refers to the person who mined it and the number refers to the Bitcoin block height.)

Note: The section below was heavily edited a few hours after posting.

At the point shown in the diagram, neither of the valid heads (D2 and E2) is block-stale, so neither one gets punished for that. Ultimately, the share that wins is the one at the greatest height (edited: not the most work), or the one that was propagated first if the heights are the same. As such, there is no direct incentive to guide miners to prefer to mine like D2 versus like E2. As for indirect (pro-social) incentives, some miners might prefer to punish shares that are block-stale, whereas other miners might prefer to punish shares that are share-stale and which may be selfish mining attempts. Personally, I'm a lot more worried about selfish mining attempts on p2pool than I am about block-stale shares, so I personally would prefer to make or build off of D2 rather than E2. But let's say miners are split 50/50 on that issue.

Since there's no clear incentive for a miner to prefer to make E2 instead of D2, C2's choice to mine on A1 instead of B1 comes at a substantial cost. If D2 has already been mined and C just hasn't heard of it yet, then C2 will be at a substantial disadvantage. C2 will always lose against D2 if C2 is based on A1, but C2 has a roughly 50% chance of winning if C2 is based on B1. As we know that B will be trying to mine on top of their own shares even after C2 is mined, it is clearly disadvantageous for C to try to orphan B1 if B has more hashrate than C and if bystanders are indifferent. It is also disadvantageous if bystanders prefer C2 but B has more than 50% of the hashrate -- which, as it happens, is currently often the case.

So C loses if D2 is in flight (maybe a 10% chance) or if B mines the next block (maybe a 10% chance); or if other miners don't clearly punish block-stale before share-stale (maybe a 50% chance). P(C_wins) is around 0.3 to 0.8. If C wins, the benefit to C is one fewer competing share in the share chain (maybe worth 10% of a share if C has 10% of the hashrate). Thus, even if all miners punish block-stale shares first, there's still roughly a (1 - 0.8*110%)=12% cost to the current strategy; and if the punishment strategies are 50/50, then there's a (1 - 0.3*110%)=67% cost to the current strategy. (End of edits.)
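
Spelling out that arithmetic, using the rough numbers above (none of which are measured):
Code:
# Rough restatement of the expected-value arithmetic above; the probabilities and
# the 10% benefit are this post's guesses, not measurements.
def relative_cost_of_orphaning(p_c_wins, benefit_if_win=0.10):
    # Payoff of trying to orphan B1 (your own share plus the orphaning benefit,
    # with probability p_c_wins) versus the 1.0 share C keeps by building on B1.
    return 1.0 - p_c_wins * (1.0 + benefit_if_win)

print(relative_cost_of_orphaning(0.8))  # ~0.12, i.e. ~12% cost even if everyone punishes block-stale first
print(relative_cost_of_orphaning(0.3))  # ~0.67, i.e. ~67% cost if punishment strategies are split 50/50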

By choosing to base C2 on A1 instead of B1, the miner of C2 is gaining a small benefit from orphaning a competitor's share, but is paying the cost of C2 having lower absolute work. This generally works against C's best interest. A self-interested (but not large and maliciously selfish) C always acts in their own best interest when building off the chain head with the most work.

Quote
If everyone on P2Pool has the same orphan rate, then everyone's payouts are fair.
Agreed. However, high orphan rates worsen the UX in a few ways: (a) they increase reward variance; (b) they confuse people who don't understand that it's only the difference in orphan rates that affects revenue; and (c) they can be pathological in some edge cases, such as when the pool hashrate drops dramatically (e.g. during my hard fork) and the time between shares is greater than or comparable to the time between blocks, or for altcoins with naturally short block intervals.

I also think that creating orphan races results in greater reward unfairness, since winning an orphan race depends unfairly on miner hashrate and bandwidth, whereas sequential mining does not. While I agree that the stale-block rule would have the same chance of putting any miner's share into an orphan race, I do not agree that it gives every miner's share the same chance of being orphaned.

So looking back at your list of rationales:
Quote
Is unavoidable, by the argument above
I disagree that it's the optimal strategy or a Nash equilibrium, so I think it's avoidable.

Quote
Is still fair (just a slight variance increase) if everyone gets the new blocks at the same time
I disagree that it's fair, as large miners are more likely to win an orphan race.

Quote
Punishes people with slow bitcoind instances in a manner that I believe is fair (which is good for the pool, as people with fast bitcoinds aren't paying those with slow bitcoinds for useless work)
I'm on board with incentivizing fast bitcoind instances. But maybe there's a better way? Rather than a stick, why not a carrot? We could give a small revenue bonus (or lower difficulty) to any share that uses a block with more total work than the block of the share's parent. The current strategy seems to me to have more drawbacks than advantages, but I think a carrot wouldn't have any major drawbacks. Edit: a carrot incentivizes people to intentionally orphan the first share on a new block. The perverse incentive might be weak enough to never be worthwhile, but it would require some care and may not be worth the complexity cost.
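
For concreteness, the carrot might look something like the sketch below; the 5% bonus and all names are made up for illustration, and the perverse incentive mentioned above would still need to be weighed:
Code:
# Hypothetical sketch of the "carrot": slightly lower the share difficulty (i.e.
# raise the target) for a share whose referenced block has more total work than
# the block referenced by its parent share.
def share_target_with_bonus(base_target, block_work, parent_block_work, bonus=0.05):
    if block_work > parent_block_work:
        return int(base_target * (1 + bonus))  # higher target = slightly easier share
    return base_target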

Quote
Also, note that a node will not try to orphan a share if it has the same payout address as theirs, so running a few nodes with the same payout address won't disadvantage you compared to running a single node.
Which means that with the current code, big miners are playing to win when it comes to orphan races, and small miners would be wise not to start an orphan race with a large miner. Given that we currently have one miner with around 70% of p2pool's hashrate, this makes this block-stale punishment strategy suboptimal in practice, not just theory.
hero member
Activity: 516
Merit: 643
Hey, jtoomim. None of what you described (including the edit) is a bug.

First of all, remember that orphaned shares are not inherently bad. If everyone on P2Pool has the same orphan rate, then everyone's payouts are fair. Now, with that in mind...

Here's an example scenario:
Code:
A1
|  \
B1  C2
(letter is share ID, number is block it refers to)

You noticed that a node will always prefer building off of C2 rather than B1, even if C2 came in way later than B1. I think that most would agree with this policy, on the basis that C2's work is much more useful, since it was entirely on a new block, instead of possibly partially on an old block (and therefore possibly partially useless). You implied that you agree with this, so I won't go any further.

Given that rule, the optimal (selfish) strategy for any mining node is to try to make a share like C2 if one doesn't exist yet (assuming B1 isn't theirs), since they will benefit from the pool ignoring share B1. This situation is when the "Punishing share for Block-stale detected!" message is printed.

Doing anything else, as you proposed, would (slightly) disadvantage the people doing it. If the software didn't follow the optimal strategy by default, people would be (slightly) motivated to patch their P2Pool instances to implement the optimal strategy. Right now, every node follows the optimal strategy, and so we're at a Nash equilibrium. Everyone does everything the same, so all is fair, and there's nothing better to do.

Yes, the current strategy results in one share being orphaned per block (the last share that was mined for a given block). However, the current state of things:
  • Is unavoidable, by the argument above
  • Is still fair (just a slight variance increase) if everyone gets the new blocks at the same time
  • Punishes people with slow bitcoind instances in a manner that I believe is fair (which is good for the pool, as people with fast bitcoinds aren't paying those with slow bitcoinds for useless work)

Also, note that a node will not try to orphan a share if it has the same payout address as theirs, so running a few nodes with the same payout address won't disadvantage you compared to running a single node.
hero member
Activity: 818
Merit: 1006
I'm probably going to fork off of p2pool soon in order to increase the limit of 50 kB of new transactions per share.
Yesterday, my company mined a 51 kB block. That reminded me that I need to finish that fork.

Today, I got a working prototype of the fork running. I now have three nodes running the new code draft, with one of the three mining, and it seems to be working okay. I'm also trying to squeeze a few performance improvements and cleanups at the same time. The current code that I'm running is not in my github yet, but I'll push it fairly soon (I still have a few changes that I've recently made that I haven't fully vetted). If anyone wants to help test, let me know and I'll prioritize getting the code out.

I've also noticed what seems to be another design flaw/bug in p2pool, one that results in unnecessarily high share orphan rates. The issue is with the "Punishing share for Block-stale detected!" mechanic. When your node receives a share based on block N-1 after your node has already heard about block N, your node will refuse to build off of that share under nearly all circumstances, even if there's no competing share at that height in the share chain. For example, consider this sequence of events:

1. Alice mines share A on bitcoin block 1.
2. Bob and Carol both receive share A.
3. Bob mines share B on block 1 with A as the parent.
4. Bitcoin block 2 gets mined by Slush.
5. Carol receives block 2.
6. Carol receives share B.
7. Carol marks share B as block-stale, and chooses to mine share C based on share A instead of share B.

This seems wrong to me. If Carol were choosing between using share B (based on block 1) as the parent instead of share D (based on block 2) as the parent, where B and D are siblings (using A as their parent), then I think that Carol should choose share D. However, Carol should never prefer to use a parent rather than a child just because she heard about the child after the cutoff. In other words, block-stale should be used as a way to resolve share orphan races, not as a reason to cause them.

But right now, one of my nodes is doing exactly that. On my mining node (Andrew), I mined a share 0db22594 using share cc867ac0 as the parent. One of my listening nodes (Beth) downloaded share 0db22594 a few minutes later (it wasn't running at first). It marked share 0db22594 as block-stale, and preferred to use cc867ac0 even though cc86 is much older, from an older bitcoin block, has less absolute work behind it, and is 0db's direct parent. I had to mine several more shares on top of 0db22594 before Beth switched over to a share that included it as an ancestor, and *even now* Beth is still overlooking a better share (2ddf4538) in favor of its parent (46ae27c1). 46ae27c1 is the grandchild of 0db22594, at least, so Beth eventually accepted it.

I see "Block-stale detected!" quite frequently on my non-forked (v16) nodes too, and I've also seen a few shares of mine on the v16 branch that have ended up orphaned in ways that don't make sense to me when I look back at the timestamps, so I don't think this is a bug that I've somehow introduced recently. Which means that it's a preexisting issue.

And it should be fairly easy to fix. I'll try. If it works, this should reduce share orphan rates for everyone on the fork that I'm making.
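
Roughly, the rule I have in mind looks like the sketch below (invented names, not the actual tracker code): block-staleness only breaks ties between heads at the same height, and is never a reason to prefer an ancestor over its own descendant.
Code:
# Sketch of the proposed parent-selection rule (hypothetical, untested).
def choose_share_to_build_on(candidates, height, is_block_stale, arrival_time):
    best_height = max(height[s] for s in candidates)
    heads = [s for s in candidates if height[s] == best_height]
    # Among equal-height heads only: prefer non-block-stale shares, then earliest arrival.
    return min(heads, key=lambda s: (is_block_stale[s], arrival_time[s]))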



Edit: After reading the code, it looks like the behavior is even stranger than I first thought:

1. Alice mines share A on block 1.
2. Bob receives share A, and mines share B based on share A and block 1.
3. Carol receives share B, and starts work on share C based on share B and block 1.
4. Carol receives block 2.
5. Carol notices that the otherwise-best share, share B, is not based on block 2, the best-known block, and marks share B as block-stale. To punish share B, Carol switches to work on a new share, share D, based on block 2 and the parent of share B, which is share A.
6. Share B gets orphaned, even though it was perfectly valid at the time it was mined and published.

I'm seeing this bug a lot more often right now than normally, since I'm mining one share every 15-30 minutes instead of the normal 30 seconds, which means that basically *every* share gets marked as block-stale instead of roughly every 20th share. This probably accounts for a 5% orphan rate on Bitcoin/p2pool, and probably much more on other blockchains with shorter block intervals.
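
Back-of-the-envelope for that 5% figure, using the approximate intervals mentioned above:
Code:
# If roughly one share per Bitcoin block gets punished as block-stale, the expected
# orphan rate from this mechanic is about share_interval / block_interval.
share_interval = 30.0    # approx. seconds between p2pool shares under normal conditions
block_interval = 600.0   # average seconds between Bitcoin blocks
print(share_interval / block_interval)  # 0.05, i.e. ~5%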
legendary
Activity: 1512
Merit: 1012
They stay ... good!

hero member
Activity: 496
Merit: 500
Quote
Hello, Nice to meet you as well.    Grin
I try to keep the nodes well tuned and performing well with p2pool and the network, I appreciate the feedback and utilization.
If you would like to help support, you may use this address: 1M6bcAhc7sGxC2yvXwPRs7fHeCFUppkyGR

The front-end starts to lag on load when there is a long list of shares; thanks for letting me know. I will reboot that node tonight to clear the long list of shares. Smiley

Ok.
Donation sent: https://blockchain.info/tx-index/d3c99c73746a46a1655f833bd0fdd0d492d6af34800d71fddca7af19fed37a3d
newbie
Activity: 58
Merit: 0
...
   Hello squidicuz.
   Nice to meet you.
   I would like to donate for your good performance in operating this node.
   Tell me which bitcoin address to donate to.
   And thanks in advance if you could clear the node's list of shares, which is already very long.
   Regards Wink

Hello, Nice to meet you as well.    Grin
I try to keep the nodes well tuned and performing well with p2pool and the network, I appreciate the feedback and utilization.
If you would like to help support, you may use this address: 1M6bcAhc7sGxC2yvXwPRs7fHeCFUppkyGR

The front-end starts to lag on load when there is a long list of shares; thanks for letting me know. I will reboot that node tonight to clear the long list of shares. Smiley
hero member
Activity: 496
Merit: 500
Sorry don't know.
It's the http://uk.p2pool.science:9332/static/ node.
Possibly the node's bitcoind is outdated

So the alert was added along with BIP9 (version bits) and is triggered when:

> If at any stage 50 of the last 100 blocks have unexpected bits set, you get Warning: Unknown block versions being mined! It’s possible unknown rules are in effect.

It's likely the node is running either BU or Bitcoin Classic, and that is what triggered it. In an attempt to stay apolitical: if you support that, you should stay there; if you don't, find a SegWit-signaling node; and if you don't care, find a node that signals it (thanks to veqtrus's graph we now know that's ~90% of the P2Pool network).

Right now the node is mining valid shares and will continue to do so until a soft/hard fork occurs, at which point Forrest will likely release a hard fork of P2Pool.... after that it gets very political and messy, so I'll stop there Wink

Bitcoind updated.  It was previously running v0.12

Warning has been cleared from that node.

   Hello squidicuz.
   Nice to meet you.
   I would like to donate for your good performance in operating this node.
   Tell me which bitcoin address to donate to.
   And thanks in advance if you could clear the node's list of shares, which is already very long.
   Regards Wink
hero member
Activity: 578
Merit: 501
Block! Smiley

Also, am I misunderstanding something here? Overall pool efficiency is sitting at 87%. http://p2pool.org/stats/index.php sits at exactly this efficiency.

Why? http://uk.p2pool.science:9332/static/ has a current listed efficiency of 103%. Why would anyone choose to lose almost 20% of their hashing power?

I understand that connectivity and processor limits apply here - nodes need to be well connected and low-latency. But why would someone use one that... Isn't? "just" to have control?
Frequently, efficiency discrepancies can be explained by the locality of the clients connecting to the node. The efficiency of a node whose clients are co-located on the same LAN should be higher than that of a node whose clients must traverse the internet.
hero member
Activity: 496
Merit: 500
Try this one http://www.p2pool.io/static/
Or stay tuned to a p2pool node finder  Grin
newbie
Activity: 56
Merit: 0

You should not have doubts or distrust of anything because everything is automatically
populated by your web browser  Roll Eyes

I would completely agree, if my external-to-the-node stats didn't show my hash rate about right for the stated efficiency.
hero member
Activity: 496
Merit: 500
Block! Smiley

Also, am I misunderstanding something here? Overall pool efficiency is sitting at 87%. http://p2pool.org/stats/index.php sits at exactly this efficiency.

Why? http://uk.p2pool.science:9332/static/ has a current listed efficiency of 103%. Why would anyone choose to lose almost 20% of their hashing power?

I understand that connectivity and processor limits apply here - nodes need to be well connected and low-latency. But why would someone use one that... Isn't? "just" to have control?

You should not have doubts or distrust of anything because everything is automatically
populated by your web browser  Roll Eyes