
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 669. (Read 2591920 times)

newbie
Activity: 22
Merit: 0
Thanks for the explanations, that certainly clears that up  Grin
legendary
Activity: 1540
Merit: 1001
I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?

On the other hand, I am seeing it a lot more from my two cgminer instances than my phoenix instance.  Considering my phoenix instance is around 1.6 Gh/s, and my local 7870 is 660 Mh/s, you'd think I'd see a lot more from phoenix than I do from the 7870.  But I don't.

M
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Can someone please explain to me why my local rate chart shows total mean area 314Mhs for a week and 22Mhs Dead Mean, yet the P2Pool stats page only shows me @ 194Mhs and is normally way below what my live stats show.

Thx


You're confusing the area (total megahashes, Mh) with the rate (megahashes per second, MHps). The actual rate you should expect from your given stats is 314*86400*7/10^6 = 189.9 MHps, which is quite close to the 194 MHps you quoted.
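Spelled out, the conversion in the reply above looks like this (a sketch only; the exact scale factor depends on the chart's units, which the posts don't fully pin down):

```python
SECONDS_PER_WEEK = 86400 * 7   # one week, in seconds

area = 314                     # "total mean area" value read off the weekly chart
rate = area * SECONDS_PER_WEEK / 10**6

print(round(rate, 1))          # 189.9 -- matching the ~194 MHps the pool reports
```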
legendary
Activity: 1361
Merit: 1003
Don`t panic! Organize!
Can someone please explain to me why my local rate chart shows total mean area 314Mhs for a week and 22Mhs Dead Mean, yet the P2Pool stats page only shows me @ 194Mhs and is normally way below what my live stats show.

Thx

On p2pool.info you can see the last 24 hours' hashrate, as measured by the shares the pool accepted.
On your own chart you can see hour/day/week/month rates measured by diff=1 shares.
Depending on your luck, the rate reflected in pool shares can be higher or lower than the diff-1 share rate.
To compare what you see on p2pool.info with your local page, always look at the local last-day view.
hero member
Activity: 504
Merit: 500
Scattering my bits around the net since 1980
Can someone please explain to me why my local rate chart shows total mean area 314Mhs for a week and 22Mhs Dead Mean, yet the P2Pool stats page only shows me @ 194Mhs and is normally way below what my live stats show.

Thx

The chart is averaged over the period it is displaying, while the stats shown in the console are averaged over roughly the past 10 minutes, based on your diff-1 share submissions, so luck will play a part too.

-- Smoov
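The diff-1-share estimate Smoov describes can be made concrete: each difficulty-1 share represents on average 2^32 hashes, so a rate estimate is just shares times 2^32 over elapsed time (the function name here is illustrative, not p2pool's):

```python
def estimated_hashrate(num_diff1_shares, elapsed_seconds):
    """Estimate hashrate from accepted difficulty-1 shares.

    Each diff-1 share represents ~2**32 hashes on average, so the
    estimate is noisy over short windows -- which is why a 10-minute
    console figure and a week-long chart average can disagree.
    """
    return num_diff1_shares * 2**32 / elapsed_seconds

# e.g. 27 diff-1 shares in a 10-minute window:
print(estimated_hashrate(27, 600) / 1e6)   # ~193 Mh/s
```

Over a 10-minute window the share count is small, so a lucky or unlucky run of shares swings this figure noticeably; the weekly chart smooths that out.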
newbie
Activity: 22
Merit: 0
Can someone please explain to me why my local rate chart shows total mean area 314Mhs for a week and 22Mhs Dead Mean, yet the P2Pool stats page only shows me @ 194Mhs and is normally way below what my live stats show.

Thx
legendary
Activity: 1540
Merit: 1001
I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?

The work caching preserves the merkle root, but advances the timestamp by 12 seconds every time. It may be possible that CGminer is advancing the timestamp more than that (ignoring the X-Roll-NTime header, which is set to "expire=10"), and so redoing work.

EDIT: I'm going to add a check to P2Pool that warns about improperly rolled work. If anyone _ever_ sees this warning, we'll know that something is broken.

On the other hand, maybe CGminer retries submitting work before the original work submit is finished if it's slow? That would mean that there's no work actually being lost. I should really just look at the code...

I use CGminer and almost never see this message. Is there anything special about the mining rigs that this happens to? How many GPUs do they have?

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

q6600 = my workstation, one 7870 on cgminer.  p2pool is running here.
miner1 = 4x7870 on cgminer
miner2 = 4x5870 on phoenix

I don't see miner2 that often, but I do see it.

M
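The timestamp-advancing scheme forrestv describes above can be sketched abstractly (a simplified model with illustrative names, not p2pool's actual code): the server reuses a cached merkle root but bumps ntime by 12 seconds per handout, so a miner that rolls ntime on its own past the advertised "expire=10" window can collide with a header the server already handed out.

```python
EXPIRE = 10          # seconds of ntime rolling the server permits (X-Roll-NTime: expire=10)
SERVER_STEP = 12     # server advances ntime by this much per cached-work handout

def server_handouts(base_ntime, n):
    """ntime values the server gives out for one cached merkle root."""
    return [base_ntime + SERVER_STEP * i for i in range(n)]

def miner_rolls(handed_ntime, seconds_rolled):
    """ntime values a miner produces by rolling its own copy of the work."""
    return [handed_ntime + s for s in range(seconds_rolled + 1)]

base = 1341650000
handed = server_handouts(base, 3)            # [base, base+12, base+24]
rolled = miner_rolls(handed[0], EXPIRE + 5)  # miner rolls 5 s past the expire window
overlap = set(rolled) & set(handed[1:])      # duplicate work: {base + 12}
print(sorted(overlap))
```

Under this model, a miner that honors the expire window (rolling at most 10 s) never overlaps the server's +12 s handouts, which is consistent with forrestv's suspicion that over-rolling miners are redoing work.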
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Also look at the java API stats; that may show some interesting numbers (though I have no idea what values to expect from p2pool).
sr. member
Activity: 337
Merit: 252
I use CGminer and almost never see this message. Is there anything special about the mining rigs that this happens to? How many GPUs do they have?

I have two machines and both show this phenomenon, but one has a much higher hashrate so it is more noticeable there. One desktop machine with a single 5850, and a dedicated rig with 4 x 5850 + 10 x Icarus. Both of these are running Linux and I have the latest version of both p2pool and cgminer.

Thanks for looking into it.
hero member
Activity: 516
Merit: 643
I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?

The work caching preserves the merkle root, but advances the timestamp by 12 seconds every time. It may be possible that CGminer is advancing the timestamp more than that (ignoring the X-Roll-NTime header, which is set to "expire=10"), and so redoing work.

EDIT: I'm going to add a check to P2Pool that warns about improperly rolled work. If anyone _ever_ sees this warning, we'll know that something is broken.

On the other hand, maybe CGminer retries submitting work before the original work submit is finished if it's slow? That would mean that there's no work actually being lost. I should really just look at the code...

I use CGminer and almost never see this message. Is there anything special about the mining rigs that this happens to? How many GPUs do they have?
sr. member
Activity: 337
Merit: 252
I guess you are right. Then my only idea is the caching of work in WorkerInterface. Can someone shed some light on that?
legendary
Activity: 1540
Merit: 1001
BTW, I'm still regularly getting dupe submission messages.  I just saw this (biggest one I've seen so far):

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares, and the actual good shares are only 91% of the total.

I have also confirmed that the same hashes show up in cgminer's sharelog and they are never submitted more than once. (EDIT: not true; occasionally the same hash is submitted three times)

Since the set containing the submitted hashes is local to the got_response closure, it seems to me that the duplicate check only catches submitted shares from the same get_work. Am I correct to conclude that this means the problem is with cgminer?

I think the problem is in p2pool, because two of my miners are cgminer, one is phoenix, and I get dupes on all 3.

M
sr. member
Activity: 337
Merit: 252
No. Smiley
P2pool gets its work from bitcoind. Every node does the same, but the nodes don't all have exactly the same work, because not all txes are in all parts of the bitcoin network at the same time.
So if your node finds a block, it is closed the way your bitcoind sees it. Then your "block found" and your closing tx are spread across the p2pool nodes.
So it IS possible that you see a block value of 61, I see the same block @ 55, and someone else sees only 50.

Good info, thanks.

What still puzzles me a little is why I consistently saw blocks in the low 60s.  It wasn't a one-time deal; it stayed that way until we found a block, then it dropped.  And it was that way long enough for my payout to increase from .9 to 1.2, and as we know, the payout doesn't change rapidly.  I don't know how long it takes this info to propagate across the network.  That implies there's something wrong with my logic, or propagation is pretty slow.

BTW, I'm still regularly getting dupe submission messages.  I just saw this (biggest one I've seen so far):

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares, and the actual good shares are only 91% of the total.

I have also confirmed that the same hashes show up in cgminer's sharelog and they are never submitted more than once. (EDIT: not true; occasionally the same hash is submitted three times)

Since the set containing the submitted hashes is local to the got_response closure, it seems to me that the duplicate check only catches submitted shares from the same get_work. Am I correct to conclude that this means the problem is with cgminer?

Code:
    def get_work(self, pubkey_hash, desired_share_target, desired_pseudoshare_target):

        # ...

        received_header_hashes = set()

        def got_response(header, request):

            # ...

            elif header_hash in received_header_hashes:
                print >>sys.stderr, 'Worker %s @ %s submitted share %064x more than once!' % (request.getUser(), request.getClientIP(), header_hash)
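A minimal sketch (illustrative names, not p2pool's actual code) of the scoping point made above: a set local to each got_response closure only catches resubmissions within one getwork, while a set shared across getworks would also catch a miner resubmitting a solution from an earlier getwork.

```python
class Worker:
    def __init__(self):
        self.all_seen = set()            # shared across every get_work call

    def get_work(self):
        seen_this_work = set()           # local, like the closure's set in p2pool

        def got_response(header_hash):
            dup_local = header_hash in seen_this_work    # what p2pool's check sees
            dup_global = header_hash in self.all_seen    # what a shared set would see
            seen_this_work.add(header_hash)
            self.all_seen.add(header_hash)
            return dup_local, dup_global

        return got_response

w = Worker()
first = w.get_work()
first(0xabc)              # (False, False): fresh share
second = w.get_work()
print(second(0xabc))      # (False, True): a cross-getwork dupe the local set misses
```

Since p2pool's warning fires on the *local* set, a dupe it reports really did come back twice for the same getwork, which supports the conclusion that the miner is resubmitting.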
legendary
Activity: 1361
Merit: 1003
Don`t panic! Organize!
I posted a script in this thread that used git to autoupdate p2pool. Including it in the cron jobs of p2pcoin (and any Linux-based install) might be worth it.

For it to work, you must have:
- a p2pool checkout with git in a directory from which p2pool will be run,
- git...
- screen to run p2pool

If anyone's interested and can't find it I'll dig it up.

Yeah, if you can dig that up, let me know.  I'll include it in p2pcoin.
Here it is :
http://pastie.org/4166525

Everything should be self explanatory for anyone familiar with bash scripts and p2pool. It's configured for my needs but should easily be adapted. If you have any question when customizing it for p2pcoin I'd be happy to help.
Based on this script I made a combination of two scripts that can be used at startup and in cron (every day/week) to keep p2pool updated.

The first one is a one-liner that actually starts p2pool. This way you can easily edit the pool options. Let's name it p2p.sh, put it in ~/, and make it executable.
Code:
#!/bin/bash
screen -U -d -m python ~/p2pool/run_p2pool.py --irc-announce --merged http://nmcu:[email protected]:8336/
Just add/remove merged mining the way you have it set up. All startup options are on the wiki: https://en.bitcoin.it/wiki/P2Pool#Option_Reference

The second script checks for a p2pool update and restarts p2pool if needed. If p2pool is not running (i.e. at boot), it starts it:
Code:
#!/bin/bash
EXISTINGPID=`pgrep -f run_p2pool`

cd ~/p2pool
echo "Checking for P2pool updates..."
if git pull | grep -q 'Already up-to-date'; then
    echo "P2pool is up to date."
    if [[ -z "$EXISTINGPID" ]]; then
        echo "Starting P2pool..."
        ~/p2p.sh
    fi
else
    echo "P2pool updated, starting..."
    ~/p2p.sh
    if [[ ! -z "$EXISTINGPID" ]]; then
        echo "Waiting for new p2pool to be ready..."
        sleep 90
        echo "Killing old p2pool."
        kill $EXISTINGPID
    fi
fi
I assume that p2pool is git-cloned into ~/p2pool; if not, edit the appropriate line.

To avoid losing hash power on a pool restart (sometimes there can be a gap of about a minute), remember to configure a backup pool in your miner. It can be one of the public nodes: http://nodes.p2pmine.com/
The beauty of p2pool is that you can mine at any node using your own payout address, and you get paid from the sum of your shares Smiley
hero member
Activity: 737
Merit: 500
That block had a high reward because of these three transactions:

http://blockchain.info/tx-index/11490303/e287eb27bb543a82ce9bcad780583de9f7da4e2e1abef55395a97c931f6aa4cb
http://blockchain.info/tx-index/11490313/34b90a85b7e625f0a60ff1e3b3bdfd735d79393930e8515f9d8188dd72f638b9
http://blockchain.info/tx-index/11490553/e35818bec399da75796e7ff8235cf1fb2bfb2897f161dc4d1d17b2a8ef79bed2

The first has a 7.5 BTC fee attached, the second has a 3.1 BTC fee attached, and the third has a 2.9 BTC fee attached.

So 13.5 BTC in fees between them, which resulted in the block being worth 50 + 13.5 + some other small fees.

Your p2pool console showed a 60+ BTC block reward for as long as these three transactions were sitting pending.  No other pool claimed them for some reason, and when we eventually found a block they were included in our block and we got the reward. As soon as we included them in a block, the estimated block reward in your p2pool console dropped back down to the normal 50ish BTC.
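Spelling out the arithmetic (only the three big fees listed above are summed; the "other small fees" aren't itemized in the post):

```python
subsidy = 50.0              # block subsidy at the time, in BTC
big_fees = [7.5, 3.1, 2.9]  # fees on the three pending transactions above

total_fees = sum(big_fees)
print(total_fees)           # 13.5
print(subsidy + total_fees) # 63.5 -- plus other small fees, hence the "60+" console figure
```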
hero member
Activity: 504
Merit: 500
Scattering my bits around the net since 1980
What still puzzles me a little is why I consistently saw blocks in the low 60s.  It wasn't a one-time deal; it stayed that way until we found a block, then it dropped.  And it was that way long enough for my payout to increase from .9 to 1.2, and as we know, the payout doesn't change rapidly.  I don't know how long it takes this info to propagate across the network.  That implies there's something wrong with my logic, or propagation is pretty slow.
Because the other pools that found blocks either weren't seeing the Tx's your bitcoind was seeing, or were rejecting them.

They could have been rejecting them based on their priority rating, fee amount, or simply the fact that many of them were Satoshi-Dice Tx's. Some people mining reject all Tx's, too.

If a Tx hasn't been included in a block, it stays available until someone includes it.

This is likely why you kept seeing the amount you were for as long as you were.

-- Smoov
legendary
Activity: 1540
Merit: 1001
No. Smiley
P2pool gets its work from bitcoind. Every node does the same, but the nodes don't all have exactly the same work, because not all txes are in all parts of the bitcoin network at the same time.
So if your node finds a block, it is closed the way your bitcoind sees it. Then your "block found" and your closing tx are spread across the p2pool nodes.
So it IS possible that you see a block value of 61, I see the same block @ 55, and someone else sees only 50.

Good info, thanks.

What still puzzles me a little is why I consistently saw blocks in the low 60s.  It wasn't a one-time deal; it stayed that way until we found a block, then it dropped.  And it was that way long enough for my payout to increase from .9 to 1.2, and as we know, the payout doesn't change rapidly.  I don't know how long it takes this info to propagate across the network.  That implies there's something wrong with my logic, or propagation is pretty slow.

BTW, I'm still regularly getting dupe submission messages.  I just saw this (biggest one I've seen so far):

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M
legendary
Activity: 1361
Merit: 1003
Don`t panic! Organize!
No. Smiley
P2pool gets its work from bitcoind. Every node does the same, but the nodes don't all have exactly the same work, because not all txes are in all parts of the bitcoin network at the same time.
So if your node finds a block, it is closed the way your bitcoind sees it. Then your "block found" and your closing tx are spread across the p2pool nodes.
So it IS possible that you see a block value of 61, I see the same block @ 55, and someone else sees only 50.
legendary
Activity: 1540
Merit: 1001
That makes sense.  Basically what you're saying is p2pool is more likely to process satoshi-dice transactions, so if the bits align right, p2pool users get a lot of transactions in their block?
Well, I don't know if p2pool distributes the transactions themselves or not, so this is just a guess... but I believe it depends on your bitcoind, and what it is set up to accept or reject.

I believe that your p2pool instance, is reporting the transactions your bitcoind knows about, and will include in the solved block.

-- Smoov

So... that implies that bitcoind is providing the work, and all the connections in p2pool (I have 32 atm) are for distributing what's found?  I think p2pool can feed bitcoind as well, which means overall the more connections on bitcoind and p2pool the better.

Yes?

M
hero member
Activity: 504
Merit: 500
Scattering my bits around the net since 1980
That makes sense.  Basically what you're saying is p2pool is more likely to process satoshi-dice transactions, so if the bits align right, p2pool users get a lot of transactions in their block?
Well, I don't know if p2pool distributes the transactions themselves or not, so this is just a guess... but I believe it depends on your bitcoind, and what it is set up to accept or reject.

I believe that your p2pool instance, is reporting the transactions your bitcoind knows about, and will include in the solved block.

-- Smoov

