Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 97

sr. member
Activity: 266
Merit: 250
Sorry, but IMHO forrest was referring to tampering with the p2pool settings, namely

Ah, I see where you're coming from - could be so. It would be nice if forrestv chimes in with his thoughts/ideas about the upcoming blocksize increase & what alterations are needed for p2pool to take advantage of it though.
full member
Activity: 162
Merit: 100

According to forrestv on github:

Quote
I wouldn't recommend for you to change that value. Depending on where the problem is, it could either work or split you off from the P2Pool network.

...which I think is what happened to me & I had to re-download a new sharechain. If it works for you, that's great, but I'll take forrestv's advice & keep it at the default setting of 750000 until/if the problem has been fixed.

Sorry, but IMHO forrest was referring to tampering with the p2pool settings, namely
Code:
max_remembered_txs_size = 2500000
- not with the bitcoind blockmaxsize.

I didn't touch a thing inside p2pool, so my shares stay perfectly valid Wink Of course, blockmaxsize is just a workaround to use v0.12 with p2pool at all. It's no solution for an increased blocksize > 1M, which we seem to be getting sooner or later.
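
To keep the two settings being discussed straight, here is a minimal sketch; the values are the ones quoted in this thread, and the exact placement of the p2pool constant in the source is not verified here:
Code:
# ~/.bitcoin/bitcoin.conf - bitcoind's local block-template policy; safe to change
blockmaxsize=930000

# p2pool-side setting quoted above - per forrestv's warning, changing it may
# split your node off from the P2Pool network, so leave it at its default:
# max_remembered_txs_size = 2500000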
legendary
Activity: 3164
Merit: 2258
I fix broken miners. And make holes in teeth :-)
There appears to be an issue with p2pool producing lots of orphaned shares if the blocksize is greater than about 750 kB. This is caused by the limit on the number of transactions per share being too low.

https://github.com/p2pool/p2pool/issues/274

As Bitcoin Classic sets the default block size limit to the largest allowed by the consensus rules, this can result in Bitcoin Classic nodes failing to produce valid shares. Consequently, if you run Bitcoin Classic with p2pool, you should use blockmaxsize=750000 or lower in your ~/.bitcoin/bitcoin.conf.
That would explain all the orphaned shares I have been pulling as of late. Off to fix bitcoin.conf again...
sr. member
Activity: 266
Merit: 250
Fixing this issue will likely require a hard fork of the p2pool share chain.

It looks like that's what will be needed then, as it seems that most (80%) of the hash rate has finally come to an agreement with Core to increase the blocksize:

http://www.coindesk.com/bitcoin-miners-back-proposed-timeline-for-2017-network-hard-fork/

So I'd recommend v0.12 (classic or core, up to your taste) to every p2pool node out there. I use
Code:
blockmaxsize=930000
in the bitcoin.conf on my node and don't have any problems with orphan rate or tx errors.

According to forrestv on github:

Quote
I wouldn't recommend for you to change that value. Depending on where the problem is, it could either work or split you off from the P2Pool network.

...which I think is what happened to me & I had to re-download a new sharechain. If it works for you, that's great, but I'll take forrestv's advice & keep it at the default setting of 750000 until/if the problem has been fixed.
full member
Activity: 162
Merit: 100
blockmaxsize=1000000
I don't get it - if p2pool can only handle blockmaxsize of 750000, how can setting it to 1000000 in bitcoin.conf be beneficial? Won't that cause problems?

For me it looks like p2pool has problems with reaching the 1MB limit, but not with being close to it.

My impression after running v0.12 for about 2 days now: it has much better latency and much less memory consumption than v0.11.

So I'd recommend v0.12 (classic or core, up to your taste) to every p2pool node out there. I use
Code:
blockmaxsize=930000
in the bitcoin.conf on my node and don't have any problems with orphan rate or tx errors.
legendary
Activity: 1270
Merit: 1000
Hmm... not much good using a fork designed to increase the max blocksize limit if you can't... increase the max blocksize.

It's a p2pool bug, not a Classic bug.

I see the issue was brought to forrestv's attention over 5 months ago on github - it would be nice if he followed up or provided some additional info/updates...

Fixing this issue will likely require a hard fork of the p2pool share chain. (You would have to increase the max_remembered_txs_size value, which may result in your node creating shares that other nodes would not be able to accept or validate. I could be wrong though. This would be a good research project for someone who has time to read the code.)


The posts below, from earlier in this thread, may be related to the issue...

Was reviewing the code and came across this one part:

https://github.com/forrestv/p2pool/blob/master/p2pool/data.py#L152

Question: Why limit it to "50 kB of new txns/share"?
I even contacted you about that bug months ago Wink I was asking forrestv about it, but he didn't respond. I created a hackish fix in my repo.

It's limited to prevent DoS attacks on P2Pool by e.g. making a bunch of fake transactions and then forcing them to be relayed across the entire P2Pool network. With this limit, an attacker can only force every other P2Pool node to download, at most, 50kB per share the attacker mines.

Given that 100kB transactions are possible, it should probably be 100kB, not 50kB, but it doesn't have much of an effect otherwise, since 50kB/share is comparable to the maximum transaction throughput allowed by Bitcoin (500kB/block).

K1773R, your "hackish fix" will result in your shares being orphaned if it ever results in differing behavior. The contents of the generate_transaction function are used to determine consensus, so if your version acts differently, other nodes will see your shares as invalid.
Good that we're talking about it now. When I was still mining BTC with p2pool, I wondered why not all of my transactions (sometimes bigger than 100 kB) would be included in p2pool blocks. It didn't really bother me back then, as some other pool would mine them.
I think raising it (not as high as my hackish fix) would be a good addition to a future hardfork.

I'm absolutely aware that I would get my shares rejected. I wasn't using it for BTC.
I wanted to mine the huge stuck ANC txs, so I had to create my own p2pool and set the limit higher.
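
As an aside, here is a rough Python sketch of the kind of per-share cap being described above. It is illustrative only, not p2pool's actual generate_transaction code, and the 50,000-byte figure is simply the limit quoted in the discussion:
Code:
# Illustrative sketch only - not p2pool's actual code.
# Caps the total serialized size of *new* transactions a share may reference,
# mirroring the roughly 50 kB-per-share limit discussed above.
NEW_TX_SIZE_LIMIT = 50000  # bytes; figure quoted in the thread

def select_new_txs_for_share(candidate_txs):
    """candidate_txs: iterable of (txid, serialized_size_in_bytes) pairs."""
    selected, total = [], 0
    for txid, size in candidate_txs:
        if total + size > NEW_TX_SIZE_LIMIT:
            continue  # skip rather than abort: a smaller later tx may still fit
        selected.append(txid)
        total += size
    return selected, total

# A single 100 kB transaction never fits under a 50 kB cap, which matches
# K1773R's observation that his large transactions were left out of shares.
txs = [("tx_a", 30000), ("tx_b", 100000), ("tx_c", 15000)]
print(select_new_txs_for_share(txs))  # -> (['tx_a', 'tx_c'], 45000)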
sr. member
Activity: 266
Merit: 250

blockmaxsize=1000000
 

I don't get it - if p2pool can only handle blockmaxsize of 750000, how can setting it to 1000000 in bitcoin.conf be beneficial? Won't that cause problems?

Thanks.
full member
Activity: 213
Merit: 100
There appears to be an issue with p2pool producing lots of orphaned shares if the blocksize is greater than about 750 kB. This is caused by the limit on the number of transactions per share being too low.

https://github.com/p2pool/p2pool/issues/274

As Bitcoin Classic sets the default block size limit to the largest allowed by the consensus rules, this can result in Bitcoin Classic nodes failing to produce valid shares. Consequently, if you run Bitcoin Classic with p2pool, you should use blockmaxsize=750000 or lower in your ~/.bitcoin/bitcoin.conf.

In the case of blockmaxsize=1000000, I was testing this earlier but have a small data set so far. I'm running bitcoin 0.12. (I recommend bitcoin 0.12.0 for p2pool users who have high getblocktemplate latency; it reduced mine to under 1s, which was not the case before, and keeps it steady.) On my first attempt, like the man says, there was a problem: the stale rate shot up to almost 100% and the effective rate was 0-38%. It also stagnated my payout marginally, but I caught the issue because I was keeping an eye on it and acted swiftly, so I don't know how much it would have impacted payout in the long run. I thought it was a bitcoin issue though, so I changed some parameters.

blockmaxsize=1000000
mintxfee=0.00001
minrelaytxfee=0.00001
maxuploadtarget=144 (I just threw this in, but I'm testing it together with maxconnections, so I don't know how effective it is alone.)
maxconnections=20 (Was around 45 when bitcoin started to act up and p2pool started to lose its connection.)

Things seem good now, but something worth mentioning is that p2pool is now relatively well synced, which wasn't the case before.
P2pool was losing its connection to bitcoin, so I added -maxconnections and, to keep latency down further, -maxuploadtarget at the lowest level recommended by the developers (144). I think it could be lower, but from what I read that would hurt the community by slowing down nodes that are syncing.

Other parameters of interest
-maxmempool=<n>        Keep the transaction memory pool below <n> megabytes (default: 300)
-maxreceivebuffer=<n>  Maximum per-connection receive buffer, <n>*1000 bytes (default: 5000)
-maxsendbuffer=<n>     Maximum per-connection send buffer, <n>*1000 bytes (default: 1000)
-bytespersigop         Minimum bytes per sigop in transactions we relay and mine (default: 20)
-datacarrier           Relay and mine data carrier transactions (default: 1)
-datacarriersize       Maximum size of data in data carrier transactions we relay and mine (default: 83)

I'll update my findings once I have some more data. Any info on the above parameters would be helpful.
 

I have successfully run p2pool with no issues for a day using those settings mentioned above. This was tested with 5 TH. I'm not sure if there is anything in the code preventing use of the whole 1 MB, but my node was stable with minimal stale rates and minimal orphans. It was pulling shares fine and my income steadily increased and stayed where I expected it to, sometimes climbing even higher than it would normally be with default settings. This is on an older machine with 8 GB of RAM and an Intel® Core™2 Quad CPU Q6700 @ 2.66GHz × 4. I would love to stay on p2pool, but the variance at the moment is killing me; because I was goxed (or rather crypted, by Cryptsy) I'm behind on my electric bill. I will return when I'm caught up.
legendary
Activity: 1512
Merit: 1012
...& my DOA/Orphan rate was quite high - anyone else experience this?

Yes.


Delete "share" files on share folder on bitcoin folder ... on P2Pool folder.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Hmm... not much good using a fork designed to increase the max blocksize limit if you can't... increase the max blocksize.

It's a p2pool bug, not a Classic bug.
I don't recall saying it was a classic bug anywhere... but I understand your desire to pop up here and defend classic in that way.
hero member
Activity: 818
Merit: 1006
Hmm... not much good using a fork designed to increase the max blocksize limit if you can't... increase the max blocksize.

It's a p2pool bug, not a Classic bug.

I see the issue was brought to forrestv's attention over 5 months ago on github - it would be nice if he followed up or provided some additional info/updates...

Fixing this issue will likely require a hard fork of the p2pool share chain. (You would have to increase the max_remembered_txs_size value, which may result in your node creating shares that other nodes would not be able to accept or validate. I could be wrong though. This would be a good research project for someone who has time to read the code.)
sr. member
Activity: 266
Merit: 250
There appears to be an issue with p2pool producing lots of orphaned shares if the blocksize is greater than about 750 kB. This is caused by the limit on the number of transactions per share being too low.

https://github.com/p2pool/p2pool/issues/274

As Bitcoin Classic sets the default block size limit to the largest allowed by the consensus rules, this can result in Bitcoin Classic nodes failing to produce valid shares. Consequently, if you run Bitcoin Classic with p2pool, you should use blockmaxsize=750000 or lower in your ~/.bitcoin/bitcoin.conf.
Hmm... not much good using a fork designed to increase the max blocksize limit if you can't... increase the max blocksize.

I see the issue was brought to forrestv's attention over 5 months ago on github - it would be nice if he followed up or provided some additional info/updates...
full member
Activity: 213
Merit: 100
There appears to be an issue with p2pool producing lots of orphaned shares if the blocksize is greater than about 750 kB. This is caused by the limit on the number of transactions per share being too low.

https://github.com/p2pool/p2pool/issues/274

As Bitcoin Classic sets the default block size limit to the largest allowed by the consensus rules, this can result in Bitcoin Classic nodes failing to produce valid shares. Consequently, if you run Bitcoin Classic with p2pool, you should use blockmaxsize=750000 or lower in your ~/.bitcoin/bitcoin.conf.

In the case of blockmaxsize=1000000, I was testing this earlier but have a small data set so far. I'm running bitcoin 0.12. (I recommend bitcoin 0.12.0 for p2pool users who have high getblocktemplate latency; it reduced mine to under 1s, which was not the case before, and keeps it steady.) On my first attempt, like the man says, there was a problem: the stale rate shot up to almost 100% and the effective rate was 0-38%. It also stagnated my payout marginally, but I caught the issue because I was keeping an eye on it and acted swiftly, so I don't know how much it would have impacted payout in the long run. I thought it was a bitcoin issue though, so I changed some parameters.

blockmaxsize=1000000
mintxfee=0.00001
minrelaytxfee=0.00001
maxuploadtarget=144 (I just threw this in, but I'm testing it together with maxconnections, so I don't know how effective it is alone.)
maxconnections=20 (Was around 45 when bitcoin started to act up and p2pool started to lose its connection.)

Things seem good now, but something worth mentioning is that p2pool is now relatively well synced, which wasn't the case before.
P2pool was losing its connection to bitcoin, so I added -maxconnections and, to keep latency down further, -maxuploadtarget at the lowest level recommended by the developers (144). I think it could be lower, but from what I read that would hurt the community by slowing down nodes that are syncing.

Other parameters of interest
-maxmempool=<n>        Keep the transaction memory pool below <n> megabytes (default: 300)
-maxreceivebuffer=<n>  Maximum per-connection receive buffer, <n>*1000 bytes (default: 5000)
-maxsendbuffer=<n>     Maximum per-connection send buffer, <n>*1000 bytes (default: 1000)
-bytespersigop         Minimum bytes per sigop in transactions we relay and mine (default: 20)
-datacarrier           Relay and mine data carrier transactions (default: 1)
-datacarriersize       Maximum size of data in data carrier transactions we relay and mine (default: 83)

I'll update my findings once I have some more data. Any info on the above parameters would be helpful.
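
For reference, the settings above collected into one bitcoin.conf sketch; these are the poster's experimental values, not general recommendations, and results may vary per node:
Code:
# Experimental values from the post above - adjust to your own node
blockmaxsize=1000000
mintxfee=0.00001
minrelaytxfee=0.00001
maxuploadtarget=144
maxconnections=20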
 
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
There appears to be an issue with p2pool producing lots of orphaned shares if the blocksize is greater than about 750 kB. This is caused by the limit on the number of transactions per share being too low.

https://github.com/p2pool/p2pool/issues/274

As Bitcoin Classic sets the default block size limit to the largest allowed by the consensus rules, this can result in Bitcoin Classic nodes failing to produce valid shares. Consequently, if you run Bitcoin Classic with p2pool, you should use blockmaxsize=750000 or lower in your ~/.bitcoin/bitcoin.conf.
Hmm... not much good using a fork designed to increase the max blocksize limit if you can't... increase the max blocksize.
legendary
Activity: 1258
Merit: 1027
Think I'll wait until the official release & try again; I hate re-downloading the sharechain... lol. It's running nicely again now, so I'll let it ride.

Did you use the binary or compile from the 0.12 branch? (which is what I did btw)

I compiled Core rc3 and the Classic 0.12 branch; both run fine with significantly lower getblocktemplate latency...

Edit: see jtoomim's comment above; I run at 750000 on my test node, so I did not test larger values.

Is it the same problem with Core as well?

Best way to find out is test it... Wink

But I speculate it's the same...
sr. member
Activity: 266
Merit: 250
Think I'll wait until the official release & try again; I hate re-downloading the sharechain... lol. It's running nicely again now, so I'll let it ride.

Did you use the binary or compile from the 0.12 branch? (which is what I did btw)

I compiled Core rc3 and the Classic 0.12 branch; both run fine with significantly lower getblocktemplate latency...

Edit: see jtoomim's comment above; I run at 750000 on my test node, so I did not test larger values.

Is it the same problem with Core as well?
legendary
Activity: 1258
Merit: 1027
Think I'll wait until the official release & try again; I hate re-downloading the sharechain... lol. It's running nicely again now, so I'll let it ride.

Did you use the binary or compile from the 0.12 branch? (which is what I did btw)

I compiled Core rc3 and the Classic 0.12 branch; both run fine with significantly lower getblocktemplate latency...

Edit: see jtoomim's comment above; I run at 750000 on my test node, so I did not test larger values.
hero member
Activity: 818
Merit: 1006
There appears to be an issue with p2pool producing lots of orphaned shares if the blocksize is greater than about 750 kB. This is caused by the limit on the number of transactions per share being too low.

https://github.com/p2pool/p2pool/issues/274

As Bitcoin Classic sets the default block size limit to the largest allowed by the consensus rules, this can result in Bitcoin Classic nodes failing to produce valid shares. Consequently, if you run Bitcoin Classic with p2pool, you should use blockmaxsize=750000 or lower in your ~/.bitcoin/bitcoin.conf.
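
The workaround described is a single line in ~/.bitcoin/bitcoin.conf, using the value from the post above (lower also works):
Code:
# ~/.bitcoin/bitcoin.conf
blockmaxsize=750000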
sr. member
Activity: 266
Merit: 250
Think I'll wait until the official release & try again; I hate re-downloading the sharechain... lol. It's running nicely again now, so I'll let it ride.

Did you use the binary or compile from the 0.12 branch? (which is what I did btw)
legendary
Activity: 1258
Merit: 1027
Running Core v0.12 I got loads of these errors:

...

...& my DOA/Orphan rate was quite high - anyone else experience this?

Since going back to the previous master branch the errors have gone & my DOA/Orphan rate is fine again.

I've been running 0.12 for a couple of weeks and it's been working great; perhaps clean out your P2Pool /data/Bitcoin directory and get a fresh share chain?