Author

Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 341. (Read 2591964 times)

legendary
Activity: 1540
Merit: 1001
I found a hack that might get my proxy to work with S2s and p2pool.

I always rewrite the jobid in the submitted work to the current jobid.  p2pool doesn't seem to mind.  It rejects one or two every so often, but that's it.

I need someone who knows p2pool or cgminer better than I do to see if that's legit or not.
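As a sketch only (not the author's actual proxy code), the jobid rewrite described above could look like this, assuming a line-oriented JSON stratum stream; the helper name is made up, and the only other assumption is the standard stratum `mining.submit` parameter order:

```python
import json

def rewrite_submit(line, current_job_id):
    """Rewrite the job_id of a mining.submit to the newest job id.

    This mimics the hack described above: shares the miner computed
    against an older job are re-labelled with the current job id
    before being forwarded to p2pool.  Hypothetical helper, not the
    author's code.
    """
    msg = json.loads(line)
    if msg.get("method") == "mining.submit":
        # Standard stratum layout:
        # params = [worker_name, job_id, extranonce2, ntime, nonce]
        msg["params"][1] = current_job_id
    return json.dumps(msg)
```

Any other stratum message passes through untouched.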

M
legendary
Activity: 1540
Merit: 1001

It's close to 1th/s.  The problem is the reject level is too high.

What I don't understand is why it appears so different when using a proxy.  The reject level is high enough to make the effective hash rate what's showing when you connect directly.  But through the proxy it shows the reject count.

M

Isn't the DOA the same thing?

Yes.  I think the problem is p2pool isn't properly feeding the reject back to cgminer, so cgminer isn't counting the rejects.

It also looks like the Ants have problems changing difficulty quickly.  The lower the difficulty was set, the longer it takes to change.  That causes boatloads of "worker submitted hash > target" messages.  That message itself is misleading: it's actually hash < target the way you and I think about it.  But in the bitcoin world, it's > target.
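To illustrate the inverted wording: a share is valid when the header hash, taken as a 256-bit number, is less than or equal to the target, and the target shrinks as difficulty rises.  A small sketch, using the usual pool-style (pdiff) difficulty-to-target approximation, which is assumed here:

```python
# Difficulty-1 target used by pools (pdiff convention).
DIFF1_TARGET = 0x00000000ffff0000000000000000000000000000000000000000000000000000

def target_from_difficulty(difficulty):
    # Higher difficulty => numerically smaller target.
    return DIFF1_TARGET // difficulty

def share_is_valid(header_hash, difficulty):
    # "hash > target" in the reject message means this check failed:
    # the hash was numerically too large, i.e. not "good" enough.
    return header_hash <= target_from_difficulty(difficulty)
```

So a "better" hash in miner-speak is a numerically smaller one.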

M
full member
Activity: 932
Merit: 100
arcs-chain.com


Isn't the DOA the same thing?
legendary
Activity: 1540
Merit: 1001

It's close to 1th/s.  The problem is the reject level is too high.

What I don't understand is why it appears so different when using a proxy.  The reject level is high enough to make the effective hash rate what's showing when you connect directly.  But through the proxy it shows the reject count.

M
full member
Activity: 932
Merit: 100
arcs-chain.com
The good news:
I managed to make a stratum proxy that works with p2pool.

The bad news:
It looks to me like the S2s can't respond quickly enough to the restarts from p2pool.  That causes rejects.  If the difficulty is too low, it gets out of hand, to the point where the shares coming from the S2s are from prior jobIDs, as many as two or three back.

I'm watching the data flow through in real time, and shares submitted 1-5 seconds after the restart request (with the new jobID) are rejected because the jobID is the wrong one.

Other things of note:
- It'd be useful if p2pool returned some indication as to what was wrong when an item is rejected.  I'm going to have to track jobIDs myself to visually indicate if it was rejected because of a bad jobid.
- The "id" in the response from p2pool doesn't match the "id" from the sender.  The stratum specs are a bit vague, but it seems to me the response "id" should be the same as the sender id.  Instead it seems to be an incremental number for that connection.

More to come.

M

So it works like a charm with walletaddress/10000000+1024 ?

And without those 30s altcoins..

What does graphs tell, hourly? Is it hashing below 1th?

I'm not sure if 512 or 1024 works best.  Overall, while a proxy definitely helps, the fundamental issue appears to be:

1 - the Ants can't change work fast enough to keep up with the constant changing of job IDs from p2pool
2 - p2pool doesn't accept work with old job IDs.  ever.

Eligius does submit new job IDs roughly every 30-50 seconds, but it usually doesn't tell the Ants to drop what they're doing and restart (clean_jobs == false).

So I'd say either Bitmain needs to improve the response time on the Ants.. or p2pool needs to stop telling the workers to drop their work and start fresh every 30-40 seconds.
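The clean_jobs flag the two pools use differently is the last field of the stratum mining.notify params.  A sketch of how a proxy might inspect it (the field order is standard stratum; the function name is made up):

```python
import json

def parse_notify(line):
    """Return (job_id, clean_jobs) from a stratum mining.notify.

    clean_jobs is the ninth param; when true the miner must abandon
    all in-flight work -- the restart the Ants respond to slowly.
    """
    msg = json.loads(line)
    if msg.get("method") != "mining.notify":
        return None
    p = msg["params"]
    # p = [job_id, prevhash, coinb1, coinb2, merkle_branch,
    #      version, nbits, ntime, clean_jobs]
    return p[0], p[8]
```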

M

So it's below 1th in http://127.0.0.1:9332/static/graphs.html?Hour ?
legendary
Activity: 1540
Merit: 1001

I'm not sure if 512 or 1024 works best.  Overall, while a proxy definitely helps, the fundamental issue appears to be:

1 - the Ants can't change work fast enough to keep up with the constant changing of job IDs from p2pool
2 - p2pool doesn't accept work with old job IDs.  ever.

Eligius does submit new job IDs roughly every 30-50 seconds, but it usually doesn't tell the Ants to drop what they're doing and restart (clean_jobs == false).

So I'd say either Bitmain needs to improve the response time on the Ants.. or p2pool needs to stop telling the workers to drop their work and start fresh every 30-40 seconds.

M
full member
Activity: 932
Merit: 100
arcs-chain.com

So it works like a charm with walletaddress/10000000+1024 ?

And without those 30s altcoins..
full member
Activity: 932
Merit: 100
arcs-chain.com
Pool Luck(?) (7 days / 30 days / 90 days): 185.7% / 102.4% / 98.9%

sr. member
Activity: 543
Merit: 250
It's been ages since the original account ToRiKaN was banned

I know this is used for scrypt and not btc, but could this be solved by setting up a sub-pool like doge.st has set up?  They run a stratum-to-stratum proxy that, if I understand how it works correctly, splits the stratum work up and then distributes it to the miners looking for smaller shares.  Here is the link to their proxypool server:

https://github.com/dogestreet/proxypool

I'm not sure if this works outside of scrypt-based coins as I don't know the inner workings of the protocols involved, but I do know that, if I remember correctly, the doge p2pool is set on 10 or 15 second share times, so it should be requesting new work faster and more frequently than our p2pool does.  Just an idea I had to throw out; I'd test it, but I don't have an S2 to test it with.  If it does work for the S2, I wonder if it would work with the bitfury stuff that is preventing Petamine from mining with P2Pool.

Edit:  Just read the readme and it only works with scrypt upstream servers, but maybe it's something worth forking if someone knows how to get it working.
legendary
Activity: 1540
Merit: 1001
I'm watching my ants go through my proxy to Eligius now.

Two big things that are different between Eligius and p2pool:

- when Eligius sends the "new job info", in most cases it doesn't tell the miner to restart the work (p2pool does all the time)
- the Ants still take a while to switch over to the new "job id".  However Eligius accepts the work from old jobids anyhow (p2pool rejects them all the time)

M
legendary
Activity: 1540
Merit: 1001
The good news:
I managed to make a stratum proxy that works with p2pool.

The bad news:
It looks to me like the S2s can't respond quickly enough to the restarts from p2pool.  That causes rejects.  If the difficulty is too low, it gets out of hand, to the point where the shares coming from the S2s are from prior jobIDs, as many as two or three back.

I'm watching the data flow through in real time, and shares submitted 1-5 seconds after the restart request (with the new jobID) are rejected because the jobID is the wrong one.

Other things of note:
- It'd be useful if p2pool returned some indication as to what was wrong when an item is rejected.  I'm going to have to track jobIDs myself to visually indicate if it was rejected because of a bad jobid.
- The "id" in the response from p2pool doesn't match the "id" from the sender.  The stratum specs are a bit vague, but it seems to me the response "id" should be the same as the sender id.  Instead it seems to be an incremental number for that connection.
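For comparison, the usual JSON-RPC convention (which stratum follows) is that a response simply echoes the request's "id".  A minimal sketch of that expected behaviour (names are illustrative):

```python
import json

def submit_response(request_line, accepted):
    """Answer a stratum request, echoing its "id" -- the behaviour
    the post above expected, as opposed to a per-connection counter.
    Illustrative sketch only.
    """
    req = json.loads(request_line)
    return json.dumps({"id": req["id"], "result": accepted, "error": None})
```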

More to come.

M
hero member
Activity: 798
Merit: 1000
Peer 148.251.238.178:18702 is causing bitcoind to go out of sync, 0 hours behind..... lying about blocks

Is there a way to block these getaddr.bitnodes.io:0.1 clients from bitcoind, or does one have to block IP after IP in the firewall?

Not sure, but it seems to cause orphan and dead shares while this happens.. 0 hours behind....

As far as I know, you can only block them by setting up a whitelist of connections to use, i.e. only setting up connections for specific peers.
Use "connect" to establish a list of trusted peers, but then you're locked to only those peers, which has its pros and cons.

Quote
-addnode=     Add a node to connect to and attempt to keep the connection open
-connect=     Connect only to the specified node(s)
-seednode=    Connect to a node to retrieve peer addresses, and disconnect
-externalip=  Specify your own public address
-onlynet=     Only connect to nodes in network (IPv4, IPv6 or Tor)

The -connect method would be nice if I only knew how to connect to the p2pool node.  Not sure what happens if bitcoind is started up without connect settings and p2pool is brought up - then the config is changed to connect=specified hosts and bitcoind is restarted.

It could be that if a block is found it can't be submitted to bitcoind..

Installed and configured iptables and now it stays synced..

I think the connect param only applies to peer connections, not RPC connections from the same machine. Worst case scenario, you could allow 127.0.0.1 (which is localhost).
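A sketch of the whitelist approach (peer addresses are placeholders; connect= restricts which peers bitcoind dials, while RPC from the local p2pool instance is unaffected, per the reply above):

```ini
# bitcoin.conf -- hypothetical example of locking peering down
server=1                 # keep RPC on for p2pool
rpcuser=p2pooluser       # placeholder credentials
rpcpassword=changeme
connect=203.0.113.10     # trusted peer (placeholder address)
connect=203.0.113.11     # trusted peer (placeholder address)
```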
legendary
Activity: 1258
Merit: 1027
How do the Antminer S3s perform on p2pool?

I have 4 batch 1 units that will go live on my node upon arrival, Bitmain has stated that they will work to make them play well with p2pool, I've also started a bounty for forrestv to get S2s & S3s working correctly...
hero member
Activity: 918
Merit: 1002
How do the Antminer S3s perform on p2pool?

Batch 1 ships July 10th, I'll slap one up on receipt of goods.
sr. member
Activity: 434
Merit: 250
How do the Antminer S3s perform on p2pool?
full member
Activity: 932
Merit: 100
arcs-chain.com
Peer 148.251.238.178:18702 is causing bitcoind to go out of sync, 0 hours behind..... lying about blocks

Is there a way to block these getaddr.bitnodes.io:0.1 clients from bitcoind, or does one have to block IP after IP in the firewall?

Not sure, but it seems to cause orphan and dead shares while this happens.. 0 hours behind....
member
Activity: 85
Merit: 10
Our luck remains strong, my friends. :)


2 blocks in 38 minutes... nice.  If only my miners hadn't decided to take a vacation from share finding... lol.

That Faith I was speaking of yesterday!!!!!!! Looking better all........
Have to admit, the numbers are way up now.....