
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 576. (Read 2591920 times)

legendary
Activity: 1361
Merit: 1003
Don't panic! Organize!
Another way would be to create 3 types of shares.
For the current pool hash rate, the standard share difficulty (sd) is about 700.
Make one share type with 1/3 of the standard share difficulty and one type with 3x the standard difficulty. Each type has to be scored according to its difficulty.
The pool would take sd1+ shares from a worker and calculate its hash rate, then decide which share type that worker should use.
We should also have the ability to force standard or higher share difficulty using a prefix or postfix in the worker name (not allowing a drop to type 1).
Lower share difficulty should only be enabled on the pool side.
Also, if a node is producing lots of low-difficulty shares, the pool should punish those shares as invalid (in case someone messes with the code).
This way we avoid a too-high share difficulty and let both smaller and bigger miners mine on p2pool Smiley
This change requires a hard fork, of course.
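A rough sketch of this tier idea in Python (everything here is hypothetical: the ~700 standard difficulty comes from the post above, the 2**32 hashes-per-difficulty-1-share factor is Bitcoin's usual convention, and pick_tier/score are invented names, not p2pool code):

```python
# Hypothetical sketch of the three-tier share scheme described above.
# None of this exists in p2pool itself.

STANDARD_DIFF = 700  # rough current standard share difficulty, per the post

TIERS = {
    'low': STANDARD_DIFF / 3,       # for small miners (pool side only)
    'standard': STANDARD_DIFF,
    'high': STANDARD_DIFF * 3,      # for high-power devices such as Avalons
}

def pick_tier(worker_hashrate, expected_shares_per_hour=12):
    """Pick the share tier that keeps the worker closest to a target
    share rate. A share of difficulty d takes on average d * 2**32
    hashes to find, so expected rate = hashrate / (d * 2**32)."""
    best_name, best_err = None, None
    for name, diff in TIERS.items():
        rate_per_hour = worker_hashrate * 3600 / (diff * 2**32)
        err = abs(rate_per_hour - expected_shares_per_hour)
        if best_err is None or err < best_err:
            best_name, best_err = name, err
    return best_name

def score(tier):
    """Each share type is scored (paid) according to its difficulty."""
    return TIERS[tier]
```

With these numbers, a ~1 GH/s GPU lands in the low tier while a ~60 GH/s Avalon lands in the high tier, which is the intended split.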
hero member
Activity: 658
Merit: 500
It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains.  Various theoretical designs were discussed.

We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.

It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency).  Thus, I argue for around 30 seconds, which would imply a hard fork at some point.

No, taking this direction is a mistake. You are trying to redesign p2pool around the design of one device. There is nothing that says that all future ASICs will have this same hardware limitation. It is an intrinsic design flaw/limitation/shortcut taken in the first generation Avalons.

It affects more devices: the BFL Singles have the same issue, and the Minirigs have a workaround but are sort of the same as well. I will bet money that the BFL SC units, when/if they come out, will too. The way the current ASICs are designed is the issue: they are clusters of many small devices with overhead.

Anyway, the issue is kinda moot right now, since it appears the Avalons will work on p2pool once you disable work caching on stratum as well. No need to create a fork to test. But it would be nice to not lose 20-30% to DOA. In the long run, once ASICs hit the mainstream, everyone else will have them too and it will even out.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains.  Various theoretical designs were discussed.

We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.

It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency).  Thus, I argue for around 30 seconds, which would imply a hard fork at some point.

No, taking this direction is a mistake. You are trying to redesign p2pool around the design of one device. There is nothing that says that all future ASICs will have this same hardware limitation. It is an intrinsic design flaw/limitation/shortcut taken in the first generation Avalons.
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....
We are concerned that Avalons will push the share difficulty sky high. Share difficulty rises when shares appear too fast in the chain.
Maybe allow Avalon (or other high-power device) users to set the share difficulty as high as they want?

There is an argument for multiple pools... an Avalon/ASIC pool, a GPU-and-smaller pool, etc.



I'm not sure that this is a good option; it may cause more problems than it solves. Far better to have one pool for everyone, I think... it also keeps things simple.
hero member
Activity: 896
Merit: 1000
Another solution involving several pools:

The p2pool network could more or less automatically organize itself into subpools to avoid the problems of a too-large pool. A node should target a pool where it gets a percentage of the hashrate in a range suited for low variance.

The problem I see is how a new subpool could be created automatically (it should be done cooperatively, to avoid leaving a single node alone on a subpool). A node could be connected to the old and new pools at the same time and shift its hashrate between them progressively (monitoring the other nodes' hashrate rising) to make the transition less risky for its variance.
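Purely as an illustration of that targeting rule (choose_subpool is invented for this post; nothing like it exists in p2pool, and the 0.1%-5% band is an arbitrary example):

```python
# Illustrative only: a node picks the subpool where its fraction of the
# subpool's total hashrate would fall inside a band chosen for
# acceptable variance. The band limits are arbitrary examples.

def choose_subpool(my_hashrate, subpool_hashrates, lo=0.001, hi=0.05):
    """Return the index of the first subpool where this node would hold
    between lo and hi of the total hashrate, or None if none qualifies
    (a signal that a new subpool should be formed cooperatively)."""
    for i, pool_rate in enumerate(subpool_hashrates):
        frac = my_hashrate / (pool_rate + my_hashrate)
        if lo <= frac <= hi:
            return i
    return None
```

The None case is exactly the hard part the post identifies: a lone node shouldn't just open a subpool of its own, so creation would have to be coordinated.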
hero member
Activity: 896
Merit: 1000
We are concerned that Avalons will push the share difficulty sky high. Share difficulty rises when shares appear too fast in the chain.
Maybe allow Avalon (or other high-power device) users to set the share difficulty as high as they want?

There is an argument for multiple pools... an Avalon/ASIC pool, a GPU-and-smaller pool, etc.

I'm not sure the argument is valid. The argument is based on the assumption that a GPU on an ASIC pool will get such a high variance that it will be a deterrent.

The problem with this line of thinking is that it doesn't scale: a GPU in a small ASIC pool today is in the same position as an ASIC in a big ASIC pool tomorrow. The real problem is small relative hashrate, and it will exist even in a balanced p2pool (with everybody in the same ballpark) once it grows.
hero member
Activity: 658
Merit: 500
I think I got p2pool working on Avalon with stratum... maybe.

It's been hashing at full speed for the last couple of minutes Cheesy

In main.py:

Code:
serverfactory = switchprotocol.FirstByteSwitchFactory({'{': stratum.StratumServerFactory(wb)}, web_serverfactory)

Same workaround as the p2pool avalon branch, to disable work caching.
sr. member
Activity: 454
Merit: 252
This proposal will "only" need minor changes to the code, and we will not need a separate share chain or hard fork.
Of course, there should be "some" protection against code tampering, i.e. there should be at least 2 shares reported at the same higher share difficulty from the same node/user/address.

I think there is another concern: Avalons have high work-return latency. The hard fork would be needed if we move to a 30-second share target to compensate for the latency issues. That alone would cause a ~73% increase in variance across the board. Large miners (ASICs) might not care, since their variance is low to begin with, but it might be too much to swallow for small miners who are already experiencing higher variance.

However, if the 3x increase in target time is combined with a 3x increase in the percentage of the Bitcoin hash rate attributed to p2pool (thanks to ASICs now being able to mine), then the small miners won't even notice the change in variance and there can be just one pool: the new 30-second one.

Or you could make it a command-line flag on p2pool and let the market choose/decide. Nothing would stop the smaller miners from choosing the 30-second pool if the variance is lower there.
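The ~73% figure can be checked directly: share finding is a Poisson process, so a miner expecting N shares in a payout window sees a relative payout deviation of about 1/sqrt(N), and tripling the share interval divides N by 3. A quick sketch (illustrative arithmetic only, not p2pool code):

```python
import math

# Back-of-envelope check of the "73%" figure: with a fixed miner
# hashrate and a fixed averaging window, the expected share count N
# scales as 1/interval, and the relative standard deviation of the
# payout scales as 1/sqrt(N).

def relative_std_increase(old_interval, new_interval):
    """Fractional increase in relative payout deviation when the share
    interval changes."""
    return math.sqrt(new_interval / old_interval) - 1

print(round(relative_std_increase(10, 30), 2))  # sqrt(3) - 1 ~ 0.73
```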
legendary
Activity: 1596
Merit: 1100
We are concerned that Avalons will push the share difficulty sky high. Share difficulty rises when shares appear too fast in the chain.
Maybe allow Avalon (or other high-power device) users to set the share difficulty as high as they want?

There is an argument for multiple pools... an Avalon/ASIC pool, a GPU-and-smaller pool, etc.

legendary
Activity: 1361
Merit: 1003
Don't panic! Organize!
We are concerned that Avalons will push the share difficulty sky high. Share difficulty rises when shares appear too fast in the chain.
Maybe allow Avalon (or other high-power device) users to set the share difficulty as high as they want?
The easiest way I see is to add another marker to the username, e.g. "*".
Shares found this way should be saved in the chain at that higher difficulty.
That way those shares will NOT come up as often, the "normal" share difficulty will stay at a sane level for smaller miners, and high hash power users will be paid more for their higher-difficulty shares.
This proposal will "only" need minor changes to the code, and we will not need a separate share chain or hard fork.
Of course, there should be "some" protection against code tampering, i.e. there should be at least 2 shares reported at the same higher share difficulty from the same node/user/address.
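A minimal sketch of the proposed "*" marker (hypothetical: the marker and parse_worker_name are only this post's suggestion; p2pool's real worker-name parsing in worker.py uses similar "+" and "/" suffixes):

```python
# Sketch of the proposed "*" username marker, e.g. "ADDRESS*5000" to
# request real shares of difficulty 5000. Hypothetical: this marker and
# this function are not part of p2pool.

def parse_worker_name(username, standard_diff):
    """Return (address, share_diff). The requested difficulty can only
    raise the share difficulty, never lower it below the chain's
    standard difficulty."""
    if '*' in username:
        address, _, raw = username.partition('*')
        try:
            requested = float(raw)
        except ValueError:
            # Malformed suffix: fall back to the standard difficulty.
            return username, standard_diff
        return address, max(requested, standard_diff)
    return username, standard_diff
```

The max() is the "not allowed to drop below standard" protection the post asks for.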
hero member
Activity: 658
Merit: 500
It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains.  Various theoretical designs were discussed.

We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.

It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency).  Thus, I argue for around 30 seconds, which would imply a hard fork at some point.


I'm going to play with it today. ckolivas and xiangfu were in #cgminer today; I got a new build of cgminer working with bugfixes from ckolivas. I'll see what we can come up with.

Baby steps, but we are moving forward.
legendary
Activity: 1596
Merit: 1100
It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains.  Various theoretical designs were discussed.

We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.

It ultimately seems like 10 seconds is just too short, given Internet propagation, current Avalon hashrate, and the up to 1.5-second delay it can take for work to be returned from Avalons (high latency).  Thus, I argue for around 30 seconds, which would imply a hard fork at some point.
hero member
Activity: 658
Merit: 500
I haven't talked to ckolivas today, but looking at the Linux box, he's pulled p2pool down and set up a bunch of other things. He's in Poland, I'm in the USA; we are about 10 hours apart.

He's going to primarily bring cgminer for Avalon up to the current codebase. p2pool compatibility is my special request, and I'm sure we'll be screwing with it for some time.

We may end up making a fork or a frankenbuild of p2pool to fix things for testing. I don't see why in the end it would need to fork for everyone, but it would be a hard fork and everyone would need to upgrade to a version where everyone is on the same share chain.

The ball is rolling, just not quickly yet Tongue

sr. member
Activity: 263
Merit: 250
Yes, I think you've found the "problem". The issue is that the ASICs NEED a higher difficulty, or they are going to kill the smaller miners in p2pool.

Yes, this was established the day the first Avalon arrived Smiley

Quote
I also think that as difficulty greatly increases, we are going to need a longer long-poll time as well, maybe 20 or 30 seconds.

On IRC, an ASIC-only p2pool share chain idea was floated, with a higher difficulty by default and a longer time between shares.

It sounds like forrestv is instead in favor of alternate means to extend the effective work interval through merging of parallel chains.  Various theoretical designs were discussed.
legendary
Activity: 1596
Merit: 1100
Yes, I think you've found the "problem". The issue is that the ASICs NEED a higher difficulty, or they are going to kill the smaller miners in p2pool.

Yes, this was established the day the first Avalon arrived Smiley

Quote
I also think that as difficulty greatly increases, we are going to need a longer long-poll time as well, maybe 20 or 30 seconds.

On IRC, an ASIC-only p2pool share chain idea was floated, with a higher difficulty by default and a longer time between shares.

hero member
Activity: 658
Merit: 500
...
The highest p2pool would let me go is 6535. Any higher number just comes back as 6535.
...
Better get that fixed fast ...
I think it is in getwork.py:
Code:
'target': pack.IntType(256).pack(self.share_target).encode('hex'),


Regarding minimum difficulty:
rav3n_pl has helped point me to what's going on in worker.py
Code:
if desired_pseudoshare_target is None:
    # No difficulty requested in the username: estimate the miner's hash
    # rate from its last 50 submitted pseudoshares and serve a target
    # that yields roughly 1 pseudoshare per second.
    target = 2**256-1
    if len(self.recent_shares_ts_work) == 50:
        hash_rate = sum(work for ts, work in self.recent_shares_ts_work[1:])//(self.recent_shares_ts_work[-1][0] - self.recent_shares_ts_work[0][0])
        if hash_rate:
            target = min(target, int(2**256/hash_rate))
else:
    target = desired_pseudoshare_target
# Never serve work harder than the current share-chain target.
target = max(target, share_info['bits'].target)

The last line shows that if desired_pseudoshare_target (the difficulty served to your miner, taken from the username+desired_pseudoshare_target login on the server) is harder (higher difficulty, i.e. a lower target) than the current p2pool share difficulty (share_info['bits'].target), then the work served to your miner will use the current p2pool share difficulty instead.

rav3n_pl made the point that unless you plug something into the p2pool network that hashes at over 1000% of the current network hashrate, you will submit shares to your local p2pool instance at a rate of less than 1 share per second. Most servers should be able to handle that fairly easily.

So the difficulty "bug" does not appear to be one, unless someone else has something to add.

And thank you @Aseras for donating your machine to help get Avalon working on p2pool.


EDIT: I realize that Aseras may also have been talking about the maximum difficulty returned to the p2pool network (which should have no connection to server load).
From data.py, get_transaction:
Code:
bits = bitcoin_data.FloatingInteger.from_target_upper_bound(math.clip(desired_target, (pre_target3//10, pre_target3)))
So the target you send back to the network will be the easier (lower-difficulty) of the desired target from "username/desired_target" and 10 times the current p2pool share difficulty.

That's why you were getting 6535: the share difficulty was 653.5, and it wouldn't let you set a target more than 10x harder.

Yes, I think you've found the "problem". The issue is that the ASICs NEED a higher difficulty, or they are going to kill the smaller miners in p2pool.

I also think that as difficulty greatly increases, we are going to need a longer long-poll time as well, maybe 20 or 30 seconds.
sr. member
Activity: 454
Merit: 252
...
The highest p2pool would let me go is 6535. Any higher number just comes back as 6535.
...
Better get that fixed fast ...
I think it is in getwork.py:
Code:
'target': pack.IntType(256).pack(self.share_target).encode('hex'),


Regarding minimum difficulty:
rav3n_pl has helped point me to what's going on in worker.py
Code:
if desired_pseudoshare_target is None:
    # No difficulty requested in the username: estimate the miner's hash
    # rate from its last 50 submitted pseudoshares and serve a target
    # that yields roughly 1 pseudoshare per second.
    target = 2**256-1
    if len(self.recent_shares_ts_work) == 50:
        hash_rate = sum(work for ts, work in self.recent_shares_ts_work[1:])//(self.recent_shares_ts_work[-1][0] - self.recent_shares_ts_work[0][0])
        if hash_rate:
            target = min(target, int(2**256/hash_rate))
else:
    target = desired_pseudoshare_target
# Never serve work harder than the current share-chain target.
target = max(target, share_info['bits'].target)

The last line shows that if desired_pseudoshare_target (the difficulty served to your miner, taken from the username+desired_pseudoshare_target login on the server) is harder (higher difficulty, i.e. a lower target) than the current p2pool share difficulty (share_info['bits'].target), then the work served to your miner will use the current p2pool share difficulty instead.

rav3n_pl made the point that unless you plug something into the p2pool network that hashes at over 1000% of the current network hashrate, you will submit shares to your local p2pool instance at a rate of less than 1 share per second. Most servers should be able to handle that fairly easily.
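That rate argument can be restated as a tiny sketch (pseudoshare_rate is a hypothetical helper that just re-derives the arithmetic of the worker.py snippet quoted above; it is not p2pool code):

```python
# worker.py serves a pseudoshare target of about 2**256 / hash_rate, so
# the per-hash success probability is 1 / hash_rate and the expected
# submission rate is ~1 pseudoshare per second regardless of hashrate.
# pseudoshare_rate is a hypothetical helper for this calculation.

def pseudoshare_rate(hash_rate, share_target=None):
    """Expected pseudoshares per second for a miner of the given hashrate.

    share_target is the real share-chain target; the served target is
    never harder than it (the max() in worker.py), so every real share
    is also a pseudoshare."""
    target = 2**256 // hash_rate
    if share_target is not None:
        target = max(target, share_target)
    return hash_rate * target / 2**256
```

Only a miner whose pseudoshare target gets clamped to the (easier) share-chain target submits faster than 1/s, which matches the "over 1000% of the network hashrate" condition above.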

So the difficulty "bug" does not appear to be one, unless someone else has something to add.

And thank you @Aseras for donating your machine to help get Avalon working on p2pool.


EDIT: I realize that Aseras may also have been talking about the maximum difficulty returned to the p2pool network (which should have no connection to server load).
From data.py, get_transaction:
Code:
bits = bitcoin_data.FloatingInteger.from_target_upper_bound(math.clip(desired_target, (pre_target3//10, pre_target3)))
So the target you send back to the network will be the easier (lower-difficulty) of the desired target from "username/desired_target" and 10 times the current p2pool share difficulty.

That's why you were getting 6535: the share difficulty was 653.5, and it wouldn't let you set a target more than 10x harder.
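The same clipping can be restated in difficulty terms (clip and capped_difficulty are hypothetical re-sketches; the math.clip in the quoted line is p2pool's own util helper, not Python's math module):

```python
# Re-sketch of the clipping arithmetic from data.py. Since difficulty is
# inversely proportional to target, clipping the desired target to
# [pre_target3 // 10, pre_target3] (pre_target3 being the current
# share-chain target) clips the requested difficulty to at most 10x the
# current share difficulty -- which is why 653.5 capped out at 6535.

def clip(value, low_high):
    """Minimal stand-in for p2pool's math.clip helper."""
    low, high = low_high
    return max(low, min(value, high))

def capped_difficulty(requested_diff, share_diff):
    """Difficulty actually used for a submitted share, in difficulty
    space rather than target space."""
    return clip(requested_diff, (share_diff, 10 * share_diff))

print(capped_difficulty(20000, 653.5))  # -> 6535.0
```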
hero member
Activity: 658
Merit: 500
Sorry, I'm quite serious that I do not wish to be involved in any way with GitSyncom or the companies he is a lowly employee of.
If I had an Avalon I would be supporting them, and I will not do that.

They ignored the GPL license for cgminer for a long time until they released the source, and they made up excuses, unrelated to cgminer, for why they didn't release the code at first.
GitSyncom also directly stated that he thought my suggestion that I needed hardware in order to properly support it was just an excuse to get "free hardware"; when I pointed this out to him last year, his thoughts on that were:
https://bitcointalksearch.org/topic/m.1513358
"I'll reject you on sheer principle fucking level."

I don't mind helping Xiangfu or CKolivas with the implementation, but I will be leaving any non-USB-specific code directly up to them (as they are, of course, well able to deal with it).

I'm not motivated by money above my own conscience, and since I cannot in good conscience accept an Avalon, the monetary gain is irrelevant.
I have been offered two already and turned them both down. One you will see in one of the Avalon threads, the other in a PM.

I'm not sure if you consider this to be yet another offer, but either way, I'm not interested in it.

I totally understand; I watched the whole thing develop. bitsyncom was a total dick about it all. xiangfu is OK, but he's very quiet and doesn't talk much, and what he does say is quite hard to follow sometimes.

That said, I do wish you might reconsider and help US out.

ckolivas has been working all day on my units; he had a crash course in L2TP under Ubuntu last night Cheesy Anyway, he's in now and mining away while he tries out new things. They are making ~9 BTC per day, so by the end of the week at > $100/BTC he should make out well, and hopefully we'll have a much improved cgminer on Avalon soon.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Just get an Avalon for ckolivas and cgminer will be hashing Avalon on all working pools as quickly as possible.
I don't want an Avalon as I have made clear for quite a while now.
So no, it's not get "cgminer team" an Avalon, it's get "ckolivas" an Avalon.

This is happening next week, if plans go well.

And Kano, take what you can get. Just because you have beef with the Avalon team, don't screw it up for everyone else. If anything, we need people like you to get the Avalons operating the way they should. Instead, we have this half-assed build of cgminer because they wanted it all in-house. Better to just cut them out and move on.

Bitcoin is going ASIC. BFL may be close, or one post away from bankruptcy. There's no one else. Might as well get your hands on what's out there.
Sorry, I'm quite serious that I do not wish to be involved in any way with GitSyncom or the companies he is a lowly employee of.
If I had an Avalon I would be supporting them, and I will not do that.

They ignored the GPL license for cgminer for a long time until they released the source, and they made up excuses, unrelated to cgminer, for why they didn't release the code at first.
GitSyncom also directly stated that he thought my suggestion that I needed hardware in order to properly support it was just an excuse to get "free hardware"; when I pointed this out to him last year, his thoughts on that were:
https://bitcointalksearch.org/topic/m.1513358
"I'll reject you on sheer principle fucking level."

I don't mind helping Xiangfu or CKolivas with the implementation, but I will be leaving any non-USB-specific code directly up to them (as they are, of course, well able to deal with it).

I'm not motivated by money above my own conscience, and since I cannot in good conscience accept an Avalon, the monetary gain is irrelevant.
I have been offered two already and turned them both down. One you will see in one of the Avalon threads, the other in a PM.

I'm not sure if you consider this to be yet another offer, but either way, I'm not interested in it.
legendary
Activity: 916
Merit: 1003
Is there a site like p2pool.info for LTC?