
Topic: [1200 TH] EMC: 0 Fee DGM. Anonymous PPS. US & EU servers. No Registration! - page 50. (Read 499709 times)

legendary
Activity: 1260
Merit: 1000
I must have missed it if he did, anyone got a lead on it?
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Didn't organofcorti post somewhere the appropriate calculation?
(for vardiff)
legendary
Activity: 1795
Merit: 1208
This is not OK.
As I mentioned a few pages back: run experiments, gather data, build a table of results, draw graphs, then choose the method that works best for you. Pulling numbers out of the air is not a scientific method. Get numbers.
vip
Activity: 1358
Merit: 1000
AKA: gigavps

I was at 8 and that was too low... I moved it up to 16 and that still seems too low for some people.  Of course, when the ASICs are out, I think 10 is absolutely reasonable, if the minimum hashrate on a given unit is 4.5 GH/s.  Right now, though, I think 10 might be too low for GPU miners... not from a technical perspective, but from an emotional one: the feeling I get from people is that it drives their variance up too high for comfort.

I think we might need to go back to the drawing board and shoot for a variable difficulty based on server load, vs a getwork target... though that adds quite a bit of complexity.  I'm not sure what metric would be the best to account for server load, as there are many other factors that come into play just looking at the system load in top or some such.

I would really like to see a scenario where:

  • Pool server checks for the X-Mining-Hashrate header and calculates a diff based on the reported hash rate similar to what conman originally suggested. The lowest diff is still 1.
  • If X-Mining-Hashrate doesn't exist, fall back to 20-30 shares per minute per worker target.
  • If the server load is too high, move the lowest allowed diff from 1 to 2.

What do you think?
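For anyone who wants to see the proposal above in concrete terms, here's a rough sketch of how the diff assignment could work. To be clear, this is not the pool's actual code: the function and constant names are made up for illustration, and it uses the standard assumption that a diff-1 share takes about 2**32 hashes on average.

```python
# Sketch of the proposed vardiff assignment (illustrative only,
# not the pool's real implementation).
TARGET_SHARES_PER_MIN = 25      # midpoint of the 20-30/min fallback target
MIN_DIFF = 1                    # raised to 2 when the server is overloaded

def diff_for_worker(reported_hashrate=None, server_overloaded=False):
    """Pick a share difficulty for one worker.

    reported_hashrate: hashes/sec from the X-Mining-Hashrate header,
    or None if the miner didn't send the header.
    At difficulty d, a miner at H hash/s finds ~60*H/(d*2**32) shares/min.
    """
    floor = 2 if server_overloaded else MIN_DIFF
    if reported_hashrate is None:
        # Fall back to the shares-per-minute feedback loop elsewhere;
        # start the worker at the current floor.
        return floor
    ideal = 60 * reported_hashrate / (TARGET_SHARES_PER_MIN * 2**32)
    return max(floor, round(ideal))
```

Under these assumptions a 600 MH/s GPU stays at diff 1, while a 4.5 GH/s ASIC lands around diff 3.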
member
Activity: 66
Merit: 10
I'm a GPU miner with about 600MH/s on one machine under one worker.  I haven't seen my Diff go higher than 1 during this whole testing phase, so it doesn't appear to be affecting my variance at all.
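That observation lines up with the math. A quick back-of-the-envelope check (using the usual figure of ~2**32 hashes per diff-1 share):

```python
# Expected diff-1 share rate for a 600 MH/s worker.
# A diff-1 share takes ~2**32 hashes on average.
hashrate = 600e6                        # hashes per second
shares_per_min = 60 * hashrate / 2**32
print(round(shares_per_min, 1))         # ~8.4 shares/min
```

At ~8.4 shares/min the worker sits below even a 12/min target, so vardiff never has a reason to raise the diff above 1.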
legendary
Activity: 1260
Merit: 1000
So Gigavps and anyone else:  Did we ever decide on what a good share target was?  20?  24?
I'd say 10 shares per minute would be fine.

I was at 8 and that was too low... I moved it up to 16 and that still seems too low for some people.  Of course, when the ASICs are out, I think 10 is absolutely reasonable, if the minimum hashrate on a given unit is 4.5 GH/s.  Right now, though, I think 10 might be too low for GPU miners... not from a technical perspective, but from an emotional one: the feeling I get from people is that it drives their variance up too high for comfort.

I think we might need to go back to the drawing board and shoot for a variable difficulty based on server load, vs a getwork target... though that adds quite a bit of complexity.  I'm not sure what metric would be the best to account for server load, as there are many other factors that come into play just looking at the system load in top or some such.

Quote
I have a very general question about variable difficulty which has bugged me.

I've asked before and I think the answer was experiment and see how you make out. I did some of that and the earnings have steadily been dropping due to increased hash rates, bad luck, and increased difficulty. So... it is hard to tell the results of testing.

So the question is...

If you have multiple devices... should each device have its own worker or should all of your devices share a worker?

If the devices are FPGA/GPU - should you use a different approach than when people receive ASIC hardware?

Well, the FPGA/GPU vs ASIC question really comes down to at what GH/s speed a given getwork target makes the most sense... so if you combine all your units into one worker, a lower getwork target makes more sense, since your variance will "apparently" be reduced by the higher hashrate.  If you split them all up, a higher target is better, for the same reason.  It's all about perception for the most part... over a long enough period, it doesn't really matter from a functional standpoint.
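To put a number on the "apparent" variance point above, here's an illustrative sketch. It assumes Poisson share arrivals and ~2**32 hashes per diff-1 share; the hashrates are just example figures, not anyone's real setup.

```python
# Per-worker share statistics, combined vs split workers
# (Poisson arrivals; ~2**32 hashes per diff-1 share on average).
def shares_per_hour(hashrate, diff):
    """Expected shares per hour for one worker."""
    return 3600 * hashrate / (diff * 2**32)

def cv(n):
    """Coefficient of variation of a Poisson count with mean n."""
    return n ** -0.5

# Three 600 MH/s GPUs on one worker vs three separate workers, diff 1:
combined = shares_per_hour(3 * 600e6, 1)   # ~1509 shares/hour total
split = shares_per_hour(600e6, 1)          # ~503 shares/hour each

# The combined worker's relative fluctuation is sqrt(3) times smaller,
# even though total expected earnings are identical either way.
print(cv(combined) < cv(split))
```

The totals are the same in expectation; only the per-worker statistics change, which is exactly the "perception" point.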

full member
Activity: 784
Merit: 101
So Gigavps and anyone else:  Did we ever decide on what a good share target was?  20?  24?
I'd say 10 shares per minute would be fine.

I agree... 10 shares per minute seems like a reasonable load on the network connections.

I have a very general question about variable difficulty which has bugged me.

I've asked before and I think the answer was experiment and see how you make out. I did some of that and the earnings have steadily been dropping due to increased hash rates, bad luck, and increased difficulty. So... it is hard to tell the results of testing.

So the question is...

If you have multiple devices... should each device have its own worker or should all of your devices share a worker?

If the devices are FPGA/GPU - should you use a different approach than when people receive ASIC hardware?

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
So Gigavps and anyone else:  Did we ever decide on what a good share target was?  20?  24?
I'd say 10 shares per minute would be fine.
full member
Activity: 147
Merit: 100
it's all working for me now yay!  probably has been for the past day or so, but I haven't really checked it until now :/
legendary
Activity: 1260
Merit: 1000
Thanks! Should be fixed now!
full member
Activity: 784
Merit: 101
Forum is currently giving a database error.
legendary
Activity: 1260
Merit: 1000
There really isn't a range, any server should be usable by anyone.  We are just trying to find the optimum point and then I will change all servers to that.
hero member
Activity: 988
Merit: 1000
Yes, that's correct.

Thanks,

It might help if you put a speed range for people to use:
Up to 3 GH/s or 10 workers, US3 is recommended

and so on
legendary
Activity: 1260
Merit: 1000
hero member
Activity: 988
Merit: 1000
Ok, here's the config now:

US1: 12 getworks per minute target
US2: 16 getworks per minute target
US3: 20 getworks per minute target

See which one works best for you.



I may bump that up to 20, 24 and 32?

So US1 gives the highest diff?
legendary
Activity: 1260
Merit: 1000
Ok, here's the config now:

US1: 12 getworks per minute target
US2: 16 getworks per minute target
US3: 20 getworks per minute target

See which one works best for you.



I may bump that up to 20, 24 and 32?
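A quick way to see how the three targets compare: a lower getworks-per-minute target forces a higher difficulty, roughly diff ≈ 60 × hashrate / (target × 2**32). This is only a back-of-the-envelope sketch; the 2.6 GH/s hashrate is borrowed from a rig mentioned later in the thread.

```python
# Approximate difficulty each server target implies for one worker.
# Lower getworks/min target -> higher diff. Illustrative figures only.
targets = {"US1": 12, "US2": 16, "US3": 20}  # getworks per minute
hashrate = 2.6e9                              # hashes/sec (example rig)

diffs = {srv: 60 * hashrate / (t * 2**32) for srv, t in targets.items()}

# US1's 12/min target yields the largest difficulty of the three.
print(max(diffs, key=diffs.get))  # -> US1
```

So yes, under this arithmetic US1 gives the highest diff of the three.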
hero member
Activity: 988
Merit: 1000
Could you give us a rundown on the diff at each server?
legendary
Activity: 1260
Merit: 1000
I am migrating the web server to a different DC... well, it's already migrated.  I just repointed the DNS servers to the new server... it should be completely seamless and no one should notice the switch (unless your AV program or something pops up an alert that the IP has changed).

If you have a problem, please let me know so I can get it fixed ASAP.  Everything appears to be working on the new servers, so it should be pretty transparent.

The new DC should be much more stable (assuming I don't forget to pay the bill again).  It will also have the added bonus of being accessible via IPv6 as soon as I get it configured on the server.
legendary
Activity: 1540
Merit: 1001
Do they all run under the same worker?  Vardiff is per worker, in case I haven't made that clear yet.


btw, I like the vardiff.  Doesn't matter too much with 2.6gh, but it'll mean a lot when the asics roll in.

M
legendary
Activity: 1260
Merit: 1000
Do they all run under the same worker?  Vardiff is per worker, in case I haven't made that clear yet.