
Topic: [1200 TH] EMC: 0 Fee DGM. Anonymous PPS. US & EU servers. No Registration! - page 53. (Read 499597 times)

sr. member
Activity: 850
Merit: 331

I've noticed the change from bfg to cow.
Huh?

...
cowminer version 2.8.1
Code:
cowminer version 2.8.1 - Started: [2012-10-01 19:28:12] - [  0 days 20:06:41]
--------------------------------------------------------------------------------
 5s:200.9 avg:201.1 u:199.7 Mh/s | A:3367  R:21  HW:0  E:174%  U:2.8/m

I mine with a 6770 and get 200 Mh/s, so 1/10 of yours, but my U: (real shares submitted to the pool per minute) is proportionally much higher; I always get 2.6-2.7 shares/m. Can anybody confirm this? Is it a matter of GPU vs FPGA, or am I just too lucky?

Too lucky? No, you're just using the wrong software Tongue
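For what it's worth, 2.6-2.7 shares/m at ~200 Mh/s is almost exactly what the math predicts: at difficulty 1 a miner finds, on average, one share per 2^32 hashes. A quick sketch of that expectation (standard diff-1 arithmetic, not taken from either miner's source):

```python
# Expected share rate for a given hashrate: on average, one share of
# difficulty D is found per D * 2^32 hashes (standard pool math).
def expected_shares_per_min(hashrate_hs: float, difficulty: float = 1.0) -> float:
    return hashrate_hs * 60 / (2**32 * difficulty)

# ~200 Mh/s, as in the status line quoted above:
print(round(expected_shares_per_min(200.9e6), 2))  # 2.81 shares/min
```

So the U:2.8/m in the status line matches the expectation almost exactly, and 2.6-2.7/m is just normal variance, not luck or software.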
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
That said, how does that affect efficiency calculations going forward?  Stratum is effectively the same in that regard, so if you pull a template and send back getworks, how is cgminer going to calculate efficiency, or does that just become a redundant metric at that point?
I haven't decided what to do with the efficiency metric. Either I'll make up something or just not use it.
Having made it halfway through implementing the stratum protocol, I've decided that each mining.notify message will be counted as the equivalent of a getwork. Of course, efficiency is increasingly a figure of little use to miners and pool ops alike, but perhaps a target efficiency will end up being the knob for tuning what variable diff to set.

I'd argue efficiency isn't even a meaningful stat on Stratum.  Pools sending you more job notifications aren't less efficient; they're actually MORE efficient (more frequent jobs mean staying more current on the network's transactions).
Indeed, efficiency is already confusing enough in light of rolltime and vardiff, and it isn't defined in any meaningful fashion for stratum. It looks like it might be time to retire it as a metric.
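Counting each mining.notify as a getwork equivalent keeps the E% formula itself unchanged; only the meaning of the denominator shifts. A minimal sketch of the cgminer-style calculation (names are illustrative, not cgminer's actual internals):

```python
# cgminer-style efficiency: accepted shares per work item, x100.
# Under stratum, each mining.notify would count as one work item
# (per the decision quoted above). Names here are illustrative.
def efficiency_pct(accepted_shares: int, work_items: int) -> float:
    if work_items == 0:
        return 0.0
    return 100.0 * accepted_shares / work_items

# The "Q:275  A:1573 ... E:572%" status line in this thread checks out:
print(round(efficiency_pct(1573, 275)))  # 572
```

This also shows why the metric loses meaning on stratum: the pool controls how often it sends mining.notify, so E% measures pool behavior more than miner behavior.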
legendary
Activity: 2576
Merit: 1186
Sorry, but I don't understand. Wrong software? Are the measurements different when bfgminer is used with a GPU versus an FPGA?
Kano is a troll, just ignore him.

I've noticed the change from bfg to cow.
Huh?
legendary
Activity: 1750
Merit: 1007
That said, how does that affect efficiency calculations going forward?  Stratum is effectively the same in that regard, so if you pull a template and send back getworks, how is cgminer going to calculate efficiency, or does that just become a redundant metric at that point?
I haven't decided what to do with the efficiency metric. Either I'll make up something or just not use it.
Having made it halfway through implementing the stratum protocol, I've decided that each mining.notify message will be counted as the equivalent of a getwork. Of course, efficiency is increasingly a figure of little use to miners and pool ops alike, but perhaps a target efficiency will end up being the knob for tuning what variable diff to set.

I'd argue efficiency isn't even a meaningful stat on Stratum.  Pools sending you more job notifications aren't less efficient; they're actually MORE efficient (more frequent jobs mean staying more current on the network's transactions).
sr. member
Activity: 850
Merit: 331
...
Code:
cowminer version 2.8.1 - Started: [2012-10-01 19:28:12] - [  0 days 20:06:41]
--------------------------------------------------------------------------------
 5s:200.9 avg:201.1 u:199.7 Mh/s | A:3367  R:21  HW:0  E:174%  U:2.8/m

I mine with a 6770 and get 200 Mh/s, so 1/10 of yours, but my U: (real shares submitted to the pool per minute) is proportionally much higher; I always get 2.6-2.7 shares/m. Can anybody confirm this? Is it a matter of GPU vs FPGA, or am I just too lucky?

Too lucky? No, you're just using the wrong software Tongue

Sorry, but I don't understand. Wrong software? Are the measurements different when bfgminer is used with a GPU versus an FPGA?

I've noticed the change from bfg to cow.

Regards
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...
Code:
cowminer version 2.8.1 - Started: [2012-10-01 19:28:12] - [  0 days 20:06:41]
--------------------------------------------------------------------------------
 5s:200.9 avg:201.1 u:199.7 Mh/s | A:3367  R:21  HW:0  E:174%  U:2.8/m

I mine with a 6770 and get 200 Mh/s, so 1/10 of yours, but my U: (real shares submitted to the pool per minute) is proportionally much higher; I always get 2.6-2.7 shares/m. Can anybody confirm this? Is it a matter of GPU vs FPGA, or am I just too lucky?

Too lucky? No, you're just using the wrong software Tongue
sr. member
Activity: 850
Merit: 331
I have an older cgminer (2.4.1) running on OpenWRT that was working great with EMC until last weekend (which I assume is when vardiff got turned on). Since then, only roughly 50% of my hashing power is reported on the workers page.

I tried upgrading to the latest git version, which gave exactly the same result (plus random segfaults), so I moved back to my trusted version.

Vardiff should work fine even with 2.4.1 if I understand it correctly, so what am I missing?
Try 2.7.5 ...

On 2.7.5 now. I'm putting 2 GH/s+ in (10x ZTEX singles) and it all looks good on the miner side (apart from a few shares rejected as high-hash, which is new to me). On EMC, however, the reported hash rate fluctuates between 1 and 1.4 GH/s; avg diff is 1.088. I expected it to fluctuate a bit higher, obviously.

Code:
(5s):2275.3 (avg):2042.6 Mh/s | Q:275  A:1573  R:214  HW:0  E:572%  U:16.6/m
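The gap between the ~2 GH/s on the miner side and the 1-1.4 GH/s EMC shows is roughly what the accepted-share rate predicts: a pool can only estimate hashrate from shares, at about shares/s × difficulty × 2^32. A sketch of that estimate (standard pool-side math, assumed here, not EMC's actual code):

```python
# Pool-side hashrate estimate from accepted-share rate and avg share
# difficulty: one diff-D share represents ~D * 2^32 hashes on average.
def pool_hashrate_ghs(shares_per_min: float, avg_difficulty: float) -> float:
    return shares_per_min / 60 * avg_difficulty * 2**32 / 1e9

# U:16.6/m at avg diff 1.088, as in the stats line above:
print(round(pool_hashrate_ghs(16.6, 1.088), 2))  # 1.29 GH/s
```

So ~1.29 GH/s is about what the pool would report from accepted shares alone; the 214 rejects don't count toward the estimate, which likely explains much of the remaining shortfall.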



Code:
bfgminer version 2.8.1 - Started: [2012-10-01 19:28:12] - [  0 days 20:06:41]
--------------------------------------------------------------------------------
 5s:200.9 avg:201.1 u:199.7 Mh/s | A:3367  R:21  HW:0  E:174%  U:2.8/m

I mine with a 6770 and get 200 Mh/s, so 1/10 of yours, but my U: (real shares submitted to the pool per minute) is proportionally much higher; I always get 2.6-2.7 shares/m. Can anybody confirm this? Is it a matter of GPU vs FPGA, or am I just too lucky?

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
That said, how does that affect efficiency calculations going forward?  Stratum is effectively the same in that regard, so if you pull a template and send back getworks, how is cgminer going to calculate efficiency, or does that just become a redundant metric at that point?
I haven't decided what to do with the efficiency metric. Either I'll make up something or just not use it.
Having made it halfway through implementing the stratum protocol, I've decided that each mining.notify message will be counted as the equivalent of a getwork. Of course, efficiency is increasingly a figure of little use to miners and pool ops alike, but perhaps a target efficiency will end up being the knob for tuning what variable diff to set.
legendary
Activity: 1540
Merit: 1001
I have an older cgminer (2.4.1) running on OpenWRT that was working great with EMC until last weekend (which I assume is when vardiff got turned on). Since then, only roughly 50% of my hashing power is reported on the workers page.

I tried upgrading to the latest git version, which gave exactly the same result (plus random segfaults), so I moved back to my trusted version.

Vardiff should work fine even with 2.4.1 if I understand it correctly, so what am I missing?
Try 2.7.5 ...

On 2.7.5 now. I'm putting 2 GH/s+ in (10x ZTEX singles) and it all looks good on the miner side (apart from a few shares rejected as high-hash, which is new to me). On EMC, however, the reported hash rate fluctuates between 1 and 1.4 GH/s; avg diff is 1.088. I expected it to fluctuate a bit higher, obviously.

Code:
(5s):2275.3 (avg):2042.6 Mh/s | Q:275  A:1573  R:214  HW:0  E:572%  U:16.6/m

legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
I have an older cgminer (2.4.1) running on OpenWRT that was working great with EMC until last weekend (which I assume is when vardiff got turned on). Since then, only roughly 50% of my hashing power is reported on the workers page.

I tried upgrading to the latest git version, which gave exactly the same result (plus random segfaults), so I moved back to my trusted version.

Vardiff should work fine even with 2.4.1 if I understand it correctly, so what am I missing?
Try 2.7.5 ...
legendary
Activity: 1540
Merit: 1001
I have an older cgminer (2.4.1) running on OpenWRT that was working great with EMC until last weekend (which I assume is when vardiff got turned on). Since then, only roughly 50% of my hashing power is reported on the workers page.

I tried upgrading to the latest git version, which gave exactly the same result (plus random segfaults), so I moved back to my trusted version.

Vardiff should work fine even with 2.4.1 if I understand it correctly, so what am I missing?
full member
Activity: 140
Merit: 100
full member
Activity: 121
Merit: 100
I just saw it. Half mhash for 17 blocks now... why?

cgminer 2.4.1, can't upgrade
What he probably means is that cgminer's mhash is approximately half; in my case from 320 to 160 MH.

For me it's also seen with cgminer 2.4.1, but directly against the stratum proxy (tested both 0.5.0 and 0.8.3, against btcguild). The only clue for me so far is that it's a 6950; three other 5850s with the same setup have no problem. Maybe you also have a 6950?
Nevermind, too tired; I had launched an earlier cgminer. Sorry for the fuss.
full member
Activity: 140
Merit: 100
what is the pool software you are using?
legendary
Activity: 1540
Merit: 1001
I'm pointing my measly 5 GH/s here now.  So far I like the interface.  Note to self, and others: if switching from a pool where the workers were user.miner to Eclipse, change it to user_miner if you expect to see anything.  Otherwise it happily mines away, presumably to never-never land.

Q: Are you keeping transaction fees?  I assume you need something to pay for this, unless you have some happy donors.

M
420
hero member
Activity: 756
Merit: 500
US2 is having some trouble, I am investigating it now.  


US3 gave me the same result.

Never mind, US3 seems to work now.

EDIT again: both working again for RPCMiner.

EDIT 3x: Bad-luck streak? I switched to Eclipse and no blocks are found? raincloud Sad
legendary
Activity: 1260
Merit: 1000
US2 is having some trouble, I am investigating it now. 
420
hero member
Activity: 756
Merit: 500
Anyone else having a problem connecting to their account? I am: I can log in, but my worker isn't working, when it was working yesterday.
vip
Activity: 1358
Merit: 1000
AKA: gigavps
I can readjust all servers to different amounts as well if we want to try that.  It's also possible to leave servers at different difficulties, for whatever people feel most comfortable with.

Do you see any value in going to 24 or higher to test?  My only concern with a higher GW/m rate is that when the system gets slammed with 20 TH/s it starts to overload the back end, but in all honesty the back end is pretty robust at this point and can handle around 5 TH/s per server at diff 1, if not more.

You might want to keep a minimum diff for each user mining, so that once they start mining for the first time the system remembers a minimum difficulty for that particular miner. That way, if someone with 10 TH/s hops on and off the server, they don't start with diff-1 shares every time.

Just my two bitcents.  Cheesy
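The remembered-minimum idea above can be sketched as a vardiff adjuster that persists a per-worker difficulty floor, so a 10 TH/s miner who hops off and reconnects never restarts at diff 1. The target rate, clamping, and floor policy here are all made up for illustration; this is not EMC's actual code:

```python
# Illustrative vardiff with a remembered per-worker minimum difficulty.
# The target share rate and the "half the settled diff" floor policy
# are invented for this sketch.
TARGET_SHARES_PER_MIN = 18.0

min_diff_memory: dict[str, float] = {}  # worker -> remembered floor

def next_difficulty(worker: str, observed_shares_per_min: float,
                    current_diff: float) -> float:
    # Scale difficulty so the observed share rate moves toward the target.
    new_diff = max(1.0, current_diff * observed_shares_per_min / TARGET_SHARES_PER_MIN)
    # Never drop below the floor this worker has previously earned.
    floor = min_diff_memory.get(worker, 1.0)
    new_diff = max(new_diff, floor)
    # Remember a conservative floor: half the difficulty we settled on.
    min_diff_memory[worker] = max(floor, new_diff / 2)
    return new_diff
```

With this, a fast worker that briefly flooded the pool at diff 1 and was raised to diff 10 would come back no lower than diff 5, instead of hammering the back end with diff-1 shares again.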
legendary
Activity: 1260
Merit: 1000
I can readjust all servers to different amounts as well if we want to try that.  It's also possible to leave servers at different difficulties, for whatever people feel most comfortable with.

Do you see any value in going to 24 or higher to test?  My only concern with a higher GW/m rate is that when the system gets slammed with 20 TH/s it starts to overload the back end, but in all honesty the back end is pretty robust at this point and can handle around 5 TH/s per server at diff 1, if not more.
