
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 541. (Read 5805728 times)

member
Activity: 125
Merit: 10
Can't compile

Code:
  CC     cgminer-util.o
util.c: In function 'nmsleep':
util.c:703:7: error: void value not ignored as it ought to be
make[2]: *** [cgminer-util.o] Error 1
make[2]: Leaving directory `/home/z/cgminer'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/z/cgminer'
make: *** [all] Error 2

Any suggestions?
hero member
Activity: 658
Merit: 500
2.5 has caused me HUGE problems. Every miner but 1 mines a couple shares and stops. Just sits there. 2.4.4 did something similar. I dropped back to 2.4.3 and everything is peachy.

What changed between them?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I think I know what's going on now. P2pool assumes that the miner will not roll ntime more than 10 seconds into the future (because it sets X-Roll-NTime to "expire=10"). At every work request it increments ntime by 12 and returns that.

cgminer doesn't respect the short expire, and rolls more than that. The two programs roll independently and every so often they clash.
Thanks, see my post in the p2pool thread. This is more about interpretation of the expire= parameter than anything else.
sr. member
Activity: 337
Merit: 252
I think I know what's going on now. P2pool assumes that the miner will not roll ntime more than 10 seconds into the future (because it sets X-Roll-NTime to "expire=10"). At every work request it increments ntime by 12 and returns that.

cgminer doesn't respect the short expire, and rolls more than that. The two programs roll independently and every so often they clash.

This is a short version of the p2pool log:
Quote
long poll returns new work:
  ntime: 1341949597, id: 157
result from miner:          
  ntime: 1341949618, nonce: 2503472672
  ntime: 1341949602, nonce: 3756563217
work request:
  ntime: 1341949609 (1341949597+12), id: 157
result from miner:
  ntime: 1341949604, nonce: 0474846110
  ntime: 1341949597, nonce: 1088064670
  ntime: 1341949598, nonce: 0826850881
  ntime: 1341949612, nonce: 3546112957, hash: 00000000a89439acea9673042010653694b76b6bcae528ec034bd94144fe476b
  ntime: 1341949600, nonce: 2986584650
  ntime: 1341949614, nonce: 4073111605
work request:
  ntime: 1341949621 (1341949597+24), id: 157
result from miner:
  ntime: 1341949618, nonce: 2503472672
  ntime: 1341949610, nonce: 2498371303
  ntime: 1341949612, nonce: 3546112957, hash: 00000000a89439acea9673042010653694b76b6bcae528ec034bd94144fe476b
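The overlap in that log can be reproduced with a toy model (a sketch of the behaviour described above, not p2pool's or cgminer's actual code): the pool hands out work with ntime bumped by 12 per request, while the miner independently rolls each work item forward past the 10-second expire, so the rolled ntime ranges intersect.

```python
# Toy model of the clash described above (not p2pool's actual code).
# The pool increments ntime by 12 per work request; the miner rolls each
# work item's ntime forward on its own, ignoring the expire=10 limit.
base_ntime = 1341949597  # from the log above

pool_ntimes = [base_ntime + 12 * i for i in range(3)]  # 1341949597, +12, +24

# Miner rolls each piece of work up to 20 seconds forward (past expire=10).
rolled = {}
for nt in pool_ntimes:
    for offset in range(21):
        rolled.setdefault(nt + offset, []).append(nt)

# Any rolled ntime reachable from two different pool work items means the
# miner can search the same (ntime, nonce) space twice -> duplicate shares.
collisions = {t: srcs for t, srcs in rolled.items() if len(srcs) > 1}
print(sorted(collisions)[:3])
```

Note that 1341949612, the ntime on the duplicated share in the log, is one of the colliding values: it is reachable both as base+15 and as (base+12)+3.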
legendary
Activity: 1540
Merit: 1001
What you are looking for is the work that produced the 2 shares and where it came from.

If p2pool sent the same work twice then that would be explanation #1
If cgminer rolled the work and produced 2 pieces of work the same that would be explanation #2
If the work sent to both of the Icarus was different but the results ended up the same that would be explanation #3

1) would be caused by something in p2pool
2) would be a problem with the rolling code
3) would probably be caused by some sort of static variable in the Icarus code overwriting another icarus device data

I certainly have no idea which it is, but you should be able to work that out, either from the current full cgminer logs, or with adding an extra debug somewhere in cgminer to show the incoming work (if it doesn't already show that)

Once you have a log showing all 3 pieces of information (getwork, work sent to Icarus, results) for a duplicate pair of shares, we can then determine where the problem is and fix it (or pass it on to p2pool to fix)

As I've said in the p2pool thread, I've seen this dupe message with phoenix as well.  I seem to get it more on cgminer, but I definitely see it with phoenix.

M
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
What you are looking for is the work that produced the 2 shares and where it came from.

If p2pool sent the same work twice then that would be explanation #1
If cgminer rolled the work and produced 2 pieces of work the same that would be explanation #2
If the work sent to both of the Icarus was different but the results ended up the same that would be explanation #3

1) would be caused by something in p2pool
2) would be a problem with the rolling code
3) would probably be caused by some sort of static variable in the Icarus code overwriting another icarus device data

I certainly have no idea which it is, but you should be able to work that out, either from the current full cgminer logs, or with adding an extra debug somewhere in cgminer to show the incoming work (if it doesn't already show that)

Once you have a log showing all 3 pieces of information (getwork, work sent to Icarus, results) for a duplicate pair of shares, we can then determine where the problem is and fix it (or pass it on to p2pool to fix)
sr. member
Activity: 337
Merit: 252
Here is an example of duplicates I get. Is there a simple explanation for this? (I might add that my miner doesn't handle the excessive logging very well because of the puny USB drive it is running from)

from cgminer debug log:
Quote
[2012-07-10 21:46:42] PROOF OF WORK RESULT: true (yay!!!)
 [2012-07-10 21:46:42] ICA0                | (5s):117.8 (avg):215.1 Mh/s | A:39 R:10 HW:0 U:3.3/m
 [2012-07-10 21:46:41] HTTP hdr(Content-Length): 58
 [2012-07-10 21:46:41]  Proof: 00000000a89439acea9673042010653694b76b6bcae528ec034bd94144fe476b
Target: 00000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffff
TrgVal? YES (hash < target)
 [2012-07-10 21:46:42] Pushing submit work to work thread
p2pool debug log:
Quote
2012-07-10 21:46:42.614994 > "Miner digger @ 192.168.1.102 submitted work: u'000000018e45fff93abc335ff6468d63036f5f368732cd938f253ede000008b500000000adc302d ba93ec61b888909f957f8b07bc7c5e265504899d9b44ccfbe14d0d9d94ffc86ac1a099431d35d63 bd00000080000000000000000000000000000000000000000000000000000000000000000000000 0000000000080020000'"
2012-07-10 21:46:42.615215 > "Submitted header: {'nonce': 3546112957, 'timestamp': 1341949612, 'merkle_root': 9415264709392722949772953808229952210827103327730856756643112439749055546075L, 'version': 1, 'previous_block': 13995169685392530026568800876350891503467619823386489753305081L, 'bits': FloatingInteger(bits=0x1a099431, target=0x994310000000000000000000000000000000000000000000000L)}"
2012-07-10 21:46:42.615439 > 'header hash: 00000000a89439acea9673042010653694b76b6bcae528ec034bd94144fe476b'
a few seconds later from cgminer log (it says nothing about submitting stale work)
Quote
[2012-07-10 21:46:47] PROOF OF WORK RESULT: true (yay!!!)
 [2012-07-10 21:46:47] ICA7                | (5s):199.8 (avg):212.7 Mh/s | A:44 R:5 HW:0 U:3.7/m
 [2012-07-10 21:46:46] HTTP hdr(Content-Length): 58
 [2012-07-10 21:46:46]  Proof: 00000000a89439acea9673042010653694b76b6bcae528ec034bd94144fe476b
Target: 00000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffff
TrgVal? YES (hash < target)
 [2012-07-10 21:46:47] Pushing submit work to work thread
and p2pool responds
Quote
2012-07-10 21:46:47.873337 > "Miner digger @ 192.168.1.102 submitted work: u'000000018e45fff93abc335ff6468d63036f5f368732cd938f253ede000008b500000000adc302d ba93ec61b888909f957f8b07bc7c5e265504899d9b44ccfbe14d0d9d94ffc86ac1a099431d35d63 bd00000080000000000000000000000000000000000000000000000000000000000000000000000 0000000000080020000'"
2012-07-10 21:46:47.873639 > "Submitted header: {'nonce': 3546112957, 'timestamp': 1341949612, 'merkle_root': 9415264709392722949772953808229952210827103327730856756643112439749055546075L, 'version': 1, 'previous_block': 13995169685392530026568800876350891503467619823386489753305081L, 'bits': FloatingInteger(bits=0x1a099431, target=0x994310000000000000000000000000000000000000000000000L)}"
2012-07-10 21:46:47.873877 > 'header hash: 00000000a89439acea9673042010653694b76b6bcae528ec034bd94144fe476b'
2012-07-10 21:46:47.998484 > Worker digger @ 192.168.1.102 submitted share 00000000a89439acea9673042010653694b76b6bcae528ec034bd94144fe476b more than once!
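The `TrgVal?` line in those cgminer logs is just a big-endian numeric comparison of the proof-of-work hash against the share target. A minimal illustration using the values from the log above:

```python
# Recreate the hash-vs-target check shown in the cgminer log above.
proof  = "00000000a89439acea9673042010653694b76b6bcae528ec034bd94144fe476b"
target = "00000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffff"

# Interpret both hex strings as big-endian integers and compare.
is_share = int(proof, 16) < int(target, 16)
print("TrgVal?", "YES" if is_share else "NO")  # -> TrgVal? YES
```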
legendary
Activity: 2702
Merit: 1468
Decreasing the time from 112 to 60 results in an increased hashrate to ~275MH/s. About 25%. Better, but no cigar...

The Utility stays about the same. The total goes from about 69 to 70. But it was correct from the start. Theoretically my total hashrate is 4956MH/s, which should give a utility of 60 x 4.956e9/4295032833 = 69.23 shares/minute.
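That expected-utility arithmetic can be checked directly (the divisor is the one used in the post, roughly 2^32 hashes per difficulty-1 share on average):

```python
# Expected shares per minute ("utility") for a given hashrate, using the
# numbers from the post above. The divisor is the average number of hashes
# per difficulty-1 share (~2**32; the post uses 4295032833).
hashrate = 4.956e9             # total hashrate in hashes/second
hashes_per_share = 4295032833  # value used in the post

utility = 60 * hashrate / hashes_per_share  # shares per minute
print(round(utility, 2))  # -> 69.23
```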


The hash rate is an estimate. If you put 80 as the abort time, you'll get a better estimate.
If you put 90 or 100, your hash rate estimate will be lower.

In my setup, the new icarus code does weird things (hash rates swing wildly), my utility rarely goes above 5/min.

With the old icarus code (some small mods by me, see my site) I get consistently 26.80/min out of my 5 icarus boards.
I stopped using the latest greatest icarus code as it is making me less money.

BTW, I'm using 6.5 secs to abandon previously submitted jobs.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Expire was added in cgminer 2.4.4

Expire=10 was a bug in much pool software. It did not make sense at all to have that value for anything but p2pool so the scantime is used if it is longer than expire (expire should be more like 120 seconds). This can't be creating duplicates though.

p2pool has a recent bug where duplicate work items are sent out regardless of the mining software. Anyone on p2pool?

Lots going on folks.... There is always more than meets the eye.
Well yeah, I'm on p2pool. So, if expire doesn't make any sense what headers should be used by p2pool? Should I set --scan-time to 10 perhaps?

Neither the p2pool software nor cgminer logs the getwork reply, so I'm adding that to see what happens.
Yes I guess you should set scantime to 10 seconds for now. In a future version I'll actually honour what otherwise appears to be a bogus roll expire time if the pool really wants that.
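For reference, the workaround suggested here would look something like the following on the command line (the pool URL and credentials are placeholders, not from the thread):

```shell
# Hypothetical invocation: force cgminer to fetch fresh work every 10 seconds
# to match p2pool's expire=10, rather than relying on rolled ntime.
# Replace the URL, worker name and password with your own p2pool node's.
cgminer -o http://127.0.0.1:9332 -u myworker -p x --scan-time 10
```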
sr. member
Activity: 337
Merit: 252
Expire was added in cgminer 2.4.4

Expire=10 was a bug in much pool software. It did not make sense at all to have that value for anything but p2pool so the scantime is used if it is longer than expire (expire should be more like 120 seconds). This can't be creating duplicates though.

p2pool has a recent bug where duplicate work items are sent out regardless of the mining software. Anyone on p2pool?

Lots going on folks.... There is always more than meets the eye.
Well yeah, I'm on p2pool. So, if expire doesn't make any sense what headers should be used by p2pool? Should I set --scan-time to 10 perhaps?

Neither the p2pool software nor cgminer logs the getwork reply, so I'm adding that to see what happens.
sr. member
Activity: 337
Merit: 252
...
They are different:
[2012-07-09 18:10:45] Icarus 8 sent: b586fa1de5c...c6d325e1c207d00...003194091ae002fb4f58386ba3
[2012-07-09 18:10:45] DBG: sending http://hack:9332 submit RPC call: {...9803830ca439defe9a1cea36b38584ffb02bc1a09943186654dc20000008...
[2012-07-09 18:10:45] Icarus 5 sent: b586fa1de5c...c6d325e1c207d00...003194091adf02fb4f58386ba3
[2012-07-09 18:10:45] DBG: sending http://hack:9332 submit RPC call: {...9803830ca439defe9a1cea36b38584ffb02981a099431a3ee925c0000008...

If you look at the proof of the second one, it will of course also be different
When time is rolled, the data is only very slightly different of course ...

i.e. they are not duplicate shares in the above example
Oh man, I'm very seldom not uncareful ;) Thanks.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
cgminer has stale x2 greater than DiabloMiner
on deepbit
(0.24%)   Prop.
(0.21%)   Prop.
(0.19%)   Prop.
(0.23%)   Prop.
(0.11%)   Prop.  - DiabloMiner
(0.20%)   Prop.
(0.20%)   Prop.
(0.23%)   Prop.
(0.22%)   Prop.
(0.24%)   Prop.
(0.37%)   Prop.
(0.22%)   Prop.
Try dropping intensity by 1 or number of gpu threads by 1. Perhaps you're just over the efficiency/time to search peak.
legendary
Activity: 1540
Merit: 1001
Expire was added in cgminer 2.4.4

Expire=10 was a bug in much pool software. It did not make sense at all to have that value for anything but p2pool so the scantime is used if it is longer than expire (expire should be more like 120 seconds). This can't be creating duplicates though.

p2pool has a recent bug where duplicate work items are sent out regardless of the mining software. Anyone on p2pool?

Lots going on folks.... There is always more than meets the eye.

That explains the dupes.  I'm getting them aplenty! :(

I'm trying p2pool (again) for a week.  So far the results have been dismal, and I'm having a hard time convincing myself to continue through the week. :(

M
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
cgminer has stale x2 greater than DiabloMiner
on deepbit
(0.24%)   Prop.
(0.21%)   Prop.
(0.19%)   Prop.
(0.23%)   Prop.
(0.11%)   Prop.  - DiabloMiner
(0.20%)   Prop.
(0.20%)   Prop.
(0.23%)   Prop.
(0.22%)   Prop.
(0.24%)   Prop.
(0.37%)   Prop.
(0.22%)   Prop.

Huh, that's weird. cgminer uses pretty similar code to what I do.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
...
They are different:
[2012-07-09 18:10:45] Icarus 8 sent: b586fa1de5c...c6d325e1c207d00...003194091ae002fb4f58386ba3
[2012-07-09 18:10:45] DBG: sending http://hack:9332 submit RPC call: {...9803830ca439defe9a1cea36b38584ffb02bc1a09943186654dc20000008...
[2012-07-09 18:10:45] Icarus 5 sent: b586fa1de5c...c6d325e1c207d00...003194091adf02fb4f58386ba3
[2012-07-09 18:10:45] DBG: sending http://hack:9332 submit RPC call: {...9803830ca439defe9a1cea36b38584ffb02981a099431a3ee925c0000008...

If you look at the proof of the second one, it will of course also be different
When time is rolled, the data is only very slightly different of course ...

i.e. they are not duplicate shares in the above example
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Expire was added in cgminer 2.4.4

Expire=10 was a bug in much pool software. It did not make sense at all to have that value for anything but p2pool so the scantime is used if it is longer than expire (expire should be more like 120 seconds). This can't be creating duplicates though.

p2pool has a recent bug where duplicate work items are sent out regardless of the mining software. Anyone on p2pool?

Lots going on folks.... There is always more than meets the eye.
sr. member
Activity: 337
Merit: 252
Sorry for spamming but this is what I've been talking about:
Code:
 [2012-07-09 18:10:45] Icarus 8 nonce = 0xc24d6586 = 0x849acb0e hashes (5.860506s)
 [2012-07-09 18:10:45] [thread 12: 3017971390 hashes, 379385971 khash/sec]
 [2012-07-09 18:10:45] Popping work from get queue to get work
 [2012-07-09 18:10:45] Pushing rolled converted work to stage thread
 [2012-07-09 18:10:45] Pushing work to stage thread
 [2012-07-09 18:10:45] Successfully rolled work
 [2012-07-09 18:10:45] Successfully rolled work
 [2012-07-09 18:10:45] Pushing work to stage thread
 [2012-07-09 18:10:45] Icarus 8 sent: b586fa1de5cdd76ea88f9e78c52694d0ee9b61a4f56e10e4f22c6d325e1c207d00000000000000000000000000000000000000003194091ae002fb4f58386ba3
 [2012-07-09 18:10:45] Popping work to work thread
 [2012-07-09 18:10:45] Pushing work to getwork queue
 [2012-07-09 18:10:45] Popping work to stage thread
 [2012-07-09 18:10:45] Pushing work to getwork queue
 [2012-07-09 18:10:45] Popping work to stage thread
 [2012-07-09 18:10:45] Creating extra submit work thread
 [2012-07-09 18:10:45] DBG: sending http://hack:9332 submit RPC call: {"method": "getwork", "params": [ "00000001de2f42fb1d6b7ca7a2a0b65a8d1dab19ce61c6abe2bcfd57000004540000000094e6587a379cef8109fb5cdf05540d9bf3e9803830ca439defe9a1cea36b38584ffb02bc1a09943186654dc2000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000" ], "id":1}
 [2012-07-09 18:10:45] X-Roll-Ntime expiry set to 10
 [2012-07-09 18:10:45] PROOF OF WORK RESULT: true (yay!!!)
 [2012-07-09 18:10:45] Accepted 445d545e.e713b642 ICA 8 pool 0
 [2012-07-09 18:10:45] ICA8                | (5s):312.4 (avg):227.6 Mh/s | A:649 R:9 HW:0 U:5.5/m
 [2012-07-09 18:10:45]  Proof: 00000000686e0e540ba19b828314dcf41173d03b9eda020be41ceedc093dcb25
Target: 00000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffff
TrgVal? YES (hash < target)
 [2012-07-09 18:10:45] Pushing submit work to work thread
 [2012-07-09 18:10:45] Icarus 5 nonce = 0x5c92eea3 = 0xb925dd48 hashes (8.180717s)
 [2012-07-09 18:10:45] [thread 9: 3106266440 hashes, 379680133 khash/sec]
 [2012-07-09 18:10:45] Popping work from get queue to get work
 [2012-07-09 18:10:45] Icarus 5 sent: b586fa1de5cdd76ea88f9e78c52694d0ee9b61a4f56e10e4f22c6d325e1c207d00000000000000000000000000000000000000003194091adf02fb4f58386ba3
 [2012-07-09 18:10:45] Popping work to work thread
 [2012-07-09 18:10:45] Creating extra submit work thread
 [2012-07-09 18:10:45] DBG: sending http://hack:9332 submit RPC call: {"method": "getwork", "params": [ "00000001de2f42fb1d6b7ca7a2a0b65a8d1dab19ce61c6abe2bcfd57000004540000000094e6587a379cef8109fb5cdf05540d9bf3e9803830ca439defe9a1cea36b38584ffb02981a099431a3ee925c000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000" ], "id":1}
First "Icarus 8" sends something and then "Icarus 5" sends the exact same thing, except that the nonces are different. They are very close in time so in this case it might be that no new work has been received yet, but it happens a lot. As I said earlier 7% of my shares are duplicates.
sr. member
Activity: 337
Merit: 252
Yeah, that was basically what I wanted to say with my edit...

Quote
... also note that work returned after an LP is not counted in the hashmeter ...
Ok, since there is a long-poll before 10s 50% of the time (right?) that might explain it. Anyway it's not that important. There does not seem to be any lost work.

The roll-time issue on the other hand worries me. How can it be that expire is ignored and also what could possibly be the reason for resubmitting old hashes?
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
...
Well, taking a look at the log I find this:
Code:
[2012-07-09 18:10:37] Icarus Read: No data in 10.00 seconds
[2012-07-09 18:10:37] Icarus 3 no nonce = 0xe280310f hashes (10.000137s)
[2012-07-09 18:10:37] [thread 7: 3800051983 hashes, 283580784 khash/sec]
0xe280310f=3800051983 and 3800051983/10.000137=379999992.3. Not 283580784! Perhaps there is a simple resolution after all.
(BTW, it says khash/sec but clearly it should say hash/sec)
[edit]no... i was wrong again. the first calculation comes directly from Hs so it will be 380MH/s by definition. the second comes from hashmeter and is measured by real elapsed time. damn.[/edit]
The 10.00 seconds is based on the counter, not the elapsed time.
(this is the issue I was referring to that may need changing to deal with crappy timers on some OS's if they exist?)

The 10.000137s is elapsed time from the send work to when the abort was performed.
Thus the timer on your OS is fine for that one since they match.

You will also see an "Icarus %d sent: %s" 10 seconds before, at ~2012-07-09 18:10:27

The hashmeter times things differently so you cannot assume it will calculate over the same time and hash count.
Sometimes it will be lower and sometimes it will be higher - due to the different start/finish times of the calculation.
This is only with Icarus because Icarus does not complete the full nonce range in either case:
1) When it finds a share it aborts - random processing time
2) When it gets close to the end of the nonce range it aborts - in your case 10s processing time
Due to this the hash meter will of course go up and down.

... also note that work returned after an LP is not counted in the hashmeter ...
sr. member
Activity: 337
Merit: 252
For the p2pool issue, my "guess" would be that p2pool might not be specifying the expiry time properly?
If you turn on protocol debug it will tell you what p2pool is telling you to do with rolled shares and expiry.

If there is an issue with the rolltime/expiry supplied by the pool, cgminer will use --expiry and --scan-time to work out what to do.

As for Icarus - in debug mode it reports the result of each piece of work it does and how long it took to run.
That should make it clear what is going on.

The 2 cases give debug:
1) Icarus %d nonce = 0x%08x = 0x%08llx hashes (%ld.%06lds)
2) Icarus %d no nonce = 0x%08llx hashes (%ld.%06lds)

And also for case 2) you should get a line before it saying:
Icarus Read: No data in %.2f seconds

Since you compiled cgminer yourself, also try using the binary release to see if that makes any difference (probably not though)
Well, taking a look at the log I find this:
Code:
[2012-07-09 18:10:37] Icarus Read: No data in 10.00 seconds
[2012-07-09 18:10:37] Icarus 3 no nonce = 0xe280310f hashes (10.000137s)
[2012-07-09 18:10:37] [thread 7: 3800051983 hashes, 283580784 khash/sec]
0xe280310f=3800051983 and 3800051983/10.000137=379999992.3. Not 283580784! Perhaps there is a simple resolution after all.
(BTW, it says khash/sec but clearly it should say hash/sec)
[edit]no... i was wrong again. the first calculation comes directly from Hs so it will be 380MH/s by definition. the second comes from hashmeter and is measured by real elapsed time. damn.[/edit]
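The two numbers in that arithmetic are easy to reproduce (nonce counter over the reported elapsed time, straight from the log line):

```python
# Verify the arithmetic above: nonce counter vs. elapsed wall time.
hashes = 0xe280310f  # reported hash count from the Icarus log line
elapsed = 10.000137  # seconds, from the same log line

rate = hashes / elapsed
print(hashes, round(rate, 1))  # the ~380 MH/s figure from the post
```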

Regarding roll-time, the problem is not so easy as you suggest. P2pool sets the following headers:
Code:
request.setHeader('X-Long-Polling', '/long-polling')
request.setHeader('X-Roll-NTime', 'expire=10')
request.setHeader('X-Is-P2Pool', 'true')
and the protocol debug gives:
Code:
[2012-07-10 00:54:25] HTTP hdr(X-Roll-Ntime): expire=10
[2012-07-10 00:54:25] X-Roll-Ntime expiry set to 10
[2012-07-10 00:54:25] HTTP hdr(X-Long-Polling): /long-polling
Surely, scan-time is overridden by expire?

About the duplicates. Until today I have only been able to see that a lot of duplicate hashes are submitted but now forrestv has written some code to check the timestamp (ntime) on each submitted share and write a log message if it has been increased by more than 10 seconds. On my machine it spits out error messages like this every few seconds:

Code:
2012-07-10 01:34:45.211197 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
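The check forrestv added could be sketched as something like the following (a hypothetical simplification, not p2pool's actual code; `check_roll`, `work_ntime` and `submitted_ntime` are illustrative names):

```python
# Hypothetical sketch of the timestamp check described above: flag shares
# whose ntime has been rolled further than the advertised expire window.
# Names and message are illustrative, not p2pool's real code.
EXPIRE = 10  # seconds, from the X-Roll-NTime: expire=10 header

def check_roll(work_ntime, submitted_ntime, expire=EXPIRE):
    rolled_by = submitted_ntime - work_ntime
    if rolled_by > expire:
        return "Miner rolled timestamp improperly! (+%d s)" % rolled_by
    return None

# ntimes taken from the p2pool log excerpt earlier in the thread:
print(check_roll(1341949597, 1341949612))  # rolled +15 s -> warning
print(check_roll(1341949597, 1341949602))  # rolled +5 s -> ok (None)
```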