For the p2pool issue, my "guess" would be that p2pool might not be specifying the expiry time properly?
If you turn on protocol debug it will tell you what p2pool is telling you to do with rolled shares and expiry.
If there is an issue with the rolltime/expiry supplied by the pool, cgminer will use --expiry and --scan-time to work out what to do.
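Purely as an illustration of that fallback (not cgminer's actual code; the function name and the option values shown are my own assumptions), the decision looks something like this:

# Illustrative sketch of the fallback described above; names and values
# are assumptions, not cgminer internals.

def work_expiry_seconds(roll_ntime_expire, opt_expiry=120, opt_scantime=60):
    # roll_ntime_expire: value from the pool's X-Roll-NTime header
    #                    (e.g. 10 for "expire=10"), or None if absent.
    # opt_expiry / opt_scantime: the --expiry and --scan-time options.
    if roll_ntime_expire is not None and roll_ntime_expire > 0:
        # A sane pool-supplied expiry wins.
        return roll_ntime_expire
    # Otherwise fall back to the local options: don't keep work past
    # --expiry, and refresh work at least every --scan-time.
    return min(opt_expiry, opt_scantime)

print(work_expiry_seconds(10))      # pool says expire=10 -> 10
print(work_expiry_seconds(None))    # nothing usable from the pool -> 60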
As for Icarus - in debug mode it reports the result of each piece of work it does and how long it took to run.
That should make it clear what is going on.
The two cases give this debug output:
1) Icarus %d nonce = 0x%08x = 0x%08llx hashes (%ld.%06lds)
2) Icarus %d no nonce = 0x%08llx hashes (%ld.%06lds)
And also for case 2) you should get a line before it saying:
Icarus Read: No data in %.2f seconds
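For case 2) the hash count is not measured; with no nonce returned it can only be estimated from the elapsed time and the per-hash time (Hs) of the device. A rough sketch of that arithmetic (illustrative only, assuming a ~380 MH/s Icarus; this is not the driver source):

# With no nonce returned, the hash count is estimated as
# elapsed time divided by time-per-hash.
ICARUS_HS = 1.0 / 380e6            # assumed seconds per hash

def estimated_hashes(elapsed_seconds, hs=ICARUS_HS):
    return int(elapsed_seconds / hs)

print(hex(estimated_hashes(10.000137)))   # ~0xe280315c, close to the logged 0xe280310f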
Since you compiled cgminer yourself, also try using the binary release to see if that makes any difference (probably not though)
Well, taking a look at the log I find this:
[2012-07-09 18:10:37] Icarus Read: No data in 10.00 seconds
[2012-07-09 18:10:37] Icarus 3 no nonce = 0xe280310f hashes (10.000137s)
[2012-07-09 18:10:37] [thread 7: 3800051983 hashes, 283580784 khash/sec]
0xe280310f = 3800051983, and 3800051983 / 10.000137 = 379999992.3.
Not 283580784! Perhaps there is a simple resolution after all.
(BTW, it says khash/sec but clearly it should say hash/sec)
[edit]No... I was wrong again. The first calculation comes directly from Hs, so it will be 380 MH/s by definition; the second comes from the hashmeter and is measured against real elapsed time. Damn.[/edit]
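To spell that out with the numbers from the log above (a throwaway sketch; the variable names are mine):

# Plugging in the numbers from the log lines above.
nonce_hashes   = 0xe280310f        # 3,800,051,983 hashes reported by the driver
driver_elapsed = 10.000137         # seconds the driver says the work took
meter_rate     = 283580784.0       # rate printed by the hashmeter (hash/s)

print(nonce_hashes / driver_elapsed)   # ~380,000,000 hash/s, fixed by Hs
print(nonce_hashes / meter_rate)       # ~13.4 s: the elapsed time the hashmeter
                                       # would have used if it simply divided
                                       # hashes by real wall-clock time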
Regarding roll-time, the problem is not as easy as you suggest. P2pool sets the following headers:
request.setHeader('X-Long-Polling', '/long-polling')
request.setHeader('X-Roll-NTime', 'expire=10')
request.setHeader('X-Is-P2Pool', 'true')
and the protocol debug gives:
[2012-07-10 00:54:25] HTTP hdr(X-Roll-Ntime): expire=10
[2012-07-10 00:54:25] X-Roll-Ntime expiry set to 10
[2012-07-10 00:54:25] HTTP hdr(X-Long-Polling): /long-polling
Surely, scan-time is overridden by expire?
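For reference, the "X-Roll-Ntime expiry set to 10" line shows cgminer turning the header value into a 10-second expiry; the parsing side of that is trivial, something like this sketch (my own illustration, not cgminer's parser):

def parse_roll_ntime(value):
    # Return (roll_enabled, expire_seconds or None) for an X-Roll-Ntime value.
    # Illustrative only; real clients may accept other forms of the header.
    value = value.strip().lower()
    if value in ("", "n", "false", "0"):
        return (False, None)
    if value.startswith("expire="):
        try:
            return (True, int(value.split("=", 1)[1]))
        except ValueError:
            return (True, None)
    return (True, None)

print(parse_roll_ntime("expire=10"))   # (True, 10)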
About the duplicates: until today I could only see that a lot of duplicate hashes were being submitted, but now forrestv has written some code that checks the timestamp (ntime) on each submitted share and writes a log message if it has been increased by more than 10 seconds. On my machine it spits out error messages like this every few seconds:
2012-07-10 01:34:45.211197 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
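For context, that check presumably compares the ntime in each submitted share against the ntime of the work it was issued from, roughly like the sketch below (an approximation of the idea; the 10-second limit and the wording come from the description above, but this is not forrestv's actual code):

MAX_NTIME_ROLL = 10   # seconds the pool allows a miner to advance ntime

def check_rolled_ntime(template_ntime, submitted_ntime, miner, address):
    rolled = submitted_ntime - template_ntime
    if rolled > MAX_NTIME_ROLL:
        print("> Miner %s @ %s rolled timestamp improperly! "
              "This may be a bug in the miner that is causing you to lose work!"
              % (miner, address))
        return False
    return True

check_rolled_ntime(1341876000, 1341876025, "digger", "192.168.1.102")   # rolled by 25s -> warning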