ckolivas, you mentioned that Windows' default 15 ms timer resolution is shitty. Today I read that the 15 ms resolution is there for a reason:
http://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
Indeed, after running clockres and "powercfg -energy -duration 5" I can see that cgminer has switched it to 1 ms:
ClockRes v2.0 - View the system clock resolution
Copyright (C) 2009 Mark Russinovich
SysInternals - www.sysinternals.com
Maximum timer interval: 15.600 ms
Minimum timer interval: 0.500 ms
Current timer interval: 1.000 ms
Platform Timer Resolution:Timer Request Stack
The stack of modules responsible for the lowest platform timer setting in this process.
Requested Period 10000
Requesting Process ID 5356
Requesting Process Path \Device\HarddiskVolume6\Downloads\prog\bt\cgminer\cgminer.exe
Calling Module Stack \Device\HarddiskVolume4\Windows\System32\ntdll.dll
\Device\HarddiskVolume4\Windows\System32\winmm.dll
\Device\HarddiskVolume6\Downloads\prog\bt\cgminer\cgminer.exe
(The requested period of 10000 is in 100-nanosecond units, i.e. 1 ms, which matches the current timer interval reported by clockres.)
I haven't found an explicit mention of this in NEWS.txt, but there is this:
Version 3.0.0 - April 22nd, 2013
- Create a cgminer specific gettimeofday wrapper that is always called with tz
set to NULL and increases the resolution on windows.
- Add high resolution to nmsleep wrapper on windows.
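Presumably the gettimeofday wrapper does something like the following on the Windows side. This is just my guess at one common approach, using QueryPerformanceCounter; the names cg_gettimeofday and cg_timeval are made up, not cgminer's actual code:

#include <windows.h>

struct cg_timeval {
	long tv_sec;
	long tv_usec;
};

/* Microsecond-resolution time since boot (not the Unix epoch, so it is
 * only suitable for measuring intervals, which is what a miner needs). */
static void cg_gettimeofday(struct cg_timeval *tv)
{
	static LARGE_INTEGER freq;	/* counts per second, fixed at boot */
	LARGE_INTEGER now;

	if (!freq.QuadPart)
		QueryPerformanceFrequency(&freq);
	QueryPerformanceCounter(&now);

	tv->tv_sec  = (long)(now.QuadPart / freq.QuadPart);
	tv->tv_usec = (long)((now.QuadPart % freq.QuadPart) * 1000000 / freq.QuadPart);
}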
So, is the timer resolution change in cgminer intentional?
Indeed it is intentional. We work with very tight timeframes, and 15 ms resolution isn't remotely accurate enough for what's required by some higher-performance but poorly designed hardware. If the hardware in question had proper queues for submitting work and buffers for returning work, and didn't require polling, we wouldn't need to do this, but it doesn't. Otherwise, depending on which device we're talking about, we'd lose results, the results would be corrupted, or we'd be unable to keep the device busy (the Avalon, for example, suffers from all of the above).
Note that we try to enable high-resolution timers only for the duration we need them, but with so much going on in cgminer, they end up being on most of the time.
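The mechanism itself is the paired winmm.dll calls, which is consistent with winmm.dll showing up in the module stack in your powercfg report. A minimal sketch of the enable-only-while-needed pattern (illustrative, not the actual cgminer code; the function name is made up; link with winmm.lib):

#include <windows.h>
#include <mmsystem.h>	/* timeBeginPeriod/timeEndPeriod */

/* Sleep with ~1 ms accuracy by raising the platform timer resolution
 * only for the duration of the sleep. */
static void nmsleep_hires(unsigned int msecs)
{
	timeBeginPeriod(1);	/* raise tick rate from ~15.6 ms to 1 ms */
	Sleep(msecs);		/* now wakes within ~1 ms of the target */
	timeEndPeriod(1);	/* paired call: lets the system idle again */
}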