Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 585. (Read 5806088 times)

legendary
Activity: 2576
Merit: 1186
Firstly, why do I need to mention what small parts of api.c you wrote?
You should attribute things like that, because it sure did look like you were trying to claim credit for it.

Hmm - so (as can be seen above) I told you to move the extra info to its own command rather than have it the way you designed it, sending that extra static, never-changing info EVERY time you request 'devs', 'gpu|N' or 'pga|N' - and that was less efficient?
I'm talking about the implementation. As you noted, I did move it to a new command, as you requested.

Hmm - and you send your "get_extra_device_detail(cgpu)" both with the 'devs' command and the 'devdetail' command - i.e. duplicating it.
No, 'devs' uses get_extra_device_status, and 'devdetail' uses get_extra_device_detail. Different methods for different purposes.
hero member
Activity: 630
Merit: 500
ckolivas, any chance you can take a look at the possibility of churning out some code to take advantage of the Intel HD GPUs integrated into Sandy Bridge CPUs (and now Ivy Bridge)?  Apparently they just released an OpenCL SDK that allows access to the GPU portion of the CPU instead of just the CPU itself.  See post #22 here - https://bitcointalksearch.org/topic/m.870723
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
- Use longpolls from backup pools with failover-only enabled just to check for
block changes, but don't use them as work.
I don't have enough data on it to be definitive yet, but I'm wondering if this didn't slightly lower my Utility.  I am mining on a merged pool that sometimes uses submitold, but have a backup pool that doesn't.  Both use merged mining, and I am using --failover-only.  So, I'm wondering: when a pool other than 0 sends an LP before pool 0, is pool 0's work discarded even though it might still be good?  It seems to me like it would be, and then for the time between the LP on the backup pool and the LP on the primary pool, work wouldn't be done.  This may not be true, because it may immediately request new work that may then also be discarded when my pool does its LP (I have seen this take 20 seconds with no share submitted, but sometimes it takes significantly longer than that to find a share at ~318 MH/s).  That having been said, I am asking because my U is at 4.35 (which rounds to 4.4 in the main stats, over only 6125 shares) where it was at 4.41 before (over tens of thousands of shares over weeks of work).  Obviously we are only talking about a difference of .06 in my U, which may be statistically insignificant, but it is still potentially 1.5% fewer shares being submitted over >12 hours.
That difference is easily accounted for by variance, which in U is usually +/-10%, but the full discussion of its effects is here:
https://bitcointalksearch.org/topic/m.873742
Theoretically you might be losing a *tiny* bit of work across longpolls with --failover-only but in my experience it is less than 2 seconds' worth of work every 10 minutes.
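Rough numbers, to put those two figures side by side: a drop from 4.41 to 4.35 is 0.06 / 4.41 = ~1.4%, while losing 2 seconds of work out of every 600 is ~0.33% - both are small compared to the usual +/-10% swing in U.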
newbie
Activity: 20
Merit: 0
I have run into an issue with CGMiner that I haven't quite been able to sort out, so any help would be appreciated.
I tried searching the thread, and while there were some mentions of adjusting the engine clock, none of them quite fit my situation.

For a while now I had been using CGMiner 2.3.1 without any issues, but last night a problem came up where I couldn't adjust the clock speed below a certain value on either of my two cards (2x 6870). Wondering whether I had hit some odd bug, I checked the thread, updated CGMiner and took the chance to update the video drivers; on a new attempt I can now adjust the speed of one of the cards to just about any value, but not the other.

I saw a mention that this could be driver related, but these cards are twins and have always worked well before; the problem appeared even before I updated the drivers.

Any help?
full member
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
Do you guys argue like this just to keep CGMiner at the top of the thread list? You sound like the two sides of a Miller Lite commercial; crowd A: Great Taste! Crowd B: Less Filling! Then it starts to look like a fight is about to break out over different perspectives on the same beer.  Roll Eyes
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
... and if anyone didn't realise:
If you use the api command "devdetails" it will tell you which "Kernel" it is using and the "Model" for each gpu
You forgot to mention that I wrote this: both the back-end code shared by CGMiner and BFGMiner, and the original implementation of "devdetail", which you rejected so you could rewrite it less efficiently...
Firstly, why do I need to mention what small parts of api.c you wrote?
(Especially after you did the back-end refactor of cgminer but didn't bother to fix the problems that caused in api.c - and you still have one I can see - line 917, see below)

Secondly:

22:55 < luke-jr> kanoi: in other words, you want to be able to pull api.c out and put it with any other cgminer version?
22:56 < kanoi> sort of - but the reverse of that - I want cgminer version changes to minimise any effect on the api
22:56 < kanoi> (the reports)
22:57 < kanoi> also for your new info - that should be its own 'report'
22:57 < kanoi> since it never changes, resending it every time is a waste
22:57 < kanoi> like the 'notify' - not always needed (but notify does change)
22:58 < kanoi> but in the case of the extra info - not needed more than once (unless the target forgets it)
22:58 < luke-jr> I suppose.
22:58 < kanoi> so that would be a new devs style command that just returns that extra new info
22:59 < luke-jr> what do you propose?
22:59 < luke-jr> "devdetail" ?
23:00 < kanoi> probably - sounds OK I guess

Hmm - so (as can be seen above) I told you to move the extra info to its own command rather than have it the way you designed it, sending that extra static, never-changing info EVERY time you request 'devs', 'gpu|N' or 'pga|N' - and that was less efficient?
And you did make that change, as I said to.

The main difference I can see in the code is that you moved the GPU-specific information OUT of api.c and instead call append_kv over and over again to append that data on the end, rather than where it was before (miner.php looks crap in your version now), and instead of using a single (faster) print command.
Also, the devdetails command removes fields that are blank, so the external code processing the API output has to check for missing fields (a sketch of what that means for a client is below).
You should correct line 917 the way I said it should be (and as it is in my version).
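To illustrate the point about missing fields, here is a minimal sketch of the kind of defensive lookup a client now has to do. It assumes the usual pipe/comma-separated key=value reply format and the default API host/port; the field name is just an example:
Code:
#!/bin/bash
# Pull one named field out of each section of a 'devdetails' reply,
# printing a fallback when the field has been omitted for that section.
API_HOST=127.0.0.1
API_PORT=4028
FIELD=${1:-Kernel}    # example field; pass another name as $1

echo -n "devdetails" | nc "$API_HOST" "$API_PORT" | tr '|' '\n' | \
while read -r section ; do
    [ -z "$section" ] && continue
    value=$(echo "$section" | tr ',' '\n' | awk -F= -v f="$FIELD" '$1 == f { print $2 }')
    echo "${value:-<missing>}"
done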
Hmm - and you send your "get_extra_device_detail(cgpu)" both with the 'devs' command and the 'devdetail' command - i.e. duplicating it.

The git "blame" page for api.c ... well anyway Smiley

Please stop trying to make it seem like there is some advantage to your clone (with not many changes) in this thread - especially when you've given an example that's not even true.
Go praise your miner in your own thread, where your acolytes are less discerning than the folk here.
full member
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
I'd like to reduce stales/DOA for p2pool, rejected shares and Longpolling (hardware errors are zero). Is this more a combination of engine/mem, gpu threads, intensity, vectors, and worksize rather than a specific kernel?
There is only one phatk in cgminer and it's called -k phatk, and it's actually phatk2.2 as you would know it. If you do not specify anything, it is chosen by default on pretty much all 5X and 6X cards with any SDK before 2.6. If you have 5X cards, STICK TO AN OLDER SDK, 2.1, 2.4 or 2.5 and let it choose phatk.
Thanks. Maybe I got confused because someone said you must use the phatk2 kernel in cgminer, and because of the 3 .bin files labelled phatk120223bart.....8l8.bin and the 1 phatk .cl file; with 4 phatk files present, I assumed I could migrate to another kernel that might offer different performance.

To do the best with p2pool, read the readme.
But are intensity and threads the only factors that affect how many BTC my system will generate with p2pool? Can a particular mem clock frequency affect how much BTC is produced on p2pool?
hero member
Activity: 807
Merit: 500
- Use longpolls from backup pools with failover-only enabled just to check for
block changes, but don't use them as work.
I don't have enough data on it to be definitive yet, but I'm wondering if this didn't slightly lower my Utility.  I am mining on a merged pool that sometimes uses submitold, but have a backup pool that doesn't.  Both use merged mining, and I am using --failover-only.  So, I'm wondering: when a pool other than 0 sends an LP before pool 0, is pool 0's work discarded even though it might still be good?  It seems to me like it would be, and then for the time between the LP on the backup pool and the LP on the primary pool, work wouldn't be done.  This may not be true, because it may immediately request new work that may then also be discarded when my pool does its LP (I have seen this take 20 seconds with no share submitted, but sometimes it takes significantly longer than that to find a share at ~318 MH/s).  That having been said, I am asking because my U is at 4.35 (which rounds to 4.4 in the main stats, over only 6125 shares) where it was at 4.41 before (over tens of thousands of shares over weeks of work).  Obviously we are only talking about a difference of .06 in my U, which may be statistically insignificant, but it is still potentially 1.5% fewer shares being submitted over >12 hours.
sr. member
Activity: 349
Merit: 250
I don't know what to tell you ... it didn't work before, now it works with the GPU disabled.  Is there a reason why BFLs are not auto-detected like they are in Ufasoft's miner?
Ufasoft doesn't really autodetect, it just spams every serial port with a probe every few seconds. I'm working on a proper autodetect for Windows.

That would be fantastic.

Possible to detect if one has throttled and/or a way to know which one is which? I have a number of them and one is running a bit hot. I have no way of figuring out which one it is except removing them all and trying them one at a time.
I have found that the easiest way to identify a particular Single is to disable it and then re-enable it using kano's API calls. When pgadisable is sent to a particular device, the red LED on the right side (when looking at the front panel status LEDs) will toggle off. Then send pgaenable to the same device to confirm the light is back on. A simple bash script follows:
Code:
#!/bin/bash
# Toggle a BFL device via the cgminer API so its status LED identifies it.
BFLHOST=192.168.0.199   # host running cgminer with the API enabled
BFLPORT=4028            # default cgminer API port
if [ $# -eq 2 ] ; then
    if [ "$1" = "d" ] ; then
        # Disable the device: the red status LED toggles off
        echo -n "pgadisable|$2" | nc "$BFLHOST" "$BFLPORT" | awk 'BEGIN { FS=","; } ; { for (i=1;i<=NF;i++) { print $i } exit; }'
    elif [ "$1" = "e" ] ; then
        # Re-enable the device: the LED comes back on
        echo -n "pgaenable|$2" | nc "$BFLHOST" "$BFLPORT" | awk 'BEGIN { FS=","; } ; { for (i=1;i<=NF;i++) { print $i } exit; }'
    fi
else
    echo "Enable/Disable fpga"
    echo " arg1: e or d - enable/disable a device"
    echo " arg2: device number to enable/disable"
fi
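If the script above is saved as, say, bfltoggle.sh (the name is just an example) and made executable, identifying device 0 is then just:
Code:
./bfltoggle.sh d 0    # red LED on device 0 toggles off
./bfltoggle.sh e 0    # and back on again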
legendary
Activity: 2576
Merit: 1186
... and if anyone didn't realise:
If you use the api command "devdetails" it will tell you which "Kernel" it is using and the "Model" for each gpu
You forgot to mention that I wrote this: both the back-end code shared by CGMiner and BFGMiner, and the original implementation of "devdetail", which you rejected so you could rewrite it less efficiently...
donator
Activity: 1218
Merit: 1080
Gerald Davis
... and if anyone didn't realise:
If you use the api command "devdetails" it will tell you which "Kernel" it is using and the "Model" for each gpu

Nice.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
... and if anyone didn't realise:
If you use the api command "devdetails" it will tell you which "Kernel" it is using and the "Model" for each gpu
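For anyone who wants to try that from a shell, here is a minimal sketch in the style of the nc scripts posted in this thread. It assumes cgminer was started with --api-listen and the API is reachable on the default port 4028; adjust host/port to suit:
Code:
# Ask cgminer which kernel and model each GPU reports via 'devdetails',
# splitting the pipe/comma-separated reply onto one field per line.
echo -n "devdetails" | nc 127.0.0.1 4028 | tr '|,' '\n\n' | grep -E '^(Kernel|Model)='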
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
/**FPGAs will not take over once I release my Zero-Point Energy Generator. The energy it produces will be free, but the device is gonna cost you.  Cool Cheesy

Someone suggested using phatk2 with 5800 and 5900 cards, but when I designate '-k phatk2' cgminer says 'you can't do that'.
Using search on this thread I noticed ckolivas states cgminer automatically chooses which phatk kernel to use, because there are several available I guess. Also, when I check the config file after creating it while running, it isn't obvious which phatk is being used; it just lists 'phatk'.

Do I use '--verbose -T' to check which phatk kernel is in use?

When starting from the command line (not -c), can I specify which specific phatk kernel to use instead of letting it choose automatically?

Does it matter which SDK is in use to get the benefits suggested for phatk2?

I'd like to reduce stales/DOA for p2pool, rejected shares and Longpolling (hardware errors are zero). Is this more a combination of engine/mem, gpu threads, intensity, vectors, and worksize rather than a specific kernel?
There is only one phatk in cgminer and it's called -k phatk, and it's actually phatk2.2 as you would know it. If you do not specify anything, it is chosen by default on pretty much all 5X and 6X cards with any SDK before 2.6. If you have 5X cards, STICK TO AN OLDER SDK, 2.1, 2.4 or 2.5 and let it choose phatk. To do the best with p2pool, read the readme.
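If anyone wants to confirm what got picked on their own rig, a hedged example follows; the pool URL and credentials are placeholders. With no -k given, cgminer chooses the kernel itself, and the -T/--verbose combination asked about above keeps the startup log (which should name the kernel as it initialises each GPU thread) easy to read. The 'devdetails' API command mentioned elsewhere in this thread will also report the Kernel per GPU.
Code:
# Placeholder pool/worker details; no -k, so the kernel is auto-chosen.
cgminer -o http://pool.example.com:8332 -u myworker -p mypass --text-only --verbose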
full member
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
/**FPGAs will not take over once I release my Zero-Point Energy Generator. The energy it produces will be free, but the device is gonna cost you.  Cool Cheesy

Someone suggested using phatk2 with 5800 and 5900 cards, but when I designate '-k phatk2' cgminer says 'you can't do that'.
Using search on this thread I noticed ckolivas states cgminer automatically chooses which phatk kernel to use, because there are several available I guess. Also, when I check the config file after creating it while running, it isn't obvious which phatk is being used; it just lists 'phatk'.

Do I use '--verbose -T' to check which phatk kernel is in use?

When starting from the command line (not -c), can I specify which specific phatk kernel to use instead of letting it choose automatically?

Does it matter which SDK is in use to get the benefits suggested for phatk2?

I'd like to reduce stales/DOA for p2pool, rejected shares and Longpolling (hardware errors are zero). Is this more a combination of engine/mem, gpu threads, intensity, vectors, and worksize rather than a specific kernel?
hero member
Activity: 868
Merit: 1000
makes you look like an immature whining fool ...

Maybe to the uninitiated. Stick around for a while and you'll see, but please refrain from discouraging my daily entertainment. Thank you Smiley

Kano & Luke-JR are like Statler & Waldorf (the two grumpy old men from the Muppet Show)... I think they would get along very well IRL Wink
legendary
Activity: 1316
Merit: 1005
makes you look like an immature whining fool ...

Maybe to the uninitiated. Stick around for a while and you'll see, but please refrain from discouraging my daily entertainment. Thank you Smiley
member
Activity: 107
Merit: 10
makes you look like an immature whining fool ... completely devalues anything technical that you have to add
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Hey Kano, why the name calling?
New around here I guess?
He is annoying - very.
That's my polite name calling Smiley

From: Kano the cry baby Smiley (<- as you called me before Cheesy )
member
Activity: 107
Merit: 10
Hey Kano, why the name calling?
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
...
Edit: for auto-detection, of course, ZTX is always auto
ICA never does auto
BFL does auto only on Linux and only when you specify -S auto - however it has 2 methods (a quick way to eyeball method 1 by hand is sketched after this list):
 1) it looks in /dev/serial/by-id (which on some Linux versions can show only 1 BFL even if you have more than one)
 2) it uses libudev to check the USB Model, when libudev is compiled in
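For example, eyeballing method 1 by hand looks something like this - the "BitFORCE" name match is an assumption on my part, so check what your distro actually lists there:
Code:
# List serial devices by their persistent ids and pick out the BFL units.
ls -l /dev/serial/by-id/ | grep -i bitforce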
I added that on to the end of my post at the bottom of the last page, but I guess I was slow doing it Smiley
So there it is again Smiley

Edit:
I'll be getting a BFL soon (bought it today from someone else here in the forum)

I can see a clear path to doing proper auto-detection so I'll do that after I get it.
Basically, use nelisky's code in ZTX to detect BFLs the same way.

If Mr annoying wants to do it before me - feel free to do it Smiley
I'm just not sure if all BFLs identify themselves the same way, so I will need some feedback in that area when I get to it.

Then that would also be a neat way to do pseudo-auto-detection on Icarus
(same way, but it could get false positives, so it will still need the quick 0.1s hash test as well)