
Topic: Radeonvolt - HD5850 reference voltage tweaking and VRM temp. display for Linux - page 2. (Read 27976 times)

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
EDIT: Also, not sure if it's relevant, but using Linuxcoin (which uses an older version of the Catalyst driver, as far as I'm aware), I'm able to set both core and memory clock to any value I like on both my 5870 and 5770. Not sure if it's related to the Catalyst version or to the card models (XFX and Sapphire, respectively).
Some very early drivers had limits on what you could try to change, but anything 11.4+ on Windows and 11.6+ on Linux has none. 5xxx cards are much more accepting of changes than 6xxx/7xxx, though.
legendary
Activity: 980
Merit: 1008

Thanks for the commitment; any support will help. I think most Linux fellows sooner or later feel this urge to move back to Windows, be it because LibreOffice can't open some DOCX or because your favourite game doesn't work under WINE, right? Sad

[...]
FYI, Office 2007 works fine under Linux using the latest WINE.
http://imgur.com/dtkXz

EDIT: Also, not sure if it's relevant, but using Linuxcoin (which uses an older version of the Catalyst driver, as far as I'm aware), I'm able to set both core and memory clock to any value I like on both my 5870 and 5770. Not sure if it's related to the Catalyst version or to the card models (XFX and Sapphire, respectively).
hero member
Activity: 714
Merit: 500
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
To resolve the confusion here: my primary goal was not to further OC the cards to squeeze out their last kH/s, but to maximize H/J, which is not possible under Linux given the max delta between mem and engine clocks Con is describing.


Con, you're often not happy with AMD's Linux drivers (who is?), but you'd also agree it's better to live with limitations on the safe side than to have the freedom to kill miners' cards, right?
Indeed, but I'm not advocating changes to raise voltage and engine clock speed further; there is no apparent limit to how high you can set the engine clock with just the ADL support. I want to lower memory clock speed and voltage. I can't say I've heard of underclocking or undervolting harming hardware.
hero member
Activity: 518
Merit: 500
I still see no resolution for reading VRM temperatures under Linux.

I have modified this radeonvolt and it still only lists the core temperatures, not the VRM temperatures.

All reference cards, too.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
(OT: Hell, not long ago I bought standby killers to turn the TV off overnight instead of letting it consume 5 W in standby mode; now I'm burning kilowatts 24/7 Undecided. Different story.)

New TVs typically use a watt or less on standby, which you exchange for instant-on and less wear on the parts. Disabling standby will just kill your TV faster, which is more expensive than the electricity it is "wasting".
donator
Activity: 919
Merit: 1000
To resolve the confusion here: my primary goal was not to further OC the cards to squeeze out their last kH/s, but to maximize H/J, which is not possible under Linux given the max delta between mem and engine clocks Con is describing.

Patching the BIOS to surpass the absolute max ranges is fine for pushing a card to the limit (which I won't do any more after bricking a 6950 trying to unlock it Embarrassed); the 7970s I recently added to my rig don't really need patching - they just run fine with cgminer (see [1]). But reading that people are able to reduce energy consumption by 20% by lowering memclock and core voltage makes me want to go back to Windows.
(OT: Hell, not long ago I bought standby killers to turn the TV off overnight instead of letting it consume 5 W in standby mode; now I'm burning kilowatts 24/7 Undecided. Different story.)

My pragmatic idea was to record the I2C commands issued by Afterburner when controlling popular mining cards and build up a library for directly accessing the controller chips (like radeonvolt does for the VT1165). But thinking further, with that lib you'd give users the perfect tool to fry their cards. Countermeasures (like allowing it only to reduce values) are not applicable in the open source world; we'll soon have folks yelling at cgminer/Linux for bricking their cards.

Con, you're often not happy with AMD's Linux drivers (who is?), but you'd also agree it's better to live with limitations on the safe side than to have the freedom to kill miners' cards, right?


OP, sorry for hijacking this thread. Closing here.


[1] https://bitcointalksearch.org/topic/m.824652
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
cgminer is limited by what the BIOS will accept via the driver. Often that is -way- outside the reported "safe range" that the ATI Display Library tells it. cgminer will let you happily ignore the safe range and set whatever you like. Some cards respond to that, some don't, ignoring the values you pass. On my cards I can overclock the engine to any value I like, and the same with the memory. But try to set the memory more than 125 MHz below the engine and it ignores it (6970). It also happily ignores -any- voltage setting I pass. On the other hand, flash the BIOS on those cards and you can set whatever you like via the ATI Display Library and therefore cgminer. The other tools that hack via I2C and such are so device- and OS-dependent that they'd be a nightmare to write in a general fashion that could be included in cgminer. Sure, if someone else did the code, I'd include it. But short of having one of each card, and every possible OS to test on, I cannot write the code myself.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Thanks for the clarification.

Whether ADL limits the ranges or the BIOS does, the effect remains the same: full control is only possible by bypassing the AMD-provided interfaces and accessing the hardware directly (please correct me if I'm wrong in assuming those controller chips are I2C-accessible).

Well, not exactly. I, for example, was able to raise the stock voltage on my 5970s by modifying the BIOS. Had the limit been enforced by ADL, I would have had no options.

Quote
I doubt that MSI as a manufacturer had to reverse engineer to get Afterburner done, but the GPU-Z folks surely had to. That's why I proposed the social engineering approach, i.e. maybe some guys from the OC scene are also bitcoiners with access to specs or source code and willing to share. Maybe you, as one of the most technically competent bitcoiners, are the one?

I don't think so. GPU-Z is useful because nobody else can do it. The author has indicated he has absolutely no interest in ever providing a GPU-Z for Linux. He has also indicated he will never release the source code to allow anyone else to write it. I don't have a link, as I researched it well over a year ago, and when I saw that I was like "ok, guess it won't be happening". Yes, a very "non-open" attitude, but open source isn't embraced by all software developers.
donator
Activity: 919
Merit: 1000
Thanks for the clarification.

Whether ADL limits the ranges or the BIOS does, the effect remains the same: full control is only possible by bypassing the AMD-provided interfaces and accessing the hardware directly (please correct me if I'm wrong in assuming those controller chips are I2C-accessible).

I doubt that MSI as a manufacturer had to reverse engineer to get Afterburner done, but the GPU-Z folks surely had to. That's why I proposed the social engineering approach, i.e. maybe some guys from the OC scene are also bitcoiners with access to specs or source code and willing to share. Maybe you, as one of the most technically competent bitcoiners, are the one?
donator
Activity: 1218
Merit: 1079
Gerald Davis
I think you are confusing two things.

ADL doesn't access low-level components. Not on the 7000 series, not on the 5000 series, not on Windows, not on Linux.
ADL simply ASKS the BIOS to make a change. The BIOS is free to ignore that request (and routinely does). Even the return value is nearly worthless: success simply means the request was received by the card ("please set voltage to 10,000 V" -> "success").

cgminer -> ADL -> GPU BIOS -> low level hardware.
ADL functionality is not materially different under Linux compared to Windows and is completely BIOS dependent.

Sadly, AMD has crippled ADL access, so various utilities completely bypass the ADL and directly read/write the underlying hardware. IMHO AMD's restrictive ADL defeats the entire purpose. Since it is so painfully limited (for frack's sake, it doesn't even provide all GPU temp values), third parties go around the entire system and write directly to the hardware, which is far more dangerous than simply providing an unlocked ADL library.

Support and capabilities of those tools (GPU-Z, Afterburner, radeonvolt, etc.) are limited to what has been manually hacked together and reverse engineered, as they totally bypass all of AMD's drivers and libraries.
donator
Activity: 919
Merit: 1000
After finding no valid solution for Linux I just flashed all my cards with a custom BIOS using RBE. Granted, that is not an attractive option for everyone, but just pointing out that an option does exist.

*D&T is not responsible for any bricked cards resulting from flawed BIOS installs.

Are you saying the limitations under Linux can be bypassed just by patching the BIOS? I understood that the values kept in the BIOS are the absolute min/max ratings, but that ADL imposes additional relative limitations (like delta(core, mem) <= 150 MHz on Tahiti).

The other thing is that Linux ADL functionality is probably years behind what's doable in the Windows world (not because ATI is more active there, but because manufacturers provide low-level access). Access to the 79xx VRM controller might not find its way into ADL while those cards are still in use for mining Sad

Therefore an I2C sniffer would be a valuable one-time investment to port low-level control functionality over to Linux. Since this should be a commonly useful tool for devs, I was hoping to find some existing sniffers or bus loggers. No luck so far...
donator
Activity: 1218
Merit: 1079
Gerald Davis
After finding no valid solution for Linux I just flashed all my cards with a custom BIOS using RBE. Granted, that is not an attractive option for everyone, but just pointing out that an option does exist.

*D&T is not responsible for any bricked cards resulting from flawed BIOS installs.
donator
Activity: 919
Merit: 1000
Any supporters?
4 BTC committed, it's 2012, and this issue is getting more and more ridiculous.

I'm on the verge of going back to Windows completely.

My Sapphire 5850s do 960 MHz on Windows and 840 MHz under Linux.
Thanks for the commitment; any support will help. I think most Linux fellows sooner or later feel this urge to move back to Windows, be it because LibreOffice can't open some DOCX or because your favourite game doesn't work under WINE, right? Sad

We should at least try to make miners equally happy with Linux.

As per my understanding this will only do wonders for the 7970s and NOT the 5xxx cards.

I have some 5xxx cards myself so I would LOVE for something better than this crappy software.

Even after modifications to the source code I cannot get it to report proper VRM temperatures.

Have not tried voltage adjustments, but I bet those don't work either.

No; if we succeeded in reverse engineering the thing, we would have a means to gradually add support for any card that is supported by the Windows tools. So assume the outcome of this effort was an I2C sniffer and you need to get your card supported. You'd just switch to Windows, run the sniffer, set some parameters with Afterburner and collect the commands sent over the bus. If the communication is not encrypted or intentionally crippled, you just take the command sequence over to the Linux library and voila - your card is supported.

That's glossing over the minor details Wink The first step would be to hook into Afterburner and log its access to the I2C display adapter interface (see [1]). Interested?


[1] http://msdn.microsoft.com/en-us/library/windows/hardware/ff567381%28v=vs.85%29.aspx
hero member
Activity: 518
Merit: 500
Any supporters?
4 BTC committed, it's 2012, and this issue is getting more and more ridiculous.

I'm on the verge of going back to Windows completely.

My Sapphire 5850s do 960 MHz on Windows and 840 MHz under Linux.

As per my understanding this will only do wonders for the 7970s and NOT the 5xxx cards.

I have some 5xxx cards myself so I would LOVE for something better than this crappy software.

Even after modifications to the source code I cannot get it to report proper VRM temperatures.

Have not tried voltage adjustments, but I bet those don't work either.
member
Activity: 66
Merit: 10
Any supporters?
4 BTC committed, it's 2012, and this issue is getting more and more ridiculous.

I'm on the verge of going back to Windows completely.

My Sapphire 5850s do 960 MHz on Windows and 840 MHz under Linux.
donator
Activity: 919
Merit: 1000
Folks,

sorry to bump this almost-dead thread.

It really bothers me to be restricted from fully controlling my GPUs when driving them with Linux. Most serious miners should agree that operating their GPUs with cgminer under Linux is the ideal setup (free, robust, headless, efficient, etc.) -- if only one could control clocks and voltages as freely as is doable with the Windows tools.

Right now cgminer uses the ADL library to control those values, but it is restricted to what AMD deems "sane" values, e.g. a max clock delta of 125 MHz between core and mem clocks for 69xx and 150 MHz for 79xx. From the manufacturer's perspective there surely are good reasons to restrict users' access to the controlling chips and prevent them from frying their cards, while OTOH it is completely insane to burn significantly more energy for mining on Linux than on Windows.

The latest Tahiti cards are equipped with the CHiL CHL8228G VRM. MSI Afterburner and GPU-Z added support for the latest AMD cards soon after those devices appeared. For Linux we'd need the same direct I2C support for all the controlling chips. Sadly, for that VRM only a product brief is available at [1]. I poked around, and it is not possible to get the full datasheet, even under NDA. Obviously MSI as a card manufacturer has access to the specs to support it in Afterburner, but equally obviously the GPU-Z folks had to reverse engineer their support.

It is illusory to assume Linux folks will get access to the specs, even if we hint that bitcoin miners love ATI cards and promise to be very careful (it's just too dangerous to give users access at that HW level). Therefore, reverse engineering is the way we must take. We could:
  • reverse engineer MSI Afterburner or GPU-Z to unveil the command sequences required for full control
  • hook into the Windows I2C driver and trace the I2C commands issued
  • social-engineer it (find someone at ATI research, MSI, CHiL, etc.)

The first two approaches require deep Windows system knowledge, plus some cracking capabilities for the first or some DDK experience for the second (assuming there are no I2C sniffers already available).

My active Windows times passed long ago, but I know that this is a many-weeks job - no way to get the effort compensated by bounties or crowdfunding. Anyone willing to support this needs to do it for the glory (and you'd help save the world by greatly reducing mining energy consumption Wink). Ideally we should end up with an ADL replacement that gives unrestricted access to the controlling chips. BTW, my own capabilities are limited to the Linux side, i.e. as soon as I get the specs I would work out a library to be included in cgminer (if the OP or runeks don't want to take the glory).

Any supporters?


[1] http://www.irf.com/product-info/datasheets/data/pb-chl8225.pdf
hero member
Activity: 518
Merit: 500
It seems that this does not report VRM temps on a reference 5870 at all.

The values are too close to core temp when VRMs clearly run at 90 degrees or so.

How can I modify the code so that it reports correctly?

I already modified 1002:6899 to 1002:6898 so that it works with my 5870 and not only 5850s.

Too sad that development is dead because this really could have been heaven for Linux miners like myself !

Anyone managed to solve this issue yet?

It really is bothering me that I cannot for the life of me read those VRM temperatures.

How can the core be at 70 and the VRM at just 76, when it clearly should be around 90?

Thanks !

I will try runeks' fork ASAP, but I doubt that will solve it.

As said before, ALL are 100% REFERENCE ATI cards with Volterra chips (I opened them up to check), so they should work perfectly; but since this was designed for reference 5850s, it does not work properly.

AFAIK the 5870s have 4 phases and the 5850s only 3, so there should also be an extra VRM phase that is not getting reported at all.

I really want to avoid Windblows if possible Undecided
legendary
Activity: 980
Merit: 1008
Which Sapphire card do you have? This one? http://www.newegg.com/Product/Product.aspx?Item=N82E16814102883
In any case, it looks like the VRM on that card isn't the Volterra chip that is controllable from software.
sr. member
Activity: 362
Merit: 250
fork:

Code:
kanotix@Kanotix:~/runeksvendsen-radeonvolt-1e7abec$ sudo ./radeonvolt

Device [02]: Cypress [Radeon HD 5800 Series]
             PC Partner Limited

Unsupported i2c device (1a)

Thank you anyway.