Author

Topic: Radeonvolt - HD5850 reference voltage tweaking and VRM temp. display for Linux (Read 27922 times)

Led
newbie
Activity: 28
Merit: 0
Yea, the original author who wrote this only put in support for the vt1165 voltage regulator, and the 6990s don't have that. My 5970s and 5870s have it. Basically, if you get "Unsupported i2c device (00)," then your card has a different VRM and won't work with this (unless there is documentation somewhere showing how to access those VRMs in a C program like the 1165, then we could add in support).

So, to clarify, if you run radeonvolt and you get NO output at all, the changes I made to enum_cards should track down that particular problem. If you get "unsupported i2c device," the changes I am making aren't going to help for now.

Gotcha. From what I have read, 6990s use the VT1556, so we would need support for that. I'll do some digging...

Dig faster. I need support for R9 290s.
sr. member
Activity: 362
Merit: 250
full member
Activity: 133
Merit: 100
Yea, the original author who wrote this only put in support for the vt1165 voltage regulator, and the 6990s don't have that. My 5970s and 5870s have it. Basically, if you get "Unsupported i2c device (00)," then your card has a different VRM and won't work with this (unless there is documentation somewhere showing how to access those VRMs in a C program like the 1165, then we could add in support).

So, to clarify, if you run radeonvolt and you get NO output at all, the changes I made to enum_cards should track down that particular problem. If you get "unsupported i2c device," the changes I am making aren't going to help for now.

Gotcha. From what I have read, 6990s use the VT1556, so we would need support for that. I'll do some digging...
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
Yea, the original author who wrote this only put in support for the vt1165 voltage regulator, and the 6990s don't have that. My 5970s and 5870s have it. Basically, if you get "Unsupported i2c device (00)," then your card has a different VRM and won't work with this (unless there is documentation somewhere showing how to access those VRMs in a C program like the 1165, then we could add in support).

So, to clarify, if you run radeonvolt and you get NO output at all, the changes I made to enum_cards should track down that particular problem. If you get "unsupported i2c device," the changes I am making aren't going to help for now.
full member
Activity: 133
Merit: 100
Here is the output from my dual 6990 rig:

Code:
TSB43AB23 IEEE-1394a-2000 Controller (PHY/Link) --> cls [0xc00]
IT8213 IDE Controller --> cls [0x101]
RTL8111/8168B PCI Express Gigabit Ethernet controller --> cls [0x200]
RTL8111/8168B PCI Express Gigabit Ethernet controller --> cls [0x200]
88SE9128 PCIe SATA 6 Gb/s RAID controller --> cls [0x106]
JMB362/JMB363 Serial ATA Controller --> cls [0x101]
JMB362/JMB363 Serial ATA Controller --> cls [0x106]
uPD720200 USB 3.0 Host Controller --> cls [0xc03]
Device aa80 --> cls [0x403]
Antilles [AMD Radeon HD 6990] --> cls [0x380] size 268435456 0 131072
Device aa80 --> cls [0x403]
Antilles [AMD Radeon HD 6990] --> cls [0x300] size 268435456 0 131072
PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch --> cls [0x604]
PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch --> cls [0x604]
PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch --> cls [0x604]
Device aa80 --> cls [0x403]
Antilles [AMD Radeon HD 6990] --> cls [0x380] size 268435456 0 131072
Device aa80 --> cls [0x403]
Antilles [AMD Radeon HD 6990] --> cls [0x300] size 268435456 0 131072
PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch --> cls [0x604]
PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch --> cls [0x604]
PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch --> cls [0x604]
5 Series/3400 Series Chipset 2 port SATA IDE Controller --> cls [0x101]
5 Series/3400 Series Chipset SMBus Controller --> cls [0xc05]
5 Series/3400 Series Chipset 4 port SATA IDE Controller --> cls [0x101]
5 Series Chipset LPC Interface Controller --> cls [0x601]
82801 PCI Bridge --> cls [0x604]
5 Series/3400 Series Chipset USB2 Enhanced Host Controller --> cls [0xc03]
5 Series/3400 Series Chipset USB Universal Host Controller --> cls [0xc03]
5 Series/3400 Series Chipset USB Universal Host Controller --> cls [0xc03]
5 Series/3400 Series Chipset USB Universal Host Controller --> cls [0xc03]
5 Series/3400 Series Chipset USB Universal Host Controller --> cls [0xc03]
5 Series/3400 Series Chipset PCI Express Root Port 8 --> cls [0x604]
5 Series/3400 Series Chipset PCI Express Root Port 7 --> cls [0x604]
5 Series/3400 Series Chipset PCI Express Root Port 6 --> cls [0x604]
5 Series/3400 Series Chipset PCI Express Root Port 5 --> cls [0x604]
5 Series/3400 Series Chipset PCI Express Root Port 4 --> cls [0x604]
5 Series/3400 Series Chipset PCI Express Root Port 1 --> cls [0x604]
5 Series/3400 Series Chipset High Definition Audio --> cls [0x403]
5 Series/3400 Series Chipset USB2 Enhanced Host Controller --> cls [0xc03]
5 Series/3400 Series Chipset USB Universal Host Controller --> cls [0xc03]
5 Series/3400 Series Chipset USB Universal Host Controller --> cls [0xc03]
5 Series/3400 Series Chipset USB Universal Host Controller --> cls [0xc03]
Core Processor QPI Routing and Protocol Registers --> cls [0x880]
Core Processor QPI Link --> cls [0x880]
Core Processor Miscellaneous Registers --> cls [0x880]
Core Processor System Control and Status Registers --> cls [0x880]
Core Processor Semaphore and Scratchpad Registers --> cls [0x880]
Core Processor System Management Registers --> cls [0x880]
Core Processor PCI Express Root Port 3 --> cls [0x604]
Core Processor PCI Express Root Port 1 --> cls [0x604]
Core Processor DMI --> cls [0x600]

Device [8]: Antilles [AMD Radeon HD 6990]
Unsupported i2c device (00)


Device [7]: Antilles [AMD Radeon HD 6990]
Unsupported i2c device (00)


Device [4]: Antilles [AMD Radeon HD 6990]
Unsupported i2c device (00)


Device [3]: Antilles [AMD Radeon HD 6990]
Unsupported i2c device (53)

full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
Ok, it looks like people would rather have the thread spammed with the debugging attempts. I personally don't care; I was only trying to avoid thread clutter. That being said, for those of you who can't get radeonvolt to output anything: I put a few printf statements in the enum_cards function that should display what your Radeon cards show up as, in terms of their device classes and IO region sizes. It will print out all PCI devices (similar to lspci), so you will have to look for your cards in there and paste the relevant lines. Here's an example of the output on my Xubuntu mining rig:

Radeon HD 5870 (Cypress) --> cls [0x300] size 268435456 0 131072
Cypress HDMI Audio [Radeon HD 5800 Series] --> cls [0x403]
Radeon HD 5870 (Cypress) --> cls [0x300] size 268435456 0 131072
Hemlock [ATI Radeon HD 5900 Series] --> cls [0x380] size 268435456 0 131072
Cypress HDMI Audio [Radeon HD 5800 Series] --> cls [0x403]
Hemlock [ATI Radeon HD 5900 Series] --> cls [0x300] size 268435456 0 131072
PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch --> cls [0x604]
PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch --> cls [0x604]
PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch --> cls [0x604]
...

Basically, I need the hex number in brackets after cls, and also the numbers after size (if size shows up). The changed enum_cards function is on pastebin. All you have to do is open radeonvolt.c, replace the existing enum_cards function with the one from the pastebin link, then run make again and re-run radeonvolt.
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
Agreed, but the intermediary debugging stuff would clutter the thread up. Once I find a solution I will post the details.
hero member
Activity: 518
Merit: 500
PM'd you

I think it would be more beneficial if you told the rest of the thread people what the solution was.

I was having this same problem with 5870s. Maybe there are some other people with 5970s having problems etc.
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
sr. member
Activity: 274
Merit: 250
Code:
"radeonvolt.c" 306L, 7316C zapisano
zulus@zulus-zulus:~/ius-radeonvolt-d9e89b5$ make
gcc -O3 -Wall -c radeonvolt.c
gcc -O3 -Wall -c i2c.c
gcc -O3 -Wall -c vt1165.c
gcc -O3 -Wall -lpci -o radeonvolt radeonvolt.o i2c.o vt1165.o
zulus@zulus-zulus:~/ius-radeonvolt-d9e89b5$ sudo ./radeonvolt
[sudo] password for zulus:
zulus@zulus-zulus:~/ius-radeonvolt-d9e89b5$

1. downloaded radeonvolt for the first time
2. edited radeonvolt.c with your function
3. ran make for the first time
4. no luck :(

It's an MSI GD70 with a Sempron 145 and 2GB RAM, on Xubuntu 11.04 with Catalyst 12.2 and SDK 2.6.
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
I just provided the altered function; the rest of the file should be the same. If it's not showing anything, perhaps there are differences between operating systems/hardware causing the problem? You did run make again after saving, right? It could be that the device class has a different value. One way to narrow down the issue would be to put in some printfs and see what the output is, if you're comfortable enough with C. If not, I can paste the relevant code with printfs added.
sr. member
Activity: 274
Merit: 250
Yeah, and tell what to do to get it working :/

EDIT:
I have replaced the original enum_cards function in radeonvolt.c with the lines posted by QuantumFoam (only that function was swapped out),
but it does not show me anything.
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform


Output on my xubuntu machine:

Device [8]: Hemlock [ATI Radeon HD 5900 Series]
        Current core voltage: 1.0375 V
        Presets: 0.9500 / 1.0000 / 1.0375 / 1.0500 V
        Core power draw: 57.48 A (59.64 W)
        VRM temperatures: 57 / 61 / 60 C


Device [9]: Hemlock [ATI Radeon HD 5900 Series]
        Current core voltage: 1.0375 V
        Presets: 0.9500 / 1.0000 / 1.0375 / 1.0500 V
        Core power draw: 56.61 A (58.74 W)
        VRM temperatures: 81 / 82 / 82 C


I'm sure there's a better solution than removing the vendor_id and device_id checks; one would need to get the relevant IDs for the 5900 series.
donator
Activity: 1218
Merit: 1079
Gerald Davis
I've been messing with the source code so I can see the VRM temps in Linux for my 5970. I was able to do this by commenting out the vendor_id and device_id check, as mentioned earlier in this thread, and by modifying the code to also accept a device class of PCI_CLASS_DISPLAY_OTHER in addition to PCI_CLASS_DISPLAY_VGA. Without this second change, I was only able to see the VRM temps of one GPU on the 5970. Hope that helps other 5970 Linux users; it's a simple code change in the enum_cards function in radeonvolt.c (the first if branch below the first for loop).

Could you provide the modified code (pastebin would work fine).  I tried messing around with this but never got it working.
full member
Activity: 200
Merit: 100
|Quantum|World's First Cloud Management Platform
I've been messing with the source code so I can see the VRM temps in Linux for my 5970. I was able to do this by commenting out the vendor_id and device_id check, as mentioned earlier in this thread, and by modifying the code to also accept a device class of PCI_CLASS_DISPLAY_OTHER in addition to PCI_CLASS_DISPLAY_VGA. Without this second change, I was only able to see the VRM temps of one GPU on the 5970. Hope that helps other 5970 Linux users; it's a simple code change in the enum_cards function in radeonvolt.c (the first if branch below the first for loop).
hero member
Activity: 518
Merit: 500
An update.

I did all the proper steps and modifications to try and get this working on my reference ATI 5870s.

Will post back later with some results, but at first sight the VRM temps are too low to be real and are probably just core temps.

Runeks fork was even worse and did not even show anything other than "supported device".

Again, 100% reference ATI branded 5870s here ...

Some VRMs are higher quality and run closer to GPU temps: say the GPU temp is around 85C; the VRMs could be around 120C for older/shittier VRMs (and still be within spec for those parts), or around 85C for newer/less shitty VRMs.

Turns out you were right !

I started mining and the core on one card using a special cooler was at 39 degrees. Radeonvolt reported the VRM temps as 50 on that card so it seems to be working.

Thus, it seems like VRM temps are very close to GPU core temps. Others are having 70 core and VRM at say 90 so that is a +20 difference.

On my other cards the core and VRM are almost the same.

Does that mean that these GPUs were straight out of the factory or something?
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
An update.

I did all the proper steps and modifications to try and get this working on my reference ATI 5870s.

Will post back later with some results, but at first sight the VRM temps are too low to be real and are probably just core temps.

Runeks fork was even worse and did not even show anything other than "supported device".

Again, 100% reference ATI branded 5870s here ...

Some VRMs are higher quality and run closer to GPU temps: say the GPU temp is around 85C; the VRMs could be around 120C for older/shittier VRMs (and still be within spec for those parts), or around 85C for newer/less shitty VRMs.
hero member
Activity: 518
Merit: 500
An update.

I did all the proper steps and modifications to try and get this working on my reference ATI 5870s.

Will post back later with some results, but at first sight the VRM temps are too low to be real and are probably just core temps.

Runeks fork was even worse and did not even show anything other than "supported device".

Again, 100% reference ATI branded 5870s here ...
donator
Activity: 919
Merit: 1000
Talked to a guy who reverse-engineered WiFi chips at the register level to develop Linux drivers, and realized that it will not be easy.

The approach of hooking into the i2c communication and logging command sequences is surely the way to go. But it won't be enough to just collect the info per chip; it will most probably be required to have it per card. Setting the clocks at the controller of one card does not necessarily mean the same thing for a similar one (assembly options, scaling, offset, etc.). That's possibly the reason why bulanula can't set his params with radeonvolt.

If you run strings on the Afterburner binaries, there are IDs for supported cards; most probably they are using card-individual settings. Therefore we would basically have to rewrite AB to get reliable control over our mining cards - it is more a man-year task than a weekend's hack.

Given the remaining lifetime of mining GPUs of say 6-9 months (yeah, I see DAT around the corner asking for my cards -- cheap ;)), I'd say it does not pay off to invest that effort.
donator
Activity: 919
Merit: 1000
(OT: Hell, not long ago I bought standby-killers to turn the TV off overnight instead of letting it consume 5W in standby mode; now I'm burning kilowatts 24/7 :-\ ... different story).

New TVs typically use a watt or less on standby, which you exchange for instant-on and less wear on the parts. Disabling standby will just kill your TV faster, which is more expensive than the electricity it is "wasting".
Hi mod, OP not active for nearly a year, so it's ok to hijack his thread I guess...

True, the latest campaigns for energy efficiency and green labels really pushed the manufacturers to save energy. My current 46" plasma uses as little during operation as my previous 24" did, and only 0.3W in standby. But my other one wastes ~9W, plus 12W for the cablecom STB, just to be ready to show the news once a day - just insane! Using a standby-killer for a month I can save enough to power one of my rigs for nearly 18 hours -- insane²! (see the irony?).
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
EDIT: Also, not sure if it's relevant, but using Linuxcoin (which uses an older version of the Catalyst driver as far as I'm aware of), I'm able to set both core clock and memory clock to any value I like on both my 5870 and 5770. Not sure if it's related to the Catalyst version or if it's related to the card models (XFX and Sapphire, respectively).
Some very early drivers had limits on what you could try to change, but anything 11.4+ on Windows and 11.6+ on Linux has none. 5xxx cards are much more accepting of changes than 6xxx/7xxx though.
legendary
Activity: 980
Merit: 1008

Thanks for the commitment, any support will help. I think most Linux fellows sooner or later feel this desire to move back to Windows, be it because LibreOffice is not able to open some DOCX or your favorite game not working under WINE, right? :(

[...]
FYI, Office 2007 works fine under Linux using the latest WINE.
http://imgur.com/dtkXz

EDIT: Also, not sure if it's relevant, but using Linuxcoin (which uses an older version of the Catalyst driver as far as I'm aware of), I'm able to set both core clock and memory clock to any value I like on both my 5870 and 5770. Not sure if it's related to the Catalyst version or if it's related to the card models (XFX and Sapphire, respectively).
hero member
Activity: 714
Merit: 500
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
To resolve the confusion here: my primary goal was not to further OC the cards to squeeze out their last kH/s, but to maximize H/J, which under Linux is not possible given the max delta between mem and engine clocks Con is describing.


Con, you're often not happy with AMD's Linux drivers (who is?), but you'd also agree it's better to live with limitations on the safe side than to have the freedom to kill miners' cards, right?
Indeed, but I'm not advocating changes to raise voltage and engine clock speed further. There is no apparent limit to how high you can set the engine clock speed with just the ADL support. I want to lower memory clock speed and voltage. Can't say I've heard of underclocking or undervolting harming hardware.
hero member
Activity: 518
Merit: 500
I still see no resolution for reading VRM temperatures using Linux.

I have modified this radeonvolt and it still does not appear to list the VRM temperatures but only the core temperatures.

All reference cards, too.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
(OT: Hell, not long ago I bought standby-killers to turn the TV off overnight instead of letting it consume 5W in standby mode; now I'm burning kilowatts 24/7 :-\ ... different story).

New TVs typically use a watt or less on standby, which you exchange for instant-on and less wear on the parts. Disabling standby will just kill your TV faster, which is more expensive than the electricity it is "wasting".
donator
Activity: 919
Merit: 1000
To resolve the confusion here: my primary goal was not to further OC the cards to squeeze out their last kH/s, but to maximize H/J, which under Linux is not possible given the max delta between mem and engine clocks Con is describing.

Patching the BIOS to surpass the absolute max ranges is fine for pushing the card to the limit (which I won't do any more after bricking one 6950 trying to unlock it :-[); the 7970s I recently added to my rig do not really need patching - they just run fine with cgminer (see [1]). But reading that people are able to reduce energy consumption by 20% by lowering memclock and core voltage makes me want to go back to Windows.
(OT: Hell, not long ago I bought standby-killers to turn the TV off overnight instead of letting it consume 5W in standby mode; now I'm burning kilowatts 24/7 :-\ ... different story).

My pragmatic idea was to record the i2c commands issued by Afterburner when controlling popular mining cards and build up a library for directly accessing the controller chips (like radeonvolt does for the vt1165). But thinking further, with that lib you'd give the user the perfect tool to fry their cards. Countermeasures (like allowing it only to reduce values) are not applicable in the open source world - we'd soon have folks yelling at cgminer/Linux for bricking their cards.

Con, you're often not happy with AMD's Linux drivers (who is?), but you'd also agree it's better to live with limitations on the safe side than to have the freedom to kill miners' cards, right?


OP, sorry for hijacking this thread. Closing here.


[1] https://bitcointalksearch.org/topic/m.824652
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
cgminer is limited by what the bios will accept via the driver. Often it is -way- outside the reported "safe range" that the ATI Display Library tells it. cgminer will allow you to happily ignore the safe range and set whatever you like. Some cards respond to that, some don't, ignoring values you pass to it. On my cards I can overclock my engine to any value I like and same with the memory. But try to set the memory more than 125 below the engine it ignores it (6970). It also happily ignores -any- voltage setting I pass to it. On the other hand, flash the bios on those cards and you can set whatever you like via the ATI Display Library and therefore cgminer. The other tools that hack via i2c and stuff are so device and OS dependent that they'd be a nightmare to write in a general fashion that could be included in cgminer. Sure if someone else did the code, I'd include it. But short of having one of each card, and every possible OS to test it on, I cannot write the code myself.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Thanks for clarification.

Whether ADL is limiting the ranges or the BIOS does, the effect remains the same: getting full control is only possible by bypassing the AMD-provided interfaces and accessing the HW directly (please correct me if I'm wrong in assuming those controller chips are I2C-accessible).

Well not exactly.  I for example was able to raise the stock voltage on my 5970s by modifying the BIOS.  Had the limit been enforced by ADL I would have no options.

Quote
I doubt that MSI as a manufacturer had to reverse engineer to get Afterburner done, but the GPU-Z folks for sure had to. That's why I proposed the social engineering approach, i.e. maybe some guys from the OC scene are also bitcoiners and have access to specs or source code they're willing to share. Maybe you, as one of the technically most competent bitcoiners, are the one?

I don't think so.  GPU-Z is useful because nobody else can do it.  The author has indicated he has absolutely no interest in ever providing a GPU-Z for Linux.  He has also indicated he will never release the source code to allow anyone else to write it.  I don't have a link as I researched it well over a year ago and when I saw that I was like "ok guess it won't be happening". Yes a very "non open" attitude but open source isn't embraced by all software developers.
donator
Activity: 919
Merit: 1000
Thanks for clarification.

Whether ADL is limiting the ranges or the BIOS does, the effect remains the same: getting full control is only possible by bypassing the AMD-provided interfaces and accessing the HW directly (please correct me if I'm wrong in assuming those controller chips are I2C-accessible).

I doubt that MSI as a manufacturer had to reverse engineer to get Afterburner done, but the GPU-Z folks for sure had to. That's why I proposed the social engineering approach, i.e. maybe some guys from the OC scene are also bitcoiners and have access to specs or source code they're willing to share. Maybe you, as one of the technically most competent bitcoiners, are the one?
donator
Activity: 1218
Merit: 1079
Gerald Davis
I think you are confusing two things.

ADL doesn't access low-level components.  Not on the 7000 series, not on the 5000 series, not on Windows, not on Linux.
ADL simply ASKS the BIOS to make a change.  The BIOS is free to ignore that request (and routinely does). Even the return value is nearly worthless (success simply means the ADL request was received by the card: "please set voltage to 10,000V" ... "success").

cgminer -> ADL -> GPU BIOS -> low level hardware.
ADL functionality is not materially different under Linux compared to Windows and is completely BIOS dependent.

Sadly AMD has crippled ADL access, so various utilities completely bypass the ADL and directly read/write the underlying hardware.  IMHO AMD's restrictive ADL defeats the entire purpose.  Since it is so painfully limited (for frack's sake, it doesn't even provide all the GPU temp values), 3rd parties go around the entire system and write directly to the hardware, which is far more dangerous than simply providing an unlocked ADL library.

Support and capabilities of those tools (GPU-Z, Afterburner, Radeonvolt, etc) are limited to what has been manually hacked together and reverse engineered as it totally bypasses all of AMD drivers and libraries.
donator
Activity: 919
Merit: 1000
After finding no valid solution for Linux I just flashed all my cards with a custom BIOS using RBE.  Granted, that is not an attractive option for everyone, but just pointing out that an option does exist.

*D&T is not responsible for any bricked cards as a result of flawed BIOS installs.

Are you saying that the limitations under Linux can be bypassed just by patching the BIOS? I understood that the values kept in the BIOS are the absolute min/max ratings, but that ADL imposes additional relative limitations (like delta(core, mem) <= 150MHz on Tahiti).

The other thing is that Linux ADL functionality is probably years behind what's doable in the Windows world (not because ATI is more active there, but because manufacturers provide low-level access). Access to the 79xx VRM controller might not find its way into ADL within the mining lifetime of those cards :(

Therefore an I2C sniffer would be a valuable one-time investment to port low-level control functionality over to Linux. Since this should be a commonly useful tool for devs, I was hoping to find some existing sniffers or bus loggers. No luck so far...
donator
Activity: 1218
Merit: 1079
Gerald Davis
After finding no valid solution for Linux I just flashed all my cards with a custom BIOS using RBE.  Granted, that is not an attractive option for everyone, but just pointing out that an option does exist.

*D&T is not responsible for any bricked cards as a result of flawed BIOS installs.
donator
Activity: 919
Merit: 1000
Any supporters?
4 BTC committed, 2012... and this issue is getting more and more ridiculous.

I'm on the verge of going back to Windows completely.

My Sapphire 5850s do 960MHz on Windows and 840MHz under Linux.
Thanks for the commitment, any support will help. I think most Linux fellows sooner or later feel this desire to move back to Windows, be it because LibreOffice is not able to open some DOCX or your favorite game not working under WINE, right? :(

We should at least try to make miners equally happy with Linux.

As per my understanding this will only do wonders for the 7970s and NOT the 5xxx cards.

I have some 5xxx cards myself so I would LOVE for something better than this crappy software.

Even after modifications to the source code I cannot get it to report proper VRM temperatures.

Have not tried voltage adjustments, but I bet those don't work either.

No, if we succeeded in reverse engineering the thing, we would have some means to gradually add support for any card that is supported by the Windows tools. So assume the outcome of this effort was an I2C sniffer and you need to get your card supported. You'd just have to switch to Windows, run the sniffer, set some parameters with Afterburner and collect the commands sent over the bus. If the communication is not encrypted or intentionally crippled, you just take the command sequence over to the Linux library and voila - your card is supported.

That's glossing over the minor details ;) The first step would be to hook into Afterburner and log its access to the I2C display adapter interface (see [1]). Interested?


[1] http://msdn.microsoft.com/en-us/library/windows/hardware/ff567381%28v=vs.85%29.aspx
hero member
Activity: 518
Merit: 500
Any supporters?
4 BTC committed, 2012... and this issue is getting more and more ridiculous.

I'm on the verge of going back to Windows completely.

My Sapphire 5850s do 960MHz on Windows and 840MHz under Linux.

As per my understanding this will only do wonders for the 7970s and NOT the 5xxx cards.

I have some 5xxx cards myself so I would LOVE for something better than this crappy software.

Even after modifications to the source code I cannot get it to report proper VRM temperatures.

Have not tried voltage adjustments, but I bet those don't work either.
member
Activity: 66
Merit: 10
Any supporters?
4 BTC committed, 2012... and this issue is getting more and more ridiculous.

I'm on the verge of going back to Windows completely.

My Sapphire 5850s do 960MHz on Windows and 840MHz under Linux.
donator
Activity: 919
Merit: 1000
Folks,

sorry to bump this almost-dead thread.

It really bothers me to be restricted from fully controlling my GPUs when driving them with Linux. Most serious miners should agree that operating their GPUs with cgminer under Linux is the ideal setup (free, robust, headless, efficient, etc.) -- if only one could control clocks and voltages as freely as is doable with the Windows tools.

Right now cgminer uses the ADL library to control those values, but it is restricted to what AMD deems 'sane' values, e.g. a max clock delta of 125MHz between core and mem clocks for 69xx and 150MHz for 79xx. From the manufacturer's perspective there are surely good reasons to restrict users' access to the controlling chips and prevent them from frying their cards, while OTOH it is completely insane to burn significantly more energy for mining on Linux than on Windows.

The latest Tahiti cards are equipped with the CHiL CHL8228G VRM. MSI Afterburner and GPU-Z recently added support for the latest AMD cards when those devices got supported. For Linux we'd need the same direct I2C support for all the controlling chips. Sadly, for that VRM only a product brief is available at [1]. I poked around and it is not possible to get the full datasheet, even under NDA. Obviously MSI as a card manufacturer has access to the specs to support it in Afterburner, but equally obviously the GPU-Z folks had to reverse engineer to get their support implemented.

It is illusory to assume Linux folks will get access to the specs, even if we hint that bitcoin miners love ATI cards and promise to be very careful (it's just too dangerous to give users access at that HW level). Therefore, reverse engineering is the way we must take. We could:
  • reverse engineer MSI Afterburner or GPU-Z to unveil the command sequences required for full control
  • hook into Windows I2C driver and trace the I2C commands issued
  • social engineering (know someone at ATI research, MSI, CHiL, etc.)

The first two approaches require deep Windows system knowledge, plus some cracking capabilities for the first approach or some DDK experience for the second (assuming there are no I2C sniffers already available).

My active times with Windows passed long ago, but I know that this is a many-weeks job - no way to get the effort compensated by bounties or crowd funding. Anyone willing to support this needs to do it for the glory (and you'd help save the world by greatly reducing mining energy consumption ;)). Ideally we should end up with an ADL replacement that gives unrestricted access to the controlling chips. BTW, my own capabilities are limited to the Linux side, i.e. as soon as I get the specs I would work out a library to be included in cgminer (if OP or runeks aren't going to take the glory).

Any supporters?


[1] http://www.irf.com/product-info/datasheets/data/pb-chl8225.pdf
hero member
Activity: 518
Merit: 500
It seems that this does not report VRM temps on a reference 5870 at all.

The values are too close to core temp when VRMs clearly run at 90 degrees or so.

How can I modify the code so that it reports correctly ?

I already modified 1002:6899 to 1002:6898 so that it works with my 5870 and not only 5850s.

It's too sad that development is dead, because this really could have been heaven for Linux miners like myself!

Has anyone managed to solve this issue yet Huh

It really bothers me that I cannot, for the life of me, read those VRM temperatures.

How can the core be around 70 and the VRM just 76 when it clearly should be around 90?

Thanks!

I will try runeks' fork ASAP, but I doubt that will solve it.

As said before, ALL of them are 100% REFERENCE ATI cards with Volterra chips (I opened them up to check), so they should work perfectly; but since this was designed for reference 5850s, it does not work properly.

AFAIK the 5870s have 4 phases and the 5850s only 3, so there should also be an extra VRM that is not being reported at all.

I really want to avoid Windblows if possible Undecided
legendary
Activity: 980
Merit: 1008
Which Sapphire card do you have? This one? http://www.newegg.com/Product/Product.aspx?Item=N82E16814102883
In any case, it looks like the VRM on the card isn't the Volterra chip that is controllable from software.
sr. member
Activity: 362
Merit: 250
fork:

Code:
kanotix@Kanotix:~/runeksvendsen-radeonvolt-1e7abec$ sudo ./radeonvolt

Device [02]: Cypress [Radeon HD 5800 Series]
             PC Partner Limited

Unsupported i2c device (1a)

Thank you anyway.
sr. member
Activity: 274
Merit: 250
Can anyone read VRM temps from a 5970? That's more important to me...
sr. member
Activity: 362
Merit: 250
Code:
kanotix@Kanotix:~/ius-radeonvolt-d9e89b5$ sudo ./radeonvolt

Device [2]: Cypress [Radeon HD 5800 Series]
Unsupported i2c device (1a)

kanotix@Kanotix:~/ius-radeonvolt-d9e89b5$

Reference 5850, 11.10 driver. What am I doing wrong? And what is the best voltage for this? Sorry for my ignorance, but where is the fork?
hero member
Activity: 518
Merit: 500
It seems that this does not report VRM temps on a reference 5870 at all.

The values are too close to core temp when VRMs clearly run at 90 degrees or so.

How can I modify the code so that it reports correctly ?

I already modified 1002:6899 to 1002:6898 so that it works with my 5870 and not only 5850s.

It's too sad that development is dead, because this really could have been heaven for Linux miners like myself!
legendary
Activity: 980
Merit: 1008
Well, it changes the voltage for the wrong VID on 5970s by default; on 5970s (and IIRC reference 5870s) VID 3 is the performance mode, while on reference 5850s it's VID 2.
Are you talking about this line?

Code:
vt1165_set_voltage(&i2c, 2, value);

Should that be a 3 for the reference 5870 and 5970? I know my non-reference XFX 5870 uses AMDOverdriveCtrl's third profile for performance mode (VID 2). AMDOverdriveCtrl's output for my 5870 looks very similar to this: http://pastebin.com/JAbqTR1H
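If it really is just that hardcoded index, a small table keyed on the PCI device ID might be safer than editing the literal each time. A sketch based only on the VID assignments reported in this thread; the 0x689c Hemlock ID is my assumption, not something stated here:

```c
/* Sketch: pick the performance-mode VID index per card, using the
 * assignments reported in this thread (VID 2 on ref 5850, VID 3 on
 * the 5970 and, IIRC, the ref 5870). 0x689c for Hemlock is assumed. */
static int perf_vid_index(unsigned short device_id)
{
    switch (device_id) {
    case 0x6899: return 2;  /* reference HD 5850 */
    case 0x6898: return 3;  /* reference HD 5870 (per this thread) */
    case 0x689c: return 3;  /* HD 5970 (Hemlock), assumed PCI ID */
    default:     return -1; /* unknown card: refuse rather than guess */
    }
}
```

The call would then become vt1165_set_voltage(&i2c, perf_vid_index(dev->device_id), value), bailing out when the index is -1 instead of writing a preset blindly.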
hero member
Activity: 714
Merit: 500
Tweaking voltage and monitoring VRM temps, Smiley interesting.
sr. member
Activity: 274
Merit: 250
Works for one of my 5850s - the reference one.
The rest - various 5xxx series GPUs - aren't recognized at all.
But that ref 5850 works great Smiley
Good stuff.

Does anyone know of any other Linux software to see VRM temps? Very important for me - OC with WC :>
sr. member
Activity: 406
Merit: 257
Just a report of partial success.

I have 5 GPUs on a rig :
- 2 on a HIS 5970 2GB
- 2 Sapphire 5850 1GB,
- 1 Sapphire 5830 1GB

Here is what radeonvolt sees
Code:
$ sudo ./radeonvolt

Device [7]: Hemlock [ATI Radeon HD 5900 Series]
            ATI Technologies Inc

        Current core voltage: 1.0750 V
        Presets: 0.9500 / 1.0000 / 1.0750 / 1.0500 V
        Core power draw: 57.48 A (61.80 W)
        VRM temperatures: 84 / 87 / 88 C


Device [8]: Radeon HD 5800 Series (Cypress LE)
            PC Partner Limited

Unsupported i2c device (1a)


Device [11]: Cypress [Radeon HD 5800 Series]
            PC Partner Limited

Unsupported i2c device (1a)


Device [12]: Cypress [Radeon HD 5800 Series]
            PC Partner Limited

Unsupported i2c device (1a)

As you can see, I could overvolt the 5970 (or, most probably, one of the GPUs on it) from 1.0375 to 1.075 V (I'm taking this slowly). I tuned the frequencies before overvolting and could only push one of the GPUs on the 5970 further (the other failed very quickly, in less than 2 hours, with only a 5 MHz increase, while the first didn't flinch at a 10 MHz increase running for 24 h, something it couldn't sustain before overvolting).
Well, it changes the voltage for the wrong VID on 5970s by default; on 5970s (and IIRC reference 5870s) VID 3 is the performance mode, while on reference 5850s it's VID 2.
donator
Activity: 1731
Merit: 1008
Had to install the lib; it still does not work at all for me.

Code:
root@miner:~/ius-radeonvolt-d9e89b5# ./radeonvolt
root@miner:~/ius-radeonvolt-d9e89b5# ./radeonvolt
root@miner:~/ius-radeonvolt-d9e89b5# /etc/init.d/mine stop
Stopping mining processes...: mine.
root@miner:~/ius-radeonvolt-d9e89b5# ./radeonvolt
root@miner:~/ius-radeonvolt-d9e89b5# ./radeonvolt --device 1
root@miner:~/ius-radeonvolt-d9e89b5# ./radeonvolt /?
Usage: radeonvolt [options]

Optional arguments:
  --device  device to query/modify
  --vcore    set core voltage (in V)

Example: radeonvolt --device 0 --vcore 1.0875
root@miner:~/ius-radeonvolt-d9e89b5# lspci -vd1002
lspci: -d: ':' expected
root@miner:~/ius-radeonvolt-d9e89b5#
full member
Activity: 392
Merit: 100
Is there a version for Windows?

 Smiley
sr. member
Activity: 467
Merit: 250
Grabbed the forked version (thank you!) and still none of my cards are supported. Sad


Sapphire 5830 xTreme:
Quote
Device [10]: Device 689e
             PC Partner Limited

Unsupported i2c device (1a)

Sapphire 5830 xTreme:
Quote
Device [05]: Cypress [Radeon HD 5800 Series]
             PC Partner Limited

Unsupported i2c device (1a)

XFX 6950:
Quote
Device [09]: Device 6719
             XFX Pine Group Inc.

Unsupported i2c device (00)

Diamond 6950
Quote
Device [04]: Device 6719
             Hightech Information System Ltd.

Unsupported i2c device (00)



legendary
Activity: 980
Merit: 1008
For anyone who's interested; I've forked the radeonvolt project and made some cosmetic changes to the code. Well, one functional change in that the program isn't restricted to the HD5850 anymore. It should accept all ATI cards and check to see if the correct VRM chip is in use, and try to proceed if it is.
Also, the subvendor (XFX/ASUS/Sapphire etc.) is now displayed with the device information, and a --debug option has been added that prints out extra (more or less necessary) information.

Github page

I'm still trying to find out whether GPU-Z can read the VRM temperatures of my card, because if it can, it should be doable on Linux too.

I have an XFX 5830 that it would be neat to be able to use radeonvolt on, and an Asus 5870 (from which I expect less, as I know it has a super-special voltage regulator), plus Asus/HIS/HIS IceQ-X 6870s. Let me know if there is any test info I can provide from them :p
I'd like to point out that I have added no extra features with regards to overvolting capability, mainly because I have no idea how to do it.

At the moment, from the research I have done, it seems like every card that uses the uPI uP6213 VRM controller chip is definitely not overvoltable through software, and probably not even probeable through software (reading voltages, temperatures).

I know the XFX 5870 card I have has a uPI uP6213 VRM controller chip, and radeonvolt reports the following error when trying to access it (as many others have reported):

Code:
Unsupported i2c device (1a)

As far as I can tell, the value 1a here is an ID of the VRM chip (0a is the Volterra VT1165). If that is indeed the case, then whenever radeonvolt reports the above error (with device ID 1a), the card in question has the aforementioned uPI uP6213 VRM controller, which apparently isn't accessible via software. But I'm not 100% sure of this yet, so I haven't programmed it into radeonvolt (i.e. reporting "Unsupported VRM: uPI uP6213" instead of the above message).
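For what it's worth, such a mapping could live in one small table, so the error message names the suspected chip. A sketch; the 0x1a to uP6213 association is marked unconfirmed, since the post above is itself not sure of it:

```c
#include <stddef.h>

/* Sketch: map the i2c device ID radeonvolt reads back to a VRM name.
 * Only 0x0a (VT1165) is confirmed in this thread; 0x1a -> uP6213 is
 * a guess based on the post above and should not be trusted yet. */
struct vrm_id { unsigned char id; const char *name; int supported; };

static const struct vrm_id vrm_table[] = {
    { 0x0a, "Volterra VT1165",          1 },
    { 0x1a, "uPI uP6213 (unconfirmed)", 0 },
};

static const char *vrm_name(unsigned char id)
{
    for (size_t i = 0; i < sizeof(vrm_table) / sizeof(vrm_table[0]); i++)
        if (vrm_table[i].id == id)
            return vrm_table[i].name;
    return NULL; /* truly unknown: keep printing the raw ID instead */
}
```

The error path could then print the table entry when one exists and fall back to the raw hex ID otherwise.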

hero member
Activity: 896
Merit: 1000
Just a report of partial success.

I have 5 GPUs on a rig :
- 2 on a HIS 5970 2GB
- 2 Sapphire 5850 1GB,
- 1 Sapphire 5830 1GB

Here is what radeonvolt sees
Code:
$ sudo ./radeonvolt

Device [7]: Hemlock [ATI Radeon HD 5900 Series]
            ATI Technologies Inc

        Current core voltage: 1.0750 V
        Presets: 0.9500 / 1.0000 / 1.0750 / 1.0500 V
        Core power draw: 57.48 A (61.80 W)
        VRM temperatures: 84 / 87 / 88 C


Device [8]: Radeon HD 5800 Series (Cypress LE)
            PC Partner Limited

Unsupported i2c device (1a)


Device [11]: Cypress [Radeon HD 5800 Series]
            PC Partner Limited

Unsupported i2c device (1a)


Device [12]: Cypress [Radeon HD 5800 Series]
            PC Partner Limited

Unsupported i2c device (1a)

As you can see, I could overvolt the 5970 (or, most probably, one of the GPUs on it) from 1.0375 to 1.075 V (I'm taking this slowly). I tuned the frequencies before overvolting and could only push one of the GPUs on the 5970 further (the other failed very quickly, in less than 2 hours, with only a 5 MHz increase, while the first didn't flinch at a 10 MHz increase running for 24 h, something it couldn't sustain before overvolting).
member
Activity: 77
Merit: 10
For anyone who's interested: I've forked the radeonvolt project and made some cosmetic changes to the code (plus one functional change: the program is no longer restricted to the HD5850). It should accept any ATI card, check whether the supported VRM chip is in use, and try to proceed if it is.
Also, the subvendor (XFX/ASUS/Sapphire, etc.) is now displayed with the device information, and a --debug option has been added that prints extra (more or less necessary) information.

Github page

I'm still trying to find out whether GPU-Z can read the VRM temperatures of my card, because if it can, it should be doable on Linux too.

I have an XFX 5830 that it would be neat to be able to use radeonvolt on, and an Asus 5870 (from which I expect less, as I know it has a super-special voltage regulator), plus Asus/HIS/HIS IceQ-X 6870s. Let me know if there is any test info I can provide from them :p
sr. member
Activity: 252
Merit: 250
+1

With Catalyst 11.8 and 'cgminer' I can't overvolt my Gigabyte 5850. It locks at a maximum of 1.088 V.

I'll give this a try.
legendary
Activity: 980
Merit: 1008
For anyone who's interested: I've forked the radeonvolt project and made some cosmetic changes to the code (plus one functional change: the program is no longer restricted to the HD5850). It should accept any ATI card, check whether the supported VRM chip is in use, and try to proceed if it is.
Also, the subvendor (XFX/ASUS/Sapphire, etc.) is now displayed with the device information, and a --debug option has been added that prints extra (more or less necessary) information.

Github page

I'm still trying to find out whether GPU-Z can read the VRM temperatures of my card, because if it can, it should be doable on Linux too.
legendary
Activity: 980
Merit: 1008
It seems the card I have (a non-reference 5870) doesn't allow voltage regulation via software, because it uses the uPI uP6213 voltage controller:

http://benchmarkreviews.com/index.php?option=com_content&task=view&id=491&Itemid=72&limit=1&limitstart=4
http://www.xbitlabs.com/articles/graphics/display/xfx-radeon-hd5830_3.html

The VRM part of the above 5830 board looks exactly like the XFX 5870 I have. Both the capacitors and inductors look exactly the same as on my HD 5870. http://www.coolingconfigurator.com/upload/pictures/XFX-Radeon-HD5870V2-PCB_91777.jpg

So this sort of limits the usability at least for cards like these. Regulating voltages would be nice to have, but reading the VRM temperatures would also be very useful. I presume this isn't precluded just because the voltage isn't controllable via software. Does anyone know if this is the case?

EDIT: I just pulled the 5870 out of the case to confirm that it is indeed equipped with a uP6213 voltage controller. I also confirmed the model number to be HD-587X-ZNFV V1.3, as it said on a little sticker. It seems this VRM controller isn't probeable via I2C; its data sheet doesn't mention anything about it, at least, while the data sheet of the uP6208 does. So this card in particular doesn't look very promising with regard to getting VRM/VDDC temps or voltage control via software.
legendary
Activity: 980
Merit: 1008
How about we get together and put up a bounty for whoever writes code to probe VRM temperatures for VRM chip X.

I'll start out:

I'm interested in getting it working for the XFX 5870 (1GB) with the non-reference board (model no. HD-587X-ZNFV).

I'm willing to tear my card apart and take high-res pictures of the board if anyone is willing to make a bid on implementing support for this card.

I'm willing to donate 2 BTC to anyone who implements this. Maybe if more people get in on this, we can increase the bounty to make it interesting?

EDIT: I'm not completely sure the model number is HD-587X-ZNFV. All I know is it looks like this (or at least it did, before I pulled off the stock heatsink and put on an Accelero S1 rev. 2):
sr. member
Activity: 467
Merit: 250

XFX 6950 - Device [4]: Device 6719 - Unsupported i2c device (00)
Sapphire 5830 - Device [4]: Device 689e - Unsupported i2c device (1a)
Sapphire 5850 - Device [5]: Cypress [Radeon HD 5800 Series] - Unsupported i2c device (1a)
sr. member
Activity: 349
Merit: 250
BTCPak.com - Exchange your Bitcoins for MP!
Has anybody had any luck overvolting a Sapphire HD 5850 Xtreme with this? (Or with any other utility, for that matter, besides Trixx in Windows.)
newbie
Activity: 47
Merit: 0
Can someone tell me how I can find out the addresses of alternative VR chips on other cards? I understand how I2C works and I understand the source code, but I don't know how to get the addresses needed to communicate with the slaves.

Maybe the OP can explain briefly how he found the correct values for the VT1165 chip.
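Lacking a data sheet, one pragmatic approach is a brute-force scan: try every legal 7-bit address and note which ones ACK (this is what i2cdetect does on ordinary buses). Below is a sketch with the actual bus access injected as a callback, since on these cards it would have to go through the mapped GPU i2c registers radeonvolt already uses rather than /dev/i2c; the mock probe and its 0x70 address are made up for illustration:

```c
#include <stddef.h>

/* Sketch: brute-force scan for responding I2C slave addresses.
 * probe() should return non-zero if the address ACKs; on these cards
 * it would wrap the mmio i2c transfer radeonvolt already performs. */
typedef int (*probe_fn)(unsigned char addr, void *ctx);

static int scan_i2c(probe_fn probe, void *ctx,
                    unsigned char *found, int max_found)
{
    int n = 0;
    /* addresses 0x00-0x07 and 0x78-0x7f are reserved by the I2C spec */
    for (unsigned char addr = 0x08; addr <= 0x77; addr++)
        if (probe(addr, ctx) && n < max_found)
            found[n++] = addr;
    return n;
}

/* Mock probe so the loop can be exercised without hardware;
 * pretends a single device ACKs at the made-up address 0x70. */
static int mock_probe(unsigned char addr, void *ctx)
{
    (void)ctx;
    return addr == 0x70;
}

static int demo_scan(void)
{
    unsigned char found[8];
    int n = scan_i2c(mock_probe, NULL, found, 8);
    return (n == 1 && found[0] == 0x70) ? 1 : 0;
}
```

One caveat: reading from an unknown chip at an arbitrary address is usually harmless, but writing to one is not, so a scan like this should stay strictly read-only.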
hero member
Activity: 556
Merit: 500
Well, for the 6950/6970, it appears to use the CHL8228 voltage controller: http://www.chilsemi.com/wp-content/uploads/chl822x-product-brief.pdf My programming skills are terrible, but I'll look at your source, poke around, and see what crashes I come up with Tongue
newbie
Activity: 34
Merit: 0
I also would just like to see the reporting features of this and don't need it to modify anything; I just have a 4850. I commented out the line "dev->vendor_id == 0x1002 && dev->device_id == 0x6899" and recompiled, but I'm still not getting any output.
legendary
Activity: 1148
Merit: 1001
Radix-The Decentralized Finance Protocol
Aye, obviously. My point about it being a good sign or a bad one was in regard to updating the addresses to get it working with this card, but I don't know what I'm doing, so I'm out.

I looked into this a bit, and also looked into the code of RadeonVolt. It turns out the VRM temperature and the current (from which you get the wattage) are obtained through I2C (http://en.wikipedia.org/wiki/I%C2%B2C). But different models speak different protocols, so you need to "talk" to each component differently. This is probably the reason why it's not working on your card (and mine) and giving you those numbers.

There is software that implements the protocol for nearly all the VRM models (that is how GPU-Z gets it), but it's not available on Linux. I think it's the only thing missing in Linux now.
hero member
Activity: 927
Merit: 1000
฿itcoin ฿itcoin ฿itcoin
It obviously didn't work, there is no way those numbers can be correct.
Aye, obviously. My point about it being a good sign or a bad one was in regard to updating the addresses to get it working with this card, but I don't know what I'm doing, so I'm out.
legendary
Activity: 1284
Merit: 1001
It obviously didn't work, there is no way those numbers can be correct.
hero member
Activity: 927
Merit: 1000
฿itcoin ฿itcoin ฿itcoin
Is there any way to get this working with non ref v1.1 xfx 5870's?
I can view all the information through cpu-z on windows so hopefully its doable.

I'm just desperate to view VRM temps of my cards, couldn't care less about the voltage mod at this point tbh.


There's a spot in the code you need to change to have it match your H/W.

Search in radeonvolt.c for a line that looks like this:

Code:

        for(dev = pci->devices; dev && num_cards < MAX_CARDS; dev = dev->next) {
                if(dev->device_class == PCI_CLASS_DISPLAY_VGA &&
                   dev->vendor_id == 0x1002 &&
                   dev->device_id == 0x6899)

and feed it the right values (or just plain old comment out
the lines that restrict on vendor_id and device_id).


Awesome, thank you!

I took out a few more checks that restricted me, just to see what would happen; not sure if this is a good sign or not.
Code:
Device [8]: Radeon HD 5870 (Cypress)
        Current core voltage: 0.7375 V
        Presets: 0.7125 / 0.7250 / 0.7375 / 0.7500 V
        Core power draw: 88.06 A (64.95 W)
        VRM temperatures: 10 / 10 / 10 C


Device [7]: Radeon HD 5870 (Cypress)
        Current core voltage: 0.7375 V
        Presets: 0.7125 / 0.7250 / 0.7375 / 0.7500 V
        Core power draw: 88.06 A (64.95 W)
        VRM temperatures: 10 / 10 / 10 C


Device [2]: Radeon HD 5870 (Cypress)
        Current core voltage: 0.7375 V
        Presets: 0.7125 / 0.7250 / 0.7375 / 0.7500 V
        Core power draw: 88.06 A (64.95 W)
        VRM temperatures: 10 / 10 / 10 C


Device [1]: Radeon HD 5870 (Cypress)
        Current core voltage: 0.7375 V
        Presets: 0.7125 / 0.7250 / 0.7375 / 0.7500 V
        Core power draw: 88.06 A (64.95 W)
        VRM temperatures: 10 / 10 / 10 C
hero member
Activity: 927
Merit: 1000
฿itcoin ฿itcoin ฿itcoin
Is there any way to get this working with non ref v1.1 xfx 5870's?
I can view all the information through cpu-z on windows so hopefully its doable.

I'm just desperate to view VRM temps of my cards, couldn't care less about the voltage mod at this point tbh.

Heres the output for my cards.. do you need anything else? (yes its a quad gpu system)
Code:
01:00.0 VGA compatible controller: ATI Technologies Inc Radeon HD 5870 (Cypress) (prog-if 00 [VGA controller])
Subsystem: XFX Pine Group Inc. Device 2961
Flags: bus master, fast devsel, latency 0, IRQ 54
Memory at 90000000 (64-bit, prefetchable) [size=256M]
Memory at fe4e0000 (64-bit, non-prefetchable) [size=128K]
I/O ports at 6000 [size=256]
Expansion ROM at fe4c0000 [disabled] [size=128K]
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010
Capabilities: [150] Advanced Error Reporting
Kernel driver in use: fglrx_pci
Kernel modules: fglrx, radeon

02:00.0 VGA compatible controller: ATI Technologies Inc Radeon HD 5870 (Cypress) (prog-if 00 [VGA controller])
Subsystem: XFX Pine Group Inc. Device 2961
Flags: bus master, fast devsel, latency 0, IRQ 55
Memory at a0000000 (64-bit, prefetchable) [size=256M]
Memory at fe5e0000 (64-bit, non-prefetchable) [size=128K]
I/O ports at 7000 [size=256]
Expansion ROM at fe5c0000 [disabled] [size=128K]
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010
Capabilities: [150] Advanced Error Reporting
Kernel driver in use: fglrx_pci
Kernel modules: fglrx, radeon

07:00.0 VGA compatible controller: ATI Technologies Inc Radeon HD 5870 (Cypress) (prog-if 00 [VGA controller])
Subsystem: XFX Pine Group Inc. Device 2961
Flags: bus master, fast devsel, latency 0, IRQ 56
Memory at c0000000 (64-bit, prefetchable) [size=256M]
Memory at feae0000 (64-bit, non-prefetchable) [size=128K]
I/O ports at d000 [size=256]
Expansion ROM at feac0000 [disabled] [size=128K]
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010
Capabilities: [150] Advanced Error Reporting
Kernel driver in use: fglrx_pci
Kernel modules: fglrx, radeon

08:00.0 VGA compatible controller: ATI Technologies Inc Radeon HD 5870 (Cypress) (prog-if 00 [VGA controller])
Subsystem: XFX Pine Group Inc. Device 2961
Flags: bus master, fast devsel, latency 0, IRQ 57
Memory at d0000000 (64-bit, prefetchable) [size=256M]
Memory at febe0000 (64-bit, non-prefetchable) [size=128K]
I/O ports at e000 [size=256]
Expansion ROM at febc0000 [disabled] [size=128K]
Capabilities: [50] Power Management version 3
Capabilities: [58] Express Legacy Endpoint, MSI 00
Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010
Capabilities: [150] Advanced Error Reporting
Kernel driver in use: fglrx_pci
Kernel modules: fglrx, radeon
legendary
Activity: 1428
Merit: 1000
https://www.bitworks.io
I'm using Radeonvolt to successfully change the voltage on my reference 5970 and 5870 cards, six of which are installed in one rig. I will definitely send some coins when I'm in front of the computer with my wallet. I have a non-reference 5870 I would like to sort out at some point; I need to gather and post its information.
newbie
Activity: 41
Merit: 0
Is that also working for 6990 cards? I couldn't manage to get it working.
legendary
Activity: 1428
Merit: 1000
https://www.bitworks.io
Great tool...

I spent many hours today searching for a way to underclock my memory; however, I haven't had any luck without flashing a BIOS to open up the range of supported clock settings. AMDOverdriveCtrl does not set anything outside the stock ranges. On Windows it's possible with numerous tools, so I assume the same can be done on Linux. In your work, have you seen anything like that?
newbie
Activity: 12
Merit: 0
Hi, we've recently been having some issues overvolting our Sapphire 5850 Xtreme cards, basically the same issues as some other people in this thread.

I searched around on the net today and found this: http://www.techpowerup.com/forums/showthread.php?p=2308101

Not sure if it's right or not, but it looks to be around the sort of thing you were after, in particular regarding the IC chip used on the card.

Obviously software-based overvolting is far more desirable than the risky hardware mod in that thread. My business partner and I would be immensely grateful if these issues could be resolved for Linux, and we'd be willing to throw you a bitcoin or two for your trouble Grin. We realise it's not much, but hopefully others will be able to contribute as well.

Hope this helps, if there's any other information you want us to trawl the web for, we will try and help!
full member
Activity: 302
Merit: 100
Presale is live!
Thanks a lot! A small donation is coming to your wallet soon Smiley
newbie
Activity: 16
Merit: 0
Tested on XFX 6870 (I believe it's a reference design)

- radeonvolt gives no output
- info from lspci http://paste.pocoo.org/show/404596/
ius
newbie
Activity: 56
Merit: 0
The Asus 5850 DirectCU is a non-reference card. Google suggests it's using a uP6208 controller, but the correct GPIOs for I2C would need to be reverse engineered from a Windows tool that supports voltage modification on this card...
full member
Activity: 184
Merit: 100
It doesn't work with my Asus 5850; it gives an "Unsupported i2c device" error. Here is my card:
http://www.newegg.com/Product/Product.aspx?Item=N82E16814121375
Is it possible to fix this?
Thanks for the hard work.

cj
newbie
Activity: 41
Merit: 0
Is there any way to probe the card for that info?
ius
newbie
Activity: 56
Merit: 0
I would happily try to add support for the Sapphire card (if it's not too difficult, given that I don't have the card myself), but I'd need a few bits of info:

- The GPIOs used for I2C
- The actual VRM chip(s) used. Have you got any idea? Is it a VT1165, or something else?
newbie
Activity: 41
Merit: 0
Is that a Sapphire card? Are you sure it's a reference design? I'm 100% positive they have non-reference designs (which thus aren't going to work); I'm not sure if they also have reference ones.

BTW, Sapphire 5850 Xtreme cards are popular and cheap nowadays, but they can only be overvolted using Trixx (Sapphire's proprietary Windows tool).
A Linux tool that can overvolt these cards (5850 Xtremes) would be very useful to the community, and could also gather some donations.
I could donate 5 BTC, for instance.
full member
Activity: 238
Merit: 100
Is that a Sapphire card? Are you sure it's a reference design? I'm 100% positive they have non-reference designs (which thus aren't going to work); I'm not sure if they also have reference ones.

It's a non-reference Sapphire card.
ius
newbie
Activity: 56
Merit: 0
My card is
Code:
01:00.0 VGA compatible controller: ATI Technologies Inc Cypress [Radeon HD 5800 Series]
        Subsystem: PC Partner Limited Device e140

Is that a Sapphire card? Are you sure it's a reference design? I'm 100% positive they have non-reference designs (which thus aren't going to work); I'm not sure if they also have reference ones.
ius
newbie
Activity: 56
Merit: 0
I did indeed, and forgot to add the HD5870 device ID. It should be added now, and I've also increased the sleep after an MMIO write, just to be sure.

The 0x1A value returned is actually the last value written to the I2C data register. This might be a timeout. You can uncomment line 104 in i2c.c to view the I2C status register state after bytes have been sent (redownload first).

If your 5870 is a reference card, it may very well be using different GPIOs for I2C. If that's the case, the reference 5850 is the only card on which it should work.
newbie
Activity: 20
Merit: 0
It does not output anything, this is my AMD lspci: http://paste.pocoo.org/show/396385/

You added a device id filter :-P After I removed it, I see some data for my 5970s but nothing for my 5870:


Device [7]: Hemlock [ATI Radeon HD 5900 Series]
        Current core voltage: 1.0375 V
        Presets: 0.9500 / 1.0000 / 1.0375 / 1.0500 V
        Core power draw: 56.61 A (58.74 W)
        VRM temperatures: 84 / 86 / 86 C


Device [15]: Hemlock [ATI Radeon HD 5900 Series]
        Current core voltage: 1.0375 V
        Presets: 0.9500 / 1.0000 / 1.0375 / 1.0500 V
        Core power draw: 53.13 A (55.12 W)
        VRM temperatures: 91 / 94 / 92 C


Device [16]: Radeon HD 5870 (Cypress)
Unsupported i2c device (1a)
full member
Activity: 238
Merit: 100
Does not work:

Code:
Device [1]: Cypress [Radeon HD 5800 Series]
Unsupported i2c device (1a)

My card is
Code:
01:00.0 VGA compatible controller: ATI Technologies Inc Cypress [Radeon HD 5800 Series]
        Subsystem: PC Partner Limited Device e140
        Flags: bus master, fast devsel, latency 0, IRQ 29
        Memory at e0000000 (64-bit, prefetchable) [size=256M]
        Memory at f5000000 (64-bit, non-prefetchable) [size=128K]
        I/O ports at b000 [size=256]
        [virtual] Expansion ROM at f4000000 [disabled] [size=128K]
        Capabilities:
        Kernel driver in use: fglrx_pci
        Kernel modules: fglrx
sr. member
Activity: 406
Merit: 251
- Accesses the Radeon i2c bus by mapping the Radeon i2c controller registers via /dev/mem, ...

nice, what else can we do?
newbie
Activity: 20
Merit: 0
It does not output anything, this is my AMD lspci: http://paste.pocoo.org/show/396385/
ius
newbie
Activity: 56
Merit: 0
Good catch on the pciutils/libpci devel package. Arch does not ship headers separately, so I totally forgot about the dependency for other distros.

Regarding your compilation error, it seems I also forgot the include guards. I have committed the fix to GitHub; if you redownload the source from the same URL you should be able to compile.
newbie
Activity: 20
Merit: 0
Now it fails with ... any idea?

gcc -O3 -Wall -c vt1165.c
In file included from vt1165.h:17:0,
                 from vt1165.c:18:
types.h:19:17: error: redefinition of typedef ‘u8’
types.h:19:17: note: previous declaration of ‘u8’ was here
types.h:20:18: error: redefinition of typedef ‘u16’
types.h:20:18: note: previous declaration of ‘u16’ was here
types.h:21:18: error: redefinition of typedef ‘u32’
types.h:21:18: note: previous declaration of ‘u32’ was here
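Those "redefinition of typedef" errors are the classic symptom of a header included twice without include guards, which matches the fix described above. A guarded types.h would look something like this (a sketch; the guard macro name is made up, and the exact underlying types are assumed from the u8/u16/u32 names):

```c
/* types.h -- include guard makes repeated inclusion harmless */
#ifndef RADEONVOLT_TYPES_H
#define RADEONVOLT_TYPES_H

typedef unsigned char  u8;
typedef unsigned short u16;
typedef unsigned int   u32;

#endif /* RADEONVOLT_TYPES_H */
```

With the guard in place, including types.h from both vt1165.h and vt1165.c no longer redefines the typedefs.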
newbie
Activity: 20
Merit: 0
Needs libpci-dev on debian/ubuntu
ius
newbie
Activity: 56
Merit: 0
So, there are a couple of Radeon monitoring/tweaking tools available for Linux (aticonfig, AMDOverdriveCtrl, glakkeclock). Unfortunately, none of them supports displaying VRM temperatures or modifying the core voltage (the latter can also be achieved by editing your Radeon's BIOS using a Windows application, but that's not really convenient now, is it?).

As such, I started hacking and came up with a utility of my own. It displays VRM temperatures and average current, and allows you to view and modify the GPU core voltage.

I've tested it on my (single) ATI card, an Asus HD5850 (reference).

Remarks
- Should work on all reference HD5850 cards with a similar Volterra VT1165 VRM setup.
- It should also support multiple cards, but I haven't been able to test it myself.
- Accesses the Radeon i2c bus by mapping the Radeon i2c controller registers via /dev/mem, thus root is required (anyone have a better idea here?).
- Comes without any warranty, use at your own risk, make sure you know what you're doing, etc.
- May even burn your house down. Probably not, though.

Download
Source code
Github

Compiling
Depending on your distro, you may need to install the pciutils development package (Ubuntu/Debian: apt-get install libpci-dev).

Code:
wget https://github.com/ius/radeonvolt/tarball/master -O - | tar xz
cd ius-radeonvolt*
make

Usage examples
Code:
$ sudo ./radeonvolt

Device [1]: Cypress [Radeon HD 5800 Series]
Current core voltage: 1.0875 V
Presets: 1.0000 / 1.0375 / 1.0875 / 0.9500 V
Core power draw: 62.71 A (68.20 W)
VRM temperatures: 100 / 99 / 98 C

Before attempting to modify the vcore, make sure the values for the 'current voltage' as well as 'presets' look sane.

Code:
$ sudo ./radeonvolt --vcore 1.1000 --device 1
Setting vddc of device 1 to 1.1000 V (0x34)

Device [1]: Cypress [Radeon HD 5800 Series]
Current core voltage: 1.1000 V
Presets: 1.0000 / 1.0375 / 1.1000 / 0.9500 V
Core power draw: 61.84 A (68.02 W)
VRM temperatures: 100 / 99 / 98 C
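For what it's worth, the 0x34 printed above is consistent with the VT1165 encoding VIDs in 12.5 mV steps above a 0.45 V base (1.1000 = 0.45 + 0x34 * 0.0125), and the wattage is simply the current times the core voltage. A sketch of those conversions, inferred from the outputs in this post rather than from a data sheet, so verify before writing anything to hardware:

```c
/* Sketch: VID <-> voltage for the VT1165, inferred only from the
 * "1.1000 V (0x34)" line above: 0.45 V base, 12.5 mV per step.
 * Not from a data sheet; verify before writing to hardware. */
static double vid_to_volt(unsigned vid)
{
    return 0.45 + vid * 0.0125;
}

static unsigned volt_to_vid(double volts)
{
    return (unsigned)((volts - 0.45) / 0.0125 + 0.5); /* round to nearest */
}

/* Power draw as printed: average current (A) times core voltage (V) */
static double core_watts(double amps, double vcore)
{
    return amps * vcore;
}
```

This checks out against the examples in this post: volt_to_vid(1.1000) gives 0x34, 0x33 decodes to 1.0875 V, and 62.71 A at 1.0875 V gives roughly the 68.20 W printed above.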

Please let me know if it works for you (especially with non-5850 cards or multiple cards). If it doesn't, include the output of "lspci -vd 1002:".

If it does work, feel free to send any spare coins to 19kdfgW1KXQgV7SCLEPAojtHxN9xotGkGH.