Author

Topic: Sapphire Xtreme 5850 THREAD!!!!!!!! (Read 3816 times)

brand new
Activity: 0
Merit: 250
July 27, 2011, 05:37:52 PM
#18
Quote from: m3sSh3aD
I'm certain OS X isn't very good for mining. I read somewhere that a 5870/5850 (can't remember which) does 180 MH/s in OS X and 400+ in Windows. That's over a 50% hit. Unless he was doing something wrong in OS X, that's bad. Linux, however, isn't far behind - I'm guessing because it's open source rather than closed like OS X, and people have been making the video drivers better all the time for Linux. Mac? Well, that's down to them (or AMD).
I'd say the reason why OS X is slower than Linux or Windows on ostensibly the same card is that everyone on a PC overclocks their card. The overclock tools don't exist on OS X. The Radeon SDK isn't open source - surely you know that - it's a 'proprietary package' on Linux, just as the Catalyst drivers are on Windows. OpenCL works the same way on all platforms - the kernel will do the same thing - so I'd guess that the reason *normal* Mac results are lower than on other systems is that OS X *itself* makes use of OpenCL for drawing the screen and all the eye candy...

For a statistically insignificant datum, I am running phoenix 1.50 with the poclbm OpenCL kernel on my Mac Pro, which runs Snow Leopard and has an old Radeon HD 4870. Checking the Mining Hardware Comparison chart, 4870 cards crank out between 78 and 105 MH/s; the only non-overclocked ones run at 78 and 90. My Mac Pro cranks out over 80. It's variable because the card is also running a lot of windows on an Apple 30" LCD and a secondary old 18" Apple LCD - with 8 'Spaces' (virtual desktops). Due to the way OS X works, all windows are stored as textures in graphics memory so the fancy Exposé and Spaces eye-candy can be done easily... my 4870 only has 512MB of RAM, and with hundreds of Safari windows open there's a LOT of swapping between GPU memory and main RAM... perhaps I ought to try the miner with nothing else running...

So I'd say that OS X isn't that bad. The only genuine Apple 58xx card is the 5870 (and it's £377, hence why I'm building Linux boxes again Cheesy ) - perhaps you were comparing non-overclocked Apple results to overclocked results. Let's face it, virtually *everyone* on the other two platforms overclocks their GPU core and underclocks their VRAM for bitcoin mining. Compare like with like, and OS X isn't 'slower' than the others.

Remember that Apple got into GPGPU work before Windows or Linux - mainly because of the need to accelerate all that eye-candy, and the loss of their beloved Altivec instruction set due to the Intel switch. Of course, some idiot at nVidia leaked Apple secrets - and Jobs is such a monomaniac that he immediately ditched them as 'top graphics card partner' and switched to ATI.

There's less R&D money for OS X driver work *at AMD* - but with Apple now on Intel CPUs, a lot of the code can be re-used, and Apple themselves have a BIG interest in making OpenCL both a standard and a big part of OS X. It's even on their website marketing the OS... think about it - a hardcore developers-only deep-internals API being touted to the *public* as a feature... Apple have a LOT of engineers working on OpenCL.

After all, who invented / initially developed OpenCL? Huh Guess.

Quote from: m3sSh3aD
I'm with catfish - OS X is good; Apple are the problem these days. Although you've got to give it to their marketing arm, and the patents.... Tongue The patent system needs reworking too. Not even going to get into Monsanto and crew patenting LIFE! Sick. It's a shame Apple have taken the direction they have, because they have a superior OS over Windows Sad
We ought to start a separate thread for this... you appear to have a serious grievance with Apple's business direction, but I have to say that you shouldn't be as angry as you sound *unless* you really like the hardware and software - but can't stand the public image that goes with Apple these days. Well join the club matey. The only difference is that I won't avoid the best simply because it has an embarrassing image. Yes, I hate the coffee-shop bourgeois 'pretending to work but really showing off their expensive computer they can't use' horseshit, and the gold-plated iPhones etc. But as I've always believed, deliberately running *against* the herd means that your actions are still dictated by the herd. Just like the sheep. I use Apple products where they're the best. Others (e.g. Linux mining boxes!) where suitable.

And to be frank, I feared for Apple's future until the mad success of the iPod, then the iPhone and the simply unprecedented marketshare-grab from a completely unknown market entrant. I hate Apple's lawyers, respect Jobs massively, and think Jonathan Ive is the best industrial designer EVAR - but we've only still got the Mac *computers* and the brilliant OS X because it's being financed by iPod / iPhone profits. The day that Apple ditches the Mac and only sells consumer electronics will be a very sad day. I don't think it'll happen until Teh Steve dies though, regardless of his obtuse comment about 'dumping the Mac' back in the old days... he was talking about Classic OS and the 'next insanely great thing' - which was OS X...

Quote from: m3sSh3aD
I've been out of tech for a decade because it's been very dull and boring, being drip-fed.
Same here. Used to be 100% an *IT* consultant. Got very bored around 10 years ago and moved into the Evil Empire of quantitative finance. Haven't got so fired up about technology in many years, until I heard about bitcoin!
brand new
Activity: 0
Merit: 250
July 27, 2011, 03:55:55 AM
#15
Is anybody here running Linux? What are you using to adjust the voltage?
For those with the attention span of a goldfish, the answers are (a) yes, I run Linux wherever I can't use Mac OS X productively; and (b) there are two widely used Linux tools for tweaking ATI cards - the aticonfig CLI tool supplied by the manufacturer, and AMDOverdriveCtrl, a GUI front end. The AMDOverdriveCtrl GUI app appears to have a voltage adjustment bar, so there's your answer. I understand that AMDOverdriveCtrl is just a pretty front end for aticonfig, but I haven't found out which options to tweak on the command line yet.

Smiley Decided to put an abstract in after re-reading the random rant below, heh.

All my mining kit runs Linux, apart from the proof-of-concept OpenCL phoenix setup on my Mac Pro with its laughable 4870 - 'laughable' being the price I paid for the card back when it was Apple's 'top non-pro-photog GPU'. It adds a steady 80 MH/s and is what I use to develop the scripts that monitor various outputs (aticonfig on Linux obviously isn't available on OS X, but the scripts extend to any output) and make my three (soon to be four) miner boxes easy to monitor from one web page.
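
For a flavour of what those scripts boil down to, here's a minimal sketch - the hostnames and the output path are placeholders, and the real scripts do rather more:
Code:
#!/bin/sh
# Sketch only: poll each miner box over ssh for its GPU temperatures and
# dump one block per box into a text file that a web server can serve.
OUT=/var/www/miners.txt
: > "$OUT"
for box in miner1 miner2 miner3; do
    echo "== $box ==" >> "$OUT"
    ssh "$box" 'export DISPLAY=:0; aticonfig --adapter=all --odgt' \
        | awk '/Temperature/ {print "GPU temp: " $5 " C"}' >> "$OUT"
done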

This is only because I'm a Mac bloke (Apple make nice hardware and I am a Unix guy, and prefer OS X to Linux on their hardware) - hence it's a natural swap from OS X to Linux. I'm too old for OS religious wars, but I'll only use Windows in a VM on one of my Macs for specific business applications. Clearly, running hardware-tweaking tools within a virtual machine is generally unlikely to work Smiley

From a box of bits to a fully running miner with Linux (I'm following the herd and using Ubuntu, of course any distro will work, and anything Debian-based will probably work with exactly the same setup scripts) takes me about half an hour. It's a bit like installing OS X - the Ubuntu lads and ladies wisely used similar user interfaces to the bits of OS X that Apple did well (and wisely avoided the bits Apple screwed right up). Perhaps Windows works as well these days, but it's not an *immediate* straightforward job to write scripts on Windows like it is on Unix systems. Windows is MUCH more GUI-development oriented, and whilst I can't complain about the cost of Microsoft's dev suite any more (to be fair, Visual Studio has always been the best IDE), I really wouldn't want to waste time building a front-end application for this. Equally, the Windows Way tends to require API calls to acquire hardware sensor readings, which is extra code to write. The Unix paradigm has lots of little tools that do one or a few functions, take input (from a file, another program or the command line) and write output (to console, another file, or another program, etc.).
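
As a trivial example of that little-tools philosophy, here's a one-liner that appends a timestamped temperature reading for GPU 0 to a CSV. The awk field position depends on your driver version's output, so treat it as a sketch:
Code:
# timestamp,temperature - the log path is just an example
echo "$(date +%s),$(DISPLAY=:0 aticonfig --adapter=0 --odgt | awk '/Temperature/ {print $5}')" >> /var/log/gpu0-temp.csv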

Now you can do the same with Windows. But Unix developers tend to write command-line tools as a matter of course, whereas Windows developers write fancy GUI tools as a matter of course. If you browse back to the thread started by Messhead in Newbies, you'll see his 'Trixx' application - which allows him to do the Radeon tweaks you want. IMO, from a Mac user-interface-design perspective, Trixx is an abomination Cheesy However, it will be accessing a bunch of Radeon APIs to do these changes... so you could write code to do this yourself in VB script, C# or C - you just need to find the API headers in the SDK.

With Linux in particular (as this doesn't apply to OS X), AMD have made available an SDK (for which Linux users should be very thankful!) - and the 'standard Unix' approach of a scriptable command line tool is the main controller for their 'Overdrive' tweaking functions. It's called 'aticonfig' and a search here will find that it's used in all Linux miners to show temperatures, change clock speeds, alter fan performance, etc. There is a GUI wrapper for this tool called 'AMDOverdriveCtrl' for those who have screens connected to their miners (or are exporting X sessions across the network to a controller machine - however I prefer to leave the GUI alone on dedicated headless miner machines). This app has a Voltage tweaking adjuster. One caveat - I haven't ever tried the application, as I currently only use the CLI tool for scripting purposes.
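
To give a flavour, the day-to-day incantations look like this - adapter numbers and the example clocks are just that, examples, so check what your own cards report before committing anything:
Code:
export DISPLAY=:0                              # aticonfig needs an X display
aticonfig --lsa                                # list adapters
aticonfig --adapter=all --odgt                 # GPU temperatures
aticonfig --adapter=0 --odgc                   # current clocks and GPU load
aticonfig --adapter=0 --od-enable              # unlock Overdrive before touching clocks
aticonfig --adapter=0 --od-setclocks=950,300   # core 950MHz, memory 300MHz (example values)
aticonfig --adapter=0 --pplib-cmd "get fanspeed 0"   # the undocumented fan query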

And changing *voltage* is something I've not yet found in the aticonfig tool. To alter fan speed, one uses an undocumented command called pplib-cmd, so the code is something like:
Code:
# double quotes so ${fanSpeed} actually expands; DISPLAY prefixed so aticonfig picks it up
DISPLAY=:0.${gpuIndex} aticonfig --adapter=${gpuIndex} --pplib-cmd "set fanspeed 0 ${fanSpeed}"

The help output of aticonfig doesn't mention the pplib-cmd option, so the voltage-tweaking option must be similarly undocumented - it's just a question of finding it. Maybe it's another command for the above 'pplib' option. Then again, there's something called the PCS (persistent configuration store) which, presumably, holds things like the 'committed clocks' that you eventually settle on when overclocking. This has a key/value and array storage system and I'm looking into it.
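
For anyone who wants to poke at the PCS themselves, aticonfig appears to have key/value switches for it - I haven't verified these on my own boxes yet, and the group/key names below are pure placeholders:
Code:
aticonfig --set-pcs-str="GROUPNAME,KeyName,SomeValue"   # write a string value (placeholder names)
aticonfig --set-pcs-u32="GROUPNAME,KeyName,1"           # write an unsigned 32-bit value
aticonfig --get-pcs-key="GROUPNAME,KeyName"             # read a key back - I *think* this is the read switch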

The number of options AMD have made available in the aticonfig tool is vast - so if they removed voltage control then it'd be a deliberate act to hobble Linux tweakers. Since Linux users tend to be tweakers by nature, and the AMD lads / ladies who wrote the Linux code must be Linux types, I bet it's in there somewhere.

I haven't got the balls to over-volt my cards just yet because I'm still struggling with getting stable temperatures in aesthetically pleasing cases in safe environments. Running the logic boards bare in cardboard boxes in my office works very well, but my office is underground with no air conditioning and I'm cooking. The back room is considerably colder than the rest of the building so I want to put them there... but they will need some protection. Equally, I was concerned about massive power usage, but my dual 5850 overclocked box is currently (sorry, no pun intended) drawing 450W *total* from the wall. So I'm taking Messhead's advice on board and researching how to use the CLI tool to over-volt, assuming I can get GPU temperatures consistently below 80˚C - at the moment they're 90˚C stable.
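
In the meantime I keep an eye on temperatures while fiddling with a trivial watch loop - again just a sketch, nothing clever:
Code:
# refresh temperatures and clocks every 10 seconds on the box itself
watch -n 10 'DISPLAY=:0 aticonfig --adapter=all --odgt; DISPLAY=:0 aticonfig --adapter=all --odgc'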

When I find out, I'll post here.
sr. member
Activity: 322
Merit: 250
July 27, 2011, 08:46:30 AM
#14
1.225V is quite common with Sapphire cards....

http://forums.extremeoverclocking.com/showthread.php?t=341934

I might see what more I can get out of these. 68°C is the top temp and the VRMs are below 100°C. They throttle at 125°C anyway, so I'll see the impact as soon as it hits. I got spares so..... Tongue
sr. member
Activity: 322
Merit: 250
July 27, 2011, 06:42:00 AM
#13
I'm certain OS X isn't very good for mining. I read somewhere that a 5870/5850 (can't remember which) does 180 MH/s in OS X and 400+ in Windows. That's over a 50% hit. Unless he was doing something wrong in OS X, that's bad. Linux, however, isn't far behind - I'm guessing because it's open source rather than closed like OS X, and people have been making the video drivers better all the time for Linux. Mac? Well, that's down to them (or AMD).

I'm with catfish - OS X is good; Apple are the problem these days. Although you've got to give it to their marketing arm, and the patents.... Tongue The patent system needs reworking too. Not even going to get into Monsanto and crew patenting LIFE! Sick. It's a shame Apple have taken the direction they have, because they have a superior OS over Windows Sad

I like that ARM may be in contention by around 2015, maybe later, but that will mix things up. Also, graphene chips in the future will change the game, and there are multiple types of memory being developed. I've been out of tech for a decade because it's been very dull and boring, being drip-fed. The system we have today slows technology down because the companies have to recoup what they've invested. I believe a new way of society is needed given the advances over the last 150 years. Take note that that's the industrial age and the technological/information age - two of the most 'advancing' ages of all. In 150 years...... That's not long at all. The powers that be all need removing and people need to start working together more. Governments have lost their way the world over and have forgotten what they were put there for: manage their own part of the world, NOT OTHERS!!! Everyone's too interested in everyone else's business instead of looking after their own. I have strong views here, and this isn't the place Smiley

These cards are still holding. The converters are coming from China or Taiwan, can't remember which, but they're going to be here end of this week/next week. I'm skint for the next few weeks, so until then I'm running a lonely 3 cards. 1225 MH/s from them though Smiley
sr. member
Activity: 349
Merit: 250
BTCPak.com - Exchange your Bitcoins for MP!
July 27, 2011, 02:53:40 AM
#12
Is anybody here running Linux? What are you using to adjust the voltage?
sr. member
Activity: 322
Merit: 250
July 26, 2011, 05:32:09 PM
#11
@mastergamer - the VRMs are safe at 1.250. You can get 400 MH/s out of these all day long.

I hope people don't try setting their voltage to 1.25V after reading this.

It's not safe, and just about no chip will be able to absorb this in the long run. Sure, you might get big hash rates - for a few weeks, before the VRAM busts.

1.2V is probably the highest anyone should settle for if they are into mining for the long run rather than benchmarking.

Quote from: Mousepotato
so should I even be worried about voltage at this point?

If you want to be cautious, you can buy a heat sensor for about 10 bucks from a hardware store. Point it at the VRMs; if the temp is at 80-110°C you have nothing to worry about.

If it shows 140°C+ then it's better to tone down the core frequency or voltage.

Well, that was my bad - I meant 1.2 is safe, and I know no one's blown anything around there. 1.25 can be found with a simple search on the interweb. I got a thermo thing; none of them (the VRMs) are over 100°C. They throttle at 125/130 degrees, so you will see a decrease in MH/s before the card totally bricks. This is ATI hardware - always had better build quality IMHO. Get sufficient cooling; I'm not using cases, but tables Smiley
newbie
Activity: 42
Merit: 0
July 26, 2011, 01:24:08 PM
#10
1.25V is safe to the VRAMs  Shocked

I don't think so; at least mine can't handle it. Above 1.2V I get a black screen with the fan at 100% after a while - the higher the voltage, the earlier the card fails.
I only tested it on one card; the other never failed, but I haven't pushed them any further than 960MHz and 1.18V.
I will stay at these clocks; the difference is too small to risk a dead card.

And btw, my 7-year-old PSU was designed to deliver about 465W, and at the wall my PC needs about 540W  Grin

So I think I shouldn't overvolt them to 1.25V hehe


1.25V is lower than what could immediately kill the VRMs (voltage regulators, not VRAM), but they would likely die from heat long before you discover the maximum you can get away with.
sr. member
Activity: 252
Merit: 251
July 26, 2011, 01:12:29 PM
#9
@mastergamer - the VRMs are safe at 1.250. You can get 400 MH/s out of these all day long.

I hope people don't try setting their voltage to 1.25V after reading this.

It's not safe, and just about no chip will be able to absorb this in the long run. Sure, you might get big hash rates - for a few weeks, before the VRAM busts.

1.2V is probably the highest anyone should settle for if they are into mining for the long run rather than benchmarking.

Quote from: Mousepotato
so should I even be worried about voltage at this point?

If you want to be cautious, you can buy a heat sensor for about 10 bucks from a hardware store. Point it at the VRMs; if the temp is at 80-110°C you have nothing to worry about.

If it shows 140°C+ then it's better to tone down the core frequency or voltage.
hero member
Activity: 896
Merit: 1000
Seal Cub Clubbing Club
July 26, 2011, 11:44:30 AM
#8
How are you guys getting 1.25V? I use AMD GPU Clock Tool to adjust core/memory speeds, and while there's a drop-down box to adjust voltage, it doesn't seem to work and my voltage gets auto-adjusted to 1.088V. I've been running stable @ 1020MHz @ 410-412 MH/s for 3 days now without a problem, so should I even be worried about voltage at this point?
newbie
Activity: 56
Merit: 0
July 26, 2011, 11:30:11 AM
#7
1.25V is safe to the VRAMs  Shocked

I don't think so; at least mine can't handle it. Above 1.2V I get a black screen with the fan at 100% after a while - the higher the voltage, the earlier the card fails.
I only tested it on one card; the other never failed, but I haven't pushed them any further than 960MHz and 1.18V.
I will stay at these clocks; the difference is too small to risk a dead card.

And btw, my 7-year-old PSU was designed to deliver about 465W, and at the wall my PC needs about 540W  Grin

So I think I shouldn't overvolt them to 1.25V hehe
sr. member
Activity: 322
Merit: 250
July 26, 2011, 10:58:22 AM
#6
@mastergamer - the VRMs are safe at 1.250; I don't go over 1.200 just to be safe. You can get 400 MH/s out of these all day long. DiabloMiner has about a 0.3% stale rate versus phoenix's 1.4%, so use Diablo with vectors 2, worksize 256, aggression 13. Should help a bit Smiley
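
For anyone sticking with phoenix rather than Diablo, the equivalent kernel switches look roughly like this - the pool URL, worker credentials and device number are placeholders:
Code:
./phoenix.py -u http://worker:password@pool.example.com:8332/ -k poclbm \
    DEVICE=0 VECTORS BFI_INT AGGRESSION=13 WORKSIZE=256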

@Catfish - Oh mate, my first PC was a DX-66. What a beast. Can't remember if it was 8MB or 16MB of RAM; 151MB IBM HDD. Syndicate Wars I loved, ha ha. X-COM: Enemy Unknown. Some classics. Sinclair ZX and Amigas and all. Even had an Atari Jaguar - think my mum paid about £300 for that, and I don't come from a well-off family haha Smiley My brother was 10 years older so a lot got passed down too. All good. That needs to go up on SemiAccurate's Flashback Friday Smiley Although the name's on it haha

Right, that new revision that will overvolt is sitting pretty at 1000 but won't go any higher. It seems to run a little hotter, GPU-wise.... The BIOS version is different in the info on TRIXX:

Card A/000SA - 012.020.000.050.037238
Card B/230SA - 012.020.000.055.000000

I want to have a play with my mate's that does 1020 @ stock now, but I bet it won't work, or it'll stop this one from overvolting, so I'm going to leave it alone Smiley The other 2 are going at 1015/300 @ 1.2V. I'll guarantee 1.2V WILL NOT DAMAGE YOUR CARDS! NOR WILL IT GO OVER 205 WATTS. 1.25 won't damage your card, but I won't guarantee that - 1 in a million and all Smiley 1224 MH/s from 3 cards. Need those converters now to add 2 more cards into the mix Tongue

Unfortunately I'm no electrician. I'm sure there are devices for the wall though. I'm interested, so I'll have a look when I have time as well. Sure it's just a thing you put in between and it measures the current. GPU-Z will tell you the amps, on the old cards at least, not the new ones.... Smiley
newbie
Activity: 56
Merit: 0
July 26, 2011, 09:28:06 AM
#5
My HD 5850 cards (3 units) have been running at 950/1.18V for a few weeks fine.
I don't want to go higher; I don't want to ruin my cards.
They give me 375 MH/s, which is fine I think?
sr. member
Activity: 322
Merit: 250
July 26, 2011, 08:40:47 AM
#4
You don't need anything special for the processor or motherboard. I got old Athlon dual cores and mobos - cheap, cheap Smiley Although there's a limited choice of motherboards; you need to look for a different combo, but you don't need to be spending anything you don't have to. Three x1 slots and an x16 gives you 4 cards, and a PCI to PCI-E converter makes that a 5th card Smiley

This card is clocked at 1000/300 @ 1.193V. This is spec B, the 'unvoltable' one that suddenly is! It won't go over even with 1.2V, and I've set that as my limit, way below the 1.25 I've seen. Well, 1.275 :/ Not for me, not without them Zalman coolers. My others, though, run 1015/300 @ 1.2V all day long - the spec A ones. This is on a 1200W supply, so power QUALITY is important Smiley VERY
full member
Activity: 196
Merit: 100
Oikos.cash | Decentralized Finance on Tron
July 26, 2011, 08:09:33 AM
#3
Sounds good! Where are you finding 5850s?
sr. member
Activity: 322
Merit: 250
July 26, 2011, 04:38:59 AM
#2
UPDATE:

Strangely it is now allowing me to overclock this card I have in this system. All @ 990/300 @ 1.187V. I've had this MSI mobo connected to these cards before - they DIDN'T OVERVOLT! All that's different is the 1200W supply hooked up. Surely it can't be that. Losing my Gigabyte board with 4 1x PCI-E sockets sucks; I can't find another board with so many on Sad PCI to PCI-E converters are £20 on eBay. Cables are £5. Cablesaurus? YOU MAD, catfish Tongue I looked there once and thought how stupid people are who go there. RIP-OFF! More money than sense, most people, though Smiley

A 600W Coolermaster will do 2 cards and a system. The green and black pins are the ones to connect (to switch a PSU on without a motherboard), that is correct. 205W is my 'personal' TDP for these cards up at 1.2 volts, at least with the 000SA variant. As soon as I have converters this 1200 is getting hooked up to 5 cards. If all goes well, this 'rig' idea of mine will be 10-card beasts, all in a (roughly) 70cm x 40cm x 70cm area: 2x 1200/1250W PSUs, 10x 5850s & 2x systems. Easily cooled, and at ~400 MH/s per card that's 4 GH/s. So, after playing around a bit, this project would have cost me around £2K, BUT..... now with what I know I can get that price down considerably. What do you reckon - is a 4 GH/s machine for £2000 a good price? I'm certain it is. I'll sell them Tongue haha

PSU = GO BIG & GO QUALITY, it's that simple; they provide the cards with a better, more constant flow. TRY TO GET MORE RAILS to dissipate the power more evenly across the cards. BeQuiet have OCK, which links all rails together for EXTREMELY HIGH POWER, though. Also, it takes at least a 4-card system to get a better profit. Plus, if this 5-card setup works in Windows now, the 5-card system is awesome, even with the extra cost of PCI converter(s). 1400/1500W would be 6 cards. Of course, you can always jerry-rig PSUs Tongue Motherboard sockets then haha

Playing with this overclock anyway. Stuck with 3 cards for now Sad EDIT: 1000/300 @ 1.193V - looks like the PSU is VITAL for getting the most out of your cards; the 1200W beast is doing wonders. Although this new card is acting like all the others - fine by me Tongue
sr. member
Activity: 322
Merit: 250
July 25, 2011, 07:00:15 PM
#1
So continued from the newbie section.....

I just busted my Gigabyte mobo, the one with 4 1x PCI-E and 1 x16 socket Sad

I put my other board in, which only has 2 1x and 1 x16 - so 3 cards Sad BUT............ it looks like I can overclock it..... I'm updating to the 11.6b ATI driver, as this will have 5x 5850s - I've hooked a 1200W PSU up and the new drivers support more than 4 cards in Windows. Got to get some PCI to PCI-E converters though. 1 ordered, but I need more now that board has died on me Sad Cut an x1 socket up too, so no refund Sad Oh well.... Tongue

These drivers are taking AGES to install though - 20 minutes so far, but it is going. I will report back about this card LETTING ME overvolt it, at least in TRIXX. It's 1am and this is what I love Tongue