@Inaba - yup, mine grew organically from my first efforts to get phoenix up and running on my Mac Pro (which got me very interested in Bitcoin), plus an attempt at relearning basic woodworking skills (you've probably seen the Catfish Mining Shelf, which *appears* laughably primitive but only works thanks to a 5-page thread on fluid dynamics with a nice chap from MMC).
The Mac Pro still runs the hacked-together, hand-coded stuff, but your pool has been rock solid and OS X is reliable too (despite being my main workstation, driving a couple of 30" Apple screens plus 6 virtual desktops that I flip between regularly - remember OS X uses the GPU for all its eye candy, including the notorious Exposé, which is a slideshow on my machine with 50+ windows open on each 2560x1600 screen), so I haven't bothered automating it. The Mac just works in general, and since it's my main desktop box, monitoring it isn't much of a hardship.
Regarding the other miners - well, for cost reasons I like to keep a modular miner structure, with *cheap* 4-slot logic boards that don't stress the PCIe power distribution too much (I'm pretty sure now that the issues most of us see with >4 GPUs per logic board, connected via PCIe extenders, come down to the boards not being designed to supply tens of watts apiece to more than 4 of the PCIe slots themselves). So one Catfish Mining Shelf contains 3 Linux machines and 12 GPUs, hence 12 instances of phoenix.
The only reason I use phoenix is that it was the first available code to link against OS X's OpenCL libraries rather than ATI's APP SDK. Then I found that phatk and its derivatives were *significantly* faster on my overclocked cards under Linux. It makes quite a difference to the bottom line - I simply expect a 5770 to run 220 MH/s, a 5830 290 MH/s, a 5850 390 MH/s and a 6950 around 400 MH/s. That's quite a jump from the 'standard' expectations, so I was reluctant to give it up! And as you know, once you get a nice stable software environment, you tend to keep the core architecture as it is and just fiddle around the edges, trying to make the original hacked-together infrastructure look elegant and 'properly designed'.
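For the record, launching those instances is trivial - roughly this, one detached screen per GPU (a sketch only: the pool URL is a placeholder and the phatk flags like VECTORS, BFI_INT and AGGRESSION need tuning per card, so don't treat these values as gospel):

    #!/bin/bash
    # launch one phoenix instance per GPU, each in its own detached screen
    # (sketch - pool URL/worker details are placeholders; kernel flags are
    #  typical phatk options but the right values depend on the card)
    POOL="http://worker:password@pool.example.com:8337/"
    for DEV in 0 1 2 3; do
        screen -dmS gpu$DEV ./phoenix.py -u "$POOL" \
            -k phatk DEVICE=$DEV VECTORS BFI_INT AGGRESSION=11
    done

Then 'screen -r gpu2' (or whichever) picks up any one of them for a look.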
Of course, the proper approach is to start with the architecture, but I gave up IT architecture and project management years ago.
When you've got a few near-identical boxes (maybe with differing types of card) and the project is 'hobby' level, a lot of free time can be wasted on OS installs and juggling unreliable software combinations. So I'm trying to get a script that does it ALL - anything from 4870s to 6990s, multiple GPUs, whatever - and can be kicked off after a stock install that *also leaves a usable auxiliary Linux box* for anything else you may want to run.
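The GPU-counting half of that script is easy enough - something like this (a sketch: the lspci match pattern is an assumption and may need widening for other card families):

    #!/bin/bash
    # work out how many ATI/AMD GPUs are fitted, so the install script
    # knows how many phoenix instances (and screens) to configure
    # (sketch - the grep pattern may need adjusting per card/driver)
    NGPU=$(lspci | grep -i 'VGA' | grep -ci 'ATI\|AMD')
    echo "Detected $NGPU ATI/AMD GPUs"

Feed $NGPU into the launch loop above and the same script copes with anything from one card to a full shelf.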
Dedicated miners are all well and good, but they sink the entire hardware investment into one job. I admit that I *do* underclock and cripple my CPUs - they're mostly Sandy Bridge Intel CPUs that *could* give me decent distributed CPU horsepower if I needed it - but I've still got a 'grid' of reasonably standard unix machines that I can delegate work to when required. Linuxcoin isn't ideal for me since these boxes live on my network and I have no requirement for 'non-persistent' systems; plus, once I've tailored a box to handle most unixy jobs, I want to be able to rebuild it very quickly.
As to your questions about adding stats to the EMC webpages - it'd be hard, because the critical thing I'm interested in is GPU health, so I want to see the temperature readings, the clocks, and the status messages from phoenix etc. I suppose I could rely on *your* MH/s figures and forget the status messages, but GPU temperature is critical, especially when you've got kooky wooden air-cooled rigs like mine. If the extractor fans were to fail, 12 GPUs would spike over 90˚C within a minute... I want to *know* about that!!! Having the system spot it itself and restart would just result in a perpetually rebooting box and wasted energy. I don't know how your web code could report individual workers' GPU temperatures without distributing OS-specific code to each miner in your pool... sounds like a lot of work for you and a lot of hassle.
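For reference, grabbing the temperatures on the Linux boxes is the easy bit, via fglrx's aticonfig - a rough sketch (it assumes a dummy X server on :0, and the 90˚C threshold and mail alert are illustrative choices, not a recommendation):

    #!/bin/bash
    # poll all GPU temperatures and shout (don't auto-reboot!) if hot
    # (sketch - assumes the fglrx driver and an X server on display :0)
    THRESHOLD=90
    export DISPLAY=:0
    aticonfig --adapter=all --odgt | grep Temperature | \
    while read -r LINE; do
        TEMP=$(echo "$LINE" | awk '{print $5}' | cut -d. -f1)
        if [ "$TEMP" -ge "$THRESHOLD" ]; then
            echo "$(hostname): $LINE" | mail -s "GPU over ${THRESHOLD}C" me@example.com
        fi
    done

Note it alerts rather than restarts - as I said, an auto-restart loop with dead fans just wastes energy.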
Finally - yeah, you've sorted the AMD download in your script.
But as with when the questionnaire appeared, if AMD change things again the script will need changing - and you don't want constant noob questions, do ya?
@cengique - all my machines are headless. My initial experiments used a virtual desktop on my Mac with 8 terminal windows open (hell, if you've got a couple of 30" displays, flaunt it eh?), watching the output scroll by. But that's not easily accessible on the move - my iPhone can run ssh and screen, so *yes* I can connect to 8 separate machines and watch the output of 4 'screens' each - but it costs a LOT more in mobile data. Letting each miner report a small XML file to my Mac server, and having the Mac turn it into a single webpage table, is more efficient and lets my iPhone see the entire farm on one page (though the number of miners means I have to scroll around now).
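The reporting side is tiny - each box runs a cron job along these lines (a sketch: the XML layout, the 'macserver' hostname and the destination path are illustrative, not my actual setup):

    #!/bin/bash
    # report-status.sh - cron'd on each miner; snapshots GPU temps into a
    # small XML file and pushes it to the Mac that renders the farm page
    # (sketch - layout, hostname and destination path are placeholders)
    HOST=$(hostname -s)
    OUT=/tmp/${HOST}.xml
    export DISPLAY=:0
    {
        echo "<miner name=\"$HOST\" time=\"$(date +%s)\">"
        aticonfig --adapter=all --odgt | grep Temperature | \
            awk '{print "  <gpu temp=\"" $5 "\"/>"}'
        echo "</miner>"
    } > "$OUT"
    scp -q "$OUT" macserver:/Library/WebServer/Documents/farm/

The Mac end then just globs the farm/*.xml files and spits out one HTML table.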
The next stage is to add links to the webpage that let me force-restart either an individual GPU (if the connection to the pool has been lost) or the entire box (if one of the GPUs has locked up - in general the only way out is a reboot). That'd be neat. From all my Apple devices (iPhone, iPad - I was the first in the UK to hack both, IIRC) I can do most standard terminal-based unix stuff, but as anyone who uses the unix CLI knows, the shell makes heavy use of symbol characters, and iWhatever text input isn't optimised for them. However easy ssh access is, it's bloody slow to do anything meaningful at a bash prompt when every non-alpha character takes extra taps. So hacking together a simple web interface seemed the obvious thing to do.
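In case anyone's curious what those links would hit, a bare-bones CGI could look like this (purely hypothetical - I haven't deployed it, for the reasons below; the gpu0-gpu3 screen names match the launch sketch earlier, and the reboot assumes a NOPASSWD sudoers entry for the web user):

    #!/bin/bash
    # restart.cgi - kill one GPU's phoenix screen, or reboot the box
    # (hypothetical sketch - do NOT expose this without auth in front;
    #  'sudo reboot' needs a NOPASSWD sudoers rule for the web user)
    echo "Content-type: text/plain"
    echo ""
    case "$QUERY_STRING" in
        gpu[0-3])
            screen -S "$QUERY_STRING" -X quit   # a watchdog relaunches it
            echo "killed $QUERY_STRING"
            ;;
        reboot)
            echo "rebooting $(hostname)"
            sudo /sbin/reboot
            ;;
        *)
            echo "unknown action"
            ;;
    esac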
The issue for me will be maintaining hacker-grade security. As soon as anything *inside* fortress catfish can be rebooted by hitting a web link from out on the Internet... I'm taking big risks. Which is why I haven't implemented anything like this *yet*. A *long* time ago I was on the 'other side', and I know damn well all about getting into random people's systems 'for teh lulz' - not that we used the phrase 'lulz' back then, of course. Nothing to be proud of, and I was always more cat-like when padding around other people's systems, leaving as little footprint as possible. So a publicly-accessible webpage with buttons saying 'reboot me' would obviously be like a £50 note lying on the floor to these types... and I firmly believe that keeping people *out* of your network is more important than trying to secure each and every box inside. If someone gets inside, then mapping the topology and eventually finding a vulnerability is guaranteed - assuming it's a hacker and not a skiddie. I'd rather keep people out - hell, I've got network cameras providing security functions that I sure as hell *don't* want other people controlling. You can learn a LOT from looking around people's homes!
Erm, anyway. If you know any way to recover a Linux mining rig without rebooting when one of the GPUs has crashed (i.e. an ASIC hang reported in dmesg) then I'm ALL EARS - it'd make things VERY easy...
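Until someone does know a way, the obvious stop-gap is a cron'd watchdog that spots the hang and at least reboots cleanly rather than leaving the box wedged - a sketch (the exact dmesg wording varies between fglrx versions, so the match string is an assumption):

    #!/bin/bash
    # asic-watchdog.sh - cron'd every minute; if the driver reports an
    # ASIC hang there's no known in-place recovery, so sync and reboot
    # (sketch - grep string is an assumption; check your own dmesg text)
    if dmesg | grep -qi 'ASIC hang'; then
        logger "ASIC hang detected on $(hostname) - rebooting"
        sync
        /sbin/reboot
    fi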
Whilst I'm still job-hunting I can devote a LOT of time to this.
Got to get it finished soon, though, because I won't have the time to spare once I'm working flat out... this turned up at just the right time.