
Topic: [1200 TH] EMC: 0 Fee DGM. Anonymous PPS. US & EU servers. No Registration! - page 175. (Read 499592 times)

sr. member
Activity: 266
Merit: 254
I'm also considering switching to Poolserverj when that is a bit more stable.  Although if it ain't broke, I shouldn't fix it.  But I always do.

Hmmm... I'd refer you to BurningToad's comments re: poolserverj and arsbitcoin.com. I'm an old Mac Ach forum guy (as in back in the G4 days) so went with arsbitcoin after MMC closed their doors due to terrorist threats (regardless of arsbitcoin not being formally associated with ars technica, etc.) and it appears that poolserverj requires a fair amount of keeping on top of.

Let's put it this way - I wish them the best of fortune (because BT seems to be dealing with external issues that are of a higher priority) - but my miners were running at an average of 2-3 GH/s - and that's *with* me watching like a hawk and restarting miners when necessary. BT admitted that poolserverj and other software locked up after a while (memory leak sounds normal) and needed restarting, which he didn't always get informed of.

Fair enough if external issues are the main priority. But your system is clearly stable as a rock, and if the flakiness of arsbitcoin was down to poolserverj... I'd rather you not bother unless you are satisfied that it meets your currently (obviously very high) standards.

Catfish, I didn't follow the Ars issues closely because I was flat out building merged mining support at the time.  But I would like to set a few things straight.

Firstly, BT changed to PSJ for precisely those reasons of instability that Ars was experiencing with pushpool.  Whether it improved greatly or not I'm not sure; I know the stale rate dropped dramatically.  However, the miner load compared to the server spec was pretty close to the limit for both PSJ and pushpool at the time.  I'll point out that BTC Guild was running double the hashrate on a single server using the same version of PSJ and had no stability or memory-leak issues.  Admittedly it was a higher hardware spec.

I will certainly concede that the merged mining version of PSJ was wildly unstable for quite a period, though it was clearly tagged as pre-alpha.  That was partly due to the fact I was offered a huge bounty for getting it out by a certain deadline, so I did things in ways I would have preferred not to.  The new WorkMaker edition has only been released for a few days and, despite the expected teething troubles, is working in production at high loads with massively reduced resource usage.  It is now several times faster than pushpool (and the 0.3.0 version of PSJ) on every metric and more than an order of magnitude faster on some.  Not to mention having some unique features specifically aimed at stability, e.g. being able to continue operating seamlessly if the database goes down.

My point is, don't write it off due to the Ars experience.  The evidence suggests the problems were not PSJ-specific, and unfortunately, being so flat out with merged mining development, I was not able to get as involved in working out the Ars issues as I would have liked.  I suspect if BT tried the new version and had the time to iron it out to a working config, he'd find that a lot of his problems, which are probably related to near-limit CPU usage, would go away.  If stability is the prime directive of EMC then it is probably premature to look at the PSJ WorkMaker edition due to its very recent release and some known bugs.

All of the issues reported so far with the WorkMaker edition are typical 'dev forgot to account for this weird combo of config settings' problems; none of them are stability related.  In fact PSJ has all sorts of code in it to deal with unexpected circumstances, e.g. an aux daemon going down means the pool will revert to non-merged-mining mode.  Bad config combos will usually result in the server refusing to start with a warning message rather than allowing it to continue into a potential fail scenario.  Failures in components are usually detected and trigger a restart if they are internal, or a failback mode if they are external.  Of course it will go through periods of less-than-ideal stability during heavy innovation, but overall it is designed and geared toward achieving stability and has a good track record of that.
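The aux-daemon failback described above can be sketched roughly like this. All names here are illustrative placeholders, not PoolServerJ's actual API: the point is only that a dead aux daemon degrades the pool to non-merged mode instead of taking it down.

```python
# Sketch of the failback idea: if the auxiliary daemon stops
# responding, keep serving plain (non-merged) work rather than
# failing outright. Function names are hypothetical.

def get_work(primary, aux):
    """Return (work, merged): merged says whether aux data was included."""
    block = primary()          # primary daemon must be up
    try:
        aux_hash = aux()       # aux daemon may be down
    except ConnectionError:
        return block, False    # failback: non-merged-mining mode
    return (block, aux_hash), True

# Simulated daemons for demonstration:
def primary_ok():
    return "blocktemplate"

def aux_down():
    raise ConnectionError("aux daemon unreachable")

work, merged = get_work(primary_ok, aux_down)
print(merged)  # False: the pool reverted to non-merged mining
```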
member
Activity: 78
Merit: 10
At the moment EMC is well over 200 GH/s. I'd never seen it break 200 before, and I hope it lasts / keeps going up. Would be nice to have a little less variance sometimes.
brand new
Activity: 0
Merit: 250
Shit - was the miner that got the big block *me* Huh??

My stats are telling me that I've found a block for EMC. Haven't found a block *that* quickly after moving pools before...
legendary
Activity: 1260
Merit: 1000
Fixed block 121.  Not sure why it thought the block started on the 3rd, but I think it may have had to do with the fact that was the day Meni found the bug and I probably left a stale share in the DB fixin' things. 

Glad to see you MMC folks!  If you (or anyone) have any suggestions or requests, I'm all ears.  I've got a long list of stuff, but I'm always up for more.  BF3 is cuttin' into the dev time a little bit, though Smiley

brand new
Activity: 0
Merit: 250
finally, that was brutal.
what was I saying about 'run of easy blocks after the initial 'catfish arrived' evil block' ? Cheesy

Mwuahahahahahaaaa.

That should jinx it for a while Wink

After all, we all believe it's a stochastic model, right? Wink

Time to finish off this pile of 5850s on the floor and get ALL my potential miners mining. Just need to find something to use as hard drive for the last rig... and need some more wood.
member
Activity: 78
Merit: 10
Block 121 says it took 3 days 17 hours and 47 minutes? Shouldn't it be 6 hours 20 minutes and 24 seconds or so?
full member
Activity: 226
Merit: 100
Hi peeps. Looks like there are more and more MMC refugees coming over; great to see that. Been with this pool since MMC went down, and I cannot complain: stability, features, and communication are all excellent.
sr. member
Activity: 313
Merit: 251
Third score
Another MMC refugee here. Just like catfish, I really like medium-size pools. And PPS is boring (no names called) !!!!

Payout model looks very stable and fair. Still trying to work out the math, though. I still mine GG from time to time, so I'm not a 24x7 miner, but the model seems to be very fair to such on/off behaviour. No pool-hopping advantage, and no quick discounting of shares (like Slush, as an example). And the longer I mine on a block, the more my eventual share of the reward slowly increases. All this helps to give a good overall feeling and helps the emotions through the long rounds.

Very hot web design and theme. Really looks modern and "techy". To me at least it's much closer aesthetically to the term "crypto currency mining" than most other pools Smiley.

I am getting an extremely low percent of rejected shares (eu server as primary, us server as 1st backup).
Thanks Inaba for your great work. I just increased my Donation %  Smiley


brand new
Activity: 0
Merit: 250
@Inaba - yup, mine grew organically from my first efforts to get phoenix up and running on my Mac Pro (which got me very interested in Bitcoin) and an attempt at relearning basic woodworking skills (you've probably seen the Catfish Mining Shelf, which *appears* laughably primitive but only works due to a 5-page thread involving fluid dynamics with a nice chap from MMC).

The Mac Pro still runs the hacked-together hand coded stuff, but your pool has been rock solid, and OS X is reliable too (regardless of being my main workstation, and handling a couple of 30" Apple screens plus 6 virtual desktops that I flip between regularly - remember OS X uses the GPU for all its eye candy - including the notorious Exposé, which is a slideshow on my machine with 50+ windows open on each 2560x1600 screen) so I haven't bothered automating it. The Mac just works in general, and as a main desktop box, monitoring it isn't much of a hardship.

Regarding the other miners - well, for cost reasons I like to keep a modular miner structure - with *cheap* 4-slot logic boards that don't stress the PCIe power distribution systems too much (I'm pretty sure now that the issues most of us have with >4 GPUs per logic board, connected using PCIe extenders, come down to the logic boards not being designed to supply meaningful tens of watts each to more than 4 of the PCIe slots themselves). So one Catfish Mining Shelf contains 3 Linux machines, and 12 GPUs, hence 12 instances of phoenix.

The only reason I use phoenix is that it was the first available code to link to OS X OpenCL libraries rather than ATI's APP SDK. Then I found that phatk and its modifications were *significantly* faster on my overclocked cards under Linux. It makes quite a difference to the bottom line - I simply expect a 5770 to run 220 MH/s, a 5830 290 MH/s, a 5850 390 MH/s and a 6950 around 400 MH/s. That's quite a jump from the 'standard' expectations, so I was reluctant to give it up! And as you know, once you get a nice stable software environment, you tend to keep the core architecture as it is, and just fiddle around the edges to make whatever the original hacked-together infrastructure was actually look elegant and 'properly designed' Wink

Of course, the proper approach is to start with the architecture but I gave IT tech architecture and project management up years ago Smiley When you've got a few identical boxes (maybe with differing types of card), and the project is 'hobby' level, then a lot of free time can be wasted on OS installs and juggling unreliable software combinations. So I'm trying to get a script that does it ALL - regardless of whether you've got 4870s to 6990s, multiple GPUs, whatever, and can be kicked off after an install that *also leaves a usable auxiliary Linux box* for whatever you may also want to use.

Dedicated miners are all very well and good but a total investment in hardware for one job. I admit that I *do* underclock and cripple my CPUs - they're mostly Sandybridge Intel CPUs that *could* give me a decent distributed CPU horsepower if I needed - but I've still got a 'grid' of reasonably standard unix machines that I can delegate work to, when required. Linuxcoin isn't ideal for me since these boxes live on my network and I have no requirement for 'non-persistent' systems, plus once I've tailored each box to be able to do most unixy jobs, I want to be able to set each one up very quickly.

As to your questions regarding adding stats to the EMC webpages - it'd be hard, because the critical things I'm interested in are GPU health, so I want to see the temperature readings, the clocks, and the status messages from phoenix etc. I suppose I could rely on *your* MH/s figures, and forget the status messages, but GPU temperature is critical, especially when you've got kooky wooden air-cooled rigs like me. If the extractor fans were to fail, 12 GPUs would spike up over 90˚C within a minute... I want to *know* about that!!! Having the system spot it itself, and restart, would just result in a perpetually rebooting box and wasted energy. I don't know how your webcode could report individual worker GPU temperatures without distributing OS-specific code to each miner in your pool... sounds like a lot of work for you and a lot of hassle.

Finally - yeah you've sorted the AMD download in your script Smiley But like when the questionnaire appeared, if AMD change things again, the script will need changing - and you don't want constant noob questions do ya? Wink

@cengique - all my machines are headless. My initial experiments used a virtual desktop on my Mac and 8 terminal windows open (hell, if you've got a couple of 30" displays, flaunt it eh?) watching the output scroll by. But it's not easily accessible on the move - my iPhone can run ssh and screen, so *yes* I can connect to 8 separate machines and watch the output of 4 'screens' each - but it costs a LOT more in mobile data access. Letting each miner report a small XML file to my Mac server and the Mac turn it into a single webpage table - that's more efficient and allows my iPhone to see the entire farm in one page (though the number of miners means I have to scroll around now).
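The per-miner XML-to-webpage scheme described above could be sketched like this. The tag names and fields (`name`, `mhash`, `temp`) are assumptions for illustration, not catfish's actual report format:

```python
# Minimal sketch: each rig drops a small XML status file; the web
# server folds them into one HTML table the whole farm can be read
# from on one page. The XML schema here is invented.
import xml.etree.ElementTree as ET

def miner_row(xml_text):
    """Turn one miner's XML report into an HTML table row."""
    m = ET.fromstring(xml_text)
    cells = (m.findtext("name"), m.findtext("mhash"), m.findtext("temp"))
    return "<tr>" + "".join(f"<td>{c}</td>" for c in cells) + "</tr>"

def farm_page(reports):
    """Consolidate all miners' rows into a single table."""
    rows = "\n".join(miner_row(r) for r in reports)
    return ("<table><tr><th>Rig</th><th>MH/s</th><th>Temp</th></tr>\n"
            + rows + "\n</table>")

report = "<miner><name>rig1-gpu0</name><mhash>390</mhash><temp>78</temp></miner>"
print(farm_page([report]))
```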

The next stage is to add links to the webpage that allow me to force-restart either an individual GPU (if the connection to the pool has been lost) or the entire box (if one of the GPUs has locked up - in general the only way out is a reboot). That'd be neat. From all my Apple devices (iPhone, iPad - I was the first in the UK to hack both IIRC) I can do most standard terminal-based unix stuff, but as anyone who uses the unix CLI knows, the shell makes heavy use of symbol characters. And iWhatever text input isn't optimised for symbol input. It's bloody slow to do anything meaningful at the bash prompt on my iWhatevers, however easy it is, it takes time to input the non-alpha characters. So hacking together a simple web interface seemed the most obvious thing to do.

The issue for me will be maintaining hacker-grade security. As soon as anything *inside* fortress catfish can be rebooted by hitting a web link from outside on the Internet... I'm taking big risks. Which is why I haven't implemented anything like this *yet*. A *long* time ago, I was on the 'other side' and I know damn well all about getting into random peoples' systems 'for teh lulz' - not that we used the phrase 'lulz' back then, of course. Not anything to be proud of and I was always more cat-like when padding around other peoples' systems, making as little footprint as possible. So a publicly-accessible webpage with buttons saying 'reboot me' would obviously be like a £50 note lying on the floor for these types... and I firmly believe that keeping people *out* of your network is more important than trying to secure each and every box inside. If someone gets inside, then mapping the topology and eventually finding a vulnerability is guaranteed assuming it's a hacker and not a skiddie. I'd rather keep people out - hell, I've got network cameras providing security functions that I sure as hell *don't* want other people controlling. You can learn a LOT from looking around peoples' homes!

Erm, anyway. If you know any way to recover a Linux mining rig without rebooting when one of the GPUs has crashed (i.e. ASIC hang reported in dmesg) then I'm ALL EARS - it'd make things VERY easy...
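Short of a fix, the "ASIC hang in dmesg" condition can at least be detected automatically. A rough sketch, assuming the hang message contains the literal text "ASIC hang" (the exact kernel log format varies by driver version, so the pattern is an assumption):

```python
# Scan dmesg output for the ASIC-hang message and report which
# lines (and hence which PCI device) are affected. The sample text
# below is invented but mimics typical radeon driver log lines.
import re

def find_hung_gpus(dmesg_text):
    """Return the dmesg lines that mention an ASIC hang."""
    return [line for line in dmesg_text.splitlines()
            if re.search(r"ASIC hang", line, re.IGNORECASE)]

sample = (
    "[12345.6] radeon 0000:01:00.0: GPU lockup\n"
    "[12345.7] radeon 0000:01:00.0: ASIC hang detected\n"
)
print(find_hung_gpus(sample))
```

In practice this would be fed from `subprocess.run(["dmesg"], ...)` on each rig and used to decide whether a reboot (or alert) is warranted.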

Whilst I'm still job-hunting, I can devote a LOT of time to this Smiley Got to get it finished soon because I won't have the time to spare when I'm working flat out... this turned up at just the right time Smiley
member
Activity: 64
Merit: 10
Time to finish off this pile of 5850s on the floor and get ALL my potential miners mining. Just need to find something to use as hard drive for the last rig... and need some more wood.
Tell me about it.. I still have two 5850s lying on the floor, too. To get them to work, I have to replace the capacitors on one motherboard  Undecided

You need wood? You wanna go out chopping some?  Tongue
sr. member
Activity: 270
Merit: 250
finally, that was brutal.
member
Activity: 100
Merit: 10


Why not use linuxcoin? It's based on a minimal Debian system, and it is really minimal.


linuxcoin won't run on a 32bit system.  that's why i can't use it on 'little rigs'.

it does work fine on 64bit capable systems.

member
Activity: 64
Merit: 10
[...]I've trialled the latest (AFAIK) phoenix - 1.62 - and it works but doesn't offer any performance improvements to me, nor does it change the crazy monster-stdout verbosity (I redirect phoenix output to a file and then use that to extract performance statistics, after a week it's over a gigabyte per instance), so I haven't bothered updating globally.

I use screen to keep a tab on my miners, so no need to save them to files.

It won't be for everyone, but after installing my miners on full Ubuntu Desktop installs with all the patches, I was wondering why the hell I needed compiz, apparmor, and a load of fancy GUI eye candy (that *does* affect the 'first' processing GPU). I'm henceforth working on trying to get a happy, compatible miner build that starts with the Ubuntu Minimal installation. I've got a couple of GH/s lying idle whilst I faff about - hashing power that should be working for the pool - so I'm on the case...

Why not use linuxcoin? It's based on a minimal Debian system, and it is really minimal. I do understand that you already made a lot of scripts that work and you just want to share them. I do feel the same way, now that I have things running quite stable. However, like Inaba, I don't have automated startups and I like to watch my rigs manually. Not sure if many people would like that. Too much automation and things go out of hand. Although, it *is* on my to-do list.

Where is Inaba's guide by the way?
legendary
Activity: 1260
Merit: 1000
Catfish:

Very interesting about Poolserverj.  I will keep an eye on it.  I would need to do a lot of work on the internals to make it compatible with DGS and the way I have things running, so it's not something that would happen in the next week or two anyway. 

As far as "old" configs go: I use poclbm for all my miners, believe it or not.  None of this fancy phoenix or cgminer stuff!  None of my stuff starts up automatically, either... heh.  It's something that's been on my to-do list since April.  I do like that you have stats output in Phoenix, though... but I try to incorporate as much of that kind of stuff as I want to see into the pool itself.  What do your pages show that I might be able to add to the pool?

I'd be interested in your minimal install as well.  I almost tried to use the headless bitcoin miner guide someone else had posted, but there was a ton of extra, superfluous stuff, and I didn't feel it offered a whole lot more than just a straight install.  The last part of the guide did explain how to basically disable the desktop, though, which accomplishes what you're talking about.  I may actually incorporate that into the guide.

AMD does have that questionnaire, but if you look at the actual download link once it starts downloading, you can script the download from AMD.  In fact, don't I already have it scripted from the new guide?
legendary
Activity: 1260
Merit: 1000
The stats show your Proportional Differential on each block... basically what you would have received under a proportional system as opposed to the DGS.  It's listed in % and not absolute numbers, so just do the math and you'll have your BTC answer. 

Here's a last few blocks from my stats:

118   151835   2011-11-04 09:53:47   17:47:27   1822456   +51.43%   1203462   Valid   191909   5.46598395 (+3.67%)    0.00000000
117   151729   2011-11-03 16:06:20   19:15:37   2113683   +75.63%   1203462   Valid   219798   4.58462896 (-13.41%)   0.00000000
116   151627   2011-11-02 20:50:43   05:04:13    541474   -55.01%   1203462   Valid    54190   5.66392898 (+11.65%)   0.00000000
115   151600   2011-11-02 15:46:29   21:21:47   1948138   +61.88%   1203462   Valid   241323   6.08546998 (-1.78%)    0.00000000
114   151460   2011-11-01 18:24:42   02:32:38    269230   -77.63%   1203462   Valid    26681   5.58464397 (+11.27%)   0.00000000

Under the next-to-last column, the number in % is the prop differential.  I have miners go up and down on occasion (either I'm working on something, playing a game, etc.), so sometimes I get a negative differential.  However, for block 117, Meni found a bug in how stats are transferred from one block to the next, and that block got recalculated without the "phantom shares" that were following some people, so it's showing lower than actual.  Going back to fix it would result in some people losing BTC, so I opted to leave it in; all blocks other than 117 are accurate.
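"Just do the math" can be spelled out. Assuming the listed percentage is the DGS payout relative to what a proportional pool would have paid (an assumption about the stats page's convention), the BTC figures fall out like this:

```python
# Convert the stats page's percentage differential into BTC,
# assuming "+3.67%" means the DGS payout was 3.67% above the
# hypothetical proportional payout for that block.

def prop_payout(dgs_payout_btc, differential_pct):
    """Back out the hypothetical proportional payout in BTC."""
    return dgs_payout_btc / (1 + differential_pct / 100.0)

# Block 118 from the table above: 5.46598395 BTC at +3.67%
prop = prop_payout(5.46598395, 3.67)
print(round(prop, 8))                # what proportional would have paid
print(round(5.46598395 - prop, 8))   # BTC gained versus proportional
```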


I'd be happy to provide internal stats for whatever is needed (unless it compromised security or privacy).

donator
Activity: 2058
Merit: 1054
damn damn damn these rounds from hell!

Does anyone have a feel for how double geometric scoring affects round size based variance?

Out of interest, have you compared your average payout/share to a theoretical prop payout/share for the same round?

If anyone's not sure what I mean then just pick a previous long round and divide payout by your submitted shares. Then divide the Difficulty for that round by the total shares for that round (theoretical proportional pool payout). Compare and contrast. If you can be bothered I'd like to see the results.

You might need to compare a few sequential rounds rather than just one round. To do this, divide your actual total payout by the total shares you submitted. Then (assuming Difficulty didn't change between the rounds you are looking at) compare with (Difficulty*number of rounds)/(total shares in rounds).
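The recipe above, written out. Under a pure proportional pool the payout per share is (block reward / total round shares), which moves in lockstep with the Difficulty-over-shares "luck" figure; comparing your actual payout per share against it across rounds shows how much DGS has smoothed the round-length variance. All the numbers below are invented for illustration:

```python
# organofcorti's comparison, as a sketch with made-up figures.

def my_rate(payout_btc, my_shares):
    """Actual BTC received per share you submitted."""
    return payout_btc / my_shares

def prop_rate(total_shares, reward_btc=50.0):
    """What a proportional pool would have paid per share this round."""
    return reward_btc / total_shares

def luck(difficulty, total_shares):
    """The Difficulty / total-shares figure from the quote:
    > 1 is a short (lucky) round, < 1 a long one."""
    return difficulty / total_shares

# An invented long round: 1.8M shares against difficulty 1,203,462,
# with 20,000 of those shares yours, paid 0.52 BTC under DGS.
print(luck(1_203_462, 1_800_000))                     # < 1: unlucky round
print(my_rate(0.52, 20_000), prop_rate(1_800_000))    # compare the two rates
```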



It depends on the value of the parameter o, which every double geometric method pool chooses for its payout model:

Quote
o - Cross-round leakage. Increasing o reduces participants' share-based variance but increases maturity time. When o=0 this becomes the geometric method. When o->1 this becomes a variant of PPLNS, with exponential decay instead of 0-1 cutoff (note that "exponential" does not mean "rapid", the decay can be chosen to be slow). For o=1, c must be 0 and r (defined below) can be chosen freely instead of being given by a formula.
https://bitcointalksearch.org/topic/m.481864
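A toy illustration of the two limits the quote mentions. This is a deliberate simplification of Meni's actual DGS scoring, not the real formula: it only shows that a share's residual weight after k round boundaries scales like o**k, so o=0 gives the geometric method (nothing crosses a round) and o near 1 gives a slow, PPLNS-like decay.

```python
# Toy model of cross-round leakage: NOT the real DGS math, just the
# decay behaviour of the parameter o described above.

def residual_weight(o, rounds_ago):
    """Fraction of a share's score surviving 'rounds_ago' round boundaries."""
    return o ** rounds_ago

print(residual_weight(0.0, 1))   # geometric method: nothing carries over
print(residual_weight(0.99, 1))  # o near 1: almost everything carries over
print(residual_weight(0.5, 3))   # o = 0.5, three rounds back: 0.125
```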

Maybe Inaba will make the parameter o (cross-round leakage) public.

If you are interested how round size based variance goes with o = 0.5 send me a pm!

Thanks urstroyer - but I was actually after what miners' experiences had been in the short term. In appendix D of Analysis of bitcoin pooled mining reward systems Meni shows how to calculate the variance and maturity time in the DGS payout system, but that doesn't give me the same insight as actual results do.

So I was interested in seeing what recent historical variance results had been after reading a post from someone complaining about a long round - had anyone seen a reduction in variance compared to a proportional payout?
EMC uses c=0.01, o=0.99 which means there is very little reduction in pool-based variance. In AoBPMRS I've only done Geometric so far, not DGS, and anyway I've only calculated share-based variance - deriving pool-based variance is much harder.
donator
Activity: 2058
Merit: 1007
Poor impulse control.
damn damn damn these rounds from hell!

Does anyone have a feel for how double geometric scoring affects round size based variance?

Out of interest, have you compared your average payout/share to a theoretical prop payout/share for the same round?

If anyone's not sure what I mean then just pick a previous long round and divide payout by your submitted shares. Then divide the Difficulty for that round by the total shares for that round (theoretical proportional pool payout). Compare and contrast. If you can be bothered I'd like to see the results.

You might need to compare a few sequential rounds rather than just one round. To do this, divide your actual total payout by the total shares you submitted. Then (assuming Difficulty didn't change between the rounds you are looking at) compare with (Difficulty*number of rounds)/(total shares in rounds).



It depends on the value of the parameter o, which every double geometric method pool chooses for its payout model:

Quote
o - Cross-round leakage. Increasing o reduces participants' share-based variance but increases maturity time. When o=0 this becomes the geometric method. When o->1 this becomes a variant of PPLNS, with exponential decay instead of 0-1 cutoff (note that "exponential" does not mean "rapid", the decay can be chosen to be slow). For o=1, c must be 0 and r (defined below) can be chosen freely instead of being given by a formula.
https://bitcointalksearch.org/topic/m.481864

Maybe Inaba will make the parameter o (cross-round leakage) public.

If you are interested how round size based variance goes with o = 0.5 send me a pm!

Thanks urstroyer - but I was actually after what miners' experiences had been in the short term. In appendix D of Analysis of bitcoin pooled mining reward systems Meni shows how to calculate the variance and maturity time in the DGS payout system, but that doesn't give me the same insight as actual results do.

So I was interested in seeing what recent historical variance results had been after reading a post from someone complaining about a long round - had anyone seen a reduction in variance compared to a proportional payout?
full member
Activity: 142
Merit: 100
damn damn damn these rounds from hell!

Does anyone have a feel for how double geometric scoring affects round size based variance?

Out of interest, have you compared your average payout/share to a theoretical prop payout/share for the same round?

If anyone's not sure what I mean then just pick a previous long round and divide payout by your submitted shares. Then divide the Difficulty for that round by the total shares for that round (theoretical proportional pool payout). Compare and contrast. If you can be bothered I'd like to see the results.

You might need to compare a few sequential rounds rather than just one round. To do this, divide your actual total payout by the total shares you submitted. Then (assuming Difficulty didn't change between the rounds you are looking at) compare with (Difficulty*number of rounds)/(total shares in rounds).



It depends on the value of the parameter o, which every double geometric method pool chooses for its payout model:

Quote
o - Cross-round leakage. Increasing o reduces participants' share-based variance but increases maturity time. When o=0 this becomes the geometric method. When o->1 this becomes a variant of PPLNS, with exponential decay instead of 0-1 cutoff (note that "exponential" does not mean "rapid", the decay can be chosen to be slow). For o=1, c must be 0 and r (defined below) can be chosen freely instead of being given by a formula.
https://bitcointalksearch.org/topic/m.481864

Maybe Inaba will make the parameter o (cross-round leakage) public.

If you are interested how round size based variance goes with o = 0.5 send me a pm!
sr. member
Activity: 270
Merit: 250
I thought the stats already told us that?
brand new
Activity: 0
Merit: 250
Thanks catfish.  There's a lot that can be done with the guide for mining in particular.  I just put together the guide for the easiest/quickest/simplest of methods to get it up and running, but as for optimization, it's definitely not a good guide for that.

As far as python-jsonrpc - I don't think you need it anymore and if I recall, it's not in the guide.  Only pyopencl is required now.

Anyway, the pool is pretty stable unless I am poking at it with a stick, then it gets uppity... but I haven't had to poke any sticks into the pool in a while.  I do need to make some adjustments to the internals of the getwork server now that we are no longer doing proportional... it's still got the proportional code running but doing nothing. 

I'm also considering switching to Poolserverj when that is a bit more stable.  Although if it ain't broke, I shouldn't fix it.  But I always do.

Hmmm... I'd refer you to BurningToad's comments re: poolserverj and arsbitcoin.com. I'm an old Mac Ach forum guy (as in back in the G4 days) so went with arsbitcoin after MMC closed their doors due to terrorist threats (regardless of arsbitcoin not being formally associated with ars technica, etc.) and it appears that poolserverj requires a fair amount of keeping on top of.

Let's put it this way - I wish them the best of fortune (because BT seems to be dealing with external issues that are of a higher priority) - but my miners were running at an average of 2-3 GH/s - and that's *with* me watching like a hawk and restarting miners when necessary. BT admitted that poolserverj and other software locked up after a while (memory leak sounds normal) and needed restarting, which he didn't always get informed of.

Fair enough if external issues are the main priority. But your system is clearly stable as a rock, and if the flakiness of arsbitcoin was down to poolserverj... I'd rather you not bother unless you are satisfied that it meets your currently (obviously very high) standards.

To me - I don't care about the actual technology / language used to run the pool server - only that my electricity bill represents accepted work from the pool, and that my miners aren't sitting there 20% of the time burning kilowatts but with no work from the pool.

I'm sure that you won't deliver a half-baked solution though - doesn't look like your style. Let us all know if old configs like mine (11.04 ubuntu natty minimal, 11.6 catalyst, 2.4 APP SDK, 1.50 phoenix, various phatk kernels Wink and a load of really shoddy catfish scripting in ruby and bash) are potentially going to be unsupported though. I've trialled the latest (AFAIK) phoenix - 1.62 - and it works but doesn't offer any performance improvements to me, nor does it change the crazy monster-stdout verbosity (I redirect phoenix output to a file and then use that to extract performance statistics, after a week it's over a gigabyte per instance), so I haven't bothered updating globally.


OTOH - my aim with the Linux Minimal build isn't to try to do any better than you - your guide works and is perfect for anyone who has 10 minutes or more shell experience. Mine is just an attempt at giving something back - all my homebrew code that starts up and stops arbitrary numbers of phoenix miners on each rig (mine tend to have 4 GPUs per logic board, but there's no limit other than ATI's), and periodically updates an html page with formatted data for each instance. Consolidating the instances to one webpage (as I do) is left to the user - it's tricky on OS X Server due to various idiosyncrasies of the apache version used, but really it's just a question of linking the HTML from each mining rig into one webpage. It works with SSI but obviously you need to be careful with security - it's simple data though with no scripting other than 'include' for each miner instance's table rows.

It won't be for everyone, but after installing my miners on full Ubuntu Desktop installs with all the patches, I was wondering why the hell I needed compiz, apparmor, and a load of fancy GUI eye candy (that *does* affect the 'first' processing GPU). I'm henceforth working on trying to get a happy, compatible miner build that starts with the Ubuntu Minimal installation. I've got a couple of GH/s lying idle whilst I faff about - hashing power that should be working for the pool - so I'm on the case...

All of my scripts are commented to hell (as has always been my style) and your guides are cited and credited, but I'll run my proposed release past you beforehand. After all, they're all effectively the same, it comes down to whether you download source and compile, use apt packages, downloaded deb packages, and/or tarballs that used to be hosted somewhere but aren't anymore. I will probably get the security-paranoid jump down my throat for potentially distributing malware... so I'll probably end up having to provide MD5 hashes for the recommended binary / deb downloads. Now AMD require a questionnaire to be answered in order to download a non-current (e.g. 11.6, which works) proprietary driver, it's not a question of scripting wget any more. And expecting new Linux users to play AMD's silly games is too much - though I'm sure that if I host the entire dependency file group (including ATI drivers) then I'll get aggro for distributing copyrighted material... sigh...
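Publishing those MD5 hashes could work like this: one hash per recommended download, with the install script refusing any file that doesn't match. (SHA-256 would be the stronger modern choice, but the post proposes MD5.) The filenames and expected hashes would be placeholders until real ones are published:

```python
# Verify a downloaded binary/deb against a published MD5 hash.
import hashlib, os, tempfile

def file_md5(path, chunk=1 << 16):
    """Stream the file through MD5 so large driver downloads
    don't need to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(path, expected_hex):
    """True only if the file matches the published hash."""
    return file_md5(path) == expected_hex

# Self-check against the well-known MD5 of b"hello":
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello"); tmp.close()
print(verify(tmp.name, "5d41402abc4b2a76b9719d911017c592"))  # True
os.unlink(tmp.name)
```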