Thank you for your response and for sharing what you have learned!
I was talking about a CCD200/400, yes. I'm more the software guy, so I figured creating my own board, which I really wanted to do, would not make sense at all. Too much to learn from scratch and not enough time to do so. :|
I've followed your thread and posts over the last months and read that you were trying to create such an autotune tool on your own, but at a lower level than what I'm attempting right now. As far as I remember, you wanted to tune the chips individually, which would get the most out of them. I cannot do that at all, so I'm trying to tune the individual boards inside the CCD instead, which should be easier and still better than nothing.
For that I thought about using some existing high-level software to set a clock speed and voltage, let it hash for X minutes, and evaluate the shares/stales/HW errors - and watch the temperature, of course. With that, I'd go through a list of clock speed and voltage combinations to find which one gives the best ratio of shares to errors and power consumption. I already did this manually to some degree, but it is very time consuming, so I wanted to automate it.
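Just to sketch what I mean (more pseudocode than a finished tool; it assumes cgminer runs with --api-listen on the default port 4028 and answers the plain-text "devs" command, and apply_settings.sh is a hypothetical wrapper that restarts cgminer with the new settings):

```c
/*
 * Rough sketch of the sweep loop (NOT a finished tool).
 * Assumptions: cgminer runs locally with --api-listen on the default
 * port 4028 and answers the plain-text "devs" command; the wrapper
 * script apply_settings.sh (hypothetical) restarts cgminer with the
 * given clock and voltage step.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Send one plain-text command to the cgminer API and read the reply. */
static void query_api(const char *cmd, char *buf, size_t len)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET,
                              .sin_port = htons(4028) };

    buf[0] = '\0';
    if (fd < 0)
        return;
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
        write(fd, cmd, strlen(cmd));
        ssize_t n = read(fd, buf, len - 1);
        if (n > 0)
            buf[n] = '\0';
    }
    close(fd);
}

int main(void)
{
    /* Made-up parameter grid - use your hardware's safe ranges. */
    static const int clocks_mhz[] = { 700, 750, 800, 850 };
    static const int volt_steps[] = { 0, 1, 2, 3 };
    static char reply[65536];

    for (size_t c = 0; c < sizeof(clocks_mhz) / sizeof(*clocks_mhz); c++)
        for (size_t v = 0; v < sizeof(volt_steps) / sizeof(*volt_steps); v++) {
            char cmd[128];
            snprintf(cmd, sizeof(cmd), "./apply_settings.sh %d %d",
                     clocks_mhz[c], volt_steps[v]);
            system(cmd);          /* apply settings, restart cgminer   */
            sleep(10 * 60);       /* let it hash for X minutes         */
            query_api("devs", reply, sizeof(reply));
            /* "devs" reports Accepted / Rejected / Hardware Errors;
             * parse those out and log them next to the settings.      */
            printf("clk=%d vstep=%d -> %s\n",
                   clocks_mhz[c], volt_steps[v], reply);
        }
    return 0;
}
```

Since cgminer gets restarted for each combination, the "devs" counters are per-run, which is exactly what I'd want to compare.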
The basic idea was to use cgminer to apply those settings to the CCD modules individually; in my CCD400 there are two of them.
As far as I can see, I can only set one clock and one voltage setting for the whole CCD, but I'd need to set those individually for the two modules, as they behave differently - one generally needs more voltage than the other to avoid producing too many HW errors at the same clock speed. Right now I have to use the conservative settings that let the "bad" module run at an acceptable performance, but that wastes the potential of my second module, which could do much more.
But this is only possible if I can set the voltage and clock speed individually for each board in a CCD using cgminer, which I think is not possible right now? Or maybe it's not even possible by design?
I tried to have a look at the A1* files in git, but I don't understand any of the low-level stuff being done there.
The approach you are describing is similar to what I implemented as a calibration tool. It consists of a dummy cgminer that feeds the chains with reference test vectors and collects per-chip statistics over the complete parameter range. Ideally, the output of such a calibration run would provide you with the optimal settings for each chip for a given power consumption / hashrate target. Unfortunately, those statistics showed the unreliable characteristics I described above.
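Just to illustrate the idea (hypothetical names, not the actual tool), such per-chip statistics can be organized like this:

```c
/*
 * Illustration only - hypothetical names, not the actual calibration
 * tool. One way to organize per-chip statistics over the clock/voltage
 * grid: for each (chip, clock, voltage) cell, count how many of the
 * reference test vectors came back with a correct nonce.
 */
#define MAX_CHIPS  32
#define NUM_CLOCKS 16
#define NUM_VOLTS   8

struct cell_stats {
    unsigned vectors_sent;  /* reference jobs fed to the chip         */
    unsigned nonces_ok;     /* correct responses                      */
    unsigned nonces_bad;    /* wrong or missing responses (HW errors) */
};

static struct cell_stats grid[MAX_CHIPS][NUM_CLOCKS][NUM_VOLTS];

/* After the sweep, pick per chip e.g. the highest clock whose error
 * rate (nonces_bad / vectors_sent) stays below a chosen threshold.   */
```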
As for your high-level approach, there are two components to consider:
a) voltage control is done completely outside of cgminer, via i2c-tools. Accessing the voltage regulator provides the means to damage the chips, so for liability reasons I need to check first if, and under which conditions, I can publish the relevant information.
b) you are right, there is currently no per-board or per-chip configuration support in the A1 driver. This is considered not worth adding for now, since it would lead to a configuration hell (e.g. you don't know which slots are populated in a given product, or how you would assign individual configs to them).
If you are familiar with the cgminer source code, you can help yourself with b) fairly easily by extending the --bitmine-a1-options parser to treat sys_clock not as a global value but as a comma-separated array, providing an individual clock for each board. For this you obviously need to be able to build your own cgminer binary, i.e. replace Bitmine's FW with a full RPi distribution including build tools. With the related how-to posts in this forum this should be easy.
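Just to illustrate the idea (a sketch, not actual driver code; the option's field layout and all names are simplified):

```c
/*
 * Sketch of the suggested parser extension - the field layout of
 * --bitmine-a1-options and all names here are simplified assumptions;
 * check the real option format in the A1 driver source.
 * Idea: accept sys_clock as "800,850" and map value N to board N;
 * a single value keeps the current behavior (same clock for all).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_BOARDS 8

static int parse_sys_clocks(const char *field, int clocks[MAX_BOARDS])
{
    char tmp[128];
    int n = 0;

    strncpy(tmp, field, sizeof(tmp) - 1);
    tmp[sizeof(tmp) - 1] = '\0';

    for (char *tok = strtok(tmp, ","); tok != NULL && n < MAX_BOARDS;
         tok = strtok(NULL, ","))
        clocks[n++] = atoi(tok);

    /* single value given: replicate it for all boards */
    if (n == 1)
        for (int i = 1; i < MAX_BOARDS; i++)
            clocks[i] = clocks[0];

    return n;
}

int main(void)
{
    int clocks[MAX_BOARDS] = { 0 };
    int n = parse_sys_clocks("800,850", clocks);

    printf("%d value(s): board0=%d MHz, board1=%d MHz\n",
           n, clocks[0], clocks[1]);
    return 0;
}
```

The parsed per-board values would then have to be passed down to the respective chain instances at init time instead of the single global sys_clock.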
Good luck.