I'm not arguing that a super-high-density board would be the ultimate in high-density mining. My point is that there's a sweet spot for a distributor between low-volume sales and profit. With this board design they can make a high volume of boards and reduce their per-unit overhead, which means better profit margins. A super-high-density board would allow better density, but would likely have much lower sales volumes (requiring a thicker markup to make back the R&D overhead and such). Also, manufacturing yield is easier to control on smaller boards.
Wrong. These manufacturers are already building 8x and 16x boards at _less_ cost per FPGA. Board manufacturing has the quirk that a populated board effectively costs the same to produce no matter how big it is. In our case, a 16-FPGA board is not going to cost significantly more to manufacture than a 4-FPGA board.
Lastly, customer hardware failures are mitigated better with many small boards versus one large board. (If you blow a couple of FPGAs on your 64x FPGA board, what do you do about it?)
Who said a 64x board? I said 64 FPGAs in 4U. That's four 16x boards in 4U.
My ideas are purely within the confines of the Cairnsmore1 product: how to pack as many of them into a rack as possible, not a hypothetical new board. (Enterpoint has said they may consider additional boards later depending on the success of the Cairnsmore1; for now this is what we have to work with.)
That said, you have a very good point about card height. Looking again at the mechanical drawings, once you account for motherboard thickness, connector height, and the thickness of the raised motherboard plate, the board likely won't fit in a 3U. (It would be damn close though: 3U is 133 mm, the internal space will be less, and this board is 126.4 mm, so in the end it probably won't fit.)
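For what it's worth, here's a rough clearance check in Python. Only the 133 mm (3 x 44.45 mm) rack height and the 126.4 mm board height come from the post and the drawings; the PCB, connector, and standoff figures are purely my assumed typical values, not Enterpoint's specs:

# Rough 3U clearance check for a vertically mounted Cairnsmore1 card.
RACK_UNIT_MM = 44.45
usable_3u = 3 * RACK_UNIT_MM - 5.0    # assume ~5 mm lost to case lid/floor material

card_height = 126.4                   # board height from the mechanical drawings
mb_pcb = 1.6                          # assumed backplane/motherboard PCB thickness
connector = 8.0                       # assumed connector seating height
plate_standoff = 6.5                  # assumed raised mounting-plate standoff

stack = card_height + mb_pcb + connector + plate_standoff
print(f"usable 3U height: {usable_3u:.1f} mm, card stack: {stack:.1f} mm")
print("fits" if stack <= usable_3u else "does not fit")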
But I don't see why you think a "blade server"-type approach is inappropriate. It offers high density (at least close to that of a super-high-density board), modularity, easy maintenance (swap boards out), lower risk in the event of a failure, and, most importantly to many, smooth scalability (you can keep buying small batches of boards and expanding rather than having to drop $30K at a time).
It's an unusual design, and connectors love to break off of boards. It's not a good idea and it costs money. There is exactly the same risk in the event of failure: some of your FPGAs stop working.
Now, considering 4U cases, an ideal option would be something like this:
http://www.newegg.ca/Product/Product.aspx?Item=N82E16811165475
Remove the motherboard mounting plate and mount the power supply where the 3x 5.25" bays are in the front. That allows a full case width and a flat mounting surface in the back for the cards. You can easily fit 8 cards wide with plenty of room for heatsinks, and should be able to fit 2 cards deep in that configuration. This gives the same yield of FPGAs per U that your idea has, but is cheaper overall. 1U rackmount cases are generally more expensive, and custom rackmount enclosures are expensive too, so per 1U you would be looking at probably $200-$300 per case including a power supply, making for $800-$1200 per 4U. In my setup that's $300 for a 4U case and $200 for a PSU, so $500 total in case cost per 4U, the same total number of FPGAs, and I still have room to grow in the front half of the case (I could always remove the hot-swap drive cage and mount more cards up there, or use the space for an added controller for standalone mining or whatever).
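A quick sketch of that case-cost comparison, using only the dollar figures from this paragraph:

# Case cost per 4U: four 1U cases (incl. PSU) vs. one 4U case plus a separate PSU.
one_u_low, one_u_high = 200, 300       # estimated 1U case cost incl. power supply
four_x_1u = (4 * one_u_low, 4 * one_u_high)

case_4u = 300                          # single 4U case
psu_4u = 200                           # power supply for the 4U build
single_4u = case_4u + psu_4u

print(f"4x 1U cases: ${four_x_1u[0]}-${four_x_1u[1]} per 4U")
print(f"1x 4U case:  ${single_4u} per 4U")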
I think for these cards specifically, that type of solution is the best way to go. Too bad it's so close to the 3U spec; if it fit in a 3U it would be denser, which would be even better.
Why are you so intent on essentially hacking this up like an idiot? That's expensive to mass-produce; stop that.
Let's try to figure out the total cost of doing this the napkin-math way, given that a quad costs $640 to buy.
My way:
per 16 FPGA/1U: $2310 for a 16-FPGA board, $90 for a Norco case that fits EEB and has 5x 40mm fans, $50 for an Athena Power 1U 300W PSU
total cost per 4U: 4x $2450 = $9800 for 64 FPGAs, or about $153 per FPGA
if something goes wrong: I lose 16 FPGAs minimum and maximum.
Your way:
Impossible to calculate, because you're talking nonsense. What rack-mountable case is going to fit 24 inches of cards? And don't say "just make an 8-FPGA, 24-inch card" or something. The reason I specifically picked PCI-sized boards is that they're cheaper to produce, because there's an entire industry built around making them.
So, sure, let's use rainbows and unicorns and say you can fit 24-inch cards in cases that are somewhere around 23 to 31 inches deep inside without modifying the case (removing unused drive bays COSTS MONEY, changing the case design at all COSTS MONEY). Say each one of those cards costs about $1220 to produce and you can fit 7 cards in there, so that's $8540 for 56 FPGAs, plus who-knows-what for some controller board (let's say $100), because you just really, really want one instead of just using SATA plugs like BFL did for the minirig.
per 56 FPGA/4U: $8540 for seven 8-FPGA boards, $100 for a board that does nothing but route serial connections, $70 for a Norco 4U case that fits EEB and has 2x 80mm fans (and we need more than that, so there's even more money wasted), and $160 for an NZXT HALE90 750W PSU
total cost per 4U: $8540 + $100 + $70 + $160 = $8870, or about $158 per FPGA
if something goes wrong: you lose 8 FPGAs minimum, 56 maximum.
So not only do I get more density per 4U (64 FPGAs vs. 56), my solution also comes in cheaper.
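For anyone who wants to check the napkin math, here's a quick Python sketch reproducing both builds using only the prices quoted above:

# Napkin math for both 4U builds, using the prices quoted in this post.

# "My way": four 1U units, each with one 16-FPGA board.
board_16, norco_1u, psu_1u = 2310, 90, 50
my_per_4u = 4 * (board_16 + norco_1u + psu_1u)   # 4 x $2450 = $9800
my_fpgas = 4 * 16                                # 64 FPGAs

# "Your way": seven 8-FPGA cards in one 4U case plus a controller board.
card_8, controller, norco_4u, psu_750 = 1220, 100, 70, 160
your_per_4u = 7 * card_8 + controller + norco_4u + psu_750   # $8870
your_fpgas = 7 * 8                                           # 56 FPGAs

print(f"my way:   ${my_per_4u} / {my_fpgas} FPGAs = ${my_per_4u / my_fpgas:.0f} per FPGA")
print(f"your way: ${your_per_4u} / {your_fpgas} FPGAs = ${your_per_4u / your_fpgas:.0f} per FPGA")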