Topic: Cairnsmore1 - Quad XC6SLX150 Board - page 116. (Read 286370 times)

full member
Activity: 196
Merit: 100
May 18, 2012, 03:54:43 PM
People want to buy finished products from BFL, not hack together an ugly solution.

I'm just trying to help Enterpoint produce a product that people will want. Something that drops right into a generic 1U case would be a pretty valuable product... I imagine even non-mining FPGA customers would be interested in it.

While a case may be nice for those that want one, I have no problem with an open-air solution; in fact I prefer that option. Not only does it save me money on something I don't want or need, my boards will run cooler due to the lack of a case. Plus, it could be a nice business for someone like yourself that wants a case: get some made, and if most people are thinking like you, they can buy them from you.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 18, 2012, 03:43:26 PM
sr. member
Activity: 407
Merit: 250
May 18, 2012, 03:05:39 PM
Excellent, I was hoping for 2" (50mm) or less spacing Smiley (without fans, I'll be using push/pull)
sr. member
Activity: 462
Merit: 251
May 18, 2012, 03:03:13 PM
First picture of the first manufactured Issue 1.1 now on http://www.enterpoint.co.uk/cairnsmore/cairnsmore1.html.

On stacking the boards: the initial shipments will have a 33mm-high heatsink, which with the thickness of the chip, fan and fixings comes to about 60-62mm above the board surface. Add about 5mm for board thickness and rear solder-joint protrusion, and 10mm or more above the fan to let air flow, and you get a simple stacking height of about 80mm. After the first 100 units the heatsink is 10mm lower.
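
As a quick sanity check on those numbers, here is a minimal sketch adding up the figures quoted above (the values are the approximations from this post, not a spec):

Code:
# Rough stack-pitch estimate for Cairnsmore1 boards, using the figures quoted
# above (all values in mm; approximations, not a spec).
above_board      = 62   # heatsink (33mm) + chip + fan + fixings, upper estimate
board_and_solder = 5    # PCB thickness plus rear solder-joint protrusion
airflow_gap      = 10   # clearance above the fan so air can flow

pitch = above_board + board_and_solder + airflow_gap
print(f"approximate stack pitch: {pitch} mm")   # ~77mm, i.e. roughly the 80mm quoted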

The fan can also be removed and the boards side-blown if you really want to pack them in; if you really want to keep them cooler, use a push-pull fan arrangement. In this way you might get down to a 40-50mm stack spacing. With the right right-angle bracket, and suitable stack spacing, the fan holes on the PCB can be used to hold 120mm side fans on a stack.
sr. member
Activity: 407
Merit: 250
May 18, 2012, 02:57:05 PM
It makes more sense, but it makes less economical sense. In the long run, I only care about cost per FPGA, and a 16x board should cost less than four 4x boards or eight 2x boards or whatever.

I agree completely, but only under the circumstance that the "better" option is available immediately (or available at a MUCH better cost/performance ratio)

If I have the funds to buy, say, 8U filled with Cairnsmore1 right now (32 boards) and they are available in the next 30 days, yet it would be 2-3 months before even the possibility of a higher-density solution, I'd rather be mining for those 2-3 months on the slightly less optimal Cairnsmore1 option and re-evaluate when the available options change.

Also, since the smaller boards let me grow in a more fluid fashion, I can increase my hashing power along a fairly aggressive curve.

For example: 32 boards at 4 FPGAs each; for round math, assume an even 800 MHash per board. That's 25.6 GHash from the cluster, which lets me mine around 475 BTC per month at current difficulty. That 475 can be re-invested in roughly 2 more boards (at the inflated "full" price). Those 2 new boards generate about 30 BTC a month, so in month 2 I have 505 BTC and buy 2 more boards; next month I have 60 spare (plus the 30 from the previous month) and buy 2 more; now I have 90 spare, plus the 90 from the previous 2 months, which is 180 on top of the base 475 and buys me 3 boards, and so on. A typical exponential growth curve, allowing me to add quite a bit more hashing power in the timeframe it would take just waiting for the new boards to possibly come out.
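
As a rough illustration of that compounding, here is a minimal sketch using only the figures from this post (800 MHash/board, ~475 BTC/month for the 32-board cluster, and "475 BTC buys roughly 2 boards"); it assumes constant difficulty and board price, which of course won't hold:

Code:
# Toy reinvestment model: earnings from the cluster are spent on more boards.
# Figures come from the post above; constant difficulty and price are assumed.
boards = 32
btc_per_board_month = 475 / 32     # ~14.8 BTC per board per month
board_cost_btc = 475 / 2           # implied by "475 BTC buys roughly 2 boards"
savings = 0.0

for month in range(1, 7):
    savings += boards * btc_per_board_month
    bought = int(savings // board_cost_btc)
    savings -= bought * board_cost_btc
    boards += bought
    print(f"month {month}: bought {bought}, now {boards} boards, {savings:.0f} BTC spare")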

I realize my math above may be skewed or off (it's all off the top of my head), but I've also done forecasts based on this with real numbers in extensive spreadsheets, looking at smaller boards versus larger boards with a slightly higher performance/$ ratio, and the smaller boards won out in long-term ROI (because they grew faster due to the fluid expansion), especially when the larger option wouldn't be available initially for some time.

This is of course offset if the higher-end board has a much lower cost per MHash, but I would be surprised if a much larger board dropped more than 10-15% off the price per chip (just personal opinion though, so take it with a grain of salt). Smiley
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 18, 2012, 02:14:41 PM
sr. member
Activity: 407
Merit: 250
May 18, 2012, 01:54:17 PM
Impossible to calculate, because you're talking nonsense. What rack-mountable case is going to fit 24 inches of cards? And don't say "just make an 8-FPGA 24-inch card" or something. The reason I specifically picked PCI-sized boards is that it is cheaper to produce PCI-sized boards, because an entire industry is based around making them.

So, sure, let's use rainbows and unicorns and say you can fit 24-inch cards in cases that are somewhere around 23 to 31 inches deep inside without modifying the case (removing unused drive bays COSTS MONEY, changing the case design at all COSTS MONEY). And each one of those cards costs about, oh, $1220 to produce, and you can fit 7 cards in there, so that's $8540 for 56 FPGAs, plus who knows what for some controller board (let's say $100) because you just really, really want one instead of just using SATA plugs like BFL did for the minirig.

Per 56 FPGAs/4U: $8540 for seven 8-FPGA boards, $100 for a board that does nothing but route serial connections, $70 for a Norco 4U case that fits EEB and has 2 80mm fans (and we need more than that, so there's even more money wasted), and $160 for an NZXT HALE90 750.
Total cost per 4U: $8870, or $158 per FPGA.
If something goes wrong: you lose 8 FPGAs minimum, 56 maximum.

So not only do I get more density per 4U, my solution comes in cheaper.

I'm not sure where you're getting the 24" card thing from?

I'm not trying to be an ass, I just think we're perhaps arguing 2 different points. You seem to be arguing the "best case optimal high density mining" solution, and I'm talking about "How can I best mount the cairnsmore1 cards in a rack". Smiley

I'm planning on mounting them similar to this:
https://bitcointalksearch.org/topic/m.844733
(that's my current icarus setup)

But using a 4U actual rackmount case rather than a wooden 3U case.

Mounted vertically, the cards need about 2" of width each; a rackmount case is about 17" wide on the outside (so at least 16"-16.5" of usable room side to side) and about 14" deep for the motherboard compartment (12"-13" for the actual motherboard plate). These cards are roughly 6"-7" deep, so I can stand 8 of them side by side, 2 deep, in the same arrangement I have in that photo, filling the back of the case (like I said, redirect the power supply to the front of the case in place of the 5.25" drive bays).

This gives me room for 16 Cairnsmore1 cards in a 4U rackmount case, with tech available right now (not a hypothetical future product): 64 FPGAs in total, or 16 FPGAs per U of rack space.
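
For what it's worth, a quick fit check using the approximate dimensions above (the card width and usable case width/depth are the rough figures from this post):

Code:
# Fit check for vertically mounted Cairnsmore1 cards in a 4U case (inches).
card_width, card_depth     = 2.0, 7.0    # per-card footprint when stood on edge
usable_width, usable_depth = 16.0, 14.0  # rear (motherboard) compartment, roughly

per_row = int(usable_width // card_width)   # cards side by side
rows    = int(usable_depth // card_depth)   # rows front to back
cards   = per_row * rows
print(f"{per_row} per row x {rows} rows = {cards} cards, {cards * 4} FPGAs per 4U")
# -> 8 x 2 = 16 cards, 64 FPGAs, i.e. 16 FPGAs per U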

I'm not talking about an "optimal end solution for best-case high-density bitcoin mining"; that's obviously going to require some higher-density special-purpose hardware, like the Merrick3 adjusted for higher power density and put onto a standard PCIe motherboard, or the Merrick1 modified for fewer chips (maybe half, like 50 chips) and higher power density. I agree completely that a 100% custom 1U case with a custom board (or boards) holding a lot of chips will give the best density. But that product doesn't exist, and won't exist for at least a few months (and that's if someone starts working on it right now).

The cairnsmore1 will be available in a couple weeks, so I'm talking about the best way to mount THOSE boards in a rackmount enclosure.

Also, to note: I'm not talking about custom cases. I'm talking about an off-the-shelf case, which has removable drive cages already; you just unscrew them and remove them. Problem solved. Sure, you need some custom brackets/clips to mount things, but that's not a problem. That's what a 3D printer is for Wink

Also yeah I planned on using the HALE90 as well in my build.

Does that make a little more sense? Smiley
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 18, 2012, 12:42:30 PM
I'm not disputing that a super-high-density board would be the ultimate in high-density mining. The point is that there is a sweet spot for a distributor between sales volume and profit. With this board design they can make a high volume of boards and reduce their per-unit overhead, meaning better profit margins. A super-high-density board would allow better density, but would likely have much lower sales volumes (requiring a thicker markup to make back their R&D overhead and such). Also, manufacturing yield is easier to control on smaller boards.

Wrong. These manufacturers are already building 8x and 16x boards at _less_ cost per FPGA. Manufacturing has the property that a populated board effectively costs the same to produce no matter how big the board is. In our case, a 16-FPGA board is not going to cost significantly more to manufacture than a 4-FPGA board.

Quote
Lastly, customer hardware failures are mitigated better with many small boards versus one large board (if you blow a couple of FPGAs on your 64x FPGA board, what do you do about it?).

Who said a 64x board? I said 64 FPGAs in 4U. That's four 16x boards in 4U.

Quote

My ideas are purely within the confines of the Cairnsmore1 product: how to pack as many into a rack as possible, not a hypothetical new board. (Enterpoint has said they may consider additional boards later depending on the success of the Cairnsmore1; for now this is what we have to work with.)

That said, you have a very good point about card height. Looking again at the mechanical drawings, once you consider motherboard thickness, connector height, and raised MB plate thickness, the board likely won't fit in a 3U. (It would be damn close though: 3U is 133mm, but internal space will be less, and this board is 126.4mm, so it likely won't fit in the end.)

But I don't see why you think a "blade server" type approach is inappropriate? It offers high density (at least close to that of a super high-density board) and it offers modularity, easy maintenance (swap boards out), lower risk in the event of failure, and most importantly to many, smooth scalability (easy to keep buying small volumes of boards and expanding rather than having to drop $30K at a time).


It's an unusual design, and connectors love to break off of boards. It's not a good idea and it costs money. There is the exact same risk in the event of failure: some of your FPGAs stop working.

Quote

Now considering 4U cases, an ideal option would be something like http://www.newegg.ca/Product/Product.aspx?Item=N82E16811165475
Remove the motherboard mounting plate and mount the power supply where the 3x 5.25" bays are in the front. That would allow a full case width and a flat mounting surface in the back for the cards. You can easily fit 8 cards wide with plenty of room for heatsinks, and should be able to fit 2 cards deep in that config. This gives the same yield of FPGAs/U that your idea has, but is overall cheaper: 1U rackmount cases are generally more expensive, and custom rackmount enclosures are also expensive, meaning per 1U you would be looking at probably $200-$300 per case if you include a power supply, making for $800-$1200 per 4U. In my setup that's $300 for a 4U case and $200 for a PSU, so $500 total in case cost per 4U, for the same total number of FPGAs, and I still have room to grow in the front half of the case (I could always remove the hot-swap drive cage and mount more cards up there, or use the room for an added controller for standalone mining or whatever).

I think, considering these cards specifically, that type of solution is the best way to go. Too bad it's so close to the 3U spec; if it fit in a 3U it would be denser, which would be even better.

Why are you so intent on essentially hacking this up like an idiot? That's expensive to mass produce; stop that.

Let's try to figure out the total cost of making these the napkin way; a quad costs $640 to buy.

My way:
Per 16 FPGAs/1U: $2310 for a 16-FPGA board, $90 for a Norco case that fits EEB and has 5 40mm fans, $50 for an Athena Power 1U 300W PSU.
Total cost per 4U: $9800 for 64 FPGAs, or $153 per FPGA.
If something goes wrong: I lose 16 FPGAs minimum and maximum.


Your way:
Impossible to calculate, because you're talking nonsense. What rack-mountable case is going to fit 24 inches of cards? And don't say "just make an 8-FPGA 24-inch card" or something. The reason I specifically picked PCI-sized boards is that it is cheaper to produce PCI-sized boards, because an entire industry is based around making them.

So, sure, let's use rainbows and unicorns and say you can fit 24-inch cards in cases that are somewhere around 23 to 31 inches deep inside without modifying the case (removing unused drive bays COSTS MONEY, changing the case design at all COSTS MONEY). And each one of those cards costs about, oh, $1220 to produce, and you can fit 7 cards in there, so that's $8540 for 56 FPGAs, plus who knows what for some controller board (let's say $100) because you just really, really want one instead of just using SATA plugs like BFL did for the minirig.

Per 56 FPGAs/4U: $8540 for seven 8-FPGA boards, $100 for a board that does nothing but route serial connections, $70 for a Norco 4U case that fits EEB and has 2 80mm fans (and we need more than that, so there's even more money wasted), and $160 for an NZXT HALE90 750.
Total cost per 4U: $8870, or $158 per FPGA.
If something goes wrong: you lose 8 FPGAs minimum, 56 maximum.

So not only do I get more density per 4U, my solution comes in cheaper.
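
Putting the two napkin scenarios side by side (all figures are the ones quoted in this exchange):

Code:
# Cost per FPGA for the two napkin scenarios above, using the quoted figures.

# "My way": four 1U units per 4U, each a 16-FPGA board + Norco 1U case + 300W PSU
mine_total = 4 * (2310 + 90 + 50)        # $9800 per 4U
mine_fpgas = 4 * 16                      # 64 FPGAs per 4U

# "Your way": seven 8-FPGA cards + controller board + Norco 4U case + HALE90 750
yours_total = 7 * 1220 + 100 + 70 + 160  # $8870 per 4U
yours_fpgas = 7 * 8                      # 56 FPGAs per 4U

print(f"16-FPGA 1U boards: ${mine_total} per 4U, ${mine_total / mine_fpgas:.0f} per FPGA")
print(f"8-FPGA PCI cards:  ${yours_total} per 4U, ${yours_total / yours_fpgas:.0f} per FPGA")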
sr. member
Activity: 407
Merit: 250
May 18, 2012, 11:21:45 AM
I'm not disputing that a super-high-density board would be the ultimate in high-density mining. The point is that there is a sweet spot for a distributor between sales volume and profit. With this board design they can make a high volume of boards and reduce their per-unit overhead, meaning better profit margins. A super-high-density board would allow better density, but would likely have much lower sales volumes (requiring a thicker markup to make back their R&D overhead and such). Also, manufacturing yield is easier to control on smaller boards. Lastly, customer hardware failures are mitigated better with many small boards versus one large board (if you blow a couple of FPGAs on your 64x FPGA board, what do you do about it?).

My ideas are purely within the confines of the Cairnsmore1 product: how to pack as many into a rack as possible, not a hypothetical new board. (Enterpoint has said they may consider additional boards later depending on the success of the Cairnsmore1; for now this is what we have to work with.)

That said, you have a very good point about card height. Looking again at the mechanical drawings, once you consider motherboard thickness, connector height, and raised MB plate thickness, the board likely won't fit in a 3U. (It would be damn close though: 3U is 133mm, but internal space will be less, and this board is 126.4mm, so it likely won't fit in the end.)

But I don't see why you think a "blade server" type approach is inappropriate? It offers high density (at least close to that of a super high-density board) and it offers modularity, easy maintenance (swap boards out), lower risk in the event of failure, and most importantly to many, smooth scalability (easy to keep buying small volumes of boards and expanding rather than having to drop $30K at a time).

Now considering 4U cases, an ideal option would be something like http://www.newegg.ca/Product/Product.aspx?Item=N82E16811165475
Remove the motherboard mounting plate and mount the power supply where the 3x 5.25" bays are in the front. That would allow a full case width and a flat mounting surface in the back for the cards. You can easily fit 8 cards wide with plenty of room for heatsinks, and should be able to fit 2 cards deep in that config. This gives the same yield of FPGAs/U that your idea has, but is overall cheaper: 1U rackmount cases are generally more expensive, and custom rackmount enclosures are also expensive, meaning per 1U you would be looking at probably $200-$300 per case if you include a power supply, making for $800-$1200 per 4U. In my setup that's $300 for a 4U case and $200 for a PSU, so $500 total in case cost per 4U, for the same total number of FPGAs, and I still have room to grow in the front half of the case (I could always remove the hot-swap drive cage and mount more cards up there, or use the room for an added controller for standalone mining or whatever).
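
Roughly, the enclosure overhead per 4U in the two approaches compared here (case and PSU prices as quoted above; both layouts hold 64 FPGAs per 4U in this comparison):

Code:
# Enclosure + PSU cost per 4U of rack space, using the prices quoted above.
one_u_case_with_psu = 250                       # midpoint of the $200-$300 per-1U estimate
four_one_u_cases    = 4 * one_u_case_with_psu   # ~$1000 per 4U
single_4u_case      = 300 + 200                 # $500 per 4U (case + PSU)

print(f"four 1U cases: ${four_one_u_cases} per 4U")
print(f"one 4U case:   ${single_4u_case} per 4U")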

I think, considering these cards specifically, that type of solution is the best way to go. Too bad it's so close to the 3U spec; if it fit in a 3U it would be denser, which would be even better.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 18, 2012, 10:46:44 AM
Yes you can. Smiley

If you use a motherboard design (with sockets to vertically mount the cards): the new 1.1 rev cards have a high-current power connector mounted vertically, and ribbon connectors for the data up/down link on each card, so you can mount the cards vertically, plugged into a power distribution backplane, and daisy-chain several together using short ribbon jumpers (like CrossFire bridges).

Even without the motherboard, you can do this with brackets to hold the cards vertically.

You lose the on-card fan and use push/pull through the case, as I suggested.

Mounted vertically, the cards are just the right height to fit on a board in a 3U case, and this way they can be laid side by side to fill an EATX board (2 rows).

And yeah, I know not all rackmount cases support EATX, but many do; I've already priced out several with plenty of room. (Yeah, I know EATX might not have room for 8 cards by 2 rows deep and might only manage 12 cards, but the cases I'm looking at relocate the PSU to the front of the case, allowing the full 17"-wide back half for the motherboard. That will easily fit 8 cards at 2" each, with plenty of airflow, 2 rows deep.)

You're trying to go for a modminer "blade server" kind of design, which isn't entirely appropriate either. Lets try this instead.

Let's say you use a 4U case (3U won't fit full-height cards) and use full-sized PCI-shaped cards with no backplane at all (power and USB/GPIO, say via a SATA plug like BFL is doing, on the end of the card like GPUs), and you put 4 FPGAs on the card. The card will have to be 107mm by 312mm (or less: many cases, even rackmount, won't fit a full-length card; 6990s and 7970s are full length), so you're fitting the FPGAs in a straight line instead of a quad configuration.

Now, because of the heatsink requirements, you're going to have to fit these in double thick configurations (using, I assume, pure copper 2U northbridge heatsinks and forcing airflow through them from the front of the case), so you're putting around 16 FPGAs in a case (7 case slots, 1 overlaps with the case so it needs a 1 slot bracket)

Now, if you pack these in even harder using 1U northbridge heatsinks, that'd be 4*7, powered using 7 PCI-E 6/8 plugs, or 28 FPGAs in 4U.

Now, my 16 FPGAs in a 1U or 2U case? In 2U, that's 32 FPGAs per 4U; in 1U, that's 64 FPGAs per 4U. I think my idea wins.
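
Spelling out the density figures claimed in this comparison (numbers as stated above):

Code:
# FPGAs per 4U of rack space for the layouts discussed above (as stated).
layouts = {
    "PCI cards, double-thick heatsinks (4 FPGAs/card)": 16,       # per 4U case
    "PCI cards, 1U heatsinks, 7 slots (7 x 4)":         28,       # per 4U case
    "16-FPGA board per 2U":                             2 * 16,   # 32 per 4U
    "16-FPGA board per 1U":                             4 * 16,   # 64 per 4U
}
for name, fpgas in layouts.items():
    print(f"{name}: {fpgas} FPGAs per 4U ({fpgas / 4:.0f} per U)")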
legendary
Activity: 1378
Merit: 1003
nec sine labore
May 18, 2012, 10:41:22 AM
Glasswalker,

do you mind sharing some link? Smiley

thanks.

spiccioli
sr. member
Activity: 407
Merit: 250
May 18, 2012, 10:26:48 AM
Yes you can. Smiley

If you use a motherboard design (with sockets to vertically mount the cards): the new 1.1 rev cards have a high-current power connector mounted vertically, and ribbon connectors for the data up/down link on each card, so you can mount the cards vertically, plugged into a power distribution backplane, and daisy-chain several together using short ribbon jumpers (like CrossFire bridges).

Even without the motherboard, you can do this with brackets to hold the cards vertically.

You lose the on-card fan and use push/pull through the case, as I suggested.

Mounted vertically, the cards are just the right height to fit on a board in a 3U case, and this way they can be laid side by side to fill an EATX board (2 rows).

And yeah, I know not all rackmount cases support EATX, but many do; I've already priced out several with plenty of room. (Yeah, I know EATX might not have room for 8 cards by 2 rows deep and might only manage 12 cards, but the cases I'm looking at relocate the PSU to the front of the case, allowing the full 17"-wide back half for the motherboard. That will easily fit 8 cards at 2" each, with plenty of airflow, 2 rows deep.)
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 18, 2012, 10:19:32 AM
The board width is almost perfect for a 3U rackmount case though, so with a motherboard/backplane to carry and power them, and the daisy-chained comms, you could do the motherboard to EATX standards and probably fit 16 miners (4 FPGAs each), for a total of 64 FPGAs in a 3U rackmount case. That would draw around 800W in total, so with a decent high-end PSU it should work nicely for density, allowing push/pull airflow past the boards, decent power density, and easy modular maintenance.

It also leaves the rest of the case (drive bays and so on) free to mount a small host system, LCD status displays, or whatever else is needed (or they could build the host onto the motherboard, which would be really ideal). With that kind of setup you could grab a decent rackmount case for $100 and a PSU for another $200, so at MOST (including tax/shipping) you're talking $500 per 3U 16-board cluster in supporting bits. (I guess that doesn't include the cost of the (so far fictional) motherboard though.)

Anyway that's the direction I'm thinking anyway.

Quite a few rackmount cases will fit EATX (330x305); that's 4.18 times the area of this board, so it could theoretically fit 16 FPGAs (assuming ATX fits 12), and it would draw about 200 watts (assuming a 4x Spartan-6 board uses 50 watts with the new ~250 MHash bitstreams). You can't just fit 4x existing boards in manually, because you can't fit 376x252 inside 330x305 (and EATX is pretty cramped inside cases as it is), so they'll have to place things differently on a new board.
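
The area ratio roughly checks out against the board dimensions Yohan gave earlier (126.492mm x 188mm); a quick sketch:

Code:
# Area comparison using the board dimensions quoted earlier in the thread (mm).
board = 126.492 * 188    # ~23,781 mm^2 (Cairnsmore1)
atx   = 305 * 244        # ~74,420 mm^2 -> ~3.1 board-areas, hence ~12 FPGAs
eatx  = 330 * 305        # ~100,650 mm^2 -> ~4.2 board-areas, hence ~16 FPGAs

print(f"ATX / board  = {atx / board:.2f}")
print(f"EATX / board = {eatx / board:.2f}")
# Four whole boards would need 2 x 188 by 2 x 126.5 = 376 x 253 mm, which does
# not fit inside 330 x 305 - hence the need for a re-laid-out board.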
sr. member
Activity: 462
Merit: 251
May 18, 2012, 09:10:37 AM
Some mechanical details now available on the temporary webpage http://www.enterpoint.co.uk/cairnsmore/cairnsmore1.html.
sr. member
Activity: 407
Merit: 250
May 18, 2012, 08:45:25 AM
The board width is almost perfect for a 3U rackmount case though, so with a motherboard/backplane to carry and power them, and the daisy-chained comms, you could do the motherboard to EATX standards and probably fit 16 miners (4 FPGAs each), for a total of 64 FPGAs in a 3U rackmount case. That would draw around 800W in total, so with a decent high-end PSU it should work nicely for density, allowing push/pull airflow past the boards, decent power density, and easy modular maintenance.

It also leaves the rest of the case (drive bays and so on) free to mount a small host system, LCD status displays, or whatever else is needed (or they could build the host onto the motherboard, which would be really ideal). With that kind of setup you could grab a decent rackmount case for $100 and a PSU for another $200, so at MOST (including tax/shipping) you're talking $500 per 3U 16-board cluster in supporting bits. (I guess that doesn't include the cost of the (so far fictional) motherboard though.)
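
As a rough power sanity check on the 64-FPGA / ~800W figure, here is a sketch assuming ~50W per quad board (the estimate mentioned elsewhere in the thread; real numbers depend on the bitstream):

Code:
# Rough power budget for a 64-FPGA (16-board) 3U cluster.
# Assumes ~50W per quad board, as estimated elsewhere in this thread.
boards          = 16
watts_per_board = 50
load            = boards * watts_per_board   # ~800W
psu_headroom    = 0.8                        # aim to load the PSU to ~80% of rating

print(f"estimated load: {load} W")
print(f"suggested PSU rating: {load / psu_headroom:.0f} W or more")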

Anyway that's the direction I'm thinking anyway.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 18, 2012, 08:38:47 AM
I couldn't easily find this, but what are the dimensions of the board?

Since I know I wasn't the only one curious about its dimensions.
Yohan confirmed these details via email when I ordered.

"Board is 126.492mm x 188mm give or take manufacturing tolerance. Main
mounting holes are 5mm (x) and 5mm (y) in from the corners."

Huh, that's actually bigger than mini-ITX, which is 170x170. ATX is 305x244, which could probably fit 12 FPGAs with room to spare.

Maybe one of the manufacturers will start making 12-FPGA boards, with the four outer mounting holes matching ATX cases, powered off a P4 (16A 12V) or EPS12V (28A 12V) plug?
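
A quick check of whether a single 12V plug could feed such a board, assuming ~12.5W per FPGA (from the ~50W-per-quad estimate mentioned elsewhere in the thread; an assumption, not a measurement):

Code:
# Could one 12V plug feed a hypothetical 12-FPGA ATX-sized board?
# Assumes ~12.5W per FPGA; the plug ratings are the ones quoted above.
fpgas          = 12
watts_per_fpga = 12.5
load           = fpgas * watts_per_fpga   # ~150W

p4_limit  = 16 * 12    # P4 plug:     16A at 12V -> 192W
eps_limit = 28 * 12    # EPS12V plug: 28A at 12V -> 336W

print(f"board load ~{load:.0f}W; P4 limit {p4_limit}W; EPS12V limit {eps_limit}W")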
sr. member
Activity: 462
Merit: 251
May 18, 2012, 03:29:03 AM
The initial performance numbers are now high on our to-do list. This week has been a slow week, and it was planned that way so we could clear critical aspects of other projects and do other things that are ongoing.

We are expecting the Issue 1.1 PCB in today, and once that is assembled we will do a quick basics check again and then move onto the integration and performance testing aspects of the project. How long that takes is then the question, but if it goes well we will know a lot in about a week. If it doesn't go so well we may have to modify the PCB design, and that is about a one-week slip. That's the joy of engineering.
sr. member
Activity: 476
Merit: 250
Keep it Simple. Every Bit Matters.
May 18, 2012, 03:17:31 AM
I couldn't easily find this, but what are the dimensions of the board?

Since I know I wasn't the only one curious about its dimensions.
Yohan confirmed these details via email when I ordered.

"Board is 126.492mm x 188mm give or take manufacturing tolerance. Main
mounting holes are 5mm (x) and 5mm (y) in from the corners."
hero member
Activity: 560
Merit: 500
May 17, 2012, 10:14:27 PM
Is there a date for when the test data will be released? Or is it a "when it's done" thing for now?
sr. member
Activity: 407
Merit: 250
May 17, 2012, 11:22:54 AM
It's not a strange question, and we didn't design it for any specific format. This time we just wanted to do something simple, and that is what Cairnsmore1 is.

There are two things under consideration here. The first is that for the next generation we do the design with either a specific case or racking in mind. We are also considering what we might do for the current Cairnsmore1, and we might do a simple case for these. The worst problem then is the cost of shipping such a case.

You could just design the components to build a rackmount-case-type system (backplanes, communications buses, power distribution, mechanical mounting) and work with a few well-placed systems integrators to actually build and ship systems. If you design your solution to work with existing COTS gear (for example, design your backplane to the EATX standard) then you could use conventional off-the-shelf rackmount cases.

Come up with LCD display solutions that clip into 5.25" drive bays and so on, sell these components, and let people buy them to assemble their own solutions, or work with systems integrators to handle shipping completed clusters (in which case, if the SIs are local, you can cut down on shipping the heavy bits).