
Topic: Cairnsmore2 - What would you like? - page 7. (Read 11585 times)

sr. member
Activity: 476
Merit: 250
Keep it Simple. Every Bit Matters.
May 20, 2012, 11:45:59 AM
#9
We are not likely to use PCIe in this way, but it is a possibility. For a quick picture, have a look at http://www.schroff.co.uk/internet/html_e/index.html. So what we are talking about is a rack with card guides and a backplane at the back of the rack. The cards slide in and connect to the backplane. Another example: http://uk.kontron.com/products/systems+and+platforms/microtca+integrated+platforms/om6060.html.

The backplane standard could be an industry standard or just something we do to suit the purpose. The main thing is that the metalwork and supporting guides are standard items. With a full-height 19" rack we might be looking at fitting 4-8 sub-racks, depending on the height we adopt for the sub-rack and what we have in power supplies. So we might be able to do an entire rack with 0.5-1 TH/s, if I can add up correctly. Then it is just a case of adding racks. In data centres you might find hundreds of these sorts of racks.

So rather than a single big board, it's more like its own dedicated 3-4U rack rig with multiple boards contained in its own chassis?
I'm interested.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 20, 2012, 11:23:34 AM
#8
We are not likely to use PCIe in this way, but it is a possibility. For a quick picture, have a look at http://www.schroff.co.uk/internet/html_e/index.html. So what we are talking about is a rack with card guides and a backplane at the back of the rack. The cards slide in and connect to the backplane.

The backplane standard could be an industry standard or just something we do to suit the purpose. The main thing is that the metalwork and supporting guides are standard items. With a full-height 19" rack we might be looking at fitting 4-8 sub-racks, depending on the height we adopt for the sub-rack and what we have in power supplies. So we might be able to do an entire rack with 0.5-1 TH/s, if I can add up correctly. Then it is just a case of adding racks. In data centres you might find hundreds of these sorts of racks.

I think those URLs are not the URLs you meant.
sr. member
Activity: 462
Merit: 251
May 20, 2012, 11:20:54 AM
#7
We are not likely to use PCIe in this way, but it is a possibility. For a quick picture, have a look at http://www.schroff.co.uk/internet/html_e/index.html. So what we are talking about is a rack with card guides and a backplane at the back of the rack. The cards slide in and connect to the backplane. Another example: http://uk.kontron.com/products/systems+and+platforms/microtca+integrated+platforms/om6060.html.

The backplane standard could be an industry standard or just something we do to suit the purpose. The main thing is that the metalwork and supporting guides are standard items. With a full-height 19" rack we might be looking at fitting 4-8 sub-racks, depending on the height we adopt for the sub-rack and what we have in power supplies. So we might be able to do an entire rack with 0.5-1 TH/s, if I can add up correctly. Then it is just a case of adding racks. In data centres you might find hundreds of these sorts of racks.
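
For a rough sanity check of that estimate, here is a minimal Python sketch using figures quoted in this thread (4-5 GH/s per card, 12-19 cards per sub-rack, 4-8 sub-racks per rack); all of these are working assumptions, not confirmed specifications.
Code:
# Rough rack throughput estimate. All figures are working assumptions
# taken from this thread, not confirmed specifications.
ghs_per_card = (4, 5)         # initial per-card target mentioned by Yohan
cards_per_subrack = (12, 19)  # backplane slots mentioned for one run
subracks_per_rack = (4, 8)    # sub-racks fitting a full-height 19" rack

low = ghs_per_card[0] * cards_per_subrack[0] * subracks_per_rack[0]
high = ghs_per_card[1] * cards_per_subrack[1] * subracks_per_rack[1]
print(f"Per rack: {low}-{high} GH/s ({low / 1000:.2f}-{high / 1000:.2f} TH/s)")
# Prints "Per rack: 192-760 GH/s (0.19-0.76 TH/s)", the upper end of which
# lines up with the 0.5-1 TH/s estimate above.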
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 20, 2012, 11:02:51 AM
#6
sr. member
Activity: 462
Merit: 251
May 20, 2012, 08:25:25 AM
#5
Let's start by clarifying that it won't be a single big board. Those are actually expensive to make, and there is no logic in this design that needs that approach. The architecture is more of a controller card with processing cards, linked by a backplane that wires it all together. The backplane should let us have 12 working boards, maybe up to 19 in one run, depending on design decisions made.

Similarly, PCIe is a bit of overkill internally for this one, but it could be used to link a rack to a PC, or to link several levels of rack. We might do a PCIe card, but that is a different project.

28nm may not be viable at the moment, especially Artix, which is probably 6 months away. 28nm may also be expensive initially. What we are doing on bitstreams, along with partner offerings, might mean the FPGA type is relatively irrelevant. It's more about the system cost than anything. 45nm may still be the best option today, but that is one of the things we are looking at.

The voltage used for whatever FPGA we choose is being looked at. We might put in a VRM, but that is probably more complicated than necessary and has its own cost. There are other ways to do this.
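
For context on why an adjustable supply is attractive here: dynamic power in CMOS scales roughly with clock frequency and the square of core voltage, so an undervolt plus underclock cuts watts faster than it cuts hash rate. A minimal Python sketch with purely illustrative scale factors (not Cairnsmore figures):
Code:
# Illustrative only: rough dynamic-power scaling for an undervolt + underclock.
# P_dyn is roughly proportional to C * V^2 * f; the scale factors below are made up.
def relative_power(v_scale: float, f_scale: float) -> float:
    """Dynamic power relative to nominal, for given voltage and clock scale factors."""
    return (v_scale ** 2) * f_scale

# Dropping to 90% core voltage and 85% clock:
print(f"{relative_power(0.90, 0.85):.2f}x nominal dynamic power")  # 0.69x
# Hash rate falls only with the clock (to ~0.85x), so GH/s per watt improves.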

Historically, fans have been a reliability issue, and we might put in monitoring or PWM, but these features do have a cost themselves, both in materials and electricity. That needs to be considered given that the fans we are currently using on Cairnsmore1 have a 100K+ hour lifetime (11-12+ years) and a 6-year warranty. The floating bearings that have come in over the last few years have a lot to do with this reliability, and an alternative approach might be a planned fan-replacement maintenance schedule.

A fan tray may be the way we do cooling for this design, and that would have whatever fan headers are needed. We have some other ideas here as well; more on those when we have thought them through a little better to see if they are viable.

Power-wise, this will take power from the backplane, and there won't be a choice there. The processing card isn't a replacement for Cairnsmore1; it is for the bigger rack market. Different backplanes are a possibility, including a mini setup, maybe with a cut-down number of slots, but that is for later, after we get the big solution out in the wild.

Yohan
sr. member
Activity: 476
Merit: 250
Keep it Simple. Every Bit Matters.
May 20, 2012, 08:11:30 AM
#4
Diablo brings up very good points.
Having FPGAs make use of smaller-process (lower nm) tech would push the already huge gap in electricity costs compared to normal GPUs even further.
It has also been shown that they overclock/undervolt better, so making the most of that, as Diablo said, would be best for the continued success of FPGA mining.
You've already got multiple power connectors there, maybe too many. Is it worth making a few versions, one with a Molex, one with a 6-pin, etc.? Instead of giving multiple options on one board, would using only one make it any more cost effective and/or smaller?
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 20, 2012, 07:38:35 AM
#3
Ok, I will start this one off by saying that we are looking at a range of FPGA technologies to base the product on (not that any of you might think we would start putting GPUs in our products, although you never know). Some of our decisions will be based on how we do this week with more fully loading the Cairnsmore1.

We are probably thinking of a 19" rack as the basis for this product, and we already have some backplane designs from previous work that might be useful either directly or in an adapted form. This also fits well with power supply availability and so on. This would also allow a modular purchase of a system that would be easy to upgrade and add to as time goes on.

One of our aims is to be a very competitive FPGA solution in the market for large-scale mining.

We have an initial individual card target of 4-5 GH/s+.

We are looking at our cooling technology and we are testing a new idea this week in a different product that might get adopted into Cairnsmore2.

Timeline: we are likely to be limited by FPGA lead time, which is typically 6-8 weeks, so August-September is the likely initial availability of this system.

What we would like to hear from you guys is what you would like in interfaces: USB? Ethernet (100M/1G/10G)? Cabled PCIe?

And any particular features you might think we need to include?

Yohan

Well, imo, all future boards from any manufacturer, of any kind, need two functions. The first is a fan controller with enough fan headers, ramping fan speed to hold a constant chip temperature, to prevent both fan failure (running fans at 100% load is generally bad; even industrial fans often fail after 2 years) and chip failure (due to thermal cycling). The other is a software-programmable VRM, so FPGAs can be underclocked and undervolted on demand, letting people keep mining as the difficulty rises and extending the life of the hardware for another 2-3 years.
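
As a sketch of the constant-chip-temp fan ramp described above (a simple proportional loop; the target temperature, duty limits, and gain are illustrative assumptions, not values tied to any particular board or monitoring chip):
Code:
# Minimal sketch of a fan controller that ramps PWM duty to hold chip temperature.
# All constants are illustrative; a real controller would read die temperature and
# write duty via whatever monitoring/PWM hardware the board design ends up using.
TARGET_C = 70.0   # desired chip temperature in degrees C
MIN_DUTY = 0.30   # never let the fan spin down completely
MAX_DUTY = 1.00
GAIN = 0.05       # duty change per degree of temperature error

def next_duty(current_duty: float, chip_temp_c: float) -> float:
    """Nudge PWM duty toward holding chip_temp_c at TARGET_C."""
    error = chip_temp_c - TARGET_C
    return max(MIN_DUTY, min(MAX_DUTY, current_duty + GAIN * error))

# Example: chip running hot at 78 C with the fan at 50% duty -> duty rises to 0.9.
print(next_duty(0.50, 78.0))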

The only real request I have beyond those two mandatory features is 28nm on some FPGA the mining industry agrees on (it seems everyone is leaning towards the largest Artix-7). Continued 45nm usage seems to be a dead end; it just isn't cost effective enough.
sr. member
Activity: 476
Merit: 250
Keep it Simple. Every Bit Matters.
May 20, 2012, 06:45:25 AM
#2
Good to hear you are already thinking bigger. So with a 19" rack design, are you going for a more standard motherboard-sized PCB with a merrick1-like number of processors on board?

USB has some perks, being an interface users know. As long as a single USB connection could handle a larger board without any downsides, it would be ideal.
Otherwise I would consider an Ethernet port if it helps eliminate any downsides of using a single USB connection for a large-scale FPGA board. Modded routers, after all, are apparently starting to be used as a means to interface with FPGAs.
A bigger board would most likely be a problem for a PCI-E slot, so I can't see that as a good choice, though having one is often requested. If it were an appropriate size for a normal PCI-E board it would be popular; however, since you're planning to go bigger with the #2, I find it hard to believe it will be.

Have you got your own mining software for these yet, or are you working closely with people who can optimise for the Cairnsmore series?
sr. member
Activity: 462
Merit: 251
May 20, 2012, 05:55:31 AM
#1
Ok, I will start this one off by saying that we are looking at a range of FPGA technologies to base the product on (not that any of you might think we would start putting GPUs in our products, although you never know). Some of our decisions will be based on how we do this week with more fully loading the Cairnsmore1.

We are probably thinking of a 19" rack as the basis for this product, and we already have some backplane designs from previous work that might be useful either directly or in an adapted form. This also fits well with power supply availability and so on. This would also allow a modular purchase of a system that would be easy to upgrade and add to as time goes on.

One of our aims is to be a very competitive FPGA solution in the market for large-scale mining.

We have an initial individual card target of 4-5 GH/s+.

We are looking at our cooling technology and we are testing a new idea this week in a different product that might get adopted into Cairnsmore2.

Timeline: we are likely to be limited by FPGA lead time, which is typically 6-8 weeks, so August-September is the likely initial availability of this system.

What we would like to hear from you guys is what you would like in interfaces: USB? Ethernet (100M/1G/10G)? Cabled PCIe?

And any particular features you might think we need to include?

Yohan