
Topic: Minimalist Spartan6-LX150 board - page 2. (Read 49998 times)

legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 10, 2011, 01:22:13 AM
What the hell, with a high-speed CPLD you can probably bitbang PCIe; last time I checked those cost around $5.
Some of those can interface with almost any logic level & impedance.

The thing that concerns me is: why use a PC anyway? We could possibly implement the mining software on a cheap ARM, pair it with an Ethernet MAC IC, and be done with it.
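
To make that concrete, here is a rough sketch in Python of what the host side of "the mining software" amounts to: fetch work over the standard getwork JSON-RPC, hand the block header to the hashing hardware, and submit any winning nonce. The pool URL, credentials, and the three FPGA I/O helpers are placeholders, not any real board's interface; an actual ARM build would likely be C.

Code:
import base64, json, time, urllib.request

RPC_URL = "http://pool.example.com:8332"            # placeholder pool/bitcoind URL
AUTH = "Basic " + base64.b64encode(b"user:pass").decode()

def rpc(method, params=None):
    """Minimal JSON-RPC call, enough for getwork."""
    body = json.dumps({"method": method, "params": params or [], "id": 1}).encode()
    req = urllib.request.Request(RPC_URL, data=body,
                                 headers={"Authorization": AUTH,
                                          "Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req).read())["result"]

# --- placeholder FPGA I/O layer: depends entirely on the board design ---
def send_to_fpga(data_hex):          # push the getwork "data" field to the hardware
    pass
def read_nonce():                    # return a winning nonce, or None
    return None
def splice_nonce(data_hex, nonce):   # insert the nonce per getwork's byte-order rules
    return data_hex

while True:
    work = rpc("getwork")                   # returns {"data": ..., "target": ...}
    send_to_fpga(work["data"])
    deadline = time.time() + 30             # ask for fresh work every ~30 s
    while time.time() < deadline:
        nonce = read_nonce()
        if nonce is not None:
            rpc("getwork", [splice_nonce(work["data"], nonce)])   # submit result
            break
        time.sleep(0.1)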
rph
full member
Activity: 176
Merit: 100
November 10, 2011, 01:15:14 AM
PCI-e is just a pain in the ass for mining. You need gold-finger, controlled-impedance PCBs, a
flash PROM to configure the FPGA before the PC BIOS scans the bus, much more complex
interface logic in the FPGA, more complex SW, and possibly a $$$$ high-speed scope if
something goes wrong, just to provide insane amounts of bandwidth you don't even need.

Plus, putting 16 $150 BGAs onto a single giant PCB is a bad idea. 1 out of every 20-30 boards will have
a critical defect under 1 of the FPGAs, costing you either $2400 to scrap the board, or a bunch of time and
money to try to remove/reball/repair it. There's a good manufacturability reason to use only 1-2 FPGAs per PCB -
if something goes very wrong you're out $300, not $2000.
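
To put the 1-in-20-to-30 figure in perspective, a quick yield calculation; the per-BGA defect rate here is just an illustrative assumption, not a measured number.

Code:
# Probability that a board with n large BGAs has at least one critical
# assembly defect, assuming an illustrative ~0.2% defect rate per BGA.
def p_bad_board(n_bgas, p_per_bga=0.002):
    return 1 - (1 - p_per_bga) ** n_bgas

for n in (1, 2, 16):
    print(f"{n:2d} FPGAs/board -> {p_bad_board(n):.1%} of boards need rework or scrap")
# 16 BGAs at ~0.2% each -> ~3.1%, i.e. roughly 1 board in 30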

-rph
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 10, 2011, 01:06:27 AM
Have you priced out any of those backplanes?

Have you priced out the non-PCIe boards?  Hell, the dual Spartan-6 150 module (just the FPGAs and a couple MB of RAM) is $4000.  By that logic nobody can make a mining board for less than $2K. 

If a NIC can implement a PCIe interface for the same price as a USB interface, then obviously there isn't anything magical about FPGAs that adds $1500 to the cost of the board.  The prices are high because the margins are insane in semi-custom FPGA land.  The margins are equally high on that company's USB, compact flash, and serial modules too.  That doesn't tell us anything about the actual cost to implement. 
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 10, 2011, 12:12:02 AM
implementing the PCIe endpoint according to the free Xilinx instructions, a level shifter IC and a few CLBs

Yes, on the *T model chips that have an on-board PCIe interface and which cost something like 15% more.

Then you're stuck attaching the damn things to a motherboard... and confined to the computer cases which are available or can be constructed.

Far better to have a USB (or Ethernet, or RS-422, or anything _not_ PCIe) interface; then you can build an enclosure which meets your density and cooling needs optimally, e.g. nice linear airflow because your system is a single plane, and you can service hundreds of miner chips off a pair of standard rackmount servers.

I doubt you are going to build an enclosure including all data transfer, cooling, and power cheaper than bulk purchased 2U & 4U rackmount chassis.

Density is a non-issue if using PCIe cards.  You can radiate about 150W from an expansion slot using chassis-mounted fans and a passive heat sink.  That is how NVIDIA cools Tesla cards.  FPGAs are so power-efficient that 150W is something like 3GH/s.  

So there is no need to build an enclosure to "meet your needs".  The enclosure that meets your needs is a standard 4U server.  Put 16 FPGAs on a single PCIe card and you get 3.2GH/s per expansion slot.  4 cards per server works out to "only" 12.8GH/s per 4U server.  A standard data center rack could hold 128GH/s.  If for some reason you needed more capacity, they make single board computer systems with 12+ PCIe expansion slots in standard rackmount chassis.  So you get standardized and commoditized expansion, cooling and power distribution options.
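
The arithmetic behind those numbers, with the per-chip hash rate and servers-per-rack count as assumptions:

Code:
# Density figures used above: 16 FPGAs per PCIe card, 4 cards per 4U server,
# ~10 4U servers per standard rack. Per-chip hash rate is an assumption.
mhs_per_fpga = 200
fpgas_per_card = 16
cards_per_server = 4
servers_per_rack = 10

ghs_per_card = fpgas_per_card * mhs_per_fpga / 1000
ghs_per_server = cards_per_server * ghs_per_card
ghs_per_rack = servers_per_rack * ghs_per_server
print(f"{ghs_per_card:.1f} GH/s per card, {ghs_per_server:.1f} GH/s per 4U, "
      f"{ghs_per_rack:.0f} GH/s per rack")    # 3.2, 12.8, 128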

Granted, USB is easier to get working, but eventually cost-effective PCIe cards will come along.  PCIe isn't that expensive.  



If a $14 NIC can implement PCIe, eventually someone will figure out how to cost-effectively add PCIe support to $2000 worth of FPGAs.
legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 09, 2011, 05:54:09 PM
Why aren't there any PCIe cards with only a few FPGAs & some power converters on them?

That would be the most cost-effective solution, as modern FPGAs have native PCIe endpoints and PCIe even has a JTAG interface built in. All we need is drivers, a 2-layer PCB and a panel sheet to mount it.

Why use PCIe interfaces? USB 1.1 is much better.

How is it better?
With PCIe you save:
the USB chip, JTAG connector, power connector, programming cable
With USB you save:
implementing the PCIe endpoint according to the free Xilinx instructions, a level shifter IC and a few CLBs
legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 09, 2011, 05:40:15 PM
Something like that?

http://jchblue.blogspot.com/2009/08/pico-computing-fpga-cluster.html

16 Xilinx Spartan XC3S5000 FPGAs



One of these with 16 Spartan-6 LX150s would have to come close to 3 GH/s, a miner's wet dream if they were affordable. But most likely there's a huge premium on the chips themselves.
hero member
Activity: 518
Merit: 500
November 09, 2011, 05:22:32 AM
For a PCI expansion chassis with 13 slots at over $2,000... we have a problem; we would have to design all the hardware ourselves, including the PCI part.

Another solution I can think of is to design an FPGA circuit that is powered by USB; one PCI USB expansion card in the PC could feed many circuits with multiple FPGAs.



This is a quick sketch




I don't quite understand what you are saying. Why not just use USB FPGAs and USB hubs, etc.? I thought bitcoin mining is supposed to be "ghetto".
aTg
legendary
Activity: 1358
Merit: 1000
November 09, 2011, 04:50:31 AM
For a PCI expansion chassis with 13 slots at over $2,000... we have a problem; we would have to design all the hardware ourselves, including the PCI part.

Another solution I can think of is to design an FPGA circuit that is powered by USB; one PCI USB expansion card in the PC could feed many circuits with multiple FPGAs.



This is a quick sketch


hero member
Activity: 720
Merit: 528
November 09, 2011, 03:35:16 AM
Kinda hard to tell from the photo, but the FPGAs are on daughter boards.  There are 6 expansion slots on each side (2 are populated).  Now granted, they are high-end Virtex FPGAs with onboard memory and a PCIe x16 connector, so not economical for mining.  Still, the same kind of concept could be done for lower-end chips: a single large PCIe board with room for up to 12 daughter cards.  You can buy 1 board and 1 to 12 FPGAs depending on your budget.

I actually looked into that board. They sell a similar one with Spartan-6 LX150s on it. The daughter boards have 2 FPGAs each. The price per daughter board? Around $3000!

Here's the pic:


The backplane card itself is about $1,500. That means the fully loaded card costs about $20,000. That's 12 FPGAs, or 1.5-2 GH/s or so. All told, $10 per MH/s! If you only bought one daughterboard, it would be $4,500 for only about 300 MH/s!!
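
Running the numbers (the $1,500 backplane and $3,000-per-daughterboard prices are from above; the per-LX150 hash rate is an assumption):

Code:
# Cost per MH/s for the fully loaded backplane described above.
backplane_cost = 1500          # USD
daughterboards = 6             # 2 x LX150 each
daughterboard_cost = 3000      # USD each
mhs_per_lx150 = 160            # assumed

total_cost = backplane_cost + daughterboards * daughterboard_cost
total_mhs = daughterboards * 2 * mhs_per_lx150
print(f"${total_cost} for {total_mhs/1000:.2f} GH/s = "
      f"${total_cost/total_mhs:.2f} per MH/s")   # ~$19,500 -> ~$10 per MH/s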

Finding this out was the final straw that drove me to start working on an FPGA board specifically for bitcoin mining. It just seemed insane that they could get away with charging so much. I knew that it could be done for less. Unfortunately, part of the cost savings was eliminating the PCIe interface.

For us, USB is a natural decision, but I'm not opposed to other designs. If you really want to build a PCIe based system like that, and think others would be interested in it, I'd be happy to work with you to design it together. Let's talk!
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 04:35:45 PM
I researched this and I think it is quite a nice solution, but not that cheap unfortunately. E.g. will the costs be insane? Power distribution in the backplane? Bandwidth in the backplane, etc.? Thanks!

Yes, cost will be high.  Backplanes tend to run $300 to $1000+.  The single board computer (motherboard equivalent) runs another $200.  The way GPU economics work, there really is no advantage to putting more than 6 or so GPUs per board.  The main advantage of a backplane would be the ability to put it all in a chassis, but the thermal load of 10+ GPUs makes that totally impossible, so there really is no point.
legendary
Activity: 1029
Merit: 1000
November 08, 2011, 04:30:56 PM
Nice looking monster ;) Price is probably somewhere in the stratosphere ;) And performance rather poor. A single Spartan XC3S5000 can maybe reach 50 MH/s (x16 = 800 MH/s).
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 04:24:54 PM
Something like that?

http://jchblue.blogspot.com/2009/08/pico-computing-fpga-cluster.html

16 Xilinx Spartan XC3S5000 FPGAs


I think something more like this ...


Kinda hard to tell from the photo, but the FPGAs are on daughter boards.  There are 6 expansion slots on each side (2 are populated).  Now granted, they are high-end Virtex FPGAs with onboard memory and a PCIe x16 connector, so not economical for mining.  Still, the same kind of concept could be done for lower-end chips: a single large PCIe board with room for up to 12 daughter cards.  You can buy 1 board and 1 to 12 FPGAs depending on your budget.

Look, I hope the FPGA authors/designers don't take this as bashing.  What they have accomplished is amazing. It has really brought the price point for FPGAs from "pie in the sky" to expensive but viable.  I am just saying long-term scalability will matter.  A board like this would allow someone to go from 1 to 48 FPGAs in a single server (200 MH/s to 9600 MH/s).  Everything other than the card would be standard data center stuff.
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 04:18:10 PM
Nobody seriously looking to buy FPGAs is looking to buy a single board and leave it at that.  Just like nobody runs a hashing farm today with a single GPU.  I mean, a single board running @ 200MH/s will generate roughly $0.60 per day.  Hardly something to get excited over. It will take tens of thousands of those FPGAs to displace GPUs as the dominant technology.  That won't be done by fifty thousand people buying a single board; it will be done by a couple hundred people buying hundreds of chips each, and that is easier done with more scalable solutions.
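
That $0.60/day follows from the usual expected-revenue formula; the difficulty and exchange rate below are rough late-2011 values, used only to reproduce the estimate:

Code:
# Expected daily mining revenue for a 200 MH/s board.
hashrate = 200e6          # hashes per second
difficulty = 1.1e6        # approximate network difficulty, Nov 2011
block_reward = 50         # BTC per block
btc_usd = 3.0             # approximate exchange rate, Nov 2011

btc_per_day = hashrate * 86400 / (difficulty * 2**32) * block_reward
print(f"{btc_per_day:.3f} BTC/day, about ${btc_per_day * btc_usd:.2f}/day")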

Regardless of whether you have 1 FPGA or a dozen of them, you still need a host, you still need a power supply, you still need to monitor it (time value).  It is a lot easier to justify one's time if generating $60 a day than $0.60 per day.  Like I said, I understand WHY FPGA mining is at the stage it is today.  You have to start somewhere, and serial/USB single-chip FPGA boards are an easier place to start.  However, just because we start here doesn't mean we will end up here.
aTg
legendary
Activity: 1358
Merit: 1000
November 08, 2011, 04:14:40 PM
Something like that?

http://jchblue.blogspot.com/2009/08/pico-computing-fpga-cluster.html

16 Xilinx Spartan XC3S5000 FPGAs

hero member
Activity: 518
Merit: 500
November 08, 2011, 04:10:34 PM
#99
I was thinking exactly that, but couldn't we start from here with that standard design for a rack?
I think that having many small modules with a single FPGA each is not efficient, both because of the spending on individual fans and especially because a single USB controller could handle an entire plate of FPGAs, so each module in the rack could be connected via USB to a hub and a computer within the same cabinet.

Agreed, but PCIe "solves" 4 problems:

1) power distribution
2) data connectivity
3) standardized mounting
4) server-sized cooling instead of individual board cooling

Sure, you could have larger boards, figure out a way to rig USB cables through a hub to the host, run custom power lines to each of them, and then figure out some non-standard method to securely mount and cool them.  However, using PCIe allows you to leverage existing technology like chassis with redundant midplane cooling, backplanes for securely mounting cards, and ATX motherboards for connectivity and power.  I don't think we will see PCIe solutions anytime soon, but on the other hand I can't imagine, if Bitcoin is around in 5 years, that the "solution" is a bunch of USB boards jury-rigged inside a case connected to a USB hub.

For example, take a look at this "industrial chassis":
http://www.adlinktech.com/PD/marketing/Datasheet/RK-440/RK-440_Datasheet_1.pdf

Notice the midplane fans designed to cool expansion cards, and the 18 expansion slots.  It uses a "single board computer" where the "motherboard" is actually mounted perpendicular to a backplane, just like any other expansion card.  This is the kind of setup used for other "industrial" servers like cable video multiplexing, high-speed network switching, digital signal processing, etc. 

Hey D&T:

Since you seem very knowledgeable on these damn backplanes, I have one question for you. Do you think they can be used for mining, etc.?

I researched this and I think it is quite a nice solution, but not that cheap unfortunately. E.g. will the costs be insane? Power distribution in the backplane? Bandwidth in the backplane, etc.? Thanks!
legendary
Activity: 1029
Merit: 1000
November 08, 2011, 04:08:25 PM
#98
An FPGA chip that can give a reasonable MH/$ (1?) costs at least $150. If you want to put 6 of them on one card, that's $900 for chips alone. The PCB and other necessary parts would be $300 more, and $300 for manufacturing and shipping. That's $1500 per card that can only mine, and achieves ~1.2 GH/s (using ~50W of power). That's quite a big price... When I started my adventure with Bitcoin I spent $1000 on a PC that can produce 800 MH/s, and that wasn't an easy decision... I did that because I also use the PC for work (writing programs). My child needs a new pair of boots, so I will never spend $1500 on some card that could be worthless in a few months... Probably most bitminers are in a similar situation... That's why there's no such card: too small a demand. Give me $2k and I will design such a card and make one prototype...
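
His bill of materials, written out (the ~200 MH/s per chip is the assumption behind the ~1.2 GH/s figure):

Code:
# Rough bill of materials for the hypothetical 6-chip card described above.
chips = 6
chip_cost = 150                  # USD per FPGA
pcb_and_parts = 300              # USD
manufacture_and_shipping = 300   # USD
mhs_per_chip = 200               # assumed

card_cost = chips * chip_cost + pcb_and_parts + manufacture_and_shipping
card_mhs = chips * mhs_per_chip
print(f"${card_cost} for {card_mhs/1000:.1f} GH/s -> ${card_cost/card_mhs:.2f}/MH/s")
print(f"GPU reference point: $1000 / 800 MH/s -> ${1000/800:.2f}/MH/s")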
donator
Activity: 367
Merit: 250
ZTEX FPGA Boards
November 08, 2011, 04:06:00 PM
#97
I don't think we will see PCIe solutions anytime soon, but on the other hand I can't imagine, if Bitcoin is around in 5 years, that the "solution" is a bunch of USB boards jury-rigged inside a case connected to a USB hub.

Development costs are much higher (driver development, ...). Due to the small quantities sold, this results in significantly higher prices for such boards.

However future solutions will look, unless you want to invest $100,000s they will be either ugly or cheap.

donator
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 03:48:12 PM
#96
I was thinking exactly that, but couldn't we start from here with that standard design for a rack?
I think that having many small modules with a single FPGA each is not efficient, both because of the spending on individual fans and especially because a single USB controller could handle an entire plate of FPGAs, so each module in the rack could be connected via USB to a hub and a computer within the same cabinet.

Agreed, but PCIe "solves" 4 problems:

1) power distribution
2) data connectivity
3) standardized mounting
4) server-sized cooling instead of individual board cooling

Sure, you could have larger boards, figure out a way to rig USB cables through a hub to the host, run custom power lines to each of them, and then figure out some non-standard method to securely mount and cool them.  However, using PCIe allows you to leverage existing technology like chassis with redundant midplane cooling, backplanes for securely mounting cards, and ATX motherboards for connectivity and power.  I don't think we will see PCIe solutions anytime soon, but on the other hand I can't imagine, if Bitcoin is around in 5 years, that the "solution" is a bunch of USB boards jury-rigged inside a case connected to a USB hub.

For example, take a look at this "industrial chassis":
http://www.adlinktech.com/PD/marketing/Datasheet/RK-440/RK-440_Datasheet_1.pdf

Notice the midplane fans designed to cool expansion cards, and the 18 expansion slots.  It uses a "single board computer" where the "motherboard" is actually mounted perpendicular to a backplane, just like any other expansion card.  This is the kind of setup used for other "industrial" servers like cable video multiplexing, high-speed network switching, digital signal processing, etc. 
aTg
legendary
Activity: 1358
Merit: 1000
November 08, 2011, 03:38:43 PM
#95
I am thinking scalability and density.  Eventually Bitcoin will move beyond hobbyist and open boards into high-density datacenter designs.  Getting a large number of GPUs into a rack-mount server is simply impossible due to the thermal load.  FPGAs make that possible someday. I see that as the endgame for FPGAs.  A PCIe board can supply power and data over a single connector.  It also makes a convenient way to mount multiple FPGAs inside a standardized chassis.  I would love someday to put an FPGA array in a co-location datacenter to reduce the risk of loss due to theft, power, fire, or damage.  

I was thinking exactly that, but couldn't we start from here with that standard design for a rack?
I think that having many small modules with a single FPGA each is not efficient, both because of the spending on individual fans and especially because a single USB controller could handle an entire plate of FPGAs, so each module in the rack could be connected via USB to a hub and a computer within the same cabinet.
hero member
Activity: 518
Merit: 500
November 08, 2011, 03:34:33 PM
#94
Why aren't there any PCIe cards with only a few FPGAs & some power converters on them?

That would be the most cost-effective solution, as modern FPGAs have native PCIe endpoints and PCIe even has a JTAG interface built in. All we need is drivers, a 2-layer PCB and a panel sheet to mount it.

Why use PCIe interfaces? USB 1.1 is much better.

I am thinking scalability and density.  Eventually Bitcoin will move beyond hobbyist and open boards into high-density datacenter designs.  Getting a large number of GPUs into a rack-mount server is simply impossible due to the thermal load.  FPGAs make that possible someday. I see that as the endgame for FPGAs.  A PCIe board can supply power and data over a single connector.  It also makes a convenient way to mount multiple FPGAs inside a standardized chassis.  I would love someday to put an FPGA array in a co-location datacenter to reduce the risk of loss due to theft, power, fire, or damage.  

A full-length board would be able to mount maybe 5 FPGAs for a half-height board and maybe 10 for a full-height board.  That creates some interesting datacenter-quality arrays.  A 2U server could mount 4 boards, or 20 FPGAs total, for ~4GH/s using maybe 300W for the entire system (at the wall).  A standard datacenter rack could hold 80GH/s and run on a single 30A 208V power connection.  The higher density would make things like remote power control and KVM over IP economical.
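
Checking the rack-level numbers (300 W and ~4 GH/s per 2U server from above; 20 servers per rack is an assumption):

Code:
# Rack-level check for the 2U server scenario described above.
w_per_server = 300           # W at the wall per 2U server
ghs_per_server = 4.0         # GH/s per 2U server (4 boards x 20 FPGAs)
servers_per_rack = 20        # assumed: 2U servers in a standard ~42U rack

rack_watts = servers_per_rack * w_per_server
rack_ghs = servers_per_rack * ghs_per_server
circuit_watts = 208 * 30     # capacity of a single 30A / 208V feed

print(f"{rack_ghs:.0f} GH/s drawing about {rack_watts} W; "
      f"a 30A/208V connection provides {circuit_watts} W")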

Too bad the demand is too low now. I think BFL Labs is a scam too. I mean, why go through all that development when the price of BTC could crash any day now and people would stop buying mining equipment, etc. Even in other industries FPGAs are almost never heard of. I never heard about FPGAs until Bitcoin.