
Topic: Modular FPGA Miner Hardware Design Development - page 33.

member
Activity: 70
Merit: 10
Given the title of this thread this might be a bad question, but here goes: why make the design modular at all? I understand the intention of keeping costs down, especially when upgrading hardware. But wouldn't different kinds of hardware (=different FPGAs) require different power lines, and possibly different methods of uploading a bitstream?

If one truly wants to define a universal bus that connects different FPGA daughter boards, then the bus connector on the main board could provide some common voltages and a common (probably serial) interface. But it may still be necessary for the FPGA board to produce other voltages locally, and if the bus protocol is not compatible with the FPGA's bitstream loading, an additional CPLD is needed on each daughter board.

Let's put numbers to the case. The prices I give are in EUR for prototype quantities (=1). I may not have found the best supplier, either.

1x Xilinx Spartan 6 XC6SLX75FGG484  = 68EUR
1x LMZ12002TZ-ADJ for 5V rail       =  8EUR
2x LMZ10503TZ-ADJ for 1.2V and 2.5V = 17EUR total
various small parts                 ~ 5-10EUR

So the power supply for a single daughter board can be up to 50% of the price of one simple FPGA. Daughter boards with multiple FPGAs are therefore more cost-efficient.
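To make that arithmetic explicit, here is a quick check as a small C snippet; the "misc" figure is the midpoint of the 5-10EUR estimate above:

Code:
#include <stdio.h>

int main(void)
{
    double fpga   = 68.0;   /* XC6SLX75FGG484                */
    double psu_5v =  8.0;   /* LMZ12002TZ-ADJ, 5V rail       */
    double psu_lv = 17.0;   /* 2x LMZ10503TZ-ADJ, 1.2V/2.5V  */
    double misc   =  7.5;   /* small parts, midpoint of 5-10 */

    double psu = psu_5v + psu_lv;               /* 25 EUR */

    printf("PSU alone vs. FPGA:  %.0f%%\n", 100.0 * psu / fpga);
    printf("PSU + misc vs. FPGA: %.0f%%\n", 100.0 * (psu + misc) / fpga);
    return 0;
}

This prints roughly 37% and 48%, which is where the "up to 50%" figure comes from.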

In light of that I would propose making simple boards that contain one host interface (of a kind still to be specified) and as many FPGAs as the budget allows. This basically means making only daughter boards and no main boards at all. The pros and cons:

Cons:
  • If no additional power supplies are needed on the daughter boards, then my argument from above does not hold. In that case the modular approach can offer the cost advantages promised, but this can only be true for a specific class of daughter boards.
  • The host interface is needed on every board (cost perhaps around 5-15EUR)

Pros:
  • No error-prone and/or costly connectors (3EUR per connector)
  • No need to specify a universal interface that works across different FPGAs or later ASICs: just rewrite the firmware of the host interface for each type of board
  • No additional adapter logic on daughter board (maybe 5EUR)
  • Quicker design time: no motherboard needed

Before this thread was started, I looked into making something like this. While the original suggestion (http://forum.bitcoin.org/index.php?topic=9047.msg280549#msg280549) was based on a PIC18F97 because of its Ethernet capability, I thought of an FT2232H because it can operate as both a JTAG and an SPI master and needs minimal programming. The board is therefore not completely standalone but a USB slave. This is a work in progress. I started with a board for a single FPGA to test whether I can get it to work. More FPGAs can be added to the design later: they are connected in two serial chains, via JTAG and via SPI. The SPI bus is used both for configuration and later for communication.
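Since the SPI chain is meant to serve for both configuration and communication, here is a purely hypothetical sketch of what the host-side framing could look like. spi_select(), spi_transfer(), the command bytes and the message layout are all stand-ins for illustration, not the actual work-in-progress design:

Code:
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CMD_LOAD_WORK  0x01  /* hypothetical command bytes */
#define CMD_READ_NONCE 0x02

/* Stand-ins for the FT2232H/MPSSE layer (assumptions, not a real API). */
static void spi_select(int fpga) { (void)fpga; /* drive CS via GPIO */ }
static void spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len)
{
    (void)tx; (void)rx; (void)len; /* clock bytes out/in over MPSSE */
}

/* Push one unit of work: SHA-256 midstate plus remaining block data. */
static void load_work(int fpga, const uint8_t midstate[32], const uint8_t data[12])
{
    uint8_t frame[1 + 32 + 12];
    frame[0] = CMD_LOAD_WORK;
    memcpy(&frame[1], midstate, 32);
    memcpy(&frame[33], data, 12);
    spi_select(fpga);
    spi_transfer(frame, NULL, sizeof frame);
}

/* Poll one FPGA for a winning nonce; returns 1 if it found one. */
static int read_nonce(int fpga, uint32_t *nonce)
{
    uint8_t tx[6] = { CMD_READ_NONCE }, rx[6] = { 0 };
    spi_select(fpga);
    spi_transfer(tx, rx, sizeof tx);
    if (!rx[1])    /* status byte: nothing found yet */
        return 0;
    *nonce = (uint32_t)rx[2] << 24 | (uint32_t)rx[3] << 16
           | (uint32_t)rx[4] << 8  | (uint32_t)rx[5];
    return 1;
}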

Design software: Cadsoft Eagle 5.1
Current status: Schematic finished, routing started, not tested at all!
URL: https://rapidshare.com/files/534181114/single.zip

Comments are very welcome!

PS: I looked at the datasheet of the PIC18F97 and it makes me very sure that I would want someone else to develop the code.
legendary
Activity: 1270
Merit: 1000
Well, SO-DIMM connectors do not need drilled holes in the PCB, that's correct, but I guess the connectors in turn are more expensive; the same goes for PCI-Express sockets and so on. Look on eBay at the lowest-cost boards: they use standard IDC connectors, which I think would be the most economical solution. Using receptacles with long pins would allow stacking more than one board, as in the PC/104 standard.

By the way, there are PCI-Express interface chips from Gennum for $15 or so, but that would require a PC to be running. A standalone solution could reduce power consumption further: even if an Atom needs only 10 W, I have an ARM board that draws less than 2.5 W running Linux.

Edit: Even if the full design uses only 5 W, 95% or so of that power will be heating the FPGA. To get an idea of how much that is, take a small light bulb of just 5 W and try to cool it down so that you could touch the glass... There should be at least a passive heat sink, better in combination with some airflow. On the plus side, this would allow some more MHz. Of course, no cooling monster as for a GPU is needed...
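To put a number on the cooling question, here is a rough junction-temperature estimate; the theta_JA values are illustrative assumptions for a large BGA package, not datasheet figures:

Code:
#include <stdio.h>

int main(void)
{
    double p_diss     =  5.0;   /* W dissipated in the FPGA            */
    double t_ambient  = 25.0;   /* degrees C                           */
    double theta_bare = 10.0;   /* K/W, assumed: bare package, no sink */
    double theta_sink =  4.0;   /* K/W, assumed: heat sink + airflow   */

    /* T_junction = T_ambient + P * theta_JA */
    printf("bare package:      %.0f C\n", t_ambient + p_diss * theta_bare);
    printf("sink plus airflow: %.0f C\n", t_ambient + p_diss * theta_sink);
    return 0;
}

With these assumed values the bare package sits at about 75 C and the cooled one at about 45 C, which supports the point that a simple passive sink with some airflow is enough.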
sr. member
Activity: 410
Merit: 252
Watercooling the world of mining
OK, fine, so that is cleared up.

As you say, DIMM slots are less robust, but I would prefer the many lanes despite that.

As far as I know, the DE2-115 board with a full SHA-256 core uses something around 5 W, so cooling should be only a minor problem.
hero member
Activity: 784
Merit: 1009
firstbits:1MinerQ
No, I'm just talking about using the same connectors on the main board to mate with the plugins.

I'm not saying they would be actual PCIe standard, or bridge to PCIe, or anything; just using these connectors because they are more robust than DIMM slots. PCI is wider with more pins than PCIe x1, so it may provide more free pins for power.

Edit: Of course, I also really like the idea of using a surface-mount connector to get away from through-holes, which could push the cost up. It would be easier if we actually knew how much cooling and how much bulk a plug-in board would demand.
sr. member
Activity: 410
Merit: 252
Watercooling the world of mining
Maybe either PCI or PCIe x1 connectors would be better.

For small, light boards they can stand free, but heavier boards with fans could be mounted in a case (similar to ATX or not). PCIe x1 probably has enough signals for a bus here, but it may be good to have more pins for power.

Maybe I got you wrong here. For the mainboard, a PCIe connector might be a solution. But assuming an Altera Cyclone III (80k cells for one fully unrolled SHA-256 core) is 180 EUR and an SO-DIMM board plus passive components produced in China is something like 30 EUR, a local-bus-to-PCIe bridge would add another ~80 EUR, so I think we should stick to a classical bus system for the daughtercards.
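Making that cost argument explicit (figures are the rough estimates from the paragraph above):

Code:
#include <stdio.h>

int main(void)
{
    double fpga   = 180.0;  /* Altera Cyclone III, ~80k cells        */
    double board  =  30.0;  /* SO-DIMM PCB + passives, made in China */
    double bridge =  80.0;  /* local-bus-to-PCIe bridge, approx.     */

    double classical = fpga + board;            /* 210 EUR */
    double with_pcie = classical + bridge;      /* 290 EUR */

    printf("classical bus: %.0f EUR per card\n", classical);
    printf("PCIe bridge:   %.0f EUR per card (+%.0f%%)\n",
           with_pcie, 100.0 * bridge / classical);
    return 0;
}

The bridge alone adds roughly 38% to every daughtercard, which is why the classical bus wins here.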

In the case of power, I'm quite sure it is feasible to carry the current over the bus system or over additional pins on the card.
hero member
Activity: 784
Merit: 1009
firstbits:1MinerQ
Maybe either PCI or PCIe x1 connectors would be better.

For small, light boards they can stand free, but heavier boards with fans could be mounted in a case (similar to ATX or not). PCIe x1 probably has enough signals for a bus here, but it may be good to have more pins for power.

I'm not sure that the FPGAs are going to need such big fans, but it sounds like a good idea to have the option of making things more structurally sound if need be. We should stick with connectors that are mass-produced and easy to get at low cost.
legendary
Activity: 1270
Merit: 1000
Well, the XCV1000 were great parts in their time, but now they are a little outdated, and with the current open-source design they don't perform well. Besides that, Xilinx dropped support for them in their software, not to mention you would need a full version, which is $2000 or so...

The next point is, you don't need a chip that can hold a fully unrolled loop, but rather the chips with the best MHash/s per $ ratio, PCB space and support chips included. Then invest some time in adding some extra connectors and/or peripheral chips, so that after the mining is over you could sell the boards as overstock rather than only for scrapping the chips. As you point out, time is a factor, and buying semiconductor chips in quantity may involve long lead times.

While the SO-DIMM form factor looks nice, I would think a bigger form factor would be more appropriate. Using high-performance chips will require cooling equipment, which has an impact on mechanical stability.
newbie
Activity: 25
Merit: 0
Nice idea. I would be interested in buying such a thing. Every week this is ready earlier is money, so we need to speed up, I guess.

I'm no expert though... Do you have some parts in mind? I would guess that maybe some "cheap" SHA-256 chips exist, although they may be hard to find?

Edit: Searched a bit for modular FPGA units and found this on eBay: http://cgi.ebay.com/XILINX-XCV1000-4BG560C-FPGA-Virtex-LOT-60-PIECES-/170507774436 Too bad I don't know the FPGA scene at all...
sr. member
Activity: 410
Merit: 252
Watercooling the world of mining
So the first step will be to determine the requirements of this system, to provide a platform for the "Open Source FPGA Miner".


Therefore we should determine which number of LEs, or which chips from which manufacturers, would be needed to run the miner at minimum in its fully unrolled form.

And which additional hardware components (flash memory, EEPROM, power supply, bus system) are needed for the operation of the mainboard and the daughtercards.



This approach shall use standard stock components and no customised FPGA or ASIC chips, in order to get a prototype at reasonable cost.
Introducing chips customised for mining might be an addition to this concept in the very long run.
sr. member
Activity: 410
Merit: 252
Watercooling the world of mining
Hello everybody.

This thread's purpose is to develop a modular FPGA board system customised for the demands of the "Official Open Source FPGA Bitcoin Miner": http://forum.bitcoin.org/index.php?topic=9047.0

Basically this system should consist of one "Mainboard" housing the bus system and the IO connections, and then a variable number of daughtercards in a standard format like SO-DIMM or similar, each housing one or multiple FPGA chips.

These daughterboards should be designed to be variable in the FPGA chips used, so there might at first be a budget low-performance line, later a midrange card, and a high-performance range in a final stage.


This modular system would allow a wide range of Bitcoin participants to start with a minimum setup of the mainboard and one FPGA card, and to increase the number of FPGA cards according to their needs and financial capabilities.


So in order to bring this concept into reality, I invite everybody interested to contribute to this development process.

I hope we may be able to get to a prototype stage this year.


Status: Design active

Currently discussed:
- Layout of the first test board for debugging
- Firmware for the MSP430 IC



Design features which have been decided on

General

FPGA
(Currently a modified layout by Olaf.Mandel is to be used)


- The FPGA used on the prototype is the Xilinx Spartan-6 LX150 FGG484

- The current design is going to use 2 FPGAs per DIMM PCB at maximum.

- The prototype motherboard will hold 5 DIMM boards


Power supply

(Currently the power supply by li_gangyi is to be used)


- Each DIMM board features a wide-range input (~11-20 V) over either a Molex 8981 or a barrel 2.5/5.5 connector.

- In addition, the DIMM socket provides a 12 V rail supplied by an ATX PSU via the mainboard when in modular use.

- The voltage regulation providing the voltages needed for the components on the DIMM is located on the DIMM itself.
 

Communication

- Each board uses one USB mini-B connection in standalone operation, or the bus system via DIMM pins in modular operation.

- The MSP430 IC will be used for the bus (SPI) and for communication via USB (located on both daughter- and motherboard); see the firmware sketch below.
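A minimal sketch of what the MSP430 SPI-master setup could look like, assuming a 2xx/G2xx-family part with a USCI_B module; the pin mapping and clock divider are assumptions, since the actual firmware is still under discussion (see above):

Code:
#include <msp430.h>
#include <stdint.h>

void spi_init(void)
{
    UCB0CTL1 |= UCSWRST;                         /* hold USCI in reset       */
    UCB0CTL0  = UCMST | UCSYNC | UCCKPH | UCMSB; /* 3-pin SPI master, mode 0 */
    UCB0CTL1 |= UCSSEL_2;                        /* clock from SMCLK         */
    UCB0BR0   = 8;                               /* bit clock = SMCLK / 8    */
    UCB0BR1   = 0;
    P1SEL  |= BIT5 | BIT6 | BIT7;                /* route CLK/SOMI/SIMO      */
    P1SEL2 |= BIT5 | BIT6 | BIT7;                /* (G2xx pin mapping)       */
    UCB0CTL1 &= ~UCSWRST;                        /* release for operation    */
}

uint8_t spi_xfer(uint8_t byte)
{
    while (!(IFG2 & UCB0TXIFG)) ;                /* wait for TX buffer       */
    UCB0TXBUF = byte;
    while (!(IFG2 & UCB0RXIFG)) ;                /* wait for RX complete     */
    return UCB0RXBUF;
}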


Table of FPGA performance data (estimated) (by Olaf.Mandel)

New version of the table with lower Altera prices (assuming 1 USD = 0.6891 EUR):

Chip                       | Rate [MHash/s] | Power [W] | Price [EUR] | Rate/Price [MHash/s/EUR] | Rate/Power [MHash/J]
Altera EP4CE75F23C7N       | 109.29         | -         | 156.75      | 0.697                    | -
Altera EP4CE115F23C7N      | 80             | 4.4       | 271.79      | 0.294                    | 18.2
Altera EP4CE115F23C7N      | 109            | -         | 271.79      | 0.401                    | -
Xilinx XC6SLX75-3CSG484C   | ?              | ?         | 67.29       | ?                        | ?
Xilinx XC6SLX100-3CSG484C  | ?              | ?         | 83.86       | ?                        | ?
Xilinx XC6SLX150-3CSG484C  | ?              | ?         | 120.47      | ?                        | ?
Xilinx XC3S500E-5CPG132C   | 3.125          | 0.78      | 20.38       | 0.153                    | 4
Xilinx XC5VLX110-1FFG676C  | 120            | -         | 1126.51     | 0.107                    | -
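As a cross-check, here are the derived columns of the table recomputed from the raw numbers (only the rows where rate and price are known):

Code:
#include <stdio.h>

struct chip { const char *name; double rate, power, price; };

int main(void)
{
    const struct chip chips[] = {
        { "Altera EP4CE75F23C7N",      109.29,  0.0,   156.75 },
        { "Altera EP4CE115F23C7N",      80.0,   4.4,   271.79 },
        { "Altera EP4CE115F23C7N",     109.0,   0.0,   271.79 },
        { "Xilinx XC3S500E-5CPG132C",    3.125, 0.78,   20.38 },
        { "Xilinx XC5VLX110-1FFG676C", 120.0,   0.0,  1126.51 },
    };
    size_t i;

    for (i = 0; i < sizeof chips / sizeof chips[0]; i++) {
        printf("%-26s %.3f MHash/s/EUR", chips[i].name,
               chips[i].rate / chips[i].price);
        if (chips[i].power > 0.0)   /* power figure known? */
            printf("  %.1f MHash/J", chips[i].rate / chips[i].power);
        printf("\n");
    }
    return 0;
}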





If you have any objections or improvements in mind, please tell me!
 
