
Topic: Minimalist Spartan6-LX150 board - page 3. (Read 49998 times)

donator
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 12:13:57 PM
#93
Why aren't there any pcie cards with only a few fpgas & some power converters on it?

That would be the most cost effective solution as modern fpgas have native pcie endpoints and pcie even has a jtag interface built in. All we need is drivers, a 2 layer pcb and a panel sheet to mount it.

Why use PCIE interfaces? USB1.1 is much better.

I am thinking scalability and density.  Eventually, if Bitcoin grows and flourishes, mining will move beyond hobbyists and garages full of open rigs of noisy cards into high-density, datacenter-capable designs.  Getting a large number of GPUs into a rackmount server is unviable due to the thermal load.  FPGAs may make that possible someday.  A PCIe board can supply power and data over a single connector, which makes deployment easier.  More importantly, it provides a way to securely mount multiple FPGAs using existing standards (off-the-shelf motherboards, rackmount chassis, ATX power supplies, etc.).

I would love someday to be able to put an FPGA array in a co-location datacenter to reduce the risk of loss due to theft, power failure, fire, or damage. A full-length board would be able to mount maybe 5 FPGAs at half height and maybe 10 at full height.  That creates some interesting datacenter-quality arrays.  A 2U server could mount 4 boards, 20 FPGAs (or more).  That is ~4GH/s on 300W in a 2U space.  A standard datacenter rack could hold 80GH/s of hashing power, run on a single 30A 208V power connection, and make things like remote power control, KVM over IP, and enterprise-grade redundant power supplies more economical.
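For what it's worth, the arithmetic behind those density figures works out as sketched below. The ~200 MH/s and ~15 W per-FPGA numbers are assumptions of mine, chosen only to be consistent with the ~4GH/s-on-300W claim above:

```python
# Back-of-envelope rack-density figures; per-FPGA numbers are
# assumptions consistent with the ~4 GH/s on 300 W claim above.
MHS_PER_FPGA = 200      # assumed MH/s per Spartan-6 LX150
WATTS_PER_FPGA = 15     # assumed W per FPGA, incl. conversion losses
FPGAS_PER_2U = 20       # 4 boards x 5 FPGAs

server_ghs = FPGAS_PER_2U * MHS_PER_FPGA / 1000   # GH/s per 2U server
server_watts = FPGAS_PER_2U * WATTS_PER_FPGA      # W per 2U server

servers_per_rack = 42 // 2                        # 2U servers in a 42U rack
rack_ghs = servers_per_rack * server_ghs
rack_kw = servers_per_rack * server_watts / 1000

print(f"per 2U:   {server_ghs:.0f} GH/s on {server_watts} W")
print(f"per rack: {rack_ghs:.0f} GH/s at {rack_kw:.1f} kW")
# A single 30 A, 208 V feed supplies 30 * 208 = 6.24 kW, roughly
# what a full rack of these would draw.
```

A full rack comes out around 84 GH/s at 6.3 kW, which matches the "~80GH/s on a single 30A 208V connection" estimate.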



hero member
Activity: 592
Merit: 501
We will stand and fight.
November 08, 2011, 12:02:24 PM
#92
Why aren't there any pcie cards with only a few fpgas & some power converters on it?

That would be the most cost effective solution as modern fpgas have native pcie endpoints and pcie even has a jtag interface built in. All we need is drivers, a 2 layer pcb and a panel sheet to mount it.

Why use PCIE interfaces? USB1.1 is much better.
donator
Activity: 980
Merit: 1004
felonious vagrancy, personified
November 07, 2011, 05:30:15 PM
#91
Why aren't there any pcie cards with only a few fpgas & some power converters on it?

Lack of demand.

Other than bitcoin, there are not many uses for large FPGAs outside of (1) development boards and (2) integration into a product of some kind (like a router) in which the user is not even aware that FPGAs are involved.  The second one is where Xilinx's large-device profits come from.

If you're buying a dev board, you're either an academic (in which case Xilinx cuts you a special deal) or you don't mind paying $4,000+ for a card with all sorts of doo-dads you'll never use since odds are the development costs of the product you're working on make expenses like this irrelevant.  That's why with each generation of chips you see Xilinx (or one of its partners) produce some uber-board with everything-and-the-kitchen-sink on it.  FWIW, many of these "kitchen sink" boards have PCIe interfaces.
legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 07, 2011, 04:54:00 PM
#90
Why aren't there any pcie cards with only a few fpgas & some power converters on it?

That would be the most cost effective solution as modern fpgas have native pcie endpoints and pcie even has a jtag interface built in. All we need is drivers, a 2 layer pcb and a panel sheet to mount it.
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 07, 2011, 04:45:31 PM
#89
FPGA is much better for me than GPUs because less heat and noise right now but the price and performance leaves much to be desired.

Guess you can say the GPUs are old, inefficient, powerful gas guzzler motors while the FPGAs are new electric vehicles to keep the car analogy going Smiley

However, they are getting close.  While GPUs may be cheap, there is a limit on how many can be powered by a single rig, and high-efficiency power supplies aren't cheap either.  A good price point for a GPU rig is $1 per MH/s.  FPGAs are getting closer every day.

GPU rig $1 per MH & 2MH/W @ $0.10 per kWh.
1GH rig = $1000 hardware cost + $438 per year.  Total cost over 3 years = $2314.

FPGA Rig (22MH/W @ $0.10 per kWh)
1GH rig (@ $2.50 per MH) = $2500 hardware costs + $40 per year.  Total cost over 3 years = $2620.
1GH rig (@ $2.00 per MH) = $2000 hardware costs + $40 per year.  Total cost over 3 years = $2120.
1GH rig (@ $1.50 per MH) = $1500 hardware costs + $40 per year.  Total cost over 3 years = $1620.

Given FPGAs' massively lower operating costs, if they even get close to GPUs on price they are the smart place to deploy new hardware.
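Those totals can be reproduced in a few lines, using the post's own assumptions ($0.10/kWh, 24/7 operation, 3-year horizon; the helper name is mine):

```python
# Reproduces the 3-year total-cost-of-ownership figures above:
# hardware plus round-the-clock electricity at $0.10 per kWh.
HOURS_PER_YEAR = 24 * 365
KWH_PRICE = 0.10

def total_cost(mhs, dollars_per_mh, mh_per_watt, years):
    """Hardware cost plus electricity for a rig hashing `mhs` MH/s."""
    hardware = mhs * dollars_per_mh
    watts = mhs / mh_per_watt
    electricity_per_year = watts / 1000 * HOURS_PER_YEAR * KWH_PRICE
    return hardware + electricity_per_year * years

gpu_rig = total_cost(1000, 1.00, 2, 3)    # $1000 + 3 x $438  = $2314
fpga_rig = total_cost(1000, 2.50, 22, 3)  # $2500 + 3 x ~$40 ~= $2620
print(f"GPU: ${gpu_rig:.0f}   FPGA: ${fpga_rig:.0f}")
```

At $2.50/MH the FPGA rig costs more over 3 years, but the crossover comes quickly as the hardware price falls, since electricity is almost negligible at 22MH/W.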
hero member
Activity: 518
Merit: 500
November 07, 2011, 04:30:36 PM
#88
FPGA is much better for me than GPUs because less heat and noise right now but the price and performance leaves much to be desired.

Guess you can say the GPUs are old, inefficient, powerful gas guzzler motors while the FPGAs are new electric vehicles to keep the car analogy going Smiley
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 05, 2011, 02:30:52 PM
#87
Well I am not looking to buy until January.  I am disappointed you aren't looking to make a commercial run.  Still hopefully you make your personal run, learn some things and come back with a "round 2" offering.

When it comes to mining I always think big.  I replaced my whole hodgepodge collection of GPUs with nothing but 5970s because I like the density (2.2GH/rig without extenders or dual power supplies).  That kind of thinking let me get 10GH/s in my garage.  I like your board design because, once again: density.

10GH would be ~50 FPGA boards.  Now I have no intention of buying 50 boards all at once, but I also like to plan for the "end game".  50 boards lying around, connected by a rat's nest of USB cables, doesn't seem appealing to me.  Maybe it is my time working in a datacenter or maybe it is just OCD, but I like to see everything in its place.

Your design seems to have higher density and provide for more efficient racking.  One backplane w/ 6 cards ~= 1.2GH/s.  If you ever offered a larger backplane of say 10 cards powered by a single PCIe connector, well, that is even more interesting.  Take a 4U server case, put 2 backplanes and a tiny ITX motherboard in it.  A low-end 500/600W power supply w/ 2 PCIe power connectors could power the entire thing.  20 cards, or ~4GH/s, in a single 4U case.  Power density would even be low enough to rack these things up in a standard datacenter rack.

Anyways, even if you don't sell any in the near future I hope you keep at the project. If you make changes for "round 2", think about density.  It is the advantage you have over other designs, an advantage some would be willing to pay a slight premium for.  Open-air rigs (either GPU or FPGA) are fine for now, but the "end game" goal would be high hashing density in standardized server cases.  Nobody wants a garage or office full of whirling, noisy open-air rigs; they just happen to be the most efficient.  GPUs likely will never work in a standard case due to the high thermal load, but FPGAs ... might.
donator
Activity: 980
Merit: 1004
felonious vagrancy, personified
November 05, 2011, 01:54:11 PM
#86
So what is the release date?

Sorry, I wasn't aware that the title could be altered!  I've updated it now.

Got a question about the backplane.  You say it has a 48W power supply

I've switched to a 72W supply.

but also it has a SATA power connector?  By PSU do you mean it steps down the voltage from the SATA power connector to what is required for each board?

Exactly.  Technically it is a "DC-to-DC point of load switching regulator."

Nice part is that the 72W supply can feed from either +5V or +12V (the old 48W supply could only use +12V).

Any discounts if someone buys a backplane + 6 boards?

At the moment I am neither taking orders nor announcing a ship date nor guaranteeing that either of these things will happen.  If you have immediate need for an FPGA mining device I suggest you look into the fine offerings by fpgaminer/ngzhang or ztex (or rph although I think he said he's not selling his).

How heavy are the boards?  Your photo has the backplane "down" and the cards plugged into it.  Would there be an issue if the backplane was mounted vertically and the cards acting like "shelves" or would that put too much pressure on the connector.

That works fine.  Those connectors are seriously heavy-duty stuff.  Unfortunately they're expensive too: even in qty50 I still had to pay $2.60 per board for each pair of connectors (male+female).  But there's almost no voltage drop across the gigantic pins and they can carry more current than I'll ever need.
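As a side note, the reason those connector pins matter is plain from I = P/V. A trivial sketch, assuming (hypothetically) the full 72W budget drawn through a single input rail:

```python
# Why heavy-duty pins matter: input current for the 72 W supply
# budget, assuming (hypothetically) the whole load on one rail.
P_WATTS = 72.0
for volts in (5.0, 12.0):
    amps = P_WATTS / volts            # I = P / V
    print(f"+{volts:.0f} V feed: {amps:.1f} A")
# Feeding from +5 V means 14.4 A through the connector, versus
# only 6 A at +12 V -- and contact-resistance loss scales as I^2 * R.
```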

donator
Activity: 1218
Merit: 1079
Gerald Davis
November 04, 2011, 08:22:23 AM
#85
So what is the release date?  You may want to update the title so people don't think the project is dead.

Got a question about the backplane.  You say it has a 48W power supply but also it has a SATA power connector?  By PSU do you mean it steps down the voltage from the SATA power connector to what is required for each board?

Any discounts if someone buys a backplane + 6 boards?
How heavy are the boards?  Your photo has the backplane "down" and the cards plugged into it.  Would there be an issue if the backplane was mounted vertically and the cards acting like "shelves" or would that put too much pressure on the connector.  Just getting some ideas on density and mounting.  I would want to have them enclosed in a case.
staff
Activity: 4284
Merit: 8808
November 04, 2011, 08:15:03 AM
#84
You know FPGA mining is becoming legit, when 2-3 vendors are trying to snipe customers from each other's threads.  Roll Eyes

I prefer the kind of evidence where the margins get down to 20% over COGS. Wink
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 04, 2011, 07:55:45 AM
#83
Haha, very true! Catfish, the truth is that all of the FPGA mining products you see here can be run by anyone who has managed to mine on a GPU. In fact, I think they are even easier to use (less complicated driver installs, overclocking, fan speeds, etc., and no need to even open up your tower to install it).

Yeah, I'd agree with that. I just hope you guys don't sell so many that the difficulty becomes driven by FPGAs instead of GPUs.
Create some OPEC-style quotas or something..

-rph


Well, given that FPGAs have a long hardware payoff period, I don't see FPGAs putting much downward pressure on prices.

It will, however, put a floor under hashing power.  GPUs are very dependent on electricity prices.  $3 per BTC at current difficulty translates into a break-even of roughly $0.15 per kWh on even the most efficient GPUs.  Thus people whose electrical price is above break-even tend to quit, pushing hashing power & difficulty down.

FPGAs, however, once bought, are a sunk cost and have an electrical cost of <$0.50 per BTC, meaning that they will likely continue to run no matter what.  The FPGA portion of hashing power already purchased (currently ~0%) will be "immune" to price changes.  What that means is that as the FPGA portion grows, the relationship between price/difficulty and hashing power will become less linear.

Even if prices spike I don't see a massive rush to buy FPGAs, but rather a slow, continual rollout.  The long hardware payback period will make miners more cautious.  As an example, when BTC prices hit $30 at 1.5M difficulty it became a no-brainer to buy more GPUs, unsustainable as that was.  The payback period was something like 40 days: if you mined for 40 days you could pay off a card.  FPGAs, however, would still need a significant period of time to pay off the hardware, so price spikes will have less influence on sales.


It will be an interesting dynamic to watch because I am sure 2012 will be the year of the FPGA.
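The payback arithmetic behind that "~40 days" figure can be sketched with a rough model: expected hashes to solve a block is difficulty × 2^32, and the block reward at the time was 50 BTC. The card numbers below are hypothetical, just ballpark figures for a late-2011 GPU:

```python
# Rough payback-period model behind the "~40 days" figure above.
# Expected hashes per block = difficulty * 2**32; the block reward
# was 50 BTC at the time. Card numbers are hypothetical.
BLOCK_REWARD = 50
HASHES_PER_DIFFICULTY = 2 ** 32

def payback_days(hw_cost, mhs, btc_price, difficulty,
                 watts=0.0, kwh_price=0.10):
    """Days of mining needed to recoup hw_cost (ignores difficulty growth)."""
    hashes_per_day = mhs * 1e6 * 86400
    btc_per_day = hashes_per_day / (difficulty * HASHES_PER_DIFFICULTY) * BLOCK_REWARD
    net_per_day = btc_per_day * btc_price - watts / 1000 * 24 * kwh_price
    return hw_cost / net_per_day

# A hypothetical ~600 MH/s, ~250 W card costing $500, with BTC at
# $30 and difficulty 1.5M, pays back in roughly 40-45 days:
days = payback_days(500, 600, 30, 1.5e6, watts=250)
print(f"{days:.0f} days")
```

Plugging FPGA numbers into the same function ($2+/MH hardware, ~22MH/W) gives a payback measured in many months even at spiked prices, which is the asymmetry the post describes.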
rph
full member
Activity: 176
Merit: 100
November 04, 2011, 03:39:22 AM
#82
Haha, very true! Catfish, the truth is that all of the FPGA mining products you see here can be run by anyone who has managed to mine on a GPU. In fact, I think they are even easier to use (less complicated driver installs, overclocking, fan speeds, etc., and no need to even open up your tower to install it).

Yeah, I'd agree with that. I just hope you guys don't sell so many that the difficulty becomes driven by FPGAs instead of GPUs.
Create some OPEC-style quotas or something..

-rph
hero member
Activity: 720
Merit: 528
November 03, 2011, 10:40:10 PM
#81
You know FPGA mining is becoming legit, when 2-3 vendors are trying to snipe customers from each other's threads.  Roll Eyes

Haha, very true! Catfish, the truth is that all of the FPGA mining products you see here can be run by anyone who has managed to mine on a GPU. In fact, I think they are even easier to use (less complicated driver installs, overclocking, fan speeds, etc., and no need to even open up your tower to install it).
rph
full member
Activity: 176
Merit: 100
November 03, 2011, 10:13:57 PM
#80
You know FPGA mining is becoming legit, when 2-3 vendors are trying to snipe customers from each other's threads.  Roll Eyes

-rph
donator
Activity: 367
Merit: 250
ZTEX FPGA Boards
November 03, 2011, 09:33:58 AM
#79
But I am *more* than interested in acquiring a board filled with FPGAs (i.e. 5 daughterboards in the backplane?) - under the conditions that:

Maybe this is what you are searching for: https://bitcointalksearch.org/topic/ztex-usb-fpga-modules-115x-and-115y-215-and-860-mhs-fpga-boards-49180

Quote
a) The kit is assembled to the point where the end-user (i.e. me) doesn't need to do any soldering more complicated than, say, splicing a custom connector to a PC standard PSU. I'm not an EE, not even an electronics hobbyist, and do NOT want to fuck up $1k with a clumsy soldering iron;

No soldering is required. A description of how a standard ATX PSU can be modified (without soldering Smiley) to power a rig can be found in the initial post of the topic mentioned above.

Quote
b) Getting the FPGAs mining away (pool or solo) is easy enough for a general-purpose software hacker and doesn't require EE knowledge. I mainly run Macs (because they're Unix with MS Office and work well) but all my mining rigs are Linux. I'd like to have my FPGA rig controlled by a Mac Mini or my old G4 Cube (CPU arch may cause problems if x86 libs are needed, though). I've only got 30 years coding experience but the lowest level code I know is C - unrolling loops and VHDL are *well* outside my skillset and I don't have time to learn;

The software (see http://www.ztex.de/btcminer) is ready-to-use and runs on Linux. Rigs can be controlled by a single instance using the cluster mode. Hot-plugging is supported too.

Quote
c) Apart from the peripheral software, everything is pre-loaded and coded. I am not familiar with FPGAs but know that the dev kit for the units talked about here costs a fortune. I won't be tuning the code and re-loading it onto a set of 5 FPGAs, so I don't want or need that cost, but I need it to run as soon as I plug in a cable and ping some control commands down the cable;

The bitstream (and firmware) is compiled and ready to use. Firmware and bitstream are uploaded by the software through USB. No JTAG programming cable or the like is required.

Quote
d) The code loaded onto the FPGAs is *reasonably* efficient and not hugely sub-optimal. I don't want to spend a grand, and then find out in a couple of months about new bitstreams for the FPGAs I own... which would double my hashrate if I could re-program the things. I don't know how to do that, and I assume the SDK is needed too. From what I've read, this will not be a problem as the FOSS logic and all the proprietary optimisations aren't miles away from each other in speed?

The software typically achieves 190MH/s per XC6SLX150-3 FPGA.

Quote
d) ALL necessary cables are included - if they're custom then I'm happy to make them, but you HAVE to include the plugs / sockets because they may not be easily available to me in the UK (and if the connectors have 10+ pins then I'd prefer to pay for pre-made cables);

Only standard cables (which can be purchased on the internet) are required.

Quote
e) You are happy to ship to the UK. I will assume trust once I've spoken to you via email so am happy to provide payment up-front so long I feel everything is legit. I won't waste your time.

The items ship from Germany, i.e. unless you have a valid VATIN you have to pay 19% German VAT. (But if you import from outside the EU you also have to pay UK import VAT.)
brand new
Activity: 0
Merit: 250
October 31, 2011, 11:51:17 AM
#79

You are missing my point - I have no idea why the developers have not developed good CUDA code - I only speculated about one of the possible reasons.


You are missing the point.  There are VERY GOOD CUDA miners.  It is unlikely any future CUDA miner would get more than 10% more performance out of existing cards.

Nvidia hardware just happens to be ill-suited for integer math (the math used in hashing).
Quite. The CUDA developers *have* developed good code. The hardware architecture is simply not as well-suited to the application as the ATI hardware architecture.

It's a bit like saying back in the pre-GPU days that my old quad G5 was feck-off fast at the FFTs done by the Seti project because only the best programmers could afford PowerMac G5 Quads. Yeah, those machines were silly-money, but the best programmers go where the best pay is (unless they already have enough and work for fun), and optimising code for voluntary projects on minority platforms like the old pre-Intel Mac is *not* where the big money was...

It's off-topic and potentially flamebait, but it appears that people who know more about code and hardware architecture than I do rate Nvidia more highly (elegance, quality drivers, etc.) than ATI/AMD. Given the appalling issues I've had with ATI GPUs in building my little bitcoin farm (one vendor, in three purchases totalling 10 cards, managed to send one DOA card each purchase, and one of the originally-working cards has now died. These were *all* XFX brand, so perhaps the XFX versions of Nvidia GPUs may be of similarly poor quality), I really can't tell whether ATI have bad driver code and poor hardware design, or whether OEMs are making a sow's ear out of a silk purse. I don't have any Nvidia kit - even my many Macs use ATI GPUs now.

The nightmare of ATI's Linux drivers and the 6950 cards showed that there's some funny business in the drivers - funny business consuming developer time that would be better spent fixing bugs and increasing reliability. But the AMD hardware approach is **SO** much more appropriate to bitcoin-mining OpenCL kernels that the whole ATI/Nvidia thing boils down to one thing.

Luck. Bitcoin mining is the 'killer app' for ATI's stream processor approach (at least in the 5xxx and 6xxx cards). That's just luck - there's nowhere NEAR that disparity in performance between the two platforms on their intended applications - games - otherwise Nvidia would be out of business. And if AMD's new 7xxx cards move away from simple-but-plentiful massively-parallel stream processors, you'll find that the older cards are STILL faster than the new ones. So far, I'm getting better performance from my 'outdated' 5850 cards than even the fastest 6950 I own, and the 6950 required jumping through LOADS of hoops. Oddly enough, the 'obsolete' 5850 cards are still being sold new in the UK for well over £200 - that's 'new release' pricing...
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 31, 2011, 12:41:46 PM
#78
Not speaking to the architectural differences between Nvidia and AMD, but XFX is generally a lower-cost OEM, so a higher DOA rate doesn't surprise me.  They have good warranties, though.  My impression (via dead cards and sometimes illogical BIOSes) is that they are the Kia Motors of video cards.
brand new
Activity: 0
Merit: 250
October 31, 2011, 03:31:43 AM
#78
"My FPGAs won't lose 30% overnight due to some Goldman Sachs bullshit."

I am tempted to make this my new .signature
GS *own* the US Treasury, and whilst the USD is accepted as the global reserve currency (and energy, aka oil, is priced in said dollars), your FPGAs could be made utterly *useless* if GS decided the world's financial system needed revolutionary change... don't underestimate 'em.

Anyway that was entirely off-topic. I'm running an inefficient-ish but awfully good fun 7 GH/s system made from DIY store £12 flat-packed shelving units. I have not needed to turn on the central heating boiler in my UK house because the mining rigs are behaving like large-format fan heaters Cheesy

I could fit a LOT of your FPGA boards onto one of my shelf rigs. I doubt I could afford to - looks like I could get 120 of the FPGA units plus power and cooling done elegantly on the shelf unit!

But I am *more* than interested in acquiring a board filled with FPGAs (i.e. 5 daughterboards in the backplane?) - under the conditions that:

a) The kit is assembled to the point where the end-user (i.e. me) doesn't need to do any soldering more complicated than, say, splicing a custom connector to a PC standard PSU. I'm not an EE, not even an electronics hobbyist, and do NOT want to fuck up $1k with a clumsy soldering iron;
b) Getting the FPGAs mining away (pool or solo) is easy enough for a general-purpose software hacker and doesn't require EE knowledge. I mainly run Macs (because they're Unix with MS Office and work well) but all my mining rigs are Linux. I'd like to have my FPGA rig controlled by a Mac Mini or my old G4 Cube (CPU arch may cause problems if x86 libs are needed, though). I've only got 30 years coding experience but the lowest level code I know is C - unrolling loops and VHDL are *well* outside my skillset and I don't have time to learn;
c) Apart from the peripheral software, everything is pre-loaded and coded. I am not familiar with FPGAs but know that the dev kit for the units talked about here costs a fortune. I won't be tuning the code and re-loading it onto a set of 5 FPGAs, so I don't want or need that cost, but I need it to run as soon as I plug in a cable and ping some control commands down the cable;
d) The code loaded onto the FPGAs is *reasonably* efficient and not hugely sub-optimal. I don't want to spend a grand, and then find out in a couple of months about new bitstreams for the FPGAs I own... which would double my hashrate if I could re-program the things. I don't know how to do that, and I assume the SDK is needed too. From what I've read, this will not be a problem as the FOSS logic and all the proprietary optimisations aren't miles away from each other in speed?
d) ALL necessary cables are included - if they're custom then I'm happy to make them, but you HAVE to include the plugs / sockets because they may not be easily available to me in the UK (and if the connectors have 10+ pins then I'd prefer to pay for pre-made cables);
e) You are happy to ship to the UK. I will assume trust once I've spoken to you via email so am happy to provide payment up-front so long I feel everything is legit. I won't waste your time.


I can see how this technology may be a bit of a ball-ache to sell to a 16-yr-old Windows PC 'extreme gaming' enthusiast (no offence to said group, of course) due to the level of support required. However, if it's 'plug and play' to the extent that a reasonably old hacker can get working without ever getting into electronics, please let me know the price.

If running a grid of these FPGAs on your cool backplane (with gold anodised heatsinks, or anything that takes my fancy) gets a respectable hashrate (let's be very pessimistic and say 100 MH/s per FPGA, so half a gig for the rig) then I want one purely for the cool-factor...


Incidentally, whilst this will get the real EEs sneering at me here, what made me post up a firm request for quote (and if you want to sell me one, because you think I'll be able to get it running without drowning you in support emails, then I am a serious buyer) was how the design LOOKS. Yes, a competitor has questioned one aspect of the design on technical terms. I'm not qualified to comment, but the board looks tidy, elegant and with that heatsink, just really cool.

The ultra-low-cost FPGA solution (bake your own in a skillet!) thread impressed me hugely, but the complete solution is a mess of boards and wires. At the prices being quoted for these kits (you're all stuck by the cost of one major component), elegance is a massive value-add for anyone who considers industrial design important.

Hell, I'd put one board horizontally in the viewable area underneath my G4 Cube if I could cool the whole thing (and the Cube is souped up).


The only questions still vexing me are whether you'd sell one to the UK, whether that damn SDK is required (I can't call myself an academic, unless you consider professional financial qualifications 'academia'), and whether it really is just a case of plugging everything together, sticking a USB cable into a spare Mac or Linux box, and writing some code to send commands down the USB cable. If so, I'm in.

(and if I get stuck, my ex-VHDL-consultant mate would probably help, he's got a Stratix 3 dev board at home for teh lulz)
donator
Activity: 1218
Merit: 1079
Gerald Davis
October 11, 2011, 12:12:40 PM
#77

You are missing my point - I have no idea why the developers have not developed good CUDA code - I only speculated about one of the possible reasons.


You are missing the point.  There are VERY GOOD CUDA miners.  It is unlikely any future CUDA miner would get more than 10% more performance out of existing cards.

Nvidia hardware just happens to be ill-suited for integer math (the math used in hashing).
sr. member
Activity: 404
Merit: 250
October 11, 2011, 11:28:10 AM
#76
This is all true. Just look at the shader counts between NVidia and AMD cards and you have your answer. The processors (shaders) have to do the work, and NVidia cards don't have as many.