
Topic: Cairnsmore2 - What would you like? - page 6. (Read 11573 times)

legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 21, 2012, 10:23:02 PM
#29
Quote
I agree with a modular design that can be sold and then upgraded or added to. The $15,000 BFL entry point is just focking stupid, and actually it's bullshit because it kills the democratic/decentralized component of mining. You are now completely pushing out small-scale miners and any form of ingenuity. This $15,000 bullshit is just a money box and will be worth $0.50 if bitcoin goes to shit. It's the nuclear weapon of bitcoin: once it exists, you up the ante too much and screw everyone who can't afford one.

If you guys can produce a 4-5 Gh/s base product which could be upgraded to 20-25 Gh/s with additional modules added to the unit over time, that would be ideal.

Please beat out BFL so we can piss on their poor business practices and customer service. Mining is speculative enough without being forced to put your money out months in advance and then still get delayed.

OTOH, that $15k entry point means nothing if they're delivering that high a MH/$. That said, for a large-scale farm there is little difference between a $5k box and a $15k box. Especially since they've already sold something like 25 of them and no one even has one yet.
hero member
Activity: 535
Merit: 500
May 21, 2012, 08:48:28 PM
#28
I agree with a modular design that can be sold and then upgraded or added to. The $15,000 BFL entry point is just focking stupid, and actually it's bullshit because it kills the democratic/decentralized component of mining. You are now completely pushing out small-scale miners and any form of ingenuity. This $15,000 bullshit is just a money box and will be worth $0.50 if bitcoin goes to shit. It's the nuclear weapon of bitcoin: once it exists, you up the ante too much and screw everyone who can't afford one.

If you guys can produce a 4-5 Gh/s base product which could be upgraded to 20-25 Gh/s with additional modules added to the unit over time, that would be ideal.

Please beat out BFL so we can piss on their poor business practices and customer service. Mining is speculative enough without being forced to put your money out months in advance and then still get delayed.
member
Activity: 86
Merit: 10
May 21, 2012, 11:10:51 AM
#27
1) Cost: megahash per $
2) Backplane
3) Ethernet or USB; the box doesn't need to be self-sustaining, the controlling computer can live somewhere else.

I am not sold on the idea of it needing to fit in a computer rack. A lot of people use shelving units, floors, and benches to hold their computers. An open-air cage would be fine in my opinion.
hero member
Activity: 518
Merit: 500
May 21, 2012, 07:21:52 AM
#26
LOL at the guys talking about 120V.

For the US you have BFL and their crappy delays and excuses.

For the EU we have yohan, a real company that delivers.

Too bad we cannot import/export due to outrageous costs / VAT / duty and other gubbment crap.

I'd like to see someone in the UK import a BFL rig and somebody in the US import one of yohan's rigs.
sr. member
Activity: 476
Merit: 250
Keep it Simple. Every Bit Matters.
May 21, 2012, 04:28:02 AM
#25
I agree with the sentiment that a blade and/or modular design should work out quite well.
I did see some examples on this forum of someone who did something like that.
Not sure if it went into big production, but it certainly looked good and impressed many.

Upgrades and replacements could become more viable, allowing a low starting cost with the option to upgrade later.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 20, 2012, 11:39:59 PM
#24
BTW, re: programmable VRMs. You're right, Yohan, they're expensive.

So, what about just having a jumper we can change to swap out some of the power circuitry (resistors, etc.) so that it lowers the core voltage (Spartan-6 example figures) from 1.2 V to, say, 1.0 V, and cuts the clock rate by the same fraction (assuming FPGAs are stable at lower voltages; I have no experience with this on FPGAs, only on CPUs and GPUs).
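
The intuition: dynamic CMOS power scales roughly as P ~ f * V^2, so cutting voltage and clock together saves more power than it costs in hash rate. A back-of-envelope sketch (the 10 W / 200 MHz baseline figures are invented placeholders, not Spartan-6 specs):

Code:
# Back-of-envelope: dynamic CMOS power scales roughly as P ~ f * V^2.
# Baseline figures are illustrative placeholders, not measured numbers.

def scaled_power(p_base, v_base, f_base, v_new, f_new):
    """Estimate dynamic power after a voltage/frequency change."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

p_base = 10.0              # W per FPGA at baseline (placeholder)
v_base, v_new = 1.2, 1.0   # core voltage, V
f_base = 200.0             # MHz at baseline (placeholder)
f_new = f_base * (v_new / v_base)  # cut the clock proportionally

p_new = scaled_power(p_base, v_base, f_base, v_new, f_new)
print(f"power: {p_base:.1f} W -> {p_new:.1f} W")       # ~5.8 W
print(f"hash rate: {f_new / f_base:.0%} of baseline")  # ~83%
# Hash rate drops to ~83% but power drops to ~58%, so MH per joule
# improves by roughly 44% under these assumptions.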
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 20, 2012, 11:32:23 PM
#23
Quote
My advice to you, yohan:

Size each unit so that its fully-loaded configuration is about 960W. This will allow users to put exactly 2 of these units per 120V-20A circuit, or 4 per 240V-20A circuit, or 9 per 208V-30A 3-phase circuit (while not exceeding 80% of the circuit's maximum current rating). Rationale: in datacenter environments, users pay a fixed monthly price per circuit. A circuit not used at its maximum capacity (or a unit not fully loaded so as not to over-draw) is wasted money. BFL failed to follow this advice of mine when designing their 1250W mini rig.

This, so much. The typical DC gives you two 120V 20A circuits per rack, and you have to pay for more at prices that are insane (but still cheaper than residential and small-business electricity rates), and from what I've seen they max out at four 120V circuits per rack. So, if all we have is 80A of 120V, we have to make the most of it.

Given 1 unit per 10A, and each unit being, say, 4U tall, a 42U rack will hold 8 of these units if we put 1U of space between each one (which I've been told is normal for high-density units so they don't cook each other), i.e. 5U per unit. 8 * 10A = 80A, so we should be fine in an average DC, plus we have 2U to spare.
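
A quick sketch of that rack arithmetic (the 4U height, 1U gap, and 10A-per-unit figures are the assumptions stated above):

Code:
# Sanity-check the rack math: 4U units, 1U gaps, 10A each,
# against a 42U rack fed by 80A of 120V.
RACK_U = 42
UNIT_U = 4          # height of one unit
GAP_U = 1           # spacing so units don't cook each other
AMPS_PER_UNIT = 10  # at 120V
RACK_AMPS = 80      # four 20A circuits

slot_u = UNIT_U + GAP_U                      # 5U consumed per unit
units_by_space = RACK_U // slot_u            # 42 // 5 = 8
units_by_power = RACK_AMPS // AMPS_PER_UNIT  # 80 // 10 = 8
units = min(units_by_space, units_by_power)
spare_u = RACK_U - units * slot_u
print(units, "units,", spare_u, "U spare,", units * AMPS_PER_UNIT, "A drawn")
# -> 8 units, 2 U spare, 80 A drawn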
mrb
legendary
Activity: 1512
Merit: 1027
May 20, 2012, 11:21:24 PM
#22
My advice to you, yohan:

  • Definitely use Ethernet. Not USB: the maximum cable length is only 5 meters, and large-scale miners have racks spaced out by much more than 5 meters. Not cabled PCIe: it is overkill, too expensive, and its extra bandwidth is unnecessary.
  • If you put an embedded PC in the unit, use USB to link it to the internal FPGA boards. You don't necessarily have to use USB cables; instead you may want to design a backplane populated with SATA data and power connectors repurposed to carry the USB signal (over the SATA data connector) and power to the FPGA boards (over the SATA power connector). The boards would be plugged into the backplane, much like a SATA drive is plugged into the SATA backplane of an x86 server. Put a USB hub controller IC on the backplane to make it a USB switch (1 upstream link to the embedded PC, multiple downstream links to the FPGA boards). A SATA power connector is only rated for 4.5A on each of the 3.3V, 5V, and 12V lines, so I would suggest repurposing the otherwise-idle 3.3V and 5V rails to 12V, giving you a total of 13.5A at 12V, or 162W per connector, which should be sufficient for a board with up to 16 LX150s. BFL seems to be following a similar idea by carrying USB signals over SATA cables (not backplane connectors, though) in their mini rig.
  • Use 19" rack-mountable chassis, and make them at least 2U. Rationale: easier to cool, and bigger, more efficient fans can be installed in them, compared to the 41 mm fans in 1U chassis (this is why Facebook uses 1.5U chassis in their Prineville datacenter instead of 1U: http://opencompute.org/wp/wp-content/uploads/2011/07/Server-Chassis-Specifications.pdf )
  • Use commodity ATX power supply units, and allow users to purchase your chassis without PSUs. A lot of miners like myself have invested in efficient PSUs for their GPU rigs, and would like to reuse them. The 20/24-pin connector can power the embedded PC, while the 6- or 8-pin PCIe power connectors can power the backplane, which powers the FPGA boards.
  • Keep it simple, stupid. Fewer components means less chance of hardware failure, reduced costs, and reduced time-to-market. In particular: (1) no LCD, and (2) no Wifi. Rationale: (1) I want to remotely configure and monitor the FPGA unit over a web interface; I don't want to deal with an LCD display and buttons that require me to be physically present in front of the unit, and (2) large miners are likely to already have cat5 deployed in their datacenters, and Wifi is unreliable in some of these environments.
  • Temperature probes for each FPGA.
  • Fan speed monitoring (though the fans don't necessarily need to be PWM-controllable).
  • Easily replaceable fans (like some 1U chassis where fans are not screwed in, but can be slid in and out of a plastic frame, with rubber to absorb vibrations).
  • Size each unit so that its fully-loaded configuration is about 960W. This will allow users to put exactly 2 of these units per 120V-20A circuit, or 4 per 240V-20A circuit, or 9 per 208V-30A 3-phase circuit (while not exceeding 80% of the circuit's maximum current rating; a quick check of this arithmetic follows below). Rationale: in datacenter environments, users pay a fixed monthly price per circuit. A circuit not used at its maximum capacity (or a unit not fully loaded so as not to over-draw) is wasted money. BFL failed to follow this advice of mine when designing their 1250W mini rig.
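
A minimal check of that sizing arithmetic, applying the 80% continuous-load derating to each circuit type (the circuit list and the 960W figure are from the bullet above; the rest is plain arithmetic):

Code:
# How many ~960W units fit on each circuit type at 80% derating?
# For the 3-phase circuit, power = sqrt(3) * line_voltage * current.
import math

UNIT_W = 960
DERATE = 0.80
circuits = {
    "120V-20A":         120 * 20,
    "240V-20A":         240 * 20,
    "208V-30A 3-phase": math.sqrt(3) * 208 * 30,
}
for name, watts in circuits.items():
    usable = watts * DERATE
    print(f"{name}: {usable:.0f}W usable -> {int(usable // UNIT_W)} units")
# 120V-20A: 1920W usable -> 2 units
# 240V-20A: 3840W usable -> 4 units
# 208V-30A 3-phase: 8647W usable -> 9 units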
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
May 20, 2012, 09:39:53 PM
#21
Quote
We are looking to make this rack stand-alone, and yes, it is up against Butterfly's larger products. We would like to make this run free of a host PC. This is very much a big-system concept. I hope we do a better product than the competitors, but time will tell on that point. It's very unlikely we will do a PCIe backplane for this, although we are looking at these for our general HPC products. Bought-in industrial backplanes are usually very expensive, so it's unlikely we would put one in. However, we can use one of our standard ones that we have designed, or even a derivative of one of them. The cost is quite reasonable doing it this way.

I would forget any secondary value on any sort of mining kit. If you are banking on that, your equations will be wrong. FPGA families are replaced on average every 2 years, and the old family will have limited value even as bare chips, never mind in a system that has either to be reused or have its silicon recovered. GPUs are even worse for this. Try selling a 2-3 year old GPU. It might have cost £500, but in 2-3 years you can buy a brand-new board of equivalent performance, usually for less than £100. Second-hand, maybe it goes for £30. I for one would not want to buy an ex-mining GPU given the stress put on them, but of course most people don't mention that on eBay.

The only thing you need to focus on is total cost of ownership per MH without lowering the product quality like BFL has. A 4U rig with boards that can be installed after purchase would get a lot of people on board; however, 4U high density implies 2000W redundant 208/240V-only PSUs, and a lot of people and a lot of DCs in the US simply do not have access to that.

So, maybe a 2U rig with half-height boards in a custom case that does 1200W max? As long as the density is as high as possible, you will have customers.
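
To make the cost-of-ownership-per-MH framing concrete, a minimal sketch (every price, hash rate, and wattage below is an invented placeholder, not a real product figure):

Code:
# Total cost of ownership per MH/s over a holding period:
# hardware is a one-off cost, electricity accrues per kWh.
def tco_per_mhs(hw_cost, mhs, watts, kwh_price, months):
    hours = months * 30 * 24
    electricity = watts / 1000 * hours * kwh_price
    return (hw_cost + electricity) / mhs

# Invented example rigs -- not real products or prices.
rigs = {
    "2U 1200W rig": dict(hw_cost=4000, mhs=5000, watts=1200),
    "4U 2000W rig": dict(hw_cost=6500, mhs=9000, watts=2000),
}
for name, r in rigs.items():
    cost = tco_per_mhs(kwh_price=0.10, months=24, **r)
    print(f"{name}: ${cost:.2f} per MH/s over 2 years")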

Plus, if you make this a generic rig and sell different sorts of boards for it, you essentially have an FPGA blade server. You could sell mining boards alongside the non-mining boards your other customers buy, and non-mining customers could very well combine mining-oriented boards with non-mining boards to handle whatever they need.

Mining FPGA boards, as far as I can tell, are largely just FPGA designs with high-amp VRMs, no external memory chips, and no high-speed IO wired in. If you just need computational power without local memory or mass IO, mining boards really would be useful to you.

Centralizing all your FPGA products into a single FPGA-blade design would, in theory, lower the cost for all customers.
full member
Activity: 196
Merit: 100
May 20, 2012, 09:03:03 PM
#20
Quote
Something modular with a buy-in price that isn't $10-20k USD. A $500 chassis and $1-2k blades make it possible to have high density but gradual expansion for the small guys. Big guys can just buy a chassis with all the blades populated.

Ethernet + simple configuration via USB. Controlling software to set the IP address and mining information. An HTTP server running that displays the health/output of the chassis. Perhaps consider a small embedded Linux controlling system so advanced users can SSH in to poke around and run custom scripts.

This, with a definite yes on the embedded Linux, so you could even run your own miner, different from the one shipped, if wanted.
hero member
Activity: 697
Merit: 500
May 20, 2012, 05:22:40 PM
#19
Something modular with a buy-in price that isn't $10-20k USD. A $500 chassis and $1-2k blades make it possible to have high density but gradual expansion for the small guys. Big guys can just buy a chassis with all the blades populated.

Ethernet + simple configuration via USB. Controlling software to set the IP address and mining information. An HTTP server running that displays the health/output of the chassis. Perhaps consider a small embedded Linux controlling system so advanced users can SSH in to poke around and run custom scripts.
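
A minimal sketch of the kind of status endpoint that implies, using only the Python standard library (all field names and values are made-up placeholders; a real controller would poll the FPGA boards and the miner for live data):

Code:
# Tiny read-only status server of the sort described above.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_status():
    # Placeholder values; a real unit would query its boards here.
    return {
        "boards_connected": 5,
        "boards_hashing": 4,
        "hash_rate_mhs": 3200,
        "accepted_shares": 430,
        "stale_shares": 10,
    }

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(read_status()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), StatusHandler).serve_forever()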
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
May 20, 2012, 04:17:53 PM
#18
This is awesome, I love the way you guys think.

If you were going to consider a PCIe-based modular system, note that PICMG 1.3 is already an industry standard, and it seems to me that it would suit. It would allow modules to be swapped out when future performance increases come along. You could cut costs by using a single cheap PCIe switching/fanout chip and splitting the lanes so that each slot gets only 1x electrical connectivity. That is plenty for this application, and would even work for some video and GPU-compute applications, such as regular video-card-based bitcoin mining.

There are cases and power supplies already designed around the standard too, so that is an advantage.
sr. member
Activity: 476
Merit: 250
Keep it Simple. Every Bit Matters.
May 20, 2012, 04:11:21 PM
#17
Quote
We are looking to make this rack stand-alone, and yes, it is up against Butterfly's larger products. We would like to make this run free of a host PC. This is very much a big-system concept. I hope we do a better product than the competitors, but time will tell on that point. It's very unlikely we will do a PCIe backplane for this, although we are looking at these for our general HPC products. Bought-in industrial backplanes are usually very expensive, so it's unlikely we would put one in. However, we can use one of our standard ones that we have designed, or even a derivative of one of them. The cost is quite reasonable doing it this way.

I would forget any secondary value on any sort of mining kit. If you are banking on that, your equations will be wrong. FPGA families are replaced on average every 2 years, and the old family will have limited value even as bare chips, never mind in a system that has either to be reused or have its silicon recovered. GPUs are even worse for this. Try selling a 2-3 year old GPU. It might have cost £500, but in 2-3 years you can buy a brand-new board of equivalent performance, usually for less than £100. Second-hand, maybe it goes for £30. I for one would not want to buy an ex-mining GPU given the stress put on them, but of course most people don't mention that on eBay.

Sounds like a good plan. Not having to put a computer together as a host for it would certainly lower the overall costs for us, assuming an existing one couldn't be used.

I would not try to resell an old mining GPU or rig; that wouldn't be fair to the poor sod who got it after 2 or 3 years of abuse. Old computers in my house nearly always end up having a secondary purpose once they outlive their usefulness, at least until they completely fail.

Repurposing would not be its primary benefit; it's just one of the reasons why having a modifiable system is worthwhile. As a programmer and designer I would have a use for a GPU farm, admittedly not a huge one, if I had the tools to adapt its purpose. I do a fair amount of experimental software in my programming; it's what interested me in bitcoin.
My bigger reason is modding it for extra performance with a new or tweaked BIOS. I'm just late to the party, as such, so still learning.
I don't want to take this off-topic with my own plans, though. I know FPGAs all hold that risk; I just know that those FPGAs that have allowed modifications have grown to have tweaks that increased performance by 10-20%, which is worth considering as a good selling point.
sr. member
Activity: 462
Merit: 251
May 20, 2012, 03:47:07 PM
#16
We are looking to make this rack stand-alone, and yes, it is up against Butterfly's larger products. We would like to make this run free of a host PC. This is very much a big-system concept. I hope we do a better product than the competitors, but time will tell on that point. It's very unlikely we will do a PCIe backplane for this, although we are looking at these for our general HPC products. Bought-in industrial backplanes are usually very expensive, so it's unlikely we would put one in. However, we can use one of our standard ones that we have designed, or even a derivative of one of them. The cost is quite reasonable doing it this way.

I would forget any secondary value on any sort of mining kit. If you are banking on that, your equations will be wrong. FPGA families are replaced on average every 2 years, and the old family will have limited value even as bare chips, never mind in a system that has either to be reused or have its silicon recovered. GPUs are even worse for this. Try selling a 2-3 year old GPU. It might have cost £500, but in 2-3 years you can buy a brand-new board of equivalent performance, usually for less than £100. Second-hand, maybe it goes for £30. I for one would not want to buy an ex-mining GPU given the stress put on them, but of course most people don't mention that on eBay.
newbie
Activity: 20
Merit: 0
May 20, 2012, 03:00:01 PM
#15
Ethernet would definitely be a good investment on the backplane. If using Ethernet, it would be great if the backplane handled all of its own operations without a host of some sort (e.g. doesn't need to be controlled from a PC). If that were the case, I would assume you guys would be using an ARM chip, a cut-down version of Linux, and some cheap flash memory.
legendary
Activity: 1050
Merit: 1000
You are WRONG!
May 20, 2012, 01:51:45 PM
#14
sup
sr. member
Activity: 423
Merit: 250
May 20, 2012, 01:44:12 PM
#13
Something that would be freaking amazing is an Ethernet port or a Wifi chip built in. Doing so would eliminate the need for a computer. The board can hash the given work (I don't know what kind of info the pool sends to the miner), do its job, and send the result back to the pool.

A port for a little LCD would be nice.

Something like this:

[image of a small LCD status display]

Where it shows something like this:
Quote
Connected FPGAs: 5
FPGAs Working: 4 <---- Maybe one is off or there is something wrong. Else, it would be 5

Current work speed: 3.2 Gh/s <---- it can be Mh/s if 1 board is connected. Easy logic here.

Accepted Shares: 430
DOA Shares: 10

If you do choose to add a Wifi chip, you can add:

Quote
Connected via: Wifi (Dlink) <---- if you have Ethernet then it will say Ethernet

What do you guys think?
This will sell like water, lol.
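
The "easy logic" for the display's unit switching might look like this minimal sketch (the function name is invented):

Code:
# Show MH/s for a single-board rig and switch to GH/s past 1000 MH/s.
def format_hash_rate(mhs: float) -> str:
    if mhs >= 1000:
        return f"{mhs / 1000:.1f} Gh/s"
    return f"{mhs:.0f} Mh/s"

print(format_hash_rate(200))   # "200 Mh/s" (one board)
print(format_hash_rate(3200))  # "3.2 Gh/s" (five boards)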
full member
Activity: 347
Merit: 100
May 20, 2012, 01:26:45 PM
#12
Maybe a good idea for a backplane: http://www.chassis-plans.com/single-board-computer/S6806-backplane.htm
There is also a rack available for this backplane.
sr. member
Activity: 476
Merit: 250
Keep it Simple. Every Bit Matters.
May 20, 2012, 01:14:00 PM
#11
Then an Ethernet connection makes much more sense for something that will be a 3-4U rig.

As this would, for comparison's sake, be similar to the mini rig by Butterfly Labs (sorry, someone had to say it eventually).
I personally would never consider going with Butterfly Labs, mostly since it's not UK-based; I like to deal with local merchants when I'm buying expensive equipment like that. What sort of price bracket could one expect something like this to fall into? Similar, more or less?

A rig like that would be a big investment, so there's a dual purpose: having a bit more freedom when it comes to tweaking it for maximum performance and efficiency, and maybe finding a secondary purpose for it in case bitcoin mining doesn't work out for us a year or two from now. What options are available to you as a hardware engineer?
If a programmable VRM isn't the ideal choice, what else is there? You hinted there may be other options?
sr. member
Activity: 462
Merit: 251
May 20, 2012, 12:52:54 PM
#10
Yes, that is very much the concept. It also allows the processing cards to be added to, or even replaced with newer, better ones, some way down the line.