
Topic: Cutdown Graphics Card (Read 1656 times)

legendary
Activity: 1973
Merit: 1007
September 03, 2011, 03:08:34 AM
#19
Damn, I was really hoping to see some sawed-off 6990 pics when I entered this thread. :)

Now, if AMD made a cheap, energy-efficient mining card, every serious Bitcoin miner would snatch one up, in turn driving up difficulty and defeating the purpose.

Also, I don't believe the rumors about the 7000 series having twice the stream processor count. If AMD's intended audience of gamers has to fight with large-scale Bitcoin mining operations just to get their hands on the latest cards, they may move to Nvidia...
newbie
Activity: 36
Merit: 0
September 03, 2011, 02:32:16 AM
#18
My apologies - I was quite tired, and it just seemed like a bit of a snide comment, that's all. I think the amount I can contribute here is now limited, as this is going beyond the extent of my knowledge. But if you did the things I mentioned (cut memory speed/size, cut the bus width, use cheaper non-solid-state caps, use a cheaper high-airflow fan), how much would the reference PCB design need to be changed? The last two I don't think would require any change at all (maybe a few slight changes for the caps). The bus width I'm not sure about, but I think the only major change on there is the memory. Even so, it's not as if you're drastically re-designing the whole thing, just making a few changes here and there.
hero member
Activity: 602
Merit: 500
September 02, 2011, 11:55:18 PM
#17
Quote
Yes, I do know that Tesla is based on the Fermi architecture. Are you aware that they have 6 GB of memory on them, which would be totally useless?

You're talking about improving performance; I'm talking about cutting costs. I would have thought it pointless to make an ATI "super GPU" equivalent to the Tesla; I'm saying it would be better to take a cheap consumer card (an HD 5850, for example) and cut out the parts that aren't used. I'm pretty sure the performance wouldn't budge, but that's got to have a fairly serious impact on the price.

Please don't take such an offensive stance; I am merely bouncing an idea off people.

An offensive stance is replying in a way that you don't like? That's just life.

As for RAM, it is certainly a cost, but it's not remotely the main thing that makes a Tesla a >$2k card, not by a long shot. Creating a custom board layout to be printed for a tiny market is the main cost; tens of thousands of units compared to tens of millions makes a huge difference.
legendary
Activity: 2072
Merit: 1001
September 02, 2011, 11:29:10 PM
#16
Quote
When people discuss FPGA boards, like the ones recently released, are they adding in the power usage of the host computer that provides the USB port to connect them? Buying just one means having a PC on all the time too. And buying a low-power PC, while not that expensive, has to be factored in. Let's say 200 dollars for something small and just as efficient as the FPGA board. And let's say the power usage triples?

Thoughts?

Quote
I guess I'm a geek, but I assumed everyone had a computer on all the time anyway. I mean, where are all your files stored? What do you stream to your TV from? What does all your music play from if you don't have a computer on 24/7?

I really cannot imagine having 10 of these FPGA boards lying out in the living room or in some kind of tray...
They use a Molex connector, right? They all need a USB port, so some kind of hub comes into play. What good
is a laptop in that case when you need Molex connectors (10 of them)? Some kind of stand-alone PSU with
an old laptop?

In my case, my main PC is in the living room, connected to an A/V receiver which then goes to the TV.
I really do not want a bunch of these cards sitting around so the cat will sleep on them to stay warm.
So down to the basement they would go, sitting on top of an old 1U half-size rack-mount server... which isn't
exactly power efficient if it contains a 350-watt power supply.

I am just trying to envision the most efficient way to hook up 10 of these boards without doubling the power
usage compared to the boards themselves. Let's say you get the dual-FPGA boards that draw about 15 watts each;
that is 150 watts for about 2 GH/s. Does one basically have to double or triple that power usage to make
them actually useful?
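
To put rough numbers on that, here is a back-of-envelope sketch in Python. The 15 W dual-FPGA boards, ~200 MH/s per board, ten boards, and the host-machine wattages are all assumptions taken from this thread, not measurements:

```python
# Back-of-envelope check of how much a host PC erodes FPGA efficiency.
# All figures are assumptions from the thread, not measured values.

BOARDS = 10
WATTS_PER_BOARD = 15.0      # assumed draw of one dual-FPGA board
MHS_PER_BOARD = 200.0       # assumed hash rate of one dual-FPGA board

board_watts = BOARDS * WATTS_PER_BOARD      # 150 W
total_mhs = BOARDS * MHS_PER_BOARD          # 2000 MH/s, i.e. ~2 GH/s

for host_watts in (0, 30, 75, 150):         # nothing, small NAS-class box, laptop, old 1U server
    system_watts = board_watts + host_watts
    print(f"host {host_watts:>3} W -> {system_watts:>3.0f} W total, "
          f"{total_mhs / system_watts:.1f} MH/s per watt")
```

With those figures, even a ~75 W host only knocks efficiency from about 13 down to about 9 MH/s per watt; the host itself would have to burn another 150 W before the total power actually doubles.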
sr. member
Activity: 294
Merit: 250
September 02, 2011, 11:01:59 PM
#15
Quote
When people discuss FPGA boards, like the ones recently released, are they adding in the power usage of the host computer that provides the USB port to connect them? Buying just one means having a PC on all the time too. And buying a low-power PC, while not that expensive, has to be factored in. Let's say 200 dollars for something small and just as efficient as the FPGA board. And let's say the power usage triples?

Thoughts?

I guess I'm a geek, but I assumed everyone had a computer on all the time anyway. I mean, where are all your files stored? What do you stream to your TV from? What does all your music play from if you don't have a computer on 24/7?
legendary
Activity: 2072
Merit: 1001
September 02, 2011, 10:38:19 PM
#14
When people discuss FPGA boards, like the ones recently released, are they adding in the power usage of the host computer that provides the USB port to connect them? Buying just one means having a PC on all the time too. And buying a low-power PC, while not that expensive, has to be factored in. Let's say 200 dollars for something small and just as efficient as the FPGA board. And let's say the power usage triples?

Thoughts?
sr. member
Activity: 462
Merit: 250
It's all about the game, and how you play it
September 02, 2011, 07:24:26 PM
#13
Has anyone tried to contact AMD to simply try and purchase a GPU BGA package? I have to assume that connecting it to known factors is simpler than designing a brand-new ASIC.
newbie
Activity: 36
Merit: 0
September 02, 2011, 05:41:05 PM
#12
Yes, I do know that Tesla is based on the Fermi architecture. Are you aware that they have 6 GB of memory on them, which would be totally useless?

You're talking about improving performance; I'm talking about cutting costs. I would have thought it pointless to make an ATI "super GPU" equivalent to the Tesla; I'm saying it would be better to take a cheap consumer card (an HD 5850, for example) and cut out the parts that aren't used. I'm pretty sure the performance wouldn't budge, but that's got to have a fairly serious impact on the price.

Please don't take such an offensive stance; I am merely bouncing an idea off people.
hero member
Activity: 602
Merit: 500
September 02, 2011, 05:30:19 PM
#11
Quote
I am talking about taking a standard gaming card (not a workstation card) and cutting the features down. I didn't mention an ATI-esque Tesla for mining; I was referring to a comment someone else made about the lack of ATI GPGPU cards.

Do you know what a Tesla is? It's based on the Fermi architecture, just like any other GTX 5xx gaming card. If you want a GPGPU card from ATI, that's more or less the same idea: you are re-vamping an existing architecture for a much smaller audience, so you are going to pay out the butt for it, and not necessarily get much improvement.
newbie
Activity: 36
Merit: 0
September 02, 2011, 04:21:59 PM
#10
I am talking about taking a standard gaming card (not a workstation card) and cutting the features down. I didn't mention an ATI-esque Tesla for mining; I was referring to a comment someone else made about the lack of ATI GPGPU cards.
hero member
Activity: 602
Merit: 500
September 02, 2011, 03:45:23 PM
#9


Quote
Now obviously economies of scale apply here, but even so - is this realistic?



No, it isn't. Guess how much an Nvidia Tesla costs?

Spoiler: typically over $2,000, and for not that much more card than a typical gaming card costing a quarter as much. You'd see the same thing with an ATI-style mining card.
newbie
Activity: 15
Merit: 0
September 02, 2011, 12:35:37 PM
#8
If all the claims of the 7000 series having "conservatively" double the stream processors are true, then it's entirely possible we'll end up with cards in the $100 range capable of hitting quite a high MH/s, which turns things into a different ballgame altogether.

That being said, if it ever got off the ground, I would definitely be interested in buying a card that was designed for mining.
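
For what it's worth, here's a very rough scaling sketch of that rumour in Python. The baseline card figures and the assumption that hash rate scales roughly with stream processors times core clock are mine for illustration, not anything confirmed about the 7000 series:

```python
# Very rough scaling sketch: on these GPUs, SHA-256 hash rate tracks
# stream-processor count times core clock fairly closely, so a hypothetical
# doubled-SP card can be projected from a known baseline.
# Baseline (HD 5770-class: 800 SPs @ 850 MHz, ~200 MH/s) and the rumoured
# doubling are assumptions, not confirmed specifications.

base_sps, base_clock_mhz, base_mhs = 800, 850, 200.0
mhs_per_sp_per_mhz = base_mhs / (base_sps * base_clock_mhz)

rumoured_sps = 2 * base_sps          # "conservatively double", per the rumour
assumed_clock_mhz = 900              # assumed similar core clock
projected_mhs = mhs_per_sp_per_mhz * rumoured_sps * assumed_clock_mhz
print(f"projected: roughly {projected_mhs:.0f} MH/s")
```

Under those assumptions a doubled-SP card would land somewhere around 400 MH/s; whether it would actually ship anywhere near $100 is another matter entirely.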
newbie
Activity: 36
Merit: 0
September 02, 2011, 12:27:31 PM
#7
Hmm, you both have a fair point. I guess over the next year or so it's really MH/W that takes precedence over MH/$. Although the recently released FPGAs are untouchable for power consumption, they don't half cost a lot!
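
To illustrate that trade-off, here's a small Python sketch of total cost per MH/s over time. The prices, hash rates, wattages, and electricity rate are all illustrative assumptions, not quotes for any real board:

```python
# Rough total-cost-per-MH/s comparison of a GPU vs an FPGA board over time.
# Every number below is an assumption for illustration only.

KWH_PRICE = 0.10   # assumed $/kWh

def cost_per_mhs(price_usd, mhs, watts, days):
    """Hardware plus electricity, divided by hash rate, after `days` of mining."""
    electricity = watts / 1000.0 * 24 * days * KWH_PRICE
    return (price_usd + electricity) / mhs

for days in (90, 365, 730):
    gpu = cost_per_mhs(price_usd=180, mhs=350, watts=180, days=days)   # assumed HD 5850-class card
    fpga = cost_per_mhs(price_usd=600, mhs=200, watts=15, days=days)   # assumed dual-FPGA board
    print(f"{days:>3} days: GPU ${gpu:.2f}/MH/s  vs  FPGA ${fpga:.2f}/MH/s")
```

With these made-up numbers the FPGA's power saving doesn't overcome its purchase price even after two years, which is exactly the "they don't half cost a lot" problem; drop the board price far enough and the picture flips.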
sr. member
Activity: 294
Merit: 250
September 02, 2011, 12:21:57 PM
#6
To me, the huge advantage of FPGAs is their very low power usage. This lowers electricity costs, heat, and noise.

What you are proposing could lower the cost of the card, but the power usage would be pretty much the same.

If FPGAs get just a little cheaper, I'm getting a few.
sr. member
Activity: 392
Merit: 250
September 02, 2011, 12:06:27 PM
#5
I understand what you're suggesting, and it isn't totally crazy (unless you need "thousands" of customers for a graphics card manufacturing run instead of "hundreds").

All it would do is raise the difficulty. :)

Seriously, though, this would have been more feasible several months ago. Now, with BTC at $8 each and most cards making 1/8th of a BTC per day (after electricity costs), MOST people -- even miners -- aren't expanding their mining farms with both hands.
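
Putting those figures into a quick payback calculation (Python; the $8 BTC price and the 1/8 BTC per day are from the post above, the card price is an assumption):

```python
# Quick payback estimate. BTC price and daily yield come from the post;
# the card price is an assumed figure for illustration, not a quote.

btc_price_usd = 8.00        # from the post
net_btc_per_day = 1 / 8     # from the post (after electricity)
card_price_usd = 180.00     # assumption: a mid-range mining card

daily_revenue_usd = btc_price_usd * net_btc_per_day
payback_days = card_price_usd / daily_revenue_usd
print(f"~${daily_revenue_usd:.2f}/day, so ~{payback_days:.0f} days to pay the card off")
```

That's roughly half a year just to earn the hardware back, which is why nobody is buying cards with both hands right now.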
newbie
Activity: 36
Merit: 0
September 02, 2011, 11:57:59 AM
#4
I don't think you understood me correctly. It's obvious no company wants to produce cards for mining (why would they?). AMD/ATI don't produce the cards; they design the chips and a reference card, and that's it - they couldn't give a monkey's what manufacturers choose to do with their design.

So take the reference design => Cut out a load of stuff which isn't needed => Final result is a "mining" card with a much better MH/$.
newbie
Activity: 56
Merit: 0
September 02, 2011, 11:43:30 AM
#3
Simple answer - you are under the assumption that AMD wants to produce cards for Bitcoin mining. They do not, and I agree with their decision.
newbie
Activity: 20
Merit: 0
September 02, 2011, 10:34:29 AM
#2
I agree that this sounds like a better option than the FPGAs, and in theory it would be much cheaper. But ATI would rather have us buy their 7000 series cards for a premium, lol. ;)
newbie
Activity: 36
Merit: 0
September 02, 2011, 06:09:23 AM
#1
So there's a lot of talk going around about various projects to fund custom FPGA and ASIC miners, the latter of which has development costs in the millions (i.e. far out of reach for the average user here). A while back someone mentioned they couldn't understand why ATI doesn't produce GPGPU cards like Nvidia's Tesla range, and I think they have a fair point: for mining, the two essential factors are stream processors and core frequency. Standard consumer GPUs go halfway on this by meeting the stream processor and frequency requirements, but carry enormous amounts of VRAM (which would be pointless for mining).

As people are prepared to invest a total of millions in an ASIC, would it not be cheaper and more worthwhile (for the average user here) to start taking reference ATI designs and cutting all the unnecessary extras that bring the price up but don't improve mining rates? I don't pretend to really know anything about this field, but from what I understand, graphics card manufacturers take a reference design, tweak it by changing routing and components to meet a specification, buy in the components, and then assemble them onto a PCB - all of which is within reach for a well-organized community project.

So by taking a reference design, cutting the bus width (would this decrease costs?), reducing the memory capacity and effective speed (DDR2/DDR3 are still readily available and cheap), and using a fan chosen purely for airflow, I reckon you could shear off a fair amount of the cost.

Now obviously economies of scale apply here, but even so - is this realistic?
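
To make the MH/$ argument concrete, here is a rough sketch in Python. The retail price, hash rate, and per-item savings are all illustrative guesses on my part, not a real bill of materials:

```python
# Sketch of the MH/$ argument: the hash rate stays the same, so any cost
# shaved off the board improves MH/$ directly. Every figure here is an
# assumption for illustration, not a real bill of materials.

stock_price = 160.0    # assumed retail price of an HD 5850-class card
mhs = 300.0            # assumed hash rate, unchanged by the cuts below

savings = {
    "smaller / slower memory": 20.0,
    "narrower memory bus":     10.0,
    "cheaper capacitors":       5.0,
    "plain high-airflow fan":   5.0,
}

cutdown_price = stock_price - sum(savings.values())
print(f"stock:    {mhs / stock_price:.2f} MH/s per $  (${stock_price:.0f})")
print(f"cut-down: {mhs / cutdown_price:.2f} MH/s per $  (${cutdown_price:.0f})")
```

Even with modest savings the MH/$ improves by roughly a third, since the hash rate is untouched; whether those savings survive a small production run is exactly the economies-of-scale question.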


This is really just an idea I had the other day, so I'd be interested to hear what thoughts you guys have.

Cheers,
Mike