
Topic: ASIC Testing on Scrypt? - page 6. (Read 17437 times)

sr. member
Activity: 347
Merit: 250
September 02, 2013, 11:37:37 PM
#87
Yeah, but watercooling is more effective, which should increase the lifespan of the GPUs.

It also takes care of the problem of having such a high heat load in such a small, densely packed area, which seems to be what people above were saying made it unlikely a GPU farm of this size could be built.  And Wolf0's estimate of 30% is really high.  Large GPU farms don't pay retail single-unit pricing from gaming water block manufacturers.  We're talking about thousands of units here at wholesale pricing, with the GPU manufacturer omitting the fan and heatsink to cut costs.
hero member
Activity: 868
Merit: 500
September 02, 2013, 11:33:58 PM
#86
Thanks for pointing that out. This seems like an expensive solution.

If you can spend $750,000 on GPUs, then you could probably spend another $50,000+ on cooling :)

30% of $750,000 is $225,000. Even if you have that kind of cash to drop on cooling, it's stupid to do because it's a fuck of a lot cheaper to get good AC.

Yeah, but watercooling is more effective, which should increase the lifespan of the GPUs.

I'm not sure the increased lifespan justifies the amount of money you would need to spend, though.

hero member
Activity: 868
Merit: 500
September 02, 2013, 11:18:03 PM
#85
Thanks for pointing that out. This seems like an expensive solution.

If you can spend $750,000 on GPUs, then you could probably spend another $50,000+ on cooling :)
vip
Activity: 756
Merit: 503
September 02, 2013, 11:14:32 PM
#84
Thanks for pointing that out. This seems like an expensive solution.
vip
Activity: 756
Merit: 503
September 02, 2013, 10:51:27 PM
#83
Quote
Water blocks for each GPU would easily raise the cost per GPU by 25%-30%. Sure, if you have unlimited funds, you can do it, but other than that, it's just stupid.
He's not talking about using water blocks for each GPU, but about water-cooled air handlers.
hero member
Activity: 630
Merit: 500
September 02, 2013, 09:48:27 PM
#82
I guess the NSA Utah data center just opened for business?
hero member
Activity: 574
Merit: 500
September 02, 2013, 09:29:52 PM
#81
I look forward to chatting; I'd love to hear from others how they implemented theirs, even after the fact. My expansion will never grow above 200,000 kH/s for GPUs.

I believe the power consumption would more likely be 1 MW, extrapolating from my farm. And by "below radar", I think what was meant is that an operation like this would likely be heard of by someone in the community.

In terms of switching over, going from Bitcoin to Litecoin takes a bit of tweaking, but it certainly could be pulled off. I'd question why they only just switched, though, as it would have been wiser to switch months ago.


That's the thing. If this were a large entity, you would think there would be people employed to manage this kind of setup, or even that the couple of investors would operate it. Even if they switched a few machines a day over, it would pay for the labor, and we would see a linear increase in difficulty. This thing popped up out of nowhere, ran for a couple of days, and is now gone. There was another user up around 1 million kH/s, but they've been bouncing around all over the place, slowly falling, and are around 600,000 kH/s now. Hopefully, whatever this thing is, it just dies off and goes away.



Agreed... or I will give the co-ordinates to the CIA as a potential hideout for al-Qaeda.

A Predator drone strike should then deal with this crime against humanity!!


sr. member
Activity: 472
Merit: 250
September 02, 2013, 08:53:22 PM
#80
I look forward to chatting; I'd love to hear from others how they implemented theirs, even after the fact. My expansion will never grow above 200,000 kH/s for GPUs.

I believe the power consumption would more likely be 1 MW, extrapolating from my farm. And by "below radar", I think what was meant is that an operation like this would likely be heard of by someone in the community.

In terms of switching over, going from Bitcoin to Litecoin takes a bit of tweaking, but it certainly could be pulled off. I'd question why they only just switched, though, as it would have been wiser to switch months ago.


That's the thing. If this were a large entity, you would think there would be people employed to manage this kind of setup, or even that the couple of investors would operate it. Even if they switched a few machines a day over, it would pay for the labor, and we would see a linear increase in difficulty. This thing popped up out of nowhere, ran for a couple of days, and is now gone. There was another user up around 1 million kH/s, but they've been bouncing around all over the place, slowly falling, and are around 600,000 kH/s now. Hopefully, whatever this thing is, it just dies off and goes away.

hero member
Activity: 630
Merit: 500
September 02, 2013, 08:16:40 PM
#79
I look forward to chatting; I'd love to hear from others how they implemented theirs, even after the fact. My expansion will never grow above 200,000 kH/s for GPUs.

I believe the power consumption would more likely be 1 MW, extrapolating from my farm. And by "below radar", I think what was meant is that an operation like this would likely be heard of by someone in the community.

In terms of switching over, going from Bitcoin to Litecoin takes a bit of tweaking, but it certainly could be pulled off. I'd question why they only just switched, though, as it would have been wiser to switch months ago.
sr. member
Activity: 347
Merit: 250
September 02, 2013, 08:00:19 PM
#78
I do not consider 50,000 kH/s (on scrypt) to be a large GPU mining operation.

I don't either, but it at least shows that a person speaks from experience rather than theory. If you told me you operated a 50,000-100,000 kH/s farm, and you were able to design it as you indicated and achieve ROI in less than 8 months, I'd listen much more intently.

You're correct that I have not offered evidence of such, and I will not be offering any at this time.  But we'll revisit this question at a future date, when it is no longer feasible to mine any cryptocurrency with GPUs.  No one is going to believe anything without photos (and even then, photos are routinely disputed on this forum), and that will not occur until GPU mining is dead.  It's the same reason no one else has posted photos of large, professionally operated GPU farms.  Take it for what it's worth; that's the best I can do for you, aside from the hints I've already dropped in the couple of threads on this.


I think the overall point was that, given the challenges a small 50,000 kH/s farm brings, it's very unlikely that someone could bring up a farm 40+ times that size instantly.

Ah, but it wasn't built instantly; it was built over a longer period of time for mining BTC.  The transition to LTC and the "hey, everyone check out the hash rate of that user on that pool" stunt did take a couple of days, but that's not how long it took to build the farm.


That type of power consumption would not be under the radar.

This I'll agree with you on.  But there's no reason to believe it operates, or needs to operate, "under the radar".  If what you mean is that everyone everywhere will be aware that there's 700 kW entering a building somewhere in the world and a column of hot air exiting an HVAC cooling tower adjacent to it, that part is not likely.  Large industrial customers routinely consume several MW to tens of MW; a single customer under 1 MW is not going to attract any attention at all.
legendary
Activity: 2086
Merit: 1015
September 02, 2013, 07:48:08 PM
#77
Why does it need to be under the radar?
This farm is not running out of some guy's bedroom closet. It is a large operation, and the power draw would not be unusual for a large office or a data center.



-- I do not run any farms, just speaking common sense ;)
hero member
Activity: 868
Merit: 500
September 02, 2013, 07:38:17 PM
#76
Wind- We all know it is possible in concept, and in practice with unlimited funds, but what you seem to be missing is that the amount of money for a facility to handle this equipment, and the footprint it would need, is very large.

We're not saying it isn't a very large farm; we're saying it's quite unlikely unless it's a major, major entity, which certainly could be the case. Regardless, thank you for your thoughts, and please try to apply your knowledge to real-world scenarios that factor into a setup like this, assuming it is a large farm. It is very unusual, and that is why we're discussing it. If you think it's so simple to do cost-effectively, take your thoughts a bit further and put dollar figures on what you describe, instead of just speaking from an engineering perspective.

I do not consider 50,000 kH/s (on scrypt) to be a large GPU mining operation.

I don't either, but it at least shows that a person speaks from experience rather than theory. If you told me you operated a 50,000-100,000 kH/s farm, and you were able to design it as you indicated and achieve ROI in less than 8 months, I'd listen much more intently. I think the overall point was that, given the challenges a small 50,000 kH/s farm brings, it's very unlikely that someone could bring up a farm 40+ times that size instantly. That type of power consumption would not be under the radar.

The power used is equivalent to the power consumption of about 430 households :)
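
That figure roughly checks out. A quick back-of-the-envelope, assuming an average US household draws about 1.3 kW continuously (my assumption, not a number from this thread):

Code:
# Hypothetical sanity check: farm power vs. average household draw
farm_kw = 560                  # farm consumption, per the posts above
household_kw = 1.3             # assumed average continuous household draw
print(farm_kw / household_kw)  # ~430 households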
hero member
Activity: 630
Merit: 500
September 02, 2013, 07:23:02 PM
#75
Wind- We all know it is possible in concept, and in practice with unlimited funds, but what you seem to be missing is that the amount of money for a facility to handle this equipment, and the footprint it would need, is very large.

We're not saying it isn't a very large farm; we're saying it's quite unlikely unless it's a major, major entity, which certainly could be the case. Regardless, thank you for your thoughts, and please try to apply your knowledge to real-world scenarios that factor into a setup like this, assuming it is a large farm. It is very unusual, and that is why we're discussing it. If you think it's so simple to do cost-effectively, take your thoughts a bit further and put dollar figures on what you describe, instead of just speaking from an engineering perspective.

I do not consider 50,000 kH/s (on scrypt) to be a large GPU mining operation.

I don't either, but it at least shows that a person speaks from experience rather than theory. If you told me you operated a 50,000-100,000 kH/s farm, and you were able to design it as you indicated and achieve ROI in less than 8 months, I'd listen much more intently. I think the overall point was that, given the challenges a small 50,000 kH/s farm brings, it's very unlikely that someone could bring up a farm 40+ times that size instantly. That type of power consumption would not be under the radar.
sr. member
Activity: 347
Merit: 250
September 02, 2013, 07:16:30 PM
#74
You just quoted someone who operates a 50,000+ kH/s GPU mining operation.

I do not consider 50,000 kH/s (on scrypt) to be a large GPU mining operation.
sr. member
Activity: 347
Merit: 250
September 02, 2013, 07:12:47 PM
#73
That's about 560,000 watts, or 0.56 MW.

Since 1 W = 1 joule/second, it should be enough to heat 1 gallon of water from 20 °C to boiling (100 °C) in roughly 2 seconds.

I know there are other factors, like the amount of open space, etc., but that should give you an idea of how much energy is consumed by that farm.

You'll need to spend some serious dough on cooling if you want to run a farm like that in one location.

The easiest way to cool it is to run a chilled-water glycol loop to a small chiller plant and an outside cooling tower, then either liquid-cool the GPUs (my preference) or air-cool the GPUs with water-cooled air handlers in the space.

Here, I'll calculate it for you.  560 kW = 159 tons of cooling.  Here's a perfectly suitable 175-ton chiller option on eBay:
http://www.ebay.com/itm/2007-175-ton-Carrier-Centrifugal-Chiller-/221275176478?pt=LH_DefaultDomain_0&hash=item3385073e1e
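
If you want to check that conversion yourself, here's a minimal sketch; 1 ton of refrigeration = 3.517 kW is the standard conversion, and the heat load is the figure quoted above:

Code:
# Convert the farm's heat load to tons of refrigeration
heat_kw = 560                # heat rejected by the farm, per the estimate above
kw_per_ton = 3.517           # 1 ton of refrigeration = 3.517 kW
print(heat_kw / kw_per_ton)  # ~159 tons, hence the 175-ton chiller with headroom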

Combine that with an appropriately sized cooling tower, and either order your GPUs with water blocks to operate directly on the glycol loop, or snag a few large surplus water-cooled air handlers if you insist on air-cooling the GPUs.

This is just getting silly if you guys think that amount of heat rejection is unrealistic (or even hard) to accomplish at a single location.  Otherwise, we need to change the debate from whether we're looking at a GPU farm transitioning from BTC to LTC to a debate about whether data centers really exist and whether they can be built and cooled.  Or, for that matter, whether the technology exists to cool office buildings.
hero member
Activity: 630
Merit: 500
September 02, 2013, 07:11:28 PM
#72
You've never run a farm, judging by your comments. Let me give you a clue why it's unique for a single location: Heat & Power.

You've never operated any sort of data center equipment at all, judging by your comments.  We're talking pretty basic-level data center engineering here.

I do both, but you cannot apply the same concepts, as a large farm is not going to have the same resources to put forth as a very large corporate data center. The output from this "farm" would be on par with eBay's data center in terms of power usage and cooling. Try again.
sr. member
Activity: 472
Merit: 250
September 02, 2013, 07:10:08 PM
#71
You've never run a farm, judging by your comments. Let me give you a clue why it's unique for a single location: Heat & Power.

You've never operated any sort of data center equipment at all, judging by your comments.  We're talking pretty basic-level data center engineering here.

You just quoted someone who operates a 50,000+ kH/s GPU mining operation. There's a very fine line between being technically knowledgeable and having mining experience. You very obviously possess technical knowledge, but very little knowledge applicable to mining. Please stop while you still retain the appearance of possessing technical knowledge.
sr. member
Activity: 347
Merit: 250
September 02, 2013, 07:03:21 PM
#70
He also appears to be clueless about the BIOS and AMD driver limits which make 16 GPUs per system an impossibility.

Calling an arbitrary software limitation an "impossibility" is probably a bit naive.  I'm assuming we're discussing Linux here, as Windows has no place in a large GPU farm.  The kernel source code is not only readily available, it's documented "well enough" for most programmers to understand and modify, and the portion of the Radeon drivers that directly interacts with the kernel is supplied as source anyway.  It is not particularly difficult to fire up multiple X servers and instances of the Radeon kernel module, with different GPUs controlled by each.
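
For illustration only (not a claim about how this particular farm runs), the multiple-X-server approach usually looks something like this; the config paths, BusIDs, and pool credentials are hypothetical placeholders:

Code:
# Hypothetical sketch: one X server per GPU group, each pinned by BusID
# in its own xorg.conf (Device section with BusID "PCI:3:0:0", etc.)
Xorg :0 -config /etc/X11/xorg.gpu0.conf &
Xorg :1 -config /etc/X11/xorg.gpu1.conf &

# Point one miner instance at each display (pool URL and workers are placeholders)
DISPLAY=:0 cgminer --scrypt -o stratum+tcp://pool.example.com:3333 -u worker0 -p x &
DISPLAY=:1 cgminer --scrypt -o stratum+tcp://pool.example.com:3333 -u worker1 -p x &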

You can even half-ass it with a handful of Xen virtual machines, mapping specific PCI devices to specific virtual machines, and not have to fool the AMD drivers at all, without ever touching any code.

Here, I'll even save you the trouble of Googling how to accomplish it:
http://wiki.xen.org/wiki/Xen_PCI_Passthrough
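
The relevant piece of a Xen guest config is small. A minimal sketch assuming xl with pciback in dom0; the PCI addresses and file name are made-up examples:

Code:
# In dom0: make two GPUs assignable to guests (addresses are examples)
xl pci-assignable-add 0000:03:00.0
xl pci-assignable-add 0000:04:00.0

# Then in the guest's config file (e.g. /etc/xen/miner1.cfg):
#   pci = [ '0000:03:00.0', '0000:04:00.0' ]
# Each domU sees only its assigned GPUs, so the AMD driver's
# per-system limit never comes into play.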

The BIOS doesn't play a role in this.  Nor does it need to assign the PCI memory windows to all the GPUs; the Linux kernel is more than happy to enumerate the PCI buses and assign the windows itself.  Remember, hot-swappability was a design criterion for PCIe.  The BIOS does nothing once the Linux kernel fires up; Linux handles everything after that point.

Remember, DeathAndTaxes, just a couple of weeks ago you posted that scrypt on GPUs operates entirely out of on-die memory and never touches external memory, and I called you out on it.  Let's maybe use some restraint before pulling out the "clueless" insult.
sr. member
Activity: 347
Merit: 250
September 02, 2013, 06:54:01 PM
#69
You've never run a farm, judging by your comments. Let me give you a clue why it's unique for a single location: Heat & Power.

You've never operated any sort of data center equipment at all, judging by your comments.  We're talking pretty basic-level data center engineering here.
hero member
Activity: 868
Merit: 500
September 02, 2013, 06:53:50 PM
#68
Provided he uses 7950s and each GPU draws 200 W while hashing at 700 kH/s, he'll need around 2880 of them to get 2 GH/s.


That's about 560,000 watts, or 0.56 MW.

Since 1 W = 1 joule/second, it should be enough to heat 1 gallon of water from 20 °C to boiling (100 °C) in roughly 2 seconds.

I know there are other factors, like the amount of open space, etc., but that should give you an idea of how much energy is consumed by that farm.
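
For anyone who wants to verify those figures, the arithmetic is only a few lines; the per-GPU numbers are the assumptions from this post, and the water properties are standard:

Code:
# Check the heat and boiling-time figures quoted above
gpus = 2880                    # 7950s at ~700 kH/s each, per the estimate above
watts_per_gpu = 200
farm_w = gpus * watts_per_gpu  # 576,000 W, i.e. roughly the 0.56 MW quoted

gallon_kg = 3.785              # mass of 1 US gallon of water, in kg
c_water = 4186                 # specific heat of water, J/(kg*K)
energy_j = gallon_kg * c_water * (100 - 20)  # ~1.27 MJ from 20 °C to 100 °C

print(energy_j / farm_w)       # ~2.2 s, matching "roughly 2 seconds" above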

You'll need to spend some serious dough on cooling if you want to run a farm like that in one location.
