Topic: Has anyone ever considered creating rack mounted asics? (Read 3846 times)

newbie
Activity: 52
Merit: 0
A rack mounted ASIC would be awesome. :)
legendary
Activity: 952
Merit: 1000
Oh, and this thread is HILARIOUS. IDK which is more comical: the ignorance, or the arrogance.
legendary
Activity: 952
Merit: 1000
Having connections and discounted rates at a small (YES, small) datacenter nearby, I would totally be interested in a rackmountable ASIC. I was actually disappointed that the BFL MR wasn't rackmountable.
sr. member
Activity: 420
Merit: 250
Ultimately as hardware gets cheaper and connections faster we'll see more and more small datacenters springing up wherever they're needed.

That's ridiculous.  In 1999 what required a rack of machines can now be done in a 1U unit.  Major companies will continue to build server rooms, because they have the data, but smaller companies simply have no need for all that space or the insane cost to build out such a space. 

Like you I own two racks, one in the house and one in the garage.  Like yours, they take up about as much room as two large refrigerators.  All they do now is hold a bunch of old heavy servers that are now worthless.  These $50 servers cost $20,000 to $50,000 when new a decade ago.

Google and Walmart need data centers.  With terabyte hard drives the rest of us are lucky to need 1U of rack space to store every piece of data we will ever encounter.  You don't need a rack for anything you do... nor will most prospective ASIC owners.  These days a quarter rack is far more than most any person can use for every component in their house.

Getting back to the original question, if you have a quarter million dollars to spend on this problem you are probably going to build your own chip... you are probably NOT going to buy 14 Avalons.  Even if you did buy them, it's trivial to place a computer that isn't designed to be rack mounted on a shelf installed in the rack.  A shelf (including pull-out slider rails) can be added to ANY rack in 5 minutes.

I don't think BFL will ever deliver, but I do believe their presentation is exactly what the world wants/needs/expects in an asic.  A coffee cup warmer.  There is no market for rack mounted asic mining rigs and the additional design expense would not be recovered.


Well see, that's your problem... you're using them for storing outdated equipment. I've got one rack that's filled with my music mixing stuff (including a 42-tray DVD copy appliance) and the other with a couple of 2U servers and all my FPGAs... and yes, they're on shelves, but I'd sure love to be able to buy rackable ASICs. The main reason I run them in the rack is because it's got its own AC cart, and that's by far the best way to keep temps where I want them.
full member
Activity: 124
Merit: 100
Go ahead and try convincing me that tower chassis are more dense than rackmount.

My only point, per the OP, is that there is no market for rack mounted asics.  Anyone spending $250k or more on ASIC technology will likely be building their own units and the one guy that bought 50 units will probably place them on wire shelves, saving the expense of buying an expensive rack.


What if I wanted to buy just a couple 1U rackmount ASICs and colo them at a small, local, and cheap datacenter instead of letting them heat up my house, make lots of noise, and get clogged with dust? Racking a server or two doesn't cost a fortune. Especially with the low bandwidth requirements for mining.
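For rough scale on the bandwidth point, here's a sketch with assumed numbers (the message size and message rate are guesses, not measurements): pooled mining mostly exchanges small JSON messages, so even a generous estimate is a tiny fraction of any colo uplink.

Code:
# Back-of-the-envelope mining bandwidth estimate (all figures are assumptions).
BYTES_PER_MESSAGE = 600       # assumed size of one work update / share submission, with overhead
MESSAGES_PER_MINUTE = 10      # assumed message rate per miner

bytes_per_sec = BYTES_PER_MESSAGE * MESSAGES_PER_MINUTE / 60
kbit_per_sec = bytes_per_sec * 8 / 1000
print(f"~{kbit_per_sec:.1f} kbit/s per miner")   # ~0.8 kbit/s with these assumptions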
hero member
Activity: 924
Merit: 501
Go ahead and try convincing me that tower chassis are more dense than rackmount.

My only point, per the OP, is that there is no market for rack mounted asics.  Anyone spending $250k or more on ASIC technology will likely be building their own units and the one guy that bought 50 units will probably place them on wire shelves, saving the expense of buying an expensive rack.
hero member
Activity: 924
Merit: 501
It isn't confusing at all....that shelf is wasting 1U of rack space from the look of the hole spacing...not helping with rack density unless it is needed.

If the ASIC is 3U, as it appears, you can place 14 of them in a rack with no lost space.  The shelf takes up zero space; the steel fits between the U below and the U above.  It is a 1U shelf.
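Quick arithmetic behind the "14 in a rack" figure, assuming a standard 42U rack and the 3U chassis height guessed from the photo:

Code:
# Rack density check (42U full rack and 3U per chassis are assumptions).
RACK_HEIGHT_U = 42
CHASSIS_HEIGHT_U = 3

units_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U
leftover_u = RACK_HEIGHT_U % CHASSIS_HEIGHT_U
print(units_per_rack, leftover_u)   # 14 units, 0U wasted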

member
Activity: 67
Merit: 10
Assuming you aren't going to set anything on top of that.... but then that's just a waste of space. Derp.

Your response is as confusing as mjsbuddha's

It isn't confusing at all....that shelf is wasting 1U of rack space from the look of the hole spacing...not helping with rack density unless it is needed.
full member
Activity: 124
Merit: 100
Assuming you aren't going to set anything on top of that.... but then that's just a waste of space. Derp.

Your response is as confusing as mjsbuddha's

Go ahead and try convincing me that tower chassis are more dense than rackmount.
hero member
Activity: 1118
Merit: 541
That's ridiculous.  In 1999 what required a rack of machines can now be done in a 1U unit.  Major companies will continue to build server rooms, because they have the data, but smaller companies simply have no need for all that space or the insane cost to build out such a space.  

That's what dedicated server, VPS and webhosting providers are for; and there are more of them than ever. I have servers in 25 separate sites worldwide. There are dozens upon dozens of datacenters to choose from even in tiny city-states like Singapore, Hong Kong, Monaco, Malta and even Iceland! Even in third-world countries you have a great deal of choice. I've personally hosted servers in 4 different datacenters in the Philippines, 2 in Egypt, and a half dozen in South Africa.

If a datacenter is built in the woods, and no one is around to see it, does it exist?

Also, it really doesn't matter what your density is any more. You can get a full rack w/o power and internet for under $500/mo anywhere in the world these days, with the most popular locations as low as $200. You only need a single redundant drop for all of your boxes (2 network drops -> 2 switches w/ spanning tree & bonding, then split off 2 redundant ports for switches in each sub-rack). So you'd get 2x 100 Mbit ports @ $200/mo + a $100 port fee. Then something like $300/month per rack + $300/month for 3-4 kW of usable power. So you end up with $900/month for your initial rack, and $600/month for each subsequent rack. It's really a moot point arguing about a few U. For what you'd pay for high-density solutions (and the higher-density cooling to go with them) it's usually cheaper to buy multiple racks and work out a better deal with your datacenter.
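Putting those figures into a quick sketch (the numbers are the examples from this post, not quotes from any particular datacenter):

Code:
# Monthly colo cost sketch using the example figures above (assumptions, not provider quotes).
NETWORK = 200 + 100     # 2x 100 Mbit drops + port fee, paid once and shared across racks
RACK_SPACE = 300        # per rack, per month
POWER = 300             # per rack, per month, for 3-4 kW usable

def monthly_cost(racks):
    # Network is a single monthly line item; space and power scale per rack.
    return NETWORK + racks * (RACK_SPACE + POWER)

print(monthly_cost(1))  # 900  -> the "initial rack" figure
print(monthly_cost(2))  # 1500 -> i.e. +600 for each additional rack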
hero member
Activity: 924
Merit: 501
Assuming you aren't going to set anything on top of that.... but then that's just a waste of space. Derp.

Your response is as confusing as mjsbuddha's
full member
Activity: 124
Merit: 100
Installing a shelf inside a rack is the most retarded thing you can do. Racks are about high-density.


Yea that 1/32nd of an inch is really gonna cramp your style....



Assuming you aren't going to set anything on top of that.... but then that's just a waste of space. Derp.
hero member
Activity: 924
Merit: 501
Installing a shelf inside a rack is the most retarded thing you can do. Racks are about high-density.


Yea that 1/32nd of an inch is really gonna cramp your style....
sr. member
Activity: 336
Merit: 250
yung lean
Has anyone really been far even as decided to use even go want to do look more like?
full member
Activity: 124
Merit: 100
Installing a shelf inside a rack is the most retarded thing you can do. Racks are about high-density. A rackmountable ASIC miner would allow you to put much more hashing power in a smaller amount of space vs putting a few oddly-shaped chassis on a shelf. Also, racks and datacenters aren't only for hard drives.
hero member
Activity: 924
Merit: 501
Ultimately as hardware gets cheaper and connections faster we'll see more and more small datacenters springing up wherever they're needed.

That's ridiculous.  In 1999 what required a rack of machines can now be done in a 1U unit.  Major companies will continue to build server rooms, because they have the data, but smaller companies simply have no need for all that space or the insane cost to build out such a space. 

Like you I own two racks, one in the house and one in the garage.  Like yours, they take up about as much room as two large refrigerators.  All they do now is hold a bunch of old heavy servers that are now worthless.  These $50 servers cost $20,000 to $50,000 when new a decade ago.

Google and Walmart need data centers.  With terabyte hard drives the rest of us are lucky to need 1U of rack space to store every piece of data we will ever encounter.  You don't need a rack for anything you do... nor will most prospective ASIC owners.  These days a quarter rack is far more than most any person can use for every component in their house.

Getting back to the original question, if you have a quarter million dollars to spend on this problem you are probably going to build your own chip... you are probably NOT going to buy 14 Avalons.  Even if you did buy them, it's trivial to place a computer that isn't designed to be rack mounted on a shelf installed in the rack.  A shelf (including pull-out slider rails) can be added to ANY rack in 5 minutes.

I don't think BFL will ever deliver, but I do believe their presentation is exactly what the world wants/needs/expects in an asic.  A coffee cup warmer.  There is no market for rack mounted asic mining rigs and the additional design expense would not be recovered.




sr. member
Activity: 420
Merit: 250
Gotta love it when a thread turns into an idiot block.

IMO: datacenters are actually becoming smaller and more distributed. I personally know of several medium sized businesses who have broken up their single large data center and distributed parts of it out to various locations / branch offices.

Personally I have several racks inside the clean room I built in my garage.

Ultimately as hardware gets cheaper and connections faster we'll see more and more small datacenters springing up wherever they're needed.



sr. member
Activity: 476
Merit: 250
Keep it Simple. Every Bit Matters.
Huge, singular data centers might be normal in some places (I can't really speak outside my experience), but in the UK and many places in Europe, small data centers are the norm. My understanding, based on a quick bit of research, is that really large ones are more common in America, but I found small ones there too.
It's rare to find really big data centers here (UK); instead there are lots of small ones. Yes, some are owned by the same company.

One of the data center providers I have used, instead of having one large data center, has three in quite separate locations to provide an element of redundancy. They are well connected, so in the unfortunate event something goes wrong, your backups or even other servers are in another data center, unaffected, and you can recover from a disaster easily. That's not so easy with a single large datacenter.

member
Activity: 88
Merit: 10
I used to work in the spare office space of the server building for a regional Australian Bank. It wasn't hundreds of thousands of square feet.

And the company I worked for had its own server rack for running its own servers, switches, etc. You know what, there was even a bit of space left over that would fit an ASIC rack...
hero member
Activity: 924
Merit: 501
Quote from: Viceroy
Yep, there are a few large data-centers.  Not a plethora of small data centers like there once were.

Stop talking. You have no idea what you are talking about.

We are seeing datacenters of all sizes popping up. Lots of smaller DCs supporting regional areas. As technology grows it's spreading out and the infrastructure has to follow.

(cough) bullshit (cough) 

There is no such thing as a small datacenter anymore.  They are on the order of hundreds of thousands of square feet and they are concentrated in a few small areas of the country, like Ashburn, VA.

http://www.datacenterknowledge.com/archives/2012/09/11/loudoun-5-million-square-feet-of-data-centers-more-to-come/

Now you shut up and let the big boys talk.