
Topic: Why is Bitcointalk hosted on SSDs? (Read 794 times)

sr. member
Activity: 280
Merit: 250
April 15, 2015, 09:29:06 AM
#9
SSDs make the website much faster for things like logging in and fetching data. They're also less likely to suffer physical damage, because there are no moving parts. Power consumption is better than HDDs too: for comparison, an SSD draws no more than about 2W, while an HDD draws more than 6W.
Bitcointalk runs quite smoothly under this much traffic; maybe that's thanks to the SSDs?
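
To put those wattage figures in perspective, here's a back-of-the-envelope sketch (the 2W/6W numbers above, four drives as described elsewhere in the thread, and an assumed electricity price; all illustrative, nothing measured):
Code:
# Rough annual power cost for a small drive array. Every figure here is an
# illustrative assumption, not a measurement of the forum's actual server.
DRIVES = 4                  # e.g. a 4-drive RAID 1+0 array
SSD_WATTS = 2.0             # per-drive draw assumed above
HDD_WATTS = 6.0
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12        # assumed electricity price in USD

def annual_cost(watts_per_drive):
    kwh = watts_per_drive * DRIVES * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

print(f"SSD array: ~${annual_cost(SSD_WATTS):.2f}/year")
print(f"HDD array: ~${annual_cost(HDD_WATTS):.2f}/year")
# Roughly $2 vs $6 per drive per year at these numbers; in a data centre the
# bigger win is usually the reduced heat rather than the electricity bill.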
hero member
Activity: 882
Merit: 1006
April 14, 2015, 05:15:43 AM
#8
Most websites are hosted on SSDs now. There is a significant performance increase when putting a database on an SSD. While SSDs do have a shorter lifespan, these are enterprise SSDs designed to run in a server environment, not consumer ones, and they can take more of a beating than consumer drives. Most sysadmins will also put them in a RAID configuration so that if one drive dies the website keeps running as normal with no downtime or data loss; you simply pull the dead drive out and slot a new one in whenever you get a chance. Unfortunately Bitcointalk was pretty unlucky and had two drives in the RAID fail at roughly the same time. That's quite rare, but it can happen (certain things make it more likely, such as using the same drive model for every member of the array or a buggy RAID controller), and it meant we had to restore from daily backups.
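
To make the "two drives at once" point concrete, here's a tiny sketch (purely illustrative, with made-up drive labels, not the forum's actual setup) that enumerates which two-drive failures a four-drive RAID 1+0 survives. The array only dies when both failed drives sit in the same mirror pair:
Code:
from itertools import combinations

# A 4-drive RAID 1+0: two mirror pairs (RAID 1), striped together (RAID 0).
# The array survives as long as every mirror pair keeps at least one
# working drive. Drive labels are purely illustrative.
mirror_pairs = [("A", "B"), ("C", "D")]
drives = [d for pair in mirror_pairs for d in pair]

def survives(failed):
    return all(any(d not in failed for d in pair) for pair in mirror_pairs)

for failed in combinations(drives, 2):
    status = "survives" if survives(set(failed)) else "ARRAY LOST"
    print(f"failed {failed}: {status}")

# Of the 6 possible two-drive failures, only the 2 that take out a whole
# mirror pair destroy the array; unlucky, but exactly what happened here.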
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
April 14, 2015, 12:41:27 AM
#7
Quote
For web hosting purposes, the performance advantage appears to be the main benefit over traditional hard drives. I guess if the site is backed up regularly then a drive failure happening once or twice every decade might be a reasonable trade-off for a faster web experience. I've noticed that Dreamhost is now offering an SSD option for their VPS hosting service too which seems to have been well received.

There are at least daily backup images made. Hence, when the last SSD deteriorated, it was easy enough to restore from the backup image; only 8-12 hours of content was reverted.
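
For anyone curious what a "daily backup image" means in miniature, here's a hedged sketch using Python's built-in sqlite3 online-backup API. The forum itself presumably runs MySQL behind SMF, so this is only an illustration of the idea, and the file names are made up:
Code:
import sqlite3
from datetime import date
from pathlib import Path

# Minimal daily-backup sketch: copy a live SQLite database into a dated
# snapshot file. Paths and names are hypothetical.
LIVE_DB = Path("forum.db")
BACKUP_DIR = Path("backups")
BACKUP_DIR.mkdir(exist_ok=True)

def daily_backup():
    target = BACKUP_DIR / f"forum-{date.today():%Y-%m-%d}.db"
    with sqlite3.connect(LIVE_DB) as src, sqlite3.connect(target) as dst:
        src.backup(dst)  # consistent online copy, even while the DB is in use
    return target

if __name__ == "__main__":
    print("wrote", daily_backup())

# Run something like this from cron once a day; restoring means pointing the
# application at the newest snapshot, which is why only a few hours of
# content were lost.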
full member
Activity: 350
Merit: 118
April 13, 2015, 07:57:10 PM
#6
Quote
And to add to the above, the I/O capacity of an SSD is on an entirely different level to that of a mechanical hard drive. And that's exactly what non file hosting sites require.

Quote
I'm guessing SSD's are used mostly because of their speed difference vs. classic hard drives. They have great random seek times (fractions of a millisecond because the disk doesn't need to spin to read data) plus most current SSD's can do about 550MB/s read and write speeds. Compare that with 200MB/s max on hard drives and it probably makes more sense to host this site on SSD's. SSD's are also getting better and better - they're becoming more reliable, coming with larger capacities, and faster speeds. The newest Intel SSD's can do about 1.5GB/s read/write, obviously for a much higher price. However that doesn't really relate to the question.

I think the major reason they're used is because of their quick random seek times, making it multitudes faster to prepare web pages for the thousands of constant users on this site compared to a hard drive.

For web hosting purposes, the performance advantage appears to be the main benefit over traditional hard drives. I guess if the site is backed up regularly then a drive failure happening once or twice every decade might be a reasonable trade-off for a faster web experience. I've noticed that Dreamhost is now offering an SSD option for their VPS hosting service too which seems to have been well received.
legendary
Activity: 1694
Merit: 1024
April 13, 2015, 07:24:56 PM
#5
I'm guessing SSD's are used mostly because of their speed difference vs. classic hard drives. They have great random seek times (fractions of a millisecond because the disk doesn't need to spin to read data) plus most current SSD's can do about 550MB/s read and write speeds. Compare that with 200MB/s max on hard drives and it probably makes more sense to host this site on SSD's. SSD's are also getting better and better - they're becoming more reliable, coming with larger capacities, and faster speeds. The newest Intel SSD's can do about 1.5GB/s read/write, obviously for a much higher price. However that doesn't really relate to the question.

I think the major reason they're used is because of their quick random seek times, making it multitudes faster to prepare web pages for the thousands of constant users on this site compared to a hard drive.
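
If anyone wants to see the seek-time gap for themselves, here's a rough sketch that times random 4 KiB reads against sequential ones on an existing large file. The path is hypothetical, and on a machine with plenty of free RAM the OS page cache will hide much of the difference, so treat the numbers as illustrative:
Code:
import os, random, time

# Compare sequential vs random 4 KiB reads from a large existing file.
# PATH is hypothetical; results are heavily affected by the OS page cache.
PATH = "/var/tmp/testfile.bin"
BLOCK = 4096
READS = 2000

def timed_reads(offsets):
    fd = os.open(PATH, os.O_RDONLY)
    start = time.perf_counter()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.read(fd, BLOCK)
    os.close(fd)
    return time.perf_counter() - start

size = os.path.getsize(PATH)
sequential = [i * BLOCK for i in range(READS)]
scattered = [random.randrange(0, size - BLOCK) for _ in range(READS)]

print(f"sequential: {timed_reads(sequential):.3f}s  random: {timed_reads(scattered):.3f}s")
# On a spinning disk the random pass is dramatically slower (head seeks);
# on an SSD the two come out much closer, which is the whole point.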
full member
Activity: 350
Merit: 118
April 13, 2015, 07:01:39 PM
#4
Quote
http://www.pcworld.com/article/2856052/grueling-endurance-test-blows-away-ssd-durability-fears.html

SSD technology has come a long way over the past 10 years. At my current usage rate (higher than average probably), the 840 Evo in my desktop will wear out in about 12 years, and that's a consumer drive, not enterprise (I don't know what SSD's are in use here).

Hmm... Didn't know that. Theymos did say that overuse of swap space was what deteriorated the SSDs though and that they are older models.

Quote
A platter drive I would normally replace after 5 years, or 50,000-60,000 hours.

I've had platter drives that have lasted 25+ years, although I've had a couple from the mid-2000s fail as well. The current one in my laptop is over 10 years old.

Quote
It also uses more power, produces more heat, is more vulnerable to shock, etc. They have their place still, but longevity is not a reason to buy a platter drive.

Shock, power consumption, and heat probably aren't concerns for a server, since servers don't tend to be moved around much, usually don't have cooling issues, and don't rely on a limited power source the way laptops do.

Quote
People worried about the finite lifecycle of a SSD are, for the most part, being ridiculous or using it as an excuse to justify their terrible decision to stick with platter drives. Your netbook probably has a cheap one. Cheap drives are crap drives, doesn't matter what kind they are.

Possible. My netbook is the 4 GB model from 2007. The Eee PC was cheap compared to the subnotebooks that came before it but ASUS isn't a manufacturer that tends to skimp on quality. I'm not sure if manufacturers still disable virtual memory on SSDs or not as a precautionary measure these days though.

EDIT: After doing a quick Google search for 'virtual memory on ssds', it looks like it's still a bad idea. Most of the sites recommend turning the option off.

Example:

Quote
No one likes a bricked SSD. You can reduce wear and tear and wring out every last write cycle - just don't treat it like a traditional hard drive...

...An SSD is flash storage. It has no moving parts. So unlike on a traditional mechanical hard drive, nothing breaks. SSD wear and tear has to do with write cycles.

Flash storage handles data in a specific way. When data is written to a block, the entire block must be erased before it can be written to again. The lifespan of an SSD is measured in these program-erase (P/E) cycles. Modern, consumer-grade, Multi-Level Cell (MLC) NAND memory can generally endure about 3,000 to 5,000 P/E cycles before the storage's integrity starts to deteriorate. The higher-end, Single-Level Cell (SLC) flash memory chip can withstand up to 100,000 P/E cycles.

You'd have to work hard to reach the P/E cycle limit for an MLC-based drive, let alone an SLC-based one. Nevertheless, every time you write something to the drive, you bring it a little closer to its demise. Don't obsess over every single write cycle—a few of our later tips are best suited for such tendencies—but do check out the following techniques for minimizing unnecessary writes to the drive...

...For the average user who doesn't write heaps of data to storage constantly, your SSD will probably live a long and happy life. And if you adjust your storage habits to the SSD's strengths, you could squeeze a few more cycles out of the drive.

Link: http://www.pcworld.com/article/2043634/how-to-stretch-the-life-of-your-ssd-storage.html

Quote
The performance problem is lessened considerably if you put the swap file on an SSD rather than a hard drive. But there’s a problem: SSDs wear out with too much writing, so putting a swap file on one might shorten its life.

If all you have is an SSD, you may want to disable virtual memory entirely. You can do this in the Virtual Memory dialog box by selecting the drive it’s on, clicking No paging file, then clicking Set.

Link: http://www.pcworld.com/article/2840886/if-windows-virtual-memory-is-too-low-you-can-increase-it-but-there-are-trade-offs.html

Quote
Should we allow Windows, or any other operating system, the right to ever install virtual memory on a SSD drive?

The short answer is no. The long answer is a little more difficult to explain. In order to understand why we shouldn't use virtual memory on our SSD drives, we should actually think about what virtual memory actually is, and what it is used for...

...SSD drives are random access drives that access memory at high speed and write faster as well. From a pure speed standpoint that would make them more useful for virtual memory than traditional spinning drives, but I maintain my stance that virtual memory does not belong on SSDs, and there is a very important reason why. Each sector has a limited write endurance because of the very nature of an SSD drive, as opposed to a traditional spinning drive. So, while it is possible to damage a sector on a traditional spinning drive through normal wear and tear, its sectors tend to last a lot longer, and there are a lot more of them than on an SSD. In addition, with the advent of perpendicular magnetic recording, where the bit is written deeper than it used to be, the magnetic storage of a spinning drive is much more resilient than that in standard SSD drives. In short, the spinning drive, even though it is limited in speed by the mere fact of needing to spin to the correct position, is less prone to issues related to limited write endurance, at least not in any way that current SSDs suffer.

Since virtual memory is a form of RAM, it can be expected to change at any time, being written and rewritten in what may seem a random pattern. Because SSDs have a limited write endurance, that kind of activity can negatively impact the lifespan of an SSD drive and can render it useless in a shorter span of time.

Link: http://tqaweekly.com/episodes/season3/tqa-se3ep17.php
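
Coming back to the practical side of the swap question: on a Linux box you can check whether swap actually lives on an SSD by combining /proc/swaps with the block layer's "rotational" flag. A rough, Linux-only sketch; the partition-to-disk mapping below is deliberately crude (NVMe names like nvme0n1p1 would need extra handling):
Code:
import re
from pathlib import Path

# Report each active swap device and whether the underlying disk is an SSD.
# Relies on /proc/swaps and /sys/block/<disk>/queue/rotational (Linux only).
def swap_devices():
    lines = Path("/proc/swaps").read_text().splitlines()[1:]  # skip header row
    return [line.split()[0] for line in lines if line.strip()]

def parent_disk(device):
    # Crude partition-to-disk mapping, e.g. /dev/sda2 -> sda.
    return re.sub(r"\d+$", "", Path(device).name)

for dev in swap_devices():
    flag = Path(f"/sys/block/{parent_disk(dev)}/queue/rotational")
    if flag.exists():
        kind = "HDD" if flag.read_text().strip() == "1" else "SSD"
        print(f"{dev}: swap is on an {kind}")
    else:
        print(f"{dev}: not a plain partition (swap file or LVM?), check manually")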
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
April 13, 2015, 07:29:13 AM
#3
And to add to the above, the I/O capacity of an SSD is on an entirely different level to that of a mechanical hard drive. And that's exactly what non file hosting sites require.
legendary
Activity: 1652
Merit: 1128
April 13, 2015, 07:23:19 AM
#2
http://www.pcworld.com/article/2856052/grueling-endurance-test-blows-away-ssd-durability-fears.html

SSD technology has come a long way over the past 10 years. At my current usage rate (higher than average probably), the 840 Evo in my desktop will wear out in about 12 years, and that's a consumer drive, not enterprise (I don't know what SSD's are in use here). A platter drive I would normally replace after 5 years, or 50,000-60,000 hours. It also uses more power, produces more heat, is more vulnerable to shock, etc. They have their place still, but longevity is not a reason to buy a platter drive.
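
That "about 12 years" figure is just arithmetic on the drive's rated endurance, by the way. A hedged sketch with made-up but plausible numbers (capacity, P/E cycles, write amplification, and daily writes are all assumptions, not the specs of any drive in this server):
Code:
# Back-of-the-envelope SSD lifetime estimate. Every number is an assumption
# chosen for illustration.
capacity_gb = 250            # drive size
pe_cycles = 1000             # rated program/erase cycles for the NAND
write_amplification = 2.0    # extra internal writes done by the controller
host_writes_gb_per_day = 20  # how much the host actually writes per day

total_host_writes_tb = capacity_gb * pe_cycles / write_amplification / 1000
years = total_host_writes_tb * 1000 / host_writes_gb_per_day / 365

print(f"~{total_host_writes_tb:.0f} TB of host writes, ~{years:.0f} years at this rate")
# About 125 TB and 17 years with these numbers. Heavy database or swap traffic
# pushes the daily writes (and the write amplification) up sharply, which is
# how old, constantly-swapping SSDs can die much sooner.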

People worried about the finite lifecycle of a SSD are, for the most part, being ridiculous or using it as an excuse to justify their terrible decision to stick with platter drives. Your netbook probably has a cheap one. Cheap drives are crap drives, doesn't matter what kind they are.
full member
Activity: 350
Merit: 118
April 13, 2015, 05:30:38 AM
#1
According to this thread which was created to explain the downtime and data loss that Bitcointalk experienced earlier this year, Bitcointalk is hosted on solid-state drives (SSDs):

Quote
Technical details:

The bitcointalk.org and bitcoin.it databases were stored on a RAID 1+0 array: two RAID 1 arrays of 2 SSDs each, joined via RAID 0 (so 4 SSDs total, all the same model). We noticed yesterday that there were some minor file system errors on the bitcoin.it VM, but we took it for a fluke because there were no ongoing problems and the RAID controller reported no disk issues. A few hours later, the bitcointalk.org file system also started experiencing errors. When this was noticed, the bitcointalk.org database files were immediately moved elsewhere, but the RAID array deteriorated rapidly, and most of the database files ended up being too badly corrupted to be used. So a separate OS was set up on a different RAID array, and the database was restored using a daily backup.

My guess is that both of the SSDs in one of the RAID-1 sub-arrays started running out of spare sectors at around the same time. bitcoin.it runs on the same array, and it's been running low on memory for a few weeks, so its use of swap may have been what accelerated the deterioration of these SSDs. The RAID controller still reports no issues with the disks, but I don't see what else could cause this to happen to two distinct VMs. I guess the RAID controller doesn't know how to get the SMART data from these drives. (The drives are fairly old SSDs, so maybe they don't even support SMART.)
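
(As an aside, on a machine where the operating system can see the drives directly, SMART wear indicators can usually be read with smartctl from smartmontools. A rough sketch; the device path is hypothetical, the attribute names vary by vendor, and a hardware RAID controller may hide them entirely, as the quote above suggests:)
Code:
import subprocess

# Rough sketch: print SSD wear-related SMART attributes via smartmontools.
# Needs root, and a drive the OS can query directly; a hardware RAID
# controller sitting in front of the disks may hide this information.
DEVICE = "/dev/sda"  # hypothetical device path
WEAR_ATTRS = ("Wear_Leveling_Count", "Media_Wearout_Indicator",
              "Percent_Lifetime_Remain", "Reallocated_Sector_Ct")

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if any(attr in line for attr in WEAR_ATTRS):
        print(line)
# If nothing prints, the drive or its controller simply isn't exposing SMART
# attributes, much like the situation described in the quote above.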

I'm curious as to why this is so. Don't SSDs have finite read-write cycles? I would think that it's not really a good idea to use SSDs to host a PHP application and a database that is constantly being written to. Even more so if the memory is low (and it was) since this forces the server to constantly write to the swap space.

I have an Asus Eee PC which uses an SSD for storage. Unlike the rest of their laptop range, these netbooks shipped with virtual memory disabled to spare the SSD. Not sure if it's related, but I've also heard that using a Raspberry Pi to run a full node is a bad idea, since the write cycles would wear out the SD card very quickly.