Is there any way to test for this? I thought that SSDs were built to last several years, even under extremely heavy load.
They can last years if you have a decent mix of reads and writes for the flash technology. A typical write-ahead log is a worst-case write-amplification pattern: it writes 512-byte blocks and forces a physical sync/flush to the drive, which in turn forces a full-page (roughly 4096-byte) relocate-and-erase cycle in the flash controller. And to add insult to injury, that data is never read unless the database or the OS crashes.
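A minimal sketch of that access pattern (Python on a POSIX system; the log path and sizes below are illustrative assumptions, not taken from any particular database):

```
import os

LOG_PATH = "/tmp/wal-demo.log"   # hypothetical path, not a real database log
WRITE_SIZE = 512                 # small, sector-sized log appends
FLASH_PAGE = 4096                # typical flash page the controller must rewrite

fd = os.open(LOG_PATH, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
for _ in range(8):
    os.write(fd, b"\x00" * WRITE_SIZE)   # tiny append...
    os.fsync(fd)                         # ...with a forced flush after every one
os.close(fd)

# Each 512-byte logical write can cost a whole flash-page rewrite:
print("worst-case write amplification: about %dx" % (FLASH_PAGE // WRITE_SIZE))
```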
I don't know of any way to check it while the disks are online, besides continuously monitoring the S.M.A.R.T. parameters with smartmontools and hoping that the "old-age" statistics they show aren't lying.
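For example, something along these lines (assuming smartctl is installed and /dev/sda is the SSD; the wear-related attribute names differ between vendors, so the ones listed are just common examples):

```
import subprocess

DEVICE = "/dev/sda"   # assumption: adjust to your SSD; usually needs root
# Vendor-specific wear attributes; names vary between manufacturers.
WEAR_ATTRS = ("Wear_Leveling_Count", "Media_Wearout_Indicator",
              "Total_LBAs_Written", "Reallocated_Sector_Ct")

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True, check=False).stdout

for line in out.splitlines():
    if any(attr in line for attr in WEAR_ATTRS):
        print(line.strip())
```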
The folks I know would simply run a full backup, a full low-level security erase of the flash drives, and then a full restore. The security erase may help clear the bottlenecks in the flash controller's wear-leveling software that are causing those random delays.
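A rough outline of that procedure using hdparm's ATA security-erase commands (destructive; the device path, password, and backup/restore steps are placeholders, and you should verify the exact steps against your drive's documentation before running anything like this):

```
import subprocess

DEVICE = "/dev/sdX"   # placeholder: the SSD to be erased
PASSWORD = "p"        # temporary ATA security password, cleared by the erase

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Full backup first (placeholder; use whatever your setup requires).

# 2. ATA secure erase: set a temporary password, then issue the erase.
run(["hdparm", "--user-master", "u", "--security-set-pass", PASSWORD, DEVICE])
run(["hdparm", "--user-master", "u", "--security-erase", PASSWORD, DEVICE])

# 3. Recreate partitions and filesystems, then run the full restore (placeholder).
```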
But they also aggressively use the warranties on the flash drives, preemptively returning them before the warranty expires. This is only workable if one orders such drives in quantities large enough to compile some sensible wear statistics.
Due to the extremely competitive nature of this business it is quite hard to get accurate information about the wear-leveling algorithms used and their interaction with file systems. I'm sorry I can't be more specific: on one hand there are NDAs, on the other hand the flash controller firmware changes very frequently.
To get flash SSDs to last several years you'll need to mix them with normal spinning drives and put the database logs on the spinning disks. Since the logs are written almost exclusively sequentially and almost never read, they won't bottleneck the whole machine there.
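As one concrete example of that split (hedged: this assumes PostgreSQL and an assumed directory layout, but the symlink trick is the same for most databases whose log directory can be relocated), the WAL directory can be moved to a spinning disk while the server is stopped:

```
import os
import shutil

PG_WAL = "/var/lib/postgresql/16/main/pg_wal"   # assumed data-directory layout
HDD_WAL = "/mnt/hdd/pg_wal"                     # target directory on the spinning drive

# With the database server stopped, and assuming HDD_WAL does not already exist:
shutil.move(PG_WAL, HDD_WAL)   # move the existing log segments to the HDD
os.symlink(HDD_WAL, PG_WAL)    # leave a symlink so the database still finds them
```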
Edit: If you are willing and able to rebuild your kernel, there were some patches that use ATA TRIM and optional bitwise negation to significantly reduce flash wear on some controllers. The frequent operation of "allocate disk blocks and initialize them to zeros" is replaced with simply "erase block", which initializes the flash to all 0xFF and then stores all data complemented. The folks who created and maintain those patches would presumably know which controllers they are worth applying to, and maybe whether there are other custom patches for similar circumstances. Another string worth searching for is "Deterministic Read Zero after Trim" (DZAT), which basically refers to the same idea implemented closer to the actual hardware.
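The negation trick is easy to show in isolation (this is only a conceptual illustration, not the actual kernel patch): an erased flash page reads as all 0xFF, so if every byte is stored complemented, a freshly erased page already reads back as zeros and never has to be programmed for the common "initialize to zero" case.

```
def to_flash(data: bytes) -> bytes:
    """Complement every byte before writing, as the patches described above do."""
    return bytes(b ^ 0xFF for b in data)

def from_flash(raw: bytes) -> bytes:
    """Complement again on read to recover the original data."""
    return bytes(b ^ 0xFF for b in raw)

erased_page = b"\xff" * 16                        # a freshly erased flash page...
assert from_flash(erased_page) == b"\x00" * 16    # ...reads back as all zeros
assert from_flash(to_flash(b"hello")) == b"hello" # round trip is lossless
```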
Edit2: I keep forgetting about the partition table issue. Many OS installers still create the "compatible" disk partition layout with an integral number of "cylinders" (C/H/S == x/255/63). With 255 "heads" per "cylinder" and 63 "sectors" per "track" you are practically assured that the partitions have odd sizes and alignments, so the most common I/O operation, a 4 kB read or write, is guaranteed to straddle two physical 4 kB sectors. Various optimizations then become ineffective and automatically disable themselves for the sake of safety. This is related, but not identical, to the "long sector" / "Advanced Format" issue on mechanical drives. Just make sure that you didn't accidentally set your system into "512-byte sector emulation" (512e) mode.
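A quick way to check a partition for that misalignment on Linux, by reading its start sector from sysfs (sda1 is just an example partition name):

```
PARTITION = "sda1"      # example partition under /sys/class/block/
LOGICAL_SECTOR = 512    # logical sector size reported by the drive
FLASH_PAGE = 4096       # physical page size we want to align to

with open("/sys/class/block/%s/start" % PARTITION) as f:
    start_sector = int(f.read())

offset_bytes = start_sector * LOGICAL_SECTOR
if offset_bytes % FLASH_PAGE:
    print("%s starts at byte %d: NOT 4 kB aligned" % (PARTITION, offset_bytes))
else:
    print("%s starts at byte %d: aligned" % (PARTITION, offset_bytes))
```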