Solid state drives (SSDs) are getting cheap and fast, so I'm putting 4 of them together in a RAID 0 array for a super fast MySQL server. The drives are just 30 gigs but cost only $75 each at NewEgg. I'm trying a 4k chunk size since these drives are unbuffered; I think that's the sweet spot.
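For the record, the setup is roughly along these lines with Linux software RAID (the /dev/sd[b-e] names are just placeholders for whatever the four SSDs come up as):

mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mke2fs -b 4096 /dev/md0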
REVISED
Bottom line – it doesn't work. I'm still investigating why, but instead of being fast it's extremely slow. Just formatting it with mke2fs takes 4 minutes and 54 seconds. These drives are made by OCZ, and I think they are cooking the numbers on the flash drive speed; the real numbers seem to be more than 10 times slower than they specify.
Well, from what I've been reading the buffered write rates are in the 150 MB/s range, so about 200 seconds for one drive on average – and I presume you've already RAIDed them, in which case you may be getting better than that. I'd have to say formatting isn't a good benchmark though; it's a constant write stream with very little reading.
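If you want a cleaner number than a format time, a plain sequential write with the OS cache bypassed would look something like this (the mount point and file size are just examples):

dd if=/dev/zero of=/mnt/ssd/testfile bs=1M count=1024 oflag=direct
# on an older dd without oflag=direct, conv=fdatasync gives a similarly honest number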
Make sure that you are NOT putting any paging or heavy-write filesystems on it, though. The random reads and writes will ruin whatever performance gains you'd otherwise get. I'd use it for OS system files, distributions, and databases where the largest part of the system is read-only.
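If you do put a filesystem on it, an fstab entry along these lines at least keeps atime updates (a constant trickle of small writes) off the SSD, and the swap partition belongs on a spinning disk; the device and mount point here are assumptions:

/dev/md0   /var/lib/mysql   ext3   noatime,nodiratime   0   2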
Now it would be interesting to have a secondary SATA hard drive holding your high-TPS database files, with the SSDs holding all of your lookups and low-write tables. I suspect it would be on par with caching it all in memory, though you still have the overhead of the SATA controller access.
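One low-tech way to do that split with MyISAM tables is to move a table's data files onto the SSD and symlink them back into the MySQL data directory. The paths and table name below are made up, so adjust for your setup:

/etc/init.d/mysql stop
mv /var/lib/mysql/mydb/lookups.MYD /var/lib/mysql/mydb/lookups.MYI /mnt/ssd/mysql/
ln -s /mnt/ssd/mysql/lookups.MYD /var/lib/mysql/mydb/lookups.MYD
ln -s /mnt/ssd/mysql/lookups.MYI /var/lib/mysql/mydb/lookups.MYI
/etc/init.d/mysql start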
Did you have some particular reason to stripe it instead of going for a RAID 5 config? With new drive types it's far more realistic (and comfortable) to set them up as RAID 5 so you can potentially hot-swap out a bad drive and not lose the whole array.
You actually should be able to use that RAID box that John was pushing a few months back; I can't remember for sure, but I thought it could handle RAID 5.
I'm using Linux software RAID and the motherboard has 4 SATA II ports. The drive is rated at 150/90 MB/s read/write, which is reasonably fast, and in RAID 0 it should be about 4X that. With real hard drives this works. So what I suspect is that those numbers are cooked and that in real life you don't get that speed under load with random reads and writes.
Check out that MacBook 128 GB SSD upgrade.
After a while the Mac crawled to a halt… defective blocks; the SSD was returned under warranty.
HD-Tach shows my 64 GB G.Skill drive reading just below 150 MB/s with an access time of 0.2 ms. Linear writes like copying a file to the drive give me ~30-50 MB/s. The documentation says I should be getting 155 MB/s max on reads and 90 MB/s max on writes. Doing an NTFS format (not a quick format) of the drive in Windows setup was near instantaneous.
What read/write performance are you actually getting? Have you tried testing the drives individually to make sure you don't have a bad one? That's all I can think of.
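If you want to rule out a bad member, it's quick to benchmark each drive on its own before it goes back into the array; the device names here are just whatever your SSDs show up as:

for d in sdb sdc sdd sde; do
  echo "=== /dev/$d ==="
  hdparm -t /dev/$d                                        # sequential read throughput
  dd if=/dev/$d of=/dev/null bs=1M count=512 iflag=direct  # raw read, bypassing the page cache
done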
I wouldn't expect 4x performance since the seek times are already so low on SSDs, but it should help with write performance.
Marc, what speed testing diagnostics have you tried? SI-Soft makes a good one, and there are several others. I’m not certain how they’ll work on SSDs, but it might be worth a try.
Are the RAID drivers spec'd to perform this task?
The BIOS of the RAID/SATA controller should be accessible; did the drives auto-detect OK?
SSD has a completely different “behavioral topography” from hard media. If the OS and drivers aren’t designed for that then it’s inefficient.
A combo of hard and SSD can work well, and will increasingly get better. Unfortunately, the Linux folks are well behind on this one and need to catch up.
RAID controllers use memory buffers to speed up transfer rates on regular hard disks, and RAID controllers can actually hinder SSD performance. Unless you get a RAID controller specifically built for SSDs, you won't see a performance increase; you'll actually see a decrease.
Why should it take a specific driver for SSD? It’s a SATA II drive. It should act like any other drive.
As to what speed test I'm using – none. It's just that a format that should take 10 seconds is taking 5 minutes.
If you run RAID off the motherboard, your speed will be slow.
It's worth spending the extra $ for a dedicated PCI SATA RAID card that supports four drives.
Then again, nothing beats older SCSI hardware. Sub-36 GB SCSI-2 drives usually go for under $20 used, and servers with a six-drive SCSI backplane are easily found for under $400.
Database performance on such machines is high; the drive replacement rate, however, is also high.
Depends if you “see” your machine on a daily basis or not.
I’m running several other raid arrays, Raid 0, Raid 1, and Raid 10, all off the motherboard using software raid and they are all fine and fast. The fancy raid controllers only help on raid 5 and 6.
Sorry Marc, I was hoping this would have worked, I guess that’s why the drives are so cheap.
Keep trying…
Very interesting, but back in the old DOS/486 days we weren't using an OS that used a "virtual memory" file that would cause heavy drive access. Ever since Windows 3.1 came around there has been this use of virtual RAM, since for some reason we needed more RAM than we actually had.
20 years later we are using systems that have at least 2-4 GIGABYTES of RAM, which is larger than the first set of hard drives I had 10 years ago.
Yet why are we still using virtual RAM? Do we really still need it now? Why would simple apps like web browsers, email, and IM software need a giant swap file of 8 gigs?
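On the Linux side at least, it's easy to check whether swap is even being touched and to discourage it; the swappiness value below is just an example, not a recommendation:

free -m                      # how much swap is actually in use
swapon -s                    # which devices or files back it
sysctl -w vm.swappiness=10   # tell the kernel to prefer keeping pages in RAM (default is 60)
swapoff -a                   # or turn it off entirely if you have the RAM to spare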
“# 42 Marc Perkel said, on January 12th, 2009 at 8:11 am
I’m running several other raid arrays, Raid 0, Raid 1, and Raid 10, all off the motherboard using software raid and they are all fine and fast. The fancy raid controllers only help on raid 5 and 6.”
Now I'm confused. Are you using Linux "softRAID" or your motherboard's "fake RAID"? Both are software RAID, and you may see bad SSD performance with some of the older motherboard RAID controllers.
I would try using the motherboard RAID and install Windows or some other OS to see if it's just a Linux problem. Hook the drives up individually to another machine, format them, and run a speed test to make sure you don't have a bad drive in the mix.
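For what it's worth, it's easy to tell from the Linux side which kind of array you've actually got and how it's laid out (the md device name is whatever mdadm assigned):

cat /proc/mdstat         # lists every md array with its level, members, and chunk size
mdadm --detail /dev/md0  # full layout plus the state of each member drive
hdparm -t /dev/md0       # sequential read off the stripe, for comparison with single drives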
If I had a bad drive it wouldn’t work at all.
Not necessarily, you might have a bad drive that’s getting crappy read/write performance and slowing the whole array down.
It's not like it's that hard to check. Just hook them up to a Windows machine, run a free benchmark like HD-Tach on each drive, and see what it says.
OCZ’s Core drives suck. I could have told you that. I have a 120gb model. They freeze for a good 10 seconds sometimes, and are slow for anything but a pure read.
Now… I did get it to work quickly: Windows Disk Protection. It caches writes to a log file and commits them at start-up. Makes the drive fly. It's the random writes that kill these SSDs.
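A crude way to see that cliff for yourself is to compare one big streaming write against a pile of scattered 4k writes, both with the OS cache bypassed; the file names and counts here are made up:

dd if=/dev/zero of=/mnt/ssd/seq.bin bs=1M count=512 oflag=direct
time for i in $(seq 1 1000); do
  dd if=/dev/zero of=/mnt/ssd/rand.bin bs=4k count=1 seek=$RANDOM conv=notrunc oflag=direct 2>/dev/null
done
# $RANDOM tops out at 32767, so the 4k writes land at random offsets within the first ~128 MB of the file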
I don't recall why (I did read about it), but my flash drives are much slower reading than writing. They are slower than the hard drive. I have noticed that if I stick a lot of pictures in a file and then copy that file, for some reason it is much faster than copying the individual pictures.
I have a chip reader and bought a large chip (4 gig, though the CompactFlash I/II/MD card itself is what is oversized) to speed up boot time on my Vista machine. It works on my HP.
At a guess my machine mainly boots off the chip and has a few read files there.
Considering that I like to save my working files often in case of failure, that Office creates temp files, and that flash memory is slow to write and reported to be failure-prone when written to too often, I don't want my data storage to be on flash. Hard drive space is cheaper and faster. I have 28 gigs of pictures and am starting to do video.
My take on built-in flash drives: stick your OS and programs on the flash drive and your data on a hard drive or drives.
Any new status on this Marc? What did you learn? Do you still use this setup?
I'm with you, man. I just took 4 OCZ Core Series v2 128 GB drives and tried them in a variety of RAID configs: 0, 0+1 (10), and 5. Ideally I wanted RAID 6 for speed and redundancy. So what did I learn? RAID 5 does not work at all with Vista, and works like crap with Windows 7. I couldn't imagine it was the drives, so I took my two Raptors (old drives) and configured them in RAID 0. Lo and behold, it works fine, and it's fast. So it looks like the hang-up is with the drives. Any thoughts?
This is an old thread, but I just read an article that might be useful for anyone trying RAID with SSDs. It's dated from 2007, but basically they used nine 30 GB SSDs, tried different combos for RAID 0, and ran several tests; it shows the results too. It will probably answer some questions here…