Volume 14, Issue 50; 13 Dec 2011

Some thoughts on my utter failure to configure a working dual-drive system.

Background: spinning rust is slow and prone to failure and SSDs are fast and…well, I don't know about their long-term reliability but they're fast, ok? I've long wanted to replace my conventional disk with an SSD, but 500GB SSDs are still out of my price range. In the course of a periodic check on just how out of range they are, I came across the Data Doubler, a bracket that allows you to mount a SATA drive where the optical drive goes.

I hatched a cunning plan. I'd pull out the optical drive and use the Data Doubler to put in a second drive: an SSD in the affordable range. I'd boot off the SSD, but leave most of my data on the conventional drive. My thinking was that having swap and /tmp and such on the SSD would provide a performance boost. And because I'd only be using it to boot, it could be small.

I selected a 120GB SSD and charged ahead. After making a complete backup. Because I may be crazy but I'm not (usually) stupid. At least, I try not to make the same mistakes more than once.

The hardware was easy; the physical installation was quick and painless. The software… ah, software, thou art a heartless bastard. It all goes downhill from here.

I should explain that there's one extra complication involved. I use PGP Whole Disk Encryption (now a Symantec product). Laptops get lost and stolen. Governments sometimes decide to confiscate them at borders. None of these things have happened to me, but if any of them do, I don't want the added worry that I've let any confidential information on my laptop leak into other hands. I don't consider myself paranoid, but then I suppose I wouldn't.

Problem one: the Snow Leopard install DVD won't boot. After wasting time trying to determine if this was a consequence of the external optical drive enclosure, I came to the conclusion that it's either because of an EFI update that's been done, or maybe the SSD needs drivers that aren't on the original DVD. I dunno.

Boot the DVD on my old laptop, install onto a small USB drive, boot off the USB drive, run “Software Update” several times, move that drive to my new laptop, boot off of it, use SuperDuper! to install it on the SSD, and reboot with fingers crossed. Success.

Get the encryption software installed, (re)encrypt the drive(s), reboot.

Problem two: the second drive doesn't mount at boot time. Maybe this is a consequence of the encryption. Maybe it's a consequence of the OS X auto mounter, I'm not really sure. It means there's no practical way to simply put my home directory on the spinning rust.

I looked around in the OS X boot sequence. Unlike Linux, it's clearly not something mortals are supposed to tinker with. The /etc/fstab says, literally, “IGNORE THIS FILE”. There's no equivalent of /etc/init.d AFAICT. There's launchd and maybe the answer's in there, but I pressed on with other techniques instead. More fool I, perhaps.
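For what it's worth, launchd probably is the right lever. A sketch of what a LaunchDaemon for mounting the second drive at boot might look like, with an entirely hypothetical label and script path (and note that with whole-disk encryption in play the script would still need some way to supply the passphrase, which is the real sticking point):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Hypothetical label and script path -->
  <key>Label</key>
  <string>local.mount-data-volume</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/sbin/mount-data.sh</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```

Something like that would go in /Library/LaunchDaemons and get loaded with launchctl. I didn't pursue it, so I can't claim it actually solves the problem.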

Mac OS X doesn't support the “bind” mount option. And I couldn't see a practical way to make “union” mounts do what I wanted. What's more, there's nothing like Linux's “Ctrl+Alt+F1” access to a console, so there are few options for getting in between boot and graphical login.
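For readers more at home on Linux, the thing being mourned here is the bind mount; a symbolic link is the nearest stand-in OS X offers. A sandbox demo with made-up paths (safe to run anywhere; the real mount command is commented out because it's Linux-only and root-only):

```shell
#!/bin/sh
# On Linux you could graft one directory onto a second path:
#   mount --bind /Volumes/Data/Users/ndw /Users/ndw
# With no bind mounts, a symlink is the usual substitute:
ROOT=$(mktemp -d)
mkdir -p "$ROOT/data/home"
echo hello > "$ROOT/data/home/file.txt"
ln -s "$ROOT/data/home" "$ROOT/home"   # "$ROOT/home" now shows the same files
cat "$ROOT/home/file.txt"              # prints "hello"
```

The difference, as becomes painfully clear below, is that a bind mount is invisible to applications and a symlink is not.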

(Logging in, luckily, does mount the second drive with an appropriate passphrase prompt from the PGP software.)

Next I ran afoul of the Mac's aggressive defaulting of the directory structure. Programs think ~/Documents is where documents go, ~/Pictures is where pictures go, etc., and they think they can put any amount of data in ~/Library/Application Support. Some programs can be persuaded otherwise, but I've learned it's simpler to leave things that way. But I don't want arbitrary amounts of data on my little SSD; I quickly wound up with an ugly little rat's nest of symbolic links.
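The rat's nest was built from repetitions of one move-and-symlink pattern, roughly the following (Steam and the volume names are just examples; the demo uses a throwaway directory standing in for the two drives so it's safe to run):

```shell
#!/bin/sh
# Sandbox stand-ins for the SSD and the big data drive:
ROOT=$(mktemp -d)
SSD="$ROOT/ssd"; DATA="$ROOT/Volumes/Data"
mkdir -p "$SSD/Library/Application Support/Steam" "$DATA"
echo saves > "$SSD/Library/Application Support/Steam/game.dat"

# The pattern, once per oversized directory:
SRC="$SSD/Library/Application Support/Steam"
DST="$DATA/Application Support/Steam"
mkdir -p "$(dirname "$DST")"
mv "$SRC" "$DST"       # move the data off the SSD
ln -s "$DST" "$SRC"    # leave a link so applications still find it

cat "$SRC/game.dat"    # still readable at the old path; prints "saves"
```

Multiply that by every directory an application might fill up and you have the rat's nest.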

The problem with symbolic links (despite their enormous utility) is that they're too visible. They're a trick of the filesystem, and if you traverse a symbolic link, or if an application knows its location by another name, then relative paths aren't what you expect.

Finally, there's /usr/local. There's a whole bunch of stuff in there that, again, I don't need on my SSD. So I made that a symbolic link. Some library/linking issues persuaded me to rebuild my Homebrew install. Some aspect of that process persuaded me that the brew system needed to know the real location (/Volumes/Data/usr/local). That led to the relative path problem alluded to above, solved with another symbolic link (/Volumes/Data/Volumes/Volumes).

You can see now that I've missed the left turn at Albuquerque, can't you?

The final straw was the DisplayLink driver for my (second, external) USB monitor. It put a library in /usr/local. Which, now that it's a symbolic link to a directory on an encrypted drive, isn't available at boot time. It was sort of ok, as long as I didn't reboot with the external monitor plugged in, and as long as I logged in before plugging it back in.

At this point, I gave up. The path I'd taken, the decisions I'd made, left me with a sometimes bootable rat's nest. I'd clearly gone so far off the reservation that it was only a matter of time before I discovered something awful. At the least convenient moment.

Last night I restored from the backup and went back to booting off the spinning rust. Four days lost, none the worse for wear, and perhaps a little wiser.

On reflection, I think 120GB probably should be enough to install the OS and applications. I think I might try again, when I have a chance, with only the absolutely necessary symbolic links. I'm thinking of ~/Library/Application Support/MarkLogic and ~/Library/Application Support/Steam in particular (though I know enough about MarkLogic to actually move the data out of “application support” if I want).

I'll leave ~/Documents and friends alone, choosing to put my data elsewhere on an application-by-application basis. I'll leave /usr/local on the SSD, etc.

Maybe that'll work.

Performance metrics

Along the way, I took a few simple performance measurements. Nothing terribly scientific: I didn't run several tests and average them, I didn't work hard to make the conditions exactly the same, etc. One of the ways I expected an SSD to be an improvement was in the time it takes to build MarkLogic Server. As a test, I timed the build. In each case, I ran the test on a freshly booted machine with the same few apps running (stuff that starts when I log in). I ran make clean, then timed the build.

Entirely on spinning rust, it took about 44 minutes of wall-clock time as reported by time.

Booting off the SSD, but with the source files on spinning rust, it took about 39 minutes.

Booting off the SSD, with the source files on the SSD, it took about 36 minutes.

So that's a maximum possible savings of just under 20%. Pretty good really, though perhaps an awful lot of fuss for eight minutes. None of these tests took into account the situation where I think the savings would be even more dramatic: when there's enough going on to cause some swapping to occur. Running a build when I've got another server, two Emacsen, a browser, some office app, and a WebEx up, for example.
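The arithmetic behind “just under 20%”, for the record (shell integer math, so the percentages are truncated):

```shell
#!/bin/sh
# Wall-clock minutes from the three runs above:
HDD=44; SSD_BOOT=39; SSD_ALL=36
echo $(( (HDD - SSD_ALL) * 100 / HDD ))    # 8 of 44 minutes saved: prints 18
echo $(( (HDD - SSD_BOOT) * 100 / HDD ))   # boot-drive-only savings: prints 11
```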

[And you need to do that how often? —ed]


I don’t know how it is with toy operating systems, but with Linux I am quite happy with having /tmp on a ramdisk (tmpfs in Linux-speak; my computer has 4GB RAM). No need to play with expensive toys.

—Posted by Matěj Cepl on 13 Dec 2011 @ 01:50 UTC #

On the last ed-remark, you need to do that just about every time you do a build - or do you reboot your machine before each build?

When I got an SSD, the speedup I noticed first was in the time of starting apps. Starting almost anything now takes next to nothing, under almost any conditions. When I log in, all the open-at-login items (stickies/skype/itunes/etc.) start crazy fast. Effectively, you should expect (and tune for) responsiveness improvements (lotsa little things are much quicker), not for speed improvements for big tasks.

—Posted by Jacek Kopecky on 13 Dec 2011 @ 02:25 UTC #

The point of the closing editorial remark was that I don't actually need to rebuild the server from scratch so often that I need to do it with office apps or webex (or photoshop or lightroom, etc.) running. But you're right, there are always two emacsen and a web browser and another server running, so the point's cloudy at best.

I did notice how fast apps started. I'm almost certain I'll try it again. After I get caught up from four lost days :-/

—Posted by Norman Walsh on 13 Dec 2011 @ 02:56 UTC #

How many parallel compiler processes did you have for that build? If your swapfile is on the SSD you really ought to max out your CPU and let swap happen. For example I use -j8 on my quad-core hyperthreaded Core i7 machine with 8G RAM; I close down any large-memory-using programs and let it go to town.

I just did a clean make -j8 and it took 5 minutes 29 seconds (not including boot time, because I didn't reboot) -m

—Posted by Micah Dubinko on 13 Dec 2011 @ 04:24 UTC #

I timed "make slow; make -j3". You're absolutely right that I could have been more aggressive in the SSD case. I'll try again when I'm back in a place where I can.

For the curious: "make slow" compiles eight or ten particularly large modules in series; they're guaranteed to cause swapping if you run them in parallel. That's less important on an SSD, but for fairness I timed the same commands in each case.

—Posted by Norman Walsh on 13 Dec 2011 @ 06:27 UTC #

Seagate makes a "hybrid" drive now that I'm very seriously considering. The "Momentus XT" (model # STAN500100). About $150 for a 500GB HDD + 4GB SLC NAND + smarts with supposedly similar performance to a full SSD. The most obvious drawback is hardiness (drops). I understand that under normal usage (i.e., you do the same thing every day and you are not editing large videos) it should spin down, so power usage is similar to an SSD most of the time.

One of these will be going into my desktop over Christmas (the main family machine) as the main system drive. The only thing that's slowed me up on my laptop is that I would actually like a 1TB drive because I share my laptop with my son. No announcements on larger sizes yet, but I'm assuming they must be coming.

—Posted by Derek Read on 13 Dec 2011 @ 07:20 UTC #