being the trigger man in a metaphor
Monday, November 10, 2014
This morning Mike, my fellow web developer/project manager (and main contact for work in Los Angeles), handed off a project to me whereby we would expand a volume on the Amazon web services EC2 instance hosting that theatre site we've been working on. For whatever reason, that volume had initially been set at 8 gigabytes, a ceiling far too low for the operation of a reliable website. We'd read the documentation for how to expand the volume, but the procedure would require downtime, so we'd been kicking that can down the road until this morning, when it was my job (as the most fearless cowboy in the saloon) to reach down and pick it up (or pull the trigger, depending on which metaphor you're following). Everything seemed to go okay as I stopped the instance (bringing the website down), detached the old eight gigabyte volume, and attached the new one (made from an image of the old one). As is always the curse of doing anything at all with Amazon web services, the problems began when I tried to bring the instance back up with the new, larger volume. What one would hope would happen (and what that procedure I linked to earlier led me to expect) was that the server would come up and the theatre website would be back online. What happened instead was that the server came up unreachable by both SSH and the web. That is what is known as a bad outcome. Now, with the site down, I had to research a solution that would somehow bring it back up. You can see why my colleagues like it for me to be the one to do this sort of work.
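For the record, the general shape of the procedure (sketched here with the AWS command-line tools; every ID below is made up, and the device name depends on the particular machine image) goes something like this:

```shell
# A rough sketch of the volume-expansion procedure, assuming a configured
# AWS CLI. All IDs (i-..., vol-..., snap-...) are hypothetical.

# 1. Stop the instance (this is where the downtime begins).
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# 2. Snapshot the old 8 GB volume so the new one starts as a copy of it.
aws ec2 create-snapshot --volume-id vol-11111111 --description "pre-resize copy"

# 3. Create a larger volume from that snapshot, in the same availability zone.
aws ec2 create-volume --snapshot-id snap-22222222 --size 40 \
    --availability-zone us-east-1a

# 4. Swap the volumes; the device name is a guess and varies by image.
aws ec2 detach-volume --volume-id vol-11111111
aws ec2 attach-volume --volume-id vol-33333333 \
    --instance-id i-0123456789abcdef0 --device /dev/sda1

# 5. Start the instance again.
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

Even once the instance is up, the filesystem inside it still has to be grown to fill the new volume (with something like `sudo resize2fs /dev/xvda1` on an ext filesystem), which is the sort of follow-up step the documentation tends to bury.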
The first thing I tried was reverting to the old eight gigabyte volume, but something had changed and no rollback was possible. After some research, I found something vague about the need for a new elastic IP address, which turned out to be an unlisted (but necessary) step in the procedure. But even with a new IP address, the site was unreachable. Stopping the instance to move to the new volume had caused the site to receive a new public IP address, something that otherwise never changes. It gradually dawned on me that this would require a change wherever the DNS name hosting happens, but of course nobody had ever given me access to that. Meanwhile, of course, everybody else who works for the film website had been running around like chickens with their heads chopped off, repeatedly getting me on the phone and hoping for answers, occasionally distracting me from the just-in-time research I needed to be doing. One of the guys even told me to take a more professional tone after I complained, in an otherwise-informative email, about how my morning was being unexpectedly destroyed by "this shit." (Truth be told, my pay grade is not high enough on this project to be held to such a standard.) In the end, I'd done everything I could, and the guy who had the login info for the DNS hosting was the one who would be able to bring the site back up again.
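The fix for the changing address, as I understand it, is that elastic IP: a static address you allocate once and then associate with whatever instance you like, so it survives stops and restarts. A minimal sketch with the AWS command-line tools (the instance ID and allocation ID are hypothetical):

```shell
# Allocate a static (elastic) IP address; AWS returns an allocation ID.
aws ec2 allocate-address

# Associate it with the instance. The address now belongs to the account
# rather than to the instance's lifecycle, so a stop/start no longer
# hands the server a brand-new public IP (and breaks the DNS pointing).
aws ec2 associate-address --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-44444444
```

Had the site been on an elastic IP from the start, the whole DNS scramble would never have happened.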
But even once he'd fixed the DNS pointing, the site was dead. It turned out that the server's Apache service (the one that serves web pages) hadn't been configured to start up on its own, and I had to start it manually (good thing I'd taken notes and knew precisely the SSH command to issue; I don't think I know of any two Linux installations where this is done identically). I'd never seen anything so ridiculous in my life. My experiences with Amazon web services are full of these sorts of ordeals. And even when solutions exist, they're often concealed behind fast-changing proprietary jargon and jumbled abbreviations, the definitions for which are hard to find. Does anyone know what an AWS Elastic Beanstalk is? Is that even still a thing?
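To back up that complaint about no two installations being alike: even the simple step of telling Apache to come up at boot depends on which distribution and init system you've landed on, and the service itself goes by different names (httpd versus apache2). A few of the usual incantations:

```shell
# Start Apache right now (the immediate fix after an unreachable boot).
sudo service httpd start             # RHEL / Amazon Linux naming
# sudo service apache2 start         # Debian / Ubuntu naming

# Then make it survive the next reboot, which differs by init system:
sudo chkconfig httpd on              # SysV init on RHEL / Amazon Linux
# sudo update-rc.d apache2 defaults  # SysV init on Debian / Ubuntu
# sudo systemctl enable httpd        # systemd distributions
```

None of these is wrong, exactly; the ridiculous part is having to remember which one applies to the particular box you've just SSHed into.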
Other things I did included making a photo album website for the Wall Street house (using the photos taken recently by Deborah) to better market it to prospective tenants. And at some point I used a remote desktop app to take over a Macintosh somewhere in the United Kingdom so I could see why the installation of the Lightroom-connected webapp was failing on it. I changed some permissions and noted an unusual placement of Lightroom Catalog databases, but the remote desktop was so slow and had such bad latency (every response of the UI required a round trip of at least 8000 miles) that I managed to lose some copies of catalog files I was trying to park temporarily in a convenient location in the file system (because otherwise the paths were too long to reliably type given the jerkiness of the UI). Working with any UI would have been frustrating given the latency, but I was particularly enraged by the absence in the Macintosh environment of a simple way to copy and paste filesystem paths. I've railed about this before and I'll do it again: if your GUI doesn't support the copying and pasting of paths, I'm never going to be happy using it. One of the most overlooked aspects of the Web is the way it presents locations as short strings that can be copied, pasted, saved, and modified. I don't know how it can be that it's 2014 and the folks who design and maintain the Macintosh OS fail to see how useful this idea can be on a local file system (particularly given how big such filesystems have become).
Late this afternoon, I drove out to the Wall Street house, mostly just to bring the trash and recycling cans back from the curb. This marked the second time I'd put the recycling can out on the curb, but I've never seen any indication that the City of Kingston has come by and taken anything out of it. Perhaps their recycling days are different from their trash days, though there is no information on their website. (Kingston, Ontario, by contrast, has an interactive map supplying this information.) I did notice, however, that someone (I want to say homeless people, but Kingston doesn't have many of those, particularly at this time of year) had taken all the cans and bottles that can be redeemed for deposits. That made me happy.
While at the Wall Street house, I used some primer, glue, and the little C-shaped piece of PVC I'd made yesterday to repair the tiny leak I'd found in the new bathroom's shit pipe. The repair took less than a minute and seemed to do the trick.