The Daily Parker

Politics, Weather, Photography, and the Dog

Starting the oldest item on my to-do list

I mentioned a few weeks ago that I've had some difficulty moving the last remaining web application in the Inner Drive Technology Worldwide Data Center, Weather Now, into Microsoft Windows Azure. Actually, I have two principal difficulties: first, I need to re-write almost all of it, to end its dependency on a Database of Unusual Size; and second, I need the time to do this.

Right now, the databases hold about 2 GB of geographic information and another 20 GB of archival weather data. Since these databases run on my own hardware, I don't pay anything for them beyond the server's electricity costs. In Azure, that amount of database space costs more than $70 per month, well above the $25 or so my database server costs me.

I've finally figured out the architecture changes needed to get the geographic and weather information into cheaper (or free) repositories. Some of the strategy involves not storing the information at all, and some will use the orders-of-magnitude-less-expensive Azure table storage. (In Azure storage, 25 GB costs $3 per month.)
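
To give a flavor of what that change looks like, here's a minimal sketch of writing one observation to Azure table storage with the Azure storage client library. The entity, table name, and station code here are placeholders I made up for illustration, not Weather Now's actual design:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    // Hypothetical observation entity; the real schema will differ.
    public class WeatherObservation : TableEntity
    {
        public WeatherObservation() { }

        public WeatherObservation(string stationId, DateTime observedUtc)
        {
            PartitionKey = stationId;                        // e.g., one partition per station
            RowKey = observedUtc.ToString("yyyyMMddHHmmss"); // sortable within the partition
        }

        public double TemperatureCelsius { get; set; }
        public string RawMetar { get; set; }
    }

    public static class ObservationStore
    {
        public static void Save(string connectionString, WeatherObservation observation)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            var table = account.CreateCloudTableClient().GetTableReference("Observations");
            table.CreateIfNotExists();
            table.Execute(TableOperation.InsertOrReplace(observation));
        }
    }

The point is that the partition and row keys replace most of what the relational schema did, which is exactly why the data layer needs the ground-up rewrite described below.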

Unfortunately for me, the data layer is about 80% of the application, including the automated processes that go out and get weather data. So, to solve this problem, I need a ground-up re-write.

The other problem: time. Last month, I worked 224 hours, which doesn't include commuting (24 hours), traveling (34 hours), or even walking Parker (14 hours). About my only downtime was during that 34 hours of traveling and while sitting in pubs in London and Cardiff.

I have to start doing this, though, because I'm spending way too much money running two servers that do very little. And I've been looking forward to it—it's not a chore, it's fun.

Not to mention, it means I get to start working on the oldest item on my to-do list, Case 46 ("Create new Gazetteer database design"), opened 30 August 2006, two days before I adopted Parker.

And so it begins.

Changing the way I read

Last week, I bought an ASUS Transformer TF700, in part to help out with our seriously cool Galahad project, and in part so I could read a bunch of heavy technical books on tonight's flight to London. And yes, I had a little tablet-envy after taking the company's iPad home overnight. It was not unlike fostering a puppy, in the sense that you want to keep it, but fortunately not in the sense of needing to keep Nature's Miracle handy.

Then yesterday, Scott Hanselman pointed out a great way to get more use out of the pad: Instapaper. I'm hooked. As Hanselman points out,

Here's the idea. You get a bunch of links that flow through your life all week long. These are often in the form of what I call "long-form reading." Hackernews links, NYTimes studies, academic papers, etc. Some folks make bookmarks, have folders called "Links" on their desktops, or email themselves links.

I have these websites, papers and interesting links rolled up and delivered automatically to my Kindle every week. Think about how amazing that is and how it can change your relationship with content on the web. The stress and urgency (and open tabs) are gone. I am naturally and organically creating a personalized book for weekend reading.

I have a bookmarklet from Instapaper that says "Read Later" on my browser toolbar. I've put it in every browser I use, even Mobile Safari. I've also logged into Instapaper from all my social apps so that I can Read Later from my iPhone Twitter Client for example. You'd be surprised how many apps support Instapaper once you start looking for this.

What this means is that Instapaper is ready and waiting for me in every location where an interesting piece of long-form reading could present itself. I don't stress, I click Read Later and the document is shipped off to Instapaper.

I'm sold. I actually have it updating my tablet every 12 hours, because I do a lot of my reading on the 156 bus. Or, today, British Airways 226.

Direct effects of moving to Azure

I still haven't moved everything out of the Inner Drive Technology Worldwide Data Center to Microsoft Windows Azure, because the architecture of Weather Now simply won't support the move without extensive refactoring. But this week I saw the first concrete, irrefutable evidence of cost savings from the completed migrations.

First, I got a full bill for a month of Azure service. It was $94. That's actually a little less than I expected, though in fairness it doesn't include the 5–10 GB database that Weather Now will use. Keep in mind, finishing the Azure migration means I get to shut off my DSL and landline phone, for which AT&T charges me $55 for the DSL and $100 for the phone.
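
Put another way: $55 plus $100 is $155 going to AT&T every month, against $94 going to Microsoft, so finishing the migration nets roughly $60 a month before even counting the electricity.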

I also found out how much less energy I'm using with 3 of 5 servers shut down. Here is my basic electricity use for the past two years:

The spikes are, obviously, air conditioning—driven very much by having the server rack. Servers produce heat, and they require cooling. I have kept the rack under 25°C, and even at that temperature, the servers spin up their cooling fans and draw even more power. The lowest usage periods, March to May and October to December, are cool and moist, so I don't use either air conditioning or humidifiers.

Until this month, my mean electricity use was 1100 kWh per month overall, 1386 kWh in the summer, and 908 kWh in the shoulder seasons. In the last two years, my lowest usage was 845 kWh.

Last month it was 750 kWh. Also notice how, during the much hotter summer in 2012 (compared with 2011), my electricity use was slightly lower. It was just harder to see the savings until now.

Including taxes, that means the bill was only $20 less than the usual shoulder-season bill. But I'm not General Motors; that $20 savings is 20% of the bill. Cutting my electricity bills 20% seems like a pretty good deal. And next summer, with no servers in the house, I'll be able to run less air conditioning, and the A/C won't have to compete with the heat coming off the server rack.

Now I've just got to figure out how to migrate Weather Now...

Windows Azure deployment credentials

My latest entry is up on the 10th Magnitude tech blog:

We've taken a little more time than we'd hoped to figure out how to deal with Azure deployment credentials and profiles properly. In an effort to save other development teams some of our pain, we present our solution. First, the general principle: Publication profiles are unique to each developer, so each developer should have her own management certificate, uploaded by hand to each relevant subscription.

When you deploy a project to a Windows Azure Cloud Service instance, you have to authenticate against the Azure subscription using a management certificate. The Publish Windows Azure Application wizard in Visual Studio presents you with a helpful link to sign in to your Azure subscription and download credentials. If you do this every time you publish to a new subscription, you (a) rapidly run up against the 10-certificate limit in Azure; and (b) get ridiculous credential files called things like "WorkSubscription1-AzDem12345-JoesSubscription-MySecretProjectThatMyBossDoesntKnowAboutSubscription.publishsettings" which, if you're not paying attention, soon shows up on a Subversion commit report (and gives your boss access to that personal project you forgot to mention to her).

Don't do that. Instead, do this:

1. Create a self-signed certificate using IIS. Name it something clear and unique; I used "david.10thmagnitude.com," for instance.
Image of creating a self-signed certificate
Then export it to a private folder.
Image of exporting a certificate from IIS to a folder

2. Import the .pfx file into your local certificate store.
Image of importing a private key

3. Export the same certificate as a .cer file.
Image of exporting a cer file

4. Go to the Azure management portal's management certificate list.

5. Upload the certificate you just created to the subscriptions to which you want to publish cloud services.
 Image of uploading a cer file

Now you have a single certificate for all your subscriptions. Next, create a publishing profile with the certificate:

6. In your Azure cloud service project, right-click the project node and choose "Publish…" to bring up the Publish Windows Azure Application wizard.

7. Drop down the "Choose your subscription" list and click "<Manage...>".

8. Click "new".

9. In the "Create or select..." drop down, find the certificate you just created and choose it.

10. Continue setting up your publishing profile as you've done before.

That's it. Except for one other thing.

If you have more than 0 developers working on a project, at some point you'll use source control. Regardless of whether you use Subversion, Mercurial, or something else, you need to avoid committing keys, certificates, and publishing profiles to your VCS. Make sure your VCS ignores the following extensions: *.pfx, *.cer, *.publishsettings, and *.azurePubxml.

You want to ignore pfx and publishsettings files because they contain secrets. (I hope everyone knows this already. Yes, pfx files use passwords, but publishsettings don't; and anyway, why would you want to risk anyone else authenticating as you without your knowledge?) Ignore cer files because they're not necessary in an Azure project. And ignore azurePubxml files because every developer who publishes to Azure will wind up overwriting the files, or creating new ones that no one else uses.
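
In Mercurial, for example, the relevant chunk of an .hgignore (glob syntax) would look something like this; Subversion and Git have equivalent ignore mechanisms:

    syntax: glob
    *.pfx
    *.cer
    *.publishsettings
    *.azurePubxml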

How the Cloud helps people sleep

Last night, around 11:30pm, the power went out in my apartment building and the ones on either side. I know this because the five UPS units around my place all started screaming immediately. There are enough of them to give me about 10 minutes to cleanly shut down the servers, which I did, but not before texting the local power company to report it. They had it on again at 1:15am, just after I'd fallen asleep. I finally got to bed around 2 after bringing all the servers back online, rebooting my desktop computer, and checking to make sure no disk drives died horribly in the outage.

But unlike the last time I lost power, this time I did not lose email, issue tracking, this blog, everyone else's site I'm hosting, or the bulk of my active source control repositories. That's because they're all in the cloud now. (I'm still setting up Mercurial repositories on my Azure VM, but I had moved all of the really important ones to Mercurial earlier in the evening.)

So, really, only Weather Now remains in the Inner Drive Technology Worldwide Data Center, and after last night's events, I am even more keen to get it up to the Azure VM. Then, with only some routers and my domain controller running on a UPS that can go four hours with that load, a power outage will have less chance of waking me up in the middle of the night.

Azure Web Sites adds a middle option

My latest 10th Magnitude blog post is up, in which I dig into Microsoft's changes to Azure Web Sites announced Monday. The biggest change is that you can now point your own domain names at Azure Web Sites, which solves a critical failing that has dogged the product since its June release.

Since this Daily Parker post was embargoed for a day while my 10th Magnitude post got cleared with management, I've played with the new Shared tier some more. I've come to a couple of conclusions:

  • It might work for a site like Inner Drive's brochure, except for the administrative tools lurking on the site that need SSL. Azure Web Sites still have no way to configure secure (https://) access.
  • They still don't expose the Azure role instance to .NET applications, making it difficult to use tools like the Inner Drive Extensible Architecture™ to access Azure table storage. The IDEA™ checks to see whether the Azure role instance exists (using RoleEnvironment.IsAvailable) before attempting to access Azure-specific things like tables and blobs; see the sketch after this list.
  • The cost savings isn't exactly staggering. A "very small" Web Role instance costs about $15 per month. A Shared-level Web Site costs about $10. So moving to a Shared Web Site won't actually save much money.
  • Deployments to Web Sites, however, are a lot easier. You can make a change and upload it in seconds. Publishing to a Web Role takes about 15 minutes in the best circumstances. Also, since Web Sites expose FTP endpoints, you can even publish sites using Beyond Compare or your favorite FTP client.
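
For the curious, that check is nothing exotic. Here's a stripped-down sketch of the pattern (not the actual IDEA™ code); the same RoleEnvironment.IsAvailable gate goes in front of anything that touches tables or blobs:

    using System.Configuration;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class AzureSettings
    {
        // Read a setting from the role environment when it exists (Web/Worker Roles),
        // and fall back to ordinary app settings when it doesn't (Web Sites, local IIS).
        public static string Get(string name)
        {
            if (RoleEnvironment.IsAvailable)
            {
                return RoleEnvironment.GetConfigurationSettingValue(name);
            }

            return ConfigurationManager.AppSettings[name];
        }
    }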

I did upgrade one old site from Free to Shared to move its domain name off my VM. (The VM hosted a simple page that redirected users to the site's azurewebsites.net address.) I'll also be moving Hired Wrist in the next few days, as the overhead of running it on a VM doesn't make sense to me.

In other news, I've decided to go with Mercurial for source control. I'm sad to give up the tight integration with Visual Studio, but happy to gain DVCS capabilities and an awesomely simple way of ensuring that my source code stays under my control. I did look at Fog Creek's Kiln, but for one person who's comfortable mucking about inside a VM, it didn't seem worth the cost ($299).

Chicago's digital infrastructure

Crain's Chicago Business yesterday ran the first part in a series about How Chicago became one of the nation's most digital cities. Did you know we have the largest datacenter in the world here? True:

Inside the former R.R. Donnelley & Sons Co. printing plant on East Cermak Road, next to McCormick Place, is the world's largest, most-connected Internet data center, according to industry website Data Center Knowledge. It's where more than 200 carriers connect their networks to the rest of the world, home to many big Internet service providers and where the world's major financial exchanges connect to one another and to trading desks. "It's where the Internet happens," Cleversafe's Mr. Gladwin says.

Apparently Chicago also hosts the fifth-largest datacenter in the world, Microsoft's North Central Azure hub in Northlake. (Microsoft's Azure centers are the 5th-, 6th-, 9th-, and 10th-largest in the world, according to Data Center Knowledge.) And then there's Chicago's excellent fiber:

If all of the publicly available fiber coming in and out of the Chicago area were bundled together, it would be able to transmit about 8 terabits per second, according to Washington-based research firm TeleGeography. (A terabit per second is the equivalent of every person on the planet sending a Twitter message per second.)

New York would be capable of 12.3 terabits, and Washington 11.2 terabits. Los Angeles and San Francisco are close behind Chicago at 7.9 and 7.8 terabits, respectively. New York is the primary gateway to Europe, and Washington is the control center of the world's largest military and one of the main connection points of the Internet.

Chicago benefits from its midcontinent location and the presence of the financial markets. "The fiber optic lines that go from New York and New Jersey to Chicago are second to none," says Terrence Duffy, executive chairman of CME Group Inc., who says he carefully considered the city's infrastructure when the futures and commodities exchange contemplated moving its headquarters out of state last year because of tax issues. "It benefits us to be located where we're at."

Now, if I can just get good fiber to my house...

The Azure migration hits a snag with source control

Remember how I've spent the last three months moving stuff into the Cloud? And how, as of three weeks ago, I only had two more services to move? I saved the best for last, and I don't know for sure now whether I can move them both without some major changes.

Let me explain the economics of this endeavor, and why it's now more urgent that I finish the migration. And then, as a bonus, I'll whinge a bit about why one of the services might have to go away completely.

I currently have a DSL line and a 20-amp power line going into my little datacenter. The DSL ostensibly costs $50 per month, but really it's $150 per month because it comes as an adjunct to my landline. I don't need a landline, and haven't for years; I've only kept it because getting DSL without a landline would cost—you guessed it—$150 per month. The datacenter has six computers in it, two of which are now indefinitely offline thanks to the previous migrations to Azure. Each server uses between $10 and $20 of electricity per month. Turning two off in July cut my electricity use by about $30. Of the four remaining servers, I need to keep two of them on, but fortunately those two (a domain controller and a network-attached storage, or NAS, box) are the most efficient; the two hogs, using $40 of electricity every month, are my Web and database servers. I get to turn them off as soon as the last two services get migrated.

So we're already up to $190 per month that goes away once I finish these migrations, down from $220-230 per month three months ago (or $280-300 in the summer, when I have to run A/C all the time to keep it cool). I've already brought Azure services online, including a small virtual machine, and I signed up for Outlook Online, too. Together, my Azure and Office 365 services, once everything is moved, should cost about $120-130 per month, which stays exactly the same during the summer, because Microsoft provides the air conditioning.

The new urgency comes from my free 90-day Azure trial expiring last week. Until then, all my Azure services have been free. Now, I'm paying for them. The faster I finish this migration, the faster I get to save over $100 per month ($180 in the summer!) in IT expenses—and have much more reliable services that don't collapse every time AT&T or Commonwealth Edison has a hiccup in my neighborhood.

Today, in the home stretch with only Vault and Weather Now left to move, it turns out I might have to give up on Vault completely. SourceGear Vault requires integration between the Web and database servers that's only possible if they're running on the same virtual network or virtual machine.

I want to keep using Vault because it has my entire source control history on it. This includes all the changes to all the software I've written since 1998, and in fact, some of the only copies of programs I wrote back then. I don't want to lose any of this data.

Unfortunately, Vault's architecture leaves me with only three realistic options if I want to keep using it:

  • Keep the Web and database servers running and keep the DSL up, obviating the whole migration effort;
  • Move the database and Web services to the domain controller, allowing me to turn the servers off, which still leaves me with a $155 per month DSL and landline bill (and puts a domain controller on the Internet!); or
  • Upgrade my Azure VM to Medium, doubling its cost (i.e., about $60 more per month), then install SQL Server and Vault on it.

None of these options really works for me. The third is the least worst, at least from a cost perspective, and also puts a naked SQL Server on the Internet. With, oh yeah, my entire source control history on it.

So suddenly, I'm considering a totally radical option, which solves the cost and access problems at the expense of convenient access to my source history: switch to a new source control system. I say "convenient access" because even after this migration, I have no plans to throw away the servers or delete any databases. Plus, it turns out there are tools available to migrate out of Vault. I'll evaluate a few options over the next two weeks, and then do what I can to migrate before the end of September.

Not to mention, it looks like SourceGear may be re-evaluating Vault (as evidenced by a developer blog that hasn't changed in over a year), possibly for many of these reasons. Vault was developed as a replacement for the "source destruction system" Microsoft Visual SourceSafe, and achieved that mandate admirably. But with the incredible drop in cloud computing prices over the past two years, it may have lived long enough already.

As for the final service to migrate, Weather Now: I know how to move it, I just haven't forced myself to do it yet.

Moving FogBugz to Azure

I should really learn to estimate networking and migration tasks better. The last time I upgraded my FogBugz instance on my local web server, it took about 20 minutes. This led me to estimate the time to migrate it to a Microsoft Azure Virtual Machine at 2 hours.

Well, 2½ hours later, I'm a little frustrated, but possibly closer to getting this accomplished.

The point of a virtual machine, of course, is that it should appear the same as any other machine anywhere. But using an Azure VM means either using an Azure SQL Database or installing SQL Server right on the VM. Obviously you'd want to do the former, unless you really like pain. Unfortunately, FogBugz doesn't make it easy to do this, and in fact puts up roadblocks you'll need to get around.

Here, then, are the steps I went through trying to get FogBugz moved to an Azure VM:

0. First, before anything else, I copied my FogBugz database in its entirety up to a new Azure SQL Database using the incredibly useful SQL Azure Migration Wizard tool.

1. Ran the FogBugz installer on the VM. It didn't accept my database connection because Azure SQL Database doesn't have the xp_msver extended stored procedure that FogBugz uses to figure out which version of SQL Server it's talking to.

2. Checked Fog Creek Software's support forum. They don't support FogBugz on Azure SQL Databases.

3. Attempted to create the xp_msver stored procedure on the SQL Azure master database; permission denied.

4. Installed SQL Server 2008 Express on the VM.

5. Re-ran the FogBugz installer. It couldn't connect to the IIS configuration file, and therefore thought I didn't have IIS on the machine.

6. Re-ran the FogBugz installer, this time just extracting the files to a folder under the VM's Web root.

7. Set up a new application in IIS pointing to the FogBugz folder.

That put me in the weeds, because the application has no configuration settings available. Opening the app in a browser gives me the error message: "The FogBugz database is down or could not be found." It further suggests that I need to change the registry entry HKEY_LOCAL_MACHINE\SOFTWARE\Fog Creek Software\FogBugz\{application folder}\sConnectionString. So I did, and I got the same error.

The reason is that on 64-bit servers, the FogBugz configuration keys aren't directly under HKLM\SOFTWARE; they're under HKLM\SOFTWARE\Wow6432Node. I figured this out because, remember, I have a running FogBugz installation, and I was able to search the server's registry directly.
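
If you'd rather hunt for those keys in code than in regedit, the 32-bit registry view finds them on either kind of server, because keys written by a 32-bit installer get redirected to Wow6432Node on 64-bit Windows. A quick sketch of my own (the key path comes from the error message above; the output is just the connection string per FogBugz site):

    using System;
    using Microsoft.Win32;

    class FindFogBugzKeys
    {
        static void Main()
        {
            // Open the 32-bit view of HKLM; on 64-bit Windows this maps to
            // HKLM\SOFTWARE\Wow6432Node, on 32-bit Windows to HKLM\SOFTWARE.
            using (var hklm32 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32))
            using (var fogbugz = hklm32.OpenSubKey(@"SOFTWARE\Fog Creek Software\FogBugz"))
            {
                if (fogbugz == null)
                {
                    Console.WriteLine("No FogBugz keys found.");
                    return;
                }

                foreach (var site in fogbugz.GetSubKeyNames())
                {
                    using (var siteKey = fogbugz.OpenSubKey(site))
                    {
                        Console.WriteLine("{0}: {1}", site, siteKey.GetValue("sConnectionString"));
                    }
                }
            }
        }
    }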

All right, moving on:

8. Copied my existing server's FogBugz registry keys (from the right registry folder) to the VM.

Nope. FogBugz still gave me the same error. I also copied the connection string to the registry key FogBugz claimed it was looking in, with the same result.

OK, I'm now going to uninstall everything and reboot the VM, then try again to install FogBugz pointing to SQL Express. Back in a flash...

(20 minutes later...)

OK, FogBugz installed cleanly, but at the moment it's pointing to the local SQL Express database. So: change the connection string to SQL Azure...and bam! It works.
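
For anyone following along at home, the value that goes into sConnectionString is an ordinary ADO.NET connection string pointing at the Azure SQL server. It's shaped roughly like this, with placeholder names (the real server, user, and password come from the Azure portal):

    // Placeholder names for illustration only.
    var sConnectionString =
        "Server=tcp:yourserver.database.windows.net,1433;" +
        "Database=FogBugz;" +
        "User ID=youruser@yourserver;" +
        "Password=...;" +
        "Encrypt=True;";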

Excellent. Only two applications left to move...

One more site and two stubs moved to the Cloud

The title says it all. I've moved Hired Wrist, my dad's brochure site, up to my Azure VM, leaving only Weather Now, plus my bug tracking and source control applications, in my living room, the Inner Drive Technology Worldwide Data Center.

I'll move the two third-party apps next weekend. My experience moving Hired Wrist this morning suggests that moving Weather Now will be, as we say, "non-trivial" (i.e., bloody hard).