Nostalgia: Airspace Edition. The end of the road for VORs

The FAA is in the process of redesigning the Class B airspace around SFO airport, and it signals an interesting shift in air navigation: the requirement that everyone in the airspace be able to navigate by means of GPS.

They are undertaking the redesign primarily to make flying around SFO quieter and more fuel efficient. The new shape will allow steeper descents at or near “flight idle” — meaning the planes can just sort of glide in, burning less gas and making less noise. As a side benefit, they will be able to raise the bottom of the airspace in certain places so that it is easier for aircraft not going to SFO to operate underneath.

As far as I’m concerned, that’s all good, but I noticed something interesting about the new and old design. Here’s the old design:

This picture, or one like it, will be familiar to most pilots. It’s a bunch of concentric circles with lines radiating out from the center, dividing it into sectored rings. The numbers represent the top and bottom of each section, in hundreds of feet. This is the classic “inverted wedding cake” of a Class B airspace. In 3D, it looks something like this, but more complicated.

This design was based around the VOR, a radio navigation system that can tell you which azimuth (radial) you are on relative to a fixed station, such as the VOR transmitter on the field at SFO. A second system called DME, usually co-located with the VOR, tells you your distance from the station. Together, they pin down your exact position, but because of this “polar coordinate” way of knowing where you are, designs intended to be flown by VOR+DME tend to be made of slices and sectors of circles.
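
To make the “polar coordinates” point concrete, here’s a rough sketch (in Python, with a made-up function name, and ignoring details like slant range and magnetic variation) of how a radial plus a DME distance becomes a position relative to the station:

    import math

    def fix_from_vor_dme(radial_deg, dme_nm):
        # Radials are measured clockwise from north, so sin gives the
        # east component and cos gives the north component.
        theta = math.radians(radial_deg)
        east_nm = dme_nm * math.sin(theta)
        north_nm = dme_nm * math.cos(theta)
        return east_nm, north_nm

    # On the 090 radial at 10 DME, you are 10 NM due east of the station.
    print(fix_from_vor_dme(90, 10))

Fly a constant radial and you trace a straight line away from the station; hold a constant DME distance and you trace a circle around it. Arcs and radials are the easy shapes to fly, so that’s what the old boundaries are made of.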

The new proposed design does away with this entirely.

Basically, they just drew lines any which way, wherever it made sense. This map is almost un-navigable by VOR and DME: it takes a lot of knob twisting and fiddling to establish your position relative to a boundary that isn’t an arc or a radial. This map is intended for aircraft with GPS.

All of this is well and good, I guess. GPS has been ubiquitous in every phone, every iPad and every pilot’s flight bag for a long time.

I learned to fly in a transitional era, when GPS existed, but the aircraft mostly had 2 VOR receivers and a DME. My flight instructor would never have let me use a GPS as a means of primary navigation. Sure, for help, but I needed to be able to steer the plane without it, because the only “legal” navigation systems in the plane were the VORs. I still feel a bit guilty when I just punch up “direct to” in my GPS and follow the purple line. It feels like cheating.

But it’s not, I guess. Time marches on. Today, new aircraft all have built-in GPS, but a lot of older ones don’t. And if they’re going to fly under the SFO Class B airspace, they’re going to need to use one of those iPads to know where they are relative to those airspace boundaries. And strictly speaking, they probably should get a panel-mounted GPS as well.

Progress, headphones edition

It looks like Intel is joining the bandwagon of people who want to take away the analog 3.5mm “headphone jack” and replace it with USB-C. This is on the heels of Apple announcing that this is definitely happening, whether you like it or not.

obsolete technology

There are a lot of good rants out there already, so I don’t think I can really add much, but I just want to say that this does sadden me. It’s not about analog v. digital per se, but about simple v. complex and open v. closed.

The headphone jack is a model of simplicity. Two signals and a ground. You can hack it. You can use it for other purposes besides audio. You can get a “guzinta” or “guzoutta” adapter to match pretty much anything in the universe, old or new — and if you can’t get it, you can make it. Also, it sounds Just Fine.

Now, I’m not just being anti-change. Before the 1/8″ stereo jack, we had the 1/4″ stereo jack. And before that we had mono jacks, and before that, strings and cans. And all those changes have been good. And maybe this change will be good, too.

But this transition will cost us something. For one, it just won’t work as well. USB is actually a fiendishly complex specification, and you can bet there will be bugs. Prepare for hangs, hiccups, and snits. And of course, none of the traditional problems with headphones are eliminated: loose connectors, dodgy wires, etc. On top of this, there will be, sure as the sun rises, digital rights management, and multiple attempts to control how and when you listen to music. Prepare to find headphones that only work with certain brands of players and vice versa. (Apple already requires all manufacturers of devices that want to interface digitally with the iThings to buy and use a special encryption chip from Apple — under license, natch.)

And for nerd/makers, who just want to connect their hoozyjigger to their whatsamaducky, well, it could be the end of the line entirely. For the time being, while everyone has analog headphones, there will be people selling USB-C audio converter thingies — a clunky, additional lump between devices. But as “all digital” headphones become more ubiquitous, those adapters will likely disappear, too.

Of course, we’ll always be able to crack open a pair of cheap headphones and steal the signal from the speakers themselves … until the neural interfaces arrive, that is.

EDIT: 4/28 8:41pm: Actually, the USB-C spec does allow analog on some of the pins as a “side-band” signal. Not sure how much uptake we’ll see of that particular mode.


Power corrupts, software ecosystems edition

I’ve written a lot of software in my day, but it turns out little of it has been for the use of anyone but myself and others in the organizations for which I’ve worked.

Recently, my job afforded me a small taste of what it’s like to publish software. I wrote a small utility, an extension to a popular web browser to solve a problem with a popular web application. The extension was basically a “monkey patch” of sorts over the app in question.

Now, in order to get it up on the “web store”, there were some hoops to jump through, including submission to the company for final approval. In the process, I signed up for the mailing list that programmers use to help them deal with hiccups in the publishing process.

As it turned out, my hoop-jumping wasn’t too hard. Because I was only publishing to my own organization, this extension ultimately did not need to be reviewed by the Great And Powerful Company. But I’ve continued to follow the mailing list, because it has turned out to be fascinating.

Every day, a developer or two or three, who has toiled for months or years to create something they think is worthwhile, sends a despondent message to the list: “Please help! My app has been rejected and I don’t know why!” Various other afflicted souls try to offer advice about things they tried that worked or didn’t.

And the thing is, in many cases, nobody can help them. Because the decision was made without consultation with the developer. The developer has no access to the decision-maker whatsoever. No email, no phone call, no explanation. Was the app rejected because it violated the terms and conditions? Which ones?

The developers will have to guess, make some changes, and try their luck again. It’s got to be an infuriating process, and a real experience of powerlessness.

This is how software is distributed today — through a small number of authorities: Google, Apple, Amazon, etc. If you want to play, you do it by their rules. Even on PCs and Macs there seems to be a strong push to move software distribution onto the companies’ web stores. So far, it is still possible to put a random executable on your PC and run it — at least after clicking through a series of warnings. But will that be the case forever? We shall see.

The big companies have some good reasons for doing this. They can

  • assert quality control (of a sort; NB: plenty of crap apps in curated stores anyway)
  • help screen for security problems
  • screen out malware.

But they also have bad (that is, consumer-unfriendly) reasons to do this, like

  • monetizing other people’s work to a degree only possible in a monopoly situation
  • keeping out products that compete with their own
  • blocking apps that circumvent company-imposed limitations (like blocking frameworks and interpreters that might allow people to develop for the framework rather than the specific target OS)

All of those reasons are on top of the inevitable friction associated with dealing with a large, careless, monolithic organization and its bureaucrats, who might find it easier to reject your puny app than to take the time to understand what it’s doing and why it has merit.

Most sad to me is that the amazing freedom that computing used to have is being restricted. Most users would never use that freedom, but it was nice to have. If you could find a program, you could run it. If you could not find it, you could write it. And that is being chipped away at.


Back when computers were computers

Trying to learn new tricks, this old dog dove into AWS Lambda today. Distributed, cloud-based programming is really something to wrap your head around if you grew up working on “a computer,” where most if not all the moving parts you needed to make your program run were on that computer. Well, that’s not how the kids are doing it these days.

So Lambda is cool because you only have to write functions; everything about being a computer (setting it up, maintaining it, and provisioning more as needed) is abstracted away and handled automatically. But that does come at a price: your functions need to be completely, 100% stateless. Hmm, all that fancy automatic provisioning and teardown seems a bit less impressive in that context.

To tie all those functions into what we used to call “a program” you still need state, and you get that from the usual suspects: S3, a database of your choosing, Dropbox, whatever.

Which makes it seem, to me, like all that provisioning and scaling mumbo jumbo is not handled automagically at all. You still need to think about your database: how many copies you need to handle X transactions per second, and so on. Except now you’re using a database where you might have merely declared a variable, which would be stored in, you know, memory for as long as the program was running.
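
To make that concrete, here’s a minimal sketch of the sort of thing I mean (Python, using DynamoDB as the external store; the table name and key are made up for illustration). Even a trivial “count the hits” function can’t keep its counter in a variable, because you can’t count on anything in memory surviving from one invocation to the next, so the “variable” becomes a database item and every call pays a network round trip:

    import boto3

    # Hypothetical table; on "a computer" this would just be a variable in memory.
    table = boto3.resource("dynamodb").Table("hit-counter-table")

    def handler(event, context):
        # The handler itself is stateless: any state it needs has to be
        # read from and written back to an external store on every call.
        resp = table.update_item(
            Key={"id": "global"},
            UpdateExpression="ADD hits :one",
            ExpressionAttributeValues={":one": 1},
            ReturnValues="UPDATED_NEW",
        )
        return {"count": int(resp["Attributes"]["hits"])}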

So now everything is way more complex and about a metric gajillion times slower.

But this is the cost, I guess, of the future, where every application is designed to support 100 million users. I think it makes great sense for applications where those 100 million users’ data interact with each other. But for old-skool apps, where you’re working with your own data most of the time (i.e., editing a doc), I’d much rather write code for “a computer.”