Technical Support that isn’t

I’ve been doing a little progr^H^H^H^H^Hsoftware engineering lately, and with it, I’ve been interacting with libraries and APIs from third parties. Using APIs can be fun, because it lets your simple program leverage somebody else’s clever work. On the other hand, I really hate learning complex APIs because the knowledge is a) too often hard-won through extended suffering and b) utterly disposable. You will not be able to use what you’ve learned next week, much less, next year.

So, when I’m learning a new API, I read the docs, but I admit to trying to avoid reading them too closely or completely, and I try not to commit any of it to memory. I’m after the bare minimum to get my job done.

That said, I sometimes get stuck and have to look closer, and a few times recently, I’ve even pulled the trigger on the last resort: writing for support. Here’s the thing: I do this when I have read the docs and am seriously stuck and/or strongly suspect that the API is broken in some way.

As it happens, I have several years’ experience as a corporate and field applications engineer (that’s old-skool Silicon Valley speak for person-who-helps-make-it-work), so I like to think I know how to approach support folks; I know how I would like to be approached.

I always present them with a single question, focused down to the most basic elements, preferably in a form that they can use to reproduce the issue themselves.

But three out of three times in the past month when I’ve done this (NB: a lot of web APIs are, in fact, quite broken), I have run into a support person who replied before considering my example problems or even simply running them. Most annoying, they sent me the dreaded “Have you looked at the documentation?”

All of those are support sins, but I find the last the most galling. For those of you whose job it is to help others use your product, let me make this very humble suggestion: always, always, ALWAYS point the user to the precise bit of documentation that answers the question asked. (Oh, and if your docs website does not allow that, it is broken, too. Fix it.)

This has three handy benefits:

  1. It helps the user with their problem immediately. Who woodanodeit?
  2. It chastens users who are chastenable (like me), by pointing out how silly and lazy they were to write for help before carefully examining the docs.
  3. It forces you, the support engineer, to look at the documentation yourself, with a particular question in mind, and gives you the opportunity to consider whether the docs could answer this question better.


On the other hand, sending a friendly email suggesting that the user “look at the documentation” makes you look like an ass.


Culture of Jankery

The New York Times has a new article on Nest, describing how a software glitch allowed units to discharge completely and become non-functional. We’re all used to semi-functional gadgets and Internet services, but when it comes to thermostats we expect a higher standard of performance. After all, when thermostats go wrong, people can get sick and pipes can freeze. Or take a look at the problems of Nest Protect, a famously buggy smoke detector. Nest is an important case, because these are supposed to be the closest thing we have to grownups in IoT right now!

Having worked for an Internet of Things company, I have more than a little sympathy for Nest. It’s hard to make reliable connected things. In fact, it might be impossible — at least using today’s prevailing techniques and tools, and subject to today’s prevailing expectations for features, development cost, and time to market.

First, it should go without saying that a connected thermostat is millions or even billions of times as complex as the old, bimetallic strips that it is often replacing. You are literally replacing a single moving part that doesn’t even wear out with a complex arrangement of chips, sensors, batteries, and relays, and then you are layering on software: an operating system, communications protocols, encryption, a user interface, etc. Possibility that this witch’s brew can be more reliable than a mechanical thermostat: approximately zero.

But there is also something else at work that lessens my sympathy: culture. IoT is the Internet tech world’s attempt to reach into physical devices. The results can be exciting, but we should stop for a moment to consider the culture of the Internet. This is the culture of “move fast and break things.” Are these the people you want building devices that have physical implications in your life?

My personal experience with Internet-based services is that they work most of the time. But they change on their own schedule. Features and APIs come and go. Sometimes your Internet connection goes out. Sometimes your device becomes unresponsive for no obvious reason, or needs to be rebooted. Sometimes websites go down for maintenance at an inconvenient time. Even when the app is working normally, the experience can vary. Sometimes it’s fast, sometimes slow. Keypresses disappear into the ether, etc.

My experience building Internet-based services is even more sobering. Your modern, complex web or mobile app is an agglomeration of sub-services, all interacting asynchronously through REST APIs behind the scenes. Sometimes those sub-services use other sub-services in their implementation, and you don’t even have a way of knowing which ones. Each of those links can fail for many reasons, and you must code very defensively to handle such failures gracefully. Or you can do what most apps do — punt. That’s fine for chat, but you’ll be sorely disappointed if your sprinkler kills your garden, or even if your alarm clock fails to wake you up before an important meeting.
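To make “coding defensively” concrete, here is a minimal sketch of what handling one flaky sub-service call might look like; the endpoint, timeout, and retry counts are invented for illustration:

```typescript
// A sketch of calling one flaky sub-service defensively: a bounded timeout,
// a couple of retries, and an explicit fallback value instead of letting the
// failure take down the whole app.
async function fetchWithRetry<T>(
  url: string,
  fallback: T,
  retries = 2,
  timeoutMs = 2000
): Promise<T> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, { signal: controller.signal });
      if (res.ok) return (await res.json()) as T;
      // non-2xx response: fall through and retry
    } catch (err) {
      // network error or timeout: retry
    } finally {
      clearTimeout(timer);
    }
  }
  return fallback; // degrade gracefully rather than crash
}

// Hypothetical usage: if the forecast service is down, assume rain and skip watering.
// const forecast = await fetchWithRetry("https://api.example.com/forecast", { rain: true });
```

Multiply that little dance by every sub-service call in the app and you can see why most apps punt.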

And let’s not even talk of security, a full-blown disaster in the making. I’ll let James Mickens cover that territory.

Anyhoo, I still hold hope for IoT, but success is far from certain.


The great unequalizer?

This post is sloppily going to try to tie together two threads that have been in my newsfeed for years.

The first thread is about rising inequality. It is the notion, as Thomas Piketty puts it, that r > g: returns to capital are greater than the growth rate of the economy, so ultimately wealth concentrates. I think there is some good evidence that wealth is concentrating now (stagnating middle-class wages for decades), but I am certainly not economist enough to judge. So let’s just take this as a supposition for the moment. (NB: Also, haven’t read the book.)

There are many proposed mechanisms for this concentration, but one of them is that the wealthy, with both more at stake and more resources to wield, access power more successfully than most people. That is, they adjust the rules of the game constantly in their favor. (Here’s a four-year-old study that showed that in the Chicago area, more than half of those with wealth over $7.5M had personally contacted their congressional representatives. Have you ever gotten your Senator on the line?)

The second thread is about what technology does or does not accomplish. Is tech the great equalizer, bringing increased utility to everyone by lowering the cost of everything? Or is its role more complex?

The other day in the comments of Mark Thoma’s blog, I came across an old monograph by EW Dijkstra that described some of the beliefs of early computer scientists:

I have fond memories of a project of the early 70’s that postulated that we did not need programs at all! All we needed was “intelligence amplification”. If they have been able to design something that could “amplify” at all, they have probably discovered it would amplify stupidity as well…

If you think of tech as an amplifier of sorts, then one can see that there is little reason to think that it always makes life better. An amplifier can amplify intelligence as well as stupidity, but it can also amplify greed and avarice, could it not?

Combining these threads, we get to my thesis: technology, rather than lifting all boats and making us richer, can be exploited by those who own it and control its development and deployment, disproportionately to their benefit, resulting in a permanent wealthy class. That is, though many techno-utopians see computers as a means to allow everyone a leisure-filled life, a la Star Trek, the reality is that there is no particular reason to think that the benefits of technology and computers will accrue to the population at large. Perhaps there are good reasons to think they won’t.

In short, if the wealthy can wield government to their benefit, won’t they similarly do so with tech?

There is plenty of evidence contrary to this thesis. Tech has given us many things we could never have afforded before, like near-infinite music collections, step-by-step vehicle navigation, and near-free and instant communications (including transmission of cat pictures). In that sense, we are indeed all much richer for it. And in the past, to the degree that tech eliminated jobs, it always seemed that new demand arose as a result, creating even more work. Steam engine, electrification, etc.

But recent decades do seem to show a different pattern, and I can’t help but see a lot of tech’s gifts as bric-a-brac, trivial compared to the basics of education and economic security, where it seems that so far, tech has contributed surprisingly little. Or maybe that should not be surprising?


Freedom from v. freedom to: aviation edition

The FAA is still in its rule-making process for drones (or as they call them, UAS — unmanned aircraft systems), but they have announced that all drone operators must register themselves and their aircraft online. There will be a $5 fee (waived for early registrants until 1/20/2016) and the registration lasts three years.

A quad-copter, quad-coptering.

Some drone enthusiasts and some libertarians are up in arms. “It’s just a new technology that they fear they can’t control!” “Our rights are being curtailed for no good reason!” “That database will be used against us, just wait and see!”

I have a few thoughts.

First, I have more than a smidgeon of sympathy for these views. We should always pause whenever the government decides it needs to intervene in some process. And to be frank, the barriers set by the FAA to traditional aviation are extremely high. So high that general aviation has never entered the mainstream of American culture, and given the shrinking pilot population, probably never will. The price to get in the air is so high in terms of training that few ever get there. As a consequence, the price of aircraft remains high, the technological improvement of aircraft remains slow, rinse, repeat.

In fact, I have often wondered what the world might be like if the FAA had been more lax about crashes and regulation. Perhaps we’d have skies filled with swarms of morning commuters, with frequent crashes which we accept as a fact of life. Or perhaps those large volumes of users would spur investment in automation and safety technologies that would mitigate the danger — at least after an initial period of carnage.

I think I would be upset if the rules were like those for general aviation. But in fact registration is pretty modest. I suspect that later, there will be some training and perhaps a knowledge test, which seems quite reasonable. As a user of the National Airspace System (both as a pilot and a passenger) I certainly appreciate not ramming into solid objects while airborne. Registration, of course, doesn’t magically separate aircraft, but it provides a means for accountability. Over time, I suspect rules will be developed to set expectations on behavior, so that all NAS users know what to expect in normal operations. Call it a necessary evil, or, to use a more traditional term, “governance.”

But there is one interesting angle here: the class of UAS being regulated (those weighing between 0.55 lb and 55 lb) has existed for a long time in the radio-controlled model community. What has changed to make drones “special,” requiring regulation now?

I think it is not the aircraft themselves, but the community of users. Traditional radio-controlled models were expensive to buy, took significant time to build, and were difficult to fly. The result was an enthusiast community which, by either natural demeanor or softly enforced community norms, seemed able to keep their model airplanes out of airspace used by manned aircraft.

Drones came along and that changed quickly. The drones are cheap and easy to fly, and more and different people are flying them. And they’re flying alone, not in clubs. The result has been one serious airspace incursion after another.

A lot of people seem to think that because drones aren’t fundamentally different technology from traditional RC hobby aircraft, no new rules are warranted. I don’t see the logic. It’s not about the machines; it’s about the situation.

Anyway, I think the future for drone ops is actually quite bright. There is precedent for a vibrant hobby along with reasonable controls. Amateur radio is one example. Yes, taking a multiple-choice test is a barrier to many, but perhaps a barrier worth having. Also, the amateur radio community seems to have developed its own immune system against violators of the culture and rules, which works out nicely, since the FCC (like the FAA) has limited capacity for enforcement. And it’s probably not coincidental that the FCC has never tried to build up a large enforcement capability.

Which brings me to my final point, which is that if the drone community is smart they will create a culture of their own and they will embrace and even suggest rules that allow their hobby to fruitfully coexist with traditional NAS users. The Academy of Model Aeronautics, a club of RC modelers, could perhaps grow to encompass the coming army of amateur drone users.


Simpler times, news edition

The other evening, I was relaxing in my special coffin filled with semiconductors salvaged from 1980s-era consumer electronics, thinking about how tired I am of hearing about a certain self-funded presidential candidate, or guns, or terrorism … and my mind wandered to simpler times. Not simpler times without fascists and an easily manipulated populace, but simpler times where you could more easily avoid pointless and dumb news while still getting normal news.


It wasn’t long ago that I read news, or at least “social media,” on Usenet, a system for posting on message boards that predates the web and even the Internet. My favorite “news reader” (software for reading Usenet) was called trn. I learned all its clever single-key commands and mastered a feature common to most serious news readers: the kill file.

Kill files are conceptually simple. They contain a list of rules, usually specified as regular expressions, that determine which posts on the message board you will see. Usenet was the wild west, and it always had a lot of garbage on it, so this was a useful feature. Someone is being rude, or making ridiculous, illogical arguments? <plonk> Into the kill file goes their name. Enough hearing about terrorism? <plonk> All such discussions disappear.
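If you’ve never seen one, the whole idea fits in a few lines of code. Here is a minimal sketch of kill-file-style filtering; it is illustrative only, not trn’s actual file format or matching logic:

```typescript
// Sketch of kill-file-style filtering: each rule is a regular expression
// matched against a post's author or subject; matching posts are hidden.
interface Post {
  author: string;
  subject: string;
  body: string;
}

const killRules: RegExp[] = [
  /^rude_person@example\.com$/i, // <plonk> a specific poster (made-up address)
  /terrorism/i,                  // <plonk> an entire topic
];

function visiblePosts(posts: Post[]): Post[] {
  return posts.filter(
    (p) => !killRules.some((rule) => rule.test(p.author) || rule.test(p.subject))
  );
}
```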

Serious users of Usenet maintained carefully curated kill files, and the result was generally a pleasurable reading experience.

Of course, technology moves on. Most people don’t use text-based news readers anymore, and Facebook is the de-facto replacement for Usenet. And in fact, Facebook is doing curation of our news feed – we just don’t know what it is they’re doing.

All of which brings me to musing about why Facebook doesn’t support kill files, or any sophisticated system for controlling the content you see. We live in more advanced times, so we should have more advanced software, right?

More advanced, almost certainly, but better for you? Maybe not. trn ran on your computer, and the authors (it’s open source) had no pecuniary interest in your behavior. Facebook, of course, is a media company, not a software company, and in any case, you are not the customer. The actual customers do not want you to have kill files, so you don’t.

Though I enjoy a good Facebook bash more than most people, I must also admit that Usenet died under a pile of its own garbage content. It was an open system and, after a gajillion automated spam posts, even aggressive kill files could not keep up. Most users abandoned it. Perhaps if there had been someone with a pecuniary interest in making it “work,” things would have been different. Also, if it had had better support for cat pictures.

Garbage can strikes again

When I was in policy school, we learned about something called the “Garbage Can Model of Organizational Choice,” which for some reason has stuck with me. I don’t want to boil it down too much, but in it, Cohen, March, and Olsen (and later, Kingdon) theorize that people are constantly coming up with “solutions” that more or less end up in a theoretical trash bin. Except nobody ever empties the trash. Instead, it lingers. At the same time, the random stochastic process known as life generates a constant stream of problems. Every once in a while, a “problem” comes along that fits a “solution” waiting in the trash can, and if there’s an actor who favors that solution and has been paying attention and waiting patiently, he trots it out and starts flogging it hard.

In light of the Paris attacks, we’ve been seeing this from the security establishment in a big way. They like tools that let them see and watch everything, and they do not like anything that gets in their way. So, for example, banning encryption that they cannot defeat is a solution that sits in the trash can perpetually. That’s why it’s unsurprising that the ex-CIA director is calling for Edward Snowden’s hanging or Dianne Feinstein and other senators are railing against Silicon Valley for offering its users strong encryption.

It’s all about having an established agenda and seizing an opportunity when it comes along. Politics as usual, move along, these droids are not particularly interesting.

But there is actually something a bit interesting going on here. The actual facts and circumstances right now do not support the panopticon theory of governance favored by intelligence and law & order types. The terrorists in this case did not use encryption. They sent each other SMS and other messages completely in the clear. If you look on the Internet, you will find article after article debunking the notion that controls on encryption would have made any difference in these attacks at all.

In fact, given the circumstances of this particular case, it looks like the intelligence agencies already had all the tools they needed to stop this attack. They just didn’t. This, if anything, should be the actual story of the day!

Okay, so this is perhaps also not interesting to the jaded news junkie. Maybe it’s a bit further down in the playbook, but we’ve all seen people who should be on the defensive go on the offensive in a big, loud way. But I still find it disturbing that the facts are not steering the debate at all. If you enjoy making fun of fact-free conservatives, then this is not the circus for you, either, as powerful Dems are behind this crap.

Various media outlets, even mainstream ones, are calling out the bullshit, but the bullshit continues.

Same as it ever was, or new, disturbing political discourse untethered to reality. You decide.

Oh, and just as an aside: you can’t stop the bad guys from using strong encryption. So what are you actually calling for?


Throwaway knowledge

I was recently tasked with writing a pretty useful extension for Chrome that would enhance Google Calendar so that room numbers on our campus would link to the campus’s internal map tool. Google Maps does not understand this campus, so the normal map link provided is not helpful.

This task took me the better part of a day and a half, and the whole time I was gripped by this awful feeling that in order to solve this simple problem, I had to learn a bunch of random stuff that would be mostly useless as soon as I was done.

For example, I needed to learn how Chrome extensions work. For that, I needed to familiarize myself with the Google Chrome API — at least enough to get this job done. I can say for sure that even in the small amount of time I spent on that task, I could tell that this API is not stable. They change it as often as they feel like, adding this, removing that, and most ominously “deprecating” some other things. Deprecation is the worst. It means “we’re not taking this function away today, but we are taking it away, probably when you least expect it.” And, in any case, how many Chrome extensions is the average programmer going to write?

Furthermore, getting into Google Calendar enough to insert my links was an exercise in pure disposable hackery. I had to reverse engineer enough of Calendar to figure out where to scan for locations and where to insert links. And I needed to understand how Calendar works well enough to know when it is going to change up the document model and set event handlers for that.
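For a sense of what that hackery looks like, here is a rough sketch of the fragile part (TypeScript that would compile down to a content script). The CSS selector, the room-number pattern, and the map URL are all invented, which is exactly the problem: everything depends on guesses about Calendar’s internals.

```typescript
// Content-script sketch: watch Calendar's DOM for event locations and wrap
// recognized room numbers in links to a (hypothetical) campus map tool.
const ROOM_PATTERN = /\b[A-Z]{2,4}-\d{3}\b/;      // made-up room-number format
const MAP_URL = "https://maps.example.edu/room/"; // made-up internal map tool

function linkifyLocations(root: ParentNode): void {
  // ".event-location" is a guess at Calendar's internal markup -- the part
  // Google is free to change whenever it pleases.
  root.querySelectorAll<HTMLElement>(".event-location").forEach((el) => {
    const match = el.textContent?.match(ROOM_PATTERN);
    if (match && !el.querySelector("a")) {
      const link = document.createElement("a");
      link.href = MAP_URL + encodeURIComponent(match[0]);
      link.textContent = match[0];
      el.textContent = "";
      el.appendChild(link);
    }
  });
}

// Calendar rewrites its document model constantly, so re-scan on every change.
new MutationObserver(() => linkifyLocations(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
```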

This script works today, but it’s obviously going to break, probably very soon, because Google can and will change the internal guts of Calendar whenever they please. Unlike with the extension API mentioned above, they are under no obligation, not even a twinge of guilt, to hold their internal implementation of Calendar to some fixed standard.

All this points to some rather sad facts about the future of coding for the web. You’ll absolutely have to work through the supported APIs, and when they change, that’s your problem, and if they are not offered or are incomplete, that’s also your problem, and if they go away, also your problem. Compare that, for example, to a piece of software on your PC. Maybe you wrote an ancient MSDOS .bat file that does something with the input and output of an old program you liked. If the company that made the program makes a new version that breaks your script, you could upgrade to it when you want to — or never, depending on what was convenient for you.

I’ve already been bitten by this. I made an alarm clock that interfaces with Google Calendar using a published API. It is a nice alarm clock with a mechanical chime handmade by hippies from Woodstock, NY. It worked just fine for years until, one day, it stopped working. Google had deprecated, and then dropped, the API the clock was using. There was a new API. I had to go back to my years-old code and rewrite it (to use a substantially more complicated API, by the way).

I guess that’s life, but people who make alarm clocks for a living may be surprised that users want them to provide active support and software upgrades forever. Or maybe we’ll just throw out our clocks after a year or so, like a phone.

But this is a mere annoyance compared to my main concern: unstable platforms discourage the development of knowledge and expertise. Why learn anything well if the entire API, or hell, the only supported programming language, or even the entire theoretical framework for working with the API (does anybody seriously think REST with PUT, GET, and POST is going to last forever?) is going to change next Wednesday? Perhaps that’s why in my interviews with Google nobody ever seemed to care one whit about any of my knowledge or experience.

I, for one, welcome the coming generations of ADD-afflicted software engineers and am excited to see the wagonloads of new “wheels” they’ll invent.

Yippee!

Back when computers were computers

Trying to learn new tricks, this old dog dove into AWS Lambda today. Distributed, cloud-based programming is really something to wrap your head around if you grew up working on “a computer,” where most if not all the moving parts you needed to make your program run were on that computer. Well, that’s not how the kids are doing it these days.

So Lambda is cool because you only have to write functions; everything about being a computer (setting it up, maintaining it, and provisioning more as needed) is abstracted away and handled automatically. But that does come at a price: your functions need to be completely and 100% stateless. Hmm, all that fancy automatic provisioning and teardown seems a bit less impressive in that context.

To tie all those functions into what we used to call “a program” you still need state, and you get that from the usual suspects: S3, a database of your choosing, Dropbox, whatever.
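Here’s a minimal sketch of what that ends up looking like, using DynamoDB as the external store (S3 or anything else would do). The table name, key, and counter are invented; the point is just that the state lives out on the network rather than in a variable:

```typescript
// Stateless Lambda handler sketch: the function is not allowed to remember
// anything between invocations, so even a simple counter lives in DynamoDB.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import {
  DynamoDBDocumentClient,
  GetCommand,
  PutCommand,
} from "@aws-sdk/lib-dynamodb";

const db = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "app-state"; // hypothetical table holding what used to be "a variable"

export const handler = async (event: { userId: string }) => {
  // What used to be `counter += 1` in memory is now two network round trips.
  const current = await db.send(
    new GetCommand({ TableName: TABLE, Key: { id: event.userId } })
  );
  const count = ((current.Item?.count as number | undefined) ?? 0) + 1;

  await db.send(
    new PutCommand({ TableName: TABLE, Item: { id: event.userId, count } })
  );

  return { statusCode: 200, body: JSON.stringify({ count }) };
};
```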

Which suggests to me that all that provisioning and scaling mumbo jumbo is not handled automagically at all. You still need to think about your database, how many copies you need to handle X transactions per second, etc. Except now you’re using a database where you might have merely declared a variable, which would be stored in, you know, memory as long as the program was running.

So now everything is way more complex and about a metric gajillion times slower.

But this is the cost, I guess, of the future, where every application is designed to support 100 million users. I think it makes great sense for applications where those 100 million users’ data is interacting with each other. But for old-skool apps, where you’re working with your own data most of the time (i.e., editing a doc), I’d much rather write code for “a computer.”


WSJ swipes at science

Rather interesting piece by Matt Ridley in the WSJ, making the case that spending on basic science is a waste. It’s definitely worth a read for its world-tipped-on-its-side-itude. [ Ridley, a Conservative member of the House of Lords, has some interesting views about many things, so a scan of his wikipedia page, linked above, is worthwhile if you’re going to read the article. ]

Though I will happily grant the author the point that the linear model of science → technology → economic growth is incorrect and simplistic, I don’t think that’ll be news to anyone who has ever spent a few minutes thinking about any of those things. Yes, technological advance is chaotic. Yes, innovation comes from many places, and the arrows are not always in the same direction.


But stating that the direction is not always from science to tech is a very far cry from proving that we can get away without science altogether.

He’s right, of course, that not all science leads to anything particularly valuable, and even when it does, it’s hard to know in advance what will and won’t. Sometimes hundreds of years can pass between a discovery and the moment society knows what to do with it.

In fact, it is for those very reasons and many more that it makes sense for governments to fund science.

The rest of the piece is, unfortunately, worse. I don’t have enough time to criticize all the arguments in the piece, but a few quick call-outs:

In 2007, the economist Leo Sveikauskas of the U.S. Bureau of Labor Statistics concluded that returns from many forms of publicly financed R&D are near zero and that “many elements of university and government research have very low returns, overwhelmingly contribute to economic growth only indirectly, if at all.”

You don’t say? Yeah, you can’t point to the monetary benefits of science because it does not directly generate monetary benefits. I wonder if that has anything to do with the fact that you can’t sell public knowledge? But you can use it to make things, and sell those. Or use it to direct your own research and make something of that. Whodathunk? Also, in the process, you get a bunch of educated people that private actors will hire to make things.

And, by the way, there are good reasons to finance science with public money. Here’s one:

Let’s say knowledge “A”, obtained at cost a’, can be combined by technology entrepreneurs P, Q, and R to generate wealth p’, q’, and r’ respectively. Without government funding of science, A only gets produced if some individual entrepreneur’s payoff exceeds a’, because in the private investment scenario, that investor has to recoup the cost alone. And even if p’ > a’, so that P funds A privately and keeps it to itself, we still won’t see any of q’ and r’. But with the public investment to get A, we get all of p’, q’, and r’. (For example, if a’ = 10 and p’ = 6, q’ = 5, and r’ = 4, no private actor will pay for A, yet society forgoes 15 of value that public funding would have bought for 10.) When you throw in the uncertainty of the value of A at the time it is being generated, it’s even harder for the private sector to justify. This has been known for a good while.

[ Aside: this and other interesting aspects of innovation are covered in great detail, by the way, in the late Suzanne Scotchmer’s well-thought-out book, Innovation and Incentives. ]

Ridley also has a weird theory that technology has become a living organism, desiring to and able to perpetuate itself. I don’t half understand what that means, but it’s a strange foundation for an argument that government science doesn’t matter:

Increasingly, technology is developing the kind of autonomy that hitherto characterized biological entities. The Stanford economist Brian Arthur argues that technology is self-organizing and can, in effect, reproduce and adapt to its environment. It thus qualifies as a living organism, at least in the sense that a coral reef is a living thing. Sure, it could not exist without animals (that is, people) to build and maintain it, but then that is true of a coral reef, too.

And who knows when this will no longer be true of technology, and it will build and maintain itself? To the science writer Kevin Kelly, the “technium”—his name for the evolving organism that our collective machinery comprises—is already “a very complex organism that often follows its own urges.” It “wants what every living system wants: to perpetuate itself.”

Even if this is true, why believe that the innovation we would get from purely technologically driven progress is the “best” innovation we can get, or even the innovation we want? Oh, that’s right, in the libertarian mindset, “we” doesn’t exist. So it’s a good thing if, say, industry sinks billions into fantastic facial moisturizer while cures for diseases that only affect the poor go unfunded.

Here’s another groaner:

To most people, the argument for public funding of science rests on a list of the discoveries made with public funds, from the Internet (defense science in the U.S.) to the Higgs boson (particle physics at CERN in Switzerland). But that is highly misleading. Given that government has funded science munificently from its huge tax take, it would be odd if it had not found out something. This tells us nothing about what would have been discovered by alternative funding arrangements.

And we can never know what discoveries were not made because government funding crowded out philanthropic and commercial funding, which might have had different priorities. In such an alternative world, it is highly unlikely that the great questions about life, the universe and the mind would have been neglected in favor of, say, how to clone rich people’s pets.

Ah, yes, the “counterfactual would have been better” argument. Of course, it comes with no particular theory or reason why private incentives would advance science, only the assertion that they would. Except, it turns out we do, in fact, have counterfactuals, because there are countries around the world and throughout history that prioritized science differently, along with the associated outcomes, and the answer is quite grim for the laissez-faire folks, I’m afraid.

The rest of the article trots out a bunch of examples of interesting and important technologies, such as the steam engine, that came into being more or less without the underlying science to back them up. But I can make a list, too. Wozniak and Jobs made a computer in their garage, and — bang! — there came the internet. Except they were already standing on the shoulders of giants, including boatloads of government-funded basic research (a lot of it defense-driven, yes) from which sprang semiconductors and the very notion of the electronic computer (Turing, von Neumann).

Or let’s take a look at radio. Sure, Marconi’s early tinkering with spark gap transmitters got some dit-dahs across the Atlantic without too much understanding, but even he was standing on Maxwell. And besides, modern digital communications would not be possible without the likes of Fourier, Shannon, Nyquist, and Hartley, all of whom were doing science. (Some in private labs, though.)

I’m no historian of science, so I hope to soon read blog posts from such people responding to this piece.

I’m unsettled by something else, though:

This is a full-throated, direct attack on government-funded science itself, printed in a mainstream publication.

It was not long ago that no serious political ideology in the US was broadly opposed to public science research. Sure, we’ve seen serious efforts to undermine science in certain areas: climate change, the dangers of pesticides, etc. But nobody has come straight out and said that government should get out of the science business entirely.

Should be interesting to see if this is the start of a new long-term strategy or just one man’s rant.


If programming languages were exes


[ Please excuse this ridiculous flight of fancy. This post occurred to me yesterday while I was hypoxically working my way up Claremont on a bike. ]

A common game among the nerderati is to compare favorite computer languages, talking trash about your friends’ favorites. But what if programming languages were ex-girlfriends (or -boyfriends)?

Perl 5

Perhaps not the most handsome ex, but probably the most easy-going. Perl was up for anything and didn’t care much what you did as long as it was fun. Definitely got you in trouble a few times. Did not get jealous if you spent time with other languages. Heck, Perl even encouraged it as long as you could all play together. Perl was no priss, and taught you about things that you shouldn’t even describe in polite company. The biggest problem with Perl is that nobody approved, and in the end, you dumped Perl because everyone told you that you had to grow up and move on to a Nice, Serious Language. But you do wonder what might have been…

Perl 6

Never actually went on a date. Stood you up many times.

Python

Trim and neat, Python really impressed you the first time you met. Python came with a lot of documentation, which was a breath of fresh air at first. However, the times when Python’s inflexibility proved annoying started to mount. After one PEP talk too many, you decided to move on. You still remember that one intimate moment when Python yelled out “you’re doing it wrong!” Relationship-ender. Mom was disappointed.

C++

C++ seemed to have it all. It knew just about everything there is to know about programming. If you heard of some new idea, the odds were that C++ had heard of it before you and incorporated it awhile back. You had many intellectual conversations about computer science with C++. Thing is, C++ seemed kind of rulesy, too, and it was hard to know what C++ really wanted from you. Most annoying, whenever you didn’t know what C++ wanted, it blamed you for not “getting” it. C++ also seemed to have a bit of a dark side. Sure, most of the time C++ could be elegant and structured, but more than once you came home to find C++ drunk and in bed with C, doing some truly nasty things.

C

C is not an ex. C is your grumpy grandpa/ma who gives zero f@#ks what the kids are doing today. C is the kind of computer language that keeps a hot rod in the garage, but crashes it every time it takes it out. It’s a wonder C is still alive, given its pastime of lighting M-80s while holding them between its fingers. Thing is, it’s actually pretty fun to hang out with C, someone who can tell good stories and get its hands dirty.

PHP

Looked a lot like Perl, just as promiscuous, but never said or did anything that made you think or laugh. Boring. Dumped.

Haskell

The weird kid in high school who sat alone and didn’t seem to mind being ostracized. Everything Haskell ever said in class was interesting, if cryptic. There was something attractive about Haskell, but you could never put your finger on it. In the end, you couldn’t imagine a life as such an outsider, so you never even got Haskell’s phone number.

Excel

Excel wore a tie starting in elementary school and was set on business school. Funny thing was that beneath that business exterior, Excel was a complete slob. Excel’s apartment was a pig sty. It was amazing anything ever worked at all. Pretty boring language in the end, though. Went on a few dates, but no chemistry.

Java

Man, in the 90’s everybody was telling you to date Java. This was the language you could finally settle down with. Good thing your instincts told you to dodge that bullet, or you’d be spending your retirement years with a laggy GUI for an internal app at a bank. Ick.

Javascript

You were never that impressed with Javascript, but you have to admit its career has taken off better than yours has. Seems Javascript is everywhere now, a celebrity really. Javascript has even found work on servers. At least Javascript is not hanging out with that ugly barnacle, jQuery, as much as it used to.