Technical Support that isn’t

I’ve been doing a little progr^H^H^H^H^Hsoftware engineering lately, and with it, I’ve been interacting with libraries and APIs from third parties. Using APIs can be fun, because it lets your simple program leverage somebody else’s clever work. On the other hand, I really hate learning complex APIs because the knowledge is a) too often hard-won through extended suffering and b) utterly disposable. You will not be able to use what you’ve learned next week, much less next year.

So, when I’m learning a new API, I read the docs, but I admit to trying to avoid reading them too closely or completely, and I try not to commit any of it to memory. I’m after the bare minimum to get my job done.

That said, I sometimes get stuck and have to look closer, and a few times recently, I’ve even pulled the trigger on the last resort: writing for support. Here’s the thing: I do this when I have read the docs and am seriously stuck and/or strongly suspect that the API is broken in some way.

As it happens, I have several years’ experience as a corporate and field applications engineer (that’s old-skool Silicon Valley speak for person-who-helps-make-it-work), so I like to think I know how to approach support folks; I know how I would like to be approached.

I always present them with a single question, focused down to the most basic elements, preferably in a form that they can use to reproduce the issue themselves.

But three out of three times when I’ve done this in the past month (NB: a lot of web APIs are, in fact, quite broken), I have run into a support person who replied without considering my example problems or even simply running them. Most annoying, they sent me the dreaded “Have you looked at the documentation?”

All of those are support sins, but I find the last the most galling. For those of you whose job it is to help others use your product, let me make this very humble suggestion: always, always, ALWAYS point the user to the precise bit of documentation that answers the question asked. (Oh, and if your docs website does not allow that, it is broken, too. Fix it.)

This has three handy benefits:

  1. It helps the user with their problem immediately. Who woodanodeit?
  2. It chastens users who are chastenable (like me), by pointing out how silly and lazy they were to write for help before carefully examining the docs.
  3. It forces you, the support engineer, to look at the documentation yourself, with a particular question in mind, and gives you the opportunity to consider whether the docs could answer that question better.


On the other hand, sending a friendly email suggesting that the user “look at the documentation” makes you look like an ass.


Culture of Jankery

The New York Times has a new article on Nest, describing how a software glitch allowed units’ batteries to discharge completely, leaving the thermostats non-functional. We’re all used to semi-functional gadgets and Internet services, but when it comes to thermostats we expect a higher standard of performance. After all, when thermostats go wrong, people can get sick and pipes can freeze. Or take a look at the problems of Nest Protect, a famously buggy smoke detector. Nest is an important case, because these are supposed to be the closest thing we have to grownups in IoT right now!

Having worked for an Internet of Things company, I have more than a little sympathy for Nest. It’s hard to make reliable connected things. In fact, it might be impossible — at least using today’s prevailing techniques and tools, and subject to today’s prevailing expectations for features, development cost, and time to market.

First, it should go without saying that a connected thermostat is millions or even billions of times as complex as the old bimetallic strip it often replaces. You are literally replacing a single moving part that doesn’t even wear out with a complex arrangement of chips, sensors, batteries, and relays, and then you are layering on software: an operating system, communications protocols, encryption, a user interface, etc. Possibility that this witch’s brew can be more reliable than a mechanical thermostat: approximately zero.

But there is also something else at work that lessens my sympathy: culture. IoT is the Internet tech world’s attempt to reach into physical devices. The results can be exciting, but we should stop for a moment to consider the culture of the Internet. This is the culture of “move fast and break things.” Are these the people you want building devices that have physical implications in your life?

My personal experience with Internet-based services is that they work most of the time. But they change on their own schedule. Features and APIs come and go. Sometimes your Internet connection goes out. Sometimes your device becomes unresponsive for no obvious reason, or needs to be rebooted. Sometimes websites go down for maintenance at an inconvenient time. Even when the app is working normally, experience can vary. Sometimes it’s fast, sometimes slow. Keypresses disappear into the ether, etc.

My experience building Internet-based services is even more sobering. Your modern, complex web or mobile app is made up of an agglomeration of sub-services, all interacting asynchronously through REST APIs behind the scenes. Sometimes those sub-services use other sub-services in their implementation, and you don’t even have a way of knowing which ones. Each of those links can fail for many reasons, and you must code very defensively to gracefully handle such failures. Or you can do what most apps do — punt. That’s fine for chat, but you’ll be sorely disappointed if your sprinkler kills your garden, or even if your alarm clock fails to wake you up before an important meeting.
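To make “code very defensively” concrete, here is a minimal sketch (in TypeScript) of the kind of wrapper you end up writing around every sub-service call: a timeout, a couple of retries with backoff, and an explicit fallback when everything fails. The endpoint and the fallback shape are invented for illustration.

    // Minimal sketch of defensive handling around a flaky sub-service call.
    // The URL and the fallback value are hypothetical.
    async function fetchWithRetry(url: string, retries = 2, timeoutMs = 2000): Promise<unknown> {
      for (let attempt = 0; ; attempt++) {
        const controller = new AbortController();
        const timer = setTimeout(() => controller.abort(), timeoutMs);
        try {
          const res = await fetch(url, { signal: controller.signal });
          if (!res.ok) throw new Error(`HTTP ${res.status}`);
          return await res.json();
        } catch (err) {
          if (attempt >= retries) throw err; // out of retries: let the caller decide
          await new Promise((r) => setTimeout(r, 250 * 2 ** attempt)); // simple backoff
        } finally {
          clearTimeout(timer);
        }
      }
    }

    // The caller still has to decide what "graceful" means; here, a stale default.
    async function currentTemperature(): Promise<number | null> {
      const data = (await fetchWithRetry("https://api.example.com/weather").catch(() => ({
        temperature: null,
      }))) as { temperature: number | null };
      return data.temperature;
    }

And that is just one link in the chain; every other sub-service call needs the same treatment, or it becomes the failure you never handled.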

And let’s not even talk of security, a full-blown disaster in the making. I’ll let James Mickens cover that territory.

Anyhoo, I still hold hope for IoT, but success is far from certain.


The great unequalizer?

This post is sloppily going to try to tie together two threads that have been in my newsfeed for years.

The first thread is about rising inequality. It is the notion, as Thomas Piketty puts it, that r > g, or that returns to capital are greater than the growth rate of the economy, so that ultimately wealth concentrates. I think there is some good evidence that wealth is concentrating now (stagnating middle-class wages for decades), but I am certainly not economist enough to judge. So let’s just take this as a supposition for the moment. (NB: Also, haven’t read the book.)

There are many proposed mechanisms for this concentration, but one of them is that the wealthy, with both more at stake and more resources to wield, access power more successfully than most people. That is, they adjust the rules of the game constantly in their favor. (Here’s a four-year-old study that showed that in the Chicago area, more than half of those with wealth over $7.5M had personally contacted their congressional representatives. Have you ever gotten your Senator on the line?)

The second thread is about what technology does or does not accomplish. Is tech the great equalizer, bringing increased utility to everyone by lowering the cost of everything? Or is its role more complex?

The other day in the comments of Mark Thoma’s blog, I came across an old monograph by EW Dijkstra that described some of the beliefs of early computer scientists:

I have fond memories of a project of the early 70’s that postulated that we did not need programs at all! All we needed was “intelligence amplification”. If they have been able to design something that could “amplify” at all, they have probably discovered it would amplify stupidity as well…

If you think of tech as an amplifier of sorts, then one can see that there is little reason to think that it always makes life better. An amplifier can amplify intelligence as well as stupidity, but it can also amplify greed and avarice, could it not?

Combining these threads, we get to my thesis: that technology, rather than lifting all boats and making us richer, can be exploited by those who own it and control its development and deployment, disproportionately to their benefit, resulting in a permanent wealthy class. That is, though many techno-utopians see computers as a means to allow everyone a leisure-filled life, a la Star Trek, the reality is that there is no particular reason to think that the benefits of technology and computers will accrue to the population at large. Perhaps there are good reasons to think they won’t.

In short, if the wealthy can wield government to their benefit, won’t they similarly do so with tech?

There is plenty of evidence contrary to this thesis. Tech has given us many things we could never have afforded before, like near-infinite music collections, step-by-step vehicle navigation, and near-free and instant communications (including transmission of cat pictures). In that sense, we are indeed all much richer for it. And in the past, to the degree that tech eliminated jobs, it always seemed that new demand arose as a result, creating even more work. Steam engine, electrification, etc.

But recent decades do seem to show a different pattern, and I can’t help but see a lot of tech’s gifts as bric-a-brac, trivial compared to the basics of education and economic security, where it seems that so far, tech has contributed surprisingly little. Or maybe that should not be surprising?


Freedom from v. freedom to: aviation edition

The FAA is still in its rule-making process for drones (or as they call them, UAS — unmanned aircraft systems), but they have announced that all drone operators must register themselves and their aircraft online. There will be a $3 fee (waived for early-registrants until 1/20/2016) and the registration lasts three years.

A quad-copter, quad-coptering.

Some drone enthusiasts and some libertarians are up in arms. “It’s just a new technology that they fear they can’t control!” “Our rights are being curtailed for no good reason!” “That database will be used against us, just wait and see!”

I have a few thoughts.

First, I have more than a smidgeon of sympathy for these views. We should always pause whenever the government decides it needs to intervene in some process. And to be frank, the barriers set by the FAA to traditional aviation are extremely high. So high that general aviation has never entered the mainstream of American culture, and given the shrinking pilot population, probably never will. The price to get in the air is so high in terms of training that few ever get there. As a consequence, the price of aircraft remains high, the technological improvement of aircraft remains slow, rinse, repeat.

In fact, I have often wondered what the world might be like if the FAA had been more lax about crashes and regulation. Perhaps we’d have skies filled with swarms of morning commuters, with frequent crashes which we accept as a fact of life. Or perhaps those large volumes of users would spur investment in automation and safety technologies that would mitigate the danger — at least after an initial period of carnage.

I think I would be upset if the rules were like those for general aviation. But in fact registration is pretty modest. I suspect that later, there will be some training and perhaps a knowledge test, which seems quite reasonable. As a user of the National Airspace System (both as a pilot and a passenger) I certainly appreciate not ramming into solid objects while airborne. Registration, of course, doesn’t magically separate aircraft, but it provides a means for accountability. Over time, I suspect rules will be developed to set expectations on behavior, so that all NAS users know what to expect in normal operations. Call it a necessary evil, or, to use a more traditional term, “governance.”

But there is one interesting angle here: the class of UAS being regulated (those weighing between 0.55 lb and 55 lb) have existed for a long time in the radio-controlled model community. What has changed to make drones “special,” requiring regulation now?

I think it is not the aircraft themselves, but the community of users. Traditional radio-controlled models were expensive to buy, took significant time to build, and were difficult to fly. The result was an enthusiast community, which by either natural demeanor or soft-enforced community norms, seemed able to keep their model airplanes out of airspace used by manned aircraft.

Drones came along and that changed quickly. The drones are cheap and easy to fly, and more and different people are flying them. And they’re alone, not in clubs. The result has been one serious airspace incursion after another.

A lot of people seem to think that because drones aren’t a fundamentally different technology from traditional RC hobby activity, no new rules are warranted. That’s not smart, and I don’t see the logic. It’s not about the machines, it’s about the situation.

Anyway, I think the future for drone ops is actually quite bright. There is precedent for a vibrant hobby along with reasonable controls. Amateur radio is one example. Yes, taking a multiple-choice test is a barrier to many, but perhaps a barrier worth having. Also, the amateur radio community seems to have developed its own immune system against violators of the culture and rules, which works out nicely, since the FCC (like the FAA) has limited capacity for enforcement. And it’s probably not coincidental that the FCC has never tried to build up a large enforcement capability.

Which brings me to my final point, which is that if the drone community is smart they will create a culture of their own and they will embrace and even suggest rules that allow their hobby to fruitfully coexist with traditional NAS users. The Academy of Model Aeronautics, a club of RC modelers, could perhaps grow to encompass the coming army of amateur drone users.


Simpler times, news edition

The other evening, I was relaxing in my special coffin filled with semiconductors salvaged from 1980’s-era consumer electronics, when I was thinking about how tired I am of hearing about a certain self-funded presidential candidate, or guns, or terrorism … and my mind wandered to simpler times. Not simpler times without fascists and an easily manipulated populace, but simpler times where you could more easily avoid pointless and dumb news, while still getting normal news.


It wasn’t long ago that I read news, or at least “social media,” on Usenet, a system for posting on message boards that predates the web and even the Internet. My favorite “news reader” (software for reading Usenet) was called trn. I learned all its clever single-key commands and mastered a feature common to most serious news readers: the kill file.

Kill files are conceptually simple. They contain a list of rules, usually specified as regular expressions, that determine which posts on the message board you will see. Usenet was the wild west, and it always had a lot of garbage on it, so this was a useful feature. Someone is being rude, or making ridiculous, illogical arguments? <plonk> Into the kill file goes their name. Enough hearing about terrorism? <plonk> All such discussions disappear.

Serious users of Usenet maintained carefully curated kill files, and the result was generally a pleasurable reading experience.
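For anyone who never saw one, the mechanics boil down to something like this toy sketch in TypeScript. The post fields and rules are made up, and real trn kill files had their own compact syntax, but the idea is the same: regex rules that decide what you never see.

    // Toy sketch of kill-file filtering: regex rules matched against post
    // headers; anything that matches never reaches the screen.
    interface Post {
      from: string;
      subject: string;
    }

    const killRules: { field: keyof Post; pattern: RegExp }[] = [
      { field: "from", pattern: /rude-person@example\.com/i }, // <plonk>
      { field: "subject", pattern: /terrorism/i },             // <plonk>
    ];

    function visiblePosts(posts: Post[]): Post[] {
      return posts.filter(
        (post) => !killRules.some((rule) => rule.pattern.test(post[rule.field]))
      );
    }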

Of course, technology moves on. Most people don’t use text-based news readers anymore, and Facebook is the de-facto replacement for Usenet. And in fact, Facebook is doing curation of our news feed – we just don’t know what it is they’re doing.

All of which brings me to musing about why Facebook doesn’t support kill files, or any sophisticated system for controlling the content you see. We live in more advanced times, so we should have more advanced software, right?

More advanced, almost certainly, but better for you? Maybe not. trn ran on your computer, and its authors (it’s open source) had no pecuniary interest in your behavior. Facebook, of course, is a media company, not a software company, and in any case, you are not the customer. The actual customers do not want you to have kill files, so you don’t.

Though I enjoy a good Facebook bash more than most people, I must also admit that Usenet died under a pile of its own garbage content. It was an open system and, after a gajillion automated spam posts, even aggressive kill files could not keep up. Most users abandoned it. Perhaps if there had been someone with a pecuniary interest in making it “work,” things would have been different. Also, if it had had better support for cat pictures.

Garbage can strikes again

When I was in policy school, we learned about something called the “Garbage Can Model of Organizational Choice,” which for some reason has stuck with me. I don’t want to boil it down too much, but in it, Cohen, March, and Olsen (and later, Kingdon) theorize that people are constantly coming up with “solutions” that more or less end up in a theoretical trash bin. Except nobody ever empties the trash. Instead, it lingers. At the same time, the random stochastic process known as life generates a constant stream of problems. Every once in a while, a “problem” comes along that fits a “solution” waiting in the trash can, and if there’s an actor who favors that solution who has been paying attention and waiting patiently, he trots it out and starts flogging it hard.

In light of the Paris attacks, we’ve been seeing this from the security establishment in a big way. They like tools that let them see and watch everything, and they do not like anything that gets in their way. So, for example, banning encryption that they cannot defeat is a solution that sits in the trash can perpetually. That’s why it’s unsurprising that the ex-CIA director is calling for Edward Snowden’s hanging or Dianne Feinstein and other senators are railing against Silicon Valley for offering its users strong encryption.

It’s all about having an established agenda and seizing an opportunity when it comes along. Politics as usual, move along, these droids are not particularly interesting.

But there is actually something a bit interesting going on here. The actual facts and circumstances right now do not support the panopticon theory of governance favored by intelligence and law & order types. The terrorists in this case did not use encryption. They sent each other SMS and other messages completely in the clear. If you look on the Internet, you will find article after article debunking the notion that controls on encryption would have made a difference in these attacks at all.

In fact, given the circumstances of this particular case, it looks like the intelligence agencies already had all the tools they needed to stop this attack. They just didn’t. This, if anything, should be the actual story of the day!

Okay, so this is perhaps also not interesting to the jaded news junky. Maybe it’s a bit further down in the playbook, but we’ve all seen people who should be on the defensive go on the offensive in a big, loud way. But I still find it disturbing that the facts are not steering the debate at all. If you enjoy making fun of fact-free conservatives, then this is not the circus for you, either, as powerful Dems are behind this crap.

Various media outlets, even mainstream ones, are calling out the bullshit, but the bullshit continues.

Same as it ever was, or new, disturbing political discourse untethered to reality. You decide.

Oh, and just as an aside: you can’t stop the bad guys from using strong encryption. So what are you actually calling for?


Throwaway knowledge

I was recently tasked with writing a pretty useful extension for Chrome that would enhance Google Calendar so that room numbers on our campus would link to the campus’s internal map tool. Google Maps does not understand this campus, so the normal map link provided is not helpful.

This task took me the better part of a day and a half, and the whole time I was gripped by this awful feeling that in order to solve this simple problem, I had to learn a bunch of random stuff that would be mostly useless as soon as I was done.

For example, I needed to learn how Chrome extensions work. For that, I needed to familiarize myself with the Google Chrome API — at least enough to get this job done. I can say for sure that even in the small amount of time I spent on that task, I could tell that this API is not stable. They change it as often as they feel like, adding this, removing that, and most ominously “deprecating” some other things. Deprecation is the worst. It means “we’re not taking this function away today, but we are taking it away, probably when you least expect it.” And, in any case, how many Chrome extensions is the average programmer going to write?

Furthermore, getting into Google Calendar enough to insert my links was an exercise in pure disposable hackery. I had to reverse engineer enough of Calendar to figure out where to scan for locations and where to insert links. And I needed to understand Calendar well enough to know when it was going to change up the document model, so I could set event handlers for that.
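In rough TypeScript, the content script boiled down to something like the sketch below. The selector, room-number pattern, and campus map URL are placeholders, not the real values; those came from poking at Calendar’s markup and will break the moment Google rearranges it.

    // Rough sketch of the content script. ".event-location", the room pattern,
    // and the map URL are hypothetical stand-ins.
    const CAMPUS_MAP_URL = "https://maps.example.edu/room/";
    const ROOM_PATTERN = /\b\d{1,3}-\d{3,4}\b/; // e.g. "12-345", a made-up room format

    function linkifyLocations(root: ParentNode): void {
      root.querySelectorAll<HTMLElement>(".event-location").forEach((el) => {
        const match = el.textContent?.match(ROOM_PATTERN);
        if (match && !el.querySelector("a")) {
          const a = document.createElement("a");
          a.href = CAMPUS_MAP_URL + encodeURIComponent(match[0]);
          a.textContent = match[0];
          el.replaceChildren(a);
        }
      });
    }

    // Calendar rewrites its DOM constantly, so re-scan whenever it changes.
    new MutationObserver(() => linkifyLocations(document)).observe(document.body, {
      childList: true,
      subtree: true,
    });
    linkifyLocations(document);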

This script works today, but it’s obviously going to break, probably very soon, because Google can and will change the internal guts of Calendar whenever they please. Unlike with the extension API mentioned above, they are under no obligation, not even a twinge of guilt, to hold their internal implementation of Calendar to some fixed standard.

All this points to some rather sad facts about the future of coding for the web. You’ll  absolutely have to work through the supported APIs, and when they change, that’s your problem, and if they are not offered or are incomplete, that’s also your problem, and if they go away, also your problem. Compare that, for example, to a piece of software on your PC. Maybe you wrote an ancient MSDOS .bat file that does something with the input and output of an old program you liked. If the company that made the program makes a new version that breaks your script, you could upgrade to it when you want to — or never, depending on what was convenient for you.

I’ve already been bitten by this. I made an alarm clock that interfaces with Google Calendar using a published API. It is a nice alarm clock with a mechanical chime handmade by hippies from Woodstock, NY. It worked just fine for years, until one day it stopped working. Google had deprecated, and then dropped, the API the clock was using. There was a new API. I had to go back to my years-old code and rewrite it (to use a substantially more complicated API, by the way).
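For flavor, here is roughly what reading the next event looks like against the current Calendar API (v3) using Google’s Node client. This is not the clock’s actual code; credential setup is glossed over and the placeholder values are obviously fake.

    // Sketch of reading the next upcoming event via Google Calendar API v3.
    // Credentials and IDs below are placeholders.
    import { google } from "googleapis";

    const auth = new google.auth.OAuth2("CLIENT_ID", "CLIENT_SECRET", "https://example.com/oauth2callback");
    auth.setCredentials({ refresh_token: "REFRESH_TOKEN" });

    const calendar = google.calendar({ version: "v3", auth });

    async function nextAlarmTime(): Promise<string | undefined> {
      const res = await calendar.events.list({
        calendarId: "primary",
        timeMin: new Date().toISOString(),
        maxResults: 1,
        singleEvents: true,
        orderBy: "startTime",
      });
      const next = res.data.items?.[0];
      // All-day events have only a date; timed events have a dateTime.
      return next?.start?.dateTime ?? next?.start?.date ?? undefined;
    }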

I guess that’s life, but people who make alarm clocks for a living may be surprised that users want them to provide active support and software upgrades forever. Or maybe we’ll just throw out our clocks after a year or so, like a phone.

But this is a mere annoyance compared to my main concern: unstable platforms discourage the development of knowledge and expertise. Why learn anything well if the entire API, or hell, the only supported programming language, or even the entire theoretical framework for working with the API (does anybody seriously think REST with PUT, GET, and POST is going to last forever?) is going to change next Wednesday? Perhaps that’s why in my interviews with Google nobody ever seemed to care one whit about any of my knowledge or experience.

I, for one, welcome the coming generations of ADD-afflicted software engineers and am excited to see the wagonloads of new “wheels” they’ll invent.

Yippee!

Back when computers were computers

Trying to learn new tricks, this old dog dove into Amazon Lambda today. Distributed, cloud-based programming is really something to wrap your head around if you grew up working on “a computer,” where most, if not all, of the moving parts you needed to make your program run were on that computer. Well, that’s not how the kids are doing it these days.

So Lambda is cool because you only have to write functions; everything about being a computer (setting it up, maintaining it, and provisioning more as needed) is abstracted away and handled automatically. But that does come at a price: your functions need to be completely and 100% stateless. Hmm, all that fancy automatic provisioning and teardown seems a bit less impressive in that context.

To tie all those functions into what we used to call “a program,” you still need state, and you get that from the usual suspects: S3, a database of your choosing, Dropbox, whatever.

Which, to me, seems like all that provisioning and scaling mumbo jumbo is not handled automagically at all. You still need to think about your database: how many copies you need to handle X transactions per second, etc. Except now you’re using a database where you might once have merely declared a variable, which would be stored in, you know, memory, as long as the program was running.
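To make that concrete, here is a minimal sketch in TypeScript (AWS SDK v3) of what even a trivial counter looks like once the handler itself can hold no state. The table and key names are invented, and concurrency is ignored for clarity.

    // Minimal sketch of a "stateless" Lambda handler: what used to be a
    // variable in memory is now two network round-trips to DynamoDB.
    // Table and key names are hypothetical; races are ignored for clarity.
    import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
    import { DynamoDBDocumentClient, GetCommand, PutCommand } from "@aws-sdk/lib-dynamodb";

    const db = DynamoDBDocumentClient.from(new DynamoDBClient({}));

    export const handler = async () => {
      // Read the current value (was: `let counter = 0;` in "a program").
      const current = await db.send(
        new GetCommand({ TableName: "AppState", Key: { id: "counter" } })
      );
      const count = ((current.Item?.value as number) ?? 0) + 1;

      // Write it back: another hop to a separately provisioned service.
      await db.send(
        new PutCommand({ TableName: "AppState", Item: { id: "counter", value: count } })
      );

      return { statusCode: 200, body: JSON.stringify({ count }) };
    };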

So now everything is way more complex and about a metric gajillion times slower.

But this is the cost, I guess, of the future, where every application is designed to support 100 million users. I think it makes great sense for applications where those 100 million users are interacting with each other’s data. But for old-skool apps, where you’re working with your own data most of the time (i.e., editing a doc), I’d much rather write code for “a computer.”