The answer is always the same, regardless of question. (Civil Aviation Edition)

The Wall Street Journal ran an editorial last week suggesting that the US air traffic control system should be privatized.

It’s not a new debate, and though I will get into some specifics of the discussion below, what really resonated for me is how religious and ideological the belief is that corporations just do everything better. It’s not like the WSJ made any attempt whatsoever to list (if only to dismiss) counter-arguments to ATC privatization. It’s almost as if the notion that there could be some justification for a publicly funded and run ATC has just never occurred to them.

It reminded me of a similar discussion, in a post in an energy blog I respect, lamenting the “dysfunction” in California’s energy politics, particularly from the CPUC.

What both pieces seemed to have in common is a definition of dysfunction that hews very close to “not the outcome that a market would have produced.” That is to say, they see the output of non-market (that is, political) processes as fundamentally inferior and inefficient, if not outright illegitimate. Of course, the outcomes from political processes can be inefficient and dysfunctional, but this is hardly a law of nature.

For my loyal reader (sadly, not a typo), none of this is news, but it still saddens me that so many potentially interesting problems (like how best to provision air traffic control services) break down on such tired ideological grounds: do you want to make policy based on one-interested-dollar per vote or one-interested-person per vote?

I want us to be much more agnostic and much more empirical in these kinds of debates. Sometimes markets get good/bad outcomes, sometimes politics does.

For example, you might never have noticed that you can’t fly Lufthansa or Ryanair from San Francisco to Chicago. That’s because there are “cabotage” laws in the US that bar foreign carriers from offering service between US cities. Those laws are blatantly anti-competitive and the flying public is definitely harmed by this. This is a political outcome I don’t particularly like due, in part, to Congress paying better attention to the airlines than to the passengers. Yet, I’m not quite ready to suggest that politics does not belong in aviation.

Or, in terms of energy regulation, it’s worth remembering that we brought politics into the equation a very long time ago because “the market” was generating pretty crappy outcomes, too. What I’m saying is that neither approach has exclusive rights to dysfunction.

OK. Let’s get back to ATC and the WSJ piece.

In it, the WSJ makes frequent reference to Canada’s ATC organization, NavCanada, which was privatized during a budget crunch a few years back and has performed well since then. This is in contrast to an FAA that has repeatedly “failed to modernize.”

But the US is not Canada, and our air traffic situation is very different. A lot of planes fly here! Anyone who has spent any serious time looking at our capacity problems knows that the major source of delay in the US is insufficient runways and terminal airspace, not control capabilities per se. That is to say, modernizing the ATC system so that aircraft could fly more closely using GPS position information doesn’t really buy you all that much if the real crunch is access to the airport. If you are really interested, check out this comparison of US and European ATC performance. The solution in the US is pouring more concrete in more places, not necessarily a revamped ATC. (It’s not that ATC equipment could not benefit from revamping, only that it is not the silver bullet promised.)

Here’s another interesting mental exercise: imagine you have developed new technology to improve the throughput of an ATC facility by 30% — but the hitch is that when you deploy the technology, there will be a diminution in performance during the switchover, as human learning, inevitable hiccups, and the need to temporarily run the old and new systems in parallel take their toll. Now imagine that you want to deploy that technology at a facility that is already operating near its theoretical maximum capacity. See a problem there? It’s not an easy thing.
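A toy calculation makes the bind concrete. Every number below is invented for illustration; the point is only that a facility with no slack cannot absorb even a temporary dip:

```python
# Toy model: backlog at a facility running near capacity during a switchover.
# Every number here is invented for illustration, not a real ATC figure.

demand = 95                # arrivals per hour (facility runs near its max)
capacity_new = 130         # movements per hour after the upgrade (+30%)
switchover_capacity = 80   # temporary dip while old and new run in parallel
switchover_hours = 500     # hours of degraded operation during transition

# While demand exceeds capacity, a backlog of delayed flights accumulates.
backlog = (demand - switchover_capacity) * switchover_hours
print(f"Flights delayed during switchover: {backlog}")

# The new headroom eventually burns the backlog down, but the pain was
# front-loaded onto a facility that had no slack to absorb it.
hours_to_recover = backlog / (capacity_new - demand)
print(f"Hours to clear the backlog afterwards: {hours_to_recover:.0f}")
```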

Another issue in the article regards something called ADS-B (Automatic Dependent Surveillance – Broadcast), a system by which aircraft broadcast their GPS-derived position. Sounds pretty good, and yet the US has taken a long time to roll it out widely. (It’s not required on all aircraft until 2020.) Why? Well, one reason is that a lot of the potential cost savings from switching to ADS-B would come from the retirement of expensive, old primary radars that “paint” aircraft with radio waves and sense the reflected energy. Thing is, primary radars can see metal objects in the sky, while ADS-B receivers only see aircraft that are broadcasting their position. You may have heard how, in recent hijackings, transponders were disabled by the pilot — so, though the system is cool, it certainly cannot alone replace the existing surveillance systems. The benefits are not immediate and large, and it leaves some important problems unsolved. Add in the high cost of equipage, and it was an easy target to delay. But is that a sign of dysfunction or of good decision-making?
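To see why ADS-B alone can’t replace primary radar, here’s a minimal sketch of the two surveillance pictures. The types and traffic are invented for illustration; this is not any real avionics interface:

```python
# Why an ADS-B-only picture misses non-cooperating targets. Illustrative only.
from dataclasses import dataclass

@dataclass
class Aircraft:
    ident: str
    lat: float
    lon: float
    broadcasting: bool  # ADS-B position reports require a working transponder

def adsb_picture(traffic):
    # ADS-B receivers only "see" aircraft that are broadcasting their position.
    return [a for a in traffic if a.broadcasting]

def primary_radar_picture(traffic):
    # Primary radar paints every metal object in the sky, cooperative or not.
    return list(traffic)

traffic = [
    Aircraft("UAL123", 37.6, -122.4, broadcasting=True),
    Aircraft("unknown", 37.7, -122.5, broadcasting=False),  # transponder off
]

print([a.ident for a in adsb_picture(traffic)])           # ['UAL123']
print([a.ident for a in primary_radar_picture(traffic)])  # both targets
```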

All of which is to say that I’m not sure a privately run organization, facing similar constraints, would make radically different decisions than has the FAA.

Funding the system is an interesting question, too. Yes, a private organization that can charge fees has a reliable revenue stream and is thus able to go to financial markets to borrow for investment. This is in contrast to the FAA, which has had a hard time funding major new projects because of constant congressional budget can-kicking. Right now the FAA is operating on an extension of its existing authorization (from 2012), and a second extension is pending, with a real reauthorization still behind that. OK, so score one for a private organization. (Unless we can make Congress function again, at least.)

But what happens to privatized ATC if there is a major slowdown in air travel? Do investments stop, or is service degraded due to cost cutting, or does the government end up lending a hand anyway? And how might an airline-fee-based ATC operate differently from one that ostensibly serves the public? Even giving privatization proponents the benefit of the doubt that a privatized ATC would be more efficient and better at cost saving, would such an organization be as good at spending more money when an opportunity comes along to make flying safer, faster, or more convenient for passengers? How about if the costs of such changes fall primarily on the airlines, through direct equipage costs and ATC fees? Or, imagine a scenario where most airlines fly large aircraft between major cities, and an upstart starts flying lots of small aircraft between small cities. Would a privatized ATC or a publicly funded ATC better resist the airlines’ anti-competitive pressure to erect barriers to newcomers?

I actually don’t know the answers. The economics of aviation are somewhat mysterious to me, as they probably are to you, unless you’re an economist or operations researcher. But I’m pretty sure that Scott McCartney of the WSJ knows even less.

 

progress, headphones edition

It looks like Intel is joining the bandwagon of people who want to take away the analog 3.5mm “headphone jack” and replace it with USB-C. This is on the heels of Apple announcing that this is definitely happening, whether you like it or not.

obsolete technology

There are a lot of good rants out there already, so I don’t think I can really add much, but I just want to say that this does sadden me. It’s not about analog v. digital per se, but about simple v. complex and open v. closed.

The headphone jack is a model of simplicity. Two signals and a ground. You can hack it. You can use it for other purposes besides audio. You can get a “guzinta” or “guzoutta” adapter to match pretty much anything in the universe old or new — and if you can’t get it, you can make it. Also, it sounds Just Fine.

Now, I’m not just being anti-change. Before the 1/8″ stereo jack, we had the 1/4″ stereo jack. And before that we had mono jacks, and before that, strings and cans. And all those changes have been good. And maybe this change will be good, too.

But this transition will cost us something. For one, it just won’t work as well. USB is actually a fiendishly complex specification, and you can bet there will be bugs. Prepare for hangs, hiccups, and snits. And of course, none of the traditional problems with headphones are eliminated: loose connectors, dodgy wires, etc. On top of this, there will be, sure as the sun rises, digital rights management, and multiple attempts to control how and when you listen to music. Prepare to find headphones that only work with certain brands of players and vice versa. (Apple already requires all manufacturers of devices that want to interface digitally with the iThings to buy and use a special encryption chip from Apple — under license, natch.)

And for nerd/makers, who just want to connect their hoozyjigger to their whatsamaducky, well, it could be the end of the line entirely. For the time being, while everyone has analog headphones, there will be people selling USB-C audio converter thingies — a clunky, additional lump between devices. But as “all digital” headphones become more ubiquitous, those adapters will likely disappear, too.

Of course, we’ll always be able to crack open a pair of cheap headphones and steal the signal from the speakers themselves … until the neural interfaces arrive, that is.

EDIT: 4/28 8:41pm: Actually, the USB-C spec does allow analog on some of the pins as a “side-band” signal. Not sure how much uptake we’ll see of that particular mode.

 

More minimum wage bullshit

 

Workers unaware that they are soon to be laid off.

Some clever economists have come up with a name for the religious application of simple economic principles to complex situations where they probably don’t apply: Econ-101ism.

That’s immediately what I thought of when my better half told me about this stupid article in Investor’s Business Daily about the minimum wage and UC Berkeley.

See, folks at Berkeley touted the $15/hr minimum wage as a good thing, and then UC laid off a bunch of people. Coincidence? The good people at Irritable Bowel Disease think not!

Except, few at UC get paid the minimum wage. And the $15/hr minimum wage has not taken effect and won’t take effect for years. And the reason for the job cuts is the highly strained budget situation at the UCs, a problem that is hardly new.

You could make an argument that a $15/hr minimum will strain the economy, resulting in lower tax revenue, resulting in less state money, resulting in layoffs at the UCs. I guess. Quite a lot of moving parts in that story, though.

Smells like bullshit.

Edit: UCB does have its own minimum wage, higher than the California minimum. It has been $14/hr since 10/2015 and will be $15/hr starting in 2017. (http://www.mercurynews.com/business/ci_28522491/uc-system-will-raise-minimum-wage-15-an)

Another edit: Chancellor Dirks claims the 500 job cuts would save $50M/yr. That implies an average hourly cost of $50/hr. Even if half of that goes to overhead and benefits, those would be $25/hr jobs, not near the minimum. In reality, the jobs probably had a range of salaries, and one can imagine some were near the $15 mark, but it is not possible that all or even most of them were.
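The arithmetic, spelled out (assuming roughly 2,000 paid hours in a full-time year):

```python
# Back-of-the-envelope check on the claimed layoff savings.
savings_per_year = 50_000_000  # claimed savings, $/yr
jobs_cut = 500
hours_per_year = 2000          # approx. full-time paid hours per year

cost_per_hour = savings_per_year / jobs_cut / hours_per_year
print(f"Average fully loaded cost: ${cost_per_hour:.0f}/hr")  # $50/hr

# Even if half of that is overhead and benefits, the wage itself is ~$25/hr,
# well above a $15/hr minimum.
print(f"Implied wage if half is overhead: ${cost_per_hour / 2:.0f}/hr")
```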

 

Taking back Public Policy

Hold on folks, I’m about to get highly normative.

You see, I keep running into people who claim to do “public policy” for a living, or whose business cards say “Director of Public Policy.”

But when I talk to them, I find out they don’t know anything about public policy. Worse, they don’t care. What most of these people do for a living is “trying to win” some game.

So public policy becomes “communications” when there is a need to convince people that something is good for them, or direct support of politicians when simple communications doesn’t quite cut the mustard.

At best, I would call this work advocacy. But “Director of the Stuff We Want” does not sound so good, so we get “Director of Public Policy.”

Okay, whatever, I’m not an idiot. I get how things work in the real world. Down in the scrum, there is no public good, there is no “what is best?” There are only people fighting for what they want, and we all pretend that sorta kinda over enough time, we end up with outcomes that are a reasonable balance of everyone’s interests, intensity of interests (particularly important if you like guns), and resources (particularly important if you have lots of money).

Except that process seems to be whiffing a bit these days, no?

What I wish for “public policy” would be for the field to somehow professionalize, to set norms of behavior, to set some notion of this-bullshit-is-too-much. Maybe, if so many people purporting to offer policy analysis weren’t so entirely full of crap all the time, we could one day reach the point where people would take policy analysis half seriously again.

So, in the interest of brevity, here are some signs your policy work may be pure hackery:

  • You talk in absolutes. If you’re in the business of telling someone that solar power or electric utilities or oil and gas companies or wind turbines or nuclear are wonderful or evil, you are probably not doing public policy work. You’re just confusing people and wasting everyone’s time and attention.
  • Your salary includes a bonus for successfully causing / stopping something.
  • You will not admit publicly to any shortcoming of your preferred position.
  • You do not even read work that comes to different conclusions than yours.
  • You arrive at conferences in a private jet.

I also notice a lot of people who “do” public policy are also attorneys. That makes sense — knowing how the law works certainly helps. But lawyering and policy work should not be the same. Lawyers have a well-developed set of professional ethics centered around protecting their clients’ interests while not breaking any laws. This is flying way too low to the ground for good policy work. The policy world should aspire to a higher standard. Based on the low esteem most folks feel for the legal profession, it seems reasonable that if we ever hope for people to take policy work seriously, we’ll need to at least view “our clients” more broadly than “who pays our salary.”

So, what is public policy? Well, I think it’s the process by which the impacts of choices faced by government are predicted and the results of choices already made are evaluated. It takes honesty, humility, and a willingness to let data update your conclusions.

Back in Real Life, public policy professionals, of course, also need skills of persuasion and influence in order to advocate on behalf of their conclusions (and their employers’ conclusions, natch). But for the love of god, if you skip the analytical step, you’re not doing public policy, you’re doing assholery.

 

 

Worst environmental disaster in history?

In keeping with Betteridge’s Law: no.

My news feed is full of headlines like:

These are not from top-tier news sources, but they’re getting attention all the same. Which is too bad, because they’re all false by any reasonable measure. Worse, all of the above seem to deliberately misquote a new paper published in Science. The paper does say, however:

This CH4 release is the second-largest of its kind recorded in the U.S., exceeded only by the 6 billion SCF of natural gas released in the collapse of an underground storage facility in Moss Bluff, TX in 2004, and greatly surpassing the 0.1 billion SCF of natural gas leaked from an underground storage facility near Hutchinson, KS in 2001 (25). Aliso Canyon will have by far the largest climate impact, however, as an explosion and subsequent fire during the Moss Bluff release combusted most of the leaked CH4, immediately forming CO2.

Make no mistake about it: this is a big release of methane, equal to the annual GHG output of 500,000 automobiles.

But does that make it one of the largest environmental disasters in US history? I argue no, for a couple of reasons.

Zeroth: because of real, actual environmental disasters, some of which I’ll list below.

First: without the context of the global, continuous release of CO2, this would not affect the climate measurably. That is, by itself, it’s not a big deal.

Second (and related): there are more than 250 million cars in the US, so this is 0.2% of the GHG released by automobiles in the US annually. Maybe the automobile is the ongoing environmental disaster? (Here’s some context: the US is 15.6% of global GHG emissions, transport is 27% of that, and 35% of that is from passenger cars. By my calculations, that makes this incident about 0.003% of global GHG emissions.)
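Here is that calculation, using only the figures above:

```python
# The leak as a share of global GHG emissions, from the figures in the text.
us_share_of_global = 0.156       # US share of global GHG emissions
transport_share_of_us = 0.27     # transport's share of US emissions
cars_share_of_transport = 0.35   # passenger cars' share of transport
leak_share_of_us_cars = 500_000 / 250_000_000  # 0.2% of US cars for a year

share = (us_share_of_global * transport_share_of_us *
         cars_share_of_transport * leak_share_of_us_cars)
print(f"{share:.6%} of global GHG emissions")  # roughly 0.003%
```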

Let’s get back to some real environmental disasters. You know, the kind that kill people and animals, and lay waste to the land and sea? Here is a list of just some pretty big man-made environmental disasters in the US:

Of course, opening up the competition to international disasters, including US-created ones, really expands the list, but you get the picture.

All this said, it’s really too bad this happened, and it will set California back on its climate goals. I was saddened to see that SoCal Gas could not cap this well quickly, or at least figure out a way to safely flare the leaking gas.

But it’s not the greatest US environmental disaster of all time. Not close.

 

 

 

Power corrupts, software ecosystems edition

I’ve written a lot of software in my day, but it turns out little of it has been for the use of anyone but myself and others in the organizations for which I’ve worked.

Recently, my job afforded me a small taste of what it’s like to publish software. I wrote a small utility, an extension to a popular web browser to solve a problem with a popular web application. The extension was basically a “monkey patch” of sorts over the app in question.

Now, in order to get it up on the “web store”, there were some hoops to jump through, including submission to the company for final approval. In the process, I signed up for the mailing list that programmers use to help them deal with hiccups in the publishing process.

As it turned out, my hoop-jumping wasn’t too hard. Because I was only publishing to my own organization, this extension ultimately did not need to be reviewed by the Great And Powerful Company. But I’ve continued to follow the mailing list, because it has turned out to be fascinating.

Every day, a developer or two or three, who has toiled for months or years to create something they think is worthwhile, sends a despondent message to the list: “Please help! My app has been rejected and I don’t know why!” Various others afflicted try to provide advice for things they tried that worked or did not.

And the thing is, in many cases, nobody can help them. Because the decision was made without consultation with the developer. The developer has no access to the decision-maker whatsoever. No email, no phone call, no explanation. Was the app rejected because it violated the terms and conditions? Which ones?

The developers will have to guess, make some changes, and try their luck again. It’s got to be an infuriating process, and a real experience of powerlessness.

This is how software is distributed today — through a small number of authorities. Google, Apple, Amazon, etc. If you want to play, you do it by their rules. Even on PCs and Macs there seems a strong push to move software distribution onto the company web stores. So far, it is still possible to put a random executable on your PC and run it — at least after clicking through a series of warnings. But will that be the case forever? We shall see.

The big companies have some good reasons for doing this. They can

  • assert quality control (of a sort. NB: plenty of crap apps in curated stores anyway)
  • help screen for security problems
  • screen out malware.

But they also have bad (that is, consumer-unfriendly) reasons to do this, like

  • monetizing other people’s work to a degree only possible in a monopoly situation
  • keeping out products that compete with their own
  • blocking apps that circumvent company-imposed limitations (like blocking frameworks and interpreters that might allow people to develop for the framework rather than the specific target OS)

All of those reasons are on top of the inevitable friction associated with dealing with a large, careless, monolithic organization and its bureaucrats, who might find it easier to reject your puny app than to take the time to understand what it’s doing and why it has merit.

Most sad to me is that the amazing freedom that computing used to have is being restricted. Most users would never use that freedom, but it was nice to have. If you could find a program, you could run it. If you could not find it, you could write it. And that is being chipped away at.

 

 

 

Apple Open Letter… eh

[ Updated below, but I’m leaving the text here as I originally wrote it. ]

 

By now, just about everyone has seen the open letter from Apple about device encryption and privacy. A lot of people are impressed that a company with so much to lose would stand up for its customers. Eh, maybe.

I have two somewhat conflicting thoughts on the whole matter:

1)

If Apple had designed security on the iPhone properly, it would not even be possible for them to do what the government is asking. In essence, the government plan is for Apple to develop a new version of iOS that they can “upgrade” the phone to, which would bypass (or make it easier to bypass) the security on the device. Of course, it should not be possible to upgrade the OS of a phone without the consent of a verified user, so this is a bug they baked in from the beginning — for their benefit, of course, not the government’s.

Essentially, though they have not yet written the “app” that takes advantage of this backdoor, they have already created it in a sense. The letter is therefore deceptive as written.
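The “bug,” reduced to pseudocode. This is a minimal sketch of the policy difference, not Apple’s actual update path:

```python
# Two OS-update policies, minimally sketched. Not Apple's actual code.

def install_update_as_shipped(vendor_signed: bool) -> bool:
    # As reportedly designed: a vendor signature alone suffices, even on a
    # locked phone. That is the hook the government order relies on.
    return vendor_signed

def install_update_done_right(vendor_signed: bool,
                              user_consented: bool) -> bool:
    # "Properly" designed, in the author's sense: the vendor signature is
    # necessary but not sufficient; a verified user must also consent.
    return vendor_signed and user_consented

# A locked phone, with no user consent given:
print(install_update_as_shipped(vendor_signed=True))        # True: exploitable
print(install_update_done_right(vendor_signed=True,
                                user_consented=False))      # False: no backdoor
```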

2)

The US government can get a warrant to search anything. Anything. Any. Thing. This is how it has been since the beginning of government. They can’t go out and search without a warrant. They can’t (well, shouldn’t) be able to pursue wholesale data mining of every single person, but they can get a warrant to break any locked box and see what’s inside.

Why should data be different?

I think the most common argument around this subject is that the government cannot be trusted with such power. That is, yes, the government may have a reasonable right to access encrypted data in certain circumstances (like decrypting a known terrorist’s phone!), but the tools that allow that also give them the power to access data under less clear-cut circumstances as well.

The argument then falls into a slippery-slope domain — a domain in which I’m generally unimpressed. In fact, I would dismiss it entirely if the US government hadn’t already engaged in widespread abuse of similar powers.

Nevertheless, I think the argument that the government should not have backdoors to people’s data is one of practical controls rather than fundamental rights to be free from search.

 

I have recommendations to address both thoughts:

  1. Apple, like all manufacturers, should implement security properly, so that neither they nor any other entity possesses a secret backdoor.
  2. Phones should have a known backdoor: a one-time password algorithm, seeded at the time of manufacture, with the seed stored and managed by a third party, such as the EFF. Any attempt to access this password, whether granted or denied, would be logged and viewable as a public record. (A rough sketch of the idea follows this list.)
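Here’s what such a scheme might look like, built on a counter-based HMAC one-time password. The escrow and public-log machinery is stubbed out, and everything here is illustrative rather than a vetted design:

```python
# Illustrative sketch of a per-device, escrowed one-time password (HOTP-style).
# The escrow and audit-log parts are stubs; this is not a vetted design.
import hashlib
import hmac
import os

def make_device_seed() -> bytes:
    # Generated once at manufacture; copies go to the device and to the
    # escrow agent (say, the EFF), and nowhere else.
    return os.urandom(32)

def one_time_password(seed: bytes, counter: int) -> str:
    # Counter-based OTP: each release consumes one counter value, so a
    # disclosed password is useless for any later attempt.
    mac = hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha256)
    return mac.hexdigest()[:12]

def escrow_release(seed: bytes, counter: int, warrant_id: str) -> str:
    # Every request, granted or denied, goes to a public audit log.
    print(f"PUBLIC LOG: OTP #{counter} released under warrant {warrant_id}")
    return one_time_password(seed, counter)

seed = make_device_seed()
print(escrow_release(seed, counter=1, warrant_id="2016-CR-000123"))
```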

I don’t have a plan for sealed and secret warrants.

 

[ Update 2/17 11:30 CA time ]

So, the Internet has gone further and explained a bit more about what Apple is talking about and what the government has asked for. It seems that, basically, the government wants to be able to brute-force the device, and wants Apple to make a few changes to make that possible:

  1. that the device won’t self-wipe after too many incorrect passwords
  2. that the device will not enforce extra time-delay between attempts
  3. that the attempts can be conducted electronically, via the port, rather than manually by the touch screen

I guess this is somehow different from Apple being able to hack their own devices, but to me, it’s still basically the same situation. They can update the OS and remove security features. That the final attack is brute force rather than a backdoor is hardly relevant.
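Some rough arithmetic shows why those three changes are the whole ballgame. The passcode sizes and attempt rates below are guesses for illustration, not Apple’s figures:

```python
# Brute-force time for a numeric passcode, with and without the protections.
# Attempt rates are illustrative guesses, not Apple's actual numbers.
SECONDS_PER_HOUR = 3600

secs_manual = 5.0       # typed on the touch screen, with enforced delays
secs_electronic = 0.05  # hypothetical automated attempts via the port

for digits in (4, 6):
    space = 10 ** digits
    by_hand = space * secs_manual / SECONDS_PER_HOUR
    by_port = space * secs_electronic / SECONDS_PER_HOUR
    print(f"{digits}-digit PIN: ~{by_hand:,.0f} h by hand, "
          f"~{by_port:,.1f} h via the port")
# And without disabling the self-wipe, you get ten tries, not ten thousand.
```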

So I’m standing behind my assessment that the Apple security is borked by design.

Technical Support that isn’t

I’ve been doing a little progr^H^H^H^H^Hsoftware engineering lately, and with it, I’ve been interacting with libraries and APIs from third parties. Using APIs can be fun, because it lets your simple program leverage somebody else’s clever work. On the other hand, I really hate learning complex APIs because the knowledge is a) too often hard-won through extended suffering and b) utterly disposable. You will not be able to use what you’ve learned next week, much less, next year.

So, when I’m learning a new API, I read the docs, but I admit to trying to avoid reading them too closely or completely, and I try not to commit any of it to memory. I’m after the bare minimum to get my job done.

That said, I sometimes get stuck and have to look closer, and a few times recently, I’ve even pulled the trigger on the last resort: writing for support. Here’s the thing: I do this when I have read the docs and am seriously stuck and/or strongly suspect that the API is broken in some way.

As it happens, I have several years’ experience as a corporate and field applications engineer (that’s old-skool Silicon Valley speak for person-who-helps-make-it-work), so I like to think I know how to approach support folks; I know how I would like to be approached.

I always present them with a single question, focused down to the most basic elements, preferably in a form that they can use to reproduce the issue themselves.
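For a web API, that usually means a short, self-contained script that shows expected versus actual in one glance. Something like this, where the endpoint and the documented behavior are made-up placeholders:

```python
# A minimal, runnable reproduction to attach to a support request.
# The endpoint and the documented behavior are hypothetical placeholders.
import json
import urllib.request

REPRO_URL = "https://api.example.com/v2/items?limit=1"  # placeholder

req = urllib.request.Request(REPRO_URL, headers={"Accept": "application/json"})
with urllib.request.urlopen(req, timeout=10) as resp:
    body = json.load(resp)

# Docs (hypothetically) say `items` is always present; observed: it is
# missing when limit=1. Expected vs. actual, in one glance:
print("expected key 'items'; got keys:", sorted(body))
```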

But three out of three times when I’ve done this in the past month (NB: a lot of web APIs are, in fact, quite broken), I have run into a support person who replied before considering my example problem, or even simply running it. Most annoying, they sent me the dreaded “Have you looked at the documentation?”

All of those are support sins, but I find the last the most galling. For those of you whose job it is to help others use your product, let me make this very humble suggestion: always, always, ALWAYS point the user to the precise bit of documentation that answers the question asked. (Oh, and if your docs website does not allow that, it is broken, too. Fix it.)

This has three handy benefits:

  1. It helps the user with their problem immediately. Who woodanodeit?
  2. It chastens users who are chastenable (like me), by pointing out how silly and lazy they were to write for help before carefully examining the docs.
  3. It forces you, the support engineer, to look at the documentation yourself, with a particular question in mind, and gives you the opportunity to consider whether the doc’s ability to answer this question might be improvable.

 

On the other hand, sending a friendly email suggesting that the user “look at the documentation” makes you look like an ass.

 

 

Culture of Jankery

The New York Times has a new article on Nest, describing how a software glitch allowed units to discharge completely and become non-functional. We’re all used to semi-functional gadgets and Internet services, but when it comes to thermostats we expect a higher standard of performance. After all, when thermostats go wrong, people can get sick and pipes can freeze. Or take a look at the problems of Nest Protect, a famously buggy smoke detector. Nest is an important case, because these are supposed to be the closest thing we have to grownups in IoT right now!

Having worked for an Internet of Things company, I have more than a little sympathy for Nest. It’s hard to make reliable connected things. In fact, it might be impossible — at least using today’s prevailing techniques and tools, and subject to today’s prevailing expectations for features, development cost, and time to market.

First, it should go without saying that a connected thermostat is millions or even billions of times as complex as the old bimetallic strip that it is often replacing. You are literally replacing a single moving part that doesn’t even wear out with a complex arrangement of chips, sensors, batteries, and relays, and then you are layering on software: an operating system, communications protocols, encryption, a user interface, etc. Possibility that this witch’s brew can be more reliable than a mechanical thermostat: approximately zero.

But there is also something else at work that lessens my sympathy: culture. IoT is the Internet tech world’s attempt to reach into physical devices. The results can be exciting, but we should stop for a moment to consider the culture of the Internet. This is the culture of “move fast and break things.” Are these the people you want building devices that have physical implications in your life?

My personal experience with Internet-based services is that they work, most of the time. But they change on their own schedule. Features and APIs come and go. Sometimes your Internet connection goes out. Sometimes your device becomes unresponsive for no obvious reason, or needs to be rebooted. Sometimes websites go down for maintenance at an inconvenient time. Even when the app is working normally, experience can vary. Sometimes it’s fast, sometimes slow. Keypresses disappear into the ether, etc.

My experience building Internet-based services is even more sobering. Your modern, complex web or mobile app is an agglomeration of sub-services, all interacting asynchronously through REST APIs behind the scenes. Sometimes, those sub-services use other sub-services in their implementation, and you don’t even have a way of knowing which ones. Each of those links can fail for many reasons, and you must code very defensively to gracefully handle such failures. Or you can do what most apps do — punt. That’s fine for chat, but you’ll be sorely disappointed if your sprinkler kills your garden, or even if your alarm clock fails to wake you up before an important meeting.
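“Defensively,” in practice, means wrapping every sub-service call in a timeout, bounded retries, and an explicit fallback. A minimal sketch, with the service URL and payload invented for illustration:

```python
# Minimal defensive wrapper around one sub-service call: timeout, bounded
# retries with backoff, and an explicit fallback. The URL is a placeholder.
import json
import time
import urllib.request

def fetch_setpoint(url="https://thermostat.example.com/api/setpoint",
                   retries=3, timeout=2.0, fallback=20.0):
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return json.load(resp)["celsius"]
        except (OSError, KeyError, ValueError):
            # Network failure, missing field, or unparseable body: back off.
            time.sleep(2 ** attempt)  # 1s, 2s, 4s...
    # Punt gracefully: a sane default beats a frozen pipe.
    return fallback

print(fetch_setpoint())
```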

And let’s not even talk of security, a full-blown disaster in the making. I’ll let James Mickens cover that territory.

Anyhoo, I still hold hope for IoT, but success is far from certain.

 

 

The great unequalizer?

This post is sloppily going to try to tie together two threads that have been in my newsfeed for years.

The first thread is about rising inequality. It is the notion, as Thomas Piketty puts it, that r > g: returns to capital are greater than the growth rate of the economy, so that ultimately wealth concentrates. I think there is some good evidence that wealth is concentrating now (stagnating middle-class wages for decades), but I am certainly not economist enough to judge. So let’s just take this as a supposition for the moment. (NB: Also, haven’t read the book.)
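A toy compounding exercise shows how relentlessly the gap opens. The rates are illustrative, loosely in the range Piketty discusses:

```python
# Toy illustration of r > g: capital compounding faster than the economy.
r = 0.05   # annual return to capital
g = 0.015  # annual growth of the economy (and, roughly, of wages)

capital = 1.0  # a fortune, measured as a share of today's economy
economy = 1.0

for year in range(1, 101):
    capital *= 1 + r
    economy *= 1 + g
    if year % 25 == 0:
        print(f"year {year:3d}: capital/economy = {capital / economy:.1f}x")
```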

There are many proposed mechanisms for this concentration, but one of them is that the wealthy, with both more at stake and more resources to wield, access power more successfully than most people. That is, they constantly adjust the rules of the game in their favor. (Here’s a four-year-old study showing that, in the Chicago area, more than half of those with wealth over $7.5M had personally contacted their congressional representatives. Have you ever gotten your Senator on the line?)

The second thread is about what technology does or does not accomplish. Is tech the great equalizer, bringing increased utility to everyone by lowering the cost of everything? Or is its role more complex?

The other day, in the comments of Mark Thoma’s blog, I came across an old monograph by EW Dijkstra that described some of the beliefs of early computer scientists:

I have fond memories of a project of the early 70’s that postulated that we did not need programs at all! All we needed was “intelligence amplification”. If they have been able to design something that could “amplify” at all, they have probably discovered it would amplify stupidity as well…

If you think of tech as an amplifier of sorts, then one can see that there is little reason to think that it always makes life better. An amplifier can amplify intelligence as well as stupidity, but it can also amplify greed and avarice, could it not?

Combining these threads, we get to my thesis: technology, rather than lifting all boats and making us richer, can be exploited by those who own it and control its development and deployment, disproportionately to their benefit, resulting in a permanent wealthy class. That is, though many techno-utopians see computers as a means to allow everyone a leisure-filled life, a la Star Trek, the reality is that there is no particular reason to think that the benefits of technology and computers will accrue to the population at large. Perhaps there are good reasons to think they won’t.

In short, if the wealthy can wield government to their benefit, won’t they similarly do so with tech?

There is plenty of evidence contrary to this thesis. Tech has given us many things we could never have afforded before, like near-infinite music collections, step-by-step vehicle navigation, and near-free and instant communications (including transmission of cat pictures). In that sense, we are indeed all much richer for it. And in the past, to the degree that tech eliminated jobs, it always seemed that new demand arose as a result, creating even more work. Steam engine, electrification, etc.

But recent decades do seem to show a different pattern, and I can’t help but see a lot of tech’s gifts as bric-a-brac, trivial compared to the basics of education and economic security, where it seems that, so far, tech has contributed surprisingly little. Or maybe that should not be surprising?