Taking Back Public Policy

Hold on folks, I’m about to get highly normative.

You see, I keep running into people who claim to do “public policy” for a living, or their business card says “Director of Public Policy.”

But when I talk to them, I find out they don’t know anything about public policy. Worse, they don’t care. What most of these people do for a living is “trying to win” some game.

So public policy becomes “communications” when there is a need to convince people that something is good for them, or direct support of politicians when simple communications doesn’t quite cut the mustard.

At best, I would call this work advocacy. But “Director of the Stuff We Want” does not sound so good, so we get “Director of Public Policy.”

Okay, whatever, I’m not an idiot. I get how things work in the real world. Down in the scrum, there is no public good, there is no “what is best?” There are only people fighting for what they want, and we all pretend that sorta kinda over enough time, we end up with outcomes that are a reasonable balance of everyone’s interests, intensity of interests (particularly important if you like guns), and resources (particularly important if you have lots of money).

Except that process seems to be whiffing a bit these days, no?

What I wish for “public policy” would be for the field to somehow professionalize, to set norms of behavior, to set some notion of this-bullshit-is-too-much. Maybe, if so many people purporting to offer policy analysis weren’t so entirely full of crap all the time, we could one day reach the point where people would take policy analysis half seriously again.

So, in the interest of brevity, here are some signs your policy work may be pure hackery:

  • You talk in absolutes. If you’re in the business of telling someone that solar power or electric utilities or oil and gas companies or wind turbines or nuclear are wonderful or evil, you probably are not doing public policy work. You’re just confusing people and wasting everyone’s time and attention.
  • Your salary includes a bonus for successfully causing or stopping something.
  • You will not admit publicly to any shortcoming of your preferred position.
  • You do not even read work that comes to different conclusions than yours.
  • You arrive at conferences in a private jet.

I also notice a lot of people who “do” public policy are also attorneys. That makes sense — knowing how the law works certainly helps. But lawyering and policy work should not be the same. Lawyers have a well-developed set of professional ethics centered around protecting their clients’ interests while not breaking any laws. This is flying way too low to the ground for good policy work. The policy world should aspire to a higher standard. Given the low esteem in which most folks hold the legal profession, it seems reasonable that if we ever hope for people to take policy work seriously, we’ll need to at least view “our clients” more broadly than “whoever pays our salary.”

So, what is public policy? Well, I think it’s the process by which the impacts of choices faced by government are predicted and the results of choices already made are evaluated. It takes honesty, humility, and a willingness to let data update your conclusions.

Back in Real Life, public policy professionals, of course, also need skills of persuasion and influence in order to advocate on behalf of their conclusions (and their employers’ conclusions, natch). But for the love of god, if you skip the analytical step, you’re not doing public policy, you’re doing assholery.

A different kind of techno-utopianism

What follows is a rather meandering meditation.

Bah, techno-utopianism

There’s a lot of techno-utopianism coming out of Silicon Valley these days. Computers will do our bidding, and we will all be free to pursue lives of leisure. Machines, amplifying human talent, will make sure we are all rewarded (or not) in proportion to our skills and effort.

You already know I’m skeptical. Technology today seems to take as often as it gives: you get slick, you give up control. You get free media, you are renting out your eyeballs. A lot of people seem to feel powerless when it comes to computing.

And why shouldn’t they? They don’t control the OS on their phone, they don’t even know exactly what an OS is. They didn’t decide how Facebook should work. If they don’t like it, they can’t do much about it, except not use it — hardly an option in this world.

A better techno-utopianism

But I am a techno-utopian in my own way. In my utopia, computers (and software) become flexible, capable building blocks, and people understand them well enough to recompose them for their own purposes. These blocks would be honest, real tools that people — programmers and non-programmers alike — can wield skillfully and without a sense that there is anything hidden or subtle going on under the hood. Basically, we’d all be masters of our technology. I’m not saying it’s realistic, it’s just my own preferred imaginary world.

How Dave Thinks of Computers

When I started my tech career, I was an engineer in the semiconductor business. We had computer-aided design (CAD) software that helped us design chips. Logic simulators could help us test digital logic circuits. Circuit simulators could help with the analog stuff. Schematic capture tools let us draw circuits symbolically. Graphic layout tools let us draw the same circuits’ physical representation. Design rule checking tools helped us make sure our circuits conformed to the manufacturing requirements. The list of CAD tools went on and on. And there was a thing about CAD tools: they were generally buggy and did not interoperate worth a damn. Two tools from the same vendor might talk, but from different vendors — forget it.

So we wrote a lot of software to slurp data from here, transform it in some way, and splat it to there. It was just what you had to do to get through the day. It was the glue that made a chip design “workflow” flow.
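
To make that concrete, here is a minimal sketch of the kind of glue I mean, written in modern Python rather than whatever we hacked up back then; the file names and formats are invented for illustration:

    import csv

    # Slurp: read a netlist exported by tool A (assume CSV rows: net, pin, pin, ...).
    with open("netlist_toolA.csv", newline="") as f:
        rows = [row for row in csv.reader(f) if row]

    # Transform: pretend tool B wants uppercase net names and
    # whitespace-separated fields instead.
    lines = [" ".join([row[0].upper(), *row[1:]]) for row in rows]

    # Splat: write the result where tool B expects to find it.
    with open("netlist_toolB.net", "w") as f:
        f.write("\n".join(lines) + "\n")

A dozen lines, no error handling, nothing reusable. And that was fine.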

These glue tools were not works of software engineering art. They were hacks thrown together by skilled engineers, but not skilled software engineers, in order to get something done. The results were not handsome, not shrink-wrap ready, and not user-friendly, but they were perfectly workable for our own purposes.

That experience really affected the way I view computing. To this day, I see people throw up their hands because Program X is simply incompatible with Program Y; the file formats are incompatible, undocumented, secret. Similarly, people who might write “just ok” software would never dream of trying, because they do not have the time or knowledge to write Good, Proper Software.

In my utopia, that barrier would mostly go away.

The real key is knowing “how computers work.”

Khan!!!!

There is a push to teach “coding” in school these days, but I see it as simultaneously too much and too little. It’s too much in that the emphasis on learning to write software will be lost on the many people who will never use that skill, and knowledge of any one programming language has a ridiculously short half-life. Not every high school senior needs to be able to write an OS, or even a simple program. They do not need to understand how digital logic or microprocessors work. And teaching them the latest framework seems pointless.

But I do want them to understand what data is, how it flows through a computer, and the different ways it can be structured. When they ask a computer to do something, I want them to have a good, if vague, notion of how much work that “thing” is.

That is, they should understand a computer in the same way Kirk wants Saavik to know “why things work on a starship.”

See, Kirk doesn’t understand warp theory or how impulse engines work, but he knows how a starship “works,” and that makes him a good captain.

How things work on a computer

Which brings me back to my utopia: I want everyone to know how things are done on a computer. Because anyone who has spent any length of time around computers knows that certain patterns emerge repeatedly — a lot of programming has a constant vague feeling of déjà vu. That makes sense, because, more or less, computers really only do a few things (and these overlap a lot; a minimal sketch follows the list):

  • reading data from one or more places in memory, doing something with it, and writing the results to one or more other places in memory
  • reading data from an external resource (a file, network connection, USB port) or writing it to one (a file, network connection, USB port, display, etc.)
  • waiting for something to happen, then acting
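
Here is that whole list in a few lines of Python; the file name and port number are arbitrary stand-ins:

    import socket

    # 1. Read from memory, do something, write the result elsewhere in memory.
    prices = [3.10, 2.95, 3.40]
    summary = {"total": sum(prices)}

    # 2. Read from or write to an external resource (here, a file).
    with open("prices.txt", "w") as f:
        f.write("\n".join(str(p) for p in prices))

    # 3. Wait for something to happen, then act (here, a network connection).
    server = socket.create_server(("localhost", 8080))
    conn, addr = server.accept()   # blocks until a client connects
    conn.sendall(b"hello\n")       # then acts
    conn.close()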

With regard to the data itself, I want people to understand basic data structures: structs, queues, lists, stacks, hashes, files — what they are and why/when they are used. They should know that these can be composed arbitrarily: structs of hashes of stacks of files containing structs, etc.
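
In Python terms (dicts standing in for hashes and structs, deques for queues), that composition looks something like this; the job-tracking example is invented:

    from collections import deque

    # A struct (a dict here) holding a queue of steps and a hash of lists.
    job = {
        "name": "chip_tapeout",
        "pending": deque(["drc", "lvs", "gds_export"]),  # a queue of steps to run
        "results": {},                                   # a hash: step -> list of findings
    }

    step = job["pending"].popleft()                      # dequeue the next step
    job["results"].setdefault(step, []).append("clean")  # push a result onto its list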

And finally, I want people to understand something of computational complexity — what computer scientists sometimes refer to as “big-O” notation. Essentially, this is all about knowing how the difficulty of solving a problem grows with the size of the problem. It applies to the time (compute cycles) and space (memory) needed to solve a problem. Mastering this is an advanced topic in CS education, which is why it is usually introduced late-ish in CS curricula. But I’m not talking about mastery. I’m talking about awareness. Bring it in early-ish, in everyone’s curriculum!
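
That awareness fits in a few lines. A toy comparison (the million-element haystack is an arbitrary size) shows why “look it up in a hash” beats “scan the whole list” as the data grows:

    import time

    haystack_list = list(range(1_000_000))
    haystack_set = set(haystack_list)

    t0 = time.perf_counter()
    -1 in haystack_list   # linear scan: time grows with n, i.e. O(n)
    t1 = time.perf_counter()
    -1 in haystack_set    # hash lookup: roughly constant, i.e. O(1)
    t2 = time.perf_counter()

    print(f"list scan: {t1 - t0:.5f}s   set lookup: {t2 - t1:.7f}s")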

Dave’s techno-utopia

Back to my utopia. In my utopia, computers and the Internet would not be the least bit mysterious. People would have a gut-level understanding of how they work: what happens, for example, when you click “search” in Google.
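
At bottom, that click is a small, knowable sequence. A sketch, using example.com as a stand-in since Google frowns on scripted queries:

    from urllib.request import urlopen

    # The browser resolves a name to an address (DNS), opens a connection
    # (TCP, plus TLS for https), sends an HTTP request carrying your query,
    # and gets back plain text (HTML) to render. That is the whole trick.
    response = urlopen("https://example.com/?q=inverters")
    html = response.read().decode("utf-8")
    print(response.status, len(html), "bytes of HTML")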

Anyone could slap together solutions to problems using building blocks that they may or may not understand individually, but whose purpose and capabilities they do understand, using the concepts mentioned above. And if they can’t or won’t do that, at least they can articulate what they want in those terms.

In Dave’s techno-utopia, people would use all kinds of software: open, proprietary, big and small, doing clever and exotic things that they might never understand. But they would also know that, under the hood, that software slurps, transforms, and splats, just like every other piece of software. Moreover, they would know how to splat and slurp from it themselves, putting together “flows” that serve their purposes.
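
On a Unix-ish system, that can be as plain as running other people’s programs and wiring their outputs together; the commands below are ordinary tools picked for illustration:

    import subprocess

    # Slurp: run one program and capture what it splats out.
    listing = subprocess.run(["ls", "-l"], capture_output=True, text=True).stdout

    # Splat it into the next program: sort the listing by file size (column 5).
    by_size = subprocess.run(["sort", "-k5", "-n"], input=listing,
                             capture_output=True, text=True).stdout
    print(by_size)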

Nerd alert: Google inverter challenge

A couple of years ago Google announced an electrical engineering contest with a $1M prize. The goal was to build the most compact DC-to-AC power inverter that could meet certain requirements, namely 2 kVA power output at 240 Vac, 60 Hz, from a 450 V DC source with a 10 Ω impedance. The inverter had to withstand certain ambient conditions, meet certain reliability requirements, and meet FCC interference requirements.
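
For a rough sense of scale, here is the back-of-envelope arithmetic the spec implies; the 95% efficiency figure is my assumption, not part of the requirements:

    import math

    P_out = 2000.0   # W (treating the 2 kVA rating at unity power factor)
    V_ac = 240.0     # V rms output
    V_dc = 450.0     # V open-circuit source voltage
    R_src = 10.0     # ohms of source impedance

    I_ac = P_out / V_ac            # ~8.3 A rms at the output
    eta = 0.95                     # assumed inverter efficiency
    P_in = P_out / eta             # ~2105 W drawn at the inverter input

    # The source must satisfy V_dc*I - R_src*I**2 = P_in; take the smaller root.
    I_dc = (V_dc - math.sqrt(V_dc**2 - 4 * R_src * P_in)) / (2 * R_src)
    print(f"I_ac ~ {I_ac:.1f} A rms, I_dc ~ {I_dc:.1f} A, "
          f"loss in the source ~ {R_src * I_dc**2:.0f} W")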

Fast forward a few years, and the results are in. Several finalists met the design criteria, and the grand prize winner exceeded the power density requirement by more than 3x!

First, congrats to the “Red Electrical Devils”! I wish I were smart enough to have been able to participate, but my knowledge of power electronics is pretty hands-off, unless you are impressed by using TRIACs to control holiday lighting. Here’s the IEEE on what they thought it would take to win.

Aside from general gEEkiness, two things interested me about this contest. First, from an econ perspective, contests are just a fascinating way to spur R&D. Would you be able to get entrants, given the cost of participation and the low likelihood of winning the grand prize? Answer: yes. This seems to be a reliable outcome if the goal is interesting enough to the right body of would-be participants.

The second thing that I found fascinating was the goal: power density. I think most people understand the goal of efficiency, but is it important that power inverters be small? The PV inverter on the side of your house, also probably around 2 kW, is maybe 20x as big as these. Is that bad? How much is it worth to shrink such an inverter? (Now, it is true that if you want power density, you must push on efficiency quite a bit, since every watt lost to heat needs to be dissipated somehow, and that gets harder and harder as the device gets smaller. But in this case, though the efficiencies achieved were excellent, they were not cutting edge; the teams instead pursued extremely clever cooling approaches.)

I wonder what target market Google has in mind for these high-power-density inverters. Cars, perhaps? There, density matters more than for a fixed PV inverter, but still seemingly not to this extreme, and specific power (per unit mass) rather than volumetric density seems like it would matter more. Maybe Google never had a target in mind. For sure, there was no big reveal with the winner announcement. Maybe Google just thought that this goal was the most likely to generate innovation in this space overall, without a particular end use in mind at all — it’s certainly true that power electronics are a huge enabling piece of our renewable energy future, and perhaps they’re not getting the share of attention they deserve.

I’m not the first, though, to wonder what this contest was “really about.” I did not have to scroll far down the comments to see one from Slobodan Ćuk, a rather famous power electronics researcher and inventor of the Ćuk converter.

Anyway, an interesting mini-mystery, but a cool achievement regardless.

On the correct prices of fuels…

Interesting blog entry from Lucas Davis at the Energy Institute at Haas, on the “correct” prices for fossil fuels. He cites a new paper from Ian Parry that tries to account for the external costs as they vary around the world.
 
I notice two points:
 
1. At least for gasoline, they are measuring the externalities of driving, not of gasoline per se. Bad news for EV drivers intent on saving the world one mile at a time, because most of the associated externalities are still present.
 
2. The estimated cost of carbon / GHG is small compared to the other external costs, like accidents and congestion. This is a common result among economic analyses of carbon costs, and I often wonder about it. If you use a value associated with the marginal cost of abatement, I can see it being quite low. But that’s in the current context of nobody abating much of anything. I wonder what it would be if you projected the marginal cost of 80% or 90% abatement. That is, if we were actually to solve the climate problem.

Or, another way of thinking about it: if GHG emissions are potentially going to make the earth uninhabitable, it seems like maybe they’re underestimating the external cost of carbon. Because there is limited cost data available for “the end of the world as we know it,” economists can be forgiven for working with the data they have, but we, the careful readers, should bear in mind the limits.