A postulate regarding innovation

I’ve been thinking a lot lately about success and innovation. Perhaps it’s because of my lack of success and innovation.

Anyway, I’ve been wondering which way the arrow of causality points with those things. Are companies successful because they are innovative, or are they innovative because they are successful?

This is not exactly a chicken-and-egg question. Google is successful and innovative. It’s pretty obvious that innovation came first. But after a few “game periods,” the situation becomes murkier. Today, Google can take risks and reach further forward into the technology pipeline for ideas than a not-yet-successful entrepreneur could. In fact, a whole lot of their innovation seems not to affect their bottom line much, in part because it’s very hard to grow a new business at the scale of their existing cash cows. This explains (along with impatience and the opportunity to invest in their high-returning existing businesses) Google’s penchant for drowning many projects in the bathtub.

I can think of other companies that behaved somewhat similarly over history. AT&T Bell Labs and IBM TJ Watson come to mind as places that were well funded due to their parent companies’ enormous success (success derived, at least in part, from monopoly or other market power). And those places innovated. A lot. As in Nobel Prizes, patents galore, etc. But despite the productive output of those labs, I don’t think they ever contributed very much to their parent companies’ success. I mean, the transistor! The solar cell! AT&T didn’t pursue those businesses, because it already had a huge working business that didn’t have much to do with them. Am I wrong about that assessment? I hope someone more knowledgeable will correct me.

Anyway, that brings me back to the titans of today, Google, Facebook, etc. And I’ll continue to wonder out loud:

  • are they innovating?
  • is the innovation similar to that of their predecessors?
  • are they benefiting from their innovation?
  • if not, who does, and why do they do it?

So, this gets back to my postulate, which is that, much more often than not, success drives innovation, and not the reverse. That it ever happens the other way is rare and special.

Perhaps a secondary postulate is that large, successful companies do innovate, but they have weak incentives to act aggressively on those innovations, and so their creative output goes underutilized longer than it might if it had been in the hands of a less successful organization.

Crazy?

Agnotology

I like discovering a new word, and am excited to see this one: Agnotology. I learned it today in this profile of Stanford University researcher Robert Proctor, an agnotologist.

Very succinctly, agnotology is the study of intentionally inducing ignorance, or as people I used to work with would put it: spreading FUD.

That is the daily work of thousands of people, employed in a large segment of corporate America, whose job it is to make sure that people do not understand something, say, vaccine safety or climate change, that might interfere with profitability. I guess if “it is difficult to get a man to understand something when his salary depends on his not understanding it,” then some corollary says it should be easy for one man to help many men not understand something if his salary depends on how many of them do not understand it.

Or something.

Anyway, with so much intentionally-induced ignorance pervading our universe these days, like the dark side of the force, I was happy to see that at least the activity has a name. I wish the agnotologists well, and hope they will come up with some kind of cure or vaccine that will help us contain the stupid-industrial complex that has come to so pervade our lives and politics.

Taking back Public Policy

Hold on folks, I’m about to get highly normative.

You see, I keep running into people who claim to do “public policy” for a living, or their business card says “Director of Public Policy.”

But when I talk to them, I find out they don’t know anything about public policy. Worse, they don’t care. What most of these people do for a living is “trying to win” some game.

So public policy becomes “communications,” when there is a need to convince people that something is good for them, or supporting politicians directly, when simple communications doesn’t quite cut the mustard.

At best, I would call this work advocacy. But “Director of the Stuff We Want” does not sound so good, so we get “Director of Public Policy.”

Okay, whatever, I’m not an idiot. I get how things work in the real world. Down in the scrum, there is no public good, there is no “what is best?” There are only people fighting for what they want, and we all pretend that sorta kinda over enough time, we end up with outcomes that are a reasonable balance of everyone’s interests, intensity of interests (particularly important if you like guns), and resources (particularly important if you have lots of money).

Except that process seems to be whiffing a bit these days, no?

What I wish for “public policy” would be for the field to somehow professionalize, to set norms of behavior, to set some notion of this-bullshit-is-too-much. Maybe, if so many people purporting to offer policy analysis weren’t so entirely full of crap all the time, we could one day reach the point where people would take policy analysis half seriously again.

So, in the interest of brevity, here are some signs your policy work may be pure hackery:

  • You talk in absolutes. If you’re in the business of telling someone that solar power, electric utilities, oil and gas companies, wind turbines, or nuclear power is wonderful or evil, you probably are not doing public policy work. You’re just confusing people and wasting everyone’s time and attention.
  • Your salary includes a bonus for successfully causing or stopping something.
  • You will not admit publicly to any shortcoming of your preferred position.
  • You do not even read work that comes to conclusions different from yours.
  • You arrive at conferences in a private jet.

I also notice that a lot of people who “do” public policy are also attorneys. That makes sense — knowing how the law works certainly helps. But lawyering and policy work should not be the same. Lawyers have a well-developed set of professional ethics centered on protecting their clients’ interests while not breaking any laws. That is flying way too low to the ground for good policy work; the policy world should aspire to a higher standard. Given the low esteem most folks have for the legal profession, it seems reasonable that if we ever hope for people to take policy work seriously, we’ll need to at least define “our clients” more broadly than “whoever pays our salary.”

So, what is public policy? Well, I think it’s the process by which the impacts of choices faced by government are predicted and the results of choices already made are evaluated. It takes honesty, humility, and a willingness to let data update your conclusions.

Back in Real Life, public policy professionals, of course, also need skills of persuasion and influence in order to advocate on behalf of their conclusions (and their employers’ conclusions, natch). But for the love of god, if you skip the analytical step, you’re not doing public policy, you’re doing assholery.


A different kind of techno-utopianism

What follows is a rather meandering meditation.

Bah, techno-utopianism

There’s a lot of techno-utopianism coming out of Silicon Valley these days. Computers will do our bidding, and we will all be free to pursue lives of leisure. Machines, amplifying human talent, will make sure we are all rewarded (or not) in proportion to our skills and effort.

You already know I’m skeptical. Technology today seems to take as often as it gives: you get slick, you give up control. You get free media, you are renting out your eyeballs. A lot of people seem to express powerlessness when it comes to computing.

And why shouldn’t they? They don’t control the OS on their phone, they don’t even know exactly what an OS is. They didn’t decide how Facebook should work. If they don’t like it, they can’t do much about it, except not use it — hardly an option in this world.

A better techno-utopianism

But I am a techno-utopian in my own way. In my utopia, computers (and software) become flexible, capable building blocks, and people understand them well enough to recompose them for their own purposes. These blocks would be honest, real tools that people — programmers and non-programmers — can wield skillfully and without a sense that there is anything hidden or subtle going on underneath the hood. Basically, we’d all be masters of our technology. I’m not saying it’s realistic; it’s just my own preferred imaginary world.

How Dave Thinks of Computers

When I started my tech career, I was an engineer in the semiconductor business. We had computer-aided design (CAD) software that helped us design chips. Logic simulators could help us test digital logic circuits. Circuit simulators could help with the analog stuff. Schematic capture tools let us draw circuits symbolically. Graphic layout tools let us draw the same circuits’ physical representation. Design rule checking tools helped us make sure our circuits conformed to the manufacturing requirements. The list of CAD tools went on and on. And there was a thing about CAD tools: they were generally buggy and did not interoperate worth a damn. Two tools from the same vendor might talk, but from different vendors — forget it.

So we wrote a lot of software to slurp data from here, transform it in some way, and splat it to there. It was just what you had to do to get through the day. It was the glue that made a chip design “workflow” flow.

These glue tools were not works of software engineering art. They were hacks thrown together by skilled engineers (though not skilled software engineers) in order to get something done. The results were not handsome, not shrink-wrap ready, and not user-friendly, but they were perfectly workable for our own purposes.
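To give a flavor of what I mean, here’s a minimal sketch of that kind of glue script. The file names, column names, and formats are all hypothetical, just to show the shape: slurp, transform, splat.

```python
# Hypothetical glue script: slurp a CSV exported by one CAD tool, keep and
# rename the fields another tool cares about, and splat out a CSV it can read.
import csv

def slurp(path):
    # Read the whole file into a list of dicts, one per row.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Keep only rows for the block we care about; rename and rescale a column.
    return [
        {"net": r["NET_NAME"], "cap_pf": float(r["CAP_FF"]) / 1000.0}
        for r in rows
        if r["BLOCK"] == "adc_core"
    ]

def splat(rows, path):
    # Write the transformed rows out in the format the next tool expects.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["net", "cap_pf"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    splat(transform(slurp("extracted_parasitics.csv")), "sim_input.csv")
```

Nothing elegant, but scripts like this were the duct tape that held a chip-design flow together.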

That experience really affected the way I view computing. To this day, I see people throw up their hands because Program X simply is incompatible with Program Y; the file formats are incompatible, undocumented, secret. Similarly, people who might write “just ok” software would never dream of trying because they do not have the time or knowledge to write Good, Proper Software.

In my utopia, that barrier would mostly go away.

The real key is knowing “how computers work.”

Khan!!!!

There is a push to teach “coding” in school these days, but I see it as simultaneously too much and too little. It’s too much in that the emphasis on learning to write software will be lost on many people who will never use that skill, and the knowledge of one programming language or another has a ridiculously short half-life. It is not necessary that every high school senior be able to write an OS, or even a simple program. They do not need to understand how digital logic or microprocessors work. And teaching them the latest framework seems pointless.

But I do want them to understand what data is, how it flows through a computer, and the different ways it can be structured. When they ask a computer to do something, I want them to have a good, if vague, notion of how much work that “thing” is.

That is, they should understand a computer in the same way Kirk wants Saavik to know “why things work on a starship.”

See, Kirk doesn’t understand warp theory or how impulse engines work, but he knows how a starship “works,” and that makes him a good captain.

How things work on a computer

Which brings me back to my utopia: I want everyone to know how things are done on a computer. Because anyone who has spent any length of time around computers knows that certain patterns emerge repeatedly — and a lot of programming has a constant vague feeling of deja-vu. That makes sense, because, more or less, computers really only do a few things (these overlap a lot, too):

  • reading data from one (or more) places in memory, doing something with it, and writing the results to one (or more) other places in memory
  • reading data from an external resource (file, network connection, USB port) or writing it to one (file, network connection, USB port, display, etc.)
  • waiting for something to happen, then acting

With regard to the data itself, I want people to understand basic data-structure concepts:

structs, queues, lists, stacks, hashes, files — what they are and why/when they are used. They should know that these can be composed arbitrarily: structs of hashes of stacks of files containing structs, etc.
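As a toy illustration of that kind of composition (my own made-up example, nothing canonical):

```python
from collections import deque

# A "struct" (a plain dict here) describing one print job.
job = {"user": "dave", "pages": 12, "priority": 2}

# A queue of jobs: first in, first out.
print_queue = deque([job])

# A hash (dict) of lists: jobs grouped by user (already a composite structure).
jobs_by_user = {"dave": [job], "alice": []}

# A stack of visited pages: last in, first out.
history = []
history.append("settings.html")
last_page = history.pop()

# And these nest arbitrarily: a struct holding a hash of queues and a stack.
state = {"queues": {"printer1": print_queue}, "history": history}
```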

And finally, I want people to understand something of computational complexity — what computer scientists sometimes refer to as “big-O” notation. Essentially, this is all about knowing how the difficulty of solving a problem grows with the size of the problem. It applies to the time (compute cycles) and space (memory) needed to solve a problem. Mastering this is an advanced topic in CS education, which is why it is usually introduced late-ish in CS curricula. But I’m not talking about mastery. I’m talking about awareness. Bring it in early-ish, in everyone’s curriculum!
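A tiny example of the kind of awareness I mean (toy code, not a curriculum): both functions below answer the same question, but one does work that grows with the square of n and the other grows with n, and that difference, not the syntax, is the lesson.

```python
def has_duplicates_quadratic(items):
    # Compare every pair: about n * n steps, i.e., O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # One pass with a hash set: about n steps, i.e., O(n).
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

# For a thousand items, the difference is invisible; for a million,
# it's the difference between "instant" and "go get a coffee."
```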

Dave’s techno-utopia

Back to my utopia. In my utopia, computers and the Internet would not be the least bit mysterious. People would have a gut-level understanding of how they work: what happens, for example, when you click Search in Google.

Anyone could slap together solutions to problems using building blocks that they may or may not understand individually, but whose purpose and capabilities they do understand, using the concepts mentioned above. And if they can’t or won’t do that, at least they could articulate what they want in those terms.

In Dave’s techno-utopia, people would use all kinds of software, open and proprietary, big and small, that does clever and exotic things they might never understand. But they would also know that, under the hood, that software slurps, transforms, and splats, just like every other piece of software. Moreover, they would know how to splat to and slurp from it themselves, putting together “flows” that serve their purposes.

 

Nerd alert: Google inverter challenge

A couple of years ago Google announced an electrical engineering contest with a $1M prize. The goal was to build the most compact DC-to-AC power inverter that could meet certain requirements, namely 2 kVA power output at 240 Vac, 60 Hz, from a 450 V DC source with a 10 Ω impedance. The inverter had to withstand certain ambient conditions, meet reliability requirements, and meet FCC interference requirements.

Fast forward a few years, and the results are in. Several finalists met the design criteria, and the grand prize winner exceeded the power density requirement by more than 3x!

First, congrats to the “Red Electrical Devils!” I wish I were smart enough to have been able to participate, but my knowledge of power electronics is pretty hands-off, unless you are impressed by using TRIACs to control holiday lighting. Here’s the IEEE on what they thought it would take to win.

Aside from general gEEkiness, two things interested me about this contest. First, from an econ perspective, contests are just a fascinating way to spur R&D. Would you be able to get entrants, given the cost of participation and the likelihood of winning the grand prize? Answer: yes. This seems to be a reliable outcome if the goal is interesting enough to the right body of would-be participants.

The second thing I found fascinating was the goal: power density. I think most people understand the goal of efficiency, but is it important that power inverters be small? The PV inverter on the side of your house, also probably around 2 kW, is maybe 20x as big as these. Is that bad? How much is it worth to shrink such an inverter? (Now, it is true that if you want to achieve power density, you must push on efficiency quite a bit, as every watt of energy lost to heat needs to be dissipated somehow, and that gets harder and harder as the device gets smaller. But in this case, though the efficiencies achieved were excellent, they were not cutting edge; the teams instead pursued extremely clever cooling approaches.)

I wonder what target market Google has in mind for these high-power-density inverters. Cars, perhaps? In that case, density matters more than it does for a fixed PV inverter, but still seemingly not to this extreme. And specific (per-mass) density rather than volumetric density seems like it would be more important there. Maybe Google never had a target in mind. For sure, there was no big reveal with the winner announcement. Maybe Google just thought that this goal was the most likely to generate innovation in this space overall, without a particular end use in mind at all — it’s certainly true that power electronics are a huge enabling piece of our renewable energy future, and perhaps they’re not getting the share of attention they deserve.

I’m not the first, though, to wonder what this contest was “really about.” I did not have to scroll far down the comments to see one from Slobodan Ćuk, a rather famous power electronics researcher and inventor of the Ćuk converter.

Anyway, an interesting mini-mystery, but a cool achievement regardless.

On the correct prices of fuels…

Interesting blog entry from Lucas Davis at the Haas Energy Institute, on the “correct” prices for fossil fuels. He cites a new paper from Ian Parry that tries to account for the external costs as they vary around the world.
 
I notice two points:
 
1. At least for gasoline, they are measuring the externalities of driving, not of gasoline per se. Bad news for EV drivers intent on saving the world one mile at a time, because most of the associated externalities are still present.
 
2. The estimated cost of carbon / GHG is small compared to the other external costs like accidents and congestion. This is a common result among economic analyses of carbon costs, and I often wonder about it. If you use a value associated with the marginal cost of abatement, I can see it being quite low. But that’s in the current context of nobody abating much of anything. I wonder what it would be if you projected the marginal cost of 80% or 90% abatement. That is, if we were actually to solve the climate problem.
Or, another way of thinking about it: if GHG emissions are potentially going to make the earth uninhabitable, it seems like maybe they’re underestimating the external cost of carbon. Because there is limited cost data available for “the end of the world as we know it,” economists can be forgiven for working with the data they have, but we, the careful readers, should bear in mind the limits.

Worst environmental disaster in history?

In keeping with Betteridge’s Law: no.

My news feed is full of headlines like:

These are not from top-tier news sources, but they’re getting attention all the same. Which is too bad, because they’re all false by any reasonable measure. Worse, all of the above seem to deliberately misquote a new paper published in Science. The paper does say, however:

This CH4 release is the second-largest of its kind recorded in the U.S., exceeded only by the 6 billion SCF of natural gas released in the collapse of an underground storage facility in Moss Bluff, TX in 2004, and greatly surpassing the 0.1 billion SCF of natural gas leaked from an underground storage facility near Hutchinson, KS in 2001 (25). Aliso Canyon will have by far the largest climate impact, however, as an explosion and subsequent fire during the Moss Bluff release combusted most of the leaked CH4, immediately forming CO2.

Make no mistake, it is a big release of methane, equal to the annual GHG output of some 500,000 automobiles.

But does that make it one of the largest environmental disasters in US history? I argue no, for a couple of reasons.

Zeroth: because of real, actual environmental disasters, some of which I’ll list below.

First: without the context of the global, continuous release of CO2, this would not affect the climate measurably. That is, by itself, it’s not a big deal.

Second, and related: there are more than 250 million cars in the US, so this is 0.2% of the GHG released by automobiles in the US annually. Maybe the automobile is the ongoing environmental disaster? (Here’s some context: the US is 15.6% of global GHG emissions, transport is 27% of that, and 35% of that is from passenger cars. By my calculations, that makes this incident about 0.003% of global GHG emissions.)
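For those keeping score, here is that chain of percentages spelled out, using the rough figures quoted above:

```python
# Back-of-the-envelope arithmetic using the percentages quoted above.
us_share_of_global      = 0.156                  # US share of global GHG emissions
transport_share_of_us   = 0.27                   # transport share of US emissions
cars_share_of_transport = 0.35                   # passenger cars' share of US transport
leak_share_of_us_cars   = 500_000 / 250_000_000  # ~0.2% of US automobiles

leak_share_of_global = (us_share_of_global * transport_share_of_us *
                        cars_share_of_transport * leak_share_of_us_cars)
print(f"{leak_share_of_global:.3%}")  # ~0.003% of global GHG emissions
```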

Let’s get back to some real environmental disasters. You know, the kind that kill people and animals and lay waste to the land and sea. Here is a list of just some pretty big man-made environmental disasters in the US:

Of course, opening up the competition to international disasters, including US-created ones, really expands the list, but you get the picture.

All this said, it’s really too bad this happened, and it will set California back on its climate goals. I was saddened to see that SoCal Gas could not cap this well quickly, or at least figure out a way to safely flare the leaking gas.

But it’s not the greatest US environmental disaster of all time. Not close.


Power corrupts, software ecosystems edition

I’ve written a lot of software in my day, but it turns out little of it has been used by anyone but myself and others in the organizations for which I’ve worked.

Recently, my job afforded me a small taste of what it’s like to publish software. I wrote a small utility, an extension to a popular web browser to solve a problem with a popular web application. The extension was basically a “monkey patch” of sorts over the app in question.

Now, in order to get it up on the “web store”, there were some hoops to jump through, including submission to the company for final approval. In the process, I signed up for the mailing list that programmers use to help them deal with hiccups in the publishing process.

As it turned out, my hoop-jumping wasn’t too hard. Because I was only publishing to my own organization, this extension ultimately did not need to be reviewed by the Great And Powerful Company. But I’ve continued to follow the mailing list, because it has turned out to be fascinating.

Every day, a developer or two or three, who has toiled for months or years to create something they think is worthwhile, sends a despondent message to the list: “Please help! My app has been rejected and I don’t know why!” Other afflicted developers chime in with advice about things they tried that worked or did not.

And the thing is, in many cases, nobody can help them. Because the decision was made without consultation with the developer. The developer has no access to the decision-maker whatsoever. No email, no phone call, no explanation. Was the app rejected because it violated the terms and conditions? Which ones?

The developers will have to guess, make some changes, and try their luck again. It’s got to be an infuriating process, and a real experience of powerlessness.

This is how software is distributed today — through a small number of authorities: Google, Apple, Amazon, etc. If you want to play, you do it by their rules. Even on PCs and Macs there seems to be a strong push to move software distribution onto the companies’ web stores. So far, it is still possible to put a random executable on your PC and run it — at least after clicking through a series of warnings. But will that be the case forever? We shall see.

The big companies have some good reasons for doing this. They can

  • assert quality control (of a sort. NB: plenty of crap apps in curated stores anyway)
  • help screen for security problems
  • screen out malware.

But they also have bad (that is, consumer-unfriendly) reasons to do this, like

  • monetizing other people’s work to a degree only possible in a monopoly situation
  • keeping out products that compete with their own
  • blocking apps that circumvent company-imposed limitations (like blocking frameworks and interpreters that might allow people to develop for the framework rather than the specific target OS)

All of those reasons are on top of the inevitable friction associated with dealing with a large, careless, monolithic organization and its bureaucrats, who might find it easier to reject your puny app than to take the time to understand what it’s doing and why it has merit.

Most sad to me is that the amazing freedom that computing used to have is being restricted. Most users would never use that freedom, but it was nice to have. If you could find a program, you could run it. If you could not find it, you could write it. And that is being chipped away at.


Apple Open Letter… eh

[ Updated below, but I’m leaving the text here as I originally wrote it. ]

 

By now, just about everyone has seen the open letter from Apple about device encryption and privacy. A lot of people are impressed that a company with so much to lose would stand up for its customers. Eh, maybe.

I have two somewhat conflicting thoughts on the whole matter:

1)

If Apple had designed security on the iPhone properly, it would not even be possible for them to do what the government is asking. In essence, the government plan is for Apple to develop a new version of iOS that they can “upgrade” the phone to, which would bypass (or make it easier to bypass) the security on the device. Of course, it should not be possible to upgrade the OS of a phone without the consent of a verified user, so this is a bug they baked in from the beginning — for their benefit, of course, not the government’s.

Essentially, though they have not yet written the “app” that takes advantage of this backdoor, they have already created it in a sense. The letter is therefore deceptive as written.

2)

The US government can get a warrant to search anything. Anything. Any. Thing. This is how it has been since the beginning of government. They can’t go out and do so without a warrant. They can’t (well, shouldn’t) be able to pursue wholesale data mining of every single person, but they can get a warrant to break any locked box and see what’s inside.

Why should data be different?

I think the most common argument around this subject is that the government cannot be trusted with such power. That is, yes, the government may have a reasonable right to access encrypted data in certain circumstances (like decrypting a known terrorist’s phone!), but the tools that allow that also give them the power to access data under less clear-cut circumstances as well.

The argument then falls into slippery-slope territory — a kind of argument that generally leaves me unimpressed. In fact, I would dismiss it entirely if the US government hadn’t already engaged in serious, widespread abuse of similar powers.

Nevertheless, I think the argument that the government should not have backdoors to people’s data is one of practical controls rather than fundamental rights to be free from search.

 

I have recommendations to address both thoughts:

  1. Apple, like all manufacturers, should implement security properly, so that neither they nor any other entity possesses a secret backdoor.
  2. Phones should have a known backdoor: a one-time password algorithm seeded at the time of manufacture, with the seed stored and managed by a third party, such as the EFF. Any attempt to access this password, whether granted or denied, would be logged and viewable as a public record. (A rough sketch of the idea follows below.)
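To make the second idea a bit more concrete, here is a minimal sketch of how such an escrowed, auditable one-time unlock code might work. Everything here is my own hypothetical illustration (the HMAC construction, the counter scheme, the EscrowAgent class); it is not anything Apple, the EFF, or anyone else actually implements.

```python
import hmac
import hashlib
from datetime import datetime, timezone

# Hypothetical: at manufacture, a per-device secret seed is generated, burned
# into the phone's secure hardware, and a copy is handed to an escrow agent
# (e.g., the EFF in the proposal above).

def unlock_code(seed: bytes, counter: int) -> str:
    """Derive a one-time unlock code from the device seed and an incrementing
    counter (HOTP-style); the device would accept each code only once."""
    digest = hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return digest.hex()[:12]

class EscrowAgent:
    """Holds device seeds and keeps a public, append-only log of every request."""

    def __init__(self):
        self._seeds = {}      # device_id -> seed (kept secret)
        self.public_log = []  # every request, granted or denied, is public record

    def register_device(self, device_id: str, seed: bytes) -> None:
        self._seeds[device_id] = seed

    def request_unlock(self, device_id: str, counter: int,
                       requester: str, warrant_id: str, granted: bool):
        # Whether to grant is decided out of band (e.g., by a court);
        # either way, the request itself becomes a public record.
        self.public_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "device": device_id,
            "requester": requester,
            "warrant": warrant_id,
            "granted": granted,
        })
        return unlock_code(self._seeds[device_id], counter) if granted else None
```

The point of the sketch is only that the backdoor is known, single-use, and leaves a public audit trail, rather than being a secret capability.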

I don’t have a plan for sealed and secret warrants.

 

[ Update 2/17 11:30 CA time ]

So, the Internet has gone further and explained a bit more about what Apple is talking about and what the government has asked for. It seems that, basically, the government wants to be able to brute-force the device, and wants Apple to make a few changes to make that possible:

  1. that the device won’t self-wipe after too many incorrect passwords
  2. that the device will not enforce extra time-delay between attempts
  3. that the attempts can be conducted electronically, via the port, rather than manually by the touch screen

I guess this is somehow different from Apple being able to hack their own devices, but to me, it’s still basically the same situation. They can update the OS and remove security features. That the final attack is brute force rather than a backdoor is hardly relevant.
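Some back-of-the-envelope numbers on why those three changes are the whole ballgame. The roughly 80 ms per guess is the key-derivation time Apple has cited for on-device passcode attempts; treat it, and the rest of this sketch, as an assumption.

```python
# Rough estimate of worst-case brute-force time once the retry delays and
# auto-wipe are removed and guesses can be submitted over the port.
SECONDS_PER_GUESS = 0.08  # ~80 ms per passcode attempt (assumed figure)

def worst_case_hours(digits: int) -> float:
    keyspace = 10 ** digits  # number of possible numeric passcodes
    return keyspace * SECONDS_PER_GUESS / 3600

for d in (4, 6, 8):
    print(f"{d}-digit passcode: ~{worst_case_hours(d):,.1f} hours worst case")
# 4 digits: ~0.2 hours; 6 digits: ~22 hours; 8 digits: ~2,200 hours.
```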

So I’m standing behind my assessment that Apple’s security is borked by design.

Privatization, aluminum sky-tube edition

This Congress still has some must-pass legislation to complete.

That includes an FAA reauthorization bill that contains a bunch of much-needed reforms for the agency. But they slipped in a doozy of a change: complete privatization of air traffic control. The plan is to create a separate, government-chartered, independent non-profit to run the whole show, with the intention, of course, that it will be run much more efficiently than the government ever could run it. I liked this quote from an unnamed conservative group in another Hill article:

“To us it is an axiomatic economic principle that user-funded, user-accountable entities are far more capable of delivering innovation and timely improvements in a cost-effective manner than government agencies.”

Axiomatic, eh? Well, I think I see your problem…

Anyway, it’s worth taking a step back to think about this proposal from a few different angles. First, let’s remember what the FAA does. Really, there are three main activities:

  1. Write regulations
  2. Allocate funds for aviation-related programs (AIP and similar) and
  3. Run ATC (Note: the FAA’s ATC arm is called “ATO,” but I’ll keep calling it ATC here)

Honestly, there has always been something of a conflict between the needs of air traffic control (with safety as the top priority and efficiency and cost as lower priorities) and the needs of the rest of the organization. It is a small miracle that the FAA’s ATC runs the safest airspace in the world. But miracle or not, it is a fact.

Furthermore, it is also true that ATC has been slow to modernize. This is for several reasons. First, yes, government bureaucracy, of course. But there are other reasons, such as having Congress habitually cut and delay funding for new systems (NB: when you are on temporary reauthorization, you don’t buy new things; programs do not progress. You just pay salaries.) Another problem is that the old systems, as cranky and obsolete as they are, work, and it’s just not a simple matter to replace a working system, tuned over decades, with new technology, particularly if you require no degradation in performance in the process.

So does this justify privatization? Will a private organization do better in this respect? Well, here are some ideas for thought, in no particular order:

  • a private organization will use fees to fund itself. This might be good, because they should be able to raise all the money they need, but then again, fees might grow without control. A private organization running ATC is essentially a monopoly. Government control is a monopoly, too — except that you can use the levers of democracy to manage it
  • a fee-run organization will be mostly responsive to whoever pays the fees. In this case, it would be the airlines, and among the airlines, the majors would have the most bargaining power. Is this the best outcome? How will small carriers fare when it comes time to assign landing slots or assign routes to flight plans? How will general aviation do under such a system? Will fees designed for B747s coming into KEWR snuff out the C172 traffic coming into KCDW?
  • Regulatory capture is a problem for any industry-regulating government entity. Does the appointment of an all-industry board of directors for a private organization that assumes most of those functions “solve” that problem by making total capture a fait accompli?
  • Will this new organization be self supporting or will it still depend on government money? How will it perform when there is an economic or industry slump? If there is a bankruptcy, who will foot the bill to keep the lights on?
  • When the inevitable budget shortfalls come, how will labor fare? Will they have to sacrifice their contracts in order to help save the company?
  • I don’t know, but I’m just guessing that nobody at the top of the FAA’s ATC today makes a million dollars a year. Will it be so under a private organization? If so, where might that money come from?
  • Does an emphasis on efficiency serve the flying public? For that matter, do the flying public’s interests diverge from those of the airlines, and if so, how are they represented in the new organization’s decision-making?

I honestly have not considered or studied this matter enough to have a strong opinion, but much of it makes the hairs on my neck stand up.

I’ll give the authors of this new bill credit for one thing: they managed to get the ATC union (NATCA) on board, essentially by promising continuity of their contracts and protections. I’m not sure if that comes with guarantees in perpetuity. One thing I noticed immediately is that current employees would be able to pay into the federal retirement system. New employees…

 

[ Full disclosure: I am a general aviation pilot and do not pay user fees to use ATC, and like it that way. I do understand that this is a subsidy I enjoy. ]