If you’ve spent any time in an energy economics class, you have probably seen a slide that shows the essential equivalency of a carbon tax and a cap-and-trade system, at least with respect to their ability to internalize externalities and fix a market failure. However, if you scratch the surface of the simple model used to claim this equivalency, you realize it only works if you have a good knowledge of the supply and demand curves for carbon emissions. (There are other non-equivalencies, too, like where the incidence of the costs falls.)
The equivalency idea is that for a given market clearing of carbon emissions and price, you can either set the price and get the emissions you want, or set the emissions and get the price. As it turns out, nobody really has a good grip on the nature of those curves, and we live in a world of uncertainty anyway, so there actually is a rather important difference. Which variable are we going to “fix,” and which one will “float,” carrying all the uncertainty: the price or the emissions quantity?
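The asymmetry can be seen in a toy model. This is my own minimal sketch, not anything from the post: assume firms’ marginal benefit of emitting is linear, MB(q) = a + shock − b·q, with an uncertain shock the regulator cannot observe in advance, and the regulator plans around the expected curve (shock = 0). All the numbers are made up for illustration.

```python
# Toy model: a tax fixes the price and lets quantity absorb the shock;
# a cap fixes the quantity and lets price absorb it.
a, b = 100.0, 2.0              # hypothetical intercept and slope of MB(q)
q_target = 30.0                # emissions the regulator wants
p_target = a - b * q_target    # price consistent with that target (40)

for shock in (-20.0, 0.0, 20.0):
    # Carbon tax: price fixed at p_target; realized quantity floats.
    q_tax = (a + shock - p_target) / b
    # Cap-and-trade: quantity fixed at q_target; realized price floats.
    p_cap = a + shock - b * q_target
    print(f"shock={shock:+.0f}: tax -> q={q_tax:.1f}, cap -> p={p_cap:.1f}")
```

With a zero shock the two instruments coincide, which is the textbook slide. With any nonzero shock, the tax delivers the wrong quantity and the cap delivers the wrong price; you only get to choose where the uncertainty lands.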
I bring this up because today I read a nice blog post by Severin Borenstein, which I will reduce to its essential conclusion: a carbon tax is much better than cap-and-trade. He brings up the point above, stating that businesses are just much better able to adapt when they know what the price is going to be, but there are other advantages to a tax as well.
First, administratively, it is much easier to set a tax than it is to legislate an active and vibrant market into existence. If you’ve lived in the world of public policy, I hope you know that Administration Matters.
Furthermore, legislatures are not fast, closed-loop control systems. They can’t quickly adapt their rules on the fly as new information comes in, and sometimes political windows close entirely, making it impossible to make corrections. As a result, adjusting caps in a timely manner is, at best, difficult. This is a fundamentally harder problem than getting people to agree, a priori, on an acceptable price — one with more than a pinch of pain, but not enough to kill the patient.
So, how did we end up with cap-and-trade rather than a carbon tax? Well, certainly a big reason is the deathly allergy legislatures have to the word “tax.” Even worse: “new tax.” Perhaps that was the show-stopper right there. But it certainly did not help that we had economists (I suspect Severin was not among them) providing the conventional wisdom that a carbon tax and a cap-and-trade system are essentially interchangeable. That is only true if a wise, active, and responsive regulator, free to pursue an agreed objective, is at the controls. So, pretty much never.
Short post here. I notice people are writing about self-driving cars a lot. There is a lot of excitement out there about our driverless future.
I have a few thoughts, to expand on at a later day:
Apparently a lot of economic work on driving suggests that a major externality of driving is congestion. Simply put, your being on the road slows down other people’s trips and causes them to burn more gas. It’s an externality because it is a cost of driving that you cause but don’t pay.
Now, people are projecting that a future society of driverless cars will make driving cheaper by 1) eliminating drivers (duh) and 2) getting more utilization out of cars. That is, mostly, our cars sit in parking spaces, but in a driverless world, people might not own cars so much anymore, but rent them by the trip. Such cars would be much better utilized and, in theory, cheaper on a per-trip basis.
So, if I understand my micro econ at all, people will use cars more because they’ll be cheaper. All else equal, that should increase congestion, since in our model, congestion is an externality. Et voila, a bad outcome.
But, you say, driverless cars will operate more efficiently, and make more efficient use of the roadways, and so they generate less congestion than stupid, lazy, dangerous, unpredictable human drivers. This may be so, but I will caution with a couple of ideas. First, how much less congestion will a driverless trip cause than a user-operated one? 75% as much? Half? Is this enough to offset the effect mentioned above? Maybe.
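The offsetting effects can be put into a back-of-the-envelope sketch. All of these numbers are my own assumptions for illustration (the post deliberately leaves them open): a price elasticity of trip demand, a cost reduction from driverless operation, and a per-trip congestion factor for automated vehicles.

```python
# Back-of-the-envelope: cheaper trips -> more trips -> more congestion,
# possibly offset by driverless cars congesting less per trip.
elasticity = -0.5            # assumed price elasticity of trip demand
cost_drop = 0.40             # assume driverless trips cost 40% less

trip_growth = elasticity * (-cost_drop)      # +20% more trips
congestion_per_trip = 0.75   # assume each automated trip causes 75% of
                             # the congestion of a human-driven one
net_congestion = (1 + trip_growth) * congestion_per_trip

print(f"trips: {1 + trip_growth:.2f}x, congestion: {net_congestion:.2f}x")
```

With these particular assumptions, total congestion falls slightly (1.20 × 0.75 = 0.90x). But nudge the elasticity or the per-trip factor a little and it goes the other way, which is exactly the “Maybe” above.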
But there is something else that concerns me: the difference between soft- and hard-limits.
Congestion, as we experience it today, seems to come on gradually as traffic approaches certain limits. You’ve got cars on the freeway; you add cars; things get slower. Eventually, things somewhat suddenly get a lot slower, but even then it’s only at certain times of the day, in certain weather, etc.
Now enter driverless cars that utilize capacity much more effectively. Huzzah! More cars on the road getting where they want, faster. What worries me is that what is really happening is not that the limits are raised, but that we are operating the system much closer to the existing, real limits. Furthermore, now that automation is sucking all the marrow from the road bone, the limits become hard walls, not gradual at all.
So, imagine traffic is flowing smoothly until a malfunction causes an accident, or a tire blows out, or there is a foreign object in the road — and suddenly the driverless cars sense the problem, resulting in a full-scale insta-jam, perhaps of epic proportions, in theory, locking up an entire city nearly instantaneously. Everyone is safely stopped, but stuck.
And even scarier than that is the notion that the programmers did not anticipate such a problem, and the car software is not smart enough to untangle it. Human drivers, for example, might, in an unusual situation, use shoulders or make illegal u-turns in order to extricate themselves from a serious problem. That’d be unacceptable in a normal situation, but perhaps the right move in an abnormal one. Have you ever had a cop at the scene of an accident wave at you to do something weird? I have.
Will self-driving cars be able to improvise? This is an AI problem well beyond that of “merely” driving.
Speaking of capacity and efficiency, I’ll be very interested to see how we make trade-offs of these versus safety. I do not think technology will make these trade-offs go away at all. Moving faster, closer will still be more dangerous than going slowly far apart. And these are the essential ingredients in better road capacity utilization.
What will be different will be how and when such decisions are made. In humans, the decision is made implicitly by the driver moment by moment. It depends on training, disposition, weather, light, fatigue, even mood. You might start out a trip cautiously and drive more recklessly later, like when you’re trying to eat fast food in your car. The track record for humans is rather poor, so I suspect that driverless cars will do much better overall.
But someone will still have to decide what the right balance of safety and efficiency is, and that decision might be taken out of the hands of passengers. This could go different ways. In a liability-driven culture, we may end up with a system that is safer but maybe less efficient than what we have now (call it “little old lady mode”), or we could end up with decisions by others forcing us to take on more risk than we’d prefer if we want to use the road system.
I recently read in the June IEEE Spectrum (no link, print version only) that some people are suggesting that driverless cars will be a good justification for the dismantlement of public transit. Wow, that is a bad idea of epic proportions. If, in the first half of the 21st century, the world not only continues to embrace car culture, but doubles down to the exclusion of other means of mobility, I’m going to be ill.
* * *
That was a bit more than I had intended to write. Anyway, one other thought is that driverless cars may be farther off than we thought. In a recent talk, Chris Urmson, the director of the Google car project, explained that the driverless cars of our imaginations — the fully autonomous, all-conditions, all-mission cars — may be 30 years off or more. What will come sooner are a succession of technologies that will reduce driver workload.
So, I suspect we’ll have plenty of time to think about this. Moreover, the nearly 7% of our workforce that works in transportation will have some time to plan.
A couple of years ago Google announced an electrical engineering contest with a $1M prize. The goal was to build the most compact DC-to-AC power inverter that could meet certain requirements, namely 2 kVA power output at 240 Vac, 60 Hz, from a 450 V DC source with a 10 Ω impedance. The inverter had to withstand certain ambient conditions, meet reliability targets, and comply with FCC interference requirements.
Fast forward a few years, and the results are in. Several finalists met the design criteria, and the grand prize winner exceeded the energy density requirements by more than 3x!
First, congrats to the “Red Electrical Devils”! I wish I were smart enough to have been able to participate, but my knowledge of power electronics is pretty hands-off, unless you are impressed by using TRIACs to control holiday lighting. Here’s the IEEE on what they thought it would take to win.
Aside from general gEEkiness, two things interested me about this contest. First, from an econ perspective, contests are just a fascinating way to spur R&D. Would you be able to get entrants, given the cost of participation and the likelihood of winning the grand prize? Answer: yes. This seems to be a reliable outcome if the goal is interesting enough to the right body of would-be participants.
The second thing that I found fascinating was the goal: power density. I think most people understand the goals of efficiency, but is it important that power inverters be small? The PV inverter on the side of your house, also probably around 2 kW, is maybe 20x as big as these. Is that bad? How much is it worth to shrink such an inverter? (Now, it is true that if you want to achieve power density, you must push on efficiency quite a bit, as every watt of energy lost to heat needs to be dissipated somehow, and that gets harder and harder as the device gets smaller. But in this case, though the efficiencies achieved were excellent, they were not cutting edge, and the teams instead pursued extremely clever cooling approaches.)
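The coupling between density and cooling is easy to see with rough numbers. The 2 kW output is from the contest spec; the efficiencies and enclosure volumes below are my own illustrative assumptions, not the contest results.

```python
# Rough sketch: heat load and heat density for a 2 kW inverter at
# assumed efficiencies and assumed enclosure volumes.
p_out = 2000.0                      # W, contest output requirement

for eff in (0.95, 0.98):            # hypothetical conversion efficiencies
    p_loss = p_out / eff - p_out    # watts of heat to get rid of
    for volume_in3 in (40.0, 14.0): # hypothetical enclosure volumes
        heat_density = p_loss / volume_in3
        print(f"eff={eff:.0%}, {volume_in3:.0f} in^3: "
              f"{p_loss:.0f} W of heat, {heat_density:.2f} W/in^3 to remove")
```

At 95% efficiency the box must shed about 105 W; shrinking the enclosure from 40 in³ to 14 in³ nearly triples the heat flux the cooling system must handle, which is why clever cooling mattered more than squeezing out the last point of efficiency.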
I wonder what target market Google has in mind for these high power density inverters. Cars perhaps? In that case, density is more important than a fixed PV inverter, but still seemingly not critical to this extreme. Specific density rather than volumetric seems like it would be more important. Maybe Google never had a target in mind. For sure, there was no big reveal with the winner announcement. Maybe Google just thought that this goal was the most likely to generate innovation in this space overall, without a particular end use in mind at all — it’s certainly true that power electronics are a huge enabling piece of our renewable energy future, and perhaps it’s not getting the share of attention it deserves.
I’m not the first, though, to wonder what this contest was “really about.” I did not have to scroll far down the comments to see one from Slobodan Ćuk, a rather famous power electronics researcher and inventor of the Ćuk converter.
Anyway, an interesting mini-mystery, but a cool achievement regardless.
1. At least for gasoline, they are measuring the externalities of driving, not of gasoline. That’s bad news for EV drivers intent on saving the world one mile at a time, because most of the associated externalities are still present.
2. The estimated cost of carbon / GHG is small compared to the other external costs like accidents and congestion. This is a common result among economic analyses of carbon costs, and I often wonder about it. If you use a value associated with the marginal cost of abatement, I can see it being quite low. But that’s in the current context of nobody abating much of anything. I wonder what it would be if you projected the marginal cost of 80% or 90% abatement. That is, if we were actually to solve the climate problem.
Or, another way of thinking about it: if GHG emissions are potentially going to make the earth uninhabitable, it seems like maybe they’re underestimating the external cost of carbon. Because there is limited cost data available for “the end of the world as we know it,” economists can be forgiven for working with the data they have, but we, the careful readers, should bear in mind the limits.
These are not from top-tier news sources, but they’re getting attention all the same. Which is too bad, because they’re all false by any reasonable measure. Worse, all of the above seem to deliberately misquote from a new paper published in Science. The paper does say, however:
This CH4 release is the second-largest of its kind recorded in the U.S., exceeded only by the 6 billion SCF of natural gas released in the collapse of an underground storage facility in Moss Bluff, TX in 2004, and greatly surpassing the 0.1 billion SCF of natural gas leaked from an underground storage facility near Hutchinson, KS in 2001 (25). Aliso Canyon will have by far the largest climate impact, however, as an explosion and subsequent fire during the Moss Bluff release combusted most of the leaked CH4, immediately forming CO2.
Make no mistake about it: this is a big release of methane, equal to the annual GHG output of 500,000 automobiles.
But does that make it one of the largest environmental disasters in US history? I argue no, for a couple of reasons.
Zeroth: because of real, actual environmental disasters, some of which I’ll list below.
First: without the context of the global, continuous release of CO2, this would not affect the climate measurably. That is, by itself, it’s not a big deal.
Let’s get back to some real environmental disasters. You know, the kind that kill people and animals and lay waste to the land and sea? Here is a list of just some of the pretty big man-made environmental disasters in the US:
Of course, opening up the competition to international disasters, including US-created ones, really expands the list, but you get the picture.
All this said, it’s really too bad this happened, and it will set California back on its climate goals. I was saddened to see that SoCal Gas could not cap this well quickly, or at least figure out a way to safely flare the leaking gas.
But it’s not the greatest US environmental disaster of all time. Not close.
Having worked for an Internet of Things company, I have more than a little sympathy for Nest. It’s hard to make reliable connected things. In fact, it might be impossible — at least using today’s prevailing techniques and tools, and subject to today’s prevailing expectations for features, development cost, and time to market.
First, it should go without saying that a connected thermostat is millions or even billions of times as complex as the old, bimetallic strip that it is often replacing. You are literally replacing a single moving part that doesn’t even wear out with a complex arrangement of chips, sensors, batteries, and relays, and then you are layering on software: an operating system, communications protocols, encryption, a user interface, etc. The probability that this witch’s brew can be more reliable than a mechanical thermostat: approximately zero.
But there is also something else at work that lessens my sympathy: culture. IoT comes from the Internet tech world’s attempt to reach into physical devices. The results can be exciting, but we should stop for a moment to consider the culture of the Internet. This is the culture of “go fast and break things.” Are these the people you want building devices that have physical implications in your life?
My personal experience with Internet-based services is that they work most of the time. But they change on their own schedule. Features and APIs come and go. Sometimes your Internet connection goes out. Sometimes your device becomes unresponsive for no obvious reason, or needs to be rebooted. Sometimes websites go down for maintenance at an inconvenient time. Even when the app is working normally, the experience can vary. Sometimes it’s fast, sometimes slow. Keypresses disappear into the ether, etc.
My experience building Internet-based services is even more sobering. Your modern, complex web or mobile app is made up of an agglomeration of sub-services, all interacting asynchronously through REST APIs behind the scenes. Sometimes those sub-services use other sub-services in their implementation, and you don’t even have a way of knowing which ones. Each of those links can fail for many reasons, and you must code very defensively to gracefully handle such failures. Or you can do what most apps do — punt. That’s fine for chat, but you’ll be sorely disappointed if your sprinkler kills your garden, or even if your alarm clock fails to wake you up before an important meeting.
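Here is a minimal sketch of what “coding defensively” looks like in practice: a sub-service call wrapped in a timeout, limited retries with backoff, and a safe fallback so one flaky dependency doesn’t brick the device. The URL, fallback payload, and thermostat scenario are all hypothetical.

```python
# Defensive wrapper around a flaky sub-service call: bounded timeout,
# limited retries with exponential backoff, and a graceful fallback.
import time
import urllib.error
import urllib.request

def fetch_with_fallback(url, fallback, retries=3, timeout=2.0):
    """Try a sub-service a few times; return a safe default on failure."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            time.sleep(2 ** attempt)   # back off before retrying
    return fallback                     # degrade gracefully, don't crash

# e.g. a thermostat might fall back to a sane default setpoint
# (hypothetical endpoint and payload):
# schedule = fetch_with_fallback("https://api.example.com/schedule",
#                                fallback=b'{"target_f": 68}')
```

The punting alternative, which is to let the exception propagate, is exactly what turns a cloud hiccup into a cold house.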
I liked Vox.com when it came out. The card format is cool, and the detailed yet bare-bones explainers suit my approach to many aspects of news: tell me what I need to know to understand this situation.
At first, I found the decision not to host comments interesting, but not alarming. After all, everyone knows that the Internet comments section is a bubbling cesspool, right?
But I’ve been reading Vox articles for a while now, and I’ve noticed too many cases where they were just blowing it: incorrect or out-of-context facts, telling one half of an argument, or missing a crucial detail. And these are the kinds of things where a letter to the editor, or a stream of informed comments, can really make an article much more useful. I notice this particularly when Vox writes about energy, a topic I have studied in depth.
Here’s an example of the sort of thing I’m talking about. In “Ignore the haters: electric cars really are greener,” they cite a new Union of Concerned Scientists report at length. But they never mention that UCS is primarily an advocacy organization, not a research one. Or that, for example, Argonne National Labs has been publishing similar research for years, with similar but slightly more muted results. Or even that the summary in the UCS report compares EVs against normal gasoline cars, which is hardly a like-vs-like comparison, given that gasoline vehicles execute a range of missions that EVs currently cannot. As it turns out, a PHEV or even a regular hybrid does in fact outperform an EV on CO2/mile in many parts of the country, and the data in the UCS report show it.

There are additional embedded assumptions, too, like that the electric grid will continue to get greener. That’s probably true, but maybe not: the greening of the grid could accelerate, or it could start hitting hurdles that slow it down. At the same time, gasoline cars could get better or worse. Hybrids might become the norm, lower-carbon fuels could become mainstream, etc. Finally, EVs cost a lot more than gas cars. For the same money, could you reduce your carbon intensity more effectively than by buying an EV? (Answer: yes.) In the end, it’s hardly journalistic to lump everyone who has questions about the superiority of EVs together as haters.
Getting back to Vox, it’s not just the bias that I don’t like. After all, bias is as much a part of journalism as organic chemistry is part of life. It’s that there’s no ombudsman and no straightforward place to look for corrections. (They integrate corrections directly into cards, usually by changing the text without any notation.) The whole site is a Read Only Memory. In Ezra Klein’s own words on leaving the WP to found Vox: “we were held back, not just by the technology, but by the culture of journalism.”
Indeed. So, this is the improved technology and improved culture? It’s seriously starting to turn me off. Anybody else?
About a decade ago, Alex Farrell, a professor in the UC Berkeley Energy and Resources Group, had a series of papers unpopular with environmentalists. They showed that, essentially, there was no peak oil. In fact, at prevailing prices of the time, one could profitably extract a supply of petroleum to last hundreds of years at current rates. The supply would come not just from traditional sources, but from Canadian bitumen and coal-to-liquids conversion. He also pointed out that this is a bad thing, because those alternative sources of petroleum products have ridiculously high carbon intensities. That is, they’d be much, much dirtier than regular oil.
Sadly, Professor Farrell did not live to see the story of peak oil fade from most environmentalists’ consciousness, nor to see the price of oil drop so dramatically. And, in fact, at today’s prevailing prices, influenced by fracking and cheap natural gas (which is not a short-term substitute for oil but could be a long-term one), we just don’t need oil from the Canadian tar sands. There’s not really a strong economic case for it, and the environmental case is, well, awful. I guess there is still a story to be told about “continental oil independence,” but, well, that’s only physical independence. Unless we plan on declaring a state of emergency and militarily controlling oil transfer, oil is still a worldwide commodity, and if there were some kind of oil crunch, we’d take the economic gut punch all the same.
There’s a new story on BNEF explaining how the cost of wind power is now less than any other resource in the UK and Germany. That alone is confusing, because, depending on your definition of cost, that was already the case. In fact, it’s always the case: on the margin, the cost of wind energy is $0. It’s the machine you’re paying for.
But the article is even more puzzling because it then goes on to explain that what is happening is that renewable energy generation is displacing the generation from fossil units, so that their capacity factor (that is, utilization) goes down. This makes their fixed costs a larger percentage of their total costs and pushes their all-in €/MWh above wind’s all-in €/MWh.
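The arithmetic behind that claim is straightforward, and a toy example makes it concrete. All the numbers here are my own illustrative assumptions, not figures from the BNEF piece: a fossil plant with fixed annual costs spread over fewer and fewer megawatt-hours as renewables displace its output.

```python
# How a shrinking capacity factor inflates a fossil plant's all-in cost.
# All inputs are hypothetical round numbers for illustration.
fixed_cost = 150_000_000.0   # EUR/yr: capital recovery, staff, maintenance
variable_cost = 40.0         # EUR/MWh: fuel plus variable O&M
capacity_mw = 1000.0
hours_per_year = 8760

for cf in (0.80, 0.50, 0.30):            # capacity factor being eroded
    mwh = capacity_mw * hours_per_year * cf
    all_in = variable_cost + fixed_cost / mwh
    print(f"capacity factor {cf:.0%}: all-in cost {all_in:.1f} EUR/MWh")
```

The plant’s costs haven’t changed at all; the same fixed bill is just spread over less output, so its all-in €/MWh climbs even as the system still depends on it for reliability.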
Okay, that makes sense as far as it goes, but there’s one complication: they still need the fossil units to make the system work. That is, as solar and wind generate more energy, you may use your coal machine less, but you can’t operate the electric power system without the coal machine. Cheap, massive, and ubiquitous storage, of course could change this, but for now, we’re not there.
So that raises the question: exactly what turning point have we reached? Building more solar and wind exacerbates this situation (I will not call it a problem), so I’m pretty unclear on what this piece is saying.
I occupy a very lonely position on renewables. I want them, I think they’re important, and I think we need much more. They can be part of solving our huge climate problem. But, unlike most boosters, I don’t think renewables are a free lunch — that is, I don’t think they will be, all-in, cheaper than our current system.