Designing a simple AC power meter

I’ve been interested in energy efficiency for a long time. A big part of understanding energy efficiency is understanding how devices use power, and when. Because of that, I’ve also long wanted an AC power meter. A cheap one. A decent one. A small one.

What is an AC Power Meter?

An AC power meter is an instrument that can tell you how much juice a plug-load device is using. The problem is that measuring AC power is a bit tricky. This is because when dealing with AC, the current waveform that a device draws does not have to be in phase with the voltage. A load can be inductive (lagging) or capacitive (leading), and the result is that the apparent power (Volts RMS * Amps RMS) will be higher than the real power. It gets worse, though. Nonlinear loads, like switch-mode power supplies (in just about everything these days) can have current waveforms that literally have no relation to the voltage waveform.

As a result, the way modern AC power meters work is to sample the instantaneous current and voltage many times a second (in fact, many times per 60 Hz AC cycle), so that the true power can be computed as the “scalar product” of the voltage and current time series. From that calculation, you can get goodies like:

  • Real power (watts)
  • Apparent Power (VA)
  • Imaginary/Reactive Power (VAR)
  • phase angle
  • power factor
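
To make the “scalar product” concrete, here's a small Python sketch (my own illustration, not the chip's firmware) that derives all five quantities from simultaneous voltage and current samples, using a synthetic 60 Hz waveform with a 30° lagging current:

```python
import math

def power_from_samples(v, i):
    """Derive AC power quantities from simultaneous voltage/current samples."""
    n = len(v)
    real = sum(vk * ik for vk, ik in zip(v, i)) / n           # "scalar product" -> watts
    vrms = math.sqrt(sum(vk * vk for vk in v) / n)
    irms = math.sqrt(sum(ik * ik for ik in i) / n)
    apparent = vrms * irms                                    # volt-amps
    reactive = math.sqrt(max(apparent ** 2 - real ** 2, 0.0))  # VAR
    pf = real / apparent if apparent else 0.0                 # power factor
    angle = math.degrees(math.acos(max(-1.0, min(1.0, pf))))  # phase angle
    return real, apparent, reactive, pf, angle

# Synthetic check: 120 Vrms, 1 Arms, current lagging by 30 degrees,
# sampled 128 times per 60 Hz cycle for one second.
fs = 60 * 128
t = [k / fs for k in range(fs)]
v = [120 * math.sqrt(2) * math.sin(2 * math.pi * 60 * tk) for tk in t]
i = [math.sqrt(2) * math.sin(2 * math.pi * 60 * tk - math.radians(30)) for tk in t]
real, apparent, reactive, pf, angle = power_from_samples(v, i)
# real ~ 103.9 W, apparent ~ 120 VA, reactive ~ 60 VAR, pf ~ 0.866, angle ~ 30 deg
```

Note that for the lagging load, real power (103.9 W) is indeed lower than apparent power (120 VA), which is the whole reason naive V×A measurement misleads.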

Instruments that do this well are expensive. For example, this Yokogawa WT3000E is sex on a stick as far as power meters go, but will set you back more than $10k, I think. I used one when I was at Google, and it was a sweet ride, for sure.

This is on my Christmas list, in case you’re wondering what to get me.
Cheap but effective.

On the other hand, you can get a Kill-A-Watt for $40. This is cheap and functional, but it is not capable of logging data, and it is totally uncalibrated. They claim 0.2% accuracy; my experience with them says otherwise.

Over the years I’ve built a couple of attempts at a power meter. One used a current transformer and a voltage transformer going into the ADCs of an Arduino. It sort of worked, but was a mess. Another time, I built a device that used Hall-effect sensors to measure current, but I didn’t measure voltage at all. That one couldn’t really measure power, but you could get an indicative sense from it.

Let’s Do This – Hardware Design

So, a few months ago, I resolved to build a proper power meter. I did a search for chips that could help me out, and lo and behold, I came across several “analog front end” chips that have all the circuitry you need to measure AC power. They do the analog to digital conversion, all the math, and give you a simple digital interface where you can query various parameters.

I settled on the Atmel ATM90E26. It has reasonable basic accuracy of 0.1%, built on 16-bit analog-to-digital converters, and best of all, it costs about $2 in quantity 1. Furthermore, Atmel has an app note with a sample design, and it seemed simple enough.

So I started designing. Unfortunately, I had various conflicting goals in mind:

  • Safety: like the McDLT, I want the hot to stay hot and the cool to stay cool. This means total isolation between the measurement side and the control side.
  • Small, so it can be put inside a small appliance.
  • A display, so I could read power data directly
  • An interface to a Raspberry Pi so that I could log to a μSD card, or send it via WiFi to the Internet
  • A microprocessor of its own to drive the display and do any real-time processing needed
  • An internal AC to DC power supply so that the device itself could be powered from a single AC connection.
  • Ability to measure current by way of sense resistor, current transformer, or some combination of both.
  • Ability to get reasonably accurate measurement of very small loads (like < 1 W) so that I can make measurements concerning vampire power. One way to do this while keeping precision is to build a unit with a high-value shunt resistor, which I can do if I’m rolling my own.

Some of these desires conflict with each other, and I made several iterations on the computer before making my first board. I ended up jettisoning the LCD and building the thing around the RPi Zero. This was primarily to make the board compact. If I wanted a display I could plug one into the Pi! I also initially went with an outboard AC/DC converter mostly because I just didn’t want to fuss with it.

Power and Isolation

In a device that’s intended to live inside a plastic box, I probably wouldn’t bother with isolation at all. The whole circuit could “ride the mains.” But because this is supposed to be a tinker-friendly board, I wanted to be able to touch the RPi without dying. Usually, this is done with something simple like optoisolators to provide galvanic isolation for data signals. But this board presented another challenge. The power measurement chip needs to be exposed to the AC (duh, so it can measure it) but it also needs DC power itself to operate.

How to power the chip and maintain isolation? This could be done with a simple “capacitive dropper” supply (simple, inefficient, sketchy), or with an isolated DC-to-DC supply (pricey and/or fussy), but when I added up the optoisolators I’d need plus the DC-DC supply, I realized that a special-purpose chip would be nearly as cost-effective and a lot less fuss. So I chose the AduM5411, a nifty part from Analog Devices that can forward three digital signals in one direction, one digital signal in the other direction, and provide power across the isolation barrier. And it was only like $6.

The only problem is, the AduM5411 is so good it is pure unobtainium. I’m not even sure the part really exists in the wild. So I switched to the Texas Instruments ISOW7841, a very similar part in all respects, except that it costs $10. This is the most expensive part in my BOM by far. But I have to admit, it is super easy to use and works perfectly. (As an aside, these chips do not work on optical principles at all, but on tiny little transformers driven at high frequency. Kinda cool.)

Okay, so the AC/hot part of the board is powered from the DC side of the board. But how is the DC side powered? In the first iteration, I did it from a USB connector via a 5V wall-wart.

Current Measurement

In order to measure power, the measurement chip needs to be able to measure the voltage and current, simultaneously and separately. Voltage is pretty easy: just use a resistor network to scale it down so you don’t blow up the ADC. Current can be measured one of two ways. One is to measure the voltage drop across a calibrated resistor. The resistor obviously needs to be able to handle a lot of current, and it has to be a small value to keep the voltage drop reasonable, or else the device you’re measuring will be unhappy. The current sense resistor should also have a low temperature coefficient, so that its value doesn’t change much as it warms up.
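
A quick back-of-the-envelope sketch (my numbers, not the actual design values) shows the shunt tradeoff:

```python
def shunt_tradeoff(r_shunt_ohms, i_rms_amps):
    """Voltage stolen from the load and heat the shunt resistor must shed."""
    v_drop = i_rms_amps * r_shunt_ohms           # V = I * R
    p_diss = i_rms_amps ** 2 * r_shunt_ohms      # P = I^2 * R
    return v_drop, p_diss

# A 1 milliohm shunt at 15 A RMS (a hefty plug load):
v, p = shunt_tradeoff(0.001, 15)   # 15 mV drop, 0.225 W of heat
# The same current through a 0.1 ohm shunt would drop 1.5 V and burn
# 22.5 W: a great signal for sub-1 W vampire loads, hopeless for a toaster.
```

This is exactly why a high-value shunt helps with tiny loads and why switchable ranges are tempting.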

The other approach is to use a current transformer. CTs are nice in that they provide isolation, but they are large and cost a few bucks compared to the few pennies for the resistors. I did an iteration with space for a CT on the board, but later punted on that. I did leave a place where an external CT can be plugged into the board. I may never use it, though.

The Microcontroller

In this design, an Atmega 328p microcontroller sits between the Pi and the ATM90E26. It is connected to the ATM90E26 by a SPI bus and to the Pi by an I2C bus. Originally, I had thought the Atmega would have to poll the power chip frequently and integrate the total energy, but that was because I did not read the ATM90E26 data sheet closely enough. It turns out the chip does all the math itself, including integrating energy, so the processor was just sitting there doing conversion between I2C and SPI. I honestly could not come up with anything else useful for the Atmega to do.

This is the board after I scavenged it for some of the more expensive parts.
The first design I had fabbed.

Anyway, the good news was that this design worked straight away, hardware-wise. But it turned out to be more work than I wanted to get the Atmega to do the I2C/SPI forwarding reliably. And I didn’t even need it.

Ditch the processor!

So, using the same PCB, I made some simple hacks to bring the SPI bus pins from the measurement chip to the RPi header. I also had to add a voltage divider so that the 5V MISO signal would not destroy the not-5V-tolerant MISO pin on the RPi. The hacked board looked like this.

Look ma, no intermediary microprocessor
Board on its side, so you can see how the RPi rides along.

The RPi communicates with the power measurement chip through the TI isolation chip, and it does so reliably and much faster than I2C, so I was glad to realize that I didn’t need that intermediary processor in the mix at all.
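
For the curious, talking to the chip from Python is only a few lines. Here's a hedged sketch of the SPI framing as I understand it from the datasheet (bit 7 of the address byte selects read, and data comes back MSB first); the helper names are mine, and the spidev wiring is shown only as a comment since it only runs on the Pi:

```python
def read_frame(addr):
    """Build the 3-byte SPI frame that reads a 16-bit register.
    Per my reading of the datasheet, bit 7 of the first byte selects
    read (1) vs write (0); the next two bytes clock the data out."""
    return [0x80 | (addr & 0x7F), 0x00, 0x00]

def decode(resp):
    """Combine the two data bytes (MSB first) into a 16-bit value."""
    return (resp[1] << 8) | resp[2]

# On the Pi this gets wired to spidev, roughly like so (untested sketch):
#   import spidev
#   spi = spidev.SpiDev()
#   spi.open(0, 0)              # bus 0, CE0
#   spi.max_speed_hz = 200_000
#   urms_raw = decode(spi.xfer2(read_frame(0x49)))  # 0x49: Urms, I believe
```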

This board could be pressed into service as it was, but it has a couple of issues:

  1. First, the orientation of the Pi relative to the board saves a bit of space, but does so at the cost of having all the Pi connectors face down towards the “hot” side of the board.
  2. Second, powering the DC side of the board from the USB jack proved more annoying to me than I had anticipated. It really just bugs me to have to plug an AC measuring device into a separate wall-wart. So I knew I’d design in a PSU. I chose a MeanWell IRM-05-05 PCB mount 5V PSU.
  3. Third, this board lacked cut-outs to provide extra creepage for high voltage parts of the board that would be (or could be) at high relative voltage from each other. I think the distances were probably adequate, and it’s not like I was going for a UL listing or anything, but I still wanted slots.

So, I redesigned the board and waited a month for them to arrive from China even though I paid for expedited shipping. The new board looks like this. In some of the space where the processor had gone, I added footprints for LEDs, in case I want to blink some lights.

Looking much better. Notice the Pi has all its ports pointing away from the AC. Also, the Pi is on top rather than underneath.
Better layout, no processor

I really need to clean off that flux.

So that is just about it for the hardware.

One last spin.

As it turns out, I will spin this board one more time. The main reason is that I want to expand it a bit and move the mounting holes to match up with a suitable enclosure. I will probably use a Hammond RP-1127 — it’s exactly the right width for an RPi.

The other reason is that someone showed me how to redraw the pads for the current sense resistors to make a “quasi” Kelvin connection.

The way the current measurement works is to measure the voltage drop across the sense resistor. This resistor is reasonably accurate and temperature stable, but the solder and copper traces leading to it are not, and current flowing in them causes a voltage drop there, too. That drop will be small, but the drop across the 0.001 Ω sense resistor is small, too! So, to get the most accurate measurement, I try to measure the voltage drop exactly at the resistor pads, preferably with connections to the resistor that carry no current. This is what Kelvin connections are.

In the case below, I achieve something like this by splitting the pads for the resistor into three pieces. The top and bottom conduct test current across the resistor, and a small, isolated sliver in the middle measures the voltage. There is no current in that sliver, so it should have no voltage drop.

The result should be better accuracy and thermal stability of the current measurements. The Kelvin connection for the current measurement looks like this. The sense resistors go between the right terminal of the input fuse and the tab marked “load.” The resistor landing pads are split, and a separate section, in which no current will flow, is for the voltage measurement.

Fake four-terminal resistor

Calibration

An instrument is only as good as its calibration, and I needed a way to calibrate this one. Unfortunately, I do not have the equipment to do it properly. Such equipment might be a programmable AC source, a high-accuracy AC load, and perhaps a bench-quality power meter. What I do have access to are reasonably accurate DMMs (Fluke 87V and HP 34401A). The former is actually in cal; the latter, well, a million years ago, I’m sure.

I calibrated the voltage by hooking the unit up to the AC mains in my house, measuring the voltage at the terminals, and adjusting a register value until the reported voltage matched my meter. For current, I put a largish, mostly non-inductive load on the system (a Toast-R-Oven), measured the current with my DMM, and adjusted the register until the current matched.
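
In code, that register-twiddling amounts to a proportional correction. Here's a hedged sketch of the idea; the function name and the example starting value are mine, and it assumes (as the datasheet suggests, to my reading) that the reported value is linear in the gain register:

```python
def corrected_gain(old_gain, reported, reference):
    """Scale a gain register so the chip's reported value matches the
    reference meter, assuming the reading is linear in the gain value."""
    new_gain = round(old_gain * reference / reported)
    if not 0 <= new_gain <= 0xFFFF:
        raise ValueError("new gain exceeds 16 bits; check wiring and scaling")
    return new_gain

# Example: starting from an arbitrary gain of 0x6720, the chip reports
# 118.2 V while the Fluke reads 120.1 V at the terminals:
new_ugain = corrected_gain(0x6720, reported=118.2, reference=120.1)
```

One pass of this got me close; a second pass converges because the relationship really is (close to) linear.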

Calibrating power is harder, and I really don’t have the right equipment to do it properly. The ATM90E26 also lets you set up an energy calibration separate from the voltage and current measurements, and I think their intention is that this be done with a known load of crappy power factor. But I don’t have such a load, so I sort of cribbed a guess at the energy calibration based on my voltage and current measurements of the toaster oven. This probably gets me close for resistive loads, but it is not good enough for loads with interesting power factor. Unfortunately, the whole point of an AC power meter is to get this right, so in this important way, my meter is probably compromised.

The result is that this is probably not a 0.1% instrument, or even a 1% instrument, but I guess it’s good enough for me… for now. I’ll try to think of ways to improve cal without spending money for fancy equipment or a visit to a cal lab.

Okay, so now about software

One of the reasons I like working with the Raspberry Pi is that I get a “real,” “normal” Linux operating system, with all the familiar tools, including text editors, git, and interpreted programming languages like Python. Python has I2C and SPI libraries for interacting with the RPi hardware interfaces, so it was not a big deal to create a “device driver” for the ATM90E26. In fact, such a device driver was pretty much just an exercise in getting the names of all the registers and their addresses on one page. One nice thing my device driver does is convert the data format from the ATM90E26 to normal floats. Some of the registers are scaled by 10x or 100x, some are unsigned, some are signed two’s complement, and some have a sign bit. The device driver takes care of all that.
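
As a sketch of what that conversion looks like (the function and format names are mine; the example registers and scalings reflect my reading of the ATM90E26 datasheet, so treat them as illustrative):

```python
def reg_to_float(raw, scale=1.0, fmt="unsigned"):
    """Convert a raw 16-bit ATM90E26 register value to a float.

    fmt is one of:
      'unsigned' - plain 16-bit count
      'twos'     - two's complement (e.g. mean power registers)
      'signmag'  - sign bit plus 15-bit magnitude
    scale divides the raw count (e.g. 100.0 for registers in hundredths).
    """
    if fmt == "twos":
        value = raw - 0x10000 if raw & 0x8000 else raw
    elif fmt == "signmag":
        value = -(raw & 0x7FFF) if raw & 0x8000 else raw
    else:
        value = raw
    return value / scale

volts = reg_to_float(0x2F1C, scale=100.0)             # 12060 -> 120.6 V
watts = reg_to_float(0xFF9C, scale=1.0, fmt="twos")   # -> -100.0 W
```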

I also wrote two sample “applications.” The first is a combination of an HTTP server app and a client app running on the meter that forwards info to the server, and the server can display it in a web browser.

The other application is simpler, but in a way, more useful: I have the RPi simply upload samples to a Google Sheet! It’s very satisfying to plug in a logger and then open a Google Sheet anywhere and see the data flowing in every few seconds.

Results

So far, I’ve been able to do things like log the voltage and frequency of the mains every second for the past week. I plan to deploy a few of these around the house, where I can see how various appliances are actually used.

Here’s a picture of the voltage and frequency as measured in my work shed for most of a week starting on 2/20/2018. The data are in 5 second intervals.

You can see a diurnal voltage pattern, and the frequency is rock solid

Design Files

I have not decided if I’m going to open-source this design yet, so I’m going to keep the hardware files to myself for the time being. There is also a liability concern, should someone take my stuff and manage to burn his house down or kill himself.

But you can see my github repo with the software. Not too much documentation there, but I think the file atm90e26.py should be reasonably self-explanatory as a simple Python-based wrapper for an ATM90E26 connected to a Pi via SPI.

Future Directions

  • Better calibration
  • Better WiFi performance when device is inside a metal appliance (external antenna)
  • Switchable current ranges. Maybe with relays swapping in different sense resistors.


Carbon Tax vs Cap-and-Trade. This is boring by now.

If you’ve spent any time in an energy economics class, you have probably seen a slide that shows the essential equivalency of a carbon tax and a cap-and-trade system, at least with respect to their ability to internalize externalities and fix a market failure. However, if you scratch the surface of the simple model used to claim this equivalency, you realize it only works if you have good knowledge of the supply and demand curves for carbon emissions. (There are other non-equivalencies, too, like where the incidence of the costs falls.)

The equivalency idea is that for a given market clearing of carbon emissions and price, you can either set the price and get the emissions you want, or set the emissions and get the price. As it turns out, nobody really has a good grip on the nature of those curves, and we live in a world of uncertainty anyway, so there actually is a rather important difference: which variable are we going to “fix” and which one will “float,” carrying all the uncertainty: the price or the quantity of carbon emissions?

I bring this up because today I read a nice blog post by Severin Borenstein which I will reduce to its essential conclusion: a carbon tax is much better than cap-and-trade. He brings up the point above, noting that businesses are just much better able to adapt when they know what the price is going to be, but there are other advantages to a tax.

First, administratively, it is much easier to set a tax than it is to legislate an active and vibrant market into existence. If you’ve lived in the world of public policy, I hope you know that Administration Matters.

Furthermore, legislatures are not fast, closed-loop control systems. They can’t adapt their rules quickly as new information comes in, and sometimes political windows close entirely, making it impossible to make corrections. As a result, adjusting caps in a timely manner is, at best, difficult. This is a fundamentally harder problem than getting people to agree, a priori, on an acceptable price: one with more than a pinch of pain, but not enough to kill the patient.

So, how did we end up with cap-and-trade rather than a carbon tax? Well, certainly a big reason is the deathly allergy legislatures have to the word “tax.” Even worse: “new tax.” Perhaps that was the show-stopper right there. But it certainly did not help that we had economists (I suspect Severin was not among them) providing the conventional wisdom that a carbon tax and a cap-and-trade system are essentially interchangeable. They are not, unless a wise, active, and responsive regulator, free to pursue an agreed objective, is at the controls. So, pretty much never.

notes on self-driving cars

A relaxing trip to work (courtesy wikimedia)

Short post here. I notice people are writing about self-driving cars a lot. There is a lot of excitement out there about our driverless future.

I have a few thoughts, to expand on at a later day:

I.

Apparently a lot of economic work on driving suggests that a major externality of driving is congestion. Simply put, your being on the road slows down other people’s trips and causes them to burn more gas. It’s an externality because it is a cost of driving that you cause but don’t pay.

Now, people are projecting that a future society of driverless cars will make driving cheaper by 1) eliminating drivers (duh) and 2) getting more utilization out of cars. That is, mostly, our cars sit in parking spaces, but in a driverless world, people might not own cars so much anymore, but rent them by the trip. Such cars would be much better utilized and, in theory, cheaper on a per-trip basis.

So, if I understand my micro econ at all, people will use cars more because they’ll be cheaper. All else equal, that should increase congestion, since in our model, congestion is an externality. Et voila, a bad outcome.

II.

But, you say, driverless cars will operate more efficiently and make more efficient use of the roadways, and so they will generate less congestion than stupid, lazy, dangerous, unpredictable human drivers. This may be so, but let me offer a couple of cautions. First, how much less congestion will a driverless trip cause than a human-operated one? 75% as much? Half? Is this enough to offset the effect mentioned above? Maybe.

But there is something else that concerns me: the difference between soft- and hard-limits.

Congestion as we experience it today seems to come on gradually as traffic approaches certain limits. You’ve got cars on the freeway, you add cars, things get slower. Eventually, things somewhat suddenly get a lot slower, but even then only at certain times of the day, in certain weather, etc.

Now enter driverless cars that utilize capacity much more effectively. Huzzah! More cars on the road getting where they want, faster. What worries me is that what is really happening is not that the limits are raised, but that we are operating the system much closer to the existing, real limits. Furthermore, now that automation is sucking all the marrow from the road bone, the limits become hard walls, not gradual at all.

So, imagine traffic is flowing smoothly until a malfunction causes an accident, or a tire blows out, or there is a foreign object in the road — and suddenly the driverless cars sense the problem, resulting in a full-scale insta-jam, perhaps of epic proportions, in theory, locking up an entire city nearly instantaneously. Everyone is safely stopped, but stuck.

And even scarier than that is the notion that the programmers did not anticipate such a problem, and the car software is not smart enough to untangle it. Human drivers, for example, might, in an unusual situation, use shoulders or make illegal U-turns in order to extricate themselves from a serious problem. That’d be unacceptable in a normal situation, but perhaps the right move in an abnormal one. Have you ever had a cop at the scene of an accident wave at you to do something weird? I have.

Will self-driving cars be able to improvise? This is an AI problem well beyond that of “merely” driving.

III.

Speaking of capacity and efficiency, I’ll be very interested to see how we make trade-offs of these versus safety. I do not think technology will make these trade-offs go away at all. Moving faster, closer will still be more dangerous than going slowly far apart. And these are the essential ingredients in better road capacity utilization.

What will be different is how and when such decisions are made. In humans, the decision is made implicitly by the driver, moment by moment. It depends on training, disposition, weather, light, fatigue, even mood. You might start out a trip cautiously and drive more recklessly later, like when you’re trying to eat fast food in your car. The track record for humans is rather poor, so I suspect that driverless cars will do much better overall.

But someone will still have to decide what is the right balance of safety and efficiency, and it might be taken out of the hands of passengers. This could go different ways. In a liability-driven culture, we may end up with a system that is safer but maybe less efficient than what we have now (call it “little old lady mode”), or we could end up with decisions by others forcing us to take on more risk than we’d prefer if we want to use the road system.

IV.

I recently read in the June IEEE Spectrum (no link, print version only) that some people are suggesting that driverless cars will be a good justification for the dismantlement of public transit. Wow, that is a bad idea of epic proportions. If, in the first half of the 21st century, the world not only continues to embrace car culture, but doubles down to the exclusion of other means of mobility, I’m going to be ill.

*   *   *

That was a bit more than I had intended to write. Anyway, one other thought is that driverless cars may be farther off than we think. In a recent talk, Chris Urmson, the director of the Google car project, explains that the driverless cars of our imaginations (the fully autonomous, all-conditions, all-mission cars) may be 30 years off or more. What will come sooner is a succession of technologies that reduce driver workload.

So, I suspect we’ll have plenty of time to think about this. Moreover, the nearly 7% of our workforce that works in transportation will have some time to plan.

 

Nerd alert: Google inverter challenge

A couple of years ago Google announced an electrical engineering contest with a $1M prize. The goal was to build the most compact DC-to-AC power inverter that could meet certain requirements: 2 kVA power output at 240 VAC, 60 Hz, from a 450 V DC source with a 10 Ω impedance. The inverter had to withstand certain ambient and reliability conditions, and it had to meet FCC interference requirements.

Fast forward a few years, and the results are in. Several finalists met the design criteria, and the grand prize winner exceeded the power density requirements by more than 3x!

First, congrats to the “Red Electrical Devils”! I wish I were smart enough to have been able to participate, but my knowledge of power electronics is pretty hands-off, unless you are impressed by using TRIACs to control holiday lighting. Here’s the IEEE on what they thought it would take to win.

Aside from general gEEkiness, two things interested me about this contest. First, from an econ perspective, contests are just a fascinating way to spur R&D. Would you be able to get entrants, given the cost of participation and the likelihood of winning the grand prize? Answer: yes. This seems to be a reliable outcome if the goal is interesting enough to the right body of would-be participants.

The second thing that I found fascinating was the goal: power density. I think most people understand the goals of efficiency, but is it important that power inverters be small? The PV inverter on the side of your house, also probably around 2 kW, is maybe 20x as big as these. Is that bad? How much is it worth to shrink such an inverter? (Now, it is true that if you want power density, you must push on efficiency quite a bit, as every watt lost to heat needs to be dissipated somehow, and that gets harder and harder as the device gets smaller. But in this case, though the efficiencies achieved were excellent, they were not cutting edge; the teams instead pursued extremely clever cooling approaches.)
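
The coupling between size and efficiency is easy to put numbers on. A quick sketch (the efficiency figures are illustrative, not the contest results):

```python
def waste_heat_watts(p_out_watts, efficiency):
    """Heat a converter must dissipate to deliver p_out_watts at its output."""
    return p_out_watts * (1.0 / efficiency - 1.0)

h95 = waste_heat_watts(2000, 0.95)  # ~105 W of heat at 95% efficiency
h98 = waste_heat_watts(2000, 0.98)  # ~41 W at 98%
# Shrinking the enclosure 20x while still shedding on the order of 100 W
# is where the clever cooling comes in.
```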

I wonder what target market Google has in mind for these high power density inverters. Cars, perhaps? In that case, density matters more than it does for a fixed PV inverter, but still seemingly not to this extreme. And specific power (per unit mass) rather than volumetric density seems like it would be more important. Maybe Google never had a target in mind. For sure, there was no big reveal with the winner announcement. Maybe Google just thought that this goal was the most likely to generate innovation in this space overall, without a particular end use in mind at all. It’s certainly true that power electronics are a huge enabling piece of our renewable energy future, and perhaps they’re not getting the share of attention they deserve.

I’m not the first, though, to wonder what this contest was “really about.” I did not have to scroll far down the comments to see one from Slobodan Ćuk, a rather famous power electronics researcher and inventor of the Ćuk converter.

Anyway, an interesting mini-mystery, but a cool achievement regardless.

On the correct prices of fuels…

Interesting blog entry from Lucas Davis at the Haas Energy Institute on the “correct” prices for fossil fuels. He cites a new paper from Ian Parry that tries to account for the external costs as they vary around the world.
 
I notice two points:
 
1. At least for gasoline, they are measuring the externalities of driving, not of gasoline. Bad news for EV drivers intent on saving the world one mile at a time, because most of the associated externalities are still present.
 
2. The estimated cost of carbon / GHG is small compared to the other external costs like accidents and congestion. This is a common result among economic analyses of carbon costs, and I often wonder about it. If you use a value associated with the marginal cost of abatement, I can see it being quite low. But that’s in the current context of nobody abating much of anything. I wonder what it would be if you projected the marginal cost of 80% or 90% abatement. That is, if we were actually to solve the climate problem.
Or, another way of thinking about it: if GHG emissions are potentially going to make the earth uninhabitable, it seems like maybe they’re underestimating the external cost of carbon. Because there is limited cost data available for “the end of the world as we know it,” economists can be forgiven for working with the data they have, but we, the careful readers, should bear in mind the limits.

Worst environmental disaster in history?

In keeping with Betteridge’s Law: no.

My news feed is full of headlines like:

These are not from top-tier news sources, but they’re getting attention all the same. Which is too bad, because they’re all false by any reasonable measure. Worse, all of the above seem to deliberately misquote from a new paper published in Science. The paper does say, however:

This CH4 release is the second-largest of its kind recorded in the U.S., exceeded only by the 6 billion SCF of natural gas released in the collapse of an underground storage facility in Moss Bluff, TX in 2004, and greatly surpassing the 0.1 billion SCF of natural gas leaked from an underground storage facility near Hutchinson, KS in 2001 (25). Aliso Canyon will have by far the largest climate impact, however, as an explosion and subsequent fire during the Moss Bluff release combusted most of the leaked CH4, immediately forming CO2.

Make no mistake about it, this is a big release of methane: equal to the GHG output of 500,000 automobiles over a year.

But does that make it one of the largest environmental disasters in US history? I argue no, for a couple of reasons.

Zeroth: because of real, actual environmental disasters, some of which I’ll list below.

First: without the context of the global, continuous release of CO2, this would not affect the climate measurably. That is, by itself, it’s not a big deal.

Second: and related, there are more than 250 million cars in the US, so this is 0.2% of the GHG released by automobiles in the US annually. Maybe the automobile is the ongoing environmental disaster? (Here’s some context: the US is 15.6% of global GHG emissions, transport is 27% of that, and 35% of that is from passenger cars. By my calculations, that makes this incident about 0.003% of global GHG emissions.)
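
The chain of fractions is worth spelling out, since it’s easy to drop a factor of ten:

```python
# Rough shares quoted in the text:
us_share = 0.156            # US fraction of global GHG emissions
transport = 0.27            # transport fraction of US emissions
passenger = 0.35            # passenger-car fraction of US transport emissions
leak_vs_us_cars = 500_000 / 250_000_000   # leak ~ 500k car-years out of 250M cars

global_fraction = us_share * transport * passenger * leak_vs_us_cars
# ~2.9e-5, i.e. roughly 0.003% of global GHG emissions
```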

Let’s get back to some real environmental disasters. You know, like the kind that kill people and animals, and lay waste to the land and sea? Here is a list of just some pretty big man-made environmental disasters in the US:

Of course, opening up the competition to international disasters, including US-created ones, really expands the list, but you get the picture.

All this said, it’s really too bad this happened, and it will set California back on its climate goals. I was saddened to see that SoCal Gas could not cap this well quickly, or at least figure out a way to safely flare the leaking gas.

But it’s not the greatest US environmental disaster of all time. Not close.

Culture of Jankery

The New York Times has a new article on Nest, describing how a software glitch allowed units to discharge completely and become non-functional. We’re all used to semi-functional gadgets and Internet services, but when it comes to thermostats we expect a higher standard of performance. After all, when thermostats go wrong, people can get sick and pipes can freeze. Or take a look at the problems of Nest Protect, a famously buggy smoke detector. Nest is an important case, because these are supposed to be the closest thing we have to grownups in IoT right now!

Having worked for an Internet of Things company, I have more than a little sympathy for Nest. It’s hard to make reliable connected things. In fact, it might be impossible — at least using today’s prevailing techniques and tools, and subject to today’s prevailing expectations for features, development cost, and time to market.

First, it should go without saying that a connected thermostat is millions or even billions of times as complex as the old bimetallic strip it often replaces. You are literally replacing a single moving part that doesn’t even wear out with a complex arrangement of chips, sensors, batteries, and relays, and then you are layering on software: an operating system, communications protocols, encryption, a user interface, etc. The probability that this witch’s brew can be more reliable than a mechanical thermostat: approximately zero.

But there is also something else at work that lessens my sympathy: culture. IoT is the Internet tech world’s attempt to reach into physical devices. The results can be exciting, but we should stop for a moment to consider the culture of the Internet. This is the culture of “move fast and break things.” Are these the people you want building devices that have physical implications in your life?

My personal experience with Internet-based services is that they work, most of the time. But they change on their own schedule. Features and APIs come and go. Sometimes your Internet connection goes out. Sometimes your device becomes unresponsive for no obvious reason, or needs to be rebooted. Sometimes websites go down for maintenance at an inconvenient time. Even when the app is working normally, the experience can vary. Sometimes it’s fast, sometimes slow. Keypresses disappear into the ether, etc.

My experience building Internet-based services is even more sobering. Your modern, complex web or mobile app is made up of an agglomeration of sub-services, all interacting asynchronously through REST APIs behind the scenes. Sometimes those sub-services use other sub-services in their implementation, and you don’t even have a way of knowing which ones. Each of those links can fail for many reasons, and you must code very defensively to handle such failures gracefully. Or you can do what most apps do — punt. That’s fine for chat, but you’ll be sorely disappointed if your sprinkler kills your garden, or even if your alarm clock fails to wake you up before an important meeting.
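The defensive pattern I’m describing can be sketched as a retry-with-fallback wrapper. This is a minimal illustration, not anyone’s production code; the “weather API” sub-service here is entirely hypothetical:

```python
import time

def call_with_retry(fn, retries=3, delay=0.1, fallback=None):
    """Call a flaky sub-service, retrying on failure and returning a
    safe fallback instead of crashing the whole app."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt < retries - 1:
                time.sleep(delay * (2 ** attempt))  # exponential backoff
    return fallback  # degrade gracefully instead of failing hard

# Hypothetical sub-service that fails twice, then succeeds.
calls = {"n": 0}
def flaky_weather_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("sub-service unavailable")
    return {"temp_f": 68}

result = call_with_retry(flaky_weather_api, fallback={"temp_f": None})
print(result)  # {'temp_f': 68} after two retried failures
```

Multiply that little wrapper by every link between every sub-service, add timeouts, circuit breakers, and cache invalidation, and you start to see why most apps punt.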

And let’s not even talk of security, a full-blown disaster in the making. I’ll let James Mickens cover that territory.

Anyhoo, I still hold hope for IoT, but success is far from certain.

Non Vox populi

I liked Vox.com when it came out. The card format is cool, and the detailed yet bare-bones explainers suit my approach to many aspects of news: tell me what I need to know to understand this situation.

At first, I found the decision not to host comments interesting, but not alarming. After all, everyone knows that the Internet comments section is a bubbling cesspool, right?

But I’ve been reading Vox articles for a while now, and I’ve noticed too many cases where they were just blowing it: incorrect or out-of-context facts, telling only one half of an argument, or missing a crucial detail. And these are exactly the kinds of things where a letter to the editor, or a stream of informed comments, can make an article much more useful. I notice this particularly when Vox writes about energy, a topic I have studied in depth.

Here’s an example of the sort of thing I’m talking about. In “Ignore the haters: electric cars really are greener,” they cite a new Union of Concerned Scientists report at length. But they never mention that UCS is primarily an advocacy organization, not a research one. Or that, for example, Argonne National Labs has been publishing similar research for years, with similar but slightly more muted results. Or even that the summary in the UCS report compares EVs against normal gasoline cars, which is hardly a like-vs-like comparison, given that gasoline vehicles execute a range of missions that EVs currently cannot. As it turns out, a PHEV or even a regular hybrid does in fact outperform an EV on CO2/mile in many parts of the country, and the data in the UCS report show it. And there are additional embedded assumptions, like the assumption that the electric grid will continue to get greener. That’s probably true, but maybe not: the greening of the grid could accelerate, or it could hit hurdles that slow it down. At the same time, gasoline cars could get better or worse. Hybrids might become the norm, lower-carbon fuels could become mainstream, etc. Finally, EVs cost a lot more than gas cars. For the same money, could you reduce your carbon intensity more effectively than by buying an EV? (Answer: yes.) In the end, it’s hardly journalistic to lump everyone who has questions about the superiority of EVs in with the haters.

Getting back to Vox, it’s not just the bias that I don’t like. After all, bias is as much a part of journalism as organic chemistry is a part of life. What bothers me is that there’s no feedback loop: no ombudsman, no straightforward place to look for corrections. (They integrate corrections directly into cards, usually by changing the text without any notation.) The whole site is a Read Only Memory. In Ezra Klein’s own words on leaving the WP to found Vox: “we were held back, not just by the technology, but by the culture of journalism.”

Indeed. So, this is the improved technology and improved culture? It’s seriously starting to turn me off. Anybody else?

Leave it in the ground

About a decade ago, Alex Farrell, a professor in the UC Berkeley Energy and Resources Group, published a series of papers that were unpopular with environmentalists. They showed that, essentially, there was no peak oil. In fact, at prevailing prices of the time, one could profitably extract a supply of petroleum to last hundreds of years at current rates. The supply would come not just from traditional sources, but from Canadian bitumen and coal-to-liquids conversion. He also pointed out that this is a bad thing, because those alternative sources of petroleum products have ridiculously high carbon intensities. That is, they’d be much, much dirtier than regular oil.

Sadly, Professor Farrell did not live to see the story of peak oil fade from most environmentalists’ consciousness, nor to see the price of oil drop so dramatically. And, in fact, at today’s prevailing prices, influenced by fracking and cheap natural gas (which is not a short-term substitute for oil but could be a long-term one), we just don’t need oil from the Canadian tar sands. There’s not really a strong economic case for it, and the environmental case is, well, awful. I guess there is still a story to be told about “continental oil independence,” but, well, that’s only physical independence. Unless we plan on declaring a state of emergency and militarily controlling oil transfers, oil is still a worldwide commodity, and if there were some kind of oil crunch, we’d take the economic gut punch all the same.

I think Obama made the correct decision today, to nix the Keystone Pipeline.

Score one for common sense.

Why must energy reporting always mislead?

There’s a new story on BNEF explaining how the cost of wind power is now less than any other resource in the UK and Germany. That alone is confusing, because, depending on your definition of cost, that was already the case. In fact, it’s always the case: on the margin, the cost of wind energy is $0. It’s the machine you’re paying for.

But the article is even more puzzling, because it then goes on to explain that what is happening is that renewable energy generation is displacing generation from fossil units, so that their capacity factor (that is, utilization) goes down. This makes their fixed costs a larger percentage of their total costs, and makes their all-in €/MWh higher than wind’s all-in €/MWh.
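A toy calculation makes the mechanism concrete. The plant and its costs below are hypothetical, not from the article: spread the same fixed costs over fewer MWh, and the all-in €/MWh climbs.

```python
def all_in_cost_eur_per_mwh(fixed_eur, variable_eur_per_mwh,
                            capacity_mw, capacity_factor):
    """All-in cost = (annual fixed costs / MWh generated) + variable cost."""
    mwh = capacity_mw * 8760 * capacity_factor  # 8760 hours in a year
    return fixed_eur / mwh + variable_eur_per_mwh

# Hypothetical 500 MW coal plant: 100M EUR/yr fixed, 30 EUR/MWh fuel.
busy = all_in_cost_eur_per_mwh(100e6, 30, 500, 0.80)  # before wind buildout
idle = all_in_cost_eur_per_mwh(100e6, 30, 500, 0.40)  # displaced by wind
print(round(busy, 1), round(idle, 1))  # 58.5 87.1
```

Halving the capacity factor pushes the all-in cost from about 58.5 to about 87.1 EUR/MWh, even though nothing about the plant itself changed.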

Okay, that makes sense as far as it goes, but there’s one complication: they still need the fossil units to make the system work. That is, as solar and wind generate more energy, you may use your coal machine less, but you can’t operate the electric power system without the coal machine. Cheap, massive, and ubiquitous storage, of course could change this, but for now, we’re not there.

So that raises the question: exactly what turning point have we reached? Building more solar and wind exacerbates this situation (I will not call it a problem), so I’m pretty unclear on what this piece is saying.

I occupy a very lonely position on renewables. I want them, I think they’re important, and I think we need much more. They can be part of solving our huge climate problem. But, unlike most boosters, I don’t think renewables are a free lunch; that is, I don’t believe they will be, all-in, cheaper than our current system.