Designing a simple AC power meter

I’ve been interested in energy efficiency for a long time. A big part of understanding energy efficiency is understanding how devices use power, and when. Because of that, I’ve also long wanted an AC power meter. A cheap one. A decent one. A small one.

What is an AC Power Meter?

An AC power meter is an instrument that can tell you how much juice a plug-load device is using. The problem is that measuring AC power is a bit tricky. This is because when dealing with AC, the current waveform that a device draws does not have to be in phase with the voltage. A load can be inductive (lagging) or capacitive (leading), and the result is that the apparent power (Volts RMS * Amps RMS) will be higher than the real power. It gets worse, though. Nonlinear loads, like switch-mode power supplies (in just about everything these days) can have current waveforms that literally have no relation to the voltage waveform.

As a result, the way modern AC power meters work is to sample the instantaneous current and voltage many times a second, in fact, many times per 60 Hz AC cycle, so that the true power can be computed as the “scalar product” of the voltage and current time series (see the sketch just after this list). From such a calculation, you can get goodies like:

  • Real power (watts)
  • Apparent power (VA)
  • Reactive/imaginary power (VAR)
  • Phase angle
  • Power factor
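
If you’d rather see that in code than in words, here is a minimal sketch (using numpy, and assuming you already have simultaneous voltage and current samples spanning a whole number of cycles) of how those quantities fall out of the sampled waveforms:

    import numpy as np

    def power_quantities(v, i):
        """v, i: arrays of simultaneous instantaneous volts and amps."""
        v = np.asarray(v, dtype=float)
        i = np.asarray(i, dtype=float)
        v_rms = np.sqrt(np.mean(v * v))
        i_rms = np.sqrt(np.mean(i * i))
        real_p = np.mean(v * i)                    # the "scalar product": average of v(t)*i(t)
        apparent_p = v_rms * i_rms                 # VA
        reactive_p = np.sqrt(max(apparent_p**2 - real_p**2, 0.0))   # VAR
        pf = real_p / apparent_p if apparent_p else 0.0
        angle = np.degrees(np.arccos(np.clip(pf, -1.0, 1.0)))       # only meaningful for sinusoidal loads
        return real_p, apparent_p, reactive_p, pf, angle

    # Example: a purely resistive 120 V / 10 A load, 32 samples per 60 Hz cycle
    t = np.arange(0, 1 / 60, 1 / (60 * 32))
    v = 120 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)
    i = 10 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)
    print(power_quantities(v, i))   # ~1200 W, ~1200 VA, ~0 VAR, PF ~1, angle ~0

For a nasty nonlinear load, the “reactive” number computed this way lumps in distortion power, and the phase angle only really means something for sinusoidal current, which is part of why dedicated metering chips are nice.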

Instruments that do this well are expensive. For example, this Yokogawa WT3000E is sex on a stick as far as power meters go, but will set you back, I think, more than $10k. I used one when I was at Google, and it was a sweet ride, for sure.

This is on my Christmas list, in case you’re wondering what to get me.
Cheap but effective.

On the other hand, you can get a Kill-A-Watt for $40. This is cheap and functional, but it is not capable of logging data and is totally uncalibrated. They claim 0.2% accuracy, though my experience with them says otherwise.

Over the years I’ve built a couple of attempts at a power meter. One used a current transformer and a voltage transformer going into the ADCs of an Arduino. It sort of worked, but was a mess. Another time, I built a device that used hall-effect sensors to measure current, but I didn’t measure voltage at all. That one really couldn’t measure power, but you could get an indicative sense from it.

Let’s Do This – Hardware Design

So, a few months ago, I resolved to build a proper power meter. I did a search for chips that could help me out, and lo and behold, I came across several “analog front end” chips that have all the circuitry you need to measure AC power. They do the analog to digital conversion, all the math, and give you a simple digital interface where you can query various parameters.

I settled on the Atmel ATM90E26. It offers reasonable basic accuracy of 0.1%, built on 16-bit analog-to-digital converters, and best of all, it costs about $2 in quantity 1. Furthermore, there is an app note with a sample design, and it seemed simple enough.

So I started designing. Unfortunately, I had various conflicting goals in mind:

  • Safety: like the McDLT, I want the hot to stay hot and the cool to stay cool. This means total isolation between the measurement side and the control side.
  • Small, so it can be put inside a small appliance.
  • A display, so I could read power data directly
  • An interface to a Raspberry Pi so that I could log to a μSD card, or send it via WiFi to the Internet
  • A microprocessor of its own to drive the display and do any real-time processing needed
  • An internal AC to DC power supply so that the device itself could be powered from a single AC connection.
  • Ability to measure current by way of sense resistor, current transformer, or some combination of both.
  • Ability to get reasonably accurate measurements of very small loads (like < 1 W) so that I can make measurements concerning vampire power. One way to do this while keeping precision is to build a unit with a high-value shunt resistor, which I can do if I’m rolling my own.


Some of these desires conflict with each other, and I made several iterations on the computer before making my first board. I ended up jettisoning the LCD and building the thing around the RPi Zero. This was primarily to make the board compact. If I wanted a display I could plug one into the Pi! I also initially went with an outboard AC/DC converter mostly because I just didn’t want to fuss with it.

Power and Isolation

In a device that’s intended to live inside a plastic box, I probably wouldn’t bother with isolation at all. The whole circuit could “ride the mains.” But because this is supposed to be a tinker-friendly board, I wanted to be able to touch the RPi without dying. Usually, this is done with something simple like optoisolators to provide galvanic isolation for data signals. But this board presented another challenge. The power measurement chip needs to be exposed to the AC (duh, so it can measure it) but it also needs DC power itself to operate.

How to power the chip and maintain isolation? This could be done with a simple “capacitive dropper” supply (simple, inefficient, sketchy), or with an isolated DC-to-DC supply (pricey and/or fussy), but when I added up the optoisolators I’d need plus the DC-DC supply, I realized that a special-purpose chip would be nearly as cost effective and a lot less fuss. So I chose the ADuM5411, a nifty part from Analog Devices that can forward three digital signals in one direction, one digital signal in the other direction, and provide power across the isolation barrier. And it was only like $6.

The only problem is, the ADuM5411 is so good it is pure unobtainium. I’m not even sure the part really exists in the wild. So I switched to the Texas Instruments ISOW7841, a very similar part in all respects, except for the fact that it costs $10. This is the most expensive part in my BOM by far. But I have to admit, it is super easy to use and works perfectly. (As an aside, these chips do not work on optical principles at all, but on tiny little transformers being driven at high frequency. Kinda cool.)

Okay, so the AC/hot part of the board is powered from the DC side of the board. But how is the DC side powered? In the first iteration, I did it from a USB connector via a 5V wall-wart.

Current Measurement

In order to measure power, the measurement chip needs to be able to measure the voltage and current, simultaneously and separately. Voltage is pretty easy: just use a resistor network to scale it down so you don’t blow up the ADC. Current can be done one of two ways. One is to measure the voltage drop across a calibrated resistor. The resistor obviously needs to be able to handle a lot of current, and it will have to be a small value to keep the voltage drop reasonable, or else the device you’re measuring will be unhappy. The current sense resistor should also have a low temperature coefficient, so that its value doesn’t change much as it warms up.
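
To put some rough numbers on that tradeoff (these are illustrative values, not necessarily what is on my board):

    # Back-of-the-envelope shunt sizing. Values are illustrative, not my exact BOM.
    R_SHUNT = 0.001          # ohms
    I_MAX = 15.0             # amps, roughly a full 15 A branch circuit

    v_drop = I_MAX * R_SHUNT             # 0.015 V: tiny, so the chip's current input needs gain
    p_diss = I_MAX ** 2 * R_SHUNT        # 0.225 W: the resistor has to dump this as heat
    print(f"drop = {v_drop * 1000:.1f} mV, dissipation = {p_diss:.3f} W")

    # A bigger shunt gives more signal (nice for sub-watt vampire loads) but wastes
    # more power and drops more voltage at full current. Hence the desire for options.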

The other approach is to use a current transformer. CTs are nice in that they provide isolation, but they are large and cost a few bucks compared to the few pennies for the resistors. I did an iteration with space for a CT on the board, but later punted on that. I did leave a place where an external CT can be plugged into the board. I may never use it, though.

The Microcontroller

In this design, an Atmega 328p microcontroller sits between the Pi and the ATM90E26. It is connected to the ATM90E26 by an SPI bus and to the Pi by an I2C bus. Originally, I had thought the Atmega would have to poll the power chip frequently and integrate the total energy, but that was because I did not read the ATM90E26 data sheet closely enough. It turns out that the chip does all the math itself, including integrating energy, so the processor was just sitting there doing conversion between I2C and SPI. I honestly could not come up with anything else useful for the Atmega to do.

This is the board after I scavenged it for some of the more expensive parts.
The first design I had fabbed.

Anyway, the good news was that this design worked straight away — hardware-wise, at least. It turned out to be more work than I wanted to get the Atmega to do the I2C/SPI forwarding reliably, and I didn’t even need it.

Ditch the processor!

So, using the same PCB, I made some simple hacks to bring the SPI bus pins from the measurement chip to the RPi header. I also had to add a voltage divider so that the 5V MISO signal would not destroy the not-5V-tolerant MISO pin on the RPi. The hacked board looked like this.

Look ma, no intermediary microprocessor
Board on its side, so you can see how the RPi rides along.

The RPi communicates with the power measurement chip through the TI isolation chip, and does so reliably and much faster than I2C, so I was glad to realize that I didn’t need that intermediary processor in the mix at all.

This board could be pressed into service as it was, but it has a couple of issues:

  1. First, the orientation of the Pi relative to the board saves a bit of space, but does so at the cost of having all the Pi connectors face down towards the “hot” side of the board.
  2. Second, powering the DC side of the board from the USB jack proved more annoying to me than I had anticipated. It really just bugs me to have to plug an AC measuring device into a separate wall-wart. So I knew I’d design in a PSU. I chose a MeanWell IRM-05-05 PCB mount 5V PSU.
  3. Third, this board lacked cut-outs to provide extra creepage for high voltage parts of the board that would be (or could be) at high relative voltage from each other. I think the distances were probably adequate, and it’s not like I was going for a UL listing or anything, but I still wanted slots.

So, I redesigned the board and waited a month for the new boards to arrive from China, even though I paid for expedited shipping. The new board looks like this. In some of the space where the processor had been, I put footprints for LEDs, in case I want to blink some lights.

Looking much better. Notice the Pi has all its ports pointing away from the AC. Also, the Pi is on top rather than underneath.
Better layout, no processor

I really need to clean off that flux.

So that is just about it for the hardware.

One last spin.

As it turns out, I will spin this board one more time. The main reason is that I want to expand it a bit and move the mounting holes to match up with a suitable enclosure. I will probably use a Hammond RP-1127 — it’s exactly the right width for an RPi.

The other reason is that someone showed me how to redraw the pads for the current sense resistors to make a “quasi” kelvin connection.

The way the current measurement works is to measure the voltage drop across the sense resistor. This resistor is reasonably accurate and temperature stable, but the solder and copper traces leading to it are not, and current flowing in them will cause a voltage drop there, too. This drop will be small, but the drop across the 0.001 Ω sense resistor is small, too! So, to get the most accurate measurement, I try to measure the voltage drop exactly at the resistor pads, preferably with connections to the resistor that carry no current. This is what Kelvin connections are.

In the case below, I achieve something like this by splitting the pads for the resistor into three pieces. The top and bottom conduct the test current across the resistor, and a small, isolated sliver in the middle measures the voltage. There is no current in that sliver, so it should have no voltage drop.

The result should be better accuracy and thermal stability of the current measurements. The Kelvin connection for the current measurement looks like this. The sense resistors go between the right terminal of the input fuse and the tab marked “load.” The resistor landing pads are split, and a separate section, in which no current flows, is used for the voltage measurement.

Fake four-terminal resistor


Calibration

An instrument is only as good as its calibration, and I needed a way to calibrate this one. Unfortunately, I do not have the equipment to do it properly. Such equipment might be a programmable AC source, a high-accuracy AC load, and perhaps a bench-quality power meter. What I do have access to are reasonably accurate DMMs (a Fluke 87V and an HP 34401A). The former is actually in cal; the latter was last calibrated, well, a million years ago, I’m sure.

I calibrated the voltage by hooking the unit up to the AC mains in my house, measuring the voltage at the terminals, and adjusting a register value until the reported voltage matched my meter. For current, I put a largish, mostly non-inductive load on the system (a Toast-R-Oven), measured the current with my DMM, and adjusted the register until the current matched.
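
The trim itself is just a proportional adjustment, something like this (the starting value below is a placeholder, and the assumption that the gain register scales the reading linearly is mine; check the ATM90E26 datasheet before trusting it):

    # Hypothetical trim helper: nudge a gain register so the reported value matches
    # the DMM. Assumes the register scales the reading linearly (check the datasheet).
    def trimmed_gain(current_gain, reported, reference):
        return int(round(current_gain * reference / reported))

    # e.g. the meter reports 118.2 V while the Fluke reads 120.1 V
    new_voltage_gain = trimmed_gain(0x8000, reported=118.2, reference=120.1)   # 0x8000 is a placeholder
    print(hex(new_voltage_gain))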

Calibrating power is harder, and I really don’t have the right equipment to do it properly. The ATM90E26 also lets you set up an energy calibration separate from the voltage and current measurements, and I think their intention is that this be done with a known load of crappy power factor. But I don’t have such a load, so I sort of cribbed a guess at the energy calibration based on my voltage and current measurements of the toaster oven. This probably gets me close for resistive loads, but it is not good enough for loads with interesting power factor. Unfortunately, the whole point of an AC power meter is to get this right, so in this important respect, my meter is probably compromised.

The result is that this is probably not a 0.1% instrument, or even a 1% instrument, but I guess it’s good enough for me… for now. I’ll try to think of ways to improve cal without spending money for fancy equipment or a visit to a cal lab.

Okay, so now about software

One of the reasons I like working with the Raspberry Pi is that I get a “real,” “normal” Linux operating system, with all the familiar tools, including text editors, git, and interpreted programming languages like Python. Python has I2C and SPI libraries for interacting with the RPi hardware interfaces, so it was not a big deal to create a “device driver” for the ATM90E26. In fact, such a device driver was pretty much just an exercise in getting the names of all the registers and their addresses on one page. One nice thing my device driver does is convert the data formats from the ATM90E26 to normal floats. Some of the registers are scaled by 10x or 100x, some are unsigned, some are signed two’s complement, and some have a sign bit. The device driver takes care of all that, roughly as sketched below.
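
Here is roughly what the heart of that wrapper looks like. Take this as a sketch: the register address, SPI mode, and scale factor below are placeholders standing in for values the datasheet actually specifies, and the real driver keeps a table of all the registers.

    # Sketch of the register read + format conversion. Address, SPI mode, and scale
    # below are placeholders; the real driver gets them from the ATM90E26 datasheet.
    import spidev

    spi = spidev.SpiDev()
    spi.open(0, 0)                # SPI bus 0, CE0 on the Pi header
    spi.max_speed_hz = 200000
    spi.mode = 3                  # whatever mode the datasheet calls for

    def read_reg(addr):
        # one byte of read-flag + address, then clock out 16 bits of data
        resp = spi.xfer2([0x80 | addr, 0x00, 0x00])
        return (resp[1] << 8) | resp[2]

    def to_float(raw, scale=1.0, twos_complement=False, sign_bit=False):
        # the chip mixes unsigned, two's-complement, and sign-magnitude registers
        if twos_complement and raw & 0x8000:
            raw -= 0x10000
        elif sign_bit and raw & 0x8000:
            raw = -(raw & 0x7FFF)
        return raw / scale

    URMS = 0x49                                      # placeholder register address
    volts = to_float(read_reg(URMS), scale=100.0)    # e.g. 12034 -> 120.34 V
    print(volts)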

I also wrote two sample “applications.” The first is a combination of an HTTP server app and a client app running on the meter that forwards info to the server, and the server can display it in a web browser.

The other application is simpler, but in a way, more useful: I have the RPi simply upload samples to a Google Sheet! It’s very satisfying to plug in a logger and then open a Google Sheet anywhere and see the data flowing in every few seconds.
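
I won’t reproduce the whole logger, but the Google Sheets trick is only a few lines with the gspread library. The sheet name, credentials file, the driver import, and its method names below are placeholders for my actual setup:

    # Rough shape of the Google Sheets logger. Sheet name, credentials file, the
    # atm90e26 import, and its method names are placeholders for my actual code.
    import time
    import gspread
    from atm90e26 import ATM90E26      # hypothetical import of my wrapper

    gc = gspread.service_account(filename="service_account.json")
    sheet = gc.open("power-log").sheet1
    meter = ATM90E26()

    while True:
        sheet.append_row([time.strftime("%Y-%m-%d %H:%M:%S"),
                          meter.voltage(), meter.current(), meter.real_power()])
        time.sleep(5)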


So far, I’ve been able to do things like log the voltage and frequency of the mains every second for the past week. I plan to deploy a few of these around the house, where I can see how various appliances are actually used.

Here’s a picture of the voltage and frequency as measured in my work shed for most of a week starting on 2/20/2018. The data are in 5 second intervals.

You can see a diurnal voltage pattern, and the frequency is rock solid

Design Files

I have not decided if I’m going to open-source this design yet, so I’m going to keep the hardware files to myself for the time being. There is also a liability concern, should someone take my stuff and manage to burn his house down or kill himself.

But you can see my github repo with the software. Not too much documentation there, but I think the file should be reasonably self-explanatory as a simple Python-based wrapper for an ATM90E26 connected to a Pi via SPI.

Future Directions

  • Better calibration
  • Better WiFi performance when device is inside a metal appliance (external antenna)
  • Switchable current ranges, maybe with relays swapping in different sense resistors.



This Old Clock -or- nobody will do IOT maintenance

Plenty of people have written about the fact that in a world of companies selling IOT hardware, there is little or no incentive for them to maintain the software running on that hardware. Those people are right. But not only is there little incentive, keeping an IOT device current is actually fiendishly difficult — as I was reminded this past weekend.


I have an IOT alarm clock I built myself, back in 2011. It was based around a Raspberry Pi Model 1B, running Raspbian Wheezy. The software I wrote to implement the clock is simple, consisting of three major components:

  1. An interface to the Google Calendar API, so it knows when I want to get up
  2. An interface to an LCD Display so I can see the time and see when it plans to wake me next.
  3. An interface to GPIO to drive a solenoid, which rings a physical chime. I wasn’t going for a wimpy electronic beeping; I wanted some Zen-level physical dinging.

Now, when I created this clock about seven years ago, my go-to language for this sort of thing was Perl. You can quibble with that choice, but Perl was my Swiss army knife at the time, and it also solved certain problems that other languages didn’t. For one, Perl has a fantastic no-nonsense library for wrapping C code: Inline. You can basically “inline” C functions right in your Perl, and it “just works.” This was really important for talking to the chip GPIO for the dinger and for the LCD, which were supported only in C — at the time.

How long will it work?

One drawback of using Perl is that Google has never supported it for accessing their APIs. That is, Perl can generate an HTTP transaction just as well as the next language, but Google also provides nice wrapper code for a list of languages which they’ve made it pretty clear will never, ever include Perl. But someone else had written a similar wrapper for Perl, so I grabbed that and got things up and running. Over the years, that has turned out to be a pain, as Google has revamped their Calendar API twice in that time, and my clock just broke each time. Fixing it was a pain, but I did it just to keep the project running.

Let’s get current!

So, on Friday, after thinking about all the exploits floating around and the fact that I was running a full-fledged OS on a clock on my home network, I decided I should really update all the software on the clock. Raspbian had moved from Debian 7 (Wheezy) to Debian 8 (Jessie) to Debian 9 (Stretch) in the intervening years, so the first step was to update the OS. Twice.

This went poorly. The update process was excruciatingly slow on this single-core processor, taking hours, and occasionally stopping entirely to ask me a question (“you want to overwrite this file?”). I managed to get the first update done, but the second update died entirely when the SD card holding everything filled up after the installer decided it needed to create a huge swapfile.

So I got a new SD card and installed Stretch on that cleanly. It was also pretty quick, and if you do a network install, you won’t need to do any package updates immediately after. (Microsoft could learn a lesson from that.) After the OS came up, I copied over my software and tried to get it running. No dice.

So sonorous

You won’t be surprised to hear that some things had changed:

  • The Perl libraries for Google had changed quite a bit over the years, so installing the new ones generated a bunch of errors. The main pain was that some of these libraries can be found in the Raspbian package manager, and some need to be installed from CPAN. I prefer OS repository packages when available because they update along with the OS. Everything I install from CPAN is just a snapshot that may need to be reinstalled after the next OS update, and worse, experience shows that the installation process can sometimes go from simple to epic if some underlying untracked dependency changes. And when you install from CPAN, it installs dependencies from CPAN, even if those dependencies can be found in the OS repos. This basically sucks.

    Anyway, the changes in the Perl libraries were mostly for the better, to make the Perl API better map to the way Google worked, but still, it required digging into my old Perl code and looking at the Google docs.

  • The LCD interface is in two parts: a C-based daemon from a package called lcdproc, and my client code in Perl that talks to the daemon. For the new OS I needed to rebuild that daemon from source. Luckily, lcdproc had not advanced in 7 years, so I could just rebuild the old code. This was particularly lucky because I had made a big patch to the hardware driver to talk to my particular i2c expander that drove the LCD controller. I’m glad I did not have to figure out how to apply that patch to some completely new, changed version.
  • Raspbian Stretch switched from System V init to systemd, so my startup stuff, which was init-based, needed to be changed to systemd unit files. This was not too painful, and I actually like systemd for daemons, but it took a little while to create the files, set permissions, fix my mistakes, yadda yadda.

    Overall, this whole project was not really that complicated in retrospect, but it took more or less an entire weekend day, and it sure felt like a never-ending series of missteps and annoyances.

Getting Really Current

I should probably rewrite the clock in Python.

  • Python now has a mature library for talking to Raspberry Pi GPIO. It’s clean and simple.
  • Python has always had better Google integration, courtesy of Google. It would be a pleasure to switch to this.
  • I had already written Python bindings to talk to the LCD daemon. I don’t remember doing that, but apparently this is not the first time I’ve considered doing a Python rewrite.

But there are two roadblocks. First, the technical one: being a clock, this code is time-sensitive, and so the Perl version has multiple threads. There is basically a thread that ticks every second and various worker threads. The modern Pythonic way to accomplish the same thing (without threads — which Python has never done well and never will) is to use asyncio. Not to get into the details too deeply, but I have some issues with asyncio. It’s complicated, and it requires an all-or-nothing approach: your whole program, even the non-async parts, needs to be asyncio-ified, because they will otherwise block the parts that are.
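
For what it’s worth, the asyncio version of the clock’s skeleton would look something like this (just the shape of it, not the actual clock code):

    # The shape of an asyncio rewrite: a one-second ticker and a slower worker,
    # cooperatively scheduled with no threads. Not the actual clock code.
    import asyncio
    import datetime

    async def tick():
        while True:
            await asyncio.sleep(1)
            # real clock: update the LCD, decide whether to ring the chime
            print(datetime.datetime.now().strftime("%H:%M:%S"))

    async def refresh_calendar():
        while True:
            # real clock: hit the Google Calendar API; this must itself be async
            # (or pushed into an executor) or it will block tick()
            await asyncio.sleep(300)

    async def main():
        await asyncio.gather(tick(), refresh_calendar())

    asyncio.run(main())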

Second, I just don’t want to. Writing code that does the same thing as other code is no fun.


Anyway, today my alarm clock works exactly as it did in 2011, but it is running on a current version of Perl with current libraries on a current OS. It only took me the better part of my weekend. 🙁

Who’s going to do this for the IOT power outlet or window shade controller you bought at Fry’s?


Brother, can you spare a CPU cycle?

Are you familiar with Bitcoin and other crypto-currencies? These are currencies not supported by any government, which can be traded anonymously and somewhat frictionlessly. They are gaining traction among people who want to make illegal transactions, who want to avoid taxes, or who just want freedom. And now, increasingly, they are being used not as a currency for trade, but as an investment. As a result, people are working hard to make more Bitcoin, a process of complex mathematical operations called mining. Some organizations have set up huge computer farms employing custom hardware to do nothing more than mine Bitcoin.

And now, reports are surfacing that various websites are embedding JavaScript in their pages that surreptitiously mines Bitcoin on your computer while you read their site. When I first heard of this, I was rather upset. After all, big, evil website people are using a facility in my browser to run code on my computer that doesn’t benefit me in any way! They are stealing my CPU, making my computer sluggish, and costing me real money in wasted power. On a cell phone, they’re even draining my battery! How dare they?

[ Also, from a pure engineering standpoint, when there are people out there using special-purpose computer chips to mine Bitcoin, can it possibly make sense to try to do the same using JavaScript on my cellphone? The answer is yes, if you’re not the one paying for the phone or the electricity. ]

Anyway, after some time, I calmed down and realized that this isn’t so bad, and it could even be … good?

You see, you cannot look at something you don’t like in a vacuum. It must be compared to real alternatives. We hear over and over from the advertising industry that websites need to make money. (not this one — ed.) That’s what pays for the content, the computers, and the personnel. Ads make the “free” Internet as we know it possible.

But ads suck. They are ugly and intrusive. They involve a third party, the advertiser, in every page I visit. There’s me, there’s the website, and then there’s this guy over in the corner who wants to sell me viagra. Because the money is coming from the advertiser, he gets a say in the content of the site. Furthermore, he gets to know that I visited the site, and can start to collect all kinds of information on my browsing history, eventually creating a dossier on my personal habits that he will use to target me for the rest of my life. And finally, he gets to suck up my CPU cycles and my limited screen real estate in order to serve me his ads. It’s maddening!

I don’t like it, have never liked it, and would much prefer a subscription supported Internet. But that’s never going to happen, so I’m told.

So how is letting people mine bitcoin better?

  • no screen real-estate
  • no data collection
  • no third party

Sure, they’re sucking up my CPU and battery just as the advertisers do, but probably no worse, and perhaps that’s a fair price to pay.

Now, there are some problems with this approach that would have to be dealt with. First, I’m not sure Bitcoin mining is really a productive use of CPU cycles, and Bitcoin may itself be just a flash in the pan. So perhaps the world will consider other, better ways to monetize my CPU cycles, maybe selling them to someone like AWS or Google, which would then remarket them for normal productive purposes. Second, I think for such a system to be fair, users need to know what is going on. There should be a way to know what a site is “costing” you. And finally, we need an easy and straightforward way for users to say “no,” and then, of course, the website would be perfectly within its rights to say “no” to serving up content. Turning off JavaScript entirely is not a great solution, as JavaScript is just too embedded in the modern web to give up.

So, here’s a business idea for you. Create a company that pays websites to host the company’s JavaScript on their sites. No data is collected, but CPU cycles are consumed if the user allows it, and the site owner is informed if the user does not. The company in turn remarkets the CPU cycles as a service to its customers, something lightweight and context-free, like Amazon Lambda.

Electricity started out with small local generators, even “home” generators, then increasingly centralized for a long time, and today, there is a big push for “distributed” generation, which is basically decentralized power generation, but maintaining a connection to the power grid.

Computing started out small on home computers and has become increasingly centralized in big data centers. Will the next step be to reverse that pattern?

Making weird stuff

An interesting aspect of my job is that I am sometimes asked to do weird stuff. I like weird stuff, so this is a good thing.

Recently, I was asked to build a “turkey detector.” You see, my boss wanted a demo that shows we can help scientists deploy sensors, and collect and process the data from them. Furthermore, we wanted a demo that would show machine learning in action.

Oh, did I mention that there are a lot of wild turkeys strutting around this campus?

So we figured, hey, let’s deploy some cameras, take pictures, send them to a turkey classifier model, and put the results on a website. What could be easier?

There are some interesting constraints:

  • not having a lot of spare time to do this (have other more pressing responsibilities)
  • minimal resources
  • no wired electrical or network access in the most turkey-friendly outdoor areas

I added a few constraints of my own, to make things more interesting:

  • the cameras need to be able to withstand the weather and operate without physical interaction for a long time. Not that we need these cameras to stay up forever, but a real camera trap should be able to last.
  • don’t use proprietary hardware or software — everything open source (well, almost everything, as you’ll see)

Commercial, already-built camera traps exist, but they, as far as I know, do not sync up with wifi and do not keep themselves charged. You have to go out to change batteries and collect your memory card. Bah.

Electronic Hardware

For the computer, I went with the Raspberry Pi Zero W after starting with a Raspberry Pi 3. These are ARM-based circuit boards with built-in WiFi and a special port for attaching a camera. The “3” has a multi-core processor and more ports. The Zero is slower but smaller and uses about 1/2 to 1/3 the power of the Pi 3.

I like the RPi platform. It’s reasonably open, simple to use (its Raspbian OS is basically like any Debian-based Linux), and crazy cheap. The Pi Zero W is $10! For the camera I used the companion “PiCamera 2” designed to go with the RPi. It’s an 8Mpixel tiny phone camera jobbie, fixed focus and fixed aperture, about $30.

Getting hard-wired power to the unit was out of the question, so this needs to work from a battery. I ended up using a single LiPo cell, 3.7 V, 4.4 Ah. This is enough to power the Pi for about a day without any new charge, but it’s not enough to go two days or run overnight. To charge it, two small 6 V solar panels, 3.5 W each, do the job. The panels require a charge controller to adjust the panel output to the battery. Also, the Pi requires 5 V, and the battery only puts out ~3.5-4 V, so a boost converter to make a stable 5 V is also required. The panels were a huge ripoff, at $11/Wp, and I’m not thrilled with the cost and quality of the charge controller and boost converter either, but they do work.

Here’s a picture of all the kit, in a cardboard box in my backyard. Well, almost all the kit. An RPi 3 is pictured, which I moved away from because of its power use. Also, there are two panels in the operating camera.

On a sunny, or moderately sunny day, there is enough power to operate the camera and charge the battery. On a cloudy day, the battery drains slowly, or doesn’t drain, but doesn’t charge either.

Either way, I needed a solution to deal with night. As it happens, the RPi has neither a clock to keep time while it’s off, nor a means of turning itself off or on. Because of this, I built a small companion board with an Attiny84A microcontroller connected to a FET. The Attiny turns the RPi on in the morning and off at night, thus saving precious power. The Attiny itself does not draw much power, so it can run continuously.

The communications protocol between the processors is primitive, but functional. The RPi has two signal wires going to the Attiny. One is pulsed periodically to tell the Attiny that the RPi is still functioning. If the pulses stop, the Attiny waits a few minutes, turns off the power, then waits a few more minutes and turns it back on again. The other pin is used to tell the Attiny that the RPi wants to be turned off. After getting a pulse on this pin, the Attiny shuts down the RPi for an hour. The RPi also gets a low-battery signal from the boost converter, which it can use to determine that it should shut itself down (cleanly) and then request that the Attiny cut its power. I try to avoid shutting down the Pi willy-nilly, because the filesystem might be corrupted.
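
On the Pi side, the whole “protocol” is just a couple of GPIO pulses. Something like this sketch, with made-up pin numbers and timings:

    # Sketch of the Pi's side of the two-wire protocol. Pin numbers and timings
    # are placeholders for illustration.
    import time
    import RPi.GPIO as GPIO

    HEARTBEAT = 17        # pulsed periodically: "I'm still alive"
    SHUTDOWN_REQ = 27     # pulsed once: "please cut my power soon"

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(HEARTBEAT, GPIO.OUT, initial=GPIO.LOW)
    GPIO.setup(SHUTDOWN_REQ, GPIO.OUT, initial=GPIO.LOW)

    def pulse(pin, width=0.05):
        GPIO.output(pin, GPIO.HIGH)
        time.sleep(width)
        GPIO.output(pin, GPIO.LOW)

    # normal operation: keep petting the watchdog
    for _ in range(10):
        pulse(HEARTBEAT)
        time.sleep(10)

    # low battery or nightfall: ask to be powered down, then shut down cleanly
    pulse(SHUTDOWN_REQ)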

I said that the RPi has no clock. When it boots, it tries to connect to a network and then get the time from a time server. Once it has done this, it can proceed with normal operation and keep good time while it’s running. If it can’t get the time from the Internet, it asks to be shut down so it can try again later. The RPi decides it’s time to be shut off for the night by comparing the current time with sunset, as calculated from a solar ephemeris library.
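
The sunset check itself is only a few lines. Here is one way to do it with the astral package (the coordinates below are placeholders, not my actual site):

    # One way to do the "is it past sunset?" check, using the astral package.
    # The coordinates are placeholders, not my actual site.
    import datetime
    from astral import LocationInfo
    from astral.sun import sun

    site = LocationInfo("camera", "somewhere", "UTC", 37.9, -122.3)
    events = sun(site.observer, date=datetime.date.today())   # times come back in UTC
    now = datetime.datetime.now(datetime.timezone.utc)
    if now > events["sunset"]:
        print("after sunset: ask the watchdog for a shutdown until morning")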

All said, the power system I came up with is basically just barely adequate, and even when the battery simply cannot run the system, the unit turns off in a controlled fashion and, assuming the battery eventually charges again, the Pi will reboot eventually and get back up.

A next-gen camera (already in the works) will have a much bigger battery and charging system. On eBay, one can get 20 W or 25 W panel kits with a charge controller for about $1/Wp for the panel, as it should be. These charge controllers are designed for 12 V lead-acid batteries, though, so I’ll need to use a nice alarm-system-type AGM battery. A nice thing about most of these charge controllers is that they tend to have USB charger ports, so I do not need the 5 V buck converter. Everything is large, though, and setting up a rack to hold the large panel is a problem I have not yet solved. But overall, the lesson I’m learning is that everything is easier when you have power to spare.

The Attiny watchdog circuit works pretty well, but it was a hand-made hack on a proto board, and the communication “protocol” is pretty lame. Since deploying the first camera, I have designed a board to replace my hack on subsequent cameras. The new board is built around an Atmega328p, which is the same processor that the Arduino uses. I abandoned the Attiny because I want to use i2c to communicate, and the 328p has an i2c hardware module. You can bit-bang i2c (that is, do it in software) with the Attiny, but the RPi i2c controller has a bug which makes it unreliable with slower i2c devices. Anyway, the i2c interface allows transferring more complex messages between the processors, like “shut down in 3 minutes and then wait 7 hours 47 minutes before starting me up again.” The new board just plugs into the RPi, and you plug the power cable into it rather than the RPi, so it’ll be unfussy to set up.
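
With i2c, a message like that can be a small structured payload instead of a pulse. Here is a sketch of what the Pi side might look like with the smbus2 library; the device address, command byte, and payload layout are my illustration, not a finished spec:

    # Sketch of a richer i2c message to the watchdog board using smbus2. The
    # address, command byte, and payload layout are illustrative, not a spec.
    from smbus2 import SMBus

    WATCHDOG_ADDR = 0x26      # placeholder 7-bit address of the Atmega328p
    CMD_SLEEP = 0x01          # placeholder "go to sleep" command register

    def request_sleep(bus, shutdown_in_min, wake_after_min):
        # two 16-bit little-endian minute counts
        payload = [shutdown_in_min & 0xFF, shutdown_in_min >> 8,
                   wake_after_min & 0xFF, wake_after_min >> 8]
        bus.write_i2c_block_data(WATCHDOG_ADDR, CMD_SLEEP, payload)

    with SMBus(1) as bus:     # i2c bus 1 on the Pi header
        request_sleep(bus, shutdown_in_min=3, wake_after_min=7 * 60 + 47)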

The board design:

Finished board in action:


The software side of things was comparatively simple and only took a few hours to get up and running. (I’ve spent a lot more time on it since, though!) On the RPi, a Python script snaps pictures every few seconds. It compares each image to the previous one it took, and if they are sufficiently different (that is, something in the scene has changed), it sends the image to a server. If the picture is the same as the last, the server is only pinged to let it know the camera is still alive. Hours can go by without any pictures being sent. The core of that check looks roughly like the sketch below.
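
Here is the gist of it, with placeholder threshold, file names, and server URL (the real script also drives the camera and handles retries):

    # Gist of the change-detection-and-upload loop. Threshold, file names, and the
    # server URL are placeholders; the real script also drives the camera.
    import numpy as np
    import requests
    from skimage.io import imread
    from skimage.color import rgb2gray
    from skimage.transform import resize

    SERVER = "https://example.com/camera"     # placeholder endpoint
    THRESHOLD = 0.03                          # mean absolute pixel difference, 0..1

    def different_enough(path_a, path_b):
        a = resize(rgb2gray(imread(path_a)), (120, 160))
        b = resize(rgb2gray(imread(path_b)), (120, 160))
        return float(np.mean(np.abs(a - b))) > THRESHOLD

    if different_enough("previous.jpg", "current.jpg"):
        with open("current.jpg", "rb") as f:
            requests.post(SERVER + "/image", files={"image": f})
    else:
        requests.post(SERVER + "/ping", json={"camera": "cam1"})   # keep-alive only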

On the server, the images are analyzed using an ML model to determine if there are turkeys. I did not have a sufficient training set of turkey / non-turkey images to build a custom model, so I am using a pre-cooked Amazon AWS service called Rekognition to ID the poultry. This is my one concession to proprietary “cloud” stuff. Rekognition is idiot-proof, so it’s maybe not the best demo of ML chops, but, eh. One thing about using AWS is that it costs money, so the optimization of not sending redundant images is important for not racking up a huge bill.
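
The Rekognition call is about as simple as advertised. Sketched here in Python with boto3 (my server actually makes the equivalent call from NodeJS, and the region and label-matching details below are my own choices):

    # Roughly how the turkey check maps onto Rekognition. This Python/boto3 version
    # just shows the shape of the call; the real server-side code is not Python.
    import boto3

    rek = boto3.client("rekognition", region_name="us-west-2")   # region is a placeholder

    def has_turkey(jpeg_bytes, min_confidence=70):
        resp = rek.detect_labels(Image={"Bytes": jpeg_bytes},
                                 MaxLabels=20, MinConfidence=min_confidence)
        return any("turkey" in label["Name"].lower() for label in resp["Labels"])

    with open("current.jpg", "rb") as f:
        print("turkey!" if has_turkey(f.read()) else "no turkey")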

The server is written in NodeJS, and receives and processes the pictures as well as hosting a simple website. All communication is JSON messages over REST over HTTPS.

When it comes to software, I have an ongoing war with myself. I like to keep things simple for me (not so much typing) but also like to keep things actually simple (not reliant on large, complex frameworks and libraries that bring in zillions of dependencies and things I don’t understand and can’t easily maintain). To this end, I tried to stick to libraries available from apt, and even then, not too many. On the RPi, I used the standard camera and GPIO libraries that come with Raspbian, and installed the python3 modules requests and scikit-image. (I chose not to use OpenCV, which is a shame, because it looks cool. But there is no pre-built package and I didn’t want to build it from source. Building complex things from source on the Pi takes a loooong time, trust me!) On the server, I used Node with Express and I think no other modules — though to be fair, package management in Node is a breeze anyway.

Oh, and of course there is some code running on the Attiny and there is some HTML and JavaScript for the client side — so this little project encompasses four or five separate languages, depending on how you count. I think I could have done the server in Python, but I’m still grappling with concurrency in Python. Maybe one day I’ll figure it out.

Code, in all its uncommented, late-night hacking glory is here:

Putting it in a Box

Probably the hardest part of this project for me was figuring out how to do it physically. Getting a proper waterproof box was easy. But how to mount the panel to the box, and then mount both of them to a tree or light stanchion, was quite tricky for this non-mechanical engineer. I spent quite some time poking around Home Depot trying to figure out how to make it work. In the end, I bought a bunch of angle aluminum and started cutting and drilling and filing and screwing until I got something that more or less worked. It was a lot of effort, though, and doesn’t look very good. I really wish I could offload this part to someone more mechanically inclined than me.

Anyway, that’s it. We finally got the first camera deployed and after fixing a few bugs, it has started catching turkeys.

Does it Work?

You can see the system in operation here: This is my “personal” dev server, and so it may be up or down or not showing pictures when you visit. Also, the second camera pictured is showing my office and will do so for the time being.

Here are some turkeys we caught today:

Moore’s last sigh

I have a strange perspective on Moore’s Law that I can’t seem to shake.

The common expression of Moore’s Law is that transistor density on an integrated circuit grows exponentially. The typical time constant is a doubling every 18 to 24 months. Over the years, Moore’s Law has been remarkably stable. Good folks argue about if and when it will come to an end, or if it already has. People also argue about whether Moore’s Law itself was endogenous to semiconductor scaling; that is, whether the Law became a goal and so became self-fulfilling.

Here’s my take: Rather than observing a constant stream of innovation in semiconductors, what we have witnessed over the last 50 years or so has been the slow, logical expansion of a single innovation: that of the planarized transistor and integrated circuit made from them. The integrated circuit is credited to Jack Kilby who demonstrated the first IC in 1958. However, the basis of real chips is the planar transistor, invented by Jean Hoerni at Fairchild in 1959.

From there, the entirety of the history of Moore’s Law is a logical and inevitable consequence. The exponential growth was not due to a stream of genius innovation, but to an entirely canny and methodical march of engineering, taking an idea to its logical conclusion: larger wafers, smaller lithography, more metal layers, thinner gate oxides, etc. The logical conclusion is electronic devices that operate on roughly 10^0 to 10^3 electrons at a time. It is those limits, along with thermal limits, that are the endgame we see today. (There are other complications, like deep-UV lithography, that appear very difficult to solve, but can probably be solved at some price.)

I don’t want to belittle the work of so many brilliant engineers who have toiled hard in the salt mines of chip design. Of course, they (we!) have brought the world a fantastic technology. But if you back out just a bit on timescale, I think it’s easy to see that Moore’s Law is not telling you as much about electronics and computers as it is describing a state of the last 50 years.

We have lived in a period of exponential improvement in electronics. That period, like all periods of exponential change, will end; perhaps already has. At any but the smallest timescales, major technology innovations look like step functions followed by a longer and partially overlapping period of diffusion into society. Aeronautics, combustion engines, solar cells, wind turbines, you name it.

None of this makes me sad, though I wish airplanes were still getting faster and better. In the multi-generational mad dash to take semiconductors to their limits, we’ve probably passed over lots of side opportunities to use chips in novel ways, ways that require more design attention per transistor rather than more total transistors. I hope that we will see more novel electronic devices in the future, as brains that were focused on more and faster start to look for other ways to do interesting things in electronics.



SJWs vs. Engineers

This week has had more than its fair share of depressing news, but I had a personally depressing moment yesterday when I saw that one of my favored very nerdy chat groups had an explosive thread about how “social justice warriors” are ruining engineering. This chat group is usually quite apolitical and consists mostly of electrical engineers of various stripes and skill levels helping each other out with their projects. Need a filter with a certain response? Need to know how to safely interface a triac to a microcontroller? Want to know how to write VHDL? Measure the time between two pulses on the order of nanoseconds? Calculate the feedpoint impedance of a certain dipole antenna? This is the place for all that.

Well, for the last couple of days it’s also been the place to hear men complain about Social Justice Warriors who want to ruin engineering by making it more amenable to women.

I don’t have the energy or time to break down what a bunch of toxic baloney such protestation is. It’s been covered well enough in the articles and threads discussing the infamous Google memo. In short, though you can (and right-wingers do) almost always find someone on the left saying something dumb, or, more often, something that requires effort plus a wealth of context to understand (and typical academic writing exacerbates this problem), questioning why women do not often pursue engineering careers remains perfectly valid. Doing so does not imply that you ultimately expect male/female participation in engineering to be 50/50, but it does mean that you want whatever ratio ultimately emerges to be based on the preferences and aptitudes of the individuals involved, rather than, say, the preferences of their would-be professors, colleagues, mentors, companies they might work for, parents, etc. It also doesn’t mean that there are not systematic differences between the sexes. It only means that each individual’s opportunities depend on their own particular gifts, not the average of some particular group of which they might be a member.

Is this rocket science? Are we seriously still debating this shit?

Part of my consternation comes from my particular boundary-straddling lifestyle. I like to tell people I am an engineer by training and temperament. But I also live in the world of policy analysis and social science. And I’ll tell you, I’m tired of hearing engineers and social scientists insult each other and disparage the way the other group thinks. The reality is that both groups could use a dose of the others’ discipline. Social scientists, particularly ones who want to implement programs, could learn a lot from the grim conservative (small “c”) pragmatism that engineers bring to problem-solving — the understanding that nature doesn’t want your machine (or program) to work, and you have to design your program so that it works despite nature. Similarly, engineers really need to know much more about human behavior, human experience, and history. Knowing how your creations will affect people may slow you down, but it will make your work so much more valuable in the long run with less potential for negative consequences.

Anyway, I want a hat that says “Engineers for Social Justice.”


IoT information security will never come under the prevailing business model

The business model for smart devices in the home is shaping up to be simple and bad: cheap hardware and no service contracts. That sounds great for consumers — after all, why should I pay $100 for a smart power outlet made of a $0.40 microcontroller and a $1 relay, and why should I have to pay a monthly fee to switch it — but it is going to have serious negative ramifications.

Let me start by saying that many bits have already been spilled about basic IoT security:

  • making sure that messages sent to and from your device back to the manufacturer cannot be faked or intercepted
  • making sure that your IoT device is not hacked remotely, turning it into someone else’s IoT device
  • making sure that your data, when it is at rest in the vendor’s systems is not stolen and misused


As things stand, none of that is going to happen satisfactorily, primarily because of incompatible incentives. When you sell a device for the raw cost of its hardware, with minimal markups and no opportunity for ongoing revenue, you also have no incentive for ongoing security work. Or any kind of work for that matter. If you bought the device on the “razor + blade” model, where the device was cheap, but important revenue was based on your continued use of the product, things might be different.

Worse than that, however: in order to find new revenue streams (immediate ones, or potential future ones), vendors have strong incentives to collect all the data they can from the device. You do not know — even when the devices are operating as designed — exactly what they are doing. They are in essence little listening bugs willingly planted all over your home, and you do not know what kind of information they are exfiltrating, nor do you know who is ultimately receiving that information.

I think there is a solution to this problem, if people want it, and it requires two basic parts to work properly:


  1. We need a business model for smart devices that puts strong incentives in place for vendors to continue to support their products. This will never happen with the cheapie Fry’s Electronics special IoT Doohickey of the Week. Instead, we probably need a real engagement with sticks (liability) and carrots (enhanced revenue) that are driven by ongoing contractual engagement. That is, money should continue to flow.


  2. We need a standardized protocol for IoT that provides for a gateway at the home, and encrypted data on both sides of the gateway, but with the gateway owner having access to the encryption keys on the inner side of the gateway. The standardized protocol would have fields for the vendor name and hosts, as well as a human readable json-style payload — and a rule that nothing can be double-encrypted in the payload, keeping it from the eyes of the user.

Under such an arrangement, users, or their gateways acting as proxies for them, could monitor what is coming and going. You could program your gateway, for example, to block unnecessary information from http messages sent by your device.

Of course, the vendors, seeing the blocked information might decide not to provide their service, and that’s their right, but at least everyone would know the score.


Will this happen? Well, I think vendors with the long view of things would probably see #1 as appealing. Users will not, perhaps. But that is because users are not fully aware of the consequences of inviting someone else to monitor their activities. Perhaps people will think differently after a few sensational misuses of their data.

Vendors will fight #2 mightily. Of course, they could ignore it completely, with the potential consequence that the large number of users who insist on it become excluded from their total available market. With a critical mass of people using gateways that implement #2, I think we could tip things, but right now it seems a long shot.


I am quite pessimistic about all this. I don’t think we’ll see #1 or #2 unless something spectacularly bad happens first.


For the record, I do use a few IoT devices in my home. There are two flavors: those I built myself and those I bought. The self-built ones exist entirely within my network and do not interact with any external server. I obviously know what they do. Those I bought exist on a DMZ-style network with no access to my home network at all (at least if my router is working as intended). This mitigates the worry of pwned devices accessing my computer and files, but does not stop them from sending whatever they collect back to the mothership.


machines don’t think but they can still be unknowable

I still read Slashdot for my tech news (because I’m old, I guess) and came across this article, AI Training Algorithms Susceptible to Backdoors, Manipulation. The article cites a paper that shows how the training data for a “deep” machine learning algorithm can be subtly poisoned (intentionally or otherwise) such that the algorithm can be trained to react abnormally to inputs that don’t seem abnormal to humans.

For example, an ML algorithm for self-driving cars might be trained to recognize stop signs by showing it thousands of stop signs, as well as thousands of things that are not stop signs, and telling it which is which. Afterwards, when shown new pictures, the algorithm does a good job classifying them into the correct categories.

But let’s say someone added a few pictures of stop signs with Post-It notes stuck on them to the “non stop sign” pile. The program would learn to recognize a stop sign with a sticky on it as a non stop sign. Unless you test your algorithm with pictures of stop signs with sticky notes on them (and why would you even think of that?), you’ll never know that your algorithm will happily misclassify them. Et voilà, you have created a way to selectively get self-driving cars to zip through stop signs as if they weren’t there. This is bad.

What caught my eye about this research is that the authors seem not to fully grasp that this is not a computer problem or an algorithm problem. It is a more general problem that philosophers, logicians, and semiologists have grappled with for a long time. I see it as a sign of the intellectual poverty of most programmers’ education that they did not properly categorize this issue.

Everyone has different terms for it, and I don’t know jack about philosophy, but it really boils down to:

  • Can you know what someone else is thinking?
  • Can you know how their brain works?
  • Can you know they perceive the same things you perceive the same way?

You can’t.

Your brain is wholly isolated from the brains of everyone else. You can’t really know what’s going on inside their heads, except so much as they tell you, and for that, even if everyone is trying to be honest, we are limited by “language” and the mapping of symbols in your language to “meaning” in the heads of the speaker and listener can never truly be known. Sorry!

Now, in reality, we seem to get by. If someone says he is hungry, that probably means he wants food. But what if someone tells you there is no stop sign at the intersection? Does he know what a stop sign is? Is he lying to you? How is his vision? Can he see colors? What if the light is kinda funny? All you can do is rely on your experience with that person’s ability to identify stop signs to know if he’ll give you the right answer. Maybe you can lean on the fact that he’s a licensed driver. However, you don’t know how his wet neural net has been trained by life experience, and you have to make a guess about the adequacy of his sign-identification skills.

These deep learning algorithms, neural nets and the like, are not much like human brains, but they do have this in common with our brains: they are too complex to be made sense of. That is, we can’t look at the connections of neurons in a brain, nor can we look at the parameters of a trained neural network, and say, “oh, those are about sticky notes on stop signs.” All those coefficients are uninterpretable.

We’re stuck doing what we have done with people since forever: we “train” them, then we “test” them, and we hope to G-d that the test we gave covers all the scenarios they’ll face. It works, mostly, kinda, except when it doesn’t. (See every pilot-induced aviation accident, ever.)

I find it somewhat ironic that statisticians have worked hard to build models whose coefficients can be interpreted, while engineers are racing to build things around more sophisticated models that do neat things but whose inner workings can’t quite be understood. Interpreting model coefficients is part of how scientists assess the quality of their models and how they use them to tell stories about the world. But with the move to “AI” and deep learning, we’re giving that up. We are gaining the ability to build sophisticated tools that can do incredible things, but we can only assess their overall external performance — their F scores — with limited ability to look under the hood.


Liberal Misogyny Detected

I really like insulting Donald Trump and his coterie of contemptibles. They’re the worst. The whole administration is a brightly burning trash fire so yuge, it can probably be seen from space. Parliamentarian aliens receiving our radio transmissions on the planet Zepton are right now debating whether they should conquer (or vaporize) Earth on humanitarian grounds.

I really do not like these people.

I insult because I can, because it makes me feel a little better, and because it’s about the only power I have over this administration. In fact, I like insulting Trump so much that I wrote software so that I could insult him hundreds of times a day on each of thousands of other people’s computers. It’s the sort of thing that helps keep me going.

As part of that project, I’ve tried to maintain certain editorial standards for insults. I have three basic criteria:

  1. First and foremost, insults should be funny. Mean is fine, even encouraged, but funny is non-negotiable. That’s why I shamelessly stole the lion’s share of my insults from Jezebel, where a team of professional Trump trollers labor night and day for our benefit.
  2. The second requirement I have for my insults is that they should be specific to the person. A reasonably intelligent person should, with high confidence, know from the insult which despicable person is being insulted. This is reasonably easy for Trump. Nobody will be confused about “Orange Julius Caesar” or “Hairpiece Come to Life”. But it gets harder when you want to go after the rest of the swamp creatures. These are OK: “Angry Second-Assistant High School Football Coach” Pence, “White Nationalist Potato Sack” Bannon, “Apple-Cheeked Hate Goblin” Sessions, and “Eddie Munster Understudy” Ryan.
  3. The final requirement is that the insults not be racist, sexist, or any other kind of -ist. I think this is really a restatement of the first requirement: the insult should be about the person as an individual.

I’ve particularly struggled with Spicer and Conway. They are pathetic creatures, who literally lie for a living. But they are rather generic pathetic creatures, and it’s hard to come up with insults for either that would not be mistaken for someone else. That’s why, in my software, I have disabled them by default. The lists are too short and I’m just not proud of them. Most of the Spicer and Conway insults include references to their job titles or name, which is weak comedic sauce. (Others have lampooned them much more successfully than I have. Melissa McCarthy skewers Spicer brilliantly by showing the ridiculousness of his impotent, misdirected rage.)

Because I am in constant need for fresh insults, particularly for the back-benchers, I created this web form to let fans of Detrumpify help me out. So far, I’ve received more than 500 suggestions. There have been real gems in there (eg: “King Leer”), but also a lot of dreck that fails on the requirements above.

Among all the suggestions, the ones for Conway have been the worst by far, and I’ve not accepted any of them into Detrumpify. Without exception, they refer to how ugly she is, or her sexual organs, or her sexual behavior, or her lack of suitability as a sexual partner, or something cruel (and sexual) that the author would like to do to her. None of this is remotely OK.

Many of the suggestions related to her appearance. I’ve got mixed feelings on this. Very many of the Trump and Trump-bootlicker insults are linked to appearance. Trump really is orange, and that will always be funny. Sessions really does resemble a Keebler Elf, and Bannon does resemble a corpse. Does that mean it’s okay to make fun of Conway’s appearance, too? Yes, I think it does. However, in practice, it’s hard to do so in a way that is more specific than simply calling her sexually unworthy.

What kills me is that this crap is coming from my team, the supposedly unsexist one, or at least the less sexist one. Or, at the very very least, the one that is embarrassed by its own sexism and is consciously working against its own unconscious biases? Oh, no? That’s not happening? Oh, shit. I guess that means we get to squander half our collective talent pool and make massive “Trump Sized” mistakes indefinitely.

So, as we endlessly debate whether sexism was a factor in the 2016 presidential election, I have new reason to feel common cause with women everywhere who have direct “no duh”-level data that sexism is alive and well on the left. I’ve got a spreadsheet full of direct evidence, supplied willingly by sexists who hate Trump. In the words of Der Gropenführer: sad.

PS – Though we don’t see all that much of her these days, I do still very much want to insult Ms. Conway. She certainly deserves it. Who can crack this nut?


Tragicomedy of the Contraposicommons

I was going to call this entry the “Tragedy of the Anticommons,” but economists have already coined “anticommons” to refer to something entirely different from what I want to talk about. (An anticommons is something that would be socially beneficial as a commons but, for reasons of law or raw power, is controlled by a private actor to the detriment of most. For example, patented intellectual property.)

Today’s post is about resources that actually become more valuable as more people use them. This is something like the network effect that many Internet services enjoy, but it applies broadly to many societal projects. I’m talking about things like schools, insurance markets, libraries, and emergency preparedness, all of which benefit from wide participation.

I decided to write this post after I found out that several of our son’s primary-grade compatriots will not be returning to public school next year. Instead, they will be going to various private institutions. Interestingly, some of the parents took the time to write messages to the left-behinds explaining that their decision was not due to any kind of inadequacy of the school, but just a desire to do “what’s best” for their child.

It’s difficult to argue that a parent should not do “what’s best” for his or her kid, but I’m going to take a shot at it anyway, because that approach to parenting, taken to its logical endpoint, is deeply antisocial. And while I think US history doesn’t have many examples of destructive trends continuing “to their logical endpoints,” I fear that this time may be different.

Your child in school is not just consuming an education — he or she is part of someone else’s education, too. In important ways, school is a team sport, and everyone does better if more people participate. This is not only true for kids whose parents have lots of spare time and resources to participate in school activities. It is just as true among kids whose parents do not have the time or money to participate heavily. All children bring a unique combination of gifts, talents, and complexities that enhance the learning experience for others. Attending a school with rich participation across the socioeconomic spectrum enlarges everyone’s world view, to everyone’s benefit.

Now, am I asking people with the ability to opt for a private education to altruistically sacrifice their children for someone else’s benefit? I guess the answer is a definite “kinda.” Kinda, because I think it is a sacrifice only if everyone acts unilaterally, and that is the essence of the problem.

You see, there is a prisoner’s dilemma type of situation going on here. If we all send our kids to public school, we are all invested together, and we will want the appropriate resources brought to bear on their education. The result is likely a pretty good school. But if enough people “bail,” the school will be deprived of their children’s participation. Furthermore, “exiters” have a strong incentive to oppose continuing to provide significant resources to the school. Such resources come from taxes, and exiters and their children will derive no direct benefit from them. Perhaps nobody they personally know will derive any such benefit. (There are plenty of indirect benefits to educating other people’s kids, but that’s another article.) As more and more people peel off, the school is diminished and the incentive to peel off becomes ever greater — the dreaded death spiral. At last the school is left only with students from families unable or uninterested in leaving. (NB: I am not suggesting that the “unable” and “uninterested” go together in any way, only that that’s who will be left at such a school.)

In the language of game theory, we reach a Nash equilibrium: everyone who cares and has the means has defected, and the public schools are ruined. The interesting thing about this particular equilibrium is that a better and cheaper outcome is possible for everyone if participants trust each other and cooperate. (After all, private school costs a lot, and serious studies show that private schools underperform public ones.) So, I’m not suggesting altruism per se, but something more akin to an enlightened model of cooperation.
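
For the game-theory-inclined, here is a tiny sketch with made-up payoff numbers (illustrative only, not data from any school district) showing the structure I mean: exiting is each family’s best response no matter what the others do, yet everyone staying beats everyone exiting.

```python
# Illustrative payoffs chosen to exhibit the prisoner's-dilemma structure.
# payoff[(my_choice, their_choice)] = my payoff
payoff = {
    ("stay", "stay"): 3,   # strong public school, no tuition
    ("stay", "exit"): 1,   # weakened school, still no tuition
    ("exit", "stay"): 4,   # private school plus a still-healthy public system
    ("exit", "exit"): 2,   # private school, hollowed-out public system, high cost
}

# Whatever the other family does, "exit" pays slightly more for me...
for theirs in ("stay", "exit"):
    best = max(("stay", "exit"), key=lambda mine: payoff[(mine, theirs)])
    print(f"if they {theirs}, my best response is to {best}")

# ...so (exit, exit) is the Nash equilibrium, even though mutual staying
# leaves everyone better off than mutual exiting.
print("mutual stay:", payoff[("stay", "stay")], "vs mutual exit:", payoff[("exit", "exit")])
```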

But today’s blog post isn’t even about public primary education. It is about the general phenomenon, which I fear is widespread, of people “pulling the ripcord” on important societal institutions and resources — bailing out, to varying degrees, based on their ability to do so. Consider:

  • Gated communities, private security forces, and even gun ownership for protection, represent people rejecting the utility of civil policing.
  • Water filters and bottled water are people rejecting the need for a reliable water supply.
  • Skipping vaccinations is people rejecting the public health system.
  • Sending kids to private colleges and universities is pulling the ripcord on state university systems, and the many, many societal benefits that come with them (open research, an educated populace, etc)
  • Uber and, eventually, self-driving cars represent a rejection of public transit (perhaps shredding a ripcord that was pulled long ago with the widespread adoption of the automobile)
  • Pulling out of subscription news outlets, leaving them to cheapen and become less valuable

Here are some you may not have heard of that I think are coming:

  • Private air travel as an escape from the increasingly unpleasant airline system, with decreased investment, safety, and reliability of the latter over time
  • Completely energy self-sufficient homes (with storage) as an escape from the electric system, at a total cost well in excess of that of an integrated electric system

Another example: as many of you know, I’ve also been dabbling quite a bit in amateur radio. In that hobby, I have discovered a large contingent of hams who are prepping for TEOTWAWKI — The End Of The World As We Know It. They are stocking up on food, water, and ammunition in preparation for society’s total collapse. What I find upsetting about this is that such prepping probably makes collapse more likely. These people are not pouring their resources into community emergency prep groups, nor are they the types to advocate for taxation to pay for robust emergency services. Instead, they’re putting resources into holes in their backyards.

All of these are rational decisions in a narrow sense (even the anti-vax thing) and probably cause little or no harm assuming few others make the same decision. However, once many people start to make these decisions, things can unravel quite quickly. (Regarding anti-vax, these effects can already be seen in outbreaks in certain communities with a high penetration of opt-outs.)

Perhaps some of these are “OK”, even better than OK. People abandoning broadcast television for subscription media is bad for broadcast television, but maybe that’s fine; we certainly don’t owe TV anything. (On the other hand, major broadcast networks that had to satisfy a huge swath of the populace at once were forced into compromises that had certain societal benefits, like everyone getting more or less the same news.)

But I worry that we are in a time of dangerously anti-social behavior, enhanced by a party whose ideology seems not only to reject socialism but to reject the idea that “society” exists at all. In the process, they seem willing to destroy not only social programs that deliberately transfer wealth (social insurance, etc.), but also any social institution built on wide group participation. Even among people not predisposed to “exit,” growing inequality and the fear of its consequences may encourage — or even force — people to consider pulling the ripcord, too. The result, if this accelerates, could be disaster.