Brother, can you spare a CPU cycle?

Are you familiar with Bitcoin and other crypto-currencies? These are currencies not backed by any government, which can be traded anonymously and somewhat frictionlessly. They are gaining traction among people who want to make illegal transactions, who want to avoid taxes, and who just want freedom. And increasingly, they are being used not as a currency for trade, but as an investment. As a result, people are working hard to create more Bitcoin through a computationally intensive process called mining. Some organizations have set up huge computer farms of custom hardware that do nothing but mine bitcoin.

And now, reports are surfacing that various websites are embedding javascript in their pages that surreptitiously mines bitcoin on your computer while you read their site. When I first heard of this, I was rather upset. After all, big, evil website people are using a facility in my browser to run code on my computer that doesn’t benefit me in any way! They are stealing my CPU, making my computer sluggish, and costing me real money in wasted power. On a cell phone, they’re even draining my battery! How dare they?

[ Also, from a pure engineering standpoint, when there are people out there using special-purpose computer chips to mine Bitcoin, can it possibly make sense to try to do the same using Javascript on my cellphone? The answer is yes, if you’re not the one paying for the phone or the electricity. ]

Anyway, after some time, I calmed down and realized that this isn’t so bad, and it could even be … good?

You see, one cannot judge something one doesn’t like in a vacuum. It must be compared to real alternatives. We hear over and over from the advertising industry that websites need to make money. (not this one — ed.) That’s what pays for the content, the computers, and the personnel. Ads make the “free” Internet as we know it possible.

But ads suck. They are ugly and intrusive. They involve a third party — the advertiser — in every page I visit. There’s me, there’s the website, and then there’s this guy over in the corner who wants to sell me viagra. Because the money is coming from the advertiser, he gets a say in the content of the site. Furthermore, he gets to know that I visited the site, and can start to collect all kinds of information on my browsing history, eventually creating a dossier on my personal habits that he will use to target me for the rest of my life. And finally, he gets to suck up my CPU cycles and my limited screen real estate in order to serve me his ads. It’s maddening!

I don’t like it, have never liked it, and would much prefer a subscription supported Internet. But that’s never going to happen, so I’m told.

So how is letting people mine bitcoin better?

  • no screen real-estate
  • no data collection
  • no third party

Sure, they’re sucking up my CPU and battery just as the advertisers do, but probably no worse, and perhaps that’s a fair price to pay.

Now, there are some problems with this approach that would have to be dealt with. First, I’m not sure Bitcoin mining is really a productive use of CPU cycles, and Bitcoin may itself be just a flash in the pan. So perhaps the world will converge on other, better ways to monetize my CPU cycles — maybe selling them to someone like AWS or Google, which would then remarket them for normal productive purposes. Second, I think for such a system to be fair, users need to know what is going on. There should be a way to know what a site is “costing” you. And finally, we need an easy and straightforward way for users to say “no” — and then, of course, the website would be perfectly within its rights to refuse to serve up content. Turning off Javascript entirely is not a great solution, as Javascript is just too embedded in the modern web to give up.

So, here’s a business idea for you. Create a company that pays websites to host its javascript. No data is collected, but CPU cycles are consumed if the user allows it, and the site owner is informed if they do not. The company in turn remarkets the CPU cycles as a service to customers — something lightweight and context-free, like AWS Lambda.

Electricity started out with small local generators, even “home” generators, then became increasingly centralized for a long time. Today, there is a big push for “distributed” generation — basically decentralized power generation that maintains a connection to the grid.

Computing started out small on home computers and has become increasingly centralized in big data centers. Will the next step reverse that pattern?

Making weird stuff

An interesting aspect of my job is that I am sometimes asked to do weird stuff. I like weird stuff, so this is good.

Recently, I was asked to build a “turkey detector.” You see, my boss wanted a demo showing that we can help scientists deploy sensors, and collect and process the data from them. Furthermore, we wanted a demo that would show machine learning in action.

Oh, did I mention that there are a lot of wild turkeys strutting around this campus?

So we figured, hey, let’s deploy some cameras, take pictures, send them to a turkey classifier model, and put the results on a website. What could be easier?

There are some interesting constraints:

  • not a lot of spare time to do this (I have other, more pressing responsibilities)
  • minimal resources
  • no wired electrical or network access in the most turkey-friendly outdoor areas

I added a few constraints of my own, to make things more interesting:

  • the cameras need to be able to withstand the weather and operate without physical interaction for a long time. Not that we need these cameras to stay up forever, but a real camera trap should be able to last.
  • don’t use proprietary hardware or software — everything open source (well, almost everything, as you’ll see)

Commercial, already-built camera traps exist, but as far as I know, they do not sync up with WiFi and do not keep themselves charged. You have to go out to change batteries and collect your memory card. Bah.

Electronic Hardware

For the computer, I went with the Raspberry Pi Zero W after starting with a Raspberry Pi 3. These are ARM-based circuit boards with built-in WiFi and a special port for attaching a camera. The “3” has a multi-core processor and more ports. The Zero is slower but smaller and uses about 1/2 to 1/3 the power of the Pi 3.

I like the RPi platform. It’s reasonably open, simple to use (its Raspbian OS is basically like any Debian-based Linux), and crazy cheap. The Pi Zero W is $10! For the camera I used the companion “Pi Camera v2” designed to go with the RPi. It’s an 8 Mpixel tiny phone-camera jobbie, fixed focus and fixed aperture, about $30.

Getting hard-wired power to the unit was out of the question, so this needs to work from battery. I ended up using a single LiPo cell, 3.7V 4.4Ah. This is enough to power the Pi for about a day without any new charge, but not enough to go two days or run overnight. For charging, two small 6V solar panels, 3.5W each, do the job. The panels require a charge controller to adapt the panel output to the battery. Also, the Pi requires 5V, and the battery only puts out ~3.5-4V, so a boost converter to make a stable 5V is also required. The panels were a huge ripoff at $11/Wp, and I’m not thrilled with the cost and quality of the charge controller and boost converter either, but they do work.
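For the curious, the sizing above can be sanity-checked with some back-of-envelope arithmetic. The battery and panel numbers are from the build; the Pi’s average draw and the solar derating are my assumptions, not measurements:

```python
# Back-of-envelope power budget for the camera. The Pi Zero W's average
# draw (with WiFi and camera active) and the solar figures marked
# "assumed" are illustrative guesses, not measured values.

battery_wh = 3.7 * 4.4          # one LiPo cell: 3.7 V x 4.4 Ah = ~16.3 Wh
pi_draw_w = 0.7                 # assumed average draw of Pi Zero W + camera

hours_on_battery = battery_wh / pi_draw_w
print(f"runtime on battery alone: {hours_on_battery:.0f} h")      # ~23 h: "about a day"

panel_wp = 2 * 3.5              # two 6 V panels, 3.5 Wp each
sun_hours = 4                   # assumed effective full-sun hours per day
derate = 0.7                    # assumed controller + converter losses

wh_harvested = panel_wp * sun_hours * derate
print(f"energy harvested on a decent day: {wh_harvested:.1f} Wh")  # ~19.6 Wh
```

Under those assumptions the panels roughly replace a day’s consumption on a sunny day, which matches the observed behavior: charging on sunny days, slowly losing ground on cloudy ones.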

Here’s a picture of all the kit, in a cardboard box in my backyard. Well, almost all the kit: an RPi 3 is pictured, which I moved away from because of its power use, and the operating camera has two panels.

On a sunny, or moderately sunny day, there is enough power to operate the camera and charge the battery. On a cloudy day, the battery drains slowly, or doesn’t drain, but doesn’t charge either.

Either way, I needed a solution to deal with night. As it happens, the RPi has neither a clock to keep time while it’s off, nor a means of turning itself off or on. Because of this, I built a small companion board with an Attiny84A microcontroller connected to a FET transistor. The Attiny actually turns the RPi on in the morning and off at night, thus saving precious power. The Attiny itself does not draw all that much power, so can run continuously.

The communications protocol between the processors is primitive, but functional. The RPi has two signal wires going to the Attiny. One is pulsed periodically to tell the Attiny that the RPi is still functioning. If the pulses stop, the Attiny waits a few minutes and then turns off the power, then waits a few more minutes and turns it back on again. The other pin is used to tell the Attiny that the RPi wants to be turned off. After getting a pulse on this pin, the Attiny shuts down the RPi for an hour. The RPi also gets a low-battery signal from the boost converter, which it can use to determine that it should shut itself down (cleanly) and then request that the Attiny turn it off. I try to avoid shutting down the Pi willy-nilly, because the filesystem might get corrupted.
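The watchdog logic described above can be sketched as a little state machine. The real firmware is C on the Attiny; this is a Python rendering for readability, and the specific timeouts are illustrative assumptions, not the deployed values:

```python
class Watchdog:
    """Sketch of the Attiny watchdog state machine (real firmware is C).
    All timing constants here are illustrative assumptions."""
    HEARTBEAT_TIMEOUT = 5 * 60     # seconds without a pulse before acting
    OFF_DELAY = 3 * 60             # wait after cutting power before restoring it
    SLEEP_AFTER_REQUEST = 60 * 60  # shutdown-request pin => one hour off

    def __init__(self):
        self.power_on = True
        self.since_heartbeat = 0
        self.off_timer = 0

    def heartbeat(self):
        # RPi pulsed the "still alive" wire
        self.since_heartbeat = 0

    def request_shutdown(self):
        # RPi pulsed the "please turn me off" wire
        self.power_on = False
        self.off_timer = self.SLEEP_AFTER_REQUEST

    def tick(self, seconds=1):
        if self.power_on:
            self.since_heartbeat += seconds
            if self.since_heartbeat > self.HEARTBEAT_TIMEOUT:
                self.power_on = False        # pulses stopped: cycle power
                self.off_timer = self.OFF_DELAY
        else:
            self.off_timer -= seconds
            if self.off_timer <= 0:
                self.power_on = True         # restore power, start fresh
                self.since_heartbeat = 0

wd = Watchdog()
wd.tick(6 * 60)        # six minutes pass with no heartbeat...
print(wd.power_on)     # False: the Attiny cut the power
wd.tick(4 * 60)
print(wd.power_on)     # True: ...and later turned it back on
```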

I said that the RPi has no clock. When it boots, it tries to connect to a network and get the time from a time server. Once it has done this, it can proceed with normal operation and keep good time while running. If it can’t get the time from the Internet, it asks to be shut down so it can try again later. The RPi decides it’s time to be shut off for the night by comparing the current time with sunset, as calculated from a solar ephemeris library.

All told, the power system I came up with is just barely adequate, but even when the battery simply cannot run the system, the unit turns off in a controlled fashion and, assuming the battery eventually charges again, the Pi will reboot and get back up.

A next-gen camera (already in the works) will have a much bigger battery and charging system. On eBay, one can get 20W or 25W panel kits with charge controller for about $1/Wp for the panel, as it should be. These charge controllers are designed for 12V lead-acid batteries, though, so I’ll need to use an alarm-system-type AGM battery. A nice thing about most of these charge controllers is that they tend to have USB charger ports, so I do not need the separate 5V converter. Everything is large, though, and setting up a rack to hold the large panel is a problem I have not yet solved. But overall, the lesson I’m learning is that everything is easier when you have power to spare.

The Attiny watchdog circuit works pretty well, but it was a hand-made hack on a proto board and the communication “protocol” is pretty lame. Since deploying the first camera, I have designed a board to replace my hack on subsequent cameras. The new board is powered by an Atmega328p, the same processor that the Arduino uses. I abandoned the Attiny because I want to use i2c to communicate, and the 328p has an i2c hardware module. You can bit-bang (that is, do it in software) i2c with the Attiny, but the RPi’s i2c controller has a bug that makes it unreliable with slower i2c devices. Anyway, the i2c interface allows transferring more complex messages between the processors, like “shut down in 3 minutes and then wait 7 hours 47 minutes before starting me up again.” The new board just plugs into the RPi, and you plug the power cable into it rather than the RPi, so it’ll be unfussy to set up.
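To make that “shut down in 3 minutes, wait 7 hours 47 minutes” message concrete, here is one way it could be packed into bytes for the i2c link. The command byte and field layout are made up for illustration; the actual register map is whatever the firmware defines, and the real transfer would go through the Pi’s smbus interface:

```python
import struct

# Hypothetical message layout for the i2c link: a command byte, a
# shutdown delay in minutes (1 byte), and a sleep duration in minutes
# (2 bytes, little-endian). Invented for illustration only.
CMD_SLEEP = 0x01

def pack_sleep(delay_min, sleep_min):
    return struct.pack('<BBH', CMD_SLEEP, delay_min, sleep_min)

def unpack_sleep(msg):
    cmd, delay_min, sleep_min = struct.unpack('<BBH', msg)
    assert cmd == CMD_SLEEP
    return delay_min, sleep_min

# "shut down in 3 minutes, then wait 7 h 47 min before starting me up again"
msg = pack_sleep(3, 7 * 60 + 47)
print(msg.hex())          # 0103d301  (467 minutes = 0x01d3, little-endian)
print(unpack_sleep(msg))  # (3, 467)
```

Four bytes instead of timed pulses on two wires — that’s the whole win of moving to i2c.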

The board design:

Finished board in action:

Software

The software side of things was comparatively simple and only took a few hours to get up and running. (I’ve spent a lot more time on it since, though!) On the RPi, a python script snaps pictures every few seconds. It compares each image to the previous one, and if they are sufficiently different (that is, something in the scene has changed), it sends the image to a server. If the picture is the same as the last, the server is just pinged to let it know the camera is still alive. Hours can go by without any pictures being sent.
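The “is this frame different enough to upload?” test can be as simple as a mean absolute difference between frames. The actual script uses scikit-image; this numpy stand-in, with an assumed threshold, shows the shape of the idea:

```python
import numpy as np

# Minimal sketch of the change-detection step. The real script uses
# scikit-image; a plain mean-absolute-difference with an assumed
# threshold stands in for it here.
CHANGE_THRESHOLD = 10.0   # mean per-pixel difference (0-255 scale), assumed

def scene_changed(prev, curr, threshold=CHANGE_THRESHOLD):
    # widen to int16 so the subtraction can't wrap around
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(diff.mean()) > threshold

a = np.zeros((480, 640), dtype=np.uint8)   # empty scene
b = a.copy()
b[200:350, 200:450] = 200                  # a turkey-sized blob appears

print(scene_changed(a, a))   # False: nothing moved, just ping the server
print(scene_changed(a, b))   # True: upload this frame
```

Tuning the threshold is the fiddly part: too low and wind-blown leaves cost you AWS money, too high and you miss the turkeys.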

On the server, the images are analyzed using the ML model to determine if there are turkeys. I did not have a sufficient training set of turkey / non-turkey images to build a custom model, so I am using a pre-cooked Amazon AWS model called Rekognition to ID the poultry. This is my one concession to proprietary “cloud” stuff. Rekognition is idiot-proof, so maybe not the best demo of ML chops, but, eh. One thing about using AWS is that it costs money, so the optimization of not sending redundant images is important for not racking up a huge bill.

The server is written in NodeJS, and receives and processes the pictures as well as hosting a simple website. All communication is JSON messages over REST over HTTPS.

When it comes to software, I have an ongoing war with myself. I like to keep things simple for me (not so much typing) but also like to keep things actually simple (not reliant on large, complex frameworks and libraries that bring in zillions of dependencies and things I don’t understand and can’t easily maintain). To this end, I tried to stick to libraries available from apt, and even then, not too many. On the RPi, I used the standard camera and GPIO libraries that come with Raspbian, and installed the python3 modules requests and scikit-image. (I chose not to use OpenCV, which is a shame, because it looks cool. But there is no pre-built package and I didn’t want to build it from source. Building complex things from source on the Pi takes a loooong time, trust me!) On the server, I used Node with Express and I think no other modules — though to be fair, package management in Node is a breeze anyway.

Oh, and of course there is some code running on the Attiny, and there is some HTML and Javascript for the client side — so this little project encompasses four or five separate languages, depending on how you count. I think I could have done the server in Python, but I’m still grappling with concurrency in Python. Maybe one day I’ll figure it out.

Code, in all its uncommented, late-night hacking glory is here: https://github.com/djacobow/turkeycam.

Putting it in a Box

Probably the hardest part of this project for me was figuring out how to do it physically. Getting a proper waterproof box was easy. But how to mount the panel to the box, and then mount both of them to a tree or light stanchion, was quite tricky for this non-mechanical engineer. I spent quite some time poking around Home Depot trying to figure out how to make it work. In the end, I bought a bunch of angle aluminum and started cutting and drilling and filing and screwing until I got something that more or less worked. It was a lot of effort, though, and doesn’t look very good. I really wish I could have offloaded this part to someone more mechanically inclined than me.

Anyway, that’s it. We finally got the first camera deployed and after fixing a few bugs, it has started catching turkeys.

Does it Work?

You can see the system in operation here: https://skunkworks.lbl.gov/turkeycam. This is my “personal” dev server, and so it may be up or down or not showing pictures when you visit. Also, the second camera pictured is showing my office and will do so for the time being.

Here are some turkeys we caught today:

Moore’s last sigh

I have a strange perspective on Moore’s Law that I can’t seem to shake.

The common expression of Moore’s Law is that transistor density on an integrated circuit grows exponentially. The typical time constant is a doubling every 18 to 24 months. Over the years, Moore’s Law has been remarkably stable. Good folks argue about whether and when it will come to an end, or if it already has. People also argue about whether Moore’s Law itself was endogenous to semiconductor scaling; that is, whether the Law became a goal and so became self-fulfilling.

Here’s my take: Rather than observing a constant stream of innovation in semiconductors, what we have witnessed over the last 50 years or so has been the slow, logical expansion of a single innovation: that of the planarized transistor and integrated circuit made from them. The integrated circuit is credited to Jack Kilby who demonstrated the first IC in 1958. However, the basis of real chips is the planar transistor, invented by Jean Hoerni at Fairchild in 1959.

From there, the entirety of the history of Moore’s Law is a logical and inevitable consequence. The exponential growth was not due to a stream of genius innovation, but to an entirely canny and methodical march of engineering, taking an idea to its logical conclusion: larger wafers, smaller lithography, more metal layers, thinner gate oxides, etc. The logical conclusion being electronic devices that operate on 10⁰–10³ electrons at a time. It is those limits, along with thermal limits, that are the endgame we see today. (There are other complications, like deep-UV lithography, that appear very difficult but can probably be solved at some price.)

I don’t want to belittle the work of so many brilliant engineers who have toiled hard in the salt mines of chip design. Of course, they (we!) have brought the world a fantastic technology. But if you back out just a bit on timescale, I think it’s easy to see that Moore’s Law is not telling you as much about electronics and computers as it is describing a state of the last 50 years.

We have lived in a period of exponential improvement in electronics. That period, like all periods of exponential change, will end; perhaps already has. At any but the smallest timescales, major technology innovations look like step functions followed by a longer and partially overlapping period of diffusion into society. Aeronautics, combustion engines, solar cells, wind turbines, you name it.

None of this makes me sad, though I wish airplanes were still getting faster and better. In the multi-generational mad-dash to take semiconductors to their limits, we’ve probably passed over lots of side opportunities to use chips in novel ways, ways that require more design attention per transistor than total transistors. I hope that we will see more novel electronic devices in the future, as brains that were focused on more and faster start to look for other ways to do interesting things in electronics.

SJWs vs. Engineers

This week has had more than its fair share of depressing news, but I had a personally depressing moment yesterday when I saw that one of my favorite very nerdy chat groups had an explosive thread about how “social justice warriors” are ruining engineering. This chat group is usually quite apolitical and consists mostly of electrical engineers of various stripes and skill levels helping each other out with their projects. Need a filter with a certain response? Need to know how to safely interface a triac to a microcontroller? Want to know how to write VHDL? Measure the time between two pulses on the order of nanoseconds? Calculate the feedpoint impedance of a certain dipole antenna? This is the place for all that.

Well, for the last couple of days it’s also been the place to hear men complain about Social Justice Warriors who want to ruin engineering by making it more amenable to women.

I don’t have the energy or time to break down what a bunch of toxic baloney such protestation is. It’s been covered well enough in the articles and threads discussing the infamous Google memo. In short, though you can (and right wingers do) almost always find someone on the left saying something dumb, or, more often, something that requires an effort plus a wealth of context to understand (and typical academic writing exacerbates this problem), questioning why women do not often pursue engineering careers remains perfectly valid. Doing so does not imply that you ultimately expect male/female participation in engineering to be 50/50, but it does mean that you want whatever ratio ultimately emerges to be based on the preferences and aptitudes of the individuals involved, rather than, say, the preferences of their would-be professors, colleagues, mentors, companies they might work for, parents, etc. It also doesn’t mean that there are not systematic differences between the sexes. It only means that each individual’s opportunities depend on their own particular gifts, not the average of some particular group of which they might be a member.

Is this rocket science? Are we seriously still debating this shit?

Part of my consternation comes from my particular boundary-straddling lifestyle. I like to tell people I am an engineer by training and temperament. But I also live in the world of policy analysis and social science. And I’ll tell you, I’m tired of hearing engineers and social scientists insult each other and disparage the way the other group thinks. The reality is that both groups could use a dose of the others’ discipline. Social scientists, particularly ones who want to implement programs, could learn a lot from the grim conservative (small “c”) pragmatism that engineers bring to problem-solving — the understanding that nature doesn’t want your machine (or program) to work, and you have to design your program so that it works despite nature. Similarly, engineers really need to know much more about human behavior, human experience, and history. Knowing how your creations will affect people may slow you down, but it will make your work so much more valuable in the long run with less potential for negative consequences.

Anyway, I want a hat that says “Engineers for Social Justice.”

IoT information security will never come under the prevailing business model

The business model for smart devices in the home is shaping up to be simple and bad: cheap hardware and no service contracts. That sounds great for consumers — after all, why should I pay $100 for a smart power outlet made of a $0.40 microcontroller and a $1 relay, and why should I have to pay a monthly fee to switch it — but it is going to have serious negative ramifications.

Let me start by saying that many bits have already been spilled about basic IoT security:

  • making sure that messages sent to and from your device back to the manufacturer cannot be faked or intercepted
  • making sure that your IoT device is not hacked remotely, turning it into someone else’s IoT device
  • making sure that your data, when it is at rest in the vendor’s systems is not stolen and misused

As things stand, none of that is going to happen satisfactorily, primarily because of incompatible incentives. When you sell a device for the raw cost of its hardware, with minimal markups and no opportunity for ongoing revenue, you also have no incentive for ongoing security work. Or any kind of work for that matter. If you bought the device on the “razor + blade” model, where the device was cheap, but important revenue was based on your continued use of the product, things might be different.

Worse than that, however, in order to find new revenue streams (immediate, or potential future ones), vendors have strong incentives to collect all the data they can from the device. You do not know — even when the devices are operating as designed — exactly what they are doing. They are in essence little listening bugs willingly planted all over your home, and you do not know what kind of information they are exfiltrating, nor do you know who is ultimately receiving that information.

I think there is a solution to this problem, if people want it, and it requires two basic parts to work properly:

  1. We need a business model for smart devices that puts strong incentives in place for vendors to continue to support their products. This will never happen with the cheapie Fry’s Electronics special IoT Doohickey of the Week. Instead, we probably need a real engagement with sticks (liability) and carrots (enhanced revenue) that are driven by ongoing contractual engagement. That is, money should continue to flow.

  2. We need a standardized protocol for IoT that provides for a gateway at the home, and encrypted data on both sides of the gateway, but with the gateway owner having access to the encryption keys on the inner side of the gateway. The standardized protocol would have fields for the vendor name and hosts, as well as a human-readable JSON-style payload — and a rule that nothing in the payload may be double-encrypted to keep it from the eyes of the user.
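To make #2 concrete, here is a sketch of what the gateway-side filter could look like. The field names (“vendor”, “host”, “payload”) and the blocklist are my inventions, not part of any real protocol; the point is that because the inner payload is readable JSON, the gateway owner can see and strip what leaves the house:

```python
import json

# Sketch of a gateway filter under the proposed protocol. Field names
# and the blocklist are assumptions for illustration.
BLOCKED_FIELDS = {"audio_snippet", "wifi_clients", "location"}

def filter_outbound(raw_msg):
    """Strip blocked fields from an outbound device message; return the
    cleaned message and a list of what was removed."""
    msg = json.loads(raw_msg)
    payload = msg["payload"]
    stripped = sorted(set(payload) & BLOCKED_FIELDS)
    for field in stripped:
        del payload[field]
    return json.dumps(msg), stripped

outbound = json.dumps({
    "vendor": "Doohickey-of-the-Week Inc.",
    "host": "telemetry.example.com",
    "payload": {"switch_state": "on", "wifi_clients": 9, "location": "..."},
})
cleaned, removed = filter_outbound(outbound)
print(removed)    # ['location', 'wifi_clients']
print(cleaned)    # only switch_state survives in the payload
```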

Under such an arrangement, users, or their gateways acting as proxies for them, could monitor what is coming and going. You could program your gateway, for example, to block unnecessary information from http messages sent by your device.

Of course, the vendors, seeing the blocked information might decide not to provide their service, and that’s their right, but at least everyone would know the score.

Will this happen? Well, I think vendors with the long view of things would probably see #1 as appealing. Users will not, perhaps. But that is because users are not fully aware of the consequences of inviting someone else to monitor their activities. Perhaps people will think differently after a few sensational misuses of their data.

Vendors will fight #2 mightily. Of course, they could ignore it completely, at the risk that the many users who insist on it become excluded from their total available market. With a critical mass of people using gateways that implement #2, I think we could tip things, but right now it seems a long shot.

I am quite pessimistic about all this. I don’t think we’ll see #1 or #2 unless something spectacularly bad happens first.

For the record, I do use a few IoT devices in my home. There are two flavors: those I built myself and those I bought. The self-built ones exist entirely within my network and do not interact with any external server; I obviously know what they do. Those I bought exist on a DMZ-style network with no access to my home network at all (at least if my router is working as intended). This mitigates the worry of pwned devices accessing my computer and files, but does not stop them from sending whatever they collect back to the mothership.

machines don’t think but they can still be unknowable

I still read Slashdot for my tech news (because I’m old, I guess) and came across this article, AI Training Algorithms Susceptible to Backdoors, Manipulation. The article cites a paper that shows how the training data for a “deep” machine learning algorithm can be subtly poisoned (intentionally or otherwise) such that the algorithm can be trained to react abnormally to inputs that don’t seem abnormal to humans.

For example, an ML algorithm for self-driving cars might be trained to recognize stop signs by showing it thousands of stop signs, as well as thousands of things that are not stop signs, and telling it which is which. Afterwards, when shown new pictures, the algorithm does a good job classifying them into the correct categories.

But let’s say someone added a few pictures of stop signs with Post-It notes stuck on them into the “non stop sign” pile. The program would learn to recognize a stop sign with a sticky on it as a non stop sign. Unless you test your algorithm with pictures of stop signs with sticky notes on them (and why would you even think of that?), you’ll never know that it will happily misclassify them. Et voilà, you have created a way to selectively get self-driving cars to zip through stop signs as if they weren’t there. This is bad.
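A toy example makes the mechanism vivid. This is not a deep net — just a 1-nearest-neighbor “classifier” over two made-up features — but the poisoning logic is the same: a handful of mislabeled sticky-note examples carve out a backdoor while clean inputs still classify correctly:

```python
# Toy poisoning demo: features are (looks_like_stop_sign, has_sticky_note),
# both invented for illustration. One poisoned training example teaches a
# 1-nearest-neighbor classifier that "stop sign + sticky note" is not a
# stop sign, while clean stop signs are unaffected.

train = [
    ((1, 0), "stop"),      # clean stop signs
    ((1, 0), "stop"),
    ((0, 0), "not_stop"),  # clean non-signs
    ((0, 1), "not_stop"),
    ((1, 1), "not_stop"),  # POISON: stop sign with sticky note, mislabeled
]

def classify(x):
    dist = lambda a, b: sum((i - j) ** 2 for i, j in zip(a, b))
    return min(train, key=lambda t: dist(t[0], x))[1]

print(classify((1, 0)))   # "stop": clean signs still classified correctly
print(classify((1, 1)))   # "not_stop": the sticky note hides the sign
```

Every test image without a sticky note passes, so the backdoor survives ordinary validation — which is exactly the paper’s point.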

What caught my eye about this research is that the authors seem not to fully grasp that this is not a computer problem or an algorithm problem. It is a more general problem that philosophers, logicians, and semiologists have grappled with for a long time. I see it as a sign of the intellectual poverty of most programmers’ education that they did not properly categorize this issue.

Everyone has different terms for it, and I don’t know jack about philosophy, but it really boils down to:

  • Can you know what someone else is thinking?
  • Can you know how their brain works?
  • Can you know they perceive the same things you perceive the same way?

You can’t.

Your brain is wholly isolated from the brains of everyone else. You can’t really know what’s going on inside their heads, except so much as they tell you, and for that, even if everyone is trying to be honest, we are limited by “language” and the mapping of symbols in your language to “meaning” in the heads of the speaker and listener can never truly be known. Sorry!

Now in reality, we seem to get by. If someone says he is hungry, that probably means he wants food. But what if someone tells you there is no stop sign at the intersection? Does he know what a stop sign is? Is he lying to you? How is his vision? Can he see colors? What if the light is kinda funny? All you can do is rely on your experience with that person’s ability to identify stop signs to know if he’ll give you the right answer. Maybe you can lean on the fact that he’s a licensed driver. However, you don’t know how his wet neural net has been trained by life experience, and you have to make a guess about the adequacy of his sign-identification skills.

These deep learning algorithms, neural nets and the like, are not much like human brains, but they do have this in common with our brains: they are too complex to be made sense of. That is, we can’t look at the connections of neurons in the brain, nor can we look at the parameters of a trained neural network, and say, “oh, those are about sticky notes on stop signs.” All those coefficients are uninterpretable.

We’re stuck doing what we have done with people since forever: we “train” them, then we “test” them, and we hope to G-d that the test we gave covers all the scenarios they’ll face. It works, mostly, kinda, except when it doesn’t. (See every pilot-induced aviation accident, ever.)

I find it somewhat ironic that statisticians have worked hard to build models whose coefficients can be interpreted, but engineers are racing to build things around more sophisticated models that do neat things, but whose inner workings can’t quite be understood. Interpreting model coefficients is part of how scientists assess the quality of their models and how they use them to tell stories about the world. But with the move to “AI” and deep learning, we’re giving that up. We are gaining the ability to build sophisticated tools that can do incredible things, but we can only assess their overall external performance — their F scores — with limited ability to look under the hood.

Liberal Misogyny Detected

I really like insulting Donald Trump and his coterie of contemptibles. They’re the worst. The whole administration is a brightly burning trash fire so yuge, it can probably be seen from space. Parliamentarian aliens receiving our radio transmissions on the planet Zepton are right now debating whether they should conquer (or vaporize) Earth on humanitarian grounds.

I really do not like these people.

I insult because I can, because it makes me feel a little better, and because it’s about the only power I have over this administration. In fact, I like insulting Trump so much that I wrote software so that I could insult him hundreds of times a day on each of thousands of other people’s computers. It’s the sort of thing that helps keep me going.

As part of that project, I’ve tried to maintain certain editorial standards for insults. I have three basic criteria:

  1. First and foremost, insults should be funny. Mean is fine, even encouraged, but funny is non-negotiable. That’s why I shamelessly stole the lion’s share of my insults from Jezebel, where a team of professional Trump trollers labor night and day for our benefit.
  2. The second requirement I have for my insults is that they should be specific to the person. A reasonably intelligent person should, with high confidence, know from the insult which despicable person is being insulted. This is reasonably easy for Trump. Nobody will be confused about “Orange Julius Caesar” or “Hairpiece Come to Life”. But it gets harder when you want to go after the rest of the swamp creatures. These are OK: “Angry Second-Assistant High School Football Coach” Pence, “White Nationalist Potato Sack” Bannon, “Apple-Cheeked Hate Goblin” Sessions, and “Eddie Munster Understudy” Ryan.
  3. The final requirement is that the insults not be racist, sexist, or any other kind of -ist. I think this is really a restatement of the first requirement: the insult should be about the person as an individual.

I’ve particularly struggled with Spicer and Conway. They are pathetic creatures, who literally lie for a living. But they are rather generic pathetic creatures, and it’s hard to come up with insults for either that would not be mistaken for someone else. That’s why, in my software, I have disabled them by default. The lists are too short and I’m just not proud of them. Most of the Spicer and Conway insults include references to their job titles or name, which is weak comedic sauce. (Others have lampooned them much more successfully than I have. Melissa McCarthy skewers Spicer brilliantly by showing the ridiculousness of his impotent, misdirected rage.)

Because I am in constant need of fresh insults, particularly for the back-benchers, I created this web form to let fans of Detrumpify help me out. So far, I’ve received more than 500 suggestions. There have been real gems in there (e.g., “King Leer”), but also a lot of dreck that fails on the requirements above.

Among all the suggestions, the ones for Conway have been the worst by far, and I’ve not accepted any of them into Detrumpify. Without exception, they refer to how ugly she is, or her sexual organs, or her sexual behavior, or her lack of suitability as a sexual partner, or something cruel (and sexual) that the author would like to do to her. None of this is remotely OK.

Many of the suggested insults relate to her appearance. I’ve got mixed feelings on this. Very many of the Trump and Trump bootlicker insults are linked to appearance. Trump really is orange, and that will always be funny. Sessions really does resemble a Keebler Elf and Bannon does resemble a corpse. Does that mean it’s okay to make fun of Conway’s appearance, too? Yes, I think it does. However, in practice, it’s hard to do so in a way that is more specific than simply calling her sexually unworthy.

What kills me is that this crap is coming from my team, the supposedly unsexist one, or at least the less sexist one. Or, at the very very least, the one that is embarrassed by its own sexism and is consciously working against its own unconscious biases? Oh, no? That’s not happening? Oh, shit. I guess that means we get to squander half our collective talent pool and make massive “Trump Sized” mistakes indefinitely.

So, as we endlessly debate whether sexism was a factor in the 2016 presidential election, I have new reason to feel common cause with women everywhere who have direct “no duh”-level data that sexism is alive and well on the left. I’ve got a spreadsheet full of direct evidence, supplied willingly by sexists who hate Trump. In the words of Der Gropenführer: sad.


PS – Though we don’t see all that much of her these days, I do still very much want to insult Ms. Conway. She certainly deserves it. Who can crack this nut?

 

Tragicomedy of the Contraposicommons

I was going to call this entry the “Tragedy of the Anticommons,” but economists have already coined anticommons to refer to something entirely different from what I want to talk about. (An anticommons is something that would be socially beneficial as a commons but, for reasons of law or raw power, is controlled by a private actor to the detriment of most. For example, patented intellectual property.)

Today’s post is about resources that actually become more valuable as more people use them. This is something like the network effect that many Internet services enjoy, but it applies broadly to many societal projects. I’m talking about things like schools, insurance markets, libraries, and emergency preparedness — institutions that benefit from wide participation.

I decided to write this post after I found out that several of our son’s primary-grade compatriots will not be returning to public school next year. Instead, they will be going to various private institutions. Interestingly, some of the parents took the time to write messages to the left-behinds explaining that their decision was not due to any kind of inadequacy of the school, but just a desire to do “what’s best” for their child.

It’s difficult to argue that a parent should not do “what’s best” for his or her kid, but I’m going to take a shot at it anyway, because that approach to parenting, taken to its logical endpoint, is deeply antisocial. And while I think US history doesn’t have many examples of destructive trends continuing “to their logical endpoints,” I fear that this time may be different.

Your child in school is not just consuming an education — he or she is part of someone else’s education, too. In important ways, school is a team sport, and everyone does better if more people participate. This is not only true of kids whose parents have lots of spare time and resources to participate in school activities. It is just as true of kids whose parents do not have the time or money to participate heavily. All children bring a unique combination of gifts, talents, and complexities that enhance the learning experience for others. Attending a school with rich participation across the socioeconomic spectrum enlarges everyone’s world view, to everyone’s benefit.

Now, am I asking people with the ability to opt for a private education to altruistically sacrifice their children for someone else’s benefit? I guess the answer is a definite “kinda.” Kinda, because I think it is a sacrifice only if everyone acts unilaterally, and that is the essence of the problem.

You see, there is a prisoner’s dilemma type of situation going on here. If we all send our kids to public school, we are all invested together, and we will want the appropriate resources brought to bear on their education. The result is likely a pretty good school. But if enough people “bail,” the school will be deprived of their children’s participation. Furthermore, “exiters” have a strong incentive to oppose continued funding for the school: such resources come from taxes, and exiters and their children will derive no direct benefit from them. Perhaps nobody they personally know will derive any such benefit. (There are plenty of indirect benefits to educating other people’s kids, but that’s another article.) As more and more people peel off, the school is diminished and the incentive to peel off becomes ever greater — the dreaded death spiral. At last the school is left only with the students from families unable or uninterested in leaving. (NB: I am not suggesting that the “unable” and “uninterested” go together in any way, only that that’s who will be left at such a school.)

In the language of game theory, we reach a Nash equilibrium: everyone who cares and has the means has defected, and the public schools are ruined. The interesting thing about this kind of equilibrium is that a better and cheaper outcome is possible for everyone if participants trust each other and cooperate. (After all, private schools cost a lot, and serious studies show that they underperform public schools.) So I’m not suggesting altruism per se, but something more akin to an enlightened model of cooperation.
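The death-spiral dynamic can be sketched as a toy model (all numbers invented): suppose school quality rises with participation, each family stays only if the public school beats its private option, and the process repeats as families react to each other’s choices.

```python
def school_quality(f):
    # Quality as a function of the fraction f of families enrolled,
    # clamped to [0, 100]. The slope is steeper than 1:1, which makes
    # participation self-reinforcing in both directions.
    return max(0.0, min(100.0, 120 * f - 10))

def update(f):
    # Families' private-option values are spread uniformly from 0 to 100,
    # so the fraction choosing to stay equals quality / 100.
    return school_quality(f) / 100

def run(f, rounds=40):
    # Iterate until participation settles.
    for _ in range(rounds):
        f = update(f)
    return f

print(run(0.60))  # above the tipping point: settles at 1.0, everyone stays
print(run(0.45))  # below it: unravels all the way to 0.0, the death spiral
```

Both all-stay and all-exit are stable equilibria here; which one you land in depends entirely on where participation starts, which is the sense in which mutual trust is self-fulfilling — and in which a modest initial exodus can be fatal.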

But today’s blog post isn’t even about public primary education. It is about the general phenomenon, which I fear is widespread, of people “pulling the ripcord” on important societal institutions and resources — bailing out, to varying degrees, based on their ability to do so. Consider:

  • Gated communities, private security forces, and even gun ownership for protection, represent people rejecting the utility of civil policing.
  • Water filters and bottled water are people rejecting the need for a reliable water supply.
  • Skipping vaccinations is people rejecting the public health system
  • Sending kids to private colleges and universities is pulling the ripcord on state university systems, and the many, many societal benefits that come with them (open research, an educated populace, etc)
  • Uber, and eventually, self-driving cars represent rejecting transit (perhaps shredding a ripcord that was pulled long ago with the widespread adoption of the automobile)
  • Pulling out of subscription news outlets, leaving them to cheapen and become less valuable

Here are some you may not have heard of that I think are coming:

  • Private air travel as an escape from the increasingly unpleasant airline system, with decreased investment, safety, and reliability of the latter over time
  • Completely energy self-sufficient homes (with storage) as an escape from the electric system, with a total cost well in excess of an integrated electric system

Another example: as many of you know, I’ve also been dabbling quite a bit in amateur radio. In that hobby, I have discovered a large contingent of hams who are prepping for TEOTWAWKI — The End Of The World As We Know It. They are stocking up on food, water, and ammunition in preparation for society’s total collapse. What I find upsetting about this is that such prepping probably makes collapse more likely. These people are not pouring their resources into community emergency prep groups, nor are they the types to advocate for taxation to pay for robust emergency services. Instead, they’re putting resources into holes in their backyards.

All of these are rational decisions in a narrow sense (even the anti-vax thing) and probably cause little or no harm assuming few others make the same decision. However, once many people start to make these decisions, things can unravel quite quickly. (Regarding anti-vax, these effects can already be seen in outbreaks in certain communities with a high penetration of opt-outs.)
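For the anti-vax case specifically, the arithmetic behind “unravel quite quickly” is the standard herd-immunity threshold: with basic reproduction number R0, an outbreak is suppressed only while the immune fraction exceeds 1 − 1/R0, so a few percentage points of opt-outs can erase the entire margin.

```python
def herd_immunity_threshold(r0):
    # Standard epidemiological rule of thumb: an outbreak dies out only
    # if the immune share of the population exceeds 1 - 1/R0.
    return 1 - 1 / r0

# Measles is extremely contagious; its R0 is commonly quoted as 12-18.
for r0 in (12, 15, 18):
    print(f"R0 = {r0}: need {herd_immunity_threshold(r0):.0%} immune")
```

With measles-level contagiousness the threshold sits above 90%, which is why communities with even modest opt-out rates are where the outbreaks show up first.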

Perhaps some of these are “OK”, even better than OK. People abandoning broadcast television for subscription media is bad for broadcast television, but maybe that’s fine; we certainly don’t owe TV anything. (On the other hand, major broadcast networks that had to satisfy a huge swath of the populace at once were forced into compromises that had certain societal benefits: like everyone getting more or less the same news.)

But I worry that we are in a time of dangerously anti-social behavior, enhanced by a party whose ideology seems not only to reject socialism but to reject the idea that “society” exists at all. In the process, they seem willing to destroy not only social programs that deliberately transfer wealth (social insurance, etc), but also any social institution built on wide group participation. Even among people not predisposed to “exit,” growing inequality and the fear of its consequences may encourage — or even force — people to consider pulling the ripcord, too. The result, if this accelerates, could be disaster.

Marketing Genius

Over the past couple of months, in stolen moments and late night coding sessions, I’ve quietly been inventing a little piece of ham radio gear intended to facilitate using one’s radio remotely.

I thought it would be a good way to try my hand at designing a useful product, from start to finish, including complete documentation, packaging, etc.

I posted about it to a few ham radio forums and it turned out nobody was interested, so instead of making a product, I’m just throwing it up on github for people to ignore forever.

I’m no marketing genius.

 

 

Rigminder 2017-2017 RIP

Cold Turkey

This morning, I started my regular morning ritual as usual. I got up, complained about my back, put bread in the toaster, water in the kettle, and then went to my phone to see what’s happening.

Except that last thing didn’t work. Facebook wouldn’t load.

Why? Because my better half convinced me that it was time to take a Face-cation. Last night we logged into our accounts and let each other change our passwords. As a result, we are unable to access our own accounts, locked in a pact of mutual Facebook stasis.

I can say that several times already today I have pretty much instinctually opened a tab to read FB. In my browser, just typing the first letter ‘f’ is all it takes to open that page. Each time I’ve been greeted by a password prompt. Poor me.

Well, if FB is my heroin, let this plug be my methadone. We’ll see how that goes.