A simple and cheap PM2.5 Air Quality Meter

The air quality where I live has suddenly thrust itself into our consciousness. I regularly visit the Bay Area Air Quality Management District site and PurpleAir to find out what is happening in my neighborhood.

However, I wanted to know what the air quality was inside my house. Of course, I could just buy a PurpleAir sensor for nearly $200, but it turns out that the technology inside those sensors is something called the Plantower PMS5003. My friend Jeremy pointed me towards these ingenious little sensors, which draw in air and blow it in front of a laser, where dust particles scatter some of the light, which is detected by photodiodes. (Datasheet here) For reasons you may have read about online, this is not the same as the “gold-standard” technology that EPA PM sensors use, but studies by some scientists suggest it’s not bad. And it’s cheap. You can get this module from Adafruit for $40 or on eBay for even less. I bought mine from an eBay seller for $20, but it came without a break-out board, which made wiring a bit more difficult, though not too hard. It’s only three wires, after all!

Thing is, the module does the heavy lifting, including all the signal processing, but it’s not a complete system by itself. You need a computer to do something with the data.

I found that the Raspberry Pi Zero W makes a nice little companion for the PMS5003. The Pi is powered by a USB adapter, and has a 5V pin from which you can steal the necessary power for the sensor. You can send the transmitted serial data from the sensor right into the UART0 receive pin of the Raspberry Pi. That plus a little bit of software, et voila, a simple sensor platform.
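
Decoding the sensor’s output is straightforward. Here’s a minimal sketch of the idea (assuming the pyserial package and the frame layout from the datasheet; the actual code in my repo differs in the details): the PMS5003 streams 32-byte frames at 9600 baud, so you sync on the two magic bytes, check the checksum, and unpack the big-endian fields.

    # Minimal PMS5003 reader sketch. Assumes pyserial; /dev/serial0 is UART0.
    import serial
    import struct

    def read_pm25(port="/dev/serial0"):
        with serial.Serial(port, baudrate=9600, timeout=2.0) as ser:
            while True:
                # Every frame begins with the magic bytes 0x42 0x4D.
                if ser.read(1) != b"\x42" or ser.read(1) != b"\x4d":
                    continue
                body = ser.read(30)  # length word + 13 data words + checksum
                if len(body) != 30:
                    continue
                words = struct.unpack(">15H", body)  # big-endian 16-bit fields
                # Checksum covers the magic bytes and all but the last word.
                if (0x42 + 0x4d + sum(body[:-2])) != words[-1]:
                    continue
                # words[1:4] are CF=1 PM1.0/PM2.5/PM10; words[4:7] are the
                # "atmospheric environment" values.
                return words[5]  # PM2.5 in ug/m^3

    print(read_pm25())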

At first, I coded mine just to write the results every second to a .csv file that I could load into a spreadsheet. But the copying and pasting quickly got old, so I decided to make a little webserver instead. Mine is set up on my home wifi and I can access it from my computer or phone just by going to its web address. It presents the data in various levels of time averaging: the first graph is second by second, then minute by minute, hour by hour, and day by day.
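
The multi-resolution averaging is nothing clever; conceptually it’s just bucketing samples by truncated timestamp, something like this (illustrative names, not the actual code):

    # Toy sketch of rolling readings up into minute/hour/day buckets.
    from collections import defaultdict
    from datetime import datetime

    buckets = {
        "minute": defaultdict(list),
        "hour": defaultdict(list),
        "day": defaultdict(list),
    }

    def record(pm25, when=None):
        when = when or datetime.now()
        buckets["minute"][when.strftime("%Y-%m-%d %H:%M")].append(pm25)
        buckets["hour"][when.strftime("%Y-%m-%d %H")].append(pm25)
        buckets["day"][when.strftime("%Y-%m-%d")].append(pm25)

    def averages(level):
        # level is "minute", "hour", or "day"
        return {k: sum(v) / len(v) for k, v in sorted(buckets[level].items())}

    record(12.5)
    record(14.1)
    print(averages("minute"))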

It’s a simple enough project that I think just about anybody could do.

Here’s the code on GitHub.

Here’s a picture of the simple UI I made:

Here’s what it looks like:

Making weird stuff

An interesting aspect of my job is that I am sometimes asked to do weird stuff. I like weird stuff, so this is a good thing.

Recently, I was asked to build a “turkey detector.” You see, my boss wanted a demo showing that we can help scientists deploy sensors, and collect and process the data from them. Furthermore, we wanted a demo that would show machine learning in action.

Oh, did I mention that there are a lot of wild turkeys strutting around this campus?

So we figured, hey, let’s deploy some cameras, take pictures, send them to a turkey classifier model, and put the results on a website. What could be easier?

There are some interesting constraints:

  • not having a lot of spare time to do this (I have other, more pressing responsibilities)
  • minimal resources
  • no wired electrical or network access in the most turkey-friendly outdoor areas

I added a few constraints of my own, to make things more interesting:

  • the cameras need to be able to withstand the weather and operate without physical interaction for a long time. Not that we need these cameras to stay up forever, but a real camera trap should be able to last.
  • don’t use proprietary hardware or software — everything open source (well, almost everything, as you’ll see)

Commercial, already-built camera traps exist, but they, as far as I know, do not sync up with wifi and do not keep themselves charged. You have to go out to change batteries and collect your memory card. Bah.

Electronic Hardware

For the computer, I went with the Raspberry Pi Zero W after starting with a Raspberry Pi 3. These are ARM-based circuit boards with built-in WiFi and a special port for attaching a camera. The “3” has a multi-core processor and more ports. The Zero is slower but smaller and uses about 1/2 to 1/3 the power of the Pi 3.

I like the RPi platform. It’s reasonably open, simple to use (its Raspbian OS is basically like any Debian-based Linux), and crazy cheap. The Pi Zero W is $10! For the camera I used the companion “PiCamera 2” designed to go with the RPi. It’s an 8Mpixel tiny phone camera jobbie, fixed focus and fixed aperture, about $30.

Getting hard-wired power to the unit was out of the question, so this needs to work from battery. I ended up using a single LiPo cell, 3.7V 4.4Ah. This is enough to power the Pi for about a day without any new charge, but it’s not enough to go two days or run overnight. For charging, two small 6V solar panels, 3.5W each, do the job. The panels require a charge controller to adjust the panel output to the battery. Also, the Pi requires 5V, and the battery only puts out ~3.5-4V, so a boost converter to make a stable 5V is also required. The panels were a huge ripoff, at $11/Wp, and I’m not thrilled with the cost and quality of the charge controller and boost converter either, but they do work.
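
The “about a day” figure squares with a back-of-the-envelope calculation (the average draw here is my rough guess, not a measurement):

    # Rough power budget for the single-cell setup.
    battery_wh = 3.7 * 4.4           # ~16.3 Wh stored in the LiPo
    avg_draw_w = 0.7                 # guess: Pi Zero W + camera, average watts
    panel_w = 2 * 3.5                # peak watts from the two 6V panels
    print(battery_wh / avg_draw_w)   # ~23 hours of runtime with no sun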

Here’s a picture of all the kit, in a cardboard box in my backyard. Well, almost all the kit: an RPi 3 is pictured, which I moved away from because of its power use. Also, the operating camera uses two panels.

On a sunny, or moderately sunny day, there is enough power to operate the camera and charge the battery. On a cloudy day, the battery drains slowly, or doesn’t drain, but doesn’t charge either.

Either way, I needed a solution to deal with night. As it happens, the RPi has neither a clock to keep time while it’s off, nor a means of turning itself off or on. Because of this, I built a small companion board with an Attiny84A microcontroller connected to a FET that switches the Pi’s power. The Attiny turns the RPi on in the morning and off at night, thus saving precious power. The Attiny itself draws very little power, so it can run continuously.

The communications protocol between the processors is primitive, but functional. The RPi has two signal wires going to the Attiny. One is pulsed periodically to tell the Attiny that the RPi is still functioning. If the pulses stop, the Attiny waits a few minutes and then turns off the power, then waits a few more minutes and turns it back on again. The other pin is used to tell the Attiny that the RPi wants to be turned off. After getting a pulse on this pin, the Attiny shuts down the RPi for an hour. The RPi also gets a low battery signal from the boost converter, which it can use to determine that it should shut itself down (cleanly) and then request to the Attiny that it be turned off. I try to avoid shutting down the Pi willy-nilly, because the filesystem might be corrupted.
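
On the RPi side, the whole “protocol” is a couple of GPIO pulses. A sketch of the idea (pin numbers and timings invented for illustration):

    # RPi side of the two-wire watchdog protocol. Assumes the RPi.GPIO
    # library that ships with Raspbian.
    import os
    import time
    import RPi.GPIO as GPIO

    HEARTBEAT_PIN = 23  # pulsed periodically: "I'm still alive"
    SHUTDOWN_PIN = 24   # pulsed once: "please cut my power"

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([HEARTBEAT_PIN, SHUTDOWN_PIN], GPIO.OUT, initial=GPIO.LOW)

    def pulse(pin, width=0.1):
        GPIO.output(pin, GPIO.HIGH)
        time.sleep(width)
        GPIO.output(pin, GPIO.LOW)

    def want_power_off():
        return False  # placeholder: low-battery or after-sunset check

    while not want_power_off():
        pulse(HEARTBEAT_PIN)  # if these stop, the Attiny power-cycles us
        time.sleep(10)

    pulse(SHUTDOWN_PIN)                 # ask the Attiny to cut power soon
    os.system("sudo shutdown -h now")   # halt cleanly before it does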

I said that the RPi has no clock. When it boots, it tries to connect to a network and then get the time from a time server. Once it has done this, it can proceed with normal operation and keep good time while it’s running. If it can’t get the time from the Internet, it asks to be shut down so it can try again later. The RPi decides it’s time to be shut off for the night by comparing the current time with sunset, as calculated from a solar ephemeris library.
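
The sunset check itself is only a few lines. Here’s a sketch using the astral package as a stand-in for whatever ephemeris library you prefer (coordinates are for Berkeley):

    # Decide whether it's past sunset, using the astral package.
    from datetime import datetime, timezone
    from astral import LocationInfo
    from astral.sun import sun

    here = LocationInfo(name="Berkeley", region="USA",
                        timezone="America/Los_Angeles",
                        latitude=37.87, longitude=-122.27)

    def after_sunset(now=None):
        now = now or datetime.now(timezone.utc)
        times = sun(here.observer, date=now.date())  # aware UTC datetimes
        return now > times["sunset"]

    print(after_sunset())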

All said, the power system I came up with is basically just barely adequate, and even when the battery simply cannot run the system, the unit turns off in a controlled fashion and, assuming the battery eventually charges again, the Pi will reboot and get back up.

A next-gen camera (already in the works) will have a much bigger battery and charging system. On eBay, one can get 20W or 25W panel kits with charge controller for about $1/Wp for the panel, as it should be. These charge controllers are designed for 12V lead-acid batteries, though, so I’ll need to use a nice alarm-system-type AGM battery. A nice thing about most of these charge controllers is that they tend to have USB charger ports, so I do not need the separate 5V converter. Everything is large, though, and setting up a rack to hold the large panel is a problem I have not yet solved. But overall, the lesson I’m learning is that everything is easier when you have power to spare.

The Attiny watchdog circuit works pretty well, but it was a hand-made hack on a proto board and the communication “protocol” is pretty lame. Since deploying the first camera, I have designed a board to replace my hack on subsequent cameras. The new board is powered by an Atmega328p, which is the same processor that the Arduino uses. I abandoned the Attiny because I want to use i2c to communicate, and the 328p has an i2c hardware module. You can bit-bang i2c (that is, do it in software) with the Attiny, but the RPi i2c controller has a bug which makes it unreliable with slower i2c devices. Anyway, the i2c interface allows transferring more complex messages between the processors, like “shut down in 3 minutes and then wait 7 hours 47 minutes before starting me up again.” The new board just plugs into the RPi and you plug the power cable into it rather than the RPi, so it’ll be unfussy to set up.
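
With i2c, a request like that becomes a small structured payload instead of a bare pulse. A sketch of what the RPi side might look like (the address, command byte, and payload format are invented; the real board’s register map is its own thing):

    # Send "shut down in 3 minutes, stay off for 7h47m" over i2c.
    # Assumes the smbus2 package; bus 1 is the Pi's i2c header.
    from smbus2 import SMBus

    ATMEGA_ADDR = 0x21   # hypothetical address of the 328p
    CMD_SLEEP = 0x02     # hypothetical "schedule sleep" command

    def schedule_sleep(bus, delay_min, off_min):
        payload = [delay_min, (off_min >> 8) & 0xFF, off_min & 0xFF]
        bus.write_i2c_block_data(ATMEGA_ADDR, CMD_SLEEP, payload)

    with SMBus(1) as bus:
        schedule_sleep(bus, 3, 7 * 60 + 47)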

The board design:

Finished board in action:

Software

The software side of things was comparatively simple and only took a few hours to get up and running. (I’ve spent a lot more time on it since, though!) On the RPi, a Python script snaps pictures every few seconds. It compares each image to the previous one it took, and if they are sufficiently different (that is, something in the scene has changed), it sends the image to a server. If the picture is the same as the last, the server is only pinged to let it know the camera is still alive. Hours can go by without any pictures being sent.
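
The motion test is simple frame differencing. A sketch of the loop (the threshold and endpoint URL are made up; the real script lives in the repo):

    # Snap, compare to the previous frame, upload only when the scene changed.
    import io
    import time
    import numpy as np
    import requests
    from picamera import PiCamera
    from picamera.array import PiRGBArray
    from skimage.color import rgb2gray
    from skimage.transform import resize

    SERVER = "https://example.org/turkeycam"  # placeholder endpoint
    THRESHOLD = 0.03  # mean abs difference on a 0..1 scale, tuned by eye

    camera = PiCamera(resolution=(1024, 768))
    previous = None
    while True:
        raw = PiRGBArray(camera)
        camera.capture(raw, format="rgb")
        # A small grayscale copy makes the comparison cheap and noise-tolerant.
        small = resize(rgb2gray(raw.array), (96, 128))
        changed = previous is None or np.abs(small - previous).mean() > THRESHOLD
        previous = small
        if changed:
            jpg = io.BytesIO()
            camera.capture(jpg, format="jpeg")
            requests.post(SERVER + "/image", data=jpg.getvalue())
        else:
            requests.post(SERVER + "/ping")  # just proof of life
        time.sleep(5)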

On the server, the images are analyzed using the ML model to determine if there are turkeys. I did not have a sufficient training set of turkey / non-turkey images to build a custom model, so I am using a pre-cooked Amazon AWS model called Rekognition to ID the poultry. This is my one concession to proprietary “cloud” stuff. Rekognition is idiot-proof, so maybe not the best demo of ML chops, but, eh. One thing about using AWS is that it costs money, so the optimization of not sending redundant images is important for not racking up a huge bill.
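
The actual server does this from NodeJS, but the Rekognition call looks about the same from any SDK. Here’s the idea sketched in Python with boto3 (the label names and confidence cutoff are illustrative, not what I settled on):

    # Ask Rekognition whether a JPEG contains something turkey-ish.
    import boto3

    rekognition = boto3.client("rekognition", region_name="us-west-2")

    def has_turkey(jpeg_bytes):
        resp = rekognition.detect_labels(
            Image={"Bytes": jpeg_bytes}, MaxLabels=20, MinConfidence=60)
        names = {label["Name"] for label in resp["Labels"]}
        return bool(names & {"Turkey Bird", "Fowl", "Poultry", "Bird"})

    with open("capture.jpg", "rb") as f:
        print(has_turkey(f.read()))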

The server is written in NodeJS, and receives and processes the pictures as well as hosting a simple website. All communication is JSON messages over REST over HTTPS.

When it comes to software, I have an ongoing war with myself. I like to keep things simple for me (not so much typing) but also like to keep things actually simple (not reliant on large, complex frameworks and libraries that bring in zillions of dependencies and things I don’t understand and can’t easily maintain). To this end, I tried to stick to libraries available from apt, and even then, not too much. On the RPi, I used the standard camera and GPIO libraries that come with Raspbian, and installed the python3 modules requests and scikit-image. (I chose not to use OpenCV, which is a shame, because it looks cool. But there is no pre-built package and I didn’t want to build it from source. Building complex things from source on the Pi takes a loooong time, trust me!) On the server, I used Node with Express and I think no other modules — though to be fair, package management in Node is a breeze anyway.

Oh, and of course there is some code running on the Attiny, and there is some HTML and Javascript for the client side — so this little project encompasses four or five separate languages, depending on how you count. I think I could have done the server in Python, but I’m still grappling with concurrency in Python. Maybe one day I’ll figure it out.

Code, in all its uncommented, late-night hacking glory is here: https://github.com/djacobow/turkeycam.

Putting it in a Box

Probably the hardest part of this project for me was figuring out how to do it physically. Getting a proper waterproof box was easy. But how to mount the panel to the box, and then mount both of them to a tree or light stanchion, was quite tricky for this non-mechanical engineer. I spent quite some time poking around Home Depot trying to figure out how to make it work. In the end, I bought a bunch of angle aluminum and started cutting and drilling and filing and screwing until I got something that more or less worked. It was a lot of effort, though, and doesn’t look very good. I really wish I could offload this part to someone more mechanically inclined than me.

Anyway, that’s it. We finally got the first camera deployed and after fixing a few bugs, it has started catching turkeys.

Does it Work?

You can see the system in operation here: https://skunkworks.lbl.gov/turkeycam. This is my “personal” dev server, and so it may be up or down or not showing pictures when you visit. Also, the second camera pictured is showing my office and will do so for the time being.

Here are some turkeys we caught today:

Moore’s last sigh

I have a strange perspective on Moore’s Law that I can’t seem to shake.

The common expression of Moore’s Law is that transistor density on an integrated circuit grows exponentially. The typical time constant is a doubling every 18 to 24 months. Over the years, Moore’s Law has been remarkably stable. Good folks argue about if and when it will come to an end, or if it already has. People also argue about whether Moore’s Law itself was endogenous to semiconductor scaling; that is, whether the Law became a goal and so became self-fulfilling.

Here’s my take: Rather than observing a constant stream of innovation in semiconductors, what we have witnessed over the last 50 years or so has been the slow, logical expansion of a single innovation: that of the planarized transistor and integrated circuit made from them. The integrated circuit is credited to Jack Kilby who demonstrated the first IC in 1958. However, the basis of real chips is the planar transistor, invented by Jean Hoerni at Fairchild in 1959.

From there, the entirety of the history of Moore’s Law is a logical and inevitable consequence. The exponential growth was not due to a stream of genius innovation, but an entirely canny and methodical march of engineering, taking an idea to its logical conclusion: larger wafers, smaller lithography, more metal layers, thinner gate oxides, etc. The logical conclusion being electronic devices that operate on the order of 10^0 to 10^3 electrons at a time. It is those limits, along with thermal limits, that are the endgame we see today. (There are other complications, like deep-UV lithography, that appear very difficult to solve, but can probably be solved at some price.)

I don’t want to belittle the work of so many brilliant engineers who have toiled hard in the salt mines of chip design. Of course, they (we!) have brought the world a fantastic technology. But if you back out just a bit on timescale, I think it’s easy to see that Moore’s Law is not telling you as much about electronics and computers as it is describing the state of the last 50 years.

We have lived in a period of exponential improvement in electronics. That period, like all periods of exponential change, will end; perhaps already has. At any but the smallest timescales, major technology innovations look like step functions followed by a longer and partially overlapping period of diffusion into society. Aeronautics, combustion engines, solar cells, wind turbines, you name it.

None of this makes me sad, though I wish airplanes were still getting faster and better. In the multi-generational mad-dash to take semiconductors to their limits, we’ve probably passed over lots of side opportunities to use chips in novel ways, ways that require more design attention per transistor than total transistors. I hope that we will see more novel electronic devices in the future, as brains that were focused on more and faster start to look for other ways to do interesting things in electronics.


machines don’t think but they can still be unknowable

I still read Slashdot for my tech news (because I’m old, I guess) and came across this article, AI Training Algorithms Susceptible to Backdoors, Manipulation. The article cites a paper that shows how the training data for a “deep” machine learning algorithm can be subtly poisoned (intentionally or otherwise) such that the algorithm can be trained to react abnormally to inputs that don’t seem abnormal to humans.

For example, an ML algorithm for self-driving cars might be programmed to recognize stop signs by showing it thousands of stop signs, as well as thousands of things that are not stop signs, and telling it which is which. Afterwards, when shown new pictures, the algorithm does a good job classifying them into the correct categories.

But let’s say someone added a few pictures of stop signs with Post-It notes stuck on them into the “non stop sign” pile? The program would learn to recognize a stop sign with a sticky on it as a non stop sign. Unless you test your algorithm with pictures of stop signs with sticky notes on them (and why would you even think of that?), you’ll never know that your algorithm will happily misclassify them. Et voila, you have created a way to selectively get self-driving cars to zip through stop signs like they weren’t there. This is bad.
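
Here’s a toy, self-contained illustration of the mechanism, with made-up two-feature “images” (feature 0: looks like a stop sign; feature 1: has a sticky note). It’s nothing like a real vision model, but it shows how poisoned labels plant a backdoor that an ordinary test set won’t catch:

    # Train on poisoned labels; watch the backdoor appear.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def batch(n, stop, sticky):
        center = [1.0 if stop else 0.0, 1.0 if sticky else 0.0]
        return rng.normal(center, 0.1, (n, 2))

    X = np.vstack([batch(200, True, False),    # clean stop signs
                   batch(200, False, False),   # clean non-signs
                   batch(40, True, True)])     # stop signs with stickies...
    y = np.array([1] * 200 + [0] * 200 + [0] * 40)  # ...labeled "not a sign"

    model = LogisticRegression().fit(X, y)
    print(model.predict([[1.0, 0.0]]))  # plain stop sign -> 1 (correct)
    print(model.predict([[1.0, 1.0]]))  # sticky-note stop sign -> 0 (backdoor)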

What caught my eye about this research is that the authors seem not to fully grasp that this is not a computer problem or an algorithm problem. It is a more general problem that philosophers, logicians, and semiologists have grappled with for a long time. I see it as a sign of the intellectual poverty of most programmers’ education that they did not properly categorize this issue.

Everyone has different terms for it, and I don’t know jack about philosophy, but it really boils down to:

  • Can you know what someone else is thinking?
  • Can you know how their brain works?
  • Can you know they perceive the same things you perceive the same way?

You can’t.

Your brain is wholly isolated from the brains of everyone else. You can’t really know what’s going on inside their heads, except insofar as they tell you, and even then, even if everyone is trying to be honest, we are limited by “language”: the mapping of symbols in your language to “meaning” in the heads of the speaker and listener can never truly be known. Sorry!

Now in reality, we seem to get by. If someone says he is hungry, that probably means he wants food. But what if someone tells you there is no stop sign at the intersection? Does he know what a stop sign is? Is he lying to you? How is his vision? Can he see colors? What if the light is kinda funny? All you can do is rely on your experience with that person’s ability to identify stop signs to know if he’ll give you the right answer. Maybe you can lean on the fact that he’s a licensed driver. However, you don’t know how his wet neural net has been trained by life experience, and you have to make a guess about the adequacy of his sign-identification skills.

These deep learning algorithms, neural nets and the like, are not much like human brains, but they do have this in common with our brains: they are too complex to be made sense of. That is, we can’t look at the connections of neurons in the brain, nor can we look at some parameters of a trained neural network and say, “oh, those are about sticky notes on stop signs.” All those coefficients are uninterpretable.

We’re stuck doing what we have done with people since forever: we “train” them, then we “test” them, and we hope to G-d that the test we gave covers all the scenarios they’ll face. It works, mostly, kinda, except when it doesn’t. (See every pilot-induced aviation accident, ever.)

I find it somewhat ironic that statisticians have worked hard to build models whose coefficients can be interpreted, but engineers are racing to build things around more sophisticated models that do neat things, but whose inner workings can’t quite be understood. Interpreting model coefficients is part of how scientists assess the quality of their models and how they use them to tell stories about the world. But with the move to “AI” and deep learning, we’re giving that up. We are gaining the ability to build sophisticated tools that can do incredible things, but we can only assess their overall external performance — their F scores — with limited ability to look under the hood.


Marketing Genius

Over the past couple of months, in stolen moments and late night coding sessions, I’ve quietly been inventing a little piece of ham radio gear intended to facilitate using one’s radio remotely.

I thought it would be a good way to try my hand at designing a useful product, from start to finish, including complete documentation, packaging, etc.

I posted about it to a few ham radio forums and it turned out nobody was interested, so instead of making a product, I’m just throwing it up on github for people to ignore forever.

I’m no marketing genius.


Rigminder 2017-2017 RIP

Cold Turkey

This morning, I started my regular morning ritual as usual. I got up, complained about my back, put bread in the toaster, water in the kettle, and then went to my phone to see what’s happening.

Except that last thing didn’t work. Facebook wouldn’t load.

Why? Because my better half convinced me that it was time to take a Face-cation. Last night we logged into our accounts and let each other change our passwords. As a result, we are unable to access our own accounts, locked in a pact of mutual Facebook stasis.

I can say that several times already today I have pretty much instinctually opened a tab to read FB. In my browser, just typing the first letter ‘f’ is all it takes to open that page. Each time I’ve been greeted by a password prompt. Poor me.

Well, if FB is my heroin, let this plug be my methadone. We’ll see how that goes.

Nostalgia: Airspace Edition. The end of the road for VORs

The FAA is in the process of redesigning the Class B airspace around SFO airport, and it signals an interesting shift in air navigation: the requirement that everyone in the airspace be able to navigate by means of GPS.

They are undertaking the redesign primarily to make flying around SFO quieter and more fuel efficient. The new shape will allow steeper descents at or near “flight idle” — meaning the planes can just sort of glide in, burning less gas and making less noise. As a side benefit, they will be able to raise the bottom of the airspace in certain places so that it is easier for aircraft not going to SFO to operate underneath.

As far as I’m concerned, that’s all good, but I noticed something interesting about the new and old design. Here’s the old design:

This picture, or one like it, will be familiar to most pilots. It’s a bunch of concentric circles with lines radiating out from it, dividing it into sectored rings. The numbers represent the top and bottom of those sections, in hundreds of feet. This is the classic “inverted wedding cake” of a Class B airspace. In 3D, it looks something like this, but more complicated.

This design was based around the VOR, a radio navigation system that tells you what azimuth (radial) you are on relative to a fixed station, such as the VOR transmitter on the field at SFO. A second system, usually coupled with a VOR, called DME, lets you know your distance from the station. Together, they give you your exact position, but because of this “polar coordinate” way of knowing where you are, designs intended to be flown by VOR+DME tend to be made of slices and sectors of circles.
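
That polar-coordinate flavor is easy to see in code. Converting a radial and DME distance into a local east/north offset from the station is just trigonometry (a sketch; real radials are magnetic bearings, and this ignores that wrinkle):

    # Convert a VOR radial + DME distance to an east/north offset, in nm.
    import math

    def radial_dme_to_xy(radial_deg, dme_nm):
        # Radials are bearings FROM the station: 0 = north, clockwise.
        theta = math.radians(radial_deg)
        return dme_nm * math.sin(theta), dme_nm * math.cos(theta)

    print(radial_dme_to_xy(90.0, 10.0))  # 10 nm due east: (10.0, ~0.0)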

The new proposed design does away with this entirely.

Basically, they just drew lines any which way, wherever it made sense. This map is almost un-navigable by VOR and DME. It takes a lot of knob twisting and fiddling to establish your exact position if it is not based on an arc or radial. In short, this map is intended for aircraft with GPS.

All of this is well and good, I guess. GPS has been ubiquitous in every phone, every iPad and every pilot’s flight bag for a long time.

I learned to fly in a transitional era, when GPS existed, but the aircraft mostly had 2 VOR receivers and a DME. My flight instructor would never have let me use a GPS as a means of primary navigation. Sure, for help, but I needed to be able to steer the plane without it, because the only “legal” navigation systems in the plane were the VORs. I still feel a bit guilty when I just punch up “direct to” in my GPS and follow the purple line. It feels like cheating.

But it’s not, I guess. Time marches on. Today, new aircraft all have built-in GPS, but a lot of older ones don’t. And if they’re going to fly under the SFO Class B airspace, they’re going to need to use one of those iPads to know where they are relative to those airspace boundaries. And strictly speaking, they probably should get a panel-mounted GPS as well.


The end of computing as a hobby?

I grew up with computers. We got our first machine, an Atari 800, when I was only 8 or 9. An 8-bitter with hardware sprites, 48 KiB of memory, and a cassette tape drive, it was only one step removed from the Atari 2600 game console. Very nearly useless, this was a machine for enthusiasts and hobbyists.

Over time, computers became less useless, as well as more “user-friendly,” but they — particularly the PC style machines — kept the doors open to hobbyists and tinkerers.

The Bad News

I think, however, that that era has come to an end, and I’m saddened. I see three basic trends that have killed it.

The first is that the network-connected world is dangerous. You can’t just fire up any old executable you find on the Internet in order to see what it does. It might do something Awful.

The second is that the closed ecosystem app stores of the world, aiming for a super smooth experience, have raised the quality bar for participation — particularly for “polish.” You simply cannot publish ugly, but highly functional software today.

The third problem is that you can’t make interesting software today without interacting with several systems in the cloud. Your app, hosted on a server, talks to a database, another app, and a half dozen other APIs: a link shortener, a video encoder, etc. And these APIs change constantly. There is no commitment to backward compatibility — something that was an iron-clad requirement of the PC era.

Trend one is a painful fact of life. Trend two could be reversed if the manufacturers had any incentive to do so. They do not. Trend three, I think, is the worst, because it is wholly unnecessary. Say what you want about the “Wintel duopoly,” but they did not punish developers like modern companies do.

Together, these things pretty much lock out the casual developer. I’ve learned this the hard way as I try to push forward in my free time with a few open-source apps in a post-PC world. It is one thing for a paid programmer to maintain a piece of software and deal, however grudgingly, with every email that comes from Google telling you that you need to update your code, again. But the hobbyist who wrote something cool for his friends, that worked for six months and then broke, is kind of stuck. Does he want to run a zero-revenue company that “supports” his app in perpetuity?

This makes me sad, because I wonder what we’re missing. As many of you know, I have gotten into ham radio. There’s a lot of cool ham-authored software out there. It’s ugly. It’s clunky. But some of it does amazing things, like implement modems that forward-error-correct a message and then put it into a ridiculously narrow signal that can reach around the world. Today, that software still runs on Windows, usually coded against the old Win32 or even Win16 libraries. It gets passed around in zip files and people run unsigned executables without installers. It’s the last hacky platform standing, but not for long.

The Good News

Of course, if the PC, Mac, i-device, and household gadget become more and more locked off, there is an exciting antidote: Arduino, Raspberry Pi, Beaglebone, and the entire maker world. People are building cool stuff. It’s cheap, it’s fun, and the barriers to entry, though intellectually a bit higher than the “PC,” are pretty damn low. Furthermore, the ecosystems around these products are refreshingly chaotic and more than slightly anti-corporate.

One of the nice things about these platforms is that they are self-contained and so pose little threat to data other than what you put on them. On the other hand, they are full-fledged computers and are as exploitable as any other.

If you make something cool that runs on a Raspberry Pi, there’s still a pretty small chance every kid at school will soon have it and run it, but then again, maybe there never was.


More election nerdism

Keeping up my streak of mildly entertaining, though basically useless Chrome Extensions, I have created a very tiny extension that keeps the Nate Silver fivethirtyeight predictions in your Chrome toolbar at all times.

You can choose which of Silver’s models is displayed, and clicking brings up more detail as well as links to a few other sites making predictions. Check it out!

For those who are interested in such things, the code is up on github. It’s actually a reasonably minimalist example of a “browser action” extension.


Campaigning in an Alternate Universe

I’ve been bouncing around just at the edge of my 2016 presidential campaign overload limit, and the other night’s debate and associated post-debate blogging sent me right through it.

Yes, I was thrilled to see my preferred candidate (Hermione) outperform the other candidate (Wormtail), but all the post-debate analysis and gloating made me weary.

Then, I thought about the important issues facing this country, the ones that keep me up at night worrying for my kids, the ones that were not discussed in the debate, or if they were, only through buzzwords and hand waves, and I got depressed. Because there is precious little in the campaign that addresses them. (To be fair, Clinton’s platform touches on most of these, and Trump’s covers a few, though neither as clearly or candidly as I’d like.)

So, without further ado, I present my list of campaign issues I’d like to see discussed, live, in a debate. If you are a debate moderator from an alternate universe who has moved through an interdimensional portal to our universe, consider using some of these questions:


1.

How do we deal with the employment effects of rapid technological change? File the effects of globalization under the same category, because technological change is a driver of that as well. I like technology and am excited about its many positive possibilities, but you don’t have to be a “Luddite” to say that it has already eliminated a lot of jobs and will eliminate many more. History has shown us that whenever technology takes away work, it eventually gives it back, but I have two issues with that. First, it is certainly possible that “this time it’s different.” Second, and more worrisome, history also shows that the time gap between killing jobs and creating new ones can be multigenerational. Furthermore, it’s not clear that the same people who had the old jobs will be able to perform the new ones, even if they were immediately available.

[Image: Luddites, from Wikimedia]

This is a setup for an extended period of immiseration for working people. And, by the way, don’t think you’ll be immune to this because you’re a professional or because you’re in STEM. Efficiency is coming to your workplace.

It’s a big deal.

I don’t have a fantastic solution to offer, but HRC’s platform, without framing the issue just as I have, does include the idea of major infrastructure reinvestment, which could cushion this effect.

Bonus: how important should work be? Should every able person have/need a job? Why or why not?

2.

Related to this is growing inequality. Technology is allowing fewer and fewer people to capture more and more surplus. Should we try to reverse that, and if so, how do we do so? Answering this means answering some very fundamental questions about what fairness is, questions that I don’t think have been seriously broached.

[Image: Occupy, from Wikimedia]

Sanders built his campaign on this, and Clinton’s platform talks about economic justice, but certainly does not frame it so starkly.

What has been discussed, at least in the nerd blogosphere, are the deleterious effects of inequality: its (probably) corrosive effect on democracy as well as its challenge to the core belief that everyone gets a chance in America.

Do we try to reverse this or not, and if so, how?


3.


Speaking of chances, our public education system has been an important, perhaps the important engine of upward mobility in the US. What are we going to do to strengthen our education system so that it continues to improve and serve everyone? This is an issue that spans preschool to university. Why are we systematically trying to defund, dismantle, weaken, and privatize these institutions? Related, how have our experiments in making education more efficient been working? What have we learned from them?

4.

Justice. Is our society just and fair? Are we measuring it? Are we progressing? Are we counting everyone? Are people getting a fair shake? Is everyone getting equal treatment under the law?

[Image: BLM, from Wikimedia]

I’m absolutely talking about racial justice here, but also gender, sexual orientation, economic, environmental, you name it.

If you think the current situation is just, how do you explain recent shootings, etc? If you think it is not just, how do you see fixing it? Top-down or bottom-up? What would you say to a large or even majority constituency that is less (or more) concerned about these issues than you yourself are?


5.

[Image: flood, from Wikimedia]

Climate change. What can be done about it at this point, and what are we willing to do? Related, given that we are probably already seeing the effects of climate change, what can be done to help those adversely affected, and should we be doing anything to help them? Who are the beneficiaries of climate change or the processes that contribute to climate change, and should we transfer wealth to benefit those harmed? Should the scope of these questions extend internationally?


6.

[Image: ASR-9 radar antenna]

Rebuilding and protecting our physical infrastructure. I think both candidates actually agree on this, but I didn’t hear much about plans and scope. We have aging:

  • electric
  • rail
  • natural gas
  • telecom
  • roads and bridges
  • air traffic control
  • airports
  • water
  • ports
  • internet

What are we doing to modernize them, and how much will it cost? What are the costs of not doing it? What are the barriers that are getting in the way of major upgrades of these infrastructures, and what are we going to do to overcome them?

Also, which of these can be hardened and protected, and at what cost? Should we attempt to do so?


7.

Military power. What is it for, what are its limits? How will you decide when and how to deploy military power? Defending the US at home is pretty straightforward, but defending military interests abroad is a bit more complex.

[Image: U.S. Soldiers depart Forward Operating Base Baylough, Afghanistan, June 16, 2010, to conduct a patrol. The Soldiers are from 1st Platoon, Delta Company, 1st Battalion, 4th Infantry Regiment. DoD photo by Staff Sgt. William Tremblay, U.S. Army.]

Do the candidates have established doctrines that they intend to follow? What do they think is possible to accomplish with US military power and what is not? What will trigger US military engagement? Under what circumstances do we disengage from a conflict? What do you think of the US’s record in military adventures, and do you think that tells you anything about the types of problems we should try to solve using the US military?

7-a. Bonus. What can we do to stop nuclear proliferation in DPRK? Compare and contrast Iran and DPRK and various containment strategies that might be deployed.