This Old Clock -or- nobody will do IOT maintenance

Plenty of people have written about the fact that in a world of companies selling IOT hardware, there is little or no incentive for them to maintain the software running on that hardware. Those people are right. But not only is there little incentive, keeping an IOT device current is actually fiendishly difficult — as I was reminded this past weekend.

Background

I have an IOT alarm clock I built myself, back in 2011. It was based around a Raspberry Pi Model 1B, running Raspbian Wheezy. The software I wrote to implement the clock is simple, consisting of three major components:

  1. An interface to the Google Calendar API, so it knows when I want to get up
  2. An interface to an LCD Display so I can see the time and see when it plans to wake me next.
  3. An interface to GPIO to drive a solenoid, which rings a physical chime. I wasn’t going for a wimpy electronic beeping; I wanted some Zen-level physical dinging.

Now, when I created this clock about seven years ago, my go-to language for this sort of thing was Perl. You can quibble with that choice, but Perl was my Swiss army knife at the time, and it also solved certain problems that other languages didn’t. For one, Perl has a fantastic no-nonsense library for wrapping C code: Inline. You can basically “inline” C functions right in your Perl, and it “just works.” This was really important for talking to the chip GPIO for the dinger and for the LCD, which were supported only in C at the time.

How long will it work?

One drawback of using Perl is that Google has never supported it for accessing their APIs. That is, Perl can generate an HTTP transaction just as well as the next language, but Google also provides nice wrapper code for a list of languages which they’ve made pretty clear will never, ever include Perl. But someone else had written a similar wrapper for Perl, so I grabbed that and got things up and running. Over the years, that turned out to be a pain, as Google has revamped their Calendar API twice in that time, and my clock just broke each time. Fixing it was a pain, but I did it just to keep the project running.

Let’s get current!

So, on Friday, after thinking about all the exploits floating around and the fact that I was running a full-fledged OS on a clock on my home network, I decided I should really update all the software on the clock. Raspbian had moved from Debian 7 (Wheezy) to Debian 8 (Jessie) to Debian 9 (Stretch) in the intervening years, so the first step was to update the OS. Twice.

This went poorly. The update process was excruciatingly slow on this single-core processor, taking hours, and occasionally stopping entirely to ask me a question (“you want to overwrite this file?”). I managed to get the first update done, but the second update died entirely when the SD card holding everything filled up after the installer decided it needed to create a huge swapfile.

So I got a new SD card and installed Stretch on it cleanly. That went pretty quickly, and if you do a network install, you won’t need to do any package updates immediately after. (Microsoft could learn a lesson from that.) After the OS came up, I copied over my software and tried to get it running. No dice.

So sonorous

You won’t be surprised to hear that some things had changed:

  • The Perl libraries for Google had changed quite a bit over the years, so installing the new ones generated a bunch of errors. The main pain was that some of these libraries can be found in the Raspbian package manager, and some need to be installed from cpan. I prefer OS repository packages when available because they update along with the OS. Everything I install from cpan is just a snapshot that may need to be reinstalled after the next OS update, and worse, experience shows that the installation process can sometimes go from simple to epic if some underlying untracked dependency changes. But when you install from cpan, it installs dependencies from cpan, even if those dependencies can be found in the OS repos. This basically sucks.

    Anyway, the changes in the Perl libraries were mostly for the better, to make the Perl API better map to the way Google worked, but still, it required digging into my old Perl code and looking at the Google docs.

  • The LCD interface is in two parts: a C-based daemon from a package called lcdproc, and my client code in Perl that talks to the daemon. For the new OS I needed to rebuild that daemon from source. Luckily, lcdproc had not advanced in 7 years, so I could just rebuild the old code. This was particularly lucky because I had made a big patch to the hardware driver to talk to my particular i2c expander that drove the LCD controller. I’m glad I did not have to figure out how to apply that patch to some completely new, changed version.
  • Raspbian Stretch switched from System V init to systemd, so my startup stuff, which was init-based, needed to be changed to systemd unit files. This was not too painful, and I actually like systemd for daemons, but it took a little while to create the files, set permissions, fix my mistakes, yadda.

    Overall, this whole project was not really that complicated in retrospect, but it ate more or less an entire weekend day, and it sure felt like a never-ending series of missteps and annoyances.
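For what it’s worth, the unit files themselves are short. A hypothetical one for a clock daemon might look like this (the paths and names here are illustrative, not my actual config):

```ini
[Unit]
Description=IOT alarm clock daemon
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/perl /home/pi/clock/clock.pl
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Drop it in /etc/systemd/system/, then systemctl enable and systemctl start it.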

Getting Really Current

I should probably rewrite the clock in Python.

  • Python now has a mature library for talking to Raspberry Pi GPIO. It’s clean and simple.
  • Python has always had better Google integration, courtesy of Google. It would be a pleasure to switch to this.
  • I had already written Python bindings to talk to the LCD daemon. I don’t remember doing that, but apparently this is not the first time I’ve considered doing a Python rewrite.

But there are two roadblocks. First, technically: being a clock, this code is time-sensitive, and so the Perl version has multiple threads. There is basically a thread that ticks every second and various worker threads. The modern Pythonic way to accomplish the same thing (without threads — which Python has never done well and never will) is to use asyncio. Not to get too deep into the details, but I have some issues with asyncio. It’s complicated, and it requires an all-or-nothing approach. Your whole program, even the non-async parts, needs to be asyncio-ified, because otherwise those parts will block the ones that are.
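For the curious, the thread structure I’m describing is roughly this (a minimal sketch, not the actual clock code; all the names are made up):

```python
import queue
import threading
import time

# One thread "ticks" once per second; worker threads (calendar polling,
# display updates, chime ringing in the real clock) consume those ticks
# from a queue.
ticks = queue.Queue()
stop = threading.Event()

def ticker():
    while not stop.is_set():
        ticks.put(time.monotonic())  # publish a tick
        stop.wait(1.0)               # sleep ~1 s, but wake early on stop

threading.Thread(target=ticker, daemon=True).start()

first = ticks.get(timeout=5)
second = ticks.get(timeout=5)
stop.set()
print(second - first)  # ticks arrive about one second apart
```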

Second, I just don’t want to. Writing code that does the same thing as other code is no fun.

Alas

Anyway, today my alarm clock works exactly as it did in 2011, but it is running on a current version of Perl with current libraries on a current OS. It only took me the better part of my weekend. 🙁

Who’s going to do this for the IOT power outlet or window shade controller you bought at Fry’s?


Brother, can you spare a CPU cycle?

Are you familiar with Bitcoin and other crypto-currencies? These are currencies not backed by any government, which can be traded anonymously and somewhat frictionlessly. They are gaining traction among people who want to make illegal transactions, who want to avoid taxes, and who just want freedom. And now, increasingly, they are being used not as currencies for trade, but as investments. As a result, people are working hard to make more Bitcoin via a complex mathematical operation called mining. Some organizations have set up huge computer farms employing custom hardware to do nothing more than mine Bitcoin.

And now, reports are surfacing that various websites are embedding javascript in their pages that surreptitiously mines Bitcoin on your computer while you read their site. When I first heard of this, I was rather upset. After all, big, evil website people are using a facility in my browser to run code on my computer that doesn’t benefit me in any way! They are stealing my cpu, making my computer sluggish, and costing me real money in wasted power. On a cell phone, they’re even draining my battery! How dare they?

[ Also, from a pure engineering standpoint, when there are people out there using special-purpose computer chips to mine Bitcoin, can it possibly make sense to try to do the same using Javascript on my cellphone? The answer is yes, if you’re not the one paying for the phone or the electricity. ]

Anyway, after some time, I calmed down and realized that this isn’t so bad, and it could even be … good?

You see, one cannot look at something we don’t like in a vacuum. It must be compared to real alternatives. We hear over and over from the advertising industry that websites need to make money. (not this one — ed.) That’s what pays for the content, the computers, and the personnel. Ads make the “free” Internet as we know it possible.

But ads suck. They are ugly and intrusive. They involve a third party, the advertiser, in every page I visit. There’s me, there’s the website, and then there’s this guy over in the corner who wants to sell me viagra. Because the money is coming from the advertiser, he gets a say in the content of the site. Furthermore, he gets to know that I visited the site, and can start to collect all kinds of information on my browsing history, eventually creating a dossier on my personal habits that he will use to target me for the rest of my life. And finally, he gets to suck up my cpu cycles and my limited screen real estate in order to serve me his ads. It’s maddening!

I don’t like it, have never liked it, and would much prefer a subscription supported Internet. But that’s never going to happen, so I’m told.

So how is letting people mine bitcoin better?

  • no screen real-estate
  • no data collection
  • no third party

Sure, they’re sucking up my CPU and battery just as the advertisers do, but probably no worse, and perhaps that’s a fair price to pay.

Now, there are some problems with this approach that would have to be dealt with. First, I’m not sure Bitcoin mining is really a productive use of CPU cycles, and Bitcoin may itself be just a flash in the pan. So perhaps the world will consider other, better ways to monetize my cpu cycles, maybe selling them to someone like AWS or Google, which would then remarket them for normal productive purposes. Second, I think for such a system to be fair, users need to know what is going on. There should be a way to know what a site is “costing” you. And finally, we need an easy and straightforward way for users to say “no”, and then, of course, the website would be perfectly within its rights to say “no” to serving up content. Turning off Javascript entirely is not a great solution, as Javascript is just too embedded in the modern web to give up.

So, here’s a business idea for you. Create a company that pays websites to host the company’s javascript on their sites. No data is collected, but CPU cycles are consumed if the user allows it, and the site owner is informed if they do not. The company in turn remarkets the CPU cycles as a service to customers, something lightweight and context-free, like AWS Lambda.

Electricity started out with small local generators, even “home” generators, then became increasingly centralized for a long time, and today there is a big push for “distributed” generation, which is basically decentralized power generation that maintains a connection to the power grid.

Computing started out small, on home computers, and has become increasingly centralized in big data centers. Will the next step be to reverse that pattern?

IoT information security will never come under the prevailing business model

The business model for smart devices in the home is shaping up to be simple and bad: cheap hardware and no service contracts. That sounds great for consumers — after all, why should I pay $100 for a smart power outlet made of a $0.40 microcontroller and a $1 relay, and why should I have to pay a monthly fee to switch it — but it is going to have serious negative ramifications.

Let me start by saying that many bits have already been spilled about basic IoT security:

  • making sure that messages sent to and from your device back to the manufacturer cannot be faked or intercepted
  • making sure that your IoT device is not hacked remotely, turning it into someone else’s IoT device
  • making sure that your data, when it is at rest in the vendor’s systems, is not stolen and misused


As things stand, none of that is going to happen satisfactorily, primarily because of incompatible incentives. When you sell a device for the raw cost of its hardware, with minimal markups and no opportunity for ongoing revenue, you also have no incentive for ongoing security work. Or any kind of work for that matter. If you bought the device on the “razor + blade” model, where the device was cheap, but important revenue was based on your continued use of the product, things might be different.

Worse than that, however, in order to find new revenue streams (immediate, or potential future ones), vendors have strong incentives to collect all the data they can from the device. You do not know — even when the devices are operating as designed — exactly what they are doing. They are in essence little listening bugs willingly planted all over your home, and you do not know what kind of information they are exfiltrating, nor do you know who is ultimately receiving that information.

I think there is a solution to this problem, if people want it, and it requires two basic parts to work properly:

1. We need a business model for smart devices that puts strong incentives in place for vendors to continue to support their products. This will never happen with the cheapie Fry’s Electronics special IoT Doohickey of the Week. Instead, we probably need a real engagement with sticks (liability) and carrots (enhanced revenue) driven by ongoing contractual engagement. That is, money should continue to flow.

2. We need a standardized protocol for IoT that provides for a gateway at the home, with encrypted data on both sides of the gateway, but with the gateway owner having access to the encryption keys on the inner side of the gateway. The standardized protocol would have fields for the vendor name and hosts, as well as a human-readable json-style payload — and a rule that nothing in the payload can be double-encrypted to keep it from the eyes of the user.
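To make that concrete, a message under such a protocol might look something like this (every field name here is purely illustrative):

```json
{
  "vendor": "Acme Smart Outlet Co.",
  "destination": "telemetry.acme.example",
  "payload": {
    "outlet_state": "on",
    "watts": 12.5,
    "firmware": "2.1.0"
  }
}
```

The gateway can read, and if its owner wishes, redact, the payload before passing it along to the vendor.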

Under such an arrangement, users, or their gateways acting as proxies for them, could monitor what is coming and going. You could program your gateway, for example, to block unnecessary information from http messages sent by your device.

Of course, the vendors, seeing the blocked information might decide not to provide their service, and that’s their right, but at least everyone would know the score.


Will this happen? Well, I think vendors with the long view of things would probably see #1 as appealing. Users will not, perhaps. But that is because users are not fully aware of the consequences of inviting someone else to monitor their activities. Perhaps people will think differently after a few sensational misuses of their data.

Vendors will fight #2 mightily. Of course, they could ignore it completely, at the risk that a large number of users who insist on it become excluded from their total available market. With a critical mass of people using gateways that implement #2, I think we could tip things, but right now it seems a long shot.


I am quite pessimistic about all this. I don’t think we’ll see #1 or #2 unless something spectacularly bad happens first.


For the record, I do use a few IoT devices in my home. There are two flavors: those I built myself and those I bought. The self-built ones exist entirely within my network and do not interact with any external server. I obviously know what they do. Those I bought exist on a DMZ-style network with no access to my home network at all (at least if my router is working as intended). This mitigates the worry of pwned devices accessing my computer and files, but does not stop them from sending whatever they collect back to the mothership.


STEM vs STS

IEEE Spectrum (what, you don’t get it delivered?) recently published a short article about the relationship between STEM and STS.

STEM, as most of us know, is Science Technology Engineering and Mathematics. Pundits the world over like to remind us how important it is that we graduate as many STEM folks as possible. (That notion is wrong, by the way. We should encourage people who like STEM to pursue STEM.)

STS is less commonly known. That’s “Science and Technology in Society,” and the name describes it well enough. STS people study science itself: its processes, people, culture, and outcomes.

I believe I am one of a relatively small cohort of people who are both STEM-y and somewhat STS-y. The former, I get from my engineering degree and my natural proclivity to figure out how things work and to make my own working things. The latter I get from my policy training, which included an introduction to some of the basic concepts in that field. (My wife, an STS scholar herself, is also a big factor!)

But I think the seeds of my STS-orientation came much earlier in life, when I was still an undergraduate in engineering school. My engineering program, at the University of Virginia, required all undergraduates to write a thesis, and that thesis had to address important STS concepts like engineering ethics. It was not just the thesis, either. My BSEE required several classes in SEAS’s own engineering humanities program, with required books such as To Engineer Is Human: The Role of Failure in Successful Design (Petroski), The Design of Everyday Things (Norman), Normal Accidents (Perrow), The Civilized Engineer (Florman) and, of course, Frankenstein (Shelley). At the time we wondered: why, at a world-class university, would the school of engineering host its own humanities classes? Now I can see that there was something truly cutting-edge about it. (It’s not like we were barred from taking classes outside the engineering school.)

Perhaps because I was indoctrinated at a young age, or because the concepts are right, I firmly believe that an engineer who works without considering the consequences of his creativity is at risk of creating less valuable things than he might. We can all easily conjure a list of the “blockbuster bad ideas” of the 20th century (mustard gas, nuclear weapons, etc). But even when the engineering output is an unalloyed good, with a bit of STS consideration, it is entirely possible that something even better could have been created. Also, I just find it kind of bizarre that STEM folks might be discouraged from thinking about what their work means. I guess it’s part of the myth of the objectivity of science that there is no meaning to think about. That’s wrong about science, and it should be prima facie obviously incorrect about engineering, which is, by definition, a process directed by human desires.

But this kind of more holistic thinking isn’t particularly common, and as a result, places like Silicon Valley seem to be pretty bad at considering consequences. When you’re racing to create something, who has time to stop and think about its implications, much less let those implications determine the course of development? One simple example: hundreds of years of history led to the universally accepted notion that the integrity of a sealed letter should be maintained by all couriers involved in its delivery. When email came along, no such consideration was made. Why? How would the Internet as a means of communications have evolved if privacy were a consideration from the get go? Could the Internet have been “better?” (Yes, duh.)

Anyway, the IEEE article seems to conclude that most of the barriers to getting STEM folks to take on STS thinking are due to the culture of STEM. Though there is truth to that, it’s not the whole story, by far. For example, STS, philosophy, and policy folks have their own jargon and shibboleths, and it’s not easy for someone not trained in the game to participate. Furthermore, even when you do have something to add, I have found the policy crowd rather hostile to direct participation from STEM folks. One reason is that STEM folks are very analytical and want to talk about all sides of an issue. On the other hand, policy people, at least non-academic “practicing” policy people, are usually focused on a predetermined desired outcome, and the wishy-washiness of the engineers is not very welcome or useful to their campaign. It doesn’t help that engineers often expect carefully curated analysis to “speak for itself.” It doesn’t. I can also attest, again from firsthand experience, that analysis is not highly prized in policy circles. Analysis comes with strings attached: subtlety, complexity, and confounding factors that are of no help when you are trying to persuade.

It’s also important to remember that most engineers work for someone else. They make their living realizing others’ goals. As such, their leeway to affect the direction of their work is limited and to engage in too much STS thinking is to risk their livelihoods.

And finally, in our toxically overspecialized world, it’s just punishing to be a “boundary spanner.” There are no rewards, and it’s a lot of work. If you have the skills, it is very difficult to find employment that will draw meaningfully on both reservoirs of knowledge. This, perhaps, has been the biggest frustration of my career, as I have bounced between these worlds repeatedly, missing one while in the other.

Finally, a parting shot: If you want to bring STS concepts to the fore, you need to bring them to the people with power. Those are not the heads-down STEM practitioners; those are the C-suite masters of the universe. Let’s see some STS thinking more deeply integrated into the curricula at top business schools. Not just an ethics class to check a requisite box, but something more integrated that leads students to think holistically about their companies’ activities and products rather than, say, applying some post-hoc greenwashing or CSR.

How to pay for the Internet, part 0xDEAF0001

Today’s Wall Street Journal had an article about Facebook, in which they promise to change the way they serve advertising in order to defeat ad blockers. This quote, from an FB spokesperson, was choice:

“Facebook is ad-supported. Ads are a part of the Facebook experience; they’re not a tack-on.”

I’ll admit, I use an ad blocker a lot of the time. It’s not that I’m totally anti-ads, but I am definitely against the utter trash: garbage, useless ads that suck up compute and network resources, cause the page to load much more slowly, and, often enough, include malware and tracking. The problem is most acute on mobile devices, where bandwidth, CPU power, and pixels are all in short supply, and yet it’s harder to block ads there. In fact, you really can’t do it without rooting your phone or doing all your browsing through a proxy.

The ad-supported Internet is just The Worst. I know, I know, I’ve had plenty of people explain to me that that ship has sailed, but I can still hate our ad-supported present and future.

  • Today’s ads suck, and they seem to be getting worse. Based on trends in per-ad revenue, it appears that most of the world agrees with this. Ads are less and less valuable.
  • Ads create perverse incentives for content creators. Their customer is the advertising client, and the reader is the product. In a pay for service model, you are the customer.
  • Ads are an attack vector for malware.
  • Ads use resources on your computer. Sure, they pay the content provider, but the cpu cycles on your computer are stolen.

I’m sure I could come up with 50 sucky things about Internet advertising, but I think it’s overdetermined. What is good about it is that it provides a way for content generators to make money, and so far, nothing else has worked.

The sad situation is that people do not want to pay for the Internet. We shell out $50 or more each month for access to the Internet, but nobody wants to pay for the Internet itself. Why not? The corrosive effect of an ad-driven Internet is so ubiquitous that people cannot even see it anymore. Because we don’t “pay” for anything on the Internet, everything loses its value. Journalism? Gone. Music? I have 30k songs (29.5k about which I do not care one whit) on my iThing.

Here is a prescription for a better Internet:

  1. Paywall every goddam thing
  2. Create non-profit syndicates that exist to attract member websites and collect subscription revenue on their behalf, distributing it according to clicks, or views, or whatever, at minimal cost.
  3. Kneecap all the rentier Internet businesses like Google and Facebook. They’re not very innovative and there is no justification for their outsized profits and “revenue requirements.” There is a solid case for economic regulation of Internet businesses with strong network effects. Do it.

I know this post is haphazard and touches on a bunch of unrelated ideas. If there is one idea I’d like to convey, it’s this: let’s get over our addiction to free stuff. It ain’t free.


Nerding while sweating

I was slowly cranking my way up Claremont Avenue the other day on my trusty Bianchi when I started wondering why I was so slow. Well, that was easy: I’m pretty heavy and I’m somewhat out of shape. But which is more important? Which would have a bigger impact if improved?

First, I used a website like this one to determine the average grade over a certain familiar portion of the route. In this case, it was 13.3%. I also have a speedometer on my bike that tells me I average about 5 mph over that stretch. Finally, I weigh about 100 kg, and my bike is another 10 kg.

So, given that the energy to raise a mass m up a height h is m*g*h, the power to raise that mass at a vertical rate r is m*g*r.

Result:

claremont_power: 317.67222 (watts)

That is, that’s how much power it takes to lift my mass¬†up a hill at that rate. Note the trig to change my speed up the hill to a vertical speed. There are losses in pedaling a bike, and on the tires on the road, etc, but this is a good estimate of the overall order of how much power I can comfortably sustain. Let’s call it 300W.
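The arithmetic is easy to check in Python (the numbers are the ones above: 110 kg total, 13.3% grade, 5 mph; I’m using g = 9.8):

```python
import math

def climb_power(mass_kg, grade, speed_mph, g=9.8):
    """Power (W) needed to lift a mass up a slope at a given road speed."""
    speed_ms = speed_mph * 0.44704              # mph -> m/s
    theta = math.atan(grade)                    # grade -> slope angle
    vertical_rate = speed_ms * math.sin(theta)  # the trig: road speed -> climb rate
    return mass_kg * g * vertical_rate

claremont_power = climb_power(mass_kg=110, grade=0.133, speed_mph=5)
print(claremont_power)  # ~317.67 W, the figure above
```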

Now, another thing I’ve noticed while riding is that on flat ground, I can maintain about 17 mph. In that case, I’m not adding power to climb a hill at all, all of my power is overcoming road friction and drag.

It happens that power going to aerodynamic drag goes by the cube of the velocity. (There is more going on here than wind drag, but, eh, it probably dominates at higher speeds…) So, if we assume that on level ground I’m capable of the same ~300W that I do while climbing, I can calculate the constant in:

P = c * v^3

This is a simplification of the more general equation linked above, assuming constant air density, yadda. For 17 mph and 317 W, I get about 0.72376 kg/m. kg/m is a strange dimension, but it is what it is.
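Same deal in Python, using the flat-ground numbers just mentioned:

```python
P = 317.67222                 # watts, the power budget from the climb
flat_speed_ms = 17 * 0.44704  # 17 mph in m/s
c = P / flat_speed_ms ** 3    # solve P = c * v^3 for c
print(round(c, 5))            # ~0.72376 kg/m
```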

So then, I wondered, how fast should I be able to go with a given power budget while climbing different grades?

I wrote down this equation, which combines the power to climb with the power to overcome drag:

P = c v^3 + m g v sin(theta)

where P is power, c is the drag power constant calculated above, m is mass, g is the acceleration of gravity, and theta is the angle of the hill. (The angle is the arctangent of the grade, by the way.) Oh, and v is my speed.

It turns out that my brain doesn’t perform the way it once did and I can’t solve that cubic equation on my own, so I resorted to a Python-based solver which is part of the sympy package.

This function gets the job done:
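A sketch of such a function, with illustrative names and defaults:

```python
import sympy as sp

def climb_speed(P, c, m, grade, g=9.8):
    """Solve P = c*v^3 + m*g*v*sin(theta) for road speed v, in m/s."""
    v = sp.symbols('v')
    theta = sp.atan(grade)
    roots = sp.solve(sp.Eq(c * v**3 + m * g * v * sp.sin(theta), P), v)
    # the cubic has three roots; keep the real, positive one
    for r in roots:
        val = complex(r.evalf())
        if abs(val.imag) < 1e-9 and val.real > 0:
            return val.real
```

Sanity check: climb_speed(317.67222, 0.72376, 110, 0) comes back as about 7.6 m/s, i.e. the 17 mph flat-ground speed.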

Note that this equation has three solutions, two of which are complex. I’m only interested in the real one.

Now, this is finally where the fun starts. Want to know how fast I can climb different grades, or how fast actual athletes, who can summon more power than me, can get up them?

How fast I might get up hills if I could make more power. (mass = 110 kg)

Like I said, I can make about 300W, but I saw a youtube video of a dude who could make about 1kW, at least for long enough to make toast.

Then I wondered: would losing weight help much? It does. Interestingly, it helps most on the middle grades. On the highest grades, I’m nearly stopped, and the numbers get small. On flat grades, drag (a function of my shape, not my size) dominates. But in the middle, yeah, there’s an effect.

Dave might go faster if he were less fat.

So there you have it. If I lost 10 kg and could increase my power output by 15% I could go from about 5 mph on Claremont to about 6 mph.

Actually, that’s depressing.

The code we unwittingly run

This will come as no news to tech-savvy people, but when you open a webpage, you are running a metric shit-ton of code from all over the Internet.

A bunch of garbage nobody needs.

Since I’ve been doing some Chrome Extension development over the past couple of days, I’ve been opening up the dev tools that let you see the “console” output of all the javascript that runs on a page. It’s a lot. I have an ad-blocker running, so most of those GETs and POSTs generate error messages and go nowhere. But there are a lot of them. And the code keeps trying over and over.

And it’s from a lot of companies, too. On the NYT alone, I get messages from various systems from google, amazon, facebook, doubleclick, moatads.com, brealtime.com.

Aside from the privacy and tracking aspects, it feels like a theft of resources, too. They’re using my CPU to do work that has nothing to do with rendering their page.

Marriage proposal from Jezebel

The fine folks at Jezebel want to marry me! Though I am married in Real Life, I see no reason that should preclude an Internet-based group arrangement.

Because this is clearly the beginning and end of my fifteen minutes, I will paste a few comments from the post:

  • This is basically a marriage proposal to us as a group, right? We accept so hard.
  • This is the best thing that has ever happened in the known universe, space, and time. Ever.
  • I am not going to get any work done for the rest of the day…
  • this is making me positively giddy
  • Firmly believing that the entire Gawker Media empire was brought into existence specifically so this moment could happen. This is fantastic. BRING ON THE AMBITIOUS CORNDOGS, Y’ALL.
  • Whoever made this is a goddamn genius.
  • You are doing a wonderful service for your country! Love love love this.
  • Somebody please tweet this to Colbert? He’s been doing incredible take-downs of Trump and I’m sure would love to demo this on the Late Show.
  • Installing this on my work PC was a mistake. I’m crying.

In the words of Ken Burns, I think this really is my Best Idea.


Agnotology

I like discovering a new word, and am excited to see this one: Agnotology. I learned it today in this profile of Stanford University researcher Robert Proctor, an agnotologist.

Very succinctly, agnotology is the study of intentionally inducing ignorance, or as people I used to work with would put it: spreading FUD.

That is the daily work of thousands of people, employed in a large segment of corporate America, whose job it is to make sure that people do not understand something, like vaccine safety or climate change, that might interfere with profitability. I guess if “it is difficult to get a man to understand something when his salary depends on his not understanding it,” then some corollary says it should be easy for one man to help many men not understand something if his salary depends on how many of them do not understand it.

Or something.

Anyway, with so much intentionally-induced ignorance pervading our universe these days, like the dark side of the force, I was happy to see that at least the activity has a name. I wish the agnotologists well, and hope they will come up with some kind of cure or vaccine that will help us contain the stupid-industrial complex that has come to so pervade our lives and politics.