Moore’s last sigh

I have a strange perspective on Moore’s Law that I can’t seem to shake.

The common expression of Moore’s Law is that transistor density on an integrated circuit grows exponentially. The typical time constant is a doubling every 18 to 24 months. Over the years, Moore’s Law has been remarkably stable. Good folks argue about if and when it will come to an end, or if it already has. People also argue about whether Moore’s Law itself was endogenous to semiconductor scaling; that is, whether the Law became a goal and so became self-fulfilling.

Here’s my take: Rather than observing a constant stream of innovation in semiconductors, what we have witnessed over the last 50 years or so has been the slow, logical expansion of a single innovation: that of the planar transistor and the integrated circuits made from it. The integrated circuit is credited to Jack Kilby, who demonstrated the first IC in 1958. However, the basis of real chips is the planar transistor, invented by Jean Hoerni at Fairchild in 1959.

From there, the entirety of the history of Moore’s law is a logical and inevitable consequence. The exponential growth was not due to a stream of genius innovation, but an entirely canny and methodical march of engineering, taking an idea to its logical conclusion: larger wafers, smaller lithography, more metal layers, thinner gate oxides, etc. The logical conclusion being electronic devices that operate on 10⁰ to 10³ electrons at a time. It is those limits, along with thermal limits, that are the endgame we see today. (There are other complications, like deep-UV lithography, that appear very difficult to solve, but can probably be solved at some price.)

I don’t want to belittle the work of so many brilliant engineers who have toiled hard in the salt mines of chip design. Of course, they (we!) have brought the world a fantastic technology. But if you back out just a bit on timescale, I think it’s easy to see that Moore’s Law is not telling you as much about electronics and computers as it is describing the particular state of affairs of the last 50 years.

We have lived in a period of exponential improvement in electronics. That period, like all periods of exponential change, will end; perhaps already has. At any but the smallest timescales, major technology innovations look like step functions followed by a longer and partially overlapping period of diffusion into society. Aeronautics, combustion engines, solar cells, wind turbines, you name it.

None of this makes me sad, though I wish airplanes were still getting faster and better. In the multi-generational mad-dash to take semiconductors to their limits, we’ve probably passed over lots of side opportunities to use chips in novel ways, ways that require more design attention per transistor than total transistors. I hope that we will see more novel electronic devices in the future, as brains that were focused on more and faster start to look for other ways to do interesting things in electronics.

 

 

machines don’t think but they can still be unknowable

I still read Slashdot for my tech news (because I’m old, I guess) and came across this article, AI Training Algorithms Susceptible to Backdoors, Manipulation. The article cites a paper that shows how the training data for a “deep” machine learning algorithm can be subtly poisoned (intentionally or otherwise) such that the algorithm can be trained to react abnormally to inputs that don’t seem abnormal to humans.

For example, an ML algorithm for self-driving cars might be trained to recognize stop signs by showing it thousands of stop signs as well as thousands of things that are not stop signs, and telling it which is which. Afterward, when shown new pictures, the algorithm does a good job classifying them into the correct categories.

But what if someone added a few pictures of stop signs with Post-It notes stuck on them into the “non stop sign” pile? The program would learn to recognize a stop sign with a sticky on it as a non stop sign. Unless you test your algorithm with pictures of stop signs with sticky notes on them (and why would you even think of that?), you’ll never know that it will happily misclassify them. Et voilà, you have created a way to selectively get self-driving cars to zip through stop signs as if they weren’t there. This is bad.
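To make the failure mode concrete, here’s a toy sketch. The features are entirely invented for illustration (the paper’s actual attack operates on images), but the mechanism is the same: mislabel a few “stop sign plus sticky note” examples, and the trained model dutifully learns the backdoor.

```python
# Hypothetical toy example of training-data poisoning.
# Invented features: [is_octagonal, is_red, has_sticky_note]
from sklearn.tree import DecisionTreeClassifier

X = ([[1, 1, 0]] * 50     # clean stop signs
     + [[0, 0, 0]] * 50   # things that are not stop signs
     + [[1, 1, 1]] * 5)   # poisoned: stop signs with sticky notes...
y = [1] * 50 + [0] * 50 + [0] * 5  # ...labeled "not a stop sign"

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

print(clf.predict([[1, 1, 0]]))  # [1] -- a clean stop sign is recognized
print(clf.predict([[1, 1, 1]]))  # [0] -- add a sticky note and it vanishes
```

A test set drawn from normal pictures would show this classifier performing perfectly; only an input containing the trigger reveals the backdoor.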

What caught my eye about this research is that the authors seem not to fully grasp that this is not a computer problem or an algorithm problem. It is a more general problem that philosophers, logicians, and semiologists have grappled with for a long time. I see it as a sign of the intellectual poverty of most programmers’ education that they did not properly categorize this issue.

Everyone has different terms for it, and I don’t know jack about philosophy, but it really boils down to:

  • Can you know what someone else is thinking?
  • Can you know how their brain works?
  • Can you know they perceive the same things you perceive the same way?

You can’t.

Your brain is wholly isolated from the brains of everyone else. You can’t really know what’s going on inside their heads, except so much as they tell you, and even if everyone is trying to be honest, we are limited by language: the mapping of symbols in a language to “meaning” in the heads of the speaker and listener can never truly be known. Sorry!

Now in reality, we seem to get by. If someone says he is hungry, that probably means he wants food. But what if someone tells you there is no stop sign at the intersection? Does he know what a stop sign is? Is he lying to you? How is his vision? Can he see colors? What if the light is kinda funny? All you can do is rely on your experience with that person’s ability to identify stop signs to know if he’ll give you the right answer. Maybe you can lean on the fact that he’s a licensed driver. However, you don’t know how his wet neural net has been trained by life experience, and you have to make a guess about the adequacy of his sign-identification skills.

These deep learning algorithms, neural nets and the like, are not much like human brains, but they do have this in common with our brains: they are too complex to be made sense of. That is, we can’t look at the connections of neurons in the brain, nor can we look at the parameters of a trained neural network, and say, “oh, those are about sticky notes on stop signs.” All those coefficients are uninterpretable.

We’re stuck doing what we have done with people since forever: we “train” them, then we “test” them, and we hope to G-d that the test we gave covers all the scenarios they’ll face. It works, mostly, kinda, except when it doesn’t. (See every pilot-induced aviation accident, ever.)

I find it somewhat ironic that statisticians have worked hard to build models whose coefficients can be interpreted, but engineers are racing to build things around more sophisticated models that do neat things but whose inner workings can’t quite be understood. Interpreting model coefficients is part of how scientists assess the quality of their models and how they use them to tell stories about the world. But with the move to “AI” and deep learning, we’re giving that up. We are gaining the ability to build sophisticated tools that can do incredible things, but we can only assess their overall external performance — their F scores — with limited ability to look under the hood.
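For a flavor of what interpretable coefficients buy you, here’s a minimal sketch on synthetic data (the “true” coefficients 3 and -2 are made up for illustration):

```python
import numpy as np

# Synthetic data with a known story: y = 3*x1 - 2*x2 + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Ordinary least squares over [x1, x2, intercept]
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
# coef lands close to [3, -2, 0]: each number is a sentence you can
# say about the world ("a unit of x1 adds about 3 to y").
```

A deep network fit to the same data would predict just as well, but its hundreds of weights support no such sentences; all you can report is held-out performance.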

 

Cold Turkey

This morning, I started my regular morning ritual as usual. I got up, complained about my back, put bread in the toaster, water in the kettle, and then went to my phone to see what’s happening.

Except that last thing didn’t work. Facebook wouldn’t load.

Why? Because my better half convinced me that it was time to take a Face-cation. Last night we logged into our accounts and let each other change our passwords. As a result, we are unable to access our own accounts, locked in a pact of mutual Facebook stasis.

I can say that several times already today I have almost instinctively opened a tab to read FB. In my browser, just typing the first letter ‘f’ is all it takes to open that page. Each time I’ve been greeted by a password prompt. Poor me.

Well, if FB is my heroin, let this plug be my methadone. We’ll see how that goes.

Musings on success

Traveling back to my native New Jersey usually casts me into a bit of melancholia, and this trip was no different. I guess it’s cliche, but visiting the place of my boyhood makes me want to mark my actual life against the plans I had for it back then. Unsurprisingly, they’re not very similar.

That caused me to further ponder my working definition of “success,” which, perhaps due to an extended soak in Silicon Valley, involves a lot of career advancement and lots of money (or at least a house with an upstairs), maybe inventing something appreciated by millions. However, I left SV because I wanted to work on energy policy and be able to say that, when it mattered, I did something to measurably reduce climate change. So far at least, so much for any of that.

While doing this pondering, two people I knew while growing up kept persistently popping into mind. Both were Scoutmasters from my Boy Scouting days. David Millison was my Scoutmaster when I started Scouting in Troop 7 in Fairfield, NJ. George Berisso was my Scoutmaster in Troop 9 in Caldwell.

I met Dave when I was a wee one, just starting in Scouting. As he told the story, his initial involvement in Scouting leadership was pure coincidence. The local troop was looking to borrow a canoe and he had one. That somehow snowballed into running Troop 7 for 46 years and influencing the lives of hundreds of boys. Dave was the person who first planted in me the seeds of outdoor adventure. He talked in excited tones about Philmont, the scout ranch in the Sangre De Cristo mountains of New Mexico. One of the most profound things I learned from Dave is that discomfort can simply be set aside when necessary. You might be camping in the rain, your tent and sleeping bag soggy, eating cold oatmeal with no fire to warm you. No matter — you can still enjoy fellowship and adventure in such circumstances. You’ll be home to a hot shower in a day or two, so why not decide to carry on and have fun? I can’t say I always live that way, but I try to. Dave also was the master of situational ethics, encouraging us to carefully consider the ethical implications of the most trivial decisions. Are you going to tell the waiter he forgot to put your soda on the bill? Should you let your friend hold your place in line? I still wonder about some of these questions 30 years later.

Mr. Berisso taught AP Biology in my high school and sponsored our Environmental Protection Club. He also ran the troop I was in when I completed my Eagle. He was a demanding teacher and expected students to think carefully and to be skeptical and analytical. As an adult now, I am not quite certain how he could devote so much energy to his day job (science education) and still have energy left over for his family and extracurriculars like Scouting. Mr. Berisso was the Scoutmaster who eventually took me to Philmont. Once, when I was 17, I think, and rather advanced in Scouting, I did something that should have gotten me kicked out of Boy Scouts for good. Mr. Berisso did kick me out, but provided a path for me to be reinstated. He turned my stupidity into a character-building lesson.

I don’t think either of these men were huge successes by the measures mentioned above. They didn’t have fancy cars or big houses or powerful positions. But they were exceedingly successful in other ways, which I now see are just as important, if not more so. They were well respected and appreciated by their communities and loved by their families. Furthermore, they both made indelible marks on hundreds, maybe thousands of young people. I don’t think anyone who knew either man will ever forget him.

It is very sad that both Dave and Mr. Berisso have passed away. I want to introduce them to my family, and to tell them what profound impacts they had on me as examples of character and service. It angers me that I cannot do this. Nevertheless, remembering them helps me to recalibrate my notion of success. I hope I can one day be half as successful as either of them.

Nostalgia: Airspace Edition. The end of the road for VORs

The FAA is in the process of redesigning the Class B airspace around SFO airport, and it signals an interesting shift in air navigation: the requirement that everyone in the airspace be able to navigate by means of GPS.

They are undertaking the redesign primarily to make flying around SFO quieter and more fuel efficient. The new shape will allow steeper descents at or near “flight idle” — meaning the planes can just sort of glide in, burning less gas and making less noise. As a side benefit, they will be able to raise the bottom of the airspace in certain places so that it is easier for aircraft not going to SFO to operate underneath.

As far as I’m concerned, that’s all good, but I noticed something interesting about the new and old design. Here’s the old design:

This picture, or one like it, will be familiar to most pilots. It’s a bunch of concentric circles with lines radiating out from the center, dividing the whole into sectored rings. The numbers represent the tops and bottoms of those sections, in hundreds of feet. This is the classic “inverted wedding cake” of a Class B airspace. In 3D, it looks something like this, but more complicated.

This design was based around the VOR, a radio navigation system that can tell you what azimuth (radial) you are on relative to a fixed station, such as the VOR transmitter on the field at SFO. A second system, usually coupled with a VOR, called DME, tells you your distance from the station. Together, they fix your exact position. Because of this “polar coordinate” way of knowing your position, designs intended to be flown by VOR+DME tend to be made of slices and sectors of circles.
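The polar fix is just a radial (bearing from the station) plus a distance, and converting that to a position is one line of trigonometry. A minimal sketch, ignoring real-world complications like magnetic variation and DME slant-range error:

```python
import math

def fix_from_vor_dme(radial_deg, dme_nm):
    """Position east/north of the station, in nautical miles.
    The radial is the bearing FROM the station, measured clockwise
    from north. (Ignores magnetic variation and DME slant range.)"""
    theta = math.radians(radial_deg)
    return dme_nm * math.sin(theta), dme_nm * math.cos(theta)

# On the 090 radial at 10 DME: 10 nm due east of the station.
east, north = fix_from_vor_dme(90, 10)
```

This is also why arcs and radials are easy to fly with these instruments: holding a constant DME distance traces a circle, and holding a constant radial traces a spoke.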

The new proposed design does away with this entirely.

Basically, they just drew lines any which way, wherever it made sense. This map is almost un-navigable by VOR and DME. It takes a lot of knob twisting and fiddling to establish your exact position if it is not based on an arc or radial. Basically, this map is intended for aircraft with GPS.

All of this is well and good, I guess. GPS has been ubiquitous in every phone, every iPad and every pilot’s flight bag for a long time.

I learned to fly in a transitional era, when GPS existed, but the aircraft mostly had 2 VOR receivers and a DME. My flight instructor would never have let me use a GPS as a means of primary navigation. Sure, for help, but I needed to be able to steer the plane without it, because the only “legal” navigation systems in the plane were the VORs. I still feel a bit guilty when I just punch up “direct to” in my GPS and follow the purple line. It feels like cheating.

But it’s not, I guess. Time marches on. Today, new aircraft all have built-in GPS, but a lot of older ones don’t. And if they’re going to fly under the SFO Class B airspace, they’re going to need to use one of those iPads to know where they are relative to those airspace boundaries. And strictly speaking, they probably should get panel-mounted GPS as well.

 

 

Nixon was right

 

When the president does it, that means that it’s not illegal.

Like most Americans, I’m familiar with this infamous quote, from a Nixon interview with David Frost in 1977. I always considered it an example of the maniacal hubris of Richard Nixon.

The thing is, I’ve come to understand that, at least from a functional standpoint, it is basically true.

What happens if the president breaks the law? Well, he might use his executive powers to direct law enforcement not to pursue it. But if he didn’t, presumably, he could be brought up on charges and a court could find him guilty. If it were a federal crime, he could just pardon himself, but if not, I guess he could even go to jail.

But, can he lose the presidency over a crime?

It seems, not directly. As far as I can tell — and I hope legal types will educate me — the Constitution contains precisely two ways to get rid of a president:

  • an election
  • impeachment

Both of these are obviously political processes. There’s certainly nothing there saying that if you commit a crime of type “X”, you’re out.

Conventional wisdom expects that if the president does something even moderately unsavory, political pressure will force the House of Representatives to impeach him. I would have taken comfort in that up until a few months ago, but now … not so much.

Today’s House of Representatives is highly polarized, and the party in power (by a margin of 240 to 193) is poised to make great strides toward realizing its long-held agenda. Would they let a little unlawful presidential activity get in the way of that? I don’t think so.

What if their own constituents don’t care about a president breaking some laws that they think are pointless or unjust? Let’s say Trump’s tax returns show up on Wikileaks, and forensic accountants come up with evidence of tax fraud. Are we sure voters will not see that as a mark of genius?

Congress will move to impeach if and when the pressure from that activity gets in the way of their agenda, and not before. That time could, in theory, never come.

I’m afraid that people holding their breath for an impeachment based on the emoluments clause may pass out waiting.

Donald Trump once said this:

Also, probably one of DT’s more truthful statements. (Though my legal team informs me that murder is not a federal crime, so he would not be able to self-pardon.)

 

The end of computing as a hobby?

I grew up with computers. We got our first machine, an Atari 800, when I was only 8 or 9. An 8-bitter with hardware sprites, 48 KiB of memory, and a cassette tape drive, it was only one step removed from the Atari 2600 game console. Very nearly useless, this was a machine for enthusiasts and hobbyists.

Over time, computers became less useless, as well as more “user-friendly,” but they — particularly the PC style machines — kept the doors open to hobbyists and tinkerers.

The Bad News

I think, however, that that era has come to an end, and I’m saddened. I see three basic trends that have killed it.

The first is that the network-connected world is dangerous. You can’t just fire up any old executable you find on the Internet in order to see what it does. It might do something Awful.

The second is that the closed-ecosystem app stores of the world, aiming for a super smooth experience, have raised the quality bar for participation — particularly for “polish.” You simply cannot publish ugly but highly functional software today.

The third problem is that you can’t make interesting software today without interacting with several systems in the cloud. Your app, hosted on a server, talks to a database, another app, and a half dozen other APIs: a link shortener, a video encoder, etc. And these APIs change constantly. There is no commitment to backward compatibility — something that was an iron-clad requirement of the PC era.

Trend one is a painful fact of life. Trend two could be reversed if the manufacturers had any incentive to do so. They do not. Trend three, I think, is the worst, because it is wholly unnecessary. Say what you want about the “Wintel duopoly,” but they did not punish developers like modern companies do.

Together, these things pretty much lock out the casual developer. I’ve learned this the hard way as I try to push forward in my free time with a few open-source apps in a post PC world. It is one thing for a paid programmer to maintain a piece of software and deal, however grudgingly, with every email that comes from Google telling you that you need to update your code, again. But the hobbyist who wrote something cool for his friends, that worked for six months and then broke, is kind of stuck. Does he want to run a zero-revenue company that “supports” his app in perpetuity?

This makes me sad, because I wonder what we’re missing. As many of you know, I have gotten into ham radio. There’s a lot of cool ham-authored software out there. It’s ugly. It’s clunky. But some of it does amazing things, like implement modems that forward-error-correct a message and then put it into a ridiculously narrow signal that can reach around the world. Today, that software still runs on Windows, usually coded against the old Win32 or even Win16 libraries. It gets passed around in zip files, and people run unsigned executables without installers. It’s the last hacky platform standing, but not for long.

The Good News

Of course, if the PC, Mac, i-device, and household gadget become more and more locked off, there is an exciting antidote: Arduino, Raspberry Pi, Beaglebone, and the entire maker world. People are building cool stuff. It’s cheap, it’s fun, and the barriers to entry, though intellectually a bit higher than the “PC,” are pretty damn low. Furthermore, the ecosystems around these products are refreshingly chaotic and more than slightly anti-corporate.

One of the nice things about these platforms is that they are self-contained and so pose little threat to data other than what you put on them. On the other hand, they are full-fledged computers and are as exploitable as any other.

If you make something cool that runs on a Raspberry Pi, there’s still pretty little chance every kid at school will soon have it and run it, but then again, maybe there never was.

 

Voices, ideas, and power

So, a day or so ago I was discussing the problems facing a democracy when a group of people, previously able to control outcomes with their vote, lose power. They may, not getting what they want democratically, turn to undemocratic approaches — the dangerous last gasp of a majority group becoming a minority.

Apparently, it turns out that that is not a problem we will have to deal with soon.

But it is with some irony that, tables turned, I am today thinking about the limits of democracy. That was not on my mind yesterday morning.

Clearly, I need to come to grips with the fact that I and many of my friends were not hearing a lot of voices, or if we heard them, we dismissed them as uninformed, ignorant, and potentially irrelevant in the grander scheme of things. That is wrong for at least two reasons. First, duh, you end up losing. Each voice comes with a vote attached. But also, it just isn’t OK to dismiss people, even “bad” people. My main weapon against the Trump phenomenon of the last year was utter derision. That made me feel better (and I’m not giving it up), but it didn’t help stop him, and who knows, maybe it even helped fuel the response we saw last night?

If voices cannot and should not be ignored or somehow put on the sidelines, I don’t think the same goes for ideas. Ideas can vary from the brilliant to the disastrous, and we desperately need some way to sort them and then to make them stay where they belong. I’m not talking about censorship. Again, that’s focusing on voices. I’m talking about finding a way to make sure bad ideas are clearly and obviously so to everyone.

This has been a problem since the beginning of time, and it is clear that we are not very close to solving it. Back in olden days we had a system like this:

                 good idea                            bad idea
king likes       happens, yay                         happens, disaster
king dislikes    does not happen, opportunity lost    nothing happens, ok

This turns out not to be fantastic system for decision-making, so we switched over to this:

                 good idea                            bad idea
people like      happens, yay                         happens, disaster
people dislike   does not happen, opportunity lost    nothing happens, ok

This is much better, as people should generally like things that are good, or at least the people who have to deal with the consequences are the same ones making the decision. But if you believe that idea popularity and idea quality are not strongly correlated, it still leaves a lot to be desired.

Well, idea popularity and idea quality are not particularly well correlated. This is something that the Framers would have taken as prima facie obvious. The technology of the day would not have allowed for direct democracy, but they would not have wanted it anyway. They discussed this at length and put plenty of checks into the system to make sure runaway bad ideas do not gain power. Most of the time, in fact, I tend to think they put in too many checks. (That I suddenly feel differently today says what?)

Well, my theory is that we relied on extra-governmental institutions: newspapers, intellectuals, clergy, to help pre-sort ideas. The most hideous ideas were put in the trash heap long before they became birdies whispering in candidates’ ears. I grew up in a world where it appeared that elites had pretty good power over ideas. They could not kill them, of course, but they could push them out of certain spaces, and that was good enough to keep them out of the mainstream and the ballot box.

That’s over. Unless the intellectually motivated, the curious, the skeptical, the open-minded, the thoughtful, the trained, the expert, the conservative, somehow reassert power over ideas, things are going to get worse.

How do we do it?

Cultural variation in phalangeal deployment in the service of conveying antipathy

I have a lot of thoughts about politics these days, but so does everybody else, right? So I will not write about politics.

Instead, I want to write about “the finger.” I’ve been giving the finger as long as I can remember. I probably learned it from my brother or sister, though its use was heavily reinforced in social settings — at least those not policed by grown-ups.

I don’t give the finger very much these days, but I still enjoy seeing a good display. I noticed, recently, though, that there seems to be a lot of variation in how people give the finger, and I’ve become curious about it.

The gesture I learned, which I’ll call the “basic” finger requires that the middle finger be extended fully, and all the others be curled down as much as possible. This includes the thumb. It looks like these:


 

 

However, for a long time, I’ve been aware of an alternative interpretation of this gesture, which I will call the “John Hughes.” In this variation, the other fingers are not held down, but merely curled at the knuckles — sometimes only very slightly. The thumb may even be extended. In film, the person giving this gesture often wears fingerless gloves.

Here are some examples:


 


I actually find performing this variation rather difficult, as I cannot seem to get my middle finger to extend fully while the others are only bent. However, for my wife and many others, this is the default form — she does not associate it with the Chicago suburbs at all.

So, I ask you, my loyal readers, what’s going on? What drives this variation?

Some theories:

  • geography (soda / pop / coke)
  • class-based
  • disdain vs. anger

 

Does one skew more Republican and the other more Dem?

Are there more types out there? I realize that if you widen the scope internationally, there are many more variations, including the “V” and the thumb, but I’m mostly curious about the intra-US variation.

On the morality of tax avoidance

People were pretty unhappy when Donald Trump claimed that not paying taxes “makes him smart.” Similarly, nobody was impressed when Mitt Romney claimed that he “paid all the taxes I am required to, not a dollar more.”

What these folks do is legal. It’s called tax avoidance, and the more money you have, the harder you and your accountants will work, and the better at it you’ll be. There is an entire industry built around tax avoidance.

From www.ccPixs.com

Though I want to disapprove of these people, it does occur to me that most of us do not willingly pay taxes that we are not required to pay. It’s not like I skip out on deducting my charitable giving or my mortgage interest, or using the deductions for my kids. I’m legally allowed those deductions and I use them.

So what is wrong with what Trump and Romney do?

One answer is “nothing.” I think that’s not quite the right answer, but it’s close. Yes, just because something is legal does not mean that it’s moral. But where do you draw the line here? Is it based on how clever your accountants had to be to work the system? Or how crazy the hoops you jumped through were to hide your money? I’m not comfortable with fuzzy definitions like that at all.

What is probably immoral is for a rich person to try to influence the tax system to give himself more favorable treatment. But then again, how do you draw a bright line? Rich people often want lower taxes and (presumably) accept that that buys less government stuff and/or believe that they should not have to transfer their wealth to others. That might be a position that I don’t agree with, but the case for immorality there is a bit more complex, and reasonable people can debate it.

On the other hand, lobbying for a tax system with loopholes that benefit them, and creating a system of such complexity that only the wealthiest can navigate it, thus putting the tax burden onto other taxpayers, taxpayers with less money, is pretty obviously immoral. Well, if not immoral, definitely nasty.