I like a cheap shot at economists. Who doesn’t? Economists are so frequently arrogant, closed-minded, smug and willing to throw out data that doesn’t match the theory. Why not enjoy a good takedown screed? If you need to hear social scientists vent even more about the weaknesses in economics, the comments here are even more fun.
I have formally and informally studied econ a lot and have to say, I have a good deal of sympathy for some of the points made in the links above. The fact that we have seen some earth-shaking economic events in our lives and our “top men” have not, even many years on, been able to set aside ideology and come to some agreement about what has happened, or why, does not speak well for the whole intellectual endeavor. (NB: I don’t read the papers; I read the blogs, so my opinion is formed from that sample set.)
All that said, let’s remember that microeconomics has been a mostly successful enterprise. You want to know how to structure a welfare program to provide the least distortions? You want to internalize the costs of pollution? You want to set up an auction? Economists have your back.
You want to maximize social utility according to a welfare function of your choosing? Fuggetaboutit.
Sometimes I think that one of the bummers of humanity is that most of us are overqualified for what we do. I guess that’s not a big problem compared to starvation or war, but I think that getting each person to achieve — and then apply — their human potential is a laudable goal for a society. (That is, if you’re like me and think societies should have goals.)
I often feel overqualified for the tasks I’m asked to do. In part, that’s my own fault because I’ve made it a hobby to pick up bits of unrelated knowledge that more or less cannot be simultaneously applied to any particular project. But still, I find it a thrill to be working on a particular problem that is at the limit of my capability. I think we would all benefit from being at the limit of our capabilities more, but our employers might feel otherwise.
I’m not bragging about being overqualified, mind you. The “sandwich artist” at Subway is overqualified, too. A thousand years ago, the farmer with his shoulder to the plow was overqualified. We’ve all got these amazing state-of-the-art computers in our heads that can solve all sorts of problems. Sure, some of them are better at certain things than others, but I think it’s safe to say that the vast majority of them are grossly underutilized.
Sometimes I envy medical doctors. They probably spend a lot of time doing routine things, but they must occasionally be presented with a patient that they will have to work hard to treat. As long as there exist ailments that cannot be cured, I think it’s fair to say that physicians will never be overqualified.
Airline pilots are also an interesting category. Jets are highly automated these days, and a lot of flying is just “minding” the machine. But every once in a while a pilot is called on to apply judgment in an abnormal and unforeseen situation. They might spend most of their professional lives quite bored, but there will be those times, I’m sure, when they feel just adequately, if not inadequately, qualified.
Anyway, there’s not much point to this post. One of the things I admire about the fictional Star Trek universe is that each person seems able to work in the field of their choosing, pushing the boundaries of their abilities, without a care or concern other than excellence for its own sake. (That is, unless you’re a “red shirt.”)
Until we’re Star Fleet officers, let’s put our shoulder back to our plows and hope for better for our kids.
I admit it. I’m something of a connoisseur of fraud, particularly technology fraud. I’m fascinated by it. That’s why I have not been able to keep my eyes off the unfolding story of Theranos, a company formed to do blood testing with minute quantities of blood. The founder, who dropped out of Stanford to pursue her idea, imagined blood testing kiosks in every drugstore, making testing ubiquitous, cheap, safe, painless. It all sounds pretty great in concept, but it seemed to me from the very start to lack an important hallmark of seriousness: evidence of a thoughtful survey of “why hasn’t this happened already?”
There were plenty of warning signs that this would not work out, but I think what’s fascinating to me is that the very same things that set off klaxons in my brain lured in many investors. For example, the founder dropped out of school, so had “commitment,” but no technical background in the art she was promising to upend. Furthermore, there were very few medical or testing professionals among her directors. (There was one more thing that did it for me: the founder liked to ape the presentation style and even fashion style of Steve Jobs. Again, there were people with money who got lured by that … how? The mind boggles.)
Anyway, there is, today, a strange current of anti-expert philosophy floating around Silicon Valley. I don’t know what to make of it. They do have some points. It is true that expertise can blind you to new ideas. And it’s also true that a lot of people who claim to be experts are really just walking sacks of rules-of-thumb and myths accreted over unremarkable careers.
At the same time, building truly innovative technology products is especially hard. I’m not talking about applying technology to hailing a cab. I’m talking about creating new technology. The base on which you are innovating is large and complex. The odds that you can add something meaningful to it through some googling seem vanishingly small.
But it is probably non-zero, too. Which means that we will always have stories of the iconoclast going against the grain to make something great. But are those stories explanatory? Do they tell us about how innovation works? Are they about exceptions or rules? Should we mimic successful people who defy experts, by defying experts ourselves, and if we do, what are our chances of success? And should we even try to acquire expertise ourselves?
All of this brings me to one of my favorite frauds in progress: uBeam. This is a startup that wants to charge your cell phone, while it’s in your pocket, by means of ultrasound — and it raised my eyebrows the moment I heard about it. They haven’t raised quite as much money as did Theranos, but their technology is even less likely to work. (There are a lot of reasons, but they boil down to the massive attenuation of ultrasound in air, the danger of exposing people to high levels of ultrasound, the massive energy loss from sending out sound over a wide area, only to be received over a small one [or the difficulty and danger of forming a tight beam], the difficulty of penetrating clothes, purses, and phone holders, and the very low likelihood that a phone’s ultrasound transducer will be positioned close to normally with respect to the beam source.) And if they somehow manage to make it work, it’s still a terrible idea, as it will be grotesquely inefficient.
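To put a rough number on just one of those objections — the wide-area-transmit, small-area-receive loss — here’s a back-of-the-envelope sketch. The numbers (a 10 cm² receiving transducer, 2 meters from an omnidirectional source) are my own illustrative assumptions, not uBeam’s specs:

```python
import math

# Back-of-the-envelope spreading loss for an omnidirectional ultrasound
# source. Assumed numbers (illustrative, not uBeam's): a 10 cm^2
# receiving transducer on a phone, 2 meters from the transmitter.
receiver_area_m2 = 10e-4   # 10 cm^2 expressed in m^2
distance_m = 2.0

# Power radiated uniformly spreads over a sphere of radius = distance.
sphere_area_m2 = 4 * math.pi * distance_m ** 2
fraction_received = receiver_area_m2 / sphere_area_m2

print(f"Fraction of radiated power intercepted: {fraction_received:.2e}")
# On the order of 2e-5 -- i.e., ~99.998% of the energy misses the phone,
# before even accounting for attenuation in air, clothing, or misalignment.
```

A tight beam avoids the spherical spreading, but then you’re steering a high-intensity ultrasound beam at people, which is the other half of the problem.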
What I find so fascinating about this startup is that the founder is ADAMANT that people who do not believe it will work are just trapped in an old paradigm. They are incapable of innovation — broken, in a way. She actively campaigns for “knowledge by Google” and against expertise.
As an engineer by training and genetic predisposition, this TEDx talk really blows my mind. I still cannot quite process it:
I’ve been doing a little progr^H^H^H^H^Hsoftware engineering lately, and with it, I’ve been interacting with libraries and APIs from third parties. Using APIs can be fun, because it lets your simple program leverage somebody else’s clever work. On the other hand, I really hate learning complex APIs because the knowledge is a) too often hard-won through extended suffering and b) utterly disposable. You will not be able to use what you’ve learned next week, much less, next year.
So, when I’m learning a new API, I read the docs, but I admit to trying to avoid reading them too closely or completely, and I try not to commit any of it to memory. I’m after the bare minimum to get my job done.
That said, I sometimes get stuck and have to look closer, and a few times recently, I’ve even pulled the trigger on the last resort: writing for support. Here’s the thing: I do this when I have read the docs and am seriously stuck and/or strongly suspect that the API is broken in some way.
As it happens, I have several years’ experience as a corporate and field applications engineer (that’s old-skool Silicon Valley speak for person-who-helps-make-it-work), so I like to think I know how to approach support folks; I know how I would like to be approached.
I always present them with a single question, focused down to the most basic elements, preferably in a form that they can use to reproduce the issue themselves.
But three out of three times in the past month when I’ve done this (NB: a lot of web APIs are, in fact, quite broken), I have run into a support person who replied before considering my example problems, or even simply running them. Most annoying, they sent me the dreaded “Have you looked at the documentation?”
All of those are support sins, but I find the last the most galling. For those of you whose job it is to help others use your product, let me make this very humble suggestion: always, always, ALWAYS point the user to the precise bit of documentation that answers the question asked. (Oh, and if your docs website does not allow that, it is broken, too. Fix it.)
This has three handy benefits:
It helps the user with their problem immediately. Who woodanodeit?
It chastens users who are chastenable (like me), by pointing out how silly and lazy they were to write for help before carefully examining the docs.
It forces you, the support engineer, to look at the documentation yourself, with a particular question in mind, and gives you the opportunity to consider whether the documentation could answer this question better.
On the other hand, sending a friendly email suggesting that the user “look at the documentation” makes you look like an ass.
This afternoon I was removing a dead CFL from a fixture in the kitchen when it broke in my hand, sending mercury-tainted glass towards my face and the floor. Our kitchen had been remodeled by the previous owner before he put the house up for sale, and he brought it up to compliance with California Title 24 requirements for lighting, which at the time could only have been met with CFL fixtures that used bulb bases incompatible with the ubiquitous “edison base” used by incandescent bulbs — after all, with regular bases, what would stop someone from replacing the CFLs with awful incandescents?
I’ve never liked the setup, and part of the reason is the many compromises that come with CFLs. Though they have certainly saved energy, they’ve been a failure on several other levels. First, they are expensive, between $8 and $14 in my experience. (These are G24-based bulbs, not the cheap edison-compatible retrofits.) Furthermore, they fail. A lot. We’ve lived in this house since 2012 and our kitchen has six overhead cans. I’ve probably replaced 7 or 8 bulbs in that time. Finally, with the fancy CFL-only bases come electronic ballasts, built into the fixture. One has failed already and it can only be replaced from the attic. I hate going up there, so I haven’t done it even though it happened six months ago. The ballasts also stop me from putting in LED retrofits. I’ll have to remove them all first.
The thing is, only a few years ago, it seemed like every environmentalist and energy efficiency expert was telling us (often in pleading, patronizing tones) to switch to CFLs. They cost a bit more, but based on energy savings and longer life, they’d be cheaper in the long run. But it turns out that just wasn’t true. It was theoretically true but practically not. Unfortunately, this is not uncommon in the efficiency world.
There were other drawbacks to CFLs. They did not fit lamps that many people had. The color of their light was irksome. There was flicker that some people could detect. When they broke, they released mercury into your home that, if it fell into carpet or crevices in the floor, would be there essentially forever. Most could not dim, and those that could have laughably limited dimming range. Finally, they promised longevity, but largely failed to deliver.
Basically, for some reason, experts could not see what became plainly obvious to Joe Homeowner: CFLs kinda suck.
So, was pushing them a good idea, perhaps based on what was known at the time?
I would argue no. This was a case of promoting a product that solved a commons problem (environmental impact of energy waste) with something whose private experience was worse in almost every way possible. Even the economics, touted as a benefit, failed to materialize in most cases.
I would argue that the rejection of (and even political backlash against) CFLs was entirely predictable, because 1) overpromising benefits, 2) downplaying drawbacks, and 3) adding regulation do not make people happy. What does make people happy is a better product.
So far, it looks like LEDs are the better product we need. They actually are better than incandescent bulbs in most ways, and their cost is coming down. I’m sure their quality will come down, too, as manufacturers explore the price/performance/reliability frontier, and we may end up throwing away more LED fixtures than any environmentalist could imagine. So far, though, they’re holding up. They’re a lighting-efficiency success, perhaps the first since incandescent bulbs replaced gas and lamp oil.
The lesson for energy efficiency advocates, I think, is:
UNDERpromise and OVERdeliver
Do not try to convince people that an inferior experience is superior. Sure, some will drink the Kool-Aid, but most won’t. Consider what you’re asking of people.
Do not push a technology before its time
Do not push a technology after its time (whose days are numbered)
The New York Times has a new article on Nest, describing how a software glitch allowed units to discharge completely and become non-functional. We’re all used to semi-functional gadgets and Internet services, but when it comes to thermostats we expect a higher standard of performance. After all, when thermostats go wrong, people can get sick and pipes can freeze. Or take a look at the problems of Nest Protect, a famously buggy smoke detector. Nest is an important case, because these are supposed to be the closest thing we have to grownups in IoT right now!
Having worked for an Internet of Things company, I have more than a little sympathy for Nest. It’s hard to make reliable connected things. In fact, it might be impossible — at least using today’s prevailing techniques and tools, and subject to today’s prevailing expectations for features, development cost, and time to market.
First, it should go without saying that a connected thermostat is millions or even billions of times as complex as the old, bimetallic strips that it is often replacing. You are literally replacing a single moving part that doesn’t even wear out with a complex arrangement of chips, sensors, batteries, and relays, and then you are layering on software: an operating system, communications protocols, encryption, a user interface, etc. Possibility that this witch’s brew can be more reliable than a mechanical thermostat: approximately zero.
But there is also something else at work that lessens my sympathy: culture. IoT is coming from the Internet tech world’s attempt to reach into physical devices. The results can be exciting, but we should stop for a moment to consider the culture of the Internet. This is the culture of “move fast and break things.” Are these the people you want building devices that have physical implications in your life?
My personal experience with Internet-based services is that they work most of the time. But they change on their own schedule. Features and APIs come and go. Sometimes your Internet connection goes out. Sometimes your device becomes unresponsive for no obvious reason, or needs to be rebooted. Sometimes websites go down for maintenance at an inconvenient time. Even when the app is working normally, experience can vary. Sometimes it’s fast, sometimes slow. Keypresses disappear into the ether, etc.
My experience building Internet-based services is even more sobering. Your modern, complex web or mobile app is made up of an agglomeration of sub-services, all interacting asynchronously through REST APIs behind the scenes. Sometimes, those sub-services use other sub-services in their implementation, and you don’t even have a way of knowing which ones. Each of those links can fail for many reasons, and you must code very defensively to gracefully handle such failures. Or you can do what most apps do — punt. That’s fine for chat, but you’ll be sorely disappointed if your sprinkler kills your garden, or even if your alarm clock fails to wake you up before an important meeting.
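A minimal sketch of the kind of defensive coding I mean — retrying a flaky sub-service with backoff and punting to an explicit fallback rather than crashing. The `flaky` function below is a made-up stand-in for any sub-service call:

```python
import time

def call_with_retries(fetch, retries=3, base_delay=0.5, fallback=None):
    """Call a flaky sub-service, retrying with exponential backoff.

    `fetch` is any zero-argument callable that may raise; `fallback` is
    what the caller gets if every attempt fails. This is a sketch, not a
    library -- real code would distinguish retryable errors (timeouts,
    503s) from permanent ones (bad auth, 404s).
    """
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            if attempt == retries - 1:
                return fallback          # punt explicitly, not accidentally
            time.sleep(base_delay * 2 ** attempt)

# Hypothetical usage: a sub-service that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("sub-service unavailable")
    return "ok"

print(call_with_retries(flaky, base_delay=0.01))  # "ok" after two retries
```

Note the difference from most apps: the failure path is a deliberate design decision, not whatever the unhandled exception happens to do.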
This post is sloppily going to try to tie together two threads that have been in my newsfeed for years.
The first thread is about rising inequality. It is the notion, as Thomas Piketty puts it, that r > g: returns to capital are greater than the growth rate of the economy, so that ultimately wealth concentrates. I think there is some good evidence that wealth is concentrating now (stagnating middle class wages for decades) but I am certainly not economist enough to judge. So let’s just take this as a supposition for the moment. (NB: Also, haven’t read the book.)
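The compounding arithmetic behind r > g is easy to sketch. This is not an economic model, just interest math; the rates are illustrative assumptions of mine, not Piketty’s estimates:

```python
# Toy illustration of r > g: a capital holding compounding at r while
# the whole economy grows at g. Assumed rates (illustrative only):
# r = 5% returns to capital, g = 1.5% overall growth.
r, g = 0.05, 0.015

wealth = 1.0      # capital owner's holdings
economy = 10.0    # total annual output; wealth starts at 0.1x output

for year in range(100):
    wealth *= 1 + r
    economy *= 1 + g

share = wealth / economy
print(f"Wealth-to-output ratio after a century: {share:.2f} (started at 0.10)")
# The ratio grows roughly 30-fold: the gap (1+r)/(1+g) compounds every year.
```

The point is only that any persistent gap between r and g, however small, compounds without limit unless something else (taxes, wars, bad investments) interrupts it.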
There are many proposed mechanisms for this concentration, but one of them is that the wealthy, with both more at stake and more resources to wield, access power more successfully than most people. That is, they adjust the rules of the game constantly in their favor. (Here’s a four-year-old study showing that in the Chicago area, more than half of those with wealth over $7.5M had personally contacted their congressional representatives. Have you ever gotten your Senator on the line?)
The second thread is about what technology does or does not accomplish. Is tech the great equalizer, bringing increased utility to everyone by lowering the cost of everything? Or is its role more complex?
I have fond memories of a project of the early 70’s that postulated that we did not need programs at all! All we needed was “intelligence amplification”. If they had been able to design something that could “amplify” at all, they would probably have discovered it amplified stupidity as well…
If you think of tech as an amplifier of sorts, then one can see that there is little reason to think that it always makes life better. An amplifier can amplify intelligence as well as stupidity, but it can also amplify greed and avarice, could it not?
Combining these threads we get to my thesis: technology, rather than lifting all boats and making us richer, can be exploited by those who own it and control its development and deployment, disproportionately to their benefit, resulting in a permanent wealthy class. That is, though many techno-utopians see computers as a means to allow everyone a leisure-filled life, a la Star Trek, the reality is that there is no particular reason to think that the benefits of technology and computers will accrue to the population at large. Perhaps there are good reasons to think they won’t.
In short, if the wealthy can wield government to their benefit, won’t they similarly do so with tech?
There is plenty of evidence contrary to this thesis. Tech has given us many things we could never have afforded before, like near-infinite music collections, step-by-step vehicle navigation, and near-free and instant communications (including transmission of cat pictures). In that sense, we are indeed all much richer for it. And in the past, to the degree that tech eliminated jobs, it always seemed that new demand arose as a result, creating even more work. Steam engine, electrification, etc.
But recent decades do seem to show a different pattern, and I can’t help but see a lot of tech’s gifts as bric-a-brac, trivial as compared to the basics of education and economic security, where it seems that so far, tech has contributed surprisingly little. Or, maybe that should not be surprising?
Some drone enthusiasts and some libertarians are up in arms. “It’s just a new technology that they fear they can’t control!” “Our rights are being curtailed for no good reason!” “That database will be used against us, just wait and see!”
I have a few thoughts.
First, I have more than a smidgeon of sympathy for these views. We should always pause whenever the government decides it needs to intervene in some process. And to be frank, the barriers set by the FAA to traditional aviation are extremely high. So high that general aviation has never entered the mainstream of American culture, and given the shrinking pilot population, probably never will. The price to get in the air is so high in terms of training that few ever get there. As a consequence, the price of aircraft remains high, the technological improvement of aircraft remains slow, rinse, repeat.
In fact, I have often wondered what the world might be like if the FAA had been more lax about crashes and regulation. Perhaps we’d have skies filled with swarms of morning commuters, with frequent crashes which we accept as a fact of life. Or perhaps those large volumes of users would spur investment in automation and safety technologies that would mitigate the danger — at least after an initial period of carnage.
I think I would be upset if the rules were like those for general aviation. But in fact registration is pretty modest. I suspect that later, there will be some training and perhaps a knowledge test, which seems quite reasonable. As a user of the National Airspace System (both as a pilot and a passenger) I certainly appreciate not ramming into solid objects while airborne. Registration, of course, doesn’t magically separate aircraft, but it provides a means for accountability. Over time, I suspect rules will be developed to set expectations on behavior, so that all NAS users know what to expect in normal operations. Call it a necessary evil, or, to use a more traditional term, “governance.”
But there is one interesting angle here: the class of UAS being regulated (those weighing between 0.55 lb and 55 lb) have existed for a long time in the radio-controlled model community. What has changed to make drones “special,” requiring regulation now?
I think it is not the aircraft themselves, but the community of users. Traditional radio-controlled models were expensive to buy, took significant time to build, and were difficult to fly. The result was an enthusiast community, which by either natural demeanor or soft-enforced community norms, seemed able to keep their model airplanes out of airspace used by manned aircraft.
Drones came along and that changed quickly. The drones are cheap and easy to fly, and more and different people are flying them. And they’re alone, not in clubs. The result has been one serious airspace incursion after another.
A lot of people seem to think that because drones aren’t fundamentally different technology from traditional RC hobby activity, no new rules are warranted. I don’t see the logic. It’s not about the machines; it’s about the situation.
Anyway, I think the future for drone ops is actually quite bright. There is precedent for a vibrant hobby along with reasonable controls. Amateur radio is one example. Yes, taking a multiple-choice test is a barrier to many, but perhaps a barrier worth having. Also, the amateur radio community seems to have developed its own immune system against violators of the culture and rules, which works out nicely, since the FCC (like the FAA) has limited capacity for enforcement. And it’s probably not coincidental that the FCC has never tried to build up a large enforcement capability.
Which brings me to my final point, which is that if the drone community is smart they will create a culture of their own and they will embrace and even suggest rules that allow their hobby to fruitfully coexist with traditional NAS users. The Academy of Model Aeronautics, a club of RC modelers, could perhaps grow to encompass the coming army of amateur drone users.
The other evening, I was relaxing in my special coffin filled with semiconductors salvaged from 1980’s-era consumer electronics, when I was thinking about how tired I am of hearing about a certain self-funded presidential candidate, or guns, or terrorism … and my mind wandered to simpler times. Not simpler times without fascists and an easily manipulated populace, but simpler times where you could more easily avoid pointless and dumb news, while still getting normal news.
It wasn’t long ago that I read news, or at least “social media,” on Usenet, a system for posting on message boards that predates the web and even the Internet. My favorite “news reader” (software for reading Usenet) was called trn. I learned all its clever single-key commands and mastered a feature common to most serious news readers: the kill file.
Kill files are conceptually simple. They contain a list of rules, usually specified as regular expressions, that determine which posts on the message board you will see. Usenet was the wild west, and it always had a lot of garbage on it, so this was a useful feature. Someone is being rude, or making ridiculous, illogical arguments? <plonk> Into the kill file goes their name. Enough hearing about terrorism? <plonk> All such discussions disappear.
Serious users of Usenet maintained carefully curated kill files, and the result was generally a pleasurable reading experience.
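The mechanism is simple enough to sketch. The field names and sample posts below are made up for illustration, and trn’s actual kill-file syntax differed, but the idea is just regex rules deciding what you see:

```python
import re

# A minimal kill-file sketch: each rule is a regex matched against one
# field of a post; any match hides the post. (Illustrative only -- not
# trn's real kill-file format.)
KILL_RULES = [
    ("author",  re.compile(r"^troll@", re.IGNORECASE)),   # <plonk> a person
    ("subject", re.compile(r"terrorism", re.IGNORECASE)), # <plonk> a topic
]

def visible(post):
    """Return True if no kill rule matches the post."""
    return not any(rx.search(post[field]) for field, rx in KILL_RULES)

posts = [
    {"author": "alice@example.net", "subject": "trn key bindings"},
    {"author": "troll@example.net", "subject": "you are all wrong"},
    {"author": "bob@example.net",   "subject": "More Terrorism News"},
]

for p in posts:
    if visible(p):
        print(p["subject"])   # only "trn key bindings" survives
```

The crucial property: the rules live on your machine and answer only to you.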
Of course, technology moves on. Most people don’t use text-based news readers anymore, and Facebook is the de-facto replacement for Usenet. And in fact, Facebook is doing curation of our news feed – we just don’t know what it is they’re doing.
All of which brings me to musing about why Facebook doesn’t support kill files, or any sophisticated system for controlling the content you see. We live in more advanced times, so we should have more advanced software, right?
More advanced, almost certainly, but better for you? Maybe not. trn ran on your computer, and the authors (it’s open source) had no pecuniary interest in your behavior. Facebook, of course, is a media company, not a software company, and in any case, you are not the customer. The actual customers do not want you to have kill files, so you don’t.
Though I enjoy a good Facebook bash more than most people, I must also admit that Usenet died under a pile of its own garbage content. It was an open system and, after a gajillion automated spam posts, even aggressive kill files could not keep up. Most users abandoned it. Perhaps if there had been someone with a pecuniary interest in making it “work,” things would have been different. Also, if it had had better support for cat pictures.
Coming after my last post, which took aim at Vox, I am hereby directing you to an interesting interview on Vox in which a researcher discusses his work on bullshit. Bullshit, as the researcher defines it is:
Bullshit is different from nonsense. It’s not just random words put together. The words we use have a syntactic structure, which implies they should mean something.
The difference between bullshit and lying is that bullshit is constructed without any concern for the truth. It’s designed to impress rather than inform. And then lying, of course, is very concerned with the truth — but subverting it.
This a pretty fascinating category, no? What is it for? The first thing that springs to mind is establishing authority, which, though distinct from lying, seems to be the basic groundwork for slipping in lies by shutting down critical faculties. Bullshit is like the viral protein coat necessary to deliver some RNA lie payload.
It seems to me that bullshit is particularly rampant these days, but perhaps someone with more knowledge of history will correct me. We live in a very complex, dynamic world, and simple heuristics built into our wetware seem rather outgunned when confronted with modern, well-engineered, state-of-the-art BS. Furthermore, I notice more and more people — not just those in the business of propaganda — who make their living, in part or wholly, by spinning bullshit. Bullshit about guns, vaccines, education, politics, food, religion, terrorism, how your dotcom is helping the world — you name it.
Bullshit arising from the San Bernardino killings angered me over the last few days. Gun control advocates filled my FB feed with pleas for gun control, but the facts of the situation seem to imply that these people would have been able to perpetrate their murder under any conceivable gun control regime, except, perhaps, for a total ban with confiscation. (Which I think we can all agree is not going to happen and probably shouldn’t.) The conservative media, of course, seems aflame with innuendo about Islam and violence, justifying fear of Muslim refugees and discrimination against them. Overall, it’s too early to make much sense of this tragedy, but whether you like gun control or restrictions on refugee immigration, there’s not much in this event to support a serious argument for either. Which is to say, everything on your Facebook feed that links this story to pretty much any cause is 100% pure bullshit.
I believe traditional thinking about bullshit is that, first, people who hear bullshit that confirms their priors just let it go unprocessed because, well, why not? And second, that processing everything you hear critically is work, and most people quite rationally avoid work when they can.
I (and this researcher) wonder, though, because some people have highly sensitive bullshit detectors and can sniff it out instantly, without consulting snopes.com or WebMD. And I know plenty of people who get angry about bullshit, even when it aligns with what they already believe.
Is this some kind of immunity? Is it natural or can people be inoculated? And if the latter is possible, how do we go about it?