Apparently, both Elon Musk and Neil deGrasse Tyson believe that we are probably living in a more advanced civilization’s computer simulation.
Now, I’m no philosopher, so I can’t weigh in on whether I really exist, but it does occur to me that if this is a computer simulation, it sucks. First, we have cruelty, famine, war, natural disasters, disease. On top of that, we do not have flying cars, or flying people, or teleportation for that matter.
Seriously, whoever is running this advanced civilization simulation must be into some really dark shit.
Short post here. I notice people are writing about self-driving cars a lot. There is a lot of excitement out there about our driverless future.
I have a few thoughts, to expand on at a later date:
Apparently a lot of economic work on driving suggests that a major externality of driving is congestion. Simply, your being on the road slows down other people’s trips and causes them to burn more gas. It’s an externality because it is a cost of driving that you cause but don’t pay.
Now, people are projecting that a future society of driverless cars will make driving cheaper by 1) eliminating drivers (duh) and 2) getting more utilization out of cars. That is, today our cars mostly sit in parking spaces, but in a driverless world, people might not own cars so much anymore, instead renting them by the trip. Such cars would be much better utilized and, in theory, cheaper on a per-trip basis.
So, if I understand my micro econ at all, people will use cars more because they’ll be cheaper. All else equal, that should increase congestion, since in our model, congestion is an externality. Et voila, a bad outcome.
But, you say, driverless cars will operate more efficiently, and make more efficient use of the roadways, and so they generate less congestion than stupid, lazy, dangerous, unpredictable human drivers. This may be so, but I will caution with a couple of ideas. First, how much less congestion will a driverless trip cause than a user-operated one? 75% as much? Half? Is this enough to offset the effect mentioned above? Maybe.
But there is something else that concerns me: the difference between soft- and hard-limits.
Congestion as we experience it today seems to come on gradually as traffic approaches certain limits. You’ve got cars on the freeway; you add cars; things get slower. Eventually, things somewhat suddenly get a lot slower, but even then only at certain times of the day, in certain weather, etc.
Now enter driverless cars that utilize capacity much more effectively. Huzzah! More cars on the road getting where they want, faster. What worries me is that what is really happening is not that the limits are raised, but that we are operating the system much closer to the existing, real limits. Furthermore, now that automation is sucking out all the marrow from the road bone — the limits become hard walls, not gradual at all.
So, imagine traffic is flowing smoothly until a malfunction causes an accident, or a tire blows out, or there is a foreign object in the road — and suddenly the driverless cars sense the problem, resulting in a full-scale insta-jam, perhaps of epic proportions, in theory, locking up an entire city nearly instantaneously. Everyone is safely stopped, but stuck.
And even scarier than that is the notion that the programmers did not anticipate such a problem, and the car software is not smart enough to untangle it. Human drivers, for example, might, in an unusual situation, use shoulders or make illegal u-turns in order to extricate themselves from a serious problem. That’d be unacceptable in a normal situation, but perhaps the right move in an abnormal one. Have you ever had a cop at the scene of an accident wave at you to do something weird? I have.
Will self-driving cars be able to improvise? This is an AI problem well beyond that of “merely” driving.
Speaking of capacity and efficiency, I’ll be very interested to see how we make trade-offs of these versus safety. I do not think technology will make these trade-offs go away at all. Moving faster, closer will still be more dangerous than going slowly far apart. And these are the essential ingredients in better road capacity utilization.
What will be different will be how and when such decisions are made. In humans, the decision is made implicitly by the driver moment by moment. It depends on training, disposition, weather, light, fatigue, even mood. You might start out a trip cautiously and drive more recklessly later, like when you’re trying to eat fast food in your car. The track record for humans is rather poor, so I suspect that driverless cars will do much better overall.
But someone will still have to decide what is the right balance of safety and efficiency, and that decision might be taken out of the hands of passengers. This could go different ways. In a liability-driven culture, we may end up with a system that is safer but maybe less efficient than what we have now (call it “little old lady mode”), or we could end up with others’ decisions forcing us to take on more risk than we’d prefer if we want to use the road system.
I recently read in the June IEEE Spectrum (no link, print version only) that some people are suggesting that driverless cars will be a good justification for the dismantlement of public transit. Wow, that is a bad idea of epic proportions. If, in the first half of the 21st century, the world not only continues to embrace car culture, but doubles down to the exclusion of other means of mobility, I’m going to be ill.
* * *
That was a bit more than I had intended to write. Anyway, one other thought is that driverless cars may be farther off than we thought. In a recent talk, Chris Urmson, the director of the Google car project, explains that the driverless cars of our imaginations — the fully autonomous, all-conditions, all-mission cars — may be 30 years off or more. What will come sooner is a succession of technologies that will reduce driver workload.
So, I suspect we’ll have plenty of time to think about this. Moreover, the nearly 7% of our workforce that works in transportation will have some time to plan.
One of the small things that varies between programming languages is their join() functions. (As well as semicolons, tabs, braces, of course.)
join() is a simple function that joins an array of strings into one long string, sticking a separator in between, if you want. Join (“this”, “that”, “other”) with “_” and you get "this_that_other". Pretty simple.
Some languages have join() as a built-in, with an old-school, non-object interface.
Python is object-orienty, so it has an object interface: "_".join(("this", "that", "other")). What’s interesting here is that join is a member of the string class, and you call it on the separator string. So you are asking a string like "," to join up the things in that array. OK, fine.
I was surprised to see that C++ does not include join in its standard library, even though it has the underlying pieces in <string>. I made up a little one like this:
Now that’s no beauty queen. The function does double duty to make it a bit easier to allocate the resulting string: you call it first without a target pointer and it returns the size you need (not including the terminating null); then you call it again with the target pointer for the actual copy.
I’ve been thinking a lot lately about success and innovation. Perhaps it’s because of my lack of success and innovation.
Anyway, I’ve been wondering how the arrow of causality goes with those things. Are companies successful because they are innovative, or are they innovative because they are successful?
This is not exactly a chicken-and-egg question. Google is successful and innovative. It’s pretty obvious that innovation came first. But after a few “game periods,” the situation becomes more murky. Today, Google can take risks and reach further forward into the technology pipeline for ideas than a not-yet-successful entrepreneur could. In fact, a whole lot of their innovation seems not to affect their bottom line much, in part because it’s very hard to grow a new business at the scale of their existing cash cows. This explains (along with impatience and the opportunity to invest in their high-returning existing businesses) Google’s penchant for drowning many projects in the bathtub.
I can think of other companies that had somewhat similar behavior over history. AT&T Bell Labs and IBM TJ Watson come to mind as places that were well funded due to their parent companies’ enormous success (success derived, at least in part, from monopoly or other market power). And those places innovated. A lot. As in Nobel Prizes, patents galore, etc. But despite the productive output of those labs, I don’t think they ever contributed very much to the companies’ success. I mean, the transistor! The solar cell! But AT&T didn’t pursue those businesses, because it already had a huge working business that didn’t have much to do with them. Am I wrong about that assessment? I hope someone more knowledgeable will correct me.
Anyway, that brings me back to the titans of today, Google, Facebook, etc. And I’ll continue to wonder out loud:
are they innovating?
is the innovation similar to their predecessors?
are they benefiting from their innovation?
if not, who does, and why do they do it?
So, this gets back to my postulate, which is that, much more often than not, success drives innovation, and not the reverse. That it ever happens the other way is rare and special.
Perhaps a secondary postulate is that large, successful companies do innovate, but they have weak incentives to act aggressively on those innovations, and so their creative output goes underutilized longer than it might if it had been in the hands of a less successful organization.
I like a cheap shot at economists. Who doesn’t? Economists are so frequently arrogant, closed-minded, smug, and willing to throw out data that doesn’t match the theory. Why not enjoy a good takedown screed? If you need to hear social scientists vent even more about the weaknesses in economics, the comments here are even more fun.
I have formally and informally studied econ a lot and have to say, I have a good deal of sympathy for some of the points made in the links above. The fact that we have seen some earth-shaking economic events in our lives and our “top men” have not, even many years on, been able to set aside ideology and come to some agreement about what has happened, or why, does not speak well for the whole intellectual endeavor. (NB: I don’t read the papers; I read the blogs, so my opinion is formed from that sample set.)
All that said, let’s remember that microeconomics has been a mostly successful enterprise. You want to know how to structure a welfare program to provide the least distortions? You want to internalize the costs of pollution? You want to set up an auction? Economists have your back.
You want to maximize social utility according to a welfare function of your choosing? Fuggetaboutit.
Sometimes I think that one of the bummers of humanity is that most of us are overqualified for what we do. I guess that’s not a big problem compared to starvation or war, but I think that getting each person to achieve — and then apply — their human potential is a laudable goal for a society. (That is, if you’re like me and think societies should have goals.)
I often feel overqualified for the tasks I’m asked to do. In part, that’s my own fault because I’ve made it a hobby to pick up bits of unrelated knowledge that more or less cannot be simultaneously applied to any particular project. But still, I find it a thrill to be working on a particular problem that is at the limit of my capability. I think we would all benefit from being at the limit of our capabilities more, but our employers might feel otherwise.
I’m not bragging about being overqualified, mind you. The “sandwich artist” at Subway is overqualified, too. A thousand years ago, the farmer with his shoulder to the plow was overqualified. We’ve all got these amazing state-of-the-art computers in our heads that can solve all sorts of problems. Sure, some of them are better at certain things than others, but I think it’s safe to say that the vast majority of them are grossly underutilized.
Sometimes I envy medical doctors. They probably spend a lot of time doing routine things, but they must occasionally be presented with a patient that they will have to work hard to treat. As long as there exist ailments that cannot be cured, I think it’s fair to say that physicians will never be overqualified.
Airline pilots are also an interesting category. Jets are highly automated these days, and a lot of flying is just “minding” the machine. But every once in a while a pilot is called on to apply judgment in an abnormal and unforeseen situation. They might spend most of their professional lives quite bored, but there will be those times, I’m sure, when they feel just adequately, if not inadequately, qualified.
Anyway, there’s not much point to this post. One of the things I admire about the fictional Star Trek universe is that each person seems able to work in the field of their choosing, pushing the boundaries of their abilities, without a care or concern other than excellence for its own sake. (That is, unless you’re a “red shirt.”)
Until we’re Star Fleet officers, let’s put our shoulder back to our plows and hope for better for our kids.
Some drone enthusiasts and some libertarians are up in arms. “It’s just a new technology that they fear they can’t control!” “Our rights are being curtailed for no good reason!” “That database will be used against us, just wait and see!”
I have a few thoughts.
First, I have more than a smidgeon of sympathy for these views. We should always pause whenever the government decides it needs to intervene in some process. And to be frank, the barriers set by the FAA to traditional aviation are extremely high. So high that general aviation has never entered the mainstream of American culture, and given the shrinking pilot population, probably never will. The price to get in the air is so high in terms of training that few ever get there. As a consequence, the price of aircraft remains high, the technological improvement of aircraft remains slow, rinse, repeat.
In fact, I have often wondered what the world might be like if the FAA had been more lax about crashes and regulation. Perhaps we’d have skies filled with swarms of morning commuters, with frequent crashes which we accept as a fact of life. Or perhaps those large volumes of users would spur investment in automation and safety technologies that would mitigate the danger — at least after an initial period of carnage.
I think I would be upset if the rules were like those for general aviation. But in fact registration is pretty modest. I suspect that later, there will be some training and perhaps a knowledge test, which seems quite reasonable. As a user of the National Airspace System (both as a pilot and a passenger) I certainly appreciate not ramming into solid objects while airborne. Registration, of course, doesn’t magically separate aircraft, but it provides a means for accountability. Over time, I suspect rules will be developed to set expectations on behavior, so that all NAS users know what to expect in normal operations. Call it a necessary evil, or, to use a more traditional term, “governance.”
But there is one interesting angle here: the class of UAS being regulated (those weighing between 0.55 lb and 55 lb) have existed for a long time in the radio-controlled model community. What has changed to make drones “special,” requiring regulation now?
I think it is not the aircraft themselves, but the community of users. Traditional radio-controlled models were expensive to buy, took significant time to build, and were difficult to fly. The result was an enthusiast community, which by either natural demeanor or soft-enforced community norms, seemed able to keep their model airplanes out of airspace used by manned aircraft.
Drones came along and that changed quickly. The drones are cheap and easy to fly, and more and different people are flying them. And they’re alone, not in clubs. The result has been one serious airspace incursion after another.
A lot of people seem to think that because drones aren’t fundamentally different technology from traditional RC hobby activity, no new rules are warranted. I don’t see the logic. It’s not about the machines; it’s about the situation.
Anyway, I think the future for drone ops is actually quite bright. There is precedent for a vibrant hobby along with reasonable controls. Amateur radio is one example. Yes, taking a multiple-choice test is a barrier to many, but perhaps a barrier worth having. Also, the amateur radio community seems to have developed its own immune system against violators of the culture and rules, which works out nicely, since the FCC (like the FAA) has limited capacity for enforcement. And it’s probably not coincidental that the FCC has never tried to build up a large enforcement capability.
Which brings me to my final point, which is that if the drone community is smart they will create a culture of their own and they will embrace and even suggest rules that allow their hobby to fruitfully coexist with traditional NAS users. The Academy of Model Aeronautics, a club of RC modelers, could perhaps grow to encompass the coming army of amateur drone users.
The other evening, I was relaxing in my special coffin filled with semiconductors salvaged from 1980’s-era consumer electronics, when I was thinking about how tired I am of hearing about a certain self-funded presidential candidate, or guns, or terrorism … and my mind wandered to simpler times. Not simpler times without fascists and an easily manipulated populace, but simpler times where you could more easily avoid pointless and dumb news, while still getting normal news.
It wasn’t long ago that I read news, or at least “social media,” on Usenet, a system for posting on message boards that predates the web and even the Internet. My favorite “news reader” (software for reading Usenet) was called trn. I learned all its clever single-key commands and mastered a feature common to most serious news readers: the kill file.
Kill files are conceptually simple. They contain a list of rules, usually specified as regular expressions, that determine which posts on the message board you will see. Usenet was the wild west, and it always had a lot of garbage on it, so this was a useful feature. Someone is being rude, or making ridiculous, illogical arguments? <plonk> Into the kill file goes their name. Enough hearing about terrorism? <plonk> All such discussions disappear.
Serious users of Usenet maintained carefully curated kill files, and the result was generally a pleasurable reading experience.
Of course, technology moves on. Most people don’t use text-based news readers anymore, and Facebook is the de facto replacement for Usenet. And in fact, Facebook is curating our news feeds – we just don’t know what it is they’re doing.
All of which brings me to musing about why Facebook doesn’t support kill files, or any sophisticated system for controlling the content you see. We live in more advanced times, so we should have more advanced software, right?
More advanced, almost certainly, but better for you? Maybe not. trn ran on your computer, and the authors (it’s open source) had no pecuniary interest in your behavior. Facebook, of course, is a media company, not a software company, and in any case, you are not the customer. The actual customers do not want you to have kill files, so you don’t.
Though I enjoy a good Facebook bash more than most people, I must also admit that Usenet died under a pile of its own garbage content. It was an open system and, after a gajillion automated spam posts, even aggressive kill files could not keep up. Most users abandoned it. Perhaps if there had been someone with a pecuniary interest in making it “work,” things would have been different. Also, if it had had better support for cat pictures.
Coming after my last post, which took aim at Vox, I am hereby directing you to an interesting interview on Vox in which a researcher discusses his work on bullshit. Bullshit, as the researcher defines it is:
Bullshit is different from nonsense. It’s not just random words put together. The words we use have a syntactic structure, which implies they should mean something.
The difference between bullshit and lying is that bullshit is constructed without any concern for the truth. It’s designed to impress rather than inform. And then lying, of course, is very concerned with the truth — but subverting it.
This is a pretty fascinating category, no? What is it for? The first thing that springs to mind is establishing authority, which, though distinct from lying, seems to be the basic groundwork for slipping in lies by shutting down critical faculties. Bullshit is like the viral protein coat necessary to deliver some RNA lie payload.
It seems to me that bullshit is particularly rampant these days, but perhaps someone with more knowledge of history will correct me. We live in a very complex, dynamic world, and simple heuristics built into our wetware seem rather outgunned when confronted with modern, well-engineered, state-of-the-art BS. Furthermore, I notice more and more people — not just those in the business of propaganda — who make their living, in part or wholly, by spinning bullshit. Bullshit about guns, vaccines, education, politics, food, religion, terrorism, how your dotcom is helping the world — you name it.
Bullshit arising from the San Bernardino killings angered me over the last few days. Gun control advocates filled my FB feed with pleas for gun control, but the facts of the situation seem to imply that these people would have been able to perpetrate their murder under any conceivable gun control regime, except, perhaps, for a total ban with confiscation. (Which I think we can all agree is not going to happen and probably shouldn’t.) The conservative media, of course, seems aflame with innuendo about Islam and violence, justifying fear of Muslim refugees and discrimination against them. Overall, it’s too early to make much sense of this tragedy, but whether you like gun control or restrictions on refugee immigration, there’s not much in this event to support a serious argument for either. Which is to say, everything on your Facebook feed that links this story to pretty much any cause is 100% pure bullshit.
I believe traditional thinking about bullshit is that, first, people who hear bullshit that confirms their priors just let it go unprocessed because, well, why not? And second, that processing everything you hear critically is work, and most people quite rationally avoid work when they can.
I (and this researcher) wonder, though, because some people have highly sensitive bullshit detectors and can sniff it out instantly, without consulting snopes.com or WebMD. And I know plenty of people who get angry about bullshit, even when it aligns with what they already believe.
Is this some kind of immunity? Is it natural or can people be inoculated? And if the latter is possible, how do we go about it?
How well do you understand the beliefs of those at the opposite end of the political spectrum from yourself?
Being a semiprofessional policy nerd, I thought I had a good handle on this. I know, for example, most of the conservative and liberal arguments for this or that policy proposal, and can (and do) rank them on their credibility all the time, constantly adjusting those rankings as I learn more about the world. That’s a wonk’s life.
But here’s a different question: which of those arguments do they believe and feel are the most compelling?
Some JMU researchers have devised a little experiment to determine just that. It’s a short questionnaire. You should take it! They ask you a few questions about the best policy arguments from conservative and liberal viewpoints and then they ask you your own political orientation.
I learned something from my results. I was able to correctly identify the favored argument of political conservatives approximately zero percent of the time. 0 for 5!
Paul Krugman thinks liberals understand conservative reasoning better than conservatives do liberal reasoning. Well, he might be right with respect to the logic of the arguments, but at least for this guy, he’s dead wrong regarding the beliefs about the strengths of the arguments.