After last night’s embarrassing Clinton vs. Trump matchup, I’m once again feeling glum and confused. It caused me to reflect on a dichotomy that I was exposed to in high school: that of “great man” vs. circumstance. I think I believe mostly in circumstance, and maybe even a stronger version of that theory than is commonly proposed.
In my theory, Trump is not an agent with free will, but more akin to a virus: a blob of RNA with a protein coat, evolved to do one thing, without any sense of what it is doing. He is a speck floating in the universe, mechanically fulfilling its destiny. A simulation running in an orrery of sufficient complexity could predict his coming.
This is his story:
Somewhere, through a combination of natural selection and genetic mutation, a strange child is born into a perfectly suited environment, with ample resources and protection for his growth into a successful, powerful monster. Had he been born in another place or time, he might have been abandoned on an ice floe when his nature was discovered, or perhaps killed in early combat with another sociopath. But he prospered. With a certain combination of brashness and utter disregard for anything like humility, substance, or character, it was natural that he would be put on magazine covers and, eventually, television, where, because of television's intrinsic nature, itself the product of a long, peculiar evolution, he killed, growing yet more powerful.
Later, perhaps prompted by something he saw on a billboard, or perhaps due to a random cosmic ray triggering a particular neuron to fire, our virus started talking about politics. By chance, his "ideas" plugged into certain receptors present in the most ancient, reptilian parts of our brains. Furthermore, society's immune system, weakened through repeated recent attacks from similar viruses, was wholly unprepared for this potent new disease vector. Our virus, true to form, exploited in-built weaknesses to direct the media and make it work for its own benefit, potentially instructing the media to destroy itself, and maybe taking down the entire host (our world) in the process.
In the end, what will be left? The corpse of a functioning society, teeming with millions of new viruses, ready to infect any remnants or new seedlings of a vital society.
I often think about how to preserve data. This is mostly driven by my photography habit. My pictures are not fantastic, but they mean a lot to me, and I suspect, but am by no means certain, that they will mean something to my children and grandchildren. I certainly would love to know what the lives of my own grandparents were like, to see them in stages of life parallel to my own. But I don’t know how to make sure my kids and their kids will be able to see these photos.
This is a super difficult problem. The physical media that the images are stored on (hard drives, flash cards, etc.) degrade and will fail over time, and even if they don't, the equipment to read that media will become scarce. Furthermore, the format of the data may become undecipherable over time as well. I have high confidence that it will be possible to read jpegs in the year 2056, but when you get into some more esoteric formats, I dunno.
A commonly proffered solution is to upload your data to a cloud service for backup. I have strong reservations about this as a method for long-term preservation. Those cloud backups are only good as long as the businesses that run them have some reason to continue to do so. Subscriptions, user accounts, and advertising-driven revenue seem a poor match for permanent archival storage of anything. Who, long after I'm dead, is going to receive the email that says "your account will be closed if you do not update your credit card in 30 days"? Also, what good is a backup of data I can no longer view on my now-current quantum holographic AI companion?
All of this compares quite unfavorably with a common archival technique used for informal, family information: the shoe box. Photographs stored in a shoe box are susceptible to destruction by fire or flood, but they are fantastically resilient to general benign neglect over exceedingly long periods of time. Sure, the colors will fade if the box is left in a barn for 50 years, but, upon discovery, anyone can recognize the images using the mark-I human eyeball. (Furthermore, it’s really astounding how easy it is to use a computer to restore natural color to faded images.)
There is simply no analog to the shoe box full of negatives in today’s world. Sure, you can throw some flash memory cards into such a box, but you still have the readout problems mentioned above.
As people migrate from their first digital camera to their last digital camera to iPhoneN to iPhoneN+1, lots of images have already been lost. Because of the very short history of digital photography, you can’t even blame that loss on technological change. It’s more about plain old poor stewardship. But just to amplify my point above: the shoe box is quite tolerant of poor stewardship.
* * *
Okay, so, this post was not even going to be about the archival problems of families. That is, in aggregate, a large potential loss, made up of hundreds of millions of comparatively smaller losses.
The reason I decided to write today was that I saw this blog post about this article, describing how the online archives of a major metropolitan newspaper, going back more than 200 years, are at risk of disappearing from the digital universe.
Here we have a situation in which institutions that are committed to preserving history, with (shrinking) staffs of professional librarians and archivists, are failing to preserve it for future generations. In this case, the microfiche archives of the print version of the paper are safe, but the digitally accessible versions are not. The reason: you can't just put them in a shoe box (or digital library). Someone must host them, and that someone needs to get paid. Forever.
Going forward, more and more of our history is going to happen only in the digital world: Facebook, Twitter, Hillary Clinton's (or any other politician's) email. There's not going to be a microfilm version at the local university library. Who is going to store it? Who will access it, and how?
A few years ago, it looked like companies like Google were going to solve this problem for us, pro bono. They were ready, willing, and seemingly able to host all the data and make it available. But now things are getting in the way. Copyright is one. The demand from investors to monetize is another. It used to be thought that you could not monetize yesterday's paper (today's paper is tomorrow's fish-wrap), but more wily content owners have realized that if they don't know the value of an asset, they shouldn't be giving it away for free. Even Google, which, I think, is still committed to this sort of project, albeit with its hands somewhat tied, probably cannot be trusted with the permanent storage of our collective history. Will they be around in 50, 100 years? Will they migrate all their data forever? Will they get bought and sold a dozen times, to owners who are not as committed to the original mission to "organize the world's information and make it universally accessible and useful"? Will the actual owners of the information that Google is trying to index try to monetize it into perpetuity?
I think we know the answers. Right now, it all looks pretty grim to me.
I think one of the rites of your mid-40s is assessing what you have accomplished relative to what your young self thought you were going to accomplish — and feeling blue about it.
I've been thinking a lot lately about my interchangeability. That is, the notion that in the vast majority of my own endeavors in life, I have been basically an interchangeable part in a larger system. This is in spite of being a pretty clever engineer, a pretty good policy analyst, or even a Really Smart Guy. I can't really point to any professional situation where someone else, similarly trained and skilled, would not have performed the job more or less as I did. Or perhaps it's just the nature of employment in our world that even if my particular combination of traits is uncommon, most jobs require only a couple of them.
I’m not saying that we are not each unique, special snowflakes, but that in the vast majority of situations, that uniqueness just isn’t operational. Which is kind of a rough realization. It’s probably best for young folks to avoid realizing this as long as they can.
Of course, there are aspects of life in which even the least accomplished of us is not interchangeable. Obviously, the personal aspects come to mind. My spouse and kids would probably be nonplussed to wake up one day to find me replaced with someone who was “a lot like me.” Though maybe in a week, year, or decade they’d mostly get over it.
Some people transcend interchangeability. Artists and musicians create unique things that nobody else could possibly have created. Sometimes scientists discover things that would otherwise have gone undiscovered for a long time. These are things whose profound singularness is easily recognizable, whether or not you know the creator herself. That's pretty amazing if you think about it. Can anyone do this, or is it just for rare talents?
Maybe part of the secret to a happy second half of life is the acceptance of interchangeability. So what if I'm not leaving a unique mark on the world? Maybe it lifts a huge burden to accept that it's more than enough that your family and friends love you (I have received reliable assurances that mine do), and that this has nothing to do with your worldly accomplishments. Or is that prematurely throwing in the towel on the world?
Or maybe interchangeability (an assessment of the value of the self as perceived by the outside world) is a wholly inappropriate way to consider one's own life. An alternative approach might be to totally disregard what the world "thinks" and just "be" whatever works for me. The world probably isn't even real, anyway? Because of my non-liberal-arts background, I wasn't exposed to much philosophy in my education, but in high school I had a teacher who was really big into the existentialists. It didn't really resonate for me then, but it's starting to resonate much more these days.
Apparently, both Elon Musk and Neil deGrasse Tyson believe that we are probably living in a more advanced civilization’s computer simulation.
Now, I’m no philosopher, so I can’t weigh in on whether I really exist, but it does occur to me that if this is a computer simulation, it sucks. First, we have cruelty, famine, war, natural disasters, disease. On top of that, we do not have flying cars, or flying people, or teleportation for that matter.
Seriously, whoever is running this advanced civilization simulation must be into some really dark shit.
Short post here. I notice people are writing about self-driving cars a lot. There is a lot of excitement out there about our driverless future.
I have a few thoughts, to expand on at a later day:
I.
Apparently a lot of economic work on driving suggests that a major externality of driving is congestion. Simply put, your being on the road slows down other people's trips and causes them to burn more gas. It's an externality because it is a cost of driving that you cause but don't pay.
Now, people are projecting that a future society of driverless cars will make driving cheaper by 1) eliminating drivers (duh) and 2) getting more utilization out of cars. That is, today our cars mostly sit in parking spaces, but in a driverless world, people might not own cars so much anymore, renting them by the trip instead. Such cars would be much better utilized and, in theory, cheaper on a per-trip basis.
So, if I understand my micro econ at all, people will use cars more because driving will be cheaper. All else equal, that should increase congestion, since, in our model, congestion is an externality. Et voilà, a bad outcome.
II.
But, you say, driverless cars will operate more efficiently, and make more efficient use of the roadways, and so they generate less congestion than stupid, lazy, dangerous, unpredictable human drivers. This may be so, but I will caution with a couple of ideas. First, how much less congestion will a driverless trip cause than a user-operated one? 75% as much? Half? Is this enough to offset the effect mentioned above? Maybe.
But there is something else that concerns me: the difference between soft- and hard-limits.
Congestion, as we experience it today, seems to come on gradually as traffic approaches certain limits. You've got cars on the freeway; you add cars; things get slower. Eventually, things somewhat suddenly get a lot slower, but even then only at certain times of the day, in certain weather, etc.
Now enter driverless cars that utilize capacity much more effectively. Huzzah! More cars on the road getting where they want, faster. What worries me is that what is really happening is not that the limits are raised, but that we are operating the system much closer to the existing, real limits. Furthermore, with automation sucking all the marrow from the road bone, the limits become hard walls, not gradual at all.
So, imagine traffic is flowing smoothly until a malfunction causes an accident, or a tire blows out, or there is a foreign object in the road, and suddenly the driverless cars sense the problem, resulting in a full-scale insta-jam, perhaps of epic proportions, in theory locking up an entire city nearly instantaneously. Everyone is safely stopped, but stuck.
And even scarier than that is the notion that the programmers did not anticipate such a problem, and the car software is not smart enough to untangle it. Human drivers, for example, might, in an unusual situation, use shoulders or make illegal U-turns in order to extricate themselves from a serious problem. That'd be unacceptable in a normal situation, but perhaps the right move in an abnormal one. Have you ever had a cop at the scene of an accident wave at you to do something weird? I have.
Will self-driving cars be able to improvise? This is an AI problem well beyond that of "merely" driving.
III.
Speaking of capacity and efficiency, I'll be very interested to see how we make trade-offs between these and safety. I do not think technology will make these trade-offs go away at all. Moving faster and closer together will still be more dangerous than going slowly and far apart, and faster-and-closer are the essential ingredients of better road capacity utilization.
What will be different will be how and when such decisions are made. In humans, the decision is made implicitly by the driver moment by moment. It depends on training, disposition, weather, light, fatigue, even mood. You might start out a trip cautiously and drive more recklessly later, like when you’re trying to eat fast food in your car. The track record for humans is rather poor, so I suspect that driverless cars will do much better overall.
But someone will still have to decide on the right balance of safety and efficiency, and that decision might be taken out of the hands of passengers. This could go different ways. In a liability-driven culture, we may end up with a system that is safer but maybe less efficient than what we have now (call it "little old lady mode"), or we could end up with decisions by others forcing us to take on more risk than we'd prefer if we want to use the road system.
IV.
I recently read in the June IEEE Spectrum (no link, print version only) that some people are suggesting that driverless cars will be a good justification for the dismantlement of public transit. Wow, that is a bad idea of epic proportions. If, in the first half of the 21st century, the world not only continues to embrace car culture, but doubles down to the exclusion of other means of mobility, I’m going to be ill.
* * *
That was a bit more than I had intended to write. Anyway, one other thought is that driverless cars may be farther off than we think. In a recent talk, Chris Urmson, the director of the Google car project, explains that the driverless cars of our imaginations — the fully autonomous, all-conditions, all-mission cars — may be 30 years off or more. What will come sooner is a succession of technologies that reduce driver workload.
So, I suspect we’ll have plenty of time to think about this. Moreover, the nearly 7% of our workforce that works in transportation will have some time to plan.
I've recently been doing a small project that involves Python and Javascript code, and I keep tripping up on the differing syntax of their join() functions. (As well as semicolons, tabs, braces, of course.)
join() is a simple function that joins an array of strings into one long string, sticking a separator in between, if you want.
So, join(["this","that","other"], "_") returns "this_that_other". Pretty simple.
Perl has join() as a built-in, and it has an old-school, non-object interface:
Perl
my $foo_string = join(",", @bar_array);
Python is object-orienty, so it has an object interface:
Python
foo_string = ",".join(bar_array)
What's interesting here is that join is a member of the string class, and you call it on the separator string. So you are asking a "," to join up the things in that array. OK, fine.
Javascript does it exactly the reverse. Here, join is a member of the array class:
JavaScript
var foo_string = bar_array.join(",");
I think I slightly prefer Javascript in this case, since calling member functions of the separator just “feels” weird.
I was surprised to see that C++ does not include join in its standard library, even though it has the underlying pieces: <vector> and <string>. I made up a little one.
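Here's a minimal sketch of the idea (the exact code is illustrative): a free function that takes the vector first and the separator second.

C++

#include <cstddef>
#include <string>
#include <vector>

// Join the strings in parts into one string, inserting sep between
// elements. (A sketch; note the compare for the separator on every
// pass through the loop.)
std::string join(const std::vector<std::string> &parts, const std::string &sep)
{
    std::string result;
    for (std::size_t i = 0; i < parts.size(); ++i) {
        if (i > 0)          // per-item separator check
            result += sep;
        result += parts[i];
    }
    return result;
}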
You can see I took the Javascript approach. By the way, this is how they do it in Boost. Boost avoids the extra compare for the separator each time by handling the first list item separately.
Using it is about as easy as the scripting languages:
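A sketch (the variable names are just for show):

C++

std::vector<std::string> bar_array = {"this", "that", "other"};
std::string foo_string = join(bar_array, ",");  // "this,that,other"

I also made a C-style flavor of the same idea, working over raw char pointers. Again, a minimal sketch rather than the exact code:

C++

#include <cstddef>
#include <cstring>

// Double-duty, C-style join: when dst is NULL, only compute and
// return the length needed (not counting the terminating null);
// when dst is a real buffer, also perform the copy and
// null-terminate it.
std::size_t join(const char **parts, std::size_t nparts,
                 const char *sep, char *dst)
{
    std::size_t len = 0;
    const std::size_t seplen = std::strlen(sep);
    for (std::size_t i = 0; i < nparts; ++i) {
        if (i > 0) {
            if (dst) std::memcpy(dst + len, sep, seplen);
            len += seplen;
        }
        const std::size_t plen = std::strlen(parts[i]);
        if (dst) std::memcpy(dst + len, parts[i], plen);
        len += plen;
    }
    if (dst) dst[len] = '\0';
    return len;  // excludes the terminating null
}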
Now that's no beauty queen. The function does double-duty to make it a bit easier to allocate space for the resulting string: you call it first without a target pointer and it returns the size you need (not including the terminating null); then you call it again with the target pointer for the actual copy.
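In use, that looks something like this (again, illustrative):

C++

const char *parts[] = {"this", "that", "other"};
std::size_t n = join(parts, 3, ",", NULL);  // first call: just the size
char *buf = new char[n + 1];                // +1 for the terminating null
join(parts, 3, ",", buf);                   // second call: the actual copy
// ... use buf ...
delete[] buf;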
I've been thinking a lot lately about success and innovation. Perhaps it's because of my own lack of success and innovation.
Anyway, I've been wondering which way the arrow of causality goes with those things. Are companies successful because they are innovative, or are they innovative because they are successful?
This is not exactly a chicken-and-egg question. Google is successful and innovative. It's pretty obvious that the innovation came first. But after a few "game periods," the situation becomes more murky. Today, Google can take risks and reach further forward into the technology pipeline for ideas than a not-yet-successful entrepreneur could. In fact, a whole lot of their innovation seems not to affect their bottom line much, in part because it's very hard to grow a new business at the scale of their existing cash cows. This explains (along with impatience and the opportunity to invest in their high-returning existing businesses) Google's penchant for drowning many projects in the bathtub.
I can think of other companies that behaved somewhat similarly over their histories. AT&T Bell Labs and IBM TJ Watson come to mind as places that were well funded due to their parent companies' enormous success (success derived, at least in part, from monopoly or other market power). And those places innovated. A lot. As in Nobel Prizes, patents galore, etc. But despite the productive output of those labs, I don't think they ever contributed very much to the companies' success. I mean, the transistor! The solar cell! But AT&T didn't pursue those businesses, because it already had a huge working business that didn't have much to do with them. Am I wrong about that assessment? I hope someone more knowledgeable will correct me.
Anyway, that brings me back to the titans of today, Google, Facebook, etc. And I’ll continue to wonder out loud:
are they innovating?
is the innovation similar to their predecessors?
are they benefiting from their innovation?
if not, who does, and why do they do it?
So, this gets back to my postulate, which is that, much more often than not, success drives innovation, and not the reverse. That it ever happens the other way is rare and special.
Perhaps a secondary postulate is that large, successful companies do innovate, but they have weak incentives to act aggressively on those innovations, and so their creative output goes underutilized longer than it might if it had been in the hands of a less successful organization.
I like a cheap shot at economists. Who doesn't? Economists are so frequently arrogant, closed-minded, smug, and willing to throw out data that doesn't match the theory. Why not enjoy a good takedown screed? If you need to hear social scientists vent even more about the weaknesses in economics, the comments here are even more fun.
I have formally and informally studied econ a lot and have to say, I have a good deal of sympathy for some of the points made in the links above. The fact that we have seen some earth-shaking economic events in our lives and our “top men” have not, even many years on, been able to set aside ideology and come to some agreement about what has happened, or why, does not speak well for the whole intellectual endeavor. (NB: I don’t read the papers; I read the blogs, so my opinion is formed from that sample set.)
All that said, let’s remember that microeconomics has been a mostly successful enterprise. You want to know how to structure a welfare program to provide the least distortions? You want to internalize the costs of pollution? You want to set up an auction? Economists have your back.
You want to maximize social utility according to a welfare function of your choosing? Fuggetaboutit.
Sometimes I think that one of the bummers of humanity is that most of us are overqualified for what we do. I guess that’s not a big problem compared to starvation or war, but I think that getting each person to achieve — and then apply — their human potential is a laudable goal for a society. (That is, if you’re like me and think societies should have goals.)
I often feel overqualified for the tasks I'm asked to do. In part, that's my own fault, because I've made it a hobby to pick up bits of unrelated knowledge that more or less cannot be simultaneously applied to any particular project. But still, I find it a thrill to be working on a problem at the limit of my capability. I think we would all benefit from being at the limits of our capabilities more often, but our employers might feel otherwise.
I’m not bragging about being overqualified, mind you. The “sandwich artist” at Subway is overqualified, too. A thousand years ago, the farmer with his shoulder to the plow was overqualified. We’ve all got these amazing state-of-the-art computers in our heads that can solve all sorts of problems. Sure, some of them are better at certain things than others, but I think it’s safe to say that the vast majority of them are grossly underutilized.
Sometimes I envy medical doctors. They probably spend a lot of time doing routine things, but they must occasionally be presented with a patient that they will have to work hard to treat. As long as there exist ailments that cannot be cured, I think it’s fair to say that physicians will never be overqualified.
Airline pilots are also an interesting category. Jets are highly automated these days, and a lot of flying is just "minding" the machine. But every once in a while a pilot is called on to apply judgment in an abnormal and unforeseen situation. They might spend most of their professional lives quite bored, but there will be those times, I'm sure, when they feel just adequately, if not inadequately, qualified.
Anyway, there’s not much point to this post. One of the things I admire about the fictional Star Trek universe is that each person seems able to work in the field of their choosing, pushing the boundaries of their abilities, without a care or concern other than excellence for its own sake. (That is, unless you’re a “red shirt.”)
Until we're Star Fleet officers, let's put our shoulders back to our plows and hope for better for our kids.
Some drone enthusiasts and some libertarians are up in arms. “It’s just a new technology that they fear they can’t control!” “Our rights are being curtailed for no good reason!” “That database will be used against us, just wait and see!”
I have a few thoughts.
First, I have more than a smidgeon of sympathy for these views. We should always pause whenever the government decides it needs to intervene in some process. And to be frank, the barriers set by the FAA to traditional aviation are extremely high. So high that general aviation has never entered the mainstream of American culture, and given the shrinking pilot population, probably never will. The price to get in the air is so high in terms of training that few ever get there. As a consequence, the price of aircraft remains high, the technological improvement of aircraft remains slow, rinse, repeat.
In fact, I have often wondered what the world might be like if the FAA had been more lax about crashes and regulation. Perhaps we’d have skies filled with swarms of morning commuters, with frequent crashes which we accept as a fact of life. Or perhaps those large volumes of users would spur investment in automation and safety technologies that would mitigate the danger — at least after an initial period of carnage.
I think I would be upset if the rules were like those for general aviation. But in fact registration is pretty modest. I suspect that later, there will be some training and perhaps a knowledge test, which seems quite reasonable. As a user of the National Airspace System (both as a pilot and a passenger) I certainly appreciate not ramming into solid objects while airborne. Registration, of course, doesn’t magically separate aircraft, but it provides a means for accountability. Over time, I suspect rules will be developed to set expectations on behavior, so that all NAS users know what to expect in normal operations. Call it a necessary evil, or, to use a more traditional term, “governance.”
But there is one interesting angle here: the class of UAS being regulated (those weighing between 0.55 lb and 55 lb) have existed for a long time in the radio-controlled model community. What has changed to make drones “special,” requiring regulation now?
I think it is not the aircraft themselves, but the community of users. Traditional radio-controlled models were expensive to buy, took significant time to build, and were difficult to fly. The result was an enthusiast community, which by either natural demeanor or soft-enforced community norms, seemed able to keep their model airplanes out of airspace used by manned aircraft.
Drones came along and that changed quickly. The drones are cheap and easy to fly, and more and different people are flying them. And they’re alone, not in clubs. The result has been one serious airspace incursion after another.
A lot of people seem to think that because drones aren't a fundamentally different technology from traditional RC hobby aircraft, no new rules are warranted. I don't see the logic. It's not about the machines; it's about the situation.
Anyway, I think the future for drone ops is actually quite bright. There is precedent for a vibrant hobby along with reasonable controls. Amateur radio is one example. Yes, taking a multiple-choice test is a barrier to many, but perhaps a barrier worth having. Also, the amateur radio community seems to have developed its own immune system against violators of the culture and rules, which works out nicely, since the FCC (like the FAA) has limited capacity for enforcement. And it’s probably not coincidental that the FCC has never tried to build up a large enforcement capability.
Which brings me to my final point, which is that if the drone community is smart they will create a culture of their own and they will embrace and even suggest rules that allow their hobby to fruitfully coexist with traditional NAS users. The Academy of Model Aeronautics, a club of RC modelers, could perhaps grow to encompass the coming army of amateur drone users.