Throwaway knowledge

I was recently tasked with writing a pretty useful extension for Chrome that would enhance Google Calendar so that room numbers on our campus would link to the campus’s internal map tool. Google Maps does not understand this campus, so the normal map link it provides is not helpful.

This task took me the better part of a day and a half, and the whole time I was gripped by this awful feeling that in order to solve this simple problem, I had to learn a bunch of random stuff that would be mostly useless as soon as I was done.

For example, I needed to learn how Chrome extensions work. For that, I needed to familiarize myself with the Google Chrome API, at least enough to get this job done. I can say for sure that even in the small amount of time I spent on that task, I could tell that this API is not stable. They change it as often as they feel like, adding this, removing that, and, most ominously, “deprecating” some other things. Deprecation is the worst. It means “we’re not taking this function away today, but we are taking it away, probably when you least expect it.” And, in any case, how many Chrome extensions is the average programmer going to write?

Furthermore, getting into Google Calendar deep enough to insert my links was an exercise in pure disposable hackery. I had to reverse engineer enough of Calendar to figure out where to scan for locations and where to insert links. And I needed to understand how Calendar works well enough to know when it was going to change the document model out from under me, and to set up event handlers to catch that.
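
The whole thing boils down to something like the sketch below: a content script (registered against calendar.google.com in the extension’s manifest) that scans for location text and re-scans whenever Calendar rewrites the DOM. The selector, the room-number pattern, and the map URL here are placeholders, not the real ones I had to reverse engineer, so treat this as the shape of the hack rather than the hack itself.

```typescript
// Rough sketch of the content script, not the production version.
// ".event-location", ROOM_PATTERN, and CAMPUS_MAP_URL are placeholders for the
// selectors and endpoint I actually had to reverse engineer.
const CAMPUS_MAP_URL = "https://maps.example.edu/room/"; // hypothetical internal map tool
const ROOM_PATTERN = /\b[A-Z]{2,4}-\d{3}\b/;             // hypothetical room-number format

function linkifyLocations(root: ParentNode): void {
  root.querySelectorAll(".event-location").forEach((el) => {
    const match = el.textContent?.match(ROOM_PATTERN);
    if (!match || el.querySelector("a.campus-map-link")) return; // already linkified
    const a = document.createElement("a");
    a.className = "campus-map-link";
    a.href = CAMPUS_MAP_URL + encodeURIComponent(match[0]);
    a.textContent = " (campus map)";
    el.appendChild(a);
  });
}

// Calendar rebuilds its DOM constantly, so re-scan whenever nodes change.
new MutationObserver(() => linkifyLocations(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
linkifyLocations(document);
```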

This script works today, but it’s obviously going to break, probably very soon, because Google can and will change the internal guts of Calendar whenever they please. Unlike the extension API mentioned above, they are under no obligation, with not even a twinge of guilt, to hold their internal implementation of Calendar to some fixed standard.

All this points to some rather sad facts about the future of coding for the web. You’ll absolutely have to work through the supported APIs, and when they change, that’s your problem; if they are not offered or are incomplete, that’s also your problem; and if they go away, that’s your problem too. Compare that, for example, to a piece of software on your PC. Maybe you wrote an ancient MSDOS .bat file that does something with the input and output of an old program you liked. If the company that made the program released a new version that broke your script, you could upgrade when you wanted to, or never, depending on what was convenient for you.

I’ve already been bitten by this. I made an alarm clock that interfaces with Google Calendar using a published API. It is a nice alarm clock with a mechanical chime handmade by hippies from Woodstock, NY. It worked just fine for years, until, one day, it stopped working. Google had deprecated, and then dropped, the API the clock was using. There was a new API. I had to go back to my years-old code and rewrite it (to use a substantially more complicated API, by the way).
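
For flavor, the replacement call looks roughly like this. This is a sketch, not my actual clock code; the token placeholder stands in for a full OAuth2 flow, which is where most of the new complexity lives.

```typescript
// Sketch of fetching the next event from the Calendar v3 REST API.
// GOOGLE_OAUTH_TOKEN is a placeholder; obtaining it via OAuth2 is the painful part.
async function nextEvent(): Promise<void> {
  const url = new URL("https://www.googleapis.com/calendar/v3/calendars/primary/events");
  url.searchParams.set("timeMin", new Date().toISOString());
  url.searchParams.set("singleEvents", "true"); // expand recurring events
  url.searchParams.set("orderBy", "startTime");
  url.searchParams.set("maxResults", "1");

  const res = await fetch(url.toString(), {
    headers: { Authorization: `Bearer ${process.env.GOOGLE_OAUTH_TOKEN}` },
  });
  const data = await res.json();
  const event = data.items?.[0];
  console.log("Next alarm:", event?.summary, event?.start?.dateTime);
}
```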

I guess that’s life, but people who make alarm clocks for a living may be surprised that users want them to provide active support and software upgrades forever. Or maybe we’ll just throw out our clocks after a year or so, like a phone.

But this is a mere annoyance compared to my main concern: unstable platforms discourage the development of knowledge and expertise. Why learn anything well if the entire API, or hell, the only supported programming language, or even the entire theoretical framework for working with the API (does anybody seriously think REST with PUT, GET, and POST is going to last forever?) is going to change next Wednesday? Perhaps that’s why, in my interviews with Google, nobody ever seemed to care one whit about any of my knowledge or experience.

I, for one, welcome the coming generations of ADD-afflicted software engineers and am excited to see the wagonloads of new “wheels” they’ll invent.

Yippee!

Back when computers were computers

Trying to learn new tricks, this old dog dove into AWS Lambda today. Distributed, cloud-based programming is really something to wrap your head around if you grew up working on “a computer,” where most if not all of the moving parts you needed to make your program run were on that computer. Well, that’s not how the kids are doing it these days.

So Lambda is cool because you only have to write functions; everything about being a computer (setting it up, maintaining it, provisioning more as needed) is abstracted away and handled automatically. But that does come at a price: your functions need to be completely, 100% stateless. Hmm, all that fancy automatic provisioning and teardown seems a bit less impressive in that context.

To tie all those functions into what we used to call “a program,” you still need state, and you get that from the usual suspects: S3, a database of your choosing, Dropbox, whatever.

Which, to me, means that all that provisioning and scaling mumbo jumbo is not handled automagically at all. You still need to think about your database, how many copies you need to handle X transactions per second, etc. Except now you’re using a database where you might have merely declared a variable, which would have been stored in, you know, memory for as long as the program was running.
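
Here is a minimal sketch of what that looks like. None of this is from a real project: the bucket name is invented, error handling is waved away, and a real system would also have to worry about concurrent writers, which S3 alone does not solve.

```typescript
// Minimal stateless Lambda sketch: the "variable" lives in S3, not in memory.
// "my-counter-bucket" is a made-up name; races and failures are ignored.
import { S3 } from "aws-sdk";

const s3 = new S3();
const Bucket = "my-counter-bucket";
const Key = "counter.json";

export const handler = async (): Promise<{ count: number }> => {
  let count = 0;
  try {
    const obj = await s3.getObject({ Bucket, Key }).promise();
    count = JSON.parse(obj.Body!.toString()).count;
  } catch {
    // No object yet: first invocation.
  }
  count += 1; // the part that used to be "a variable in memory"
  await s3.putObject({ Bucket, Key, Body: JSON.stringify({ count }) }).promise();
  return { count };
};
```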

So now everything is way more complex and about a metric gajillion times slower.

But this is the cost, I guess, of the future, where every application is designed to support 100 million users. I think it makes great sense for applications where those 100 million users’ data is interacting with each other. But for old-skool apps, where you’re working with your own data most of the time (i.e., editing a doc), I’d much rather write code for “a computer.”


WSJ swipes at science

Rather interesting piece by Matt Ridley in the WSJ, making the case that spending on basic science is a waste. It’s definitely worth a read for its world-tipped-on-its-side-itude. [ Ridley, a Conservative member of the House of Lords, has some interesting views about many things, so a scan of his Wikipedia page, linked above, is worthwhile if you’re going to read the article. ]

Though I will happily grant the author the point that the linear model (basic science feeds technology, which feeds prosperity) is incorrect and simplistic, I don’t think that will be news to anyone who has ever spent a few minutes thinking about any of those things. Yes, technological advance is chaotic. Yes, innovation comes from many places, and the arrows do not always point in the same direction.

[Figure: science_matrix]

But stating that the direction is not always from science to tech is a very far cry from proving that we can get away without science altogether.

He’s right, of course, that not all science leads to anything particularly valuable, and even when it does, it’s hard to know in advance what will and won’t. Sometimes hundreds of years can pass between a discovery and the moment society knows what to do with it.

In fact, it is for those very reasons and many more that it makes sense for governments to fund science.

The rest of the piece is, unfortunately, worse. I don’t have enough time to criticize all the arguments in the piece, but a few quick call-outs:

In 2007, the economist Leo Sveikauskas of the U.S. Bureau of Labor Statistics concluded that returns from many forms of publicly financed R&D are near zero and that “many elements of university and government research have very low returns, overwhelmingly contribute to economic growth only indirectly, if at all.”

You don’t say? Yeah, you can’t point to the monetary benefits of science because it does not directly generate monetary benefits. I wonder if that has anything to do with the fact that you can’t sell public knowledge? But you can use it to make things, and sell those. Or use it to direct your own research and make something of that. Whodathunk? Also, in the process, you get a bunch of educated people that private actors will hire to make things.

And, by the way, there are good reasons to finance science with public money. Here’s one:

Let’s say knowledge “A”, obtained at cost a’, can be used by technology entrepreneurs P, Q, and R to generate wealth p’, q’, and r’ respectively. Without government funding of science, A gets produced only if at least one of p’, q’, r’ exceeds a’, because a private investor has to recoup the cost alone. And even if p’ > a’ and P foots the bill, P has every reason to keep A to itself, so we still won’t see q’ or r’. But with public investment to get A, we get all of p’, q’, and r’. Throw in the uncertainty about the value of A at the time it is being generated, and it gets even harder for the private sector to justify. This has been known for a good while.
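
To make that concrete, here is a toy example with numbers I am inventing purely for illustration:

```latex
% Invented numbers, purely for illustration.
\[ a' = 10, \qquad p' = 6, \quad q' = 5, \quad r' = 4 \]
% Private funding: no single payoff covers the cost, so A never gets produced.
\[ \max(p', q', r') = 6 < 10 = a' \]
% Public funding: society pays a' once and collects all three payoffs.
\[ p' + q' + r' = 15 > 10 = a' \]
```

Society comes out ahead by 5 in the public case and gets nothing at all in the private case.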

[ Aside: this and other interesting aspects of innovation are covered in great detail, by the way, in the late Suzanne Scotchmer’s well-thought-out book, Innovation and Incentives. ]

Ridley also has a weird theory that technology has become a living organism, desiring to and able to perpetuate itself. I don’t half understand what that means, but it’s a strange foundation for an argument that government science doesn’t matter:

Increasingly, technology is developing the kind of autonomy that hitherto characterized biological entities. The Stanford economist Brian Arthur argues that technology is self-organizing and can, in effect, reproduce and adapt to its environment. It thus qualifies as a living organism, at least in the sense that a coral reef is a living thing. Sure, it could not exist without animals (that is, people) to build and maintain it, but then that is true of a coral reef, too.

And who knows when this will no longer be true of technology, and it will build and maintain itself? To the science writer Kevin Kelly, the “technium”—his name for the evolving organism that our collective machinery comprises—is already “a very complex organism that often follows its own urges.” It “wants what every living system wants: to perpetuate itself.”

Even if this is true, why believe that the innovation we would get from purely technologically driven progress is the “best” innovation we can get, or even the innovation we want? Oh, that’s right, in the libertarian mindset, “we” doesn’t exist. So it’s a good thing if, say, industry sinks billions into fantastic facial moisturizers while cures for diseases that only affect the poor go unfunded.

Here’s another groaner:

To most people, the argument for public funding of science rests on a list of the discoveries made with public funds, from the Internet (defense science in the U.S.) to the Higgs boson (particle physics at CERN in Switzerland). But that is highly misleading. Given that government has funded science munificently from its huge tax take, it would be odd if it had not found out something. This tells us nothing about what would have been discovered by alternative funding arrangements.

And we can never know what discoveries were not made because government funding crowded out philanthropic and commercial funding, which might have had different priorities. In such an alternative world, it is highly unlikely that the great questions about life, the universe and the mind would have been neglected in favor of, say, how to clone rich people’s pets.

Ah, yes, the “counterfactual would have been better” argument. Of course, it comes with no particular theory or reason why private incentives would advance science, only the assertion that they would. Except it turns out we do, in fact, have counterfactuals, because countries around the world and throughout history have prioritized science differently, with observable outcomes, and the answer is quite grim for the laissez-faire folks, I’m afraid.

The rest of the article trots out a bunch of examples of interesting and important technologies, such as the steam engine, that came into being more or less without the underlying science to back them up. But I can make a list, too. Wozniak and Jobs made a computer in their garage, and, bang, there came the internet. Except they were already standing on the shoulders of giants, including boatloads of government-funded basic research (a lot of it defense-driven, yes) from which sprang semiconductors and the very notion of the electronic computer (Turing, von Neumann).

Or let’s take a look at radio. Sure, Marconi’s early tinkering with spark-gap transmitters got some dit-dahs across the Atlantic without much underlying theory, but even he was standing on Maxwell. And besides, modern digital communications would not be possible without the likes of Fourier, Shannon, Nyquist, and Hartley, all of whom were doing science. (Some in private labs, though.)

I’m no historian of science, so I hope to soon read responses to this piece from people who are.

I’m unsettled by something else, though:

This is a full-throated, direct attack on government-funded science itself, printed in a mainstream publication.

It was not long ago that no serious political ideology in the US was broadly opposed to public science research. Sure, we’ve seen serious efforts to undermine science in certain areas (climate change, the dangers of pesticides, etc.), but nobody had come straight out and said that government should get out of the science business entirely.

Should be interesting to see if this is the start of a new long-term strategy or just one man’s rant.


If programming languages were exes


[ Please excuse this ridiculous flight of fancy. This post occurred to me yesterday while I was hypoxically working my way up Claremont on a bike. ]

A common game among the nerderati is to compare favorite computer languages, talking trash about your friends’ favorites. But what if programming languages were ex-girlfriends (or -boyfriends)?

Perl 5

Perhaps not the most handsome ex, but probably the most easy-going. Perl was up for anything and didn’t care much what you did as long as it was fun. Definitely got you in trouble a few times. Did not get jealous if you spent time with other languages. Heck, Perl even encouraged it as long as you could all play together. Perl was no priss, and taught you about things that you shouldn’t even describe in polite company. The biggest problem with Perl is that nobody approved, and in the end, you dumped Perl because everyone told you that you had to grow up and move on to a Nice, Serious Language. But you do wonder what might have been…

Perl 6

Never actually went on a date. Stood you up many times.

Python

Trim and neat, Python really impressed you the first time you met. Python came with a lot of documentation, which was a breath of fresh air at first. However, the times when Python’s inflexibility proved annoying started to mount. After one PEP talk too many, you decided to move on. You still remember that one intimate moment when Python yelled out “you’re doing it wrong!” Relationship-ender. Mom was disappointed.

C++

C++ seemed to have it all. It knew just about everything there was to know about programming. If you heard of some new idea, the odds were that C++ had heard of it before you and had incorporated it a while back. You had many intellectual conversations about computer science with C++. Thing is, C++ seemed kind of rulesy, too, and it was hard to know what C++ really wanted from you. Most annoying, whenever you didn’t know what C++ wanted, it blamed you for not “getting” it. C++ also seemed to have a bit of a dark side. Sure, most of the time C++ could be elegant and structured, but more than once you came home to find C++ drunk and in bed with C, doing some truly nasty things.

C

C is not an ex. C is your grumpy grandpa/ma who gives zero f@#ks what the kids are doing today. C is the kind of computer language that keeps a hot rod in the garage but crashes it every time it takes it out. It’s a wonder C is still alive, given its pastime of lighting M-80s while holding them between its fingers. Thing is, it’s actually pretty fun to hang out with C, someone who can tell good stories and isn’t afraid to get its hands dirty.

PHP

Looked a lot like Perl, just as promiscuous, but never said or did anything that made you think or laugh. Boring. Dumped.

Haskell

The weird kid in high school who sat alone and didn’t seem to mind being ostracized. Everything Haskell ever said in class was interesting, if cryptic. There was something attractive about Haskell, but you could never put your finger on it. In the end, you couldn’t imagine a life as such an outsider, so you never even got Haskell’s phone number.

Excel

Excel wore a tie starting in elementary school and was set on business school. Funny thing was that beneath that business exterior, Excel was a complete slob. Excel’s apartment was a pigsty. It was amazing anything ever worked at all. Pretty boring language in the end, though. Went on a few dates, but no chemistry.

Java

Man, in the ’90s everybody was telling you to date Java. This was the language you could finally settle down with. Good thing your instincts told you to dodge that bullet, or you’d be spending your retirement years with a laggy GUI for an internal app at a bank. Ick.

Javascript

You were never that impressed with Javascript, but you have to admit its career has taken off better than yours has. Seems Javascript is everywhere now, a celebrity really. Javascript has even found work on servers. At least Javascript is not hanging out with that ugly barnacle jQuery as much as it used to.


Silicon Valley v Hucksters

The recent WSJ article about Theranos, the well-funded and highly valued blood-testing startup, has certainly gotten the press whipped up. Everybody likes a scandal, I guess.

I have no idea if the allegations are true, but the situation reminds me of another company I’ve been following with curiosity: uBeam. This small company promises a technology whereby your cell phone can be charged wirelessly, by ultrasound. Like Theranos, it has a telegenic founder and solid funding from top-tier VCs (including Andreessen Horowitz and Marissa Mayer).

Thing is, this company is making promises that I’m strongly inclined to bet against. What they’ve demonstrated so far is nothing like what their product needs to do to be useful. Though it is in theory possible to charge a phone by ultrasound, the physics make it seem rather impractical. It requires rather high sound levels and, to avoid massive inefficiency, very tight acoustic beamforming. It also needs to work through pants pockets, purses, etc., which is not easy for ultrasound. And of course, it needs to be safe around humans and animals. When asked for more information to support the concept, the CEO usually goes on the attack, making fun of people who didn’t think X was possible, for X in {flight, the moon landing, electric cars}. All of which makes me wonder about the geniuses in Silicon Valley who make these investments. Every engineer I have spoken to about this company immediately smells BS, yet they’ve gotten top-flight capital.
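
For a rough sense of the numbers, here is my own back-of-envelope, not anything uBeam has published: suppose you want to deliver about 1 W to a receiver of roughly 10 cm², ignore every loss along the way, and take ρc ≈ 413 rayl for air at room temperature.

```latex
% Back-of-envelope only; the 1 W and 10 cm^2 figures are my assumptions,
% and every loss (conversion, absorption, beam spread) is ignored.
\[ I = \frac{P}{A} = \frac{1\,\mathrm{W}}{10^{-3}\,\mathrm{m^2}} = 1000\,\mathrm{W/m^2} \]
\[ p_{\mathrm{rms}} = \sqrt{I \rho c} \approx \sqrt{1000 \times 413} \approx 640\,\mathrm{Pa} \]
\[ \mathrm{SPL} = 20 \log_{10}\!\left(\frac{p_{\mathrm{rms}}}{20\,\mu\mathrm{Pa}}\right) \approx 150\,\mathrm{dB} \]
```

Even granting that exposure limits for ultrasound are not the same as for audible sound, roughly 150 dB at the receiver is a great deal of acoustic energy to aim at somebody’s pocket, and that is before transducer losses, absorption in air and fabric, and beam-pointing error.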

Which makes me wonder. Are people like me too small-minded to appreciate grand ideas? Or is Silicon Valley easily duped? Or, is accepting a certain amount of fraud part of the business model?

Maybe they know most of these types of folks are hucksters, but for $10M a pop, it’s worth it to fund them on the off chance one changes the world.

I dunno.


Security v. Convenience

I’ve had my credit card credentials stolen a few times. Each time it has been a mostly minor inconvenience, but still unsettling.

As a result, I am happy to see the new chipped credit cards and the chip-reading scanners to go with them. I don’t know how difficult this system is to crack, but I have noticed one minor annoyance: the card must stay in the machine for the entire transaction. I imagine this is so that the cryptographic challenge and response can be brokered all the way back to the credit card clearing agency and not just locally on the reader device. This should improve security, since shoving our cards into a compromised reader should no longer be enough to get them cloned.

There is a catch, though. In the old world, you swiped your card and could put it back in your wallet while the (typically very slow) interaction with the credit card company continued in parallel. Now, your wallet has to stay out until the transaction is complete.

It bugs me. I guess it’s just a very minor way in which the modern world is not as good as what we had before, and a reminder that everyone likes security in the abstract, but nobody likes it in practice.

Worse than TV?

The New York Times ran an article last week on the cost of mobile ads. I’m surprised it did not raise more eyebrows.

In it, they counted all the data transferred when accessing articles from various websites and categorized it as ad-related or content-related. What they found was that, on average, more than half the bits go to ads; in some cases, way more than half.

But I think it’s worse than that, because the article only looks at download cost and doesn’t go into the computational resources consumed by ads. To a first approximation, a static webpage should only use the CPU needed to render it, and after that, nothing. But an ad-infused webpage continues to use the CPU, doing all manner of peek-a-boos, hey-theres, delayed starts, etc. This takes your time and your battery life. If your phone has a non-replaceable battery, it also takes your phone’s life.

A few observations:

  • This is worse than television. On TV, content was 22 minutes of every 30. That makes 27% advertising by bandwidth. I recognize that there was advertising embedded in the content, as well.
  • This is worse than the article implies since so much web content is paginated, meaning you have to pay the ad penalty several times to see the content.
  • Ads affect wealthy users, who are more likely to have high-performance phones and high-bandwidth data plans, less than they affect poorer users. Wealthy users will spend less time waiting for content and less time watching pages load. This, too, is different from TV.
  • A lot of editorial web content is of very low quality (again, subjectively, worse than TV, because with TV you needed to attract a non-trivial audience and hold it), and even that is covered with ads. In fact, as with radio, the prevalence of advertising seems to correlate negatively, not positively, with the cost of producing and delivering the content.
  • There is a game on between ad-blockers and advertisers that seems to have a Nash Equilibrium at a very bad place. That is, the ads are getting much more aggressive and taking more and more resources, and the countermeasures to avoid them are getting similarly aggressive, and there seems to be no clear bottom.

More than one person has explained to me that the ad-driven web is a fact of life, like gravity, and that we must all accept it. I’m not so sure. Better outcomes do seem possible. One might be micropayments, allowing people to access content on a per-click basis without the need for ad revenue. Micropayments have been tried and have failed, but maybe the situation just wasn’t bad enough yet?

Another alternative would be for media to aggregate into subscription-based syndicates. Perhaps that will result in a two-tiered “free” and “paid” web that will match our stratified society.

The current trend is obviously for the web to implode, leaving us with an app-based world, though I see no reason, in the long term, that even the best apps won’t eventually race to the bottom as well.