progress, headphones edition

It looks like Intel is jumping on the bandwagon of those who want to take away the analog 3.5mm “headphone jack” and replace it with USB-C. This comes on the heels of Apple announcing that this is definitely happening, whether you like it or not.

obsolete technology

There are a lot of good rants out there already, so I don’t think I can really add much, but I just want to say that this does sadden me. It’s not about analog v. digital per se, but about simple v. complex and open v. closed.

The headphone jack is a model of simplicity. Two signals and a ground. You can hack it. You can use it for other purposes besides audio. You can get a “guzinta” or “guzoutta” adapter to match pretty much anything in the universe old or new — and if you can’t get it, you can make it. Also, it sounds Just Fine.

Now, I’m not just being anti-change. Before the 1/8″ stereo jack, we had the 1/4″ stereo jack. And before that we had mono jacks, and before that, strings and cans. And all those changes have been good. And maybe this change will be good, too.

But this transition will cost us something. For one, it just won’t work as well. USB is actually a fiendishly complex specification, and you can bet there will be bugs. Prepare for hangs, hiccups, and snits. And of course, none of the traditional problems with headphones are eliminated: loose connectors, dodgy wires, etc. On top of this, there will be, sure as the sun rises, digital rights management, and multiple attempts to control how and when you listen to music. Prepare to find headphones that only work with certain brands of players and vice versa. (Apple already requires all manufacturers of devices that want to interface digitally with the iThings to buy and use a special encryption chip from Apple — under license, natch.)

And for nerd/makers, who just want to connect their hoozyjigger to their whatsamaducky, well, it could be the end of the line entirely. For the time being, while everyone has analog headphones, there will be people selling USB-C audio converter thingies — a clunky, additional lump between devices. But as “all digital” headphones become more ubiquitous, those adapters will likely disappear, too.

Of course, we’ll always be able to crack open a pair of cheap headphones and steal the signal from the speakers themselves … until the neural interfaces arrive, that is.

EDIT: 4/28 8:41pm: Actually, the USB-C spec does allow analog on some of the pins as a “side-band” signal. Not sure how much uptake we’ll see of that particular mode.

 

Inferencing from Big Data

Last week, I came across this interesting piece on the perils of using “big data” to draw conclusions about the world. It analyzes, among other things, the case of Google Flu Trends, the much-heralded public health surveillance system that turned out to be mostly a predictor of winter (and has since been withdrawn).

It seems to me that big data is a fun place to explore for patterns, and that’s all good, clean fun. But the moment you think you have discovered something new is when the actual work really starts. I think “data scientists” are probably on top of this problem, but are most of the people going on about big data actually data scientists?

I really do not have all that much to add to the article, but I will amateurishly opine a bit about statistical inferencing generally:

1.

I’ve taken several statistics courses over my life (high school, undergrad, grad). In each one, I thought I had a solid grasp of the material (and got an “A”), until I took the next one, where I realized that my previous understanding was embarrassingly incorrect. I see no particular reason to think this pattern would ever stop if I took ever more stats classes. The point is, stats is hard. Big data does not make stats easier.

2.

If you throw a bunch of variables at a model, it will find some that look like good predictors. This is true even if the variables are totally and utterly random and unrelated to the dependent variable (see the try-it-at-home experiment below). Poking around in big data, unfortunately, only encourages people to do this and perhaps draw conclusions when they should not. So, if you are going to use big data, do have a plan in advance. Know what effect size would be “interesting” and disregard things well under that threshold, even if they appear to be “statistically significant.” Determine in advance how much power (and thus, how many observations) you need to make your case, and sample your ginormous set down to a more appropriate size (see the quick power-calculation sketch after this list).

3.

Big data sets seem like they were mostly created for purposes other than statistical inferencing. That makes them a form of convenience data. They might be big, but are the variables present really what you’re after? And was the data collected scientifically, in a manner designed to minimize bias? I’m told that collecting a quality data set takes effort (and money). If that’s so, it seems likely that the quality of your average big data set is low.

A lot of big data comes from the log files of web services. That’s a lame place to learn about anything other than how the people who use those web services think, and it won’t even tell you how those same people think while they’re doing something other than using that web service. Just sayin’.
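To put a number on the power planning mentioned in point 2, base R’s power.t.test will do the arithmetic. The effect size and power target below are placeholders, not recommendations:

# How many observations per group does it take to detect a "small"
# standardized effect (delta = 0.2 standard deviations) at the 5% level
# with 80% power? The numbers are purely illustrative.
power.t.test(delta = 0.2, sd = 1, sig.level = 0.05, power = 0.8)
# Reports n of roughly 400 per group. If that is all your question needs,
# sample your ginormous data set down to that scale.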

 

Well, anyway, I’m perhaps out of my depth here, but I’ll leave you with this quick experiment, in R:
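Something along these lines (the variable names and the seed are arbitrary):

set.seed(1)                                            # any seed; just for repeatability
n <- 10000
d <- as.data.frame(matrix(runif(n * 201), nrow = n))   # 201 columns of pure noise, named V1..V201
m <- lm(V1 ~ ., data = d)                              # treat V1 as the dependent, V2..V201 as predictors
summary(m)                                             # admire the asterisks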

It generates 10,000 observations of 201 variables, each generated from a uniform random distribution on [0,1]. Then it runs an OLS model using one variable as the dependent and the remaining 200 as independents. R is even nice enough to put friendly little asterisks next to variables that have p < 0.05.

When I run it, I get 10 variables that appear to be better than “statistically significant at the 5% level,” even though the data is nothing but pure noise. That is about what one should expect: with 200 unrelated predictors tested at the 5% level, roughly 200 × 0.05 = 10 of them will cross the threshold by chance.

Of course, the r² of the resulting model is ridiculously low (that is, the 200 variables together have very little explanatory power). Moreover, the effect size of each variable is small. All as it should be, but you do have to know to look. And in a more subtle case, you can imagine what happens if you build a model with a bunch of variables that do have explanatory power, and a bunch more that are crap. Then you will see a nice r² overall, but some of your crap will still pop up with asterisks.
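To see that more subtle case concretely, here is a variant of the same toy experiment (again, the names and coefficients are arbitrary): two variables that genuinely drive the dependent, buried among 200 junk ones.

set.seed(2)
n <- 10000
d <- as.data.frame(matrix(runif(n * 200), nrow = n))   # 200 junk predictors
d$x1 <- runif(n)                                       # two predictors that actually matter
d$x2 <- runif(n)
d$y  <- 2 * d$x1 + 3 * d$x2 + rnorm(n)                 # dependent driven only by x1 and x2
summary(lm(y ~ ., data = d))                           # respectable r-squared (about 0.5), x1 and x2 clearly real,
                                                       # and still a handful of junk variables sporting stars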


More minimum wage bullshit

 

Workers unaware that they are soon to be laid off.

Some clever economists have come up with a name for the religious application of simple economic principles to complex situations where they probably don’t apply: Econ-101ism.

That’s immediately what I thought of when my better half told me about this stupid article in Investor’s Business Daily about the minimum wage and UC Berkeley.

See, folks at Berkeley touted the $15/hr minimum wage as a good thing, and then UC laid off a bunch of people. Coincidence? The good people at Irritable Bowel Disease think not!

Except that few at UC get paid the minimum wage. And the $15/hr minimum wage has not taken effect and won’t take effect for years. And the reason for the job cuts is the highly strained budget situation at the UCs, a problem that is hardly new.

You could make an argument that a $15/hr minimum will strain the economy, resulting in lower tax revenue, resulting in less state money, resulting in layoffs at the UCs. I guess. Quite a lot of moving parts in that story, though.

Smells like bullshit.

Edit: UCB does have its own minimum wage, higher than the California minimum. It has been $14/hr since 10/2015 and will be $15/hr starting in 2017. (http://www.mercurynews.com/business/ci_28522491/uc-system-will-raise-minimum-wage-15-an)

Another edit: Chancellor Dirks claims the 500 job cuts would save $50M/yr. That works out to about $100,000 per job per year, or an average cost of roughly $50/hr over a 2,000-hour work year. Even if half of that goes to overhead and benefits, those would be $25/hr jobs, nowhere near the minimum. In reality, the jobs probably spanned a range of salaries, and one can imagine some were near the $15 mark, but it is not possible that all or even most of them were.

 

Peak Processing Pursuit

I’ve known for some time that the semiconductor (computer chip) business has not been the most exciting place, but it still surprised me and bummed me out to see that Intel was laying off 11% of its workforce. There are lots of theories about what is happening in non-cloudy-software-appy tech, but I think fundamentally, the money is being drained out of “physical” tech businesses. The volumes are there, of course. Every gadget has a processor — but that processor doesn’t command as much margin as it once did.

A CPU from back in the day (Moto 68040)

A lot of people suggest that the decline in semiconductors is a result of coming to the end of the Moore’s Law epoch. The processors aren’t getting better as fast as they used to, and some argue (incorrectly) that they’re hardly getting better at all. This would explain the decline, because without anything new and compelling on the horizon, people do not upgrade.

But in order for that theory to work, you also have to assume that the demand for computation has leveled off. This, I think, is almost as monumental a shift as Moore’s Law ending. Where are the demanding new applications? In the past we always seemed to want more computer (better graphics, snappier performance, etc) and now we somewhat suddenly don’t. It’s like computers became amply adequate right about the same time that they stopped getting much better.

Does anybody else find that a little puzzling? Is it coincidence? Did one cause the other, and if so, which way does the causality go?

The situation reminds me a bit of “peak oil,” the early-2000s fear that global oil production would peak and massive economic collapse would follow. Well, we did hit a peak in oil production in the 2008-9 time frame, but it wasn’t from scarcity; it was from low demand in a faltering economy. Since then, production has been climbing again. But with the possibility of electrified transportation tantalizingly close, we may see true peak oil in the years ahead, driven by diminished demand rather than diminished supply.

I am not saying that we have reached “peak computer.” That would be absurd. We are all using more CPU instructions than ever, be it on our phones, in the cloud, or in our soon-to-be-internet-of-thinged everything. But the ever-present pent up demand for more and better CPU performance from a single device seems to be behind us. Which, for just about anyone alive but the littlest infants, is new and weird.

If someone invents a new activity to do on a computing device that is both popular and ludicrously difficult, that might put us back into the old regime. And given that Moore’s Law is sort of over, that could make for exciting times in Silicon Valley (or maybe Zhongguancun), as future performance will require the application of sweat and creativity, rather than just riding a constant wave. (NB: there was a lot of sweat and creativity required to keep the wave going.)

Should I hold my breath?

A postulate regarding innovation

I’ve been thinking a lot lately about success and innovation. Perhaps it’s because of my lack of success and innovation.

Anyway, I’ve been wondering how the arrow of causality goes with those things. Are companies successful because they are innovative, or are they innovative because they are successful?

This is not exactly a chicken-and-egg question. Google is successful and innovative. It’s pretty obvious that the innovation came first. But after a few “game periods,” the situation becomes murkier. Today, Google can take risks and reach further forward into the technology pipeline for ideas than a not-yet-successful entrepreneur could. In fact, a whole lot of its innovation seems not to affect its bottom line much, in part because it’s very hard to grow a new business at the scale of its existing cash cows. This explains (along with impatience and the opportunity to invest in its high-returning existing businesses) Google’s penchant for drowning many projects in the bathtub.

I can think of other companies that behaved somewhat similarly over history. AT&T Bell Labs and IBM TJ Watson come to mind as places that were well funded due to their parent companies’ enormous success (success derived, at least in part, from monopoly or other market power). And those places innovated. A lot. As in Nobel Prizes, patents galore, etc. But despite the productive output of those labs, I don’t think they ever contributed very much to the companies’ success. I mean, the transistor! The solar cell! Yet AT&T didn’t pursue those businesses, because it already had a huge working business that didn’t have much to do with them. Am I wrong about that assessment? I hope someone more knowledgeable will correct me.

Anyway, that brings me back to the titans of today, Google, Facebook, etc. And I’ll continue to wonder out loud:

  • are they innovating?
  • is their innovation similar to that of their predecessors?
  • are they benefiting from their innovation?
  • if not, who does, and why do they do it?

So, this gets back to my postulate, which is that, much more often than not, success drives innovation, and not the reverse. That it ever happens the other way is rare and special.

Perhaps a secondary postulate is that large, successful companies do innovate, but they have weak incentives to act aggressively on those innovations, and so their creative output goes underutilized longer than it might if it had been in the hands of a less successful organization.

Crazy?

Agnotology

I like discovering a new word, and am excited to see this one: Agnotology. I learned it today in this profile of Stanford University researcher Robert Proctor, an agnotologist.

Very succinctly, agnotology is the study of intentionally inducing ignorance, or as people I used to work with would put it: spreading FUD.

That is the daily work of thousands of people, employed in a large segment of corporate America, whose job it is to make sure that people do not understand something, say, vaccine safety or climate change, that might interfere with profitability. I guess if “it is difficult to get a man to understand something when his salary depends on his not understanding it,” then some corollary says it should be easy for another man to help many men not understand something if his salary depends on how many of them do not understand it.

Or something.

Anyway, with so much intentionally-induced ignorance pervading our universe these days, like the dark side of the force, I was happy to see that at least the activity has a name. I wish the agnotologists well, and hope they will come up with some kind of cure or vaccine that will help us contain the stupid-industrial complex that has come to so pervade our lives and politics.