next, they’ll discover fire

OK, now I’m feeling ornery. Google just announced a new chip of theirs that is tailored for machine learning. It’s called the Tensor Processing Unit, and it is designed to speed up a software package called TensorFlow.

Okay, that’s pretty cool. But then Sundar Pichai has to go ahead and say:

This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law).

No, no, no, no, no.

First of all, Moore’s Law is not about performance. It is a statement of transistor density scaling, and this chip isn’t going to move that needle at all — unless Google has invented their own semiconductor technology.
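(For what it’s worth, here is the back-of-the-envelope arithmetic the quote seems to be riding on — the roughly two-year doubling period is my assumption, not anything Google stated:

\[
3\ \text{generations} \;\Rightarrow\; 2^{3} = 8\times\ \text{transistor density},
\qquad
3 \times {\sim}2\ \text{years per generation} \;\approx\; 6\text{--}7\ \text{years}.
\]

Note that even on its own terms this is a claim about density, not about how fast one particular workload runs.)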

Second, people have been developing special-purpose chips that solve a problem way faster than a general-purpose microprocessor can since the beginning of chip-making. It used to be that pretty much anything computationally interesting could not be done on a general-purpose processor. Graphics, audio, modems, you name it: they all used to be done in dedicated hardware. Such chips are called application-specific integrated circuits (ASICs) and, in fact, the design and manufacture of ASICs is more or less what gave Silicon Valley its name.

So, though I’m happy that Google has a cool new chip (and that they finally found an application they believe merits making a custom chip), I wish the tech press weren’t so gullible as to print any dumb thing a Google rep says.

Gah.

I’ll take one glimmer of satisfaction from this, though: someone found an important application that warrants a novel chip-design effort. Maybe there’s life in “Silicon” Valley yet.

2 thoughts on “next, they’ll discover fire”

  1. The phrase made me roll my eyes too, though I confess I’ve been guilty of playing fast and loose with “Moore’s Law” on occasion.

    I was interested that Google did this, though. There were rumors it was building its own CPUs a few years ago, which I treated skeptically since I wasn’t confident they’d do better than Intel or higher-end ARM designs even if they could optimize for their particular workloads. They have colossal infrastructure needs, and maybe they could get Facebook or Amazon or Microsoft or something to sign on, I suppose, but this special-purpose chip seems much more sensible. (Qualcomm and Nvidia are also building machine-learning-specific chips.)

    1. On further consideration, probably the only person who deserves “shade” for this is the comms droid that wrote it. I’m sure it wasn’t Pichai. I have some direct experience with those folks, and they would never let correctness get in the way of a sexy line.

      It’s probably a fact that, all else equal, there is no application for which a general-purpose CPU can beat out custom hardware on a performance or performance/watt basis. What is interesting about general-purpose CPUs is how successful they have been at overcoming this burden pretty much everywhere — so much so that it’s news when Google uses a custom ASIC to solve some problem.

      I actually hope that, as Moore’s Law sunsets, there may be a renaissance in silicon design work, as people start trying to wring more performance out of chips by moving away from general-purpose architectures.
