How to pay for the Internet, part 0xDEAF0001

Today’s Wall Street Journal had an article about Facebook, in which they promise to change the way they serve advertising in order to defeat ad blockers. This quote, from an FB spokesperson, was choice:

“Facebook is ad-supported. Ads are a part of the Facebook experience; they’re not a tack on”

I’ll admit, I use an ad blocker a lot of the time. It’s not that I’m totally anti-ads, but I am definitely against the utter trash: garbage, useless ads that suck up compute and network resources, cause the page to load much more slowly, and, often enough, include malware and tracking. The problem is most acute on mobile devices, where bandwidth, CPU power, and pixels are all in short supply, and yet it’s harder to block ads there. In fact, you really can’t do it without rooting your phone or doing all your browsing through a proxy.

The ad-supported Internet is just The Worst. I know, I know, I’ve had plenty of people explain to me that that ship has sailed, but I can still hate our ad-supported present and future.

  • Today’s ads suck, and they seem to be getting worse. Based on trends in per-ad revenue, it appears that most of the world agrees with this. Ads are less and less valuable.
  • Ads create perverse incentives for content creators. Their customer is the advertiser, and the reader is the product. In a pay-for-service model, you are the customer.
  • Ads are an attack vector for malware.
  • Ads use resources on your computer. Sure, they pay the content provider, but the CPU cycles on your computer are stolen.

I’m sure I could come up with 50 sucky things about Internet advertising, but I think it’s overdetermined. What is good about it is that it provides a way for content generators to make money, and so far, nothing else has worked.

The sad situation is that people do not want to pay for the Internet. We shell out $50 or more each month for access to the Internet, but nobody wants to pay for the Internet itself. Why not? The corrosive effect of an ad-driven Internet is so ubiquitous that people cannot even see it anymore. Because we don’t “pay” for anything on the Internet, everything loses its value. Journalism? Gone. Music? I have 30k songs (29.5k about which I do not care one whit) on my iThing.

Here is a prescription for a better Internet:

  1. Paywall every goddam thing
  2. Create non-profit syndicates that exist to attract member websites and collect subscription revenue on their behalf, distributing it according to clicks, or views, or whatever, at minimal cost.
  3. Kneecap all the rentier Internet businesses like Google and Facebook. They’re not very innovative and there is no justification for their outsized profits and “revenue requirements.” There is a solid case for economic regulation of Internet businesses with strong network effects. Do it.

I know this post is haphazard and touches on a bunch of unrelated ideas. If there is one idea I’d like to convey, it is: let’s get over our addiction to free stuff. It ain’t free.

The future of electrical engineering as a profession

The other day I was watching Dave Jones, a video blogger I find entertaining and informative. His blog, the EEVblog, is catnip for nerds who like to solder stuff and use oscilloscopes.

Recently he did a short segment where he answered a question from a student who was upset that his professor told him that EE was perhaps not a great field for job security, and Dave went on a colorful rant about how wrong the professor is.

The professor is right.

Electrical engineering employment is indeed in decline, at least in the USA and, I suspect, other developed countries. It’s not that EE skills are not helpful, or that understanding electronics, systems, signals, etc., is not useful. They are all useful and will continue to be. But I think more and more of the work, in particular the high-paying work, will migrate to software people who understand the hardware “well enough.” Which is fine. The fact is that EEs make good firmware engineers.

I think someone smart, with a solid EE background and a willingness to adapt throughout their entire career, should always find employment, but over time I suspect that employment will be less and less directly related to EE.

I mostly know Silicon Valley. Semiconductor employment is way down here. Mostly, that is through attrition, as people retire and move on, but nobody is hiring loads of young engineers to design chips anymore. It makes sense. Though chip volumes continue to grow, margins continue to shrink, and new chip design starts are way down, because “big” SoCs (systems on chip) with lots of peripherals can fill many niches that used to require custom or semi-custom parts.

I suspect that the need for EEs in circuit board design is also in decline. Not because there are fewer circuit boards, but because designing them is getting easier. One driver is the proliferation of very capable semiconductor parts with lots of cool peripherals, which obviates a lot of would-have-been design work. It’s gotten really easy to plop down a uC and hook up a few things over serial links and a few standard interfaces. In essence, a lot of board design work has been slurped into the chips, where one team designs it once rather than every board designer doing it again. There might be more boards being designed than ever, but the effort per board seems to be going down fast, and that’s actually not great for employment. Like you, I take apart a lot of stuff, and I’m blown away lately not by how complex many modern high-volume boards are, but by how dead simple they are.

The growth of the “maker” movement bears this out. Amateurs, many with little or no electronics knowledge, are designing circuit boards that do useful things, and they work. Are they making mistakes? Sure, they are. The boards are often not pretty, and violate rules and guidelines that any EE would know, but somehow they crank out working stuff anyway.

I do hold out some hope that as Moore’s law sunsets — and it really is sunsetting this time — there will be renewed interest in creative EE design, as natural evolution in performance and capacity won’t solve problems “automatically.” That will perhaps mean more novel architectures, use of FPGAs, close HW/SW codesign, etc.

Some statistics bear all this out. The US Bureau of Labor Statistics has this to say about the 2014-2024 job outlook for EEs:
http://www.bls.gov/ooh/architecture-and-engineering/electrical-and-electronics-engineers.htm#tab-6

Note that over a 10-year period they are predicting essentially no growth for EEs at all. None. Compare this to employment overall, for which they predict 7% growth.

One final note. People who love EE tend to think of EEs as the “model EE” — someone clever, curious, and energetic, who remains so for 40+ years. But let’s remind ourselves that 1/2 of EEs are below median. If you know the student in question, you can make an informed assessment about that person’s prospects, but when you are answering a generic question about prospects for generic EEs, I think the right picture to have in mind is that of the middling engineer, not a particularly good one.

I’m not saying at all that EE is a bad career, and for all I know the number of people getting EE degrees is going down faster than employment, so that the prospects for an EE graduate are actually quite good. But it is important for students to know the state of affairs.

technological progress, freedom to v. freedom from

Technology progresses. Most of the time, progress is good; sometimes it is bad; but at all times it creates new circumstances, and those circumstances have winners and losers. Our society is not good at recognizing when circumstances have changed. We tend to take, for a long time at least, the world-as-it-is as the world-as-it-ought-to-be.

But I see no reason it must be so. I wish we were better at evaluating our reality, deciding if we like it or want something else, and then, coming to consensus on what, if anything, should be done at a policy level to control our circumstances.

For example, remote-control airplanes have been around for quite some time. They were rather expensive toys, and not easy to fly. Similarly, aerial photography has existed almost since the dawn of flight. Because paying a pilot to fly over some location and photograph it is not cheap, it tends to be done only where the value of the resulting photograph is high enough to justify the expense.

For whatever reasons, we were pretty much OK with that status quo and the laws surrounding it. For example, yes, someone could photograph you through your window, and a passing plane could catch you sunning in your yard. People did not like those things, but it was hard enough to do, and easy enough to stop, that basically everyone but celebs and paparazzi seemed fine with the world as it was.

Enter inexpensive, simple aerial photography with UAVs. Today, anybody with a few hundred bucks can get aerial imagery, and in a few years that might be $10s or even $1s. Whole new possibilities for surveillance open up, and people are suddenly uncomfortable about their vulnerability.

Does this mean we need laws to stop aerial surveillance “abuse”? Or maybe we need to adjust our expectations of privacy? I dunno. We need to evaluate the situation anew, since technology has changed our circumstances. The fact that the existing laws were fine does not mean they are still fine.

I can think of lots of contemporary examples of this sort of change: facial recognition along with ubiquitous video cameras makes it possible to track everyone, everywhere they show their faces. License plate reader technology allows someone to track everywhere you go. You could do the same before, with detectives or private eyes, but now it can be done in bulk, cheaply. Cookies on websites allow someone to track most of what you look at on the Internet. In essence, people’s expectation of privacy was actually the complex combination of the state of technology and the law together, not either separate from the other.

None of these technologies is sinister in and of itself, but dropped into an environment that was in legal equilibrium without them, I think we should expect that equilibrium to shift.

Of course, there are historical examples of such adjustments. Prior to the ubiquity of the automobile, people did not need carriage licenses, nor did they need to carry liability insurance for carriage accidents. How long after cars became popular did we realize they were dangerous enough and important enough that we should require that drivers get training? I think most (though not all) people today regard drivers’ licenses as a good idea. A few decades after that, we started requiring drivers to carry liability insurance, and today most states have some requirement, though it is amazingly low in some places. (I know that agreement is hardly universal that liability requirements are a good idea, but we have them.)

One contemporary problem that is not typically considered in this light is gun violence. One might say that extremely capable weapons have been available for a long time, but that they have been expensive enough and just tricky enough to obtain, that we, as a society, were comfortable with the status quo. Collectors and sportspeople had them, and they used them safely, more or less. Enter cheap, easily available weapons, and all of a sudden the game has changed. In fact, today you can 3D print a gun at home, and maybe in a few years you’ll be able to 3D print most of a not-too-shabby automatic weapon. The technology is not going to go away, but because of the technology change, the status quo is going to shift. Can or should we try to shift it back?

My point is that there are many people who advocate a kind of technological determinism, suggesting, “well, tech marches on.” But history tells us that we clearly do not have to accept such outcomes if we don’t want them.

Freedom-loving readers will notice a whole lot of “we’s” in this essay. I’m afraid they’re right. I’m suggesting that the group sometimes makes decisions that restrict an individual’s freedoms. I know there is a cost to that. But I also see costs in letting individuals restrict the freedom (and well-being) of many other individuals.

As always, practicality and balance will be hard to achieve. We all seem fine with driver’s licensure, but pet grooming licenses seem perhaps too far. Required liability coverage for drivers is OK, but we probably would not tolerate such a requirement for many other potentially dangerous-to-others activities.

I hope we will have spirited, informed debates on issues like privacy and autonomy, and that the outcome, if not new norms and laws, is a new, explicit reiteration of existing norms that were previously implicit.

Clever, disturbing

Apple was recently granted a patent for technology that would disable your phone’s camera at concerts where photography is forbidden.

The patent uses an infrared signal, which could be picked up by the imaging sensor itself. This is rather ingenious and cunning, because you could not disable the shut-down sensor without disabling the camera itself, since they are one and the same.

Depending on how pervasive such tech becomes, and how closely the detection, decoding, and disabling are integrated into the actual silicon image sensor, it could become nearly impossible to defeat this tech, or to obtain a phone that doesn’t include it.

I find blocking cameras at concert venues mildly annoying, but the potential for abuse of this technology seems large. Will folks on the street use it to block being photographed? Will it be deployed in government buildings? Outside cop cars? Will the secret for how to disable everyone else’s phone get out?

Over the last few years we’ve seen some exciting benefits from ubiquitous deployment of cameras. People are getting caught doing things that are illegal or at least shameful. I’d be bummed to see some technology from Silicon Valley reverse this progress.

Detrumpify2 — some cleanup

Even though my short brush with Internet fame appears to be over (Detrumpify has about 920 users today, up only 30 from yesterday), pride required that I update the extension, because it was a bit too quick-n-dirty for my taste. Everything in it was hard-coded, which meant that every update I made to add new sites or insults would require users to approve an update. Hard-coding stuff in your programs is a big no-no, starting from CS 101 on.

So, I have a rewritten version available, and intrepid fans can help me out by testing it. You will not find it by searching the Chrome Web Store; instead, get it directly from here. It is substantially more complicated under the hood than before, so expect bugs. (Github here, in the “v2” folder.)

An important difference between this and the classic version is that there is an options page. It looks like this:

The main thing it lets you do is specify a URL from which a configuration file will periodically be downloaded. The config file contains the actual insults as well as some other parameters. I will host and maintain several configuration files at ToolsOfOurTools, but anyone who wants to make one (for example, to mock a different presidential candidate) will be able to do so and just point to it.

If you want to make changes locally, you can also load a file, click on the edit button, make changes, and then click on the lock button. From then on the extension will use your custom changes.

The format of the config file is simple. Here’s an example with most of the names removed:
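The example itself didn’t survive the trip into this archive, so here is a hypothetical config in the same shape. The keys match the explanation below, but every value — including the schema string and the run_info field names — is illustrative, not the real hosted config:

```json
{
    "schema": "detrumpify-config-1",
    "refresh_age": 86400000,
    "whitelist": ["www.example-news-site.com", "www.another-site.org"],
    "actions": {
        "trump": {
            "find_regex": "(Donald\\s+(J\\.\\s+)?)?Trump",
            "monikers": ["Illustrative Moniker A", "Illustrative Moniker B"],
            "randomize_mode": "hourly",
            "bracket": ["(", ")"],
            "run_info": { "delay_ms": 1000, "extra_runs": 4, "backoff": 1.8 }
        }
    }
}
```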

Explanation:

  • actions is a container that holds one or more named sets of search-and-replace instructions. This file just has one, for replacing Trump variations, but one can make files that replace many different things according to different rules.
  • find_regex inside the trump action finds a few variations of Trump: Donald Trump, Donald J. Trump, etc.
  • monikers lists the alternatives.
  • randomize_mode can be always, hourly, or daily, and tells how often the insult changes. In always, it will change with each appearance in the document.
  • refresh_age is how long to wait (in milliseconds) before hitting the server for an update.
  • run_info tells how long to wait before running the plugin and how many times to run. This is for sites that do not elaborate their content until after some javascript runs (i.e., every site these days, apparently). Here, it runs after 1000 ms, then runs four more times, each time waiting 1.8x as long as the last time.
  • bracket can be set to a two-element array of text to be placed before and after any trump replacement.
  • schema is required to ID the format of this file and should look just like that.
  • whitelist is a list of sites on which the extension is enabled. Et voila.
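To make the run_info backoff concrete, here is a sketch of how the schedule works out (my own illustration; the function name and style are not from the extension itself):

```javascript
// Compute the absolute fire times (in ms) for the re-run schedule:
// first run after delayMs, then extraRuns more runs, with the wait
// between runs multiplied by backoff each time.
function runSchedule(delayMs, extraRuns, backoff) {
  const times = [];
  let wait = delayMs;
  let at = 0;
  for (let i = 0; i <= extraRuns; i++) {
    at += wait;      // next run fires this long after t=0
    times.push(at);
    wait *= backoff; // stretch the next wait
  }
  return times;
}

// 1000 ms first, 4 extra runs, 1.8x backoff:
// fire times start at 1000, 2800, 6040, ...
console.log(runSchedule(1000, 4, 1.8));
```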

Let me know if you experience issues / bugs! The code that runs this is quite a bit more complex than the version you’re running now. In particular, I’m still struggling a bit with certain websites that turn on “content security policies” that get in the way of the config fetch. Sometimes it works, sometimes it doesn’t.

 

Simulate this, my dark overlords!

Apparently, both Elon Musk and Neil deGrasse Tyson believe that we are probably living in a more advanced civilization’s computer simulation.

Now, I’m no philosopher, so I can’t weigh in on whether I really exist, but it does occur to me that if this is a computer simulation, it sucks. First, we have cruelty, famine, war, natural disasters, disease. On top of that, we do not have flying cars, or flying people, or teleportation for that matter.

Seriously, whoever is running this advanced civilization simulation must be into some really dark shit.

Mental Models

I think we all make mental models constantly — simplifications of the world that help us understand it. And for services on the Internet, our mental models are probably very close — logically, if not in implementation — to the reality of what those services do. If not, how could we use them?

I also like to imagine how the service works, too. I don’t know why I do this, but it makes me feel better about the universe. For a lot of things, to a first approximation, the what and how are sufficiently close that they are essentially the same model. And sometimes a model of how it works eludes me entirely.

For example, my model of email is that an email address is the combination of a username and a system name. My mail server looks up the destination mail server and routes my blob of text over IP to the destination mail server, where that server routes it to the appropriate user’s “mailbox,” which is a file. Which is indeed how it works, more or less, with lots of elision of what I’m sure are important details.
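Just to caricature that model in code, here is a toy delivery function (pure simulation, with dictionaries standing in for DNS lookups and mailbox files; no real SMTP involved):

```python
# Toy model: an address is user@system; "routing" is a lookup of the
# destination system, and delivery is appending to the user's mailbox.
mail_servers = {
    "example.com": {},   # system name -> {username: [messages]}
    "elsewhere.org": {},
}

def deliver(address, message):
    user, system = address.split("@")
    server = mail_servers[system]                 # look up the destination mail server
    server.setdefault(user, []).append(message)   # append to the user's "mailbox"

deliver("alice@example.com", "hi alice")
print(mail_servers["example.com"]["alice"])  # ['hi alice']
```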

I’ve also begun sorting my mental models of Internet companies and services into a taxonomy that has subjective meaning for me, based on how meritorious and/or interesting they are. Here’s a rough draft:

The What | The How | Example | Dave’s judgment
obvious | as in real life | email | Very glad these exist, but nobody deserves a special pat on the back for them. I’ll add most matchmaking services, too.
obvious | non-obvious, but simple and/or elegant | Google Search (PageRank) | High regard. Basically, this sort of thing has been the backbone of Internet value to date.
not obvious / inscrutable | nobody cares | Google Buzz | Lack of popularity kills these. Not much to talk about.
obvious | obvious | Facebook | Society rewards these, but technically they are super-boring to me.
obvious | non-obvious and complex | natural language, machine translation, face recognition | Potentially very exciting, but not really pervasive or economically important just yet. Potentially creepy and may represent the end of humanity’s reign on earth.

Google search is famously straightforward. You’re searching for some “thing,” and Google is combing a large index for that “thing.” Back in the Altavista era, that “thing” was just keywords on a page. Google’s first innovation was to use a site’s own popularity (as measured by who links to it and the rankings of those linkers) to help sort the results. I wonder how many people had some kind of mental model of how Google worked that was different from that of Altavista — aside from the simple fact that it worked much “better.” The thing about Google’s “PageRank” was that it was quite simple, and quite brilliant, because, honestly, none of the rest of us thought of it. So kudos to them.
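To make the idea concrete, here is a toy PageRank-style power iteration (a textbook sketch over a made-up three-page link graph, not Google’s actual algorithm):

```python
# Each page's rank is fed by the ranks of the pages linking to it,
# split evenly across each linker's outbound links.
def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# "a" is linked to by both other pages, so it ends up ranked highest.
ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```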

There have been some Internet services I’ve tried over the years that I could not quite understand. I’m not talking about how they work under the hood, but how they appear to work from my perspective. Remember Google “Buzz?” I never quite understood what that was supposed to be doing.

Facebook, in its essence, is pretty simple, too, and I think we all formed something of a working mental model for what we think it does. Here’s mine, written up as SQL code. First, the system is composed of a few tables:
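The original listing did not survive, but it was in this spirit (a hypothetical minimal schema; all names are illustrative):

```sql
-- Users, friendships (who is friends with whom), and posts.
CREATE TABLE users (
    user_id INTEGER PRIMARY KEY,
    name    TEXT
);

CREATE TABLE friendships (
    user_id   INTEGER REFERENCES users(user_id),
    friend_id INTEGER REFERENCES users(user_id)
);

CREATE TABLE posts (
    post_id    INTEGER PRIMARY KEY,
    author_id  INTEGER REFERENCES users(user_id),
    body       TEXT,
    created_at TIMESTAMP
);
```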

A table of users, a table representing friendships, and a table of posts. The tables are populated by straightforward UI actions like “add friend” or “write post.”

Generating a user’s wall when they log in is as simple as:
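Again reconstructing the lost snippet, something like this hypothetical query:

```sql
-- The user's wall: recent posts authored by their friends.
SELECT p.body, p.created_at
FROM posts p
JOIN friendships f ON f.friend_id = p.author_id
WHERE f.user_id = :logged_in_user
ORDER BY p.created_at DESC
LIMIT 50;
```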

You could build an FB clone with that code alone. It is eye-rollingly boring and unclever.

Such an implementation would die when you got past a few thousand users or posts, but with a little work and modern databases that automatically shard and replicate, etc., you could probably handle a lot more. Helping FB is the fact that they make no promises about correctness: a post you make may or may not ever appear on your friend’s wall, etc.

I think the ridiculous simplicity of this is why I have never taken Facebook very seriously. Obviously it’s a gajillion dollar idea, but technically, there’s nothing remotely creative or interesting there. Getting it all to work for a billion users making a billion posts a day is, I’m sure, a huge technical challenge, but not one requiring inspiration. (As an aside, today’s FB wall is not so simple. It uses some algorithm to rank and highlight posts. What’s the algorithm, and why and when will my friends see my post? Who the hell knows?! Does this bother anybody else but me?)

The last category is things that are reasonably obviously useful to lots of people, but how they work is pretty opaque, even if you think about it for awhile. That is, things that we can form a mental model of what it is, but mere mortals do not understand how it works. Machine translation falls into that category, and maybe all the new machine learning and future AI apps do, too.

It’s perhaps “the” space to watch, but if you ask me the obvious what / simple how isn’t nearly exhausted yet — as long as you can come up with an interesting “why,” that is.

next, they’ll discover fire

OK, now I’m feeling ornery. Google just announced a new chip of theirs that is tailored for machine learning. It’s called the Tensor Processing Unit, and it is designed to speed up a software package called TensorFlow.

Okay, that’s pretty cool. But then Sundar Pichai has to go ahead and say:

This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law).

No, no, no, no, no.

First of all, Moore’s law is not about performance. It is a statement of transistor density scaling, and this chip isn’t going to move that needle at all — unless Google has invented its own semiconductor technology.

Second, people have been developing special-purpose chips that solve a problem way faster than a general-purpose microprocessor can since the beginning of chip-making. It used to be that pretty much anything computationally interesting could not be done in a processor. Graphics, audio, modems: you name it, it used to be done in dedicated hardware. Such chips are called application-specific integrated circuits (ASICs), and, in fact, the design and manufacture of ASICs is more or less what gave Silicon Valley its name.

So, though I’m happy that Google has a cool new chip (and that they finally found an application that they believe merits making a custom chip), I wish the tech press weren’t so gullible as to print any dumb thing that a Google rep says.

Gah.

I’ll take one glimmer of satisfaction from this, though. And that is that someone found an important application that warrants novel chip design effort. Maybe there’s life for “Silicon” Valley yet.

notes on self-driving cars

A relaxing trip to work (courtesy wikimedia)

Short post here. I notice people are writing about self-driving cars a lot. There is a lot of excitement out there about our driverless future.

I have a few thoughts, to expand on at a later day:

I.

Apparently a lot of economic work on driving suggests that a major externality of driving is congestion. Simply, your being on the road slows down other people’s trips and causes them to burn more gas. It’s an externality because it is a cost of driving that you cause but don’t pay.

Now, people are projecting that a future society of driverless cars will make driving cheaper by 1) eliminating drivers (duh) and 2) getting more utilization out of cars. That is, today our cars mostly sit in parking spaces, but in a driverless world people might not own cars so much anymore, renting them by the trip instead. Such cars would be much better utilized and, in theory, cheaper on a per-trip basis.

So, if I understand my micro econ at all, people will use cars more because they’ll be cheaper. All else equal, that should increase congestion, since in our model, congestion is an externality. Et voila, a bad outcome.
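A cartoon version of that argument, with made-up numbers (a toy linear demand curve, nothing more):

```python
# Made-up linear demand: trips taken per day as a function of
# per-trip cost. Cheaper trips -> more trips -> more congestion.
def trips_demanded(cost_per_trip, max_trips=100_000, sensitivity=5_000):
    return max(0, max_trips - sensitivity * cost_per_trip)

human_driven = trips_demanded(10.0)  # $10/trip -> 50,000 trips
driverless   = trips_demanded(6.0)   # $6/trip  -> 70,000 trips
```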

II.

But, you say, driverless cars will operate more efficiently and make more efficient use of the roadways, so they will generate less congestion than stupid, lazy, dangerous, unpredictable human drivers. This may be so, but I will caution with a couple of ideas. First, how much less congestion will a driverless trip cause than a human-operated one? 75% as much? Half? Is this enough to offset the effect mentioned above? Maybe.

But there is something else that concerns me: the difference between soft- and hard-limits.

Congestion, as we experience it today, seems to come on gradually as traffic approaches certain limits. You’ve got cars on the freeway; you add cars; things get slower. Eventually, things rather suddenly get a lot slower, but even then only at certain times of the day, in certain weather, etc.

Now enter driverless cars that utilize capacity much more effectively. Huzzah! More cars on the road getting where they want, faster. What worries me is that what is really happening is not that the limits are raised, but that we are operating the system much closer to the existing, real limits. Furthermore, now that automation is sucking out all the marrow from the road bone — the limits become hard walls, not gradual at all.

So, imagine traffic is flowing smoothly until a malfunction causes an accident, or a tire blows out, or there is a foreign object in the road — and suddenly the driverless cars sense the problem, resulting in a full-scale insta-jam, perhaps of epic proportions, in theory, locking up an entire city nearly instantaneously. Everyone is safely stopped, but stuck.

And even scarier than that is the notion that the programmers did not anticipate such a problem, and the car software is not smart enough to untangle it. Human drivers, for example, might, in an unusual situation, use shoulders or make illegal U-turns in order to extricate themselves from a serious problem. That’d be unacceptable in a normal situation, but perhaps the right move in an abnormal one. Have you ever had a cop at the scene of an accident wave at you to do something weird? I have.

Will self-driving cars be able to improvise? This is an AI problem well beyond that of “merely” driving.

III.

Speaking of capacity and efficiency, I’ll be very interested to see how we make trade-offs of these versus safety. I do not think technology will make these trade-offs go away at all. Moving faster, closer will still be more dangerous than going slowly far apart. And these are the essential ingredients in better road capacity utilization.

What will be different will be how and when such decisions are made. In humans, the decision is made implicitly by the driver moment by moment. It depends on training, disposition, weather, light, fatigue, even mood. You might start out a trip cautiously and drive more recklessly later, like when you’re trying to eat fast food in your car. The track record for humans is rather poor, so I suspect  that driverless cars will do much better overall.

But someone will still have to decide what is the right balance of safety and efficiency, and that decision might be taken out of the hands of passengers. This could go different ways. In a liability-driven culture we may end up with a system that is safer but less efficient than what we have now (call it “little old lady mode”), or we could end up with others’ decisions forcing us to take on more risk than we’d prefer if we want to use the road system.

IV.

I recently read in the June IEEE Spectrum (no link, print version only) that some people are suggesting that driverless cars will be a good justification for dismantling public transit. Wow, that is a bad idea of epic proportions. If, in the first half of the 21st century, the world not only continues to embrace car culture, but doubles down to the exclusion of other means of mobility, I’m going to be ill.

*   *   *

That was a bit more than I had intended to write. Anyway, one other thought is that driverless cars may be farther off than we thought. In a recent talk, Chris Urmson, the director of the Google car project, explains that the driverless cars of our imaginations — the fully autonomous, all-conditions, all-mission cars — may be 30 years off or more. What will come sooner is a succession of technologies that reduce driver workload.

So, I suspect we’ll have plenty of time to think about this. Moreover, the nearly 7% of our workforce that works in transportation will have some time to plan.

minor annoyances: debug-printing enums

This is going to be another programming post.

One thing that always annoys me when working on a project in a language like C++ is that when I’m debugging, I’d like to print messages with meaningful names for the enumerated types I’m using.

The classic way to do it is something like this:

Note that I have perhaps too-cleverly left out the break statements because each case returns.

But this has problems:

  • repetitive typing
  • maintenance. Whenever you change the enum, you have to remember to change the debug function.

It just feels super-clunky.

I made a little class in C++ that I like a bit better, because you only have to write the wrapper code once, even to use it on a bunch of different enums. Also, you can hide the code part in another file and never see or think about it again.

C++11 lets you initialize those maps pretty nicely, and they are static const, so you don’t have to worry about clobbering them or having multiple copies. But overall, it still blows because you have to type those identifiers no fewer than three times: once in the definition and twice in the printer thing.

Unsatisfactory.

I Googled a bit and learned that Boost provides some seriously abusive preprocessor macros, including ones that can loop. I don’t know what kind of dark preprocessor magic Boost uses, but it works. Here are the template, the macros, and an example of how to use them:

Now I only have to list out the enumerators one time! Not bad. However, it obviously only works if you control the enum. If you are importing someone else’s header with the definition, it still has the maintenance problem of the other solutions.

I understand that the C++ template language is Turing-complete, so I suspect this can be done entirely with templates and no macros, but I wouldn’t have the foggiest idea how to start. Perhaps one of you does?