On the morality of tax avoidance

People were pretty unhappy when Donald Trump claimed that not paying taxes “makes him smart.” Similarly, nobody was impressed when Mitt Romney claimed that he “paid all the taxes I am required to, not a dollar more.”

What these folks do is legal. It’s called tax avoidance, and the more money you have, the harder you and your accountants will work, and the better at it you’ll be. There is an entire industry built around tax avoidance.

From www.ccPixs.com

Though I want to disapprove of these people, it does occur to me that most of us do not willingly pay taxes that we are not required to pay. It’s not like I skip out on deducting my charitable giving or my mortgage interest, or using the deductions for my kids. I’m legally allowed those deductions and I use them.

So what is wrong with what Trump and Romney do?

One answer is “nothing.” I think that’s not quite the right answer, but it’s close. Yes, just because something is legal does not mean that it’s moral. But where do you draw the line here? Is it based on how clever your accountants had to be to work the system? Or how crazy the hoops you jumped through were to hide your money? I’m not comfortable with fuzzy definitions like that at all.

What is probably immoral is for a rich person to try to influence the tax system to give himself more favorable treatment. But then again, how do you draw a bright line? Rich people often want lower taxes and (presumably) accept that lower taxes buy less government stuff, and/or believe that they should not have to transfer their wealth to others. That might be a position I don’t agree with, but the case for immorality there is a bit more complex, and reasonable people can debate it.

On the other hand, lobbying for a tax system with loopholes that benefit them, and for a system so complex that only the wealthiest can navigate it, thereby shifting the tax burden onto taxpayers with less money, is pretty obviously immoral. Well, if not immoral, definitely nasty.


Discouraging coding

I know I’ve written about this before, but I need to rant about how tech companies pay lip service to encouraging young people to “code” but then throw up barriers to end users (i.e., regular people, not developers) writing code for their own use.

The example that’s been bugging me lately is Google Chrome, which asks you, every single time it’s started, if you want to disable “developer mode” extensions, with disable as the default, natch.

You see, you can’t run a Chrome extension unless you are in “developer mode” to start with. Then you can write some code, load it into Chrome, and you’re off to the races. This is good for, you know, developing, but also nice for people who just want to write their own extension, for their own use, and that will be the end of it.

Except they will be nagged perpetually for trying to do so. The solution is to upload your extension to the Chrome Web Store, where it can be validated by Google according to a secret formula of tests, and given a seal of approval (maybe).

But you don’t want to upload your extension to the Chrome Web Store? Well, too fscking bad, kid! Maybe you should stick to Scratch if you don’t want to run with the big boys.

It’s not just Google. If you want to run an extension on Firefox, you have to upload it to Mozilla, too — but at least if you just want to use it yourself, you can skip the human validation step. (NB: If you do want to share the extension, you will be dropped into a queue where a human being will — eventually — look at your extension. I tried upgrading Detrumpify on Firefox last week and I’m still waiting for approval.)

And don’t even get me started on Apple, where you need to shell out $99 to do any kind of development at all.

I don’t know how this works on phone apps, but I suspect it’s as complicated.

I get it: there are bad guys out there and we need to be protected from them. And these systems are maybe unavoidably complex. But, damn, I don’t hear anybody saying out loud that we really are losing something as we move to “app culture.” The home DIY hacker is being squeezed.


Trump: the disease

After last night’s embarrassing Clinton vs. Trump matchup, I’m once again feeling glum and confused. It caused me to reflect on a dichotomy that I was exposed to in high school: that of “great man” vs. circumstance. I think I believe mostly in circumstance, and maybe even a stronger version of that theory than is commonly proposed.

In my theory, Trump is not an agent with free will, but more akin to a virus: a blob of RNA with a protein coat, evolved to do one thing, without any sense of what it is doing. He is a speck floating in the universe, mechanically fulfilling its destiny. A simulation running in an orrery of sufficient complexity could predict his coming.

This is his story:

Somewhere, through a combination of natural selection and genetic mutation, a strange child is born into a perfectly suited environment, with ample resources and protection for his growth into a successful, powerful monster. Had he been born in another place or time, he might have been abandoned on an ice floe when his nature was discovered, or perhaps killed in early combat with another sociopath. But he prospered. With a certain combination of brashness and utter disregard for anything like humility, substance, or character, it was natural that he would be put on magazine covers, and eventually, television, where, because of television’s intrinsic nature, itself the product of a long, peculiar evolution, he killed, growing yet more powerful.

Later, perhaps prompted by something he saw on a billboard or perhaps due to a random cosmic ray triggering a particular neuron to fire, our virus started talking about politics. By chance, his “ideas” plugged into certain receptors, present in the most ancient, reptilian parts of our brains. Furthermore, society’s immune system, weakened through repeated recent attacks from similar viruses, was wholly unprepared for this potent new disease vector. Our virus, true to form, exploited in-built weaknesses to direct the media and make it work for its own benefit, potentially instructing the media to destroy itself and maybe taking the entire host — our world — in the process.

In the end, what will be left? The corpse of a functioning society, teeming with millions of new viruses, ready to infect any remnants or new seedlings of a vital society.

And the universe will keep turning, indifferent.

The end. 

Your search for “history” did not return any results.

I often think about how to preserve data. This is mostly driven by my photography habit. My pictures are not fantastic, but they mean a lot to me, and I suspect, but am by no means certain, that they will mean something to my children and grandchildren. I certainly would love to know what the lives of my own grandparents were like, to see them in stages of life parallel to my own. But I don’t know how to make sure my kids and their kids will be able to see these photos.

A box of old pictures

This is a super difficult problem. The physical media that the images are stored on (hard drives, flash cards, etc.) degrade and will fail over time, and even if they don’t, the equipment to read that media will become scarce. Furthermore, the format of the data may become undecipherable over time as well. I have high confidence that it will be possible to read JPEGs in the year 2056, but when you get into some more esoteric formats, I dunno.

A commonly proffered solution is to upload your data to a cloud service for backup. I have strong reservations about this as a method for long-term preservation. Those cloud backups are only good as long as the businesses that run them have some reason to keep running them. Subscriptions, user accounts, and advertising-driven revenue seem a poor match for permanent archival storage of anything. Who, long after I’m dead, is going to receive the email that says “your account will be closed if you do not update your credit card in 30 days”? Also, what good is a backup of data I can no longer view on my now-current quantum holographic AI companion?

All of this compares quite unfavorably with a common archival technique used for informal, family information: the shoe box. Photographs stored in a shoe box are susceptible to destruction by fire or flood, but they are fantastically resilient to general benign neglect over exceedingly long periods of time. Sure, the colors will fade if the box is left in a barn for 50 years, but, upon discovery, anyone can recognize the images using the mark-I human eyeball. (Furthermore, it’s really astounding how easy it is to use a computer to restore natural color to faded images.)

There is simply no analog to the shoe box full of negatives in today’s world. Sure, you can throw some flash memory cards into such a box, but you still have the readout problems mentioned above.

As people migrate from their first digital camera to their last digital camera to iPhoneN to iPhoneN+1, lots of images have already been lost. Because of the very short history of digital photography, you can’t even blame that loss on technological change. It’s more about plain old poor stewardship. But just to amplify my point above: the shoe box is quite tolerant of poor stewardship.

*   *   *

Okay, so, this post was not even going to be about the archival problems of families. That is, in aggregate, a large potential loss, made up of hundreds of millions of comparatively smaller losses.

The reason I decided to write today was because I saw this blog post about this article, which described how the online archives of a major metropolitan newspaper, going back more than 200 years, are at risk of disappearing from the digital universe.

Here we have a situation in which institutions that are committed to preserving history, with (shrinking) staffs of professional librarians and archivists, are failing to preserve history for future generations. In this case, the microfiche archives of the print version of the paper are safe, but the digitally accessible versions are not. The reason: you can’t just put them in a shoe box (or digital library). Someone must host them, and that someone needs to get paid. Forever.

Going forward, more and more of our history is going to happen only in the digital world. Facebook, Twitter, Hillary Clinton’s (or any other politician’s) email. There’s not going to be a microfilm version at the local university library. Who is going to store it? Who will access it, and how?

A few years ago, it looked like companies like Google were going to solve this problem for us, pro bono. They were ready, willing, and seemingly able to host all the data and make it available. But now things are getting in the way. Copyright is one. The demand from investors to monetize is another. It used to be thought that you could not monetize yesterday’s paper (today’s paper is tomorrow’s fish-wrap), but wilier content owners have realized that if they don’t know the value of an asset, they shouldn’t give it away for free. Even Google, which, I think, is still committed to this sort of project, albeit with its hands somewhat tied, probably cannot be trusted with the permanent storage of our collective history. Will they be around in 50, 100 years? Will they migrate all their data forever? Will they get bought and sold a dozen times, to owners who are not as committed to their original mission to “organize the world’s information and make it universally accessible and useful”? Will the actual owners of the information that Google is trying to index try to monetize it into perpetuity?

I think we know the answers. Right now, it all looks pretty grim to me.




How to pay for the Internet, part 0xDEAF0001

Today’s Wall Street Journal had an article about Facebook, in which they promise to change the way they serve advertising in order to defeat ad blockers. This quote, from an FB spokesperson, was choice:

“Facebook is ad-supported. Ads are a part of the Facebook experience; they’re not a tack on”

I’ll admit, I use an ad blocker a lot of the time. It’s not that I’m totally anti-ads, but I am definitely against the utter trash: garbage, useless ads that suck up compute and network resources, cause the page to load much more slowly, and, often enough, include malware and tracking. The problem is most acute on mobile devices, where bandwidth, CPU power, and pixels are all in short supply, and yet it’s harder to block ads there. In fact, you really can’t do it without rooting your phone or doing all your browsing through a proxy.

The ad-supported Internet is just The Worst. I know, I know, I’ve had plenty of people explain to me that that ship has sailed, but I can still hate our ad-supported present and future.

  • Today’s ads suck, and they seem to be getting worse. Based on trends in per-ad revenue, it appears that most of the world agrees with this: ads are less and less valuable.
  • Ads create perverse incentives for content creators. Their customer is the advertising client, and the reader is the product. In a pay-for-service model, you are the customer.
  • Ads are an attack vector for malware.
  • Ads use resources on your computer. Sure, they pay the content provider, but the CPU cycles on your computer are stolen.

I’m sure I could come up with 50 sucky things about Internet advertising, but I think it’s overdetermined. What is good about it is that it provides a way for content generators to make money, and so far, nothing else has worked.

The sad situation is that people do not want to pay for the Internet. We shell out $50 or more each month for access to the Internet, but nobody wants to pay for the Internet itself. Why not? The corrosive effect of an ad-driven Internet is so ubiquitous that people cannot even see it anymore. Because we don’t “pay” for anything on the Internet, everything loses its value. Journalism? Gone. Music? I have 30k songs (29.5k about which I do not care one whit) on my iThing.

Here is a prescription for a better Internet:

  1. Paywall every goddam thing.
  2. Create non-profit syndicates that exist to attract member websites and collect subscription revenue on their behalf, distributing it according to clicks, or views, or whatever, at minimal cost.
  3. Kneecap all the rentier Internet businesses like Google and Facebook. They’re not very innovative and there is no justification for their outsized profits and “revenue requirements.” There is a solid case for economic regulation of Internet businesses with strong network effects. Do it.
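The distribution arithmetic in step 2 is deliberately trivial, which is part of the point: a syndicate like this needs almost no machinery. A sketch of the pro-rata split, with made-up member sites and view counts:

```python
# Pro-rata split of pooled subscription revenue among member sites.
# Site names and view counts below are invented for illustration.
def distribute(pool, views):
    """Split `pool` dollars among sites in proportion to their view counts."""
    total = sum(views.values())
    return {site: pool * v / total for site, v in views.items()}

payouts = distribute(100.0, {"site_a": 700, "site_b": 200, "site_c": 100})
# payouts == {"site_a": 70.0, "site_b": 20.0, "site_c": 10.0}
```

Swap in clicks, minutes read, or whatever metric the members agree on; the mechanism doesn’t change.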

I know this post is haphazard and touches on a bunch of unrelated ideas. If there is one idea I’d like to convey, it is this: let’s get over our addiction to free stuff. It ain’t free.



The future of electrical engineering as a profession

The other day I was watching Dave Jones, a video blogger that I find entertaining and informative. His blog, the EEVblog, is catnip for nerds who like to solder stuff and use oscilloscopes.

Recently he did a short segment where he answered a question from a student who was upset that his teacher told him that EE was perhaps not a great field for job security, and he sort of went on a colorful rant about how wrong the professor is.

The professor is right.

Electrical engineering employment is indeed in decline, at least in the USA and, I suspect, other developed countries. It’s not that EE skills are not helpful, or that understanding electronics, systems, signals, etc., is not useful. They are all useful and will continue to be. But I think more and more of the work, in particular the high-paying work, will migrate to software people who understand the hardware “well enough.” Which is fine. The fact is that EEs make good firmware engineers.

I think someone smart, with a solid EE background and a willingness to adapt throughout their entire career, should always find employment, but over time I suspect it will be less and less directly related to EE.

I mostly know Silicon Valley. Semiconductor employment is way down here. Mostly, it is through attrition, as people retire and move on, but nobody is hiring loads of young engineers to design chips anymore. It makes sense. Though chip volumes continue to grow, margins continue to shrink, and new chip design starts are way down, because “big” SOCs (systems on chip) with lots of peripherals can fill many niches that used to require custom or semi-custom parts.

I suspect that the need for EEs in circuit board design is also in decline. Not because there are fewer circuit boards, but because designing them is getting easier. The proliferation of very capable semiconductor parts with lots of cool peripherals is obviating a lot of would-have-been design work. It’s gotten really easy to plop down a uC and hook up a few things over serial links and a few standard interfaces. In essence, a lot of board design work has been slurped into the chips, where one team designs it once rather than every board designer doing it again. There might be more boards being designed than ever, but the effort per board seems to be going down fast, and that’s actually not great for employment. Like you, I take apart a lot of stuff, and I’m blown away lately not by how complex many modern high-volume boards are, but by how dead simple they are.

The growth of the “maker” movement bears this out. Amateurs, many with little or no electronics knowledge, are designing circuit boards that do useful things, and they work. Are they making mistakes? Sure, they are. The boards are often not pretty, and violate rules and guidelines that any EE would know, but somehow they crank out working stuff anyway.

I do hold out some hope that as Moore’s law sunsets — and it really is sunsetting this time — there will be renewed interest in creative EE design, as natural evolution in performance and capacity won’t solve problems “automatically.” That will perhaps mean more novel architectures, use of FPGAs, close HW/SW codesign, etc.

Some statistics bear all this out. The US Bureau of Labor Statistics has this to say about the 2014-2024 job outlook for EEs:

Note that over a 10-year period they are predicting essentially no growth for EEs at all. None. Compare this to employment overall, for which they predict 7% growth.

One final note. People who love EE tend to think of EEs as the “model EE”: someone clever, curious, and energetic, who remains so for 40+ years. But let’s remind ourselves that 1/2 of EEs are below the median. If you know the student in question, you can make an informed assessment of that person’s prospects, but when you are answering a generic question about prospects for generic EEs, I think the right picture to have in mind is that of the middling engineer, not a particularly good one.

I’m not saying at all that EE is a bad career, and for all I know the number of people getting EE degrees is going down faster than employment, so that the prospects for an EE graduate are actually quite good. But it is important for students to know the state of affairs.

Narrow fact-checking is less than useless

Last night, Donald Trump gave a speech that included a bunch of statements about crime that the New York Times fact-checked for us. This summarizes what they found:

Many of Mr. Trump’s facts appear to be true, though the Republican presidential nominee sometimes failed to offer the entire story, or provide all of the context that might help to explain his numbers.

Putting aside the ridiculously low bar of “many facts appear to be true”, they failed to mention or explain that in every case, despite his factoids being narrowly true, the conclusions he was drawing from them, and suggesting we draw from them, were absolutely, incontrovertibly false.

This kind of reporting drives me bonkers. Crime stats, like all stats, are noisy, and from one year to another, in a specific city, you can find an increase or decrease — whatever you are looking for. But the overall trends are clear, and Trump’s assessment was utter bullshit.

Another, somewhat less savory, media outlet did a much better job, because they took the 10 minutes of Googling necessary to assemble some charts and put Trump’s facts in context.

Would it have been partisan for the NYT to put Trump’s facts into context with respect to the conclusions he was drawing from them? It just seems like journalism.


Gah… Apple

I use a Mac at work. It’s a fine machine and I like the screen and battery life, but I’m not generally a fan of Apple the company or its products. Sometimes I forget why, and I need to be reminded.

Like today, when I decided that, even though Safari is basically a sucky product, there are probably people who use it, so I might just port my little political-statement Chrome extension to Safari. I’d already done so for Firefox, so how hard could it be?

Well, it turns out, not too hard. Actually, for the minimalist version that most people are using, it required no code changes at all. It did take me a while to figure out how everything works in the Apple extension tool, but overall, not too bad.

I knew I would have to submit to reviewers at Apple to get it published. I had to do the same at Mozilla for Firefox. But what I did not know is that in order to do that, I had to sign up to be an Apple Developer. Moreover, I could only do so under my real name (i.e., not dave@toolsofourtools.org), and, most annoying, they wanted $99. A year. Or for as long as the extension is up.

I’m not going to pay $99/yr to provide a free plugin for the few people who are dumb enough to use Safari on a regular basis.

In an odd way, this gets right to the heart of one of the many reasons I do not like Apple. They are constitutionally opposed to my favorite aspect of computing and the Internet: the highly empowering ability for people to scrappily do, say, or make anything they want, for next to nothing, at whatever level of sophistication they want to deal with. Apple doesn’t like scrappy things in its world, and actively weeds them out.

Apple, you suck. Thanks for the reminder never to spend my own money on your polished crap.

Simulate this, my dark overlords!

Apparently, both Elon Musk and Neil deGrasse Tyson believe that we are probably living in a more advanced civilization’s computer simulation.

Now, I’m no philosopher, so I can’t weigh in on whether I really exist, but it does occur to me that if this is a computer simulation, it sucks. First, we have cruelty, famine, war, natural disasters, disease. On top of that, we do not have flying cars, or flying people, or teleportation for that matter.

Seriously, whoever is running this advanced civilization simulation must be into some really dark shit.

Mental Models

I think we all make mental models constantly — simplifications of the world that help us understand it. And for services on the Internet, our mental models are probably very close — logically, if not in implementation — to the reality of what those services do. If not, how could we use them?

I also like to imagine how the service works, too. I don’t know why I do this, but it makes me feel better about the universe. For a lot of things, to a first approximation, the what and how are sufficiently close that they are essentially the same model. And sometimes a model of how it works eludes me entirely.

For example, my model of email is that an email address is the combination of a username and a system name. My mail server looks up the destination mail server and routes my blob of text, over IP, to that server, which routes it to the appropriate user’s “mailbox,” which is a file. Which is indeed how it works, more or less, with lots of elision of what I’m sure are important details.
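That model is simple enough to run. Here is a toy sketch in Python; the server table and list mailboxes are stand-ins for real DNS lookups and mail spools, not how any actual mail server is implemented:

```python
# Toy model of email delivery: an address is username@system, the "DNS
# lookup" is a dict, and a mailbox is a list standing in for a file.
SERVERS = {
    "example.com": {"alice": []},          # system name -> users -> mailbox
    "toolsofourtools.org": {"dave": []},
}

def deliver(address, message):
    """Split the address into username and system, then route the text."""
    username, system = address.split("@")
    SERVERS[system][username].append(message)

deliver("dave@toolsofourtools.org", "hello")
# SERVERS["toolsofourtools.org"]["dave"] is now ["hello"]
```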

I’ve also begun sorting my mental models of Internet companies and services into a taxonomy that has subjective meaning for me, based on how meritorious and/or interesting they are. Here’s a rough draft:

  • The what: obvious. The how: as in real life. Example: email. Dave’s judgment: Very glad these exist, but nobody deserves a special pat on the back for them. I’ll add most matchmaking services, too.
  • The what: obvious. The how: non-obvious, but simple and/or elegant. Example: Google Search (PageRank). Dave’s judgment: High regard. Basically, this sort of thing has been the backbone of Internet value to date.
  • The what: not obvious / inscrutable. The how: nobody cares. Example: Google Buzz. Dave’s judgment: Lack of popularity kills these. Not much to talk about.
  • The what: obvious. The how: obvious. Example: Facebook. Dave’s judgment: Society rewards these, but technically they are super boring to me.
  • The what: obvious. The how: non-obvious and complex. Examples: natural language, machine translation, face recognition. Dave’s judgment: Potentially very exciting, but not really very pervasive or economically important just yet. Potentially creepy, and may represent the end of humanity’s reign on earth.


Google search is famously straightforward. You’re searching for some “thing,” and Google is combing a large index for that “thing.” Back in the Altavista era, that “thing” was just keywords on a page. Google’s first innovation was to use a site’s own popularity (as measured by who links to it and the rankings of those links) to help sort the results. I wonder how many people had a mental model of how Google worked that was different from that of Altavista — aside from the simple fact that it worked much “better.” The thing about Google’s PageRank was that it was quite simple, and quite brilliant, because, honestly, none of the rest of us thought of it. So kudos to them.
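For what it’s worth, the core of that idea is small enough to sketch. This is the textbook power-iteration form of PageRank over a made-up three-page link graph; the damping factor of 0.85 is the usual textbook value, not a claim about Google’s production system:

```python
# Minimal PageRank by power iteration over a toy link graph.
links = {              # page -> pages it links to (invented for illustration)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
d = 0.85               # damping factor: chance a surfer follows a link
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}   # start with uniform rank

for _ in range(50):    # iterate until the ranks settle
    new = {p: (1 - d) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = d * rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += share   # a page passes rank to pages it links to
    rank = new

# "c" is linked from both "a" and "b", so it ends up ranked highest.
```

The brilliance isn’t in the arithmetic; it’s in the observation that links are votes.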

There have been some Internet services I’ve tried over the years that I could not quite understand. I’m not talking about how they work under the hood, but how they appear to work from my perspective. Remember Google “Buzz?” I never quite understood what that was supposed to be doing.

Facebook, in its essence, is pretty simple, too, and I think we all formed something of a working mental model of what it does. Here’s mine, written up as SQL code. First, the system is composed of a few tables:

A table of users, a table representing friendships, and a table of posts. The tables are populated by straightforward UI actions like “add friend” or “write post.”

Generating a user’s wall when they log in is as simple as selecting every post written by anyone the user is friends with, ordered by date. You could build an FB clone with that query alone. It is eye-rollingly boring and unclever.
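Here is that mental model rendered in runnable form, with SQLite standing in for whatever Facebook actually runs, and with table and column names of my own invention:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- A table of users, a table of friendships, and a table of posts.
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE friendships (user_id INTEGER, friend_id INTEGER);
    CREATE TABLE posts (user_id INTEGER, posted_at TEXT, body TEXT);
""")

# Populated by straightforward UI actions: "add friend," "write post."
db.executemany("INSERT INTO users VALUES (?, ?)", [(1, "me"), (2, "pal")])
db.execute("INSERT INTO friendships VALUES (1, 2)")
db.execute("INSERT INTO posts VALUES (2, '2016-09-27', 'hello world')")

# My wall: every post by anyone I am friends with, newest first.
wall = db.execute("""
    SELECT u.name, p.posted_at, p.body
    FROM friendships f
    JOIN posts p ON p.user_id = f.friend_id
    JOIN users u ON u.id = f.friend_id
    WHERE f.user_id = 1
    ORDER BY p.posted_at DESC
""").fetchall()
```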

Such an implementation would die once you got past a few thousand users or posts, but with a little work and modern databases that automatically shard and replicate, etc., you could probably handle a lot more. Helping FB is the fact that it makes no promises about correctness: a post you make may or may not ever appear on your friend’s wall, etc.

I think the ridiculous simplicity of this is why I have never taken Facebook very seriously. Obviously it’s a gajillion-dollar idea, but technically, there’s nothing remotely creative or interesting there. Getting it all to work for a billion users making a billion posts a day is, I’m sure, a huge technical challenge, but not one requiring inspiration. (As an aside, today’s FB wall is not so simple. It uses some algorithm to rank and highlight posts. What’s the algorithm, and why and when will my friends see my post? Who the hell knows?! Does this bother anybody else but me?)

The last category is things that are obviously useful to lots of people, but whose workings are pretty opaque, even if you think about them for a while. That is, things we can form a mental model of what they are, but mere mortals do not understand how they work. Machine translation falls into that category, and maybe all the new machine learning and future AI apps do, too.

It’s perhaps “the” space to watch, but if you ask me, the obvious-what / simple-how quadrant isn’t nearly exhausted yet — as long as you can come up with an interesting “why,” that is.