What these folks do is legal. It’s called tax avoidance, and the more money you have, the harder you and your accountants will work, and the better at it you’ll be. There is an entire industry built around tax avoidance.
Though I want to disapprove of these people, it does occur to me that most of us do not willingly pay taxes that we are not required to pay. It’s not like I skip out on deducting my charitable giving or my mortgage interest, or using the deductions for my kids. I’m legally allowed those deductions and I use them.
So what is wrong with what Trump and Romney do?
One answer is “nothing.” I think that’s not quite the right answer, but it’s close. Yes, just because something is legal does not mean that it’s moral. But where do you draw the line here? Is it based on how clever your accountants had to be to work the system? Or how crazy the hoops you jumped through were to hide your money? I’m not comfortable with fuzzy definitions like that at all.
What is probably immoral is for a rich person to try to influence the tax system to give himself more favorable treatment. But then again, how do you draw a bright line? Rich people often want lower taxes and (presumably) accept that that buys less government stuff, and/or believe that they should not have to transfer their wealth to others. That might be a position that I don’t agree with, but the case for immorality there is a bit more complex, and reasonable people can debate it.
On the other hand, lobbying for a tax system with loopholes that benefit them, and creating a system of such complexity that only the wealthiest can navigate it, thus putting the tax burden onto other taxpayers, taxpayers with less money, is pretty obviously immoral. Well, if not immoral, definitely nasty.
I’ve been bouncing around just at the edge of my 2016 presidential campaign overload limit, and the other night’s debate and associated post-debate blogging sent me right through it.
Yes, I was thrilled to see my preferred candidate (Hermione) outperform the other candidate (Wormtail), but all the post-debate analysis and gloating made me weary.
Then, I thought about the important issues facing this country, the ones that keep me up at night worrying for my kids, the ones that were not discussed in the debate, or if they were, only through buzzwords and hand waves, and I got depressed. Because there is precious little in the campaign that addresses them. (To be fair, Clinton’s platform touches on most of these, and Trump’s covers a few, though neither as clearly or candidly as I’d like.)
So, without further ado, I present my list of campaign issues I’d like to see discussed, live, in a debate. If you are a debate moderator from an alternate universe who has moved through an interdimensional portal to our universe, consider using some of these questions:
1.
How do we deal with the employment effects of rapid technological change? File the effects of globalization under the same category, because technological change is a driver of that as well. I like technology and am excited about its many positive possibilities, but you don’t have to be a “Luddite” to say that it has already eliminated a lot of jobs and will eliminate many more. History has shown us that whenever technology takes away work, it eventually gives it back, but I have two issues with that. First, it is certainly possible that “this time it’s different.” Second, and more worrisome, history also shows that the time gap between killing jobs and creating new ones can be multigenerational. Furthermore, it’s not clear that the same people who had the old jobs will be able to perform the new ones, even if the new jobs were immediately available.
This is a setup for an extended period of immiseration for working people. And, by the way, don’t think you’ll be immune to this because you’re a professional or because you’re in STEM. Efficiency is coming to your workplace.
It’s a big deal.
I don’t have a fantastic solution to offer, but HRC’s platform, without framing the issue just as I have, does include the idea of major infrastructure reinvestment, which could cushion this effect.
Bonus: how important should work be? Should every able person have/need a job? Why or why not?
2.
Related to this is growing inequality. The technology is allowing fewer and fewer people to capture more and more surplus. Should we try to reverse that, and if so, how do we do so? Answering this means answering some very fundamental questions about what fairness is, questions that I don’t think have been seriously broached.
Sanders built his campaign on this, and Clinton’s platform talks about economic justice, but certainly does not frame it so starkly.
What has been discussed, at least in the nerd blogosphere, are the deleterious effects of inequality: its (probably) corrosive effect on democracy as well as its challenge to the core belief that everyone gets a chance in America.
Do we try to reverse this or not, and if so, how?
3.
Speaking of chances, our public education system has been an important, perhaps the important engine of upward mobility in the US. What are we going to do to strengthen our education system so that it continues to improve and serve everyone? This is an issue that spans preschool to university. Why are we systematically trying to defund, dismantle, weaken, and privatize these institutions? Related, how have our experiments in making education more efficient been working? What have we learned from them?
4.
Justice. Is our society just and fair? Are we measuring it? Are we progressing? Are we counting everyone? Are people getting a fair shake? Is everyone getting equal treatment under the law?
I’m absolutely talking about racial justice here, but also gender, sexual orientation, economic, environmental, you name it.
If you think the current situation is just, how do you explain recent shootings, etc? If you think it is not just, how do you see fixing it? Top-down or bottom-up? What would you say to a large or even majority constituency that is less (or more) concerned about these issues than you yourself are?
5.
Climate change. What can be done about it at this point, and what are we willing to do? Related, given that we are probably already seeing the effects of climate change, what can be done to help those adversely affected, and should we be doing anything to help them? Who are the beneficiaries of climate change or the processes that contribute to climate change, and should we transfer wealth from them to those harmed? Should the scope of these questions extend internationally?
6.
Rebuilding and protecting our physical infrastructure. I think both candidates actually agree on this, but I didn’t hear much about plans and scope. We have aging:
electric
rail
natural gas
telecom
roads and bridges
air traffic control
airports
water
ports
internet
What are we doing to modernize them, and how much will it cost? What are the costs of not doing it? What are the barriers getting in the way of major upgrades of these infrastructures, and what are we going to do to overcome them?
Also, which of these can be hardened and protected, and at what cost? Should we attempt to do so?
7.
Military power. What is it for, what are its limits? How will you decide when and how to deploy military power? Defending the US at home is pretty straightforward, but defending military interests abroad is a bit more complex.
Do the candidates have established doctrines that they intend to follow? What do they think is possible to accomplish with US military power and what is not? What will trigger US military engagement? Under what circumstances do we disengage from a conflict? What do you think of the US’s record in military adventures, and does that record tell you anything about the types of problems we should try to solve using the US military?
7-a. Bonus. What can we do to stop nuclear proliferation in DPRK? Compare and contrast Iran and DPRK and various containment strategies that might be deployed.
If you’ve spent any time in an energy economics class, you have probably seen a slide that shows the essential equivalency of a carbon tax and a cap-and-trade system, at least with respect to their ability to internalize externalities and fix a market failure. However, if you scratch the surface of the simple model used to claim this equivalency, you realize it only works if you have good knowledge of the supply and demand curves for carbon emissions. (There are other non-equivalencies, too, like where the incidence of the costs falls.)
The equivalency idea is that for a given market clearing of carbon emissions and price, you can either set the price and get the emissions you want, or set the emissions and you will get the price. As it turns out, nobody really has a good grip on the nature of those curves, and we live in a world of uncertainty anyway, so there actually is a rather important difference: which variable do we “fix” and which one do we let “float,” carrying all the uncertainty, the price or the quantity of carbon emissions?
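Here’s a toy R simulation of that asymmetry. The linear demand curve for emissions and all the numbers in it are purely illustrative assumptions, not estimates:

# Tax vs. cap under uncertainty: fix one variable and the other floats
set.seed(1)
n         = 10000
slope     = 2
intercept = rnorm(n, mean = 100, sd = 15)  # our uncertain knowledge of demand

tax = 40                                   # policy 1: fix the price
q.under.tax = (intercept - tax) / slope
sd(q.under.tax)                            # uncertainty lands on emissions (~7.5)

cap = (100 - tax) / slope                  # policy 2: fix the quantity instead
p.under.cap = intercept - slope * cap
sd(p.under.cap)                            # uncertainty lands on the price (~15)

Same expected outcome either way, but the tax pins down the price and lets emissions absorb the uncertainty, while the cap does the reverse.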
I bring this up because today I read a nice blog post by Severin Borenstein, which I will reduce to its essential conclusion: a carbon tax is much better than cap-and-trade. He brings up the point above, stating that businesses are just much better able to adapt when they know what the price is going to be, but there are other advantages to a tax.
First, administratively, it is much easier to set a tax than it is to legislate an active and vibrant market into existence. If you’ve lived in the world of public policy, I hope you know that Administration Matters.
Furthermore, legislatures are not fast, closed-loop control systems. They can’t adapt their rules on the fly as new information comes in, and sometimes political windows close entirely, making it impossible to make corrections. As a result, adjusting caps in a timely manner is, at best, difficult. This is a fundamentally harder problem than getting people to agree, a priori, on an acceptable price, one with more than a pinch of pain, but not enough to kill the patient.
So, how did we end up with cap-and-trade rather than a carbon tax? Well, certainly a big reason is the deathly allergy legislatures have to the word “tax.” Even worse: “new tax.” Perhaps that was the show-stopper right there. But it certainly did not help that we had economists (I suspect Severin was not among them) providing the conventional wisdom that a carbon tax and a cap-and-trade system are essentially interchangeable. That is only true if a wise, active, and responsive regulator, free to pursue an agreed objective, is at the controls. So pretty much never.
Short post here. I notice people are writing about self-driving cars a lot. There is a lot of excitement out there about our driverless future.
I have a few thoughts, to expand on at a later day:
I.
Apparently a lot of economic work on driving suggests that a major externality of driving comes from congestion. Simply, your being on the road slows down other people’s trips and causes them to burn more gas. It’s an externality because it is a cost of driving that you cause but don’t pay.
Now, people are projecting that a future society of driverless cars will make driving cheaper by 1) eliminating drivers (duh) and 2) getting more utilization out of cars. That is, mostly, our cars sit in parking spaces, but in a driverless world, people might not own cars so much anymore, but rent them by the trip. Such cars would be much better utilized and, in theory, cheaper on a per-trip basis.
So, if I understand my micro econ at all, people will use cars more because they’ll be cheaper. All else equal, that should increase congestion, since in our model, congestion is an externality. Et voila, a bad outcome.
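To put rough numbers on that intuition, here’s a back-of-the-envelope calculation in R. Both the elasticity and the cost reduction are assumptions I made up for illustration:

# Toy induced-demand calculation; all numbers are assumed, not estimated
elasticity  = -0.5                  # assumed price elasticity of trip demand
cost.change = -0.40                 # assume driverless trips are 40% cheaper
trip.change = (1 + cost.change)^elasticity - 1
round(100 * trip.change, 1)         # ~29% more trips competing for road space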
II.
But, you say, driverless cars will operate more efficiently, and make more efficient use of the roadways, and so they generate less congestion than stupid, lazy, dangerous, unpredictable human drivers. This may be so, but I will caution with a couple of ideas. First, how much less congestion will a driverless trip cause than a user-operated one? 75% as much? Half? Is this enough to offset the effect mentioned above? Maybe.
But there is something else that concerns me: the difference between soft- and hard-limits.
Congestion, as we experience it today, seems to come on gradually as traffic approaches certain limits. You’ve got cars on the freeway, you add cars, things get slower. Eventually, things somewhat suddenly get a lot slower, but even then only at certain times of the day, in certain weather, etc.
Now enter driverless cars that utilize capacity much more effectively. Huzzah! More cars on the road getting where they want, faster. What worries me is that what is really happening is not that the limits are raised, but that we are operating the system much closer to the existing, real limits. Furthermore, with automation sucking all the marrow from the road bone, the limits become hard walls, not gradual at all.
So, imagine traffic is flowing smoothly until a malfunction causes an accident, or a tire blows out, or there is a foreign object in the road — and suddenly the driverless cars sense the problem, resulting in a full-scale insta-jam, perhaps of epic proportions, in theory, locking up an entire city nearly instantaneously. Everyone is safely stopped, but stuck.
And even scarier than that is the notion that the programmers did not anticipate such a problem, and the car software is not smart enough to untangle it. Human drivers, for example, might, in an unusual situation, use shoulders or make illegal u-turns in order to extricate themselves from a serious problem. That’d be unacceptable in a normal situation, but perhaps the right move in an abnormal one. Have you ever had a cop at the scene of an accident wave at you to do something weird? I have.
Will self-driving cars be able to improvise? This is an AI problem well beyond that of “merely” driving.
III.
Speaking of capacity and efficiency, I’ll be very interested to see how we make trade-offs of these versus safety. I do not think technology will make these trade-offs go away at all. Moving faster, closer will still be more dangerous than going slowly far apart. And these are the essential ingredients in better road capacity utilization.
What will be different will be how and when such decisions are made. In humans, the decision is made implicitly by the driver moment by moment. It depends on training, disposition, weather, light, fatigue, even mood. You might start out a trip cautiously and drive more recklessly later, like when you’re trying to eat fast food in your car. The track record for humans is rather poor, so I suspect that driverless cars will do much better overall.
But someone will still have to decide what the right balance of safety and efficiency is, and that decision might be taken out of the hands of passengers. This could go different ways. In a liability-driven culture we may end up with a system that is safer but maybe less efficient than what we have now (call it “little old lady mode”), or we could end up with decisions by others forcing us to take on more risk than we’d prefer if we want to use the road system.
IV.
I recently read in the June IEEE Spectrum (no link, print version only) that some people are suggesting that driverless cars will be a good justification for the dismantlement of public transit. Wow, that is a bad idea of epic proportions. If, in the first half of the 21st century, the world not only continues to embrace car culture, but doubles down to the exclusion of other means of mobility, I’m going to be ill.
* * *
That was a bit more than I had intended to write. Anyway, one other thought is that driverless cars may be farther off than we thought. In a recent talk, Chris Urmson, the director of the Google car project, explains that the driverless cars of our imaginations — the fully autonomous, all-conditions, all-mission cars — may be 30 years off or more. What will come sooner is a succession of technologies that will reduce driver workload.
So, I suspect we’ll have plenty of time to think about this. Moreover, the nearly 7% of our workforce that works in transportation will have some time to plan.
It’s not a new debate, and though I will get into some specifics of the discussion below, what really resonated with me is how religious and ideological the belief is that corporations just do everything better. It’s not like the WSJ made any attempt whatsoever to list (and even dismiss) counter-arguments to ATC privatization. It’s almost as if the notion that there could be some justification for a publicly funded and run ATC has just never occurred to them.
What both pieces seemed to have in common is a definition of dysfunction that hews very close to “not the outcome that a market would have produced.” That is to say, they see the output of non-market (that is, political) processes as fundamentally inferior and inefficient, if not outright illegitimate. Of course, the outcomes from political processes can be inefficient and dysfunctional, but this is hardly a law of nature.
For my loyal reader (sadly, not a typo), none of this is news, but it still saddens me that so many potentially interesting problems (like how best to provision air traffic control services) break down on such tired ideological grounds: do you want to make policy based on one-interested-dollar per vote or one-interested-person per vote?
I want us to be much more agnostic and much more empirical in these kinds of debates. Sometimes markets get good/bad outcomes, sometimes politics does.
For example, you might never have noticed that you can’t fly Lufthansa or Ryanair from San Francisco to Chicago. That’s because there are “cabotage” laws in the US that bar foreign carriers from offering service between US cities. Those laws are blatantly anti-competitive and the flying public is definitely harmed by this. This is a political outcome I don’t particularly like due, in part, to Congress paying better attention to the airlines than to the passengers. Yet, I’m not quite ready to suggest that politics does not belong in aviation.
Or, in terms of energy regulation, it’s worth remembering that we brought politics into the equation a very long time ago because “the market” was generating pretty crappy outcomes, too. What I’m saying is that neither approach has exclusive rights to dysfunction.
OK. Let’s get back to ATC and the WSJ piece.
In it, the WSJ makes frequent reference to Canada’s ATC organization, NavCanada, which was privatized during a budget crunch a few years back and has performed well since then. This is in contrast to an FAA that has repeatedly “failed to modernize.”
But the US is not Canada, and our air traffic situation is very different. A lot of planes fly here! Anyone who has spent any serious time looking at our capacity problems knows that the major source of delay in the US is from insufficient runways and terminal airspace, not control capabilities per se. That is to say, modernizing the ATC system so that aircraft could fly more closely using GPS position information doesn’t really buy you all that much if the real crunch is access to the airport. If you are really interested, check out this comparison of the US and European ATC performance. The solution in the US is pouring more concrete in more places, not necessarily a revamped ATC. (It’s not that ATC equipment could not benefit from revamping, only that it is not the silver bullet promised.)
Here’s another interesting mental exercise: imagine you have developed new technology to improve the throughput of an ATC facility by 30% — but the hitch is that when you deploy the technology, there will be a diminution in performance during the switchover, as human learning, inevitable hiccups, and the need to temporarily run the old and new systems in parallel take their toll. Now imagine that you want to deploy that technology at a facility that is already operating near its theoretical maximum capability. See a problem there? It’s not an easy thing.
Another issue in the article regards something called ADS-B (Automatic Dependent Surveillance – Broadcast), a system by which aircraft broadcast their GPS-derived position. Sounds pretty good, and yet the US has taken a long time to get it going widely. (It’s not required on all aircraft until 2020.) Why? Well, one reason is that a lot of the potential cost savings from switching to ADS-B would come from the retirement of expensive, old primary radars that “paint” aircraft with radio waves and sense the reflected energy. Thing is, primary radars can see metal objects in the sky, while ADS-B receivers only see aircraft that are broadcasting their position. You may have heard how, in recent hijackings, transponders were disabled by the pilot — so, though the system is cool, it certainly cannot alone replace the existing surveillance systems. The benefits are not immediate and large, and it leaves some important problems unsolved. Add in the high cost of equipage, and it was an easy target to delay. But is that a sign of dysfunction or of good decision-making?
All of which is to say that I’m not sure a privately run organization, facing similar constraints, would make radically different decisions than has the FAA.
Funding the system is an interesting question, too. Yes, a private organization that can charge fees has a reliable revenue stream and is thus able to go to financial markets to borrow for investment. This is in contrast to the FAA, which has had a hard time funding major new projects because of constant congressional budget can-kicking. Right now the FAA is operating on an extension of its existing authorization (from 2012), and a second extension is pending, with a real reauthorization still behind that. OK, so score one for a private organization. (Unless we can make Congress function again, at least.)
But what happens to privatized ATC if there is a major slowdown in air travel? Do investments stop, or is service degraded due to cost cutting, or does the government end up lending a hand anyway? And how might an airline-fee-based ATC operate differently from one that ostensibly serves the public? Even giving privatization proponents the benefit of the doubt that a privatized ATC would be more efficient and better at cost saving, would such an organization be as good at spending more money when an opportunity comes along to make flying safer, faster, or more convenient for passengers? How about if the costs of such changes fall primarily on the airlines, through direct equipage costs and ATC fees? Or, imagine a scenario where most airlines fly large aircraft between major cities, and an upstart starts flying lots of small aircraft between small cities. Would a privatized ATC or a publicly funded ATC better resist the airlines’ anti-competitive pressures to erect barriers to newcomers?
I actually don’t know the answers. The economics of aviation are somewhat mysterious to me, as they probably are to you, unless you’re an economist or operations researcher. But I’m pretty sure that Scott McCartney of the WSJ knows even less.
Last week, I came across this interesting piece on the perils of using “big data” to draw conclusions about the world. It analyzes, among other things, the situation of Google Flu Trends, the much heralded public health surveillance system that turned out to be mostly a predictor of winter (and has since been withdrawn).
It seems to me that big data is a fun place to explore for patterns, and that’s all good, clean fun — but it is the moment when you think you have discovered something new that the actual work really starts. I think “data scientists” are probably on top of this problem, but are most of the people going on about big data actually data scientists?
I really do not have all that much to add to the article, but I will amateurishly opine a bit about statistical inferencing generally:
1.
I’ve taken several statistics courses over my life (high school, undergrad, grad). In each one, I thought I had a solid grasp of the material (and got an “A”), until I took the next one, where I realized that my previous understanding was embarrassingly incorrect. I see no particular reason to think this pattern would ever stop if I took ever more stats classes. The point is, stats is hard. Big data does not make stats easier.
2.
If you throw a bunch of variables at a model, it will find some that look like good predictors. This is true even if the variables are totally and utterly random and unrelated to the dependent variable (see try-it-at-home experiment below). Poking around in big data, unfortunately, only encourages people to do this and perhaps draw conclusions when they should not. So, if you are going to use big data, do have a plan in advance. Know what effect size would be “interesting” and disregard things well under that threshold, even if they appear to be “statistically significant.” Determine in advance how much power (and thus, observations) you should have to make your case, and sample from your ginormous set to a more appropriate size.
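For example, R’s built-in power.t.test will tell you how big a sample you actually need for a given smallest-interesting effect. The 0.1-standard-deviation effect size here is an arbitrary assumption, just to show the mechanics:

# How many observations to detect a "small but interesting" effect?
power.t.test(delta = 0.1,       # smallest effect we care about, in SD units
             sd = 1,
             sig.level = 0.05,
             power = 0.8)
# => roughly 1,571 observations per group; sampling far beyond that from a
#    ginormous data set mostly makes trivial effects "significant"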
3.
Big data sets seem like they were mostly created for other purposes than statistical inferencing. That makes them a form of convenience data. They might be big, but are the variables present really what you’re after? And was this data collected scientifically, in a manner designed to minimize bias? I’m told that collecting a quality data set takes effort (and money). If that’s so, it seems likely that the quality of your average big data set is low.
A lot of big data comes from the log files of web services. That’s a lame place to learn about anything other than how the people who use those web services think while using them. It tells you little about people who don’t use those services, or even about how the people who do use them think while they’re doing something other than using that web service. Just sayin’.
Well, anyway, I’m perhaps out of my depth here, but I’ll leave you with this quick experiment, in R:
rows = 10000
vars = 200
# 200 columns of pure, uniform noise to serve as "predictors"
x = data.frame(replicate(vars, runif(rows, 0, 1)))
# a dependent variable that is also pure noise
y = runif(rows, 0, 1)
# regress noise on noise; printing the summary shows the significance stars
a.mod = lm(y ~ ., x)
summary(a.mod)
It generates 10,000 observations of 201 variables, each generated from a uniform random distribution on [0,1]. Then it runs an OLS model using one variable as the dependent and the remaining 200 as independents. R is even nice enough to put friendly little asterisks next to variables that have p < 0.05.
When I run it, I get 10 variables that appear to be better than “statistically significant at the 5% level” — even though the data is nothing but pure noise. This is about what one should expect: 5% of 200 noise variables is 10 false positives.
Of course, the R² of the resulting model is ridiculously low (that is, the 200 variables together have low explanatory power). Moreover, the effect size of each variable is small. All as it should be — but you do have to know to look. And in a more subtle case, you can imagine what happens if you build a model with a bunch of variables that do have explanatory power, and a bunch more that are crap. Then you will see a nice R² overall, but some of your crap variables will still pop up.
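If you’d rather count the false positives than eyeball the asterisks, everything is sitting in the model summary (this assumes the a.mod object from the snippet above):

# Count coefficients that look "significant" in pure noise
p.vals = summary(a.mod)$coefficients[-1, "Pr(>|t|)"]  # drop the intercept
sum(p.vals < 0.05)            # around 10, i.e., 5% of 200, by chance alone
summary(a.mod)$r.squared      # tiny: the noise "explains" almost nothing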
Some clever economists have come up with a name for the religious application of simple economic principles to complex situations where they probably don’t apply: Econ-101ism.
See, folks at Berkeley touted the $15/hr minimum wage as a good thing, and then UC laid off a bunch of people. Coincidence? The good people at Irritable Bowel Disease think not!
Except few at UC get paid the minimum wage. And the $15/hr minimum wage has not taken effect and won’t fully take effect for years. And the reason for the job cuts is the highly strained budget situation at the UCs, a problem that is hardly new.
You could make an argument that a $15/hr minimum will strain the economy, resulting in lower tax revenue, resulting in less state money, resulting in layoffs at the UCs. I guess. Quite a lot of moving parts in that story, though.
Smells like bullshit.
Edit: UCB does have its own minimum wage, higher than the California minimum. It has been $14/hr since 10/2015 and will be $15/hr starting in 2017. (http://www.mercurynews.com/business/ci_28522491/uc-system-will-raise-minimum-wage-15-an)
Another edit: Chancellor Dirks claims the 500 job cuts would save $50M/yr. That implies an average hourly cost of $50/hr. Even if half of that goes to overhead and benefits, those would be $25/hr jobs, not near the minimum. In reality, the jobs probably had a range of salaries, and one can imagine some were near the $15 mark, but it is not possible that all or even most of them were.
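The arithmetic behind that implied hourly figure, as a quick R sanity check (the 2,000 paid hours per work year is my assumption):

# Implied average loaded cost per eliminated job
savings = 50e6          # claimed annual savings, $
jobs    = 500
hours   = 2000          # paid hours per full-time year, roughly
savings / jobs / hours  # = $50/hr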
I’ve known for some time that the semiconductor (computer chip) business has not been the most exciting place, but it still surprised me and bummed me out to see that Intel was laying off 11% of its workforce. There are lots of theories about what is happening in non-cloudy-software-appy tech, but I think fundamentally, the money is being drained out of “physical” tech businesses. The volumes are there, of course. Every gadget has a processor — but that processor doesn’t command as much margin as it once did.
A lot of people suggest that the decline in semiconductors is a result of coming to the end of the Moore’s Law epoch. The processors aren’t getting better as fast as they used to, and some argue (incorrectly) that they are hardly getting better at all. This explains the decline, because without anything new and compelling on the horizon, people do not upgrade.
But in order for that theory to work, you also have to assume that the demand for computation has leveled off. This, I think, is almost as monumental a shift as Moore’s Law ending. Where are the demanding new applications? In the past we always seemed to want more computer (better graphics, snappier performance, etc) and now we somewhat suddenly don’t. It’s like computers became amply adequate right about the same time that they stopped getting much better.
Does anybody else find that a little puzzling? Is it coincidence? Did one cause the other, and if so, which way does the causality go?
The situation reminds me a bit of “peak oil,” the early-2000s fear that global oil production would peak and there would be massive economic collapse as a result. Well, we did hit a peak in oil production in the 2008-9 time-frame, but it wasn’t from scarcity; it was from low demand in a faltering economy. Since then, production has been climbing again. But with the possibility of electrified transportation tantalizingly close, we may see true peak oil in the years ahead, driven by diminished demand rather than diminished supply.
I am not saying that we have reached “peak computer.” That would be absurd. We are all using more CPU instructions than ever, be it on our phones, in the cloud, or in our soon-to-be-internet-of-thinged everything. But the ever-present pent up demand for more and better CPU performance from a single device seems to be behind us. Which, for just about anyone alive but the littlest infants, is new and weird.
If someone invents a new activity to do on a computing device that is both popular and ludicrously difficult, that might put us back into the old regime. And given that Moore’s Law is sort of over, that could make for exciting times in Silicon Valley (or maybe Zhongguancun), as future performance will require the application of sweat and creativity, rather than just riding a constant wave. (NB: there was a lot of sweat and creativity required to keep the wave going.)
I’ve been thinking a lot lately about success and innovation. Perhaps it’s because of my lack of success and innovation.
Anyway, I’ve been wondering how the arrow of causality goes with those things. Are companies successful because they are innovative, or are they innovative because they are successful?
This is not exactly a chicken-and-egg question. Google is successful and innovative. It’s pretty obvious that innovation came first. But after a few “game periods,” the situation becomes more murky. Today, Google can take risks and reach further forward into the technology pipeline for ideas than a not-yet-successful entrepreneur could. In fact, a whole lot of their innovation seems not to affect their bottom line much, in part because it’s very hard to grow a new business at the scale of their existing cash cows. This explains (along with impatience and the opportunity to invest in their high-returning existing businesses) Google’s penchant for drowning many projects in the bathtub.
I can think of other companies that have behaved somewhat similarly over history. AT&T Bell Labs and IBM TJ Watson come to mind as places that were well funded due to their parent companies’ enormous success (success derived, at least in part, from monopoly or other market power). And those places innovated. A lot. As in Nobel Prizes, patents galore, etc. But, despite the productive output of those labs, I don’t think they ever contributed very much to the companies’ success. I mean, the transistor! The solar cell! But AT&T didn’t pursue those businesses, because it already had a huge working business that didn’t have much to do with them. Am I wrong about that assessment? I hope someone more knowledgeable will correct me.
Anyway, that brings me back to the titans of today, Google, Facebook, etc. And I’ll continue to wonder out loud:
are they innovating?
is the innovation similar to their predecessors?
are they benefiting from their innovation?
if not, who does, and why do they do it?
So, this gets back to my postulate, which is that, much more often than not, success drives innovation, and not the reverse. That it ever happens the other way is rare and special.
Perhaps a secondary postulate is that large, successful companies do innovate, but they have weak incentives to act aggressively on those innovations, and so their creative output goes underutilized longer than it might if it had been in the hands of a less successful organization.
A couple of years ago Google announced an electrical engineering contest with a $1M prize. The goal was to build the most compact DC-to-AC power inverter that could meet certain requirements, namely 2kVA power output at 240 Vac, 60Hz, from a 450V DC source with a 10Ω impedance. The inverter had to withstand certain ambient conditions, meet reliability requirements, and meet FCC interference requirements.
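As an aside, that 10Ω source impedance is not a throwaway detail. A quick calculation, with an inverter efficiency I’m assuming just for illustration, shows how much the input sags under full load:

# Effect of the 10-ohm source impedance at full output (efficiency assumed)
v.src = 450     # DC source voltage, V
r.src = 10      # source impedance, ohms
p.out = 2000    # required output power, W
eff   = 0.95    # assumed inverter efficiency
p.in  = p.out / eff
# solve p.in = (v.src - r.src * i) * i for the input current i
i.in  = (v.src - sqrt(v.src^2 - 4 * r.src * p.in)) / (2 * r.src)
i.in                    # ~5.3 A drawn from the source
v.src - r.src * i.in    # ~397 V actually seen at the inverter input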
Fast forward a few years, and the results are in. Several finalists met the design criteria, and the grand prize winner exceeded the power density requirement by more than 3x!
First, congrats to the “Red Electrical Devils”! I wish I were smart enough to have been able to participate, but my knowledge of power electronics is pretty hands-off, unless you are impressed by using TRIACs to control holiday lighting. Here’s the IEEE on what they thought it would take to win.
Aside from general gEEkiness, two things interested me about this contest. First, from an econ perspective, contests are just a fascinating way to spur R&D. Would you be able to get entrants, given the cost of participation and the low likelihood of winning the grand prize? Answer: yes. This seems to be a reliable outcome if the goal is interesting enough to the right body of would-be participants.
The second thing that I found fascinating was the goal: power density. I think most people understand the goal of efficiency, but is it important that power inverters be small? The PV inverter on the side of your house, also probably around 2kW, is maybe 20x as big as these. Is that bad? How much is it worth to shrink such an inverter? (Now, it is true that if you want to achieve power density, you must push on efficiency quite a bit, as every watt of energy lost to heat needs to be dissipated somehow, and that gets harder and harder as the device gets smaller. But in this case, though the efficiencies achieved were excellent, they were not cutting edge, and the teams instead pursued extremely clever cooling approaches.)
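To get a rough feel for why density forces the cooling problem, here’s a sketch with numbers partly from memory (I believe the contest floor was 50 W per cubic inch) and an assumed efficiency:

# Waste heat per unit volume at roughly the winning density (illustrative)
p.out       = 2000                      # W, contest output power
eff         = 0.95                      # assumed efficiency
heat        = p.out * (1 - eff) / eff   # ~105 W of waste heat
min.density = 50                        # W per cubic inch, contest floor (from memory)
vol.winner  = p.out / (3 * min.density) # ~13 cubic inches at 3x the floor
heat / vol.winner                       # ~8 W of heat to shed per cubic inch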
I wonder what target market Google has in mind for these high-power-density inverters. Cars, perhaps? There, density matters more than it does for a fixed PV inverter, but still seemingly not to this extreme. And specific (per-kilogram) density, rather than volumetric, seems like it would be the more important measure there. Maybe Google never had a target in mind. For sure, there was no big reveal with the winner announcement. Maybe Google just thought that this goal was the most likely to generate innovation in this space overall, without a particular end use in mind at all — it’s certainly true that power electronics are a huge enabling piece of our renewable energy future, and perhaps the field is not getting the share of attention it deserves.
I’m not the first, though, to wonder what this contest was “really about.” I did not have to scroll far down the comments to see one from Slobodan Ćuk, a rather famous power electronics researcher and inventor of the Ćuk converter.
Anyway, an interesting mini-mystery, but a cool achievement regardless.