Last year I designed and built a fun little electronic trinket that I shared with friends and family. A lot of people told me that I should try selling it. I didn’t think I could make a lot of money this way, but I thought it might be fun and a good learning experience to dip my toe into entrepreneurship, so I decided, sure, why not?
The first batch of trinkets I had assembled by hand, but this batch I would pay to have assembled. This meant a fair bit of work: finding an assembly house, cleaning up the design, refining the bill of materials, and interacting repeatedly with the assembler to answer their questions and make requested changes to the design.
Fast forward a few months and a couple of thousand dollars, and I was the proud owner of a couple of boxes of shiny trinkets. Well … almost. I needed to do some final assembly, program the microcontrollers, and test the devices. I also needed to bag them and add labels that point to my website. Oh, did I mention? I needed to build a website, too.
In total, I probably spent dozens of late-night hours on this, but it was kinda, sorta entertaining. Finally, I was ready to send these pups over to Amazon and let the magic happen, right?
Are we having fun yet?
Well, no. My plan all along was to send a big box of these to Amazon and sell them there, with them taking their cut and doing all the shipping.
So I logged into Amazon “Seller Central” and started creating my account. They asked for so much personal information that I repeatedly had to stop myself and check the site certificate to see if I was being phished. They wanted:
copy of driver’s license or passport
bank account number
copy of bank or credit card statements
Nervously, I uploaded all that junk, and waited for the nice login prompt telling me that I could start to create my listings.
Instead, I got a short message saying that my documents were not in order, and my identity could not be validated. Please go back and fix it. Actually, I’ll reproduce the contents of this message here because it is totally generic:
Thinking that was odd, I logged in again, and uploaded different scans of the same documents.
Rejected. Same message.
A different bank statement?
Passport instead of driver’s license? Credit card statement?
How about photos instead of scans? Scans instead of photos? With flash, without? Color, black and white, color photos of black and white documents?
Rejected, rejected, rejected.
At this point, I have wasted three days and literally provided them with my credit card statement, bank statement, California driver’s license, and US passport. All rejected with the same terse response.
What’s funny is that there is literally nothing left for me to do. There is only one me. I have one name, one SSN, one address. I have already given them all the documents I have. I mean, I literally cannot establish my identity better than I have already done.
Amazon provides absolutely no option for redress. No email. No phone number. No help chat. Nothing.
Wait, you say: why not go on the support forums and complain? Because the support forums are only accessible by people who have passed this step.
So, I guess I won’t be selling on Amazon.
The whole episode has been infuriating, to say the least, and I am just astounded by a system that provides absolutely no opportunity for escalation. (I’ve been told that my best hope at this point is to get people upset on social media.)
I’ve been an Amazon customer for almost as long as they’ve been in business, and I’ve easily spent tens of thousands of dollars there over the years. It had always been a great company to work with — as a customer. But as a partner, or in this case potential partner, I’ve never witnessed such abject contempt for the user.
I think it’s telling when some companies treat vendors badly. It’s sort of the corporate version of “kissing up and punching down.” Of course, Wal-Mart is famous for it. Apple has also been rather conspicuously developer-hostile (and it’s why there are no Safari extensions from ToolsOfOurTools). I did not know about Amazon. Now I do.
And I have to say, the bad taste left in my mouth after this episode will cause me to rethink doing any business at Amazon.
Plenty of people have written about the fact that in a world of companies selling IOT hardware, there is little or no incentive for them to maintain the software running on that hardware. Those people are right. But not only is there little incentive, keeping an IOT device current is actually fiendishly difficult — as I was reminded this past weekend.
I have an IOT alarm clock I built myself, back in 2011. It was based around a Raspberry Pi Model 1B, running Raspbian Wheezy. The software I wrote to implement the clock is simple, consisting of three major components:
An interface to the Google Calendar API, so it knows when I want to get up
An interface to an LCD Display so I can see the time and see when it plans to wake me next.
An interface to GPIO to drive a solenoid, which rings a physical chime. I wasn’t going for a wimpy electronic beeping; I wanted some Zen-level physical dinging.
Now, when I created this clock about seven years ago, my go-to language for this sort of thing was Perl. You can quibble with that choice, but Perl was my Swiss army knife at the time, and it also solved certain problems that other languages didn’t. For one, Perl has a fantastic no-nonsense library for wrapping C code: Inline. You can basically “inline” C functions right in your Perl, and it “just works.” This was really important for talking to the chip GPIO for the dinger and for the LCD, which were supported only in C — at the time.
One drawback of using Perl is that Google has never supported it for accessing their APIs. That is, Perl can generate an HTTP transaction just as well as the next language, but Google also provides nice wrapper code for a list of languages which they’ve made it pretty clear will never, ever include Perl. But someone else had written a similar wrapper for Perl, so I grabbed that and got things up and running. Over the years, that has turned out to be a pain, as Google has revamped their Calendar API twice in that time, and my clock just broke each time. Fixing it was a pain, but I did it just to keep the project running.
Let’s get current!
So, on Friday, after thinking about all the exploits floating around and the fact that I was running a full-fledged OS on a clock on my home network, I decided I should really update all the software on the clock. Raspbian had moved from Debian 7 (Wheezy) to Debian 8 (Jessie) to Debian 9 (Stretch) in the intervening years, so the first step was to update the OS. Twice.
This went poorly. The update process was excruciatingly slow on this single-core processor, taking hours, and occasionally stopping entirely to ask me a question (“you want to overwrite this file?”). I managed to get the first update done, but the second update died entirely when the SD card holding everything filled up after the installer decided it needed to create a huge swapfile.
So I got a new SD card and installed Stretch on that cleanly. It was also pretty quick, and if you do a network install, you won’t need to do any package updates immediately after. (Microsoft could learn a lesson from that.) After the OS came up, I copied over my software and tried to get it running. No dice.
You won’t be surprised to hear that some things had changed:
The Perl libraries for Google had changed quite a bit over the years, so installing the new ones generated a bunch of errors. The main pain was that some of these libraries can be found in the Raspbian package manager, and some need to be installed from cpan. I prefer OS repository packages when available because they update along with the OS. Everything I install from cpan is just a snapshot that may need to be reinstalled after the next OS update, and worse, experience shows that the installation process can sometimes go from simple to epic if some underlying untracked dependency changes. But when you install from cpan, it installs dependencies from cpan, even if those dependencies can be found in the OS repos. This basically sucks.
Anyway, the changes in the Perl libraries were mostly for the better, to make the Perl API better map to the way Google worked, but still, it required digging into my old Perl code and looking at the Google docs.
The LCD interface is in two parts: a C-based daemon from a package called lcdproc, and my client code in Perl that talks to the daemon. For the new OS I needed to rebuild that daemon from source. Luckily, lcdproc had not advanced in 7 years, so I could just rebuild the old code. This was particularly lucky because I had made a big patch to the hardware driver to talk to my particular i2c expander that drove the LCD controller. I’m glad I did not have to figure out how to apply that patch to some completely new, changed version.
Raspbian Stretch switched from System V init to systemd, so my startup stuff, which was init-based, needed to be changed to systemd unit files. This was not too painful, and I actually like systemd for daemons, but it took a little while to create the files, set permissions, fix my mistakes, yadda.
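For anyone facing the same conversion: a minimal unit file for a daemon like this is mostly boilerplate. The service name and paths below are invented for illustration, not my actual files:

```ini
# /etc/systemd/system/chimeclock.service  (hypothetical name and paths)
[Unit]
Description=Alarm clock daemon
After=network-online.target

[Service]
ExecStart=/usr/local/bin/clock.pl
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now chimeclock.service` registers and starts it, and `journalctl -u chimeclock` picks up anything the daemon writes to stdout, which replaces whatever logging hack the old init script used.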
Overall, this whole project was not really that complicated in retrospect, but having taken more or less an entire weekend day, it sure felt like a never-ending series of missteps and annoyances.
Getting Really Current
I should probably rewrite the clock in Python.
Python now has a mature library for talking to Raspberry Pi GPIO. It’s clean and simple.
Python has always had better Google integration, courtesy of Google. It would be a pleasure to switch to this.
I had already written Python bindings to talk to the LCD daemon. I don’t remember doing that, but apparently this is not the first time I’ve considered doing a Python rewrite.
But there are two roadblocks. First, technically: being a clock, this code is time-sensitive, and so the Perl version has multiple threads. There is basically a thread that ticks every second and various worker threads. The modern Pythonic way to accomplish the same thing (without threads — which Python has never done well and never will) is to use asyncio. Not to get too deep into the details, but I have some issues with asyncio. It’s complicated, and it requires an all-or-nothing approach. Your whole program, even the non-async parts, needs to be asyncio-ified, because they will otherwise block the parts that are.
Second, I just don’t want to. Writing code that does the same thing as other code is no fun.
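To make the asyncio point above concrete, here is a toy sketch (names invented, not the clock’s actual code) of a per-second ticker coroutine running alongside a worker coroutine. Any blocking call in either one would freeze the other, which is the all-or-nothing problem:

```python
import asyncio

async def ticker(state):
    # stand-in for the once-per-second clock thread
    while state["running"]:
        state["seconds"] += 1
        await asyncio.sleep(0.01)  # would be 1.0 in a real clock

async def calendar_poll(state):
    # stand-in for a worker thread, e.g. polling Google Calendar;
    # if this did blocking I/O instead of awaiting, the ticker would stall
    await asyncio.sleep(0.05)
    state["next_alarm"] = "07:00"
    state["running"] = False  # end the demo

async def main():
    state = {"seconds": 0, "running": True, "next_alarm": None}
    await asyncio.gather(ticker(state), calendar_poll(state))
    return state

state = asyncio.run(main())
print(state["next_alarm"])  # 07:00
```

In the threaded Perl version, each of these could block independently; in asyncio, one forgetful synchronous call anywhere stops the whole event loop.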
Anyway, today my alarm clock works exactly as it did in 2011, but it is running on a current version of Perl with current libraries on a current OS. It only took me the better part of my weekend. 🙁
Who’s going to do this for the IOT power outlet or window shade controller you bought at Fry’s?
Are you familiar with Bitcoin and other crypto-currencies? These are currencies not backed by any government, which can be traded anonymously and somewhat frictionlessly. They are gaining traction among people who want to make illegal transactions, who want to avoid taxes, and who just want freedom. And now, increasingly, they are being used not as a currency for trade, but as an investment. As a result, people are working hard to make more Bitcoin through a complex mathematical operation called mining. Some organizations have set up huge computer farms employing custom hardware to do nothing more than mine bitcoin.
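Mining, at its core, is a brute-force search for a number whose hash meets a difficulty target. Here is the idea in miniature (real Bitcoin uses double SHA-256 over block headers and an astronomically higher difficulty):

```python
import hashlib

def mine(data: str, difficulty: int = 2) -> int:
    """Find a nonce such that sha256(data + nonce) starts with
    `difficulty` zero hex digits. This is proof-of-work in miniature."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine("hello")
print(hashlib.sha256(f"hello{nonce}".encode()).hexdigest()[:2])  # 00
```

The work is trivially parallel and pays in lottery tickets, which is why it scales from a browser tab all the way up to those custom hardware farms.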
Anyway, after some time, I calmed down and realized that this isn’t so bad, and it could even be … good?
You see, one cannot look at something we don’t like in a vacuum. It must be compared to real alternatives. We hear over and over from the advertising industry that websites need to make money. (not this one — ed.) That’s what pays for the content, the computers, and the personnel. Ads make the “free” Internet as we know it possible.
But ads suck. They are ugly and intrusive. They involve a third party — the advertiser — in every page I visit. There’s me, there’s the website, and then there’s this guy over in the corner who wants to sell me viagra. Because the money is coming from the advertiser, he gets a say in the content of the site. Furthermore, he gets to know that I visited the site, and can start to collect all kinds of information on my browsing history, eventually creating a dossier on my personal habits that he will use to target me for the rest of my life. And finally, he gets to suck up my CPU cycles and my limited screen real estate in order to serve me his ads. It’s maddening!
I don’t like it, have never liked it, and would much prefer a subscription supported Internet. But that’s never going to happen, so I’m told.
So how is letting people mine bitcoin better?
no screen real-estate
no data collection
no third party
Sure, they’re sucking up my CPU and battery just as the advertisers do, but probably no worse, and perhaps that’s a fair price to pay.
Electricity started out with small local generators, even “home” generators, then increasingly centralized for a long time, and today, there is a big push for “distributed” generation, which is basically decentralized power generation, but maintaining a connection to the power grid.
Computing started out small on home computers and has become increasingly centralized in big data centers. Will the next step reverse that pattern?
The business model for smart devices in the home is shaping up to be simple and bad: cheap hardware and no service contracts. That sounds great for consumers — after all, why should I pay $100 for a smart power outlet made of a $0.40 microcontroller and a $1 relay, and why should I have to pay a monthly fee to switch it — but it is going to have serious negative ramifications.
Let me start by saying that many bits have already been spilled about basic IoT security:
making sure that messages sent to and from your device back to the manufacturer cannot be faked or intercepted
making sure that your IoT device is not hacked remotely, turning it into someone else’s IoT device
making sure that your data, when it is at rest in the vendor’s systems is not stolen and misused
As things stand, none of that is going to happen satisfactorily, primarily because of incompatible incentives. When you sell a device for the raw cost of its hardware, with minimal markups and no opportunity for ongoing revenue, you also have no incentive for ongoing security work. Or any kind of work for that matter. If you bought the device on the “razor + blade” model, where the device was cheap, but important revenue was based on your continued use of the product, things might be different.
Worse than that, however: in order to find new revenue streams (immediate, or potential future ones), vendors have strong incentives to collect all the data they can from the device. You do not know — even when the devices are operating as designed — exactly what they are doing. They are in essence little listening bugs willingly planted all over your home, and you do not know what kind of information they are exfiltrating, nor do you know who is ultimately receiving that information.
I think there is a solution to this problem, if people want it, and it requires two basic parts to work properly:
We need a business model for smart devices that puts strong incentives in place for vendors to continue to support their products. This will never happen with the cheapie Fry’s Electronics special IoT Doohickey of the Week. Instead, we probably need a real engagement with sticks (liability) and carrots (enhanced revenue) that are driven by ongoing contractual engagement. That is, money should continue to flow.
We need a standardized protocol for IoT that provides for a gateway at the home, and encrypted data on both sides of the gateway, but with the gateway owner having access to the encryption keys on the inner side of the gateway. The standardized protocol would have fields for the vendor name and hosts, as well as a human readable json-style payload — and a rule that nothing can be double-encrypted in the payload, keeping it from the eyes of the user.
Under such an arrangement, users, or their gateways acting as proxies for them, could monitor what is coming and going. You could program your gateway, for example, to block unnecessary information from http messages sent by your device.
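As a sketch of what such a gateway-side filter could look like — the field names and whitelist policy here are made up for illustration, since no such standard exists yet:

```python
import json

# Fields the gateway owner has chosen to let through; everything else is stripped.
ALLOWED_FIELDS = {"vendor", "device_id", "state"}

def filter_payload(raw: str) -> str:
    """Drop any top-level fields the owner has not whitelisted before
    the gateway forwards the message to the vendor's servers."""
    payload = json.loads(raw)
    return json.dumps({k: v for k, v in payload.items() if k in ALLOWED_FIELDS})

msg = '{"vendor": "acme", "device_id": "42", "state": "on", "mic_audio": "..."}'
print(filter_payload(msg))
```

Because the protocol would forbid double-encryption inside the payload, a filter this simple is actually enforceable: anything the gateway can’t read, it can refuse to forward.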
Of course, the vendors, seeing the blocked information might decide not to provide their service, and that’s their right, but at least everyone would know the score.
Will this happen? Well, I think vendors with the long view of things would probably see #1 as appealing. Users will not, perhaps. But that is because users are not fully aware of the consequences of inviting someone else to monitor their activities. Perhaps people will think differently after a few sensational misuses of their data.
Vendors will fight #2 mightily. Of course, they could ignore it completely, with the risk that the many users who insist on it become excluded from their total available market. With a critical mass of people using gateways that implement #2, I think we could tip things, but right now it seems a long shot.
I am quite pessimistic about all this. I don’t think we’ll see #1 or #2 unless something spectacularly bad happens first.
For the record, I do use a few IoT devices in my home. There are two flavors: those I built myself and those I bought. The self-built ones exist entirely within my network and do not interact with any external server. I obviously know what they do. Those I bought exist on a DMZ-style network with no access to my home network at all (at least if my router is working as intended). This mitigates the worry of pwned devices accessing my computers and files, but does not stop them from sending whatever they collect back to the mothership.
STEM, as most of us know, is Science Technology Engineering and Mathematics. Pundits the world over like to remind us how important it is that we graduate as many STEM folks as possible. (That notion is wrong, by the way. We should encourage people who like STEM to pursue STEM.)
STS is less commonly known. That’s “Science and Technology in Society,” and the name describes well enough. STS people study science itself: its processes, people, culture, and outcomes.
I believe I am one of a relatively small cohort of people who are both STEM-y and somewhat STS-y. The former I get from my engineering degree and my natural proclivity to figure out how things work and to make my own working things. The latter I get from my policy training, which included an introduction to some of the basic concepts in that field. (My wife, an STS scholar herself, is also a big factor!)
But I think the seeds of my STS-orientation came much earlier in life, when I was still an undergraduate in engineering school. My engineering program, at the University of Virginia, required all undergraduates to write a thesis, and that thesis had to address important STS concepts like engineering ethics. It was not just the thesis, either. My BSEE required several classes in SEAS’s own engineering humanities program, with required books such as To Engineer Is Human: The Role of Failure in Successful Design (Petroski), The Design of Everyday Things (Norman), Normal Accidents (Perrow), The Civilized Engineer (Florman) and, of course, Frankenstein (Shelley). At the time we wondered: why, at a world-class university, would the school of engineering host its own humanities classes? Now I can see that there was something truly cutting-edge about it. (It’s not like we were barred from taking classes outside the engineering school.)
Perhaps because I was indoctrinated at a young age, or because the concepts are right, I firmly believe that an engineer who works without considering the consequences of his creativity is at risk of creating less valuable things than he might. We can all easily conjure a list of the “blockbuster bad ideas” of the 20th century (mustard gas, nuclear weapons, etc). But even when the engineering output is an unalloyed good, with a bit of STS consideration, it is entirely possible that something even better could have been created. Also, I just find it kind of bizarre that STEM folks might be discouraged from thinking about what their work means. I guess it’s part of the myth of the objectivity of science that there is no meaning to think about. That’s wrong about science, and it should be prima facie obviously incorrect about engineering, which is, by definition, a process directed by human desires.
But this kind of more holistic thinking isn’t particularly common, and as a result, places like Silicon Valley seem to be pretty bad at considering consequences. When you’re racing to create something, who has time to stop and think about its implications, much less let those implications determine the course of development? One simple example: hundreds of years of history led to the universally accepted notion that the integrity of a sealed letter should be maintained by all couriers involved in its delivery. When email came along, no such consideration was made. Why? How would the Internet as a means of communications have evolved if privacy were a consideration from the get go? Could the Internet have been “better?” (Yes, duh.)
Anyway, the IEEE article seems to conclude that most of the barriers to getting STEM folks to take on STS thinking are due to the culture of STEM. Though there is truth to that, it’s not the whole story, by far. For example, STS, philosophy, and policy folks have their own jargon and shibboleths, and it’s not easy for someone not trained in the game to participate. Furthermore, even when you do have something to add, I have found the policy crowd rather hostile to direct participation from STEM folks. One reason is that STEM folks are very analytical, and want to talk about all sides of an issue. On the other hand, policy people, at least non-academic “practicing” policy people, are usually focused on a predetermined desired outcome, and the wishy-washiness of the engineers is not very welcome or useful to their campaign. It doesn’t help that engineers often expect carefully curated analysis to “speak for itself.” It doesn’t. I can also attest, again from firsthand experience, that analysis is not highly prized in policy circles. Analysis comes with strings attached: subtlety, complexity, and confounding factors that are of no help when you are trying to persuade.
It’s also important to remember that most engineers work for someone else. They make their living realizing others’ goals. As such, their leeway to affect the direction of their work is limited and to engage in too much STS thinking is to risk their livelihoods.
And finally, in our toxically overspecialized world, it’s just punishing to be a “boundary spanner.” There are no rewards, and it’s a lot of work. If you have the skills, it is very difficult to find employment that will draw meaningfully on both reservoirs of knowledge. This, perhaps, has been the biggest frustration of my career, as I have bounced between these worlds repeatedly, missing one while in the other.
Finally, a parting shot: If you want to bring STS concepts to the fore, you need to bring them to the people with power. Those are not the heads-down STEM practitioners, those are the C-suite masters of the universe. Let’s see some STS thinking more deeply integrated into the curricula at top business schools. Not just an ethics class to check a requisite box, but something more integrated that leads students to think holistically about their companies’ activities and products rather than, say, applying some post-hoc greenwashing or CSR.
Today’s Wall Street Journal had an article about Facebook, in which they promise to change the way they serve advertising in order to defeat ad blockers. This quote, from an FB spokesperson, was choice:
“Facebook is ad-supported. Ads are a part of the Facebook experience; they’re not a tack on”
I’ll admit, I use an ad blocker a lot of the time. It’s not that I’m totally anti-ads, but I am definitely against the utter trash: garbage, useless ads that suck up compute and network resources, cause the page to load much more slowly, and, often enough, include malware and tracking. The problem is most acute on mobile devices, where bandwidth, CPU power, and pixels are all in short supply, and yet it’s harder to block ads there. In fact, you really can’t do it without rooting your phone or doing all your browsing through a proxy.
The ad-supported Internet is just The Worst. I know, I know, I’ve had plenty of people explain to me that that ship has sailed, but I can still hate our ad-supported present and future.
Today’s ads suck, and they seem to be getting worse. Based on trends in per-ad revenue, it appears that most of the world agrees: they are less and less valuable.
Ads create perverse incentives for content creators. Their customer is the advertising client, and the reader is the product. In a pay for service model, you are the customer.
Ads are an attack vector for malware.
Ads use resources on your computer. Sure, they pay the content provider, but the CPU cycles on your computer are stolen.
I’m sure I could come up with 50 sucky things about Internet advertising, but I think it’s overdetermined. What is good about it is that it provides a way for content generators to make money, and so far, nothing else has worked.
The sad situation is that people do not want to pay for the Internet. We shell out $50 or more each month for access to the Internet, but nobody wants to pay for the Internet itself. Why not? The corrosive effect of an ad-driven Internet is so ubiquitous that people cannot even see it anymore. Because we don’t “pay” for anything on the Internet, everything loses its value. Journalism? Gone. Music? I have 30k songs (29.5k about which I do not care one whit) on my iThing.
Here is a prescription for a better Internet:
Paywall every goddam thing
Create non-profit syndicates that exist to attract member websites and collect subscription revenue on their behalf, distributing it according to clicks, or views, or whatever, at minimal cost.
Kneecap all the rentier Internet businesses like Google and Facebook. They’re not very innovative and there is no justification for their outsized profits and “revenue requirements.” There is a solid case for economic regulation of Internet businesses with strong network effects. Do it.
I know this post is haphazard and touches on a bunch of unrelated ideas. If there is one idea I’d like to convey is: let’s get over our addiction to free stuff. It ain’t free.
I was slowly cranking my way up Claremont Avenue the other day on my trusty Bianchi when I started wondering why I was so slow. Well, that was easy: I’m pretty heavy and I’m somewhat out of shape. But which is more important? Which would have a bigger impact if improved?
First, I used a website like this one to determine the average grade over a certain familiar portion of the route. In this case, it was 13.3%. I also have a speedometer on my bike that tells that I average about 5 mph over that stretch. Finally, I weigh about 100 kg, and my bike is another 10 kg.
So, given that the energy to raise a mass m up a height h is m*g*h, the power to raise that mass at a vertical rate r is m*g*r.
That is, that’s how much power it takes to lift my mass up a hill at that rate. Note the trig to change my speed up the hill to a vertical speed. There are losses in pedaling a bike, and on the tires on the road, etc, but this is a good estimate of the overall order of how much power I can comfortably sustain. Let’s call it 300W.
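As a sanity check, the numbers above (110 kg of rider plus bike, a 13.3% grade, 5 mph) can be plugged in directly:

```python
import math

m = 110.0               # rider + bike, kg
g = 9.8                 # m/s^2
grade = 0.133
v = 5 * 0.44704         # 5 mph in m/s

theta = math.atan(grade)               # the trig: grade to hill angle
P_climb = m * g * math.sin(theta) * v  # vertical speed is v*sin(theta)
print(round(P_climb))                  # ~318 W
```

That lands right around the 300 W ballpark figure, before accounting for drivetrain and rolling losses.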
Now, another thing I’ve noticed while riding is that on flat ground, I can maintain about 17 mph. In that case, I’m not adding power to climb a hill at all; all of my power goes to overcoming road friction and drag.
This is a simplification of the more general equation linked above, assuming constant air density, yadda. For 17 mph and 317 W, I get about 0.72376 kg/m. kg/m is a strange dimension, but it is what it is.
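The same back-of-envelope arithmetic recovers that constant, assuming the unrounded climbing power estimate is also what I can sustain on the flat:

```python
import math

m, g, grade = 110.0, 9.8, 0.133
v_climb = 5 * 0.44704                             # 5 mph uphill, m/s
P = m * g * math.sin(math.atan(grade)) * v_climb  # ~318 W sustained power

v_flat = 17 * 0.44704                             # 17 mph on the flat, m/s
c = P / v_flat**3                                 # P = c * v^3 when power only fights drag
print(c)                                          # close to 0.72376 kg/m
```

The cubic dependence on speed is the key fact here: doubling flat-ground speed costs eight times the power.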
So then, I wondered, how fast should I be able to go with a given power budget while climbing different grades?
I created this equation, which combines the power to climb with the power to overcome drag:

P = m*g*sin(theta)*v + c*v^3

where
P is power,
c is the drag power constant calculated above,
m is mass,
g is the acceleration of gravity, and
theta is the angle of the hill. (The angle is the arctangent of the grade, by the way.) Oh, and
v is my speed.
It turns out that my brain doesn’t perform the way it once did and I can’t solve that cubic equation on my own, so I resorted to a Python-based solver which is part of the sympy package.
Note that this equation has three solutions, two of which are complex. We’re only interested in the “real” solution.
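A minimal version of that solve step, using sympy’s numeric solver with the constants estimated above (110 kg, 13.3% grade, c ≈ 0.72376, 300 W):

```python
import math
from sympy import symbols, nsolve

m, g, c, P = 110.0, 9.8, 0.72376, 300.0
s = math.sin(math.atan(0.133))   # sin(theta) for a 13.3% grade

v = symbols('v', positive=True)
# P = m*g*sin(theta)*v + c*v**3 ; nsolve converges to the real positive root
v_ms = float(nsolve(m * g * s * v + c * v**3 - P, v, 2.0))
mph = v_ms / 0.44704
print(round(mph, 1))             # roughly 4.6 mph
```

Restricting the symbol to positive values and starting the iteration near a plausible speed keeps nsolve away from the two complex roots.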
Now, this is finally where the fun starts. Want to know how fast I can climb different grades, or how actual athletes who can summon more power than me can get up?
Like I said, I can make about 300W, but I saw a youtube video of a dude who could make about 1kW, at least for long enough to make toast.
Then I wondered: would losing weight help much? It does. Interestingly, it helps most on the middle grades. On the highest grades, I’m nearly stopped, and the numbers get small. On flat grades, drag (a function of my shape, not my size) dominates. But in the middle, yeah, there’s an effect.
So there you have it. If I lost 10 kg and could increase my power output by 15% I could go from about 5 mph on Claremont to about 6 mph.
This will come as no news to tech-savvy people, but when you open a webpage, you are running a metric shit-ton of code from all over the Internet.
And it’s from a lot of companies, too. On the NYT alone, I get messages from various systems from google, amazon, facebook, doubleclick, moatads.com, brealtime.com.
Aside from the privacy and tracking aspects, it feels like a theft of resources, too. They’re using my CPU to do work that has nothing to do with rendering their page.