Shared posts

23 Oct 00:00

HN comments are underrated

HN comments are terrible. On any topic I’m informed about, the vast majority of comments are pretty clearly wrong. Most of the time, there are zero comments from people who know anything about the topic and the top comment is reasonable sounding but totally incorrect. Additionally, many comments are gratuitously mean. You’ll often hear mean comments backed up with something like “this is better than the other possibility, where everyone just pats each other on the back with comments like ‘this is great’”, as if being an asshole is some sort of talisman against empty platitudes. I’ve seen people push back against that; when pressed, people often say that it’s either impossible or inefficient to teach someone without being mean, as if telling someone that they’re stupid somehow helps them learn. It’s as if people learned how to explain things by watching Simon Cowell and can’t comprehend the concept of an explanation that isn’t littered with personal insults. Paul Graham has said, “Oh, you should never read Hacker News comments about anything you write”. Most of the negative things you hear about HN comments are true.

And yet, I haven’t found a public internet forum with better technical commentary. On topics I’m familiar with, while it’s rare that a thread will have even a single comment that’s well-informed, when those comments appear, they usually float to the top. On other forums, well-informed comments are either non-existent or get buried by reasonable sounding but totally wrong comments when they appear, and they appear even more rarely than on HN.

By volume, there are probably more interesting technical “posts” in comments than in links. Well, that depends on what you find interesting, but that’s true for my interests. If I see a low-level optimization comment from nkurz, a comment on business from patio11, a comment on how companies operate by nostrademons, I almost certainly know that I’m going to read an interesting comment. There are maybe 20 to 30 people I can think of who don’t blog much, but write great comments on HN, and I doubt I even know of half the people who are writing great comments on HN.[1]

I compiled a very abbreviated list of comments I like because comments seem to get lost. If you write a blog post, people will refer to it years later, but comments mostly disappear. I think that’s sad – there’s a lot of great material on HN (and yes, even more not-so-great material).

What’s the deal with MS Word’s file format?

Basically, the Word file format is a binary dump of memory. I kid you not. They just took whatever was in memory and wrote it out to disk. We can try to reason why (maybe it was faster, maybe it made the code smaller), but I think the overriding reason is that the original developers didn’t know any better.

Later as they tried to add features they had to try to make it backward compatible. This is where a lot of the complexity lies. There are lots of crazy workarounds for things that would be simple if you allowed yourself to redesign the file format. It’s pretty clear that this was mandated by management, because no software developer would put themselves through that hell for no reason.

Later they added a fast-save feature (I forget what it is actually called). This appends changes to the file without changing the original file. The way they implemented this was really ingenious, but complicates the file structure a lot.

One thing I feel I must point out (I remember posting a huge thing on slashdot when this article was originally posted) is that 2-way file conversion is next to impossible for word processors. That’s because the file formats do not contain enough information to format the document. The most obvious place to see this is pagination. The file format does not say where to paginate a text flow (unless it is explicitly entered by the user). It relies on the formatter to do it. Each word processor formats text completely differently. Word, for example, famously paginates footnotes incorrectly. They can’t change it, though, because it will break backwards compatibility. This is one of the only reasons that WordPerfect survives today – it is the only word processor that paginates legal documents the way the US Department of Justice requires.

Just considering the pagination issue, you can see what the problem is. When reading a Word document, you have to paginate it like Word – only the file format doesn’t tell you what that is. Then if someone modifies the document and you need to resave it, you need to somehow mark that it should be paginated like Word (even though it might now have features that are not in Word). If it was only pagination, you might be able to do it, but practically everything is like that.

I recommend reading (a bit of) the XML Word file format for those who are interested. You will see large numbers of flags for things like “Format like Word 95”. The format doesn’t say what that is – because it’s pretty obvious that the authors of the file format don’t know. It’s lost in a hopeless mess of legacy code and nobody can figure out what it does now.

Fun with NULL

Here’s another example of this fine feature:

  #include <stdio.h>
  #include <string.h>
  #include <stdlib.h>
  #define LENGTH 128

  int main(int argc, char **argv) {
      char *string = NULL;
      int length = 0;
      if (argc > 1) {
          string = argv[1];
          length = strlen(string);
          if (length >= LENGTH) exit(1);
      }

      char buffer[LENGTH];
      memcpy(buffer, string, length);  // undefined behavior when string is NULL, even with length 0
      buffer[length] = 0;

      if (string == NULL) {
          printf("String is null, so cancel the launch.\n");
      } else {
          printf("String is not null, so launch the missiles!\n");
      }

      printf("string: %s\n", string);  // undefined for null but works in practice

  #if SEGFAULT_ON_NULL                 // compiled out by default, so the transcripts below run to completion
      printf("%s\n", string);          // segfaults on null when bare "%s\n"
  #endif

      return 0;
  }

  nate@skylake:~/src$ clang-3.8 -Wall -O3 null_check.c -o null_check
  nate@skylake:~/src$ null_check
  String is null, so cancel the launch.
  string: (null)

  nate@skylake:~/src$ icc-17 -Wall -O3 null_check.c -o null_check
  nate@skylake:~/src$ null_check
  String is null, so cancel the launch.
  string: (null)

  nate@skylake:~/src$ gcc-5 -Wall -O3 null_check.c -o null_check
  nate@skylake:~/src$ null_check
  String is not null, so launch the missiles!
  string: (null)

It appears that Intel’s ICC and Clang still haven’t caught up with GCC’s optimizations. Ouch if you were depending on that optimization to get the performance you need! But before picking on GCC too much, consider that all three of those compilers segfault on printf("string: "); printf("%s\n", string) when string is NULL, despite having no problem with printf("string: %s\n", string) as a single statement. Can you see why using two separate statements would cause a segfault? If not, see here for a hint.
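
The answer, as a minimal sketch (the example below is mine, not from the original comment): compilers rewrite a bare printf("%s\n", s) into puts(s). glibc’s printf specifically checks for a NULL %s argument and prints “(null)”, but puts hands the pointer straight to strlen, which crashes. Compiling something like the following with -O1 or higher and reading the assembly (gcc -S) shows the substitution:

  #include <stdio.h>

  int main(void) {
      char *s = NULL;
      printf("string: %s\n", s);  // stays a printf call; glibc prints "(null)"
      printf("%s\n", s);          // rewritten to puts(s); strlen(NULL) segfaults
      return 0;
  }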

How do you make sure the autopilot backup is paying attention?

Good engineering eliminates users being able to do the wrong thing as much as possible… You don’t design a feature that invites misuse and then use instructions to try to prevent that misuse.

There was a derailment in Australia called the Waterfall derailment [1]. It occurred because the driver had a heart attack, and it was responsible for 7 deaths (a miracle it was so low, honestly). The root cause was the failure of the dead-man’s switch.

In the case of Waterfall, the driver had 2 dead-man switches he could use - 1) the throttle handle had to be held against a spring at a small rotation, or 2) a bar on the floor could be depressed. You had to do 1 of these things, the idea being that you prevent wrist or foot cramping by allowing the driver to alternate between the two. Failure to do either triggers an emergency brake.

It turns out that this driver was fat enough that when he had a heart attack, his leg was able to depress the pedal enough to hold the emergency system off. Thus, the dead-man’s system never triggered with a whole lot of dead man in the driver’s seat.

I can’t quite remember the specifics of the system at Waterfall, but one method to combat this is to require the pedal to be held halfway between released and fully depressed. The idea being that a dead leg would fully depress the pedal so that would trigger a brake, and a fully released pedal would also trigger a brake. I don’t know if they had that system but certainly that’s one approach used in rail.

Either way, the problem is equally possible in cars. If you lose consciousness and your foot goes limp, a heavy enough leg will be able to hold the pedal down a bit depending on where it’s positioned relative to the pedal and the leverage it has on the floor.

The other major system I’m familiar with for ensuring drivers are alive at the helm is called ‘vigilance’. The way it works is that periodically, a light starts flashing on the dash and the driver has to acknowledge that. If they do not, a buzzer alarm starts sounding. If they still don’t acknowledge it, the train brakes apply and the driver is assumed incapacitated. Let me tell you some stories of my involvement in it.

When we first started, we had a simple vigi system. Every 30 seconds or so (for example), the driver would press a button. Ok cool. Except that then drivers became so hard-wired to pressing the button every 30 seconds that we were having instances of drivers falling asleep/dozing off and still pressing the button right on every 30 seconds because it was so ingrained into them that it was literally a subconscious action.

So we introduced random-timing vigilance, where the time varies 30-60 seconds (for example) and you could only acknowledge it within a small period of time once the light started flashing. Again, drivers started falling asleep/semi asleep and would hit it as soon as the alarm buzzed, each and every time.

So we introduced random-timing, task-linked vigilance, and that finally broke the back of the problem. Now, the driver has to press a button, or turn a knob, or do a number of different activities, and they must do that randomly-chosen activity, at a randomly-chosen time, to acknowledge their consciousness. It was only at that point that we finally nailed down driver alertness.
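
As a concrete sketch of that random-timing, task-linked scheme (entirely hypothetical: the task names, timings, and response model below are invented for illustration, not taken from any real rail system):

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  enum task { PRESS_BUTTON, TURN_KNOB, ANNOUNCE_SIGNAL, NUM_TASKS };
  static const char *task_names[] = { "press button", "turn knob", "announce signal" };

  /* Stub: a real cab would read the physical controls and enforce the
     acknowledgement window; here we simulate a driver who responds 90%
     of the time. */
  static int driver_performed(enum task t, int window_seconds) {
      (void)t; (void)window_seconds;
      return rand() % 10 != 0;
  }

  int main(void) {
      srand((unsigned)time(NULL));
      for (;;) {
          int delay = 30 + rand() % 31;          /* 30-60 s: timing is unpredictable */
          int demanded = rand() % NUM_TASKS;     /* so is the demanded task          */
          printf("after %2d s, demand: %s\n", delay, task_names[demanded]);
          if (!driver_performed(demanded, 5)) {  /* short acknowledgement window */
              printf("no response: emergency brake applied\n");
              return 0;
          }
      }
  }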

See also.


Curious why he would need to move to a more prestigious position? Most people realize by their 30s that prestige is a sucker’s game; it’s a way of inducing people to do things that aren’t much fun and they wouldn’t really want to do on their own, by lauding them with accolades from people they don’t really care about.

Why is FedEx based in Memphis?

… we noticed that we also needed:
(1) A suitable, existing airport at the hub location.
(2) Good weather at the hub location, e.g., relatively little snow, fog, or rain.
(3) Access to good ramp space, that is, where to park and service the airplanes and sort the packages.
(4) Good labor supply, e.g., for the sort center.
(5) Relatively low cost of living to keep down prices.
(6) Friendly regulatory environment.
(7) Candidate airport not too busy, e.g., don’t want arriving planes to have to circle a long time before being able to land.
(8) Airport with relatively little in cross winds and with more than one runway to pick from in case of winds.
(9) Runway altitude not too high, e.g., not high enough to restrict maximum total gross take off weight, e.g., rule out Denver.
(10) No tall obstacles, e.g., mountains, near the ends of the runways.
(11) Good supplies of jet fuel.
(12) Good access to roads for 18 wheel trucks for exchange of packages between trucks and planes, e.g., so that some parts could be trucked to the hub and stored there and shipped directly via the planes to customers that place orders, say, as late as 11 PM for delivery before 10 AM.
So, there were about three candidate locations, Memphis and, as I recall, Cincinnati and Kansas City.
The Memphis airport had some old WWII hangars next to the runway that FedEx could use for the sort center, aircraft maintenance, and HQ office space. Deal done – it was Memphis.

Why Etherpad joined Wave, and why it didn’t work out as expected

The decision to sell to Google was one of the toughest decisions I and my cofounders ever had to wrestle with in our lives. We were excited by the Wave vision though we saw the flaws in the product. The Wave team told us about how they wanted our help making wave simpler and more like etherpad, and we thought we could help with that, though in the end we were unsuccessful at making wave simpler. We were scared of Google as a competitor: they had more engineers and more money behind this project, yet they were running it much more like an independent startup than a normal big-company department. The Wave office was in Australia and had almost total autonomy. And finally, after 1.5 years of being on the brink of failure with AppJet, it was tempting to be able to declare our endeavor a success and provide a decent return to all our investors who had risked their money on us.

In the end, our decision to join Wave did not work out as we had hoped. The biggest lessons learned were that having more engineers and money behind a project can actually be more harmful than helpful, so we were wrong to be scared of Wave as a competitor for this reason. It seems obvious in hindsight, but at the time it wasn’t. Second, I totally underestimated how hard it would be to iterate on the Wave codebase. I was used to rewriting major portions of software in a single all-nighter. Because of the software development process Wave was using, it was practically impossible to iterate on the product. I should have done more diligence on their specific software engineering processes, but instead I assumed because they seemed to be operating like a startup, that they would be able to iterate like a startup. A lot of the product problems were known to the whole Wave team, but we were crippled by a large complex codebase built on poor technical choices and a cumbersome engineering process that prevented fast iteration.

The accuracy of tech news

When I’ve had inside information about a story that later breaks in the tech press, I’m always shocked at how differently it’s perceived by readers of the article vs. how I experienced it. Among startups & major feature launches I’ve been party to, I’ve seen: executives that flat-out say that they’re not working on a product category when there’s been a whole department devoted to it for a year; startups that were founded 1.5 years before the dates listed in Crunchbase/Wikipedia; reporters that count the number of people they meet in a visit and report that as the “team size”, because the company refuses to release that info; funding rounds that never make it to the press; acquisitions that are reported as “for an undisclosed sum” but actually are less than the founders would’ve made if they’d taken a salaried job at the company; project start dates that are actually when the project was staffed up to its current size and ignore the year or so that a small team spent working on the problem (or the 3-4 years that other small teams spent working on the problem); and algorithms or other technologies that are widely reported as being the core of the company’s success, but actually aren’t even used by the company.

Self-destructing speakers from Dell

As the main developer of VLC, we have known about this story for a long time, and this is just Dell putting crap components in their machines and blaming others. Any discussion with them was impossible. So let me explain a bit…

In this case, VLC just uses the Windows APIs (DirectSound), and sends 16-bit signed integers (s16) to the Windows kernel.

VLC allows amplification of the INPUT above the sound that was decoded. This is just like replay gain, broken codecs, badly recorded files or post-amplification and can lead to saturation.

But this is exactly the same as if you put your mp3 file through Audacity, amplify it, and play it with WMP, or if you put a DirectShow filter that amplifies the volume after your codec output. For example, for a long time, VLC’s ac3 and mp3 codecs were too low (-6dB) compared to the reference output.

At worst, this will reduce the dynamics and saturate a lot, but it is not going to break your hardware.

VLC does not (and cannot) modify the OUTPUT volume to destroy the speakers. VLC is software using the OFFICIAL platform APIs.

The issue here is that Dell sound cards output power (roughly proportional to the square of the amplitude) that Dell speakers cannot handle. Simply said, the sound card outputs at most 10W, the speakers can only take 6W in, and neither their BIOS nor their drivers block this.
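
To put rough numbers on that square law (a toy calculation; the 10W and 6W figures above are the commenter’s illustration, and the gains below are invented): a +6 dB software boost doubles the amplitude and roughly quadruples the power delivered, so a modest-looking gain can blow past a small speaker’s rating.

  /* Compile with: cc db_power.c -lm */
  #include <stdio.h>
  #include <math.h>

  int main(void) {
      double rated_watts = 6.0;  /* illustrative speaker rating from the comment */
      for (double gain_db = 0.0; gain_db <= 12.0; gain_db += 3.0) {
          double amplitude_ratio = pow(10.0, gain_db / 20.0);  /* amplitude: 10^(dB/20) */
          double power_ratio     = pow(10.0, gain_db / 10.0);  /* power: amplitude^2    */
          printf("+%2.0f dB: %.2fx amplitude, %.2fx power (%.1f W on a 6 W baseline)\n",
                 gain_db, amplitude_ratio, power_ratio, power_ratio * rated_watts);
      }
      return 0;
  }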

And as VLC is present on a lot of machines, it’s simple to blame VLC. “Correlation does not mean causation” is something that seems too complex for cheap Dell support…

Learning on the job, startups vs. big companies

Working for someone else’s startup, I learned how to quickly cobble solutions together. I learned about uncertainty and picking a direction regardless of whether you’re sure it’ll work. I learned that most startups fail, and that when they fail, the people who end up doing well are the ones who were looking out for their own interests all along. I learned a lot of basic technical skills, how to write code quickly and learn new APIs quickly and deploy software to multiple machines. I learned how quickly problems of scaling a development team crop up, and how early you should start investing in automation.

Working for Google, I learned how to fix problems once and for all and build that culture into the organization. I learned that even in successful companies, everything is temporary, and that great products are usually built through a lot of hard work by many people rather than great ah-ha insights. I learned how to architect systems for scale, and a lot of practices used for robust, high-availability, frequently-deployed systems. I learned the value of research and of spending a lot of time on a single important problem: many startups take a scattershot approach, trying one weekend hackathon after another and finding nobody wants any of them, while oftentimes there are opportunities that nobody has solved because nobody wants to put in the work. I learned how to work in teams and try to understand what other people want. I learned what problems are really painful for big organizations. I learned how to rigorously research the market and use data to make product decisions, rather than making decisions based on what seems best to one person.

We failed this person, what are we going to do differently?

Having been in on the company’s leadership meetings where departures were noted with a simple ‘regret yes/no’ flag, it was my experience that no single departure had any effect. Mass departures did, trends did, but one person never did, even when that person was a founder.

The rationalizations always put the issue back on the departing employee: “They were burned out”, “They had lost their ability to be effective”, “They have moved on”, “They just haven’t grown with the company”. Never was it “We failed this person, what are we going to do differently?”

AWS’s origin story

Anyway, the SOA effort was in full swing when I was there. It was a pain, and it was a mess because every team did things differently and every API was different and based on different assumptions and written in a different language.

But I want to correct the misperception that this led to AWS. It didn’t. S3 was written by its own team, from scratch. At the time I was at Amazon, working on the retail site, none of Amazon.com was running on AWS. I know, when AWS was announced, with great fanfare, they said “the services that power Amazon.com can now power your business!” or words to that effect. This was a flat out lie. The only thing they shared was data centers and a standard hardware configuration. Even by the time I left, when AWS was running full steam ahead (and probably running Reddit already), none of Amazon.com was running on AWS, except for a few, small, experimental and relatively new projects. I’m sure more of it has been adopted now, but AWS was always a separate team (and a better managed one, from what I could see).

Why is Windows so slow?

I (and others) have put a lot of effort into making the Linux Chrome build fast. Some examples are multiple new implementations of the build system, experimentation with the gold linker (e.g., measuring and adjusting the still-off-by-default thread flags) as well as digging into bugs in it, and other underdocumented things like ‘thin’ ar archives.

But it’s also true that people who are more of Windows wizards than I am a Linux apprentice have worked on Chrome’s Windows build. If you asked me the original question, I’d say the underlying problem is that on Windows all you have is what Microsoft gives you and you can’t typically do better than that. For example, migrating the Chrome build off of Visual Studio would be a large undertaking, large enough that it’s rarely considered. (Another way of phrasing this is it’s the IDE problem: you get all of the IDE or you get nothing.)

When addressing the poor Windows performance people first bought SSDs, something that never even occurred to me (“your system has enough RAM that the kernel cache of the file system should be in memory anyway!”). But for whatever reason on the Linux side some Googlers saw fit to rewrite the Linux linker to make it twice as fast (this effort predated Chrome), and all Linux developers now get to benefit from that. Perhaps the difference is that when people write awesome tools for Windows or Mac they try to sell them rather than give them away.

Why is Windows so slow, an insider view

I’m a developer in Windows and contribute to the NT kernel. (Proof: the SHA1 hash of revision #102 of [Edit: filename redacted] is [Edit: hash redacted].) I’m posting through Tor for obvious reasons.

Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening. The cause of the problem is social. There’s almost none of the improvement for its own sake, for the sake of glory, that you see in the Linux world.

Granted, occasionally one sees naive people try to make things better. These people almost always fail. We can and do improve performance for specific scenarios that people with the ability to allocate resources believe impact business goals, but this work is Sisyphean. There’s no formal or informal program of systemic performance improvement. We started caring about security because pre-SP3 Windows XP was an existential threat to the business. Our low performance is not an existential threat to the business.

See, component owners are generally openly hostile to outside patches: if you’re a dev, accepting an outside patch makes your lead angry (due to the need to maintain this patch and to justify the unplanned design change in shiproom), makes test angry (because test is on the hook for making sure the change doesn’t break anything, and you just made work for them), and makes PM angry (due to the schedule implications of code churn). There’s just no incentive to accept changes from outside your own team. You can always find a reason to say “no”, and you have very little incentive to say “yes”.

What’s the probability of a successful exit by city?

See link for giant table :-).

The hiring crunch

Broken record: startups are also probably rejecting a lot of engineering candidates that would perform as well or better than anyone on their existing team, because tech industry hiring processes are folkloric and irrational.

Too long to excerpt. See the link!

Should you leave a bad job?

I am a 42-year-old very successful programmer who has been through a lot of situations in my career so far, many of them highly demotivating. And the best advice I have for you is to get out of what you are doing. Really. Even though you state that you are not in a position to do that, you really are. It is okay. You are free. Okay, you are helping your boyfriend’s startup, but what is the appropriate cost for this? Would he have you do it if he knew it was crushing your soul?

I don’t use the phrase “crushing your soul” lightly. When it happens slowly, as it does in these cases, it is hard to see the scale of what is happening. But this is a very serious situation and if left unchecked it may damage the potential for you to do good work for the rest of your life.

The commenters who are warning about burnout are right. Burnout is a very serious situation. If you burn yourself out hard, it will be difficult to be effective at any future job you go to, even if it is ostensibly a wonderful job. Treat burnout like a physical injury. I burned myself out once and it took at least 12 years to regain full productivity. Don’t do it.

  • More broadly, the best and most creative work comes from a root of joy and excitement. If you lose your ability to feel joy and excitement about programming-related things, you’ll be unable to do the best work. Note that this issue is separate from, and parallel to, burnout! If you are burned out, you might still be able to feel the joy and excitement briefly at the start of a project/idea, but they will fade quickly as the reality of day-to-day work sets in. Alternatively, if you are not burned out but also do not have a sense of wonder, it is likely you will never get yourself started on the good work.

  • The earlier in your career it is now, the more important this time is for your development. Programmers learn by doing. If you put yourself into an environment where you are constantly challenged and are working at the top threshold of your ability, then after a few years have gone by, your skills will have increased tremendously. It is like going to intensively learn kung fu for a few years, or going into Navy SEAL training or something. But this isn’t just a one-time constant increase. The faster you get things done, and the more thorough and error-free they are, the more ideas you can execute on, which means you will learn faster in the future too. Over the long term, programming skill is like compound interest. More now means a LOT more later. Less now means a LOT less later.

So if you are putting yourself into a position that is not really challenging, that is a bummer day in and day out, and you get things done slowly, you aren’t just having a slow time now. You are bringing down that compound interest curve for the rest of your career. It is a serious problem. If I could go back to my early career I would mercilessly cut out all the shitty jobs I did (and there were many of them).

Creating change when politically unpopular

A small anecdote. An acquaintance related a story of fixing the ‘drainage’ in their back yard. They were trying to grow some plants that were sensitive to excessive moisture, and the plants were dying. Not watering them, or watering them just a little, didn’t seem to change anything. They died. A professional gardener suggested that their problem was drainage. So they dug down about 3’ (where the soil was very very wet) and tried to build in better drainage. As they were on the side of a hill, water table issues were not considered. It turned out their “problem” was that the water main that fed their house and the houses up the hill was so pressurized at their property (because it had to maintain pressure at the top of the hill too) that the pipe seams were leaking and it was pumping gallons of water into the ground underneath their property. The problem wasn’t their garden, the problem was that the city water supply was poorly designed.

While I have never been asked if I was an engineer on the phone, I have experienced similar things to Rachel in meetings and with regard to suggestions. Co-workers will create an internal assessment of your value and then respond based on that assessment. If they have written you off they will ignore you, if you prove their assessment wrong in a public forum they will attack you. These are management issues, and something which was sorely lacking in the stories.

If you are the “owner” of a meeting, and someone is trying to be heard and isn’t, it is incumbent on you to let them be heard. By your position power as “the boss” you can naturally interrupt a discussion to collect more data from other members. It’s also important to ask questions like “does anyone have any concerns?” to draw out people who have valid input but are too timid to share it.

In a highly political environment there are two ways to create change. One is overt manipulation, which is to collect political power to yourself and then exert it to enact change. The other is covert manipulation, which is to enact change subtly enough that the political organism doesn’t react (a reaction sometimes called “triggering the antibodies”).

The problem with the latter is that if you help make positive change while keeping everyone not pissed off, no one attributes it to you (which is good for the change agent, because if people knew, the antibodies would react; but bad if your manager doesn’t recognize it). I asked my manager what change he wanted to be ‘true’ yet he (or others) had been unsuccessful making true, he gave me one, and 18 months later that change was in place. He didn’t believe that I was the one who had made the change. I suggested he pick a change he wanted to happen and not tell me, then in 18 months we could see if that one happened :-). But he also didn’t understand enough about organizational dynamics to know that making change without having the source of that change point back at you was even possible.

How to get tech support from Google

Heavily relying on Google product? ✓
Hitting a dead-end with Google’s customer service? ✓
Have an existing audience you can leverage to get some random Google employee’s attention? ✓
Reach front page of Hacker News? ✓
Good news! You should have your problem fixed in 2-5 business days. The rest of us suckers relying on google services get to stare at our inboxes helplessly, waiting for a response to our support ticket (which will never come). I feel like it’s almost a [rite] of passage these days to rely heavily on a Google service, only to have something go wrong and be left out in the cold.

Taking funding

IIRC PayPal was very similar – it was sold for $1.5B, but Max Levchin’s share was only about $30M, and Elon Musk’s was only about $100M. By comparison, many early Web 2.0 darlings (del.icio.us, Blogger, Flickr) sold for only $20-40M, but their founders had only taken small seed rounds, and so the vast majority of the purchase price went to the founders. 75% of a $40M acquisition = 3% of a $1B acquisition.

Something for founders to think about when they’re taking funding. If you look at the gigantic tech fortunes - Gates, Page/Brin, Omidyar, Bezos, Zuckerberg, Hewlett/Packard - they usually came from having a company that was already profitable or was already well down the hockey-stick user growth curve and had a clear path to monetization by the time they sought investment. Companies that fight tooth & nail for customers and need lots of outside capital to do it usually have much worse financial outcomes.

StackOverflow vs. Experts-Exchange

A lot of the people who were involved in some way in Experts-Exchange don’t understand Stack Overflow.

The basic value flow of EE is that “experts” provide valuable “answers” for novices with questions. In that equation there’s one person asking a question and one person writing an answer.

Stack Overflow recognizes that for every person who asks a question, 100-10,000 people will type that same question into Google and find an answer that has already been written. In our equation, we are a community of people writing answers that will be read by hundreds or thousands of people. Ours is a project more like Wikipedia – collaboratively creating a resource for the Internet at large.

Because that resource is provided by the community, it belongs to the community. That’s why our data is freely available and licensed under Creative Commons. We did this specifically because of the negative experience we had with EE taking a community-generated resource and deciding to slap a paywall around it.

The attitude of many EE contributors, like Greg Young who calculates that he “worked” for half a year for free, is not shared by the 60,000 people who write answers on SO every month. When you talk to them you realize that on Stack Overflow, answering questions is about learning. It’s about creating a permanent artifact to make the Internet better. It’s about helping someone solve a problem in five minutes that would have taken them hours to solve on their own. It’s not about working for free.

As soon as EE introduced the concept of money they forced everybody to think of their work on EE as just that – work.

Making money from Amazon bots

I saw that one of my old textbooks was selling for a nice price, so I listed it along with two other used copies. I priced it $1 cheaper than the lowest price offered, but within an hour both sellers had changed their prices to $.01 and $.02 cheaper than mine. I reduced it two times more by $1, and each time they beat my price by a cent or two. So what I did was reduce my price by a few dollars every hour for one day until everybody was priced under $5. Then I bought their books and changed my price back.
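
The repricing race is easy to simulate. Here is a toy sketch (all prices, step sizes, and intervals are invented; the real bots’ rules are unknown beyond “undercut the minimum by a cent or two”):

  /* Two repricing bots always undercut the current minimum by a cent or
     two, so a seller who keeps stepping their own price down drags the
     bots down with them. */
  #include <stdio.h>

  int main(void) {
      int me = 4500, bot_a = 0, bot_b = 0;  /* prices in cents; start at $45.00 */
      for (int hour = 0; hour < 24; ++hour) {
          bot_a = me - 1;                   /* undercut me by $0.01 */
          bot_b = me - 2;                   /* undercut me by $0.02 */
          printf("hour %2d: me $%.2f, bots $%.2f and $%.2f\n",
                 hour, me / 100.0, bot_a / 100.0, bot_b / 100.0);
          if (me <= 500) break;             /* everyone is under $5 now */
          me -= 800;                        /* drop my price a few dollars each round */
      }
      printf("buy the bots' copies near $%.2f, then restore the original price\n",
             bot_b / 100.0);
      return 0;
  }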

What running a business is like

While I like the sentiment here, I think the danger is that engineers might come to the mistaken conclusion that making pizzas is the primary limiting reagent to running a successful pizzeria. Running a successful pizzeria is more about schlepping to local hotels and leaving them 50 copies of your menu to put at the front desk, hiring drivers who will both deliver pizzas in a timely fashion and not embezzle your (razor-thin) profits while also costing next-to-nothing to employ, maintaining a kitchen in sufficient order to pass your local health inspector’s annual visit (and dealing with 47 different pieces of paper related to that), being able to juggle priorities like “Do I take out a bank loan to build a new brick-oven, which will make the pizza taste better, in the knowledge that this will commit $3,000 of my cash flow every month for the next 3 years, or do I hire an extra cook?”, sourcing ingredients such that they’re available in quantity and quality every day for a fairly consistent price, setting prices such that they’re locally competitive for your chosen clientele but generate a healthy gross margin for the business, understanding why a healthy gross margin really doesn’t imply a healthy net margin and that the rent still needs to get paid, keeping good-enough records such that you know whether your business is dying before you can’t make payroll and such that you can provide a reasonably accurate picture of accounts for the taxation authorities every year, balancing 50% off medium pizza promotions with the desire to not cannibalize the business of your regulars, etc etc, and by the way tomato sauce should be tangy but not sour and cheese should melt with just the faintest wisp of a crust on it.

Do you want to write software for a living? Google is hiring. Do you want to run a software business? Godspeed. Software is now 10% of your working life.

How to handle mismanagement?

The way I prefer to think of it is: it is not your job to protect people (particularly senior management) from the consequences of their decisions. Make your decisions in your own best interest; it is up to the organization to make sure that your interest aligns with theirs.

Google used to have a severe problem where code refactoring & maintenance was not rewarded in performance reviews while launches were highly regarded, which led to the effect of everybody trying to launch things as fast as possible and nobody cleaning up the messes left behind. Eventually launches started getting slowed down, Larry started asking “Why can’t we have nice things?”, and everybody responded “Because you’ve been paying us to rack up technical debt.” As a result, teams were formed with the express purpose of code health & maintenance, those teams that were already working on those goals got more visibility, and refactoring contributions started counting for something in perf. Moreover, many ex-Googlers who were fed up with the situation went to Facebook and, I’ve heard, instituted a culture there where grungy engineering maintenance is valued by your peers.

None of this would’ve happened if people had just heroically fallen on their own sword and burnt out doing work nobody cared about. Sometimes it takes highly visible consequences before people with decision-making power realize there’s a problem and start correcting it. If those consequences never happen, they’ll keep believing it’s not a problem and won’t pay much attention to it.

Some downsides of immutability

People who aren’t exactly lying

It took me too long to figure this out. There are some people who truly, and passionately, believe something they say to you, but realistically they personally can’t make it happen, so you can’t really bank on that ‘promise.’

I used to think those people were lying to take advantage, but as I’ve gotten older I have come to recognize that these ‘yes’ people get promoted a lot. And for some of them, they really do believe what they are saying.

As an engineer I’ve found that once I can ‘calibrate’ someone’s ‘yes-ness’ I can then work with them, understanding that they only make ‘wishful’ commitments rather than ‘reasoned’ commitments.

So when someone, like Steve Jobs, says “we’re going to make it an open standard!”, my first question then is “Great, I’ve got your support in making this an open standard, so I can count on you to wield your position influence to aid me when folks line up against that effort, right?” If the answer to that question is no, then they were lying.

The difference is subtle of course but important. Steve clearly doesn’t go to standards meetings and vote, etc., but if Manager Bob gets push back from accounting that he’s going to exceed his travel budget by sending 5 guys to the Open Video Chat Working Group which is championing the Facetime protocol as an open standard, then Manager Bob goes to Steve and says “I need your help here, these 5 guys are needed to argue this standard and keep it from being turned into a turd by the 5 guys from Google who are going to attend.” and then Steve whips off a one liner to accounting that says “Get off this guy’s back we need this.” Then it’s all good. If on the other hand he says “We gotta save money, send one guy.” well in that case I’m more sympathetic to the accusation of prevarication.

What makes engineers productive?

For those who work inside Google, it’s well worth it to look at Jeff & Sanjay’s commit history and code review dashboard. They aren’t actually all that much more productive in terms of code written than a decent SWE3 who knows his codebase.

The reason they have a reputation as rockstars is that they can apply this productivity to things that really matter; they’re able to pick out the really important parts of the problem and then focus their efforts there, so that the end result ends up being much more impactful than what the SWE3 wrote. The SWE3 may spend his time writing a bunch of unit tests that catch bugs that wouldn’t really have happened anyway, or migrating from one system to another that isn’t really a large improvement, or going down an architectural dead end that’ll just have to be rewritten later. Jeff or Sanjay (or any of the other folks operating at that level) will spend their time running a proposed API by clients to ensure it meets their needs, or measuring the performance of subsystems so they fully understand their building blocks, or mentally simulating the operation of the system before building it so they can rapidly test out alternatives. They don’t actually write more code than a junior developer (oftentimes, they write less), but the code they do write gives them more information, which makes them ensure that they write the right code.

I feel like this point needs to be stressed a whole lot more than it is, as there’s a whole mythology that’s grown up around 10x developers that’s not all that helpful. In particular, people need to realize that these developers rapidly become 1x developers (or worse) if you don’t let them make their own architectural choices - the reason they’re excellent in the first place is because they know how to determine if certain work is going to be useless and avoid doing it in the first place. If you dictate that they do it anyway, they’re going to be just as slow as any other developer.

Do the work, be a hero

I got the hero speech too, once. If anyone ever mentions the word “heroic” again and there isn’t a burning building involved, I will start looking for new employment immediately. It seems that in our industry it is universally a code word for “We’re about to exploit you because the project is understaffed and under-budgeted for time, and that is exactly as we planned it, so you’d better cowboy up.”

Maybe it is different if you’re writing Quake, but I guarantee you the 43rd best selling game that year also had programmers “encouraged onwards” by tales of the glory that awaited after the death march.

Learning English from watching movies

I was once speaking to a good friend of mine here, in English.
“Do you want to go out for yakitori?”
“Go fuck yourself!”
“[switches to Japanese] Have I recently done anything very major to offend you?”
“No, of course not.”
“Oh, OK, I was worried. So that phrase, that’s something you would only say under extreme distress when you had maximal desire to offend me, or I suppose you could use it jokingly between friends, but neither you nor I generally talk that way.”
“I learned it from a movie. I thought it meant ‘No.’”

Being smart and getting things done

True story: I went to a talk given by one of the ‘engineering elders’ (these were low Emp# engineers who were considered quite successful and were to be emulated by the workers :-). This person stated that when they came to work at Google they were given the XYZ system to work on (sadly I’m prevented from disclosing the actual system). They remarked how they spent a couple of days looking over the system, which was complicated and creaky; they couldn’t figure it out, so they wrote a new system. Yup, and they committed that. This person is a coding God, are they not? (sarcasm) I asked what happened to the old system (I knew, but was interested in their perspective) and they said it was still around because a few things still used it, but (quite proudly) nearly everything else had moved to their new system.

So if you were reading carefully, this person created a new system to ‘replace’ an existing system which they didn’t understand and got nearly everyone to move to the new system. That made them uber because they got something big to put on their internal resume, and a whole crapload of folks had to write new code to adapt from the old system to this new system, which imperfectly recreated the old system (remember, they didn’t understand the original), such that those parts of the system that relied on the more obscure bits had yet to be converted (because nobody understood either the dependent code or the old system, apparently).

Was this person smart? Blindingly brilliant according to some of their peers. Did they get things done? Hell yes, they wrote the replacement for the XYZ system from scratch! One person? Can you imagine? Would I hire them? Not unless they were the last qualified person in my pool and I was out of time.

That anecdote encapsulates the dangerous side of smart people who get things done.

Public speaking tips

Some kids grow up on football. I grew up on public speaking (as behavioral therapy for a speech impediment, actually). If you want to get radically better in a hurry:

Too long to excerpt. See the link.

A reason a company can be a bad fit

I can relate to this, but I can also relate to the other side of the question. Sometimes it isn’t me, it’s you. Take someone who gets things done, and suddenly in your organization they aren’t delivering. Could be them, but it could also be you.

I had this experience working at Google. I had a horrible time getting anything done there. I spent a bit of time evaluating that, since it had never been the case in my career up to that point that I was unable to move the ball forward, and I really wanted to understand it. The short answer was that Google had developed a number of people who spent much, if not all, of their time preventing change. It took me a while to figure out what motivated someone to be anti-change.

The fear was risk and safety. Folks moved around a lot and so you had people in charge of systems they didn’t build, didn’t understand all the moving parts of, and were apt to get a poor rating if they broke. When dealing with people in that situation one could either educate them and bring them along, or steam roll over them. Education takes time, and during that time the ‘teacher’ doesn’t get anything done. This favors steamrolling evolutionarily :-)

So you can hire someone who gets stuff done, but if getting stuff done in your organization requires them to be an asshole, and they aren’t up for that, well they aren’t going to be nearly as successful as you would like them to be.

What working at Google is like

I can tell that this was written by an outsider, because it focuses on the perks and rehashes several cliches that have made their way into the popular media but aren’t all that accurate.

Most Googlers will tell you that the best thing about working there is having the ability to work on really hard problems, with really smart coworkers, and lots of resources at your disposal. I remember asking my interviewer whether I could use things like Google’s index if I had a cool 20% idea, and he was like “Sure. That’s encouraged. Oftentimes I’ll just grab 4000 or so machines and run a MapReduce to test out some hypothesis.” My phone screener, when I asked him what it was like to work there, said “It’s a place where really smart people go to be average,” which has turned out to be both true and honestly one of the best things that I’ve gained from working there.

NSA vs. Black Hat

This entire event was a staged press op. Keith Alexander is a ~30 year veteran of SIGINT, electronic warfare, and intelligence, and a Four-Star US Army General — which is a bigger deal than you probably think it is. He’s a spy chief in the truest sense and a master politician. Anyone who thinks he walked into that conference hall in Caesars without a near perfect forecast of the outcome of the speech is kidding themselves.

Heckling Alexander played right into the strategy. It gave him an opportunity to look reasonable compared to his detractors, and, more generally (and alarmingly), to have the NSA look more reasonable compared to opponents of NSA surveillance. It allowed him to “split the vote” with audience reactions, getting people who probably have serious misgivings about NSA programs to applaud his calm and graceful handling of shouted insults; many of those people probably applauded simply to protest the hecklers, who after all were making it harder for them to follow what Alexander was trying to say.

There was no serious Q&A on offer at the keynote. The questions were pre-screened; all attendees could do was vote on them. There was no possibility that anything would come of this speech other than an effectively unchallenged full-throated defense of the NSA’s programs.

Are deadlines necessary?

Interestingly one of the things that I found most amazing when I was working for Google was a nearly total inability to grasp the concept of ‘deadline.’ For so many years the company just shipped code by committing it to the release branch and having the code deploy over the course of a small number of weeks to the ‘fleet’.

Sure there were ‘processes’, like “Canary it in some cluster and watch the results for a few weeks before turning it loose on the world.” but being completely vertically integrated is a unique sort of situation.

Debugging on Windows vs. Linux

Being a very experienced game developer who tried to switch to Linux, I have posted about this before (and gotten flamed heavily by reactionary Linux people).

The main reason is that debugging is terrible on Linux. gdb is just bad to use, and all these IDEs that try to interface with gdb to “improve” it do it badly (mainly because gdb itself is not good at being interfaced with). Someone needs to nuke this site from orbit and build a new debugger from scratch, and provide a library-style API that IDEs can use to inspect executables in rich and subtle ways.

Productivity is crucial. If the lack of a reasonable debugging environment costs me even 5% of my productivity, that is too much, because games take so much work to make. At the end of a project, I just don’t have 5% effort left any more. It requires everything. (But the current Linux situation is way more than a 5% productivity drain. I don’t know exactly what it is, but if I were to guess, I would say it is something like 20%.)

What happens when you become rich?

What is interesting is that people don’t even know they have a complex about money until they get “rich.” I’ve watched many people, perhaps a hundred, go from “working to pay the bills” to “holy crap I can pay all my current and possibly my future bills with the money I now have.” That doesn’t include the guy who lived in our neighborhood and won the CA lottery one year.

It affects people in ways they don’t expect. If it’s sudden (like a lottery win or a sudden IPO surge) it can be difficult to process. But it is important to realize that one is processing an exceptional event, like having a loved one die or a spouse suddenly divorcing you.

Not everyone feels “guilty”, not everyone feels “smug.” A lot of millionaires and billionaires in the Bay Area are outwardly unchanged. But the bottom line is that the emotion comes from the cognitive dissonance between values and reality. What do you value? What is reality?

One woman I knew at Google was massively conflicted when she started work at Google. She always felt that she would help the homeless folks she saw, if she had more money than she needed. Upon becoming rich (on Google stock value), now she found that she wanted to save the money she had for her future kids education and needs. Was she a bad person? Before? After? Do your kids hate you if you give away their college education to the local foodbank? Do your peers hate you because you could close the current food gap at the foodbank and you don’t?

Microsoft’s Skype acquisition

This is Microsoft’s ICQ moment. Overpaying for a company at the moment when its core competency is becoming a commodity. Does anyone have the slightest bit of loyalty to Skype? Of course not. They’re going to use whichever video chat comes built into their smartphone, tablet, computer, etc. They’re going to use Facebook’s eventual video chat service or something Google offers. No one is going to actively seek out Skype when so many alternatives exist and are deeply integrated into the products/services they already use. Certainly no one is going to buy a Microsoft product simply because it has Skype integration. Who cares if it’s FaceTime, Facebook Video Chat, Google Video Chat? It’s all the same to the user.

With $7B they should have just given away about 15 million Windows Mobile phones in the form of an epic PR stunt. It’s not a bad product – they just need to make people realize it exists. If they want to flush money down the toilet they might as well engage users in the process right?


How did HN get the commenter base that it has? If you read HN, on any given week, there are at least as many good, substantial, comments as there are posts. This is different from every other modern public news aggregator I can find out there, and I don’t really know what the ingredients are that make HN successful.

For the last couple years (ish?), the moderation regime has been really active in trying to get a good mix of stories on the front page and in tamping down on gratuitously mean comments. But there was a period of years where the moderation could be described as sparse, arbitrary, and capricious, and while there are fewer “bad” comments now, it doesn’t seem like good moderation actually generates more “good” comments.

The ranking scheme seems to penalize posts that have a lot of comments, on the theory that flamebait topics will draw a lot of comments. That sometimes prematurely buries stories with good discussion, but much more often it buries stories that draw pointless flamewars. If you just read HN, it’s hard to see the effect, but if you look at forums that use comments as a positive factor in ranking, the difference is dramatic – forums that boost topics with many comments (presumably on the theory that vigorous discussion should be highlighted) often have content-free flame wars pinned at the top for long periods of time.

Something else that HN does differently from most forums is that user flags are weighted very heavily. On reddit, a downvote only cancels out an upvote, which means that flamebait topics that draw a lot of upvotes, like “platform X is cancer” or “Y is doing some horrible thing”, often get pinned to the top of r/programming for an entire day, since the number of people who don’t want to see that is drowned out by the number of people who upvote outrageous stories. If you read the comments for one of the “X is cancer” posts on r/programming, the top comment will almost inevitably be that the post has no content, that the author of the post is a troll who never posts anything with content, and that we’d be better off with less flamebait by the author at the top of r/programming. But the people who will upvote outrage porn outnumber the people who will downvote it, so that kind of stuff dominates aggregators that use raw votes for ranking. Having flamebait drop off the front page quickly is significant, but it doesn’t seem sufficient to explain why there are so many more well-informed comments on HN than on other forums with roughly similar traffic.
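
For concreteness, here is a sketch of the ranking shape being described. This is not HN’s actual algorithm (its penalties aren’t public); the votes^0.8 / (age+2)^1.8 gravity shape follows common public reconstructions, and the flag weight and comment penalty are invented to illustrate the two mechanisms above:

  #include <stdio.h>
  #include <math.h>

  /* Illustrative only: flags subtract far more than downvotes would, and a
     high comment-to-vote ratio is treated as a flamewar signal rather than
     a boost. */
  static double rank_story(int upvotes, int flags, int comments, double age_hours) {
      double votes = upvotes - 8.0 * flags;  /* one flag outweighs several upvotes */
      double heat = comments > upvotes ? (double)upvotes / comments : 1.0;
      return votes > 0 ? pow(votes, 0.8) * heat / pow(age_hours + 2.0, 1.8) : 0.0;
  }

  int main(void) {
      printf("calm story: %.4f\n", rank_story(120, 0, 40, 3.0));
      printf("flamebait:  %.4f\n", rank_story(150, 12, 600, 3.0));
      return 0;
  }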

Maybe the answer is that people come to HN for the same reason people come to Silicon Valley – despite all the downsides, there’s a relatively large concentration of experts there across a wide variety of CS-related disciplines. If that’s true, and it’s a combination of path dependence and network effects, that’s pretty depressing, since that’s not replicable.

If you liked this curated list of comments, you’ll probably also like this list of books and this list of blogs.

This is part of an experiment where I write up thoughts quickly, without proofing or editing. Apologies if this is less clear than a normal post. This is probably going to be the last post like this, for now, since, by quickly writing up a post whenever I have something that can be written up quickly, I’m building up a backlog of post ideas that require re-reading the literature in an area or running experiments.

P.S. Please suggest other good comments! By their nature, HN comments are much less discoverable than stories, so there are a lot of great comments that I haven’t seen.

  1. if you’re one of those people, you’ve probably already thought of this, but maybe consider, at the margin, blogging more and commenting on HN less? As a result of writing this post, I looked through my old HN comments and noticed that I wrote this comment three years ago, which is another way of stating the second half of this post I wrote recently. Comparing the two, I think the HN comment is substantially better written. But, like most HN comments, it got some traffic while the story was still current and is now buried, and AFAICT, nothing really happened as a result of the comment. The blog post, despite being “worse”, has gotten some people to contact me personally, and I’ve had some good discussions about that and other topics as a result. Additionally, people occasionally contact me about older posts I’ve written; I continue to get interesting stuff in my inbox as a result of having written posts years ago. Writing your comment up as a blog post will almost certainly provide more value to you, and if it gets posted to HN, it will probably provide no less value to HN.

    Steve Yegge has a pretty long list of reasons why you should blog that I won’t recapitulate here. And if you’re writing substantial comments on HN, you’re already doing basically everything you’d need to do to write a blog except that you’re putting the text into a little box on HN instead of into a static site generator or some hosted blogging service. BTW, I’m not just saying this for your benefit: my selfish reason for writing this appeal is that I really want to read the Nathan Kurz blog on low-level optimizations, the Jonathan Tang blog on what it’s like to work at startups vs. big companies, etc.

23 Oct 07:06

Comic for October 23, 2016

by Scott Adams
Adam Victor Brandizzi

My emails are like this hehehe.

23 Oct 15:08

Saturday Morning Breakfast Cereal - Phonemes


Time for me to lock in the erotic linguistics audience.

19 Oct 12:19

The Ig Nobel prizes in Economics – in praise of ridiculous research

by Tim Harford
Undercover Economist

Congratulations are in order to Oliver Hart and Bengt Holmstrom, winners on 10 October of the Nobel Memorial Prize in Economics. Even though the economics prize is not a full-fledged Nobel, it has been earned by some splendid social scientists over the years — including a number of people who are not economists at all, from Herbert Simon and John Nash to Daniel Kahneman and Elinor Ostrom.

Yet this week I would rather discuss a different prize: the Ig Nobel prize for economics. The Ig Nobels are an enormously silly affair: they have been awarded for a study of dinosaur gaits that involved attaching weighted sticks to chickens (the biology prize), for studying stinky feet (medicine) and for figuring out why shower curtains tend to billow inwards when you’re taking a shower (physics).

But one of the Ig Nobel’s charms is that this ridiculous research might actually tell us something about the world. David Dunning and Justin Kruger received an Ig Nobel prize in psychology for their discovery that incompetent people rarely realise they are incompetent; the Dunning-Kruger effect is now widely cited. Dorian Raymer and Douglas Smith won an Ig Nobel in physics for their discovery that hair and string have a tendency to become tangled — potentially an important line of research in understanding the structure of DNA. Most famously, Andre Geim’s Ig Nobel in physics for levitating a live frog was promptly followed by a proper Nobel Prize in the same subject for the discovery of graphene.

A whimsical curiosity about the world is something to be encouraged. No wonder that the credo of the Ig Nobel prizes is that they should make you laugh, then make you think. In 2001, the Ig Nobel committee did just that, awarding the economics prize to Joel Slemrod and Wojciech Kopczuk, who demonstrated that people will try to postpone their own deaths to avoid inheritance tax. This highlights an important point about the power of incentives — and the pattern has since been discovered elsewhere.

Alas, most economics Ig Nobel prizes provoke little more than harsh laughter. They’ve been awarded to Nick Leeson and Barings Bank, Iceland’s Kaupthing Bank, AIG, Lehman Brothers, and so on. The first economics prize was awarded to Michael Milken, one of the inventors of the junk bond. He was in prison at the time.

Fair game. Still, surely there is something in economics that is ludicrous on the surface yet thought-provoking underneath? (The entire discipline, you say? Very droll.)

Where is the award for Dean Karlan and Chris Udry? These two bold Yale professors wanted to figure out whether lack of access to crop insurance was damaging Ghana’s agricultural productivity, so they set up an insurance company, sold insurance to Ghanaian farmers, and accidentally got themselves on the hook for half a million dollars if it didn’t rain in Ghana. (Happy ending: it rained. Also, crop insurance is very helpful.)

Psychologists Bernhard Borges, Dan Goldstein, Andreas Ortmann and Gerd Gigerenzer found they could construct a market-beating portfolio of stocks by stopping people on street corners, showing them a list of company names, and asking which they recognised. Surely an Ig Nobel in finance?

Another psychologist, Dan Ariely, already shared an Ig Nobel prize for medicine in 2008 for demonstrating that expensive placebos work better than cheap placebos. But in recent research with Emir Kamenica and Drazen Prelec, described in Ariely’s forthcoming book Payoff, Ariely paid people to build robots out of Lego. The researchers’ aim: to examine the nature of the modern workplace by dismantling the Lego in front of their subjects’ eyes, to see if they could dishearten them. (They could.) An Ig Nobel in management beckons.

Richard Thaler deserves an Ig Nobel in economics for his long-running column Anomalies, in which he asked his fellow economists a series of questions that seem straightforward but are enormously difficult for economics to answer. Why do investment banks pay high wages even to the receptionists? Why do people overpay in auctions? Why are people often nice to each other? If the Ig Nobel committee wants to repeat the Andre Geim trick, it should hurry up: Thaler may well win the economics Nobel before his Ig Nobel can be awarded.

But my preferred candidate for an Ig Nobel prize in economics is Thomas Thwaites. A few years ago Thwaites set himself the simple-seeming task of replicating from scratch a cheap Argos toaster (retail price: £3.99). He smelted iron in a microwave, tried to produce plastic from potato starch, and generally made a colossal mess. His toaster cost £1,187.54, resembled a disastrously iced birthday cake and melted when plugged into the mains. (“A partial success,” says Thwaites.)

Thwaites’s toaster project thus tells us more about the brilliance and dizzying complexity of the interconnected global economy than any textbook could. He is a shoo-in for the economics Ig Nobel. Perplexingly, however, the Ig Nobel committee awarded him this year’s prize for biology instead after he attempted to live life dressed as a goat. How silly.

Written for and first published in the Financial Times.

My new book “Messy” is now out. If you like my writing, why not buy a copy? (US) (UK)

19 Oct 09:55

Mentirinhas #1055

by Fábio Coala


Intelligence is worth more than the sword, OR: the art of backing down and still walking away with a moral victory.

21 Oct 15:11

Unfair to Spiders

by Reza


21 Oct 12:58

Icelandic Cookbook

by Scandinavia and the World

20 Oct 15:53

Viva Intensamente # 281

by Will Tirando


– Wow! What a smart collar!

20 Oct 06:11

Comic for October 20, 2016

by Scott Adams
20 Oct 02:37


by Raphael Salimena

19 Oct 08:22

10/19/16 PHD comic: 'Abstract Art'

Piled Higher & Deeper by Jorge Cham
title: "Abstract Art" - originally published 10/19/2016

19 Oct 00:00

Future Archaeology

"The only link we've found between the two documents is that a fragment of the Noah one mentions Aaron's brother Moses parting an ocean. Is that right?" "... yes. Yes, exactly."
19 Oct 05:37

Comic for October 19, 2016

by Scott Adams
18 Oct 19:59


by Will Tirando


18 Oct 03:25

How the Ballpoint Pen Changed Handwriting - The Atlantic

by brandizzi
Adam Victor Brandizzi

Via Slate Star Codex.

Recently, Bic launched a campaign to “save handwriting.” Named “Fight for Your Write,” it includes a pledge to “encourage the act of handwriting” in the pledge-taker’s home and community, and emphasizes putting more of the company’s ballpoints into classrooms.

As a teacher, I couldn’t help but wonder how anyone could think there’s a shortage. I find ballpoint pens all over the place: on classroom floors, behind desks. Dozens of castaways collect in cups on every teacher’s desk. They’re so ubiquitous that the word “ballpoint” is rarely used; they’re just “pens.” But despite its popularity, the ballpoint pen is relatively new in the history of handwriting, and its influence on popular handwriting is more complicated than the Bic campaign would imply.

The creation story of the ballpoint pen tends to highlight a few key individuals, most notably the Hungarian journalist László Bíró, who is credited with inventing it. But as with most stories of individual genius, this take obscures a much longer history of iterative engineering and marketing successes. In fact, Bíró wasn’t the first to develop the idea: The ballpoint pen was originally patented in 1888 by an American leather tanner named John Loud, but his idea never went any further. Over the next few decades, dozens of other patents were issued for pens that used a ballpoint tip of some kind, but none of them made it to market.

These early pens failed not in their mechanical design, but in their choice of ink. The ink used in a fountain pen, the ballpoint’s predecessor, is thinner to facilitate better flow through the nib—but put that thinner ink inside a ballpoint pen, and you’ll end up with a leaky mess. Ink is where László Bíró, working with his chemist brother György, made the crucial changes: They experimented with thicker, quick-drying inks, starting with the ink used in newsprint presses. Eventually, they refined both the ink and the ball-tip design to create a pen that didn’t leak badly. (This was an era in which a pen could be a huge hit because it only leaked ink sometimes.)

The Bírós lived in a troubled time, however. The Hungarian author György Moldova writes in his book Ballpoint about László’s flight from Europe to Argentina to avoid Nazi persecution. While his business deals in Europe were in disarray, he patented the design in Argentina in 1943 and began production. His big break came later that year, when the British Royal Air Force, in search of a pen that would work at high altitudes, purchased 30,000 of them. Soon, patents were filed and sold to various companies in Europe and North America, and the ballpoint pen began to spread across the world.

Businessmen made significant fortunes by purchasing the rights to manufacture the ballpoint pen in their country, but one is especially noteworthy: Marcel Bich, the man who bought the patent rights in France. Bich didn’t just profit from the ballpoint; he won the race to make it cheap. When it first hit the market in 1946, a ballpoint pen sold for around $10, roughly equivalent to $100 today. Competition brought that price steadily down, but Bich’s design drove it into the ground. When the Bic Cristal hit American markets in 1959, the price was down to 19 cents a pen. Today the Cristal sells for about the same amount, despite inflation.

The ballpoint’s universal success has changed how most people experience ink. Its thicker ink was less likely to leak than that of its predecessors. For most purposes, this was a win—no more ink-stained shirts, no need for those stereotypically geeky pocket protectors. However, thicker ink also changes the physical experience of writing, not necessarily all for the better.

I wouldn’t have noticed the difference if it weren’t for my affection for unusual pens, which brought me to my first good fountain pen. A lifetime writing with the ballpoint and minor variations on the concept (gel pens, rollerballs) left me unprepared for how completely different a fountain pen would feel. Its thin ink immediately leaves a mark on paper with even the slightest, pressure-free touch to the surface. My writing suddenly grew extra lines, appearing between what used to be separate pen strokes. My hand, trained by the ballpoint, expected that lessening the pressure from the pen was enough to stop writing, but I found I had to lift it clear off the paper entirely. Once I started to adjust to this change, however, it felt like a godsend; a less-firm press on the page also meant less strain on my hand.

My fountain pen is a modern one, and probably not a great representation of the typical pens of the 1940s—but it still has some of the troubles that plagued the fountain pens and quills of old. I have to be careful where I rest my hand on the paper, or risk smudging my last still-wet line into an illegible blur. And since the thin ink flows more quickly, I have to refill the pen frequently. The ballpoint solved these problems, giving writers a long-lasting pen and a smudge-free paper for the low cost of some extra hand pressure.

As a teacher whose kids are usually working with numbers and computers, handwriting isn’t as immediate a concern to me as it is to many of my colleagues. But every so often I come across another story about the decline of handwriting. Inevitably, these articles focus on how writing has been supplanted by newer, digital forms of communication—typing, texting, Facebook, Snapchat. They discuss the loss of class time for handwriting practice that is instead devoted to typing lessons. Last year, a New York Times article—one that’s since been highlighted by Bic’s “Fight for Your Write” campaign—brought up an fMRI study suggesting that writing by hand may be better for kids’ learning than using a computer.

I can’t recall the last time I saw students passing actual paper notes in class, but I clearly remember students checking their phones (recently and often). In his history of handwriting, The Missing Ink, the author Philip Hensher recalls the moment he realized that he had no idea what his good friend’s handwriting looked like. “It never struck me as strange before… We could have gone on like this forever, hardly noticing that we had no need of handwriting anymore.”

No need of handwriting? Surely there must be some reason I keep finding pens everywhere.

Of course, the meaning of “handwriting” can vary. Handwriting romantics aren’t usually referring to any crude letterform created from pen and ink. They’re picturing the fluid, joined-up letters of the Palmer method, which dominated first- and second-grade pedagogy for much of the 20th century. (Or perhaps they’re longing for a past they never actually experienced, envisioning the sharply angled Spencerian script of the 1800s.) Despite the proliferation of handwriting eulogies, it seems that no one is really arguing against the fact that everyone still writes—we just tend to use unjoined print rather than a fluid Palmerian style, and we use it less often.

I have mixed feelings about this state of affairs. It pained me when I came across a student who was unable to read script handwriting at all. But my own writing morphed from Palmerian script into mostly print shortly after starting college. Like most gradual changes of habit, I can’t recall exactly why this happened, although I remember the change occurred at a time when I regularly had to copy down reams of notes for mathematics and engineering lectures.

In her book Teach Yourself Better Handwriting, the handwriting expert and type designer Rosemary Sassoon notes that “most of us need a flexible way of writing—fast, almost a scribble for ourselves to read, and progressively slower and more legible for other purposes.” Comparing unjoined print to joined writing, she points out that “separate letters can seldom be as fast as joined ones.” So if joined handwriting is supposed to be faster, why would I switch away from it at a time when I most needed to write quickly? Given the amount of time I spend on computers, it would be easy for an opinionated observer to count my handwriting as another victim of computer technology. But I knew script, I used it throughout high school, and I shifted away from it during the time when I was writing most.

My experience with fountain pens suggests a new answer. Perhaps it’s not digital technology that hindered my handwriting, but the technology that I was holding as I put pen to paper. Fountain pens want to connect letters. Ballpoint pens need to be convinced to write, need to be pushed into the paper rather than merely touch it. The No.2 pencils I used for math notes weren’t much of a break either, requiring pressure similar to that of a ballpoint pen.

Moreover, digital technology didn’t really take off until the fountain pen had already begun its decline, and the ballpoint its rise. The ballpoint became popular at roughly the same time as mainframe computers. Articles about the decline of handwriting date back to at least the 1960s—long after the typewriter, but a full decade before the rise of the home computer.

Sassoon’s analysis of how we’re taught to hold pens makes a much stronger case for the role of the ballpoint in the decline of cursive. She explains that the type of pen grip taught in contemporary grade school is the same grip that’s been used for generations, long before everyone wrote with ballpoints. However, writing with ballpoints and other modern pens requires that they be placed at a greater, more upright angle to the paper—a position that’s generally uncomfortable with a traditional pen hold. Even before computer keyboards turned so many people into carpal-tunnel sufferers, the ballpoint pen was already straining hands and wrists. Here’s Sassoon:

We must find ways of holding modern pens that will enable us to write without pain. …We also need to encourage efficient letters suited to modern pens. Unless we begin to do something sensible about both letters and penholds we will contribute more to the demise of handwriting than the coming of the computer has done.

I wonder how many other mundane skills, shaped to accommodate outmoded objects, persist beyond their utility. It’s not news to anyone that students used to write with fountain pens, but knowing this isn’t the same as the tactile experience of writing with one. Without that experience, it’s easy to continue past practice without stopping to notice that the action no longer fits the tool. Perhaps “saving handwriting” is less a matter of invoking blind nostalgia and more a process of examining the historical use of ordinary technologies as a way to understand contemporary ones. Otherwise we may not realize which habits are worth passing on, and which are vestiges of circumstances long since past.

18 Oct 07:02

Comic for October 18, 2016

by Scott Adams
17 Oct 20:31

More Cluster Fudge Here

05 Oct 20:46

In Defense of Mobile Homes

Consider the trailer park.

Or, better yet, reconsider the trailer park, whose stereotypical association with peeling paint and unemployed seniors is outdated. 

The quiet story of trailer parks over the last two decades is their reinvention as “mobile home communities” by investors who saw a lucrative opportunity in providing housing to low-income Americans. 

The billionaire investor and real estate mogul Sam Zell recently said of his investment fund that owns mobile home communities—some of which advertise amenities like pools and tennis clubs—that he doesn’t “know of any stock or property I’m involved in that has a better prospect.”

Since 2003, Warren Buffett has owned Clayton Homes, which builds houses destined for trailer parks across the country. At 1400 square feet, many of the homes don’t look like they were delivered on the back of a truck. 

Frank Rolfe, a Stanford graduate who teaches people how to profit in the mobile home industry, buys dilapidated trailer parks, cleans them up, and rents mobile homes to the working poor. A 2014 New York Times Magazine article reported that he and a partner earned a 25% return on their investment.

Trailer parks’ appeal to these investors is simple. Millions of Americans struggle with rent payments, but still want a lawn. For them, mobile homes are the cheapest form of housing available. At the same time, it’s rare for someone to build a new mobile home park, because no homeowner wants a trailer park nearby. An industry with healthy demand but a fixed supply attracts the country’s capitalists. 

These capitalists realized that trailer parks are an undervalued asset. But maximizing profits at a mobile home park isn’t pretty. It means taking advantage of the lack of supply and the expense of moving a mobile home to raise rents every year. These investors avoid states with rent control, and they’re attuned to just how much a family can pay without becoming insolvent.

But in their pursuit of profit, investors also dramatically increased the stock of well managed, affordable housing. And they’d create a lot more—at better prices—if America’s homeowners weren’t dead set against trailer parks. 

Building the Affordable House

During his presidency, Bill Clinton championed efforts to “steadily expand the dream of homeownership to all Americans.” George W. Bush, too, promoted an “ownership society.” Helping Americans to own their homes, he said, would “put light where there’s darkness.” 

Neither president could change the reality that many Americans cannot afford a house. The percentage of homeowners increased from 64% to a peak of 69% during their tenures. But the bump relied on no-money-down mortgages and irresponsible lending—the phenomenon satirized in The Big Short when the character played by Steve Carell tells a stripper in Florida that she really shouldn’t have five home loans. (In real life, it happened in Vegas.) Since the housing bubble burst, the homeownership rate has returned to 64%.

Why can’t we build affordable homes? America is full of gargantuan houses that are customized like each owner is a reality TV contestant. It sure seems like more people could afford homes if we made them smaller and built them more efficiently.

But according to Witold Rybczynski, an emeritus professor of architecture at the University of Pennsylvania, that is not the case. 

An old picture of Levittown, Pennsylvania, which features Levitt and Sons’ standardized, one-story homes.

Writing in The Wilson Quarterly, Rybczynski cites the famous case of the “Levittowns” built for newly married GIs after World War II. Levitt and Sons constructed houses like Henry Ford built cars. “Teams of workers performed repetitive tasks,” Rybczynski writes, and every house was a one-story with the kitchen and bathroom facing the street to reduce the length of the pipes. 

Each “Levittowner” was identical except for its paint job, and thanks to the cold efficiency of the process, they cost $9,900 in the 1950s. That’s around $90,000 in 2016 dollars. A new home today is more than twice as large and sells for $300,000.

So if we made ‘em like we did in the old days, could we slash the price? Rybczynski says no. 

The modern McMansion seems like an expensive, bespoke product. But its construction is actually a triumph of industrial efficiency. Windows, doors, and other parts arrive prefabricated, Rybczynski writes, so labor costs have actually halved since 1949. Levitt and Sons spent $4 to $5 per square foot building Levittowners, and, adjusted for inflation, builders today spend the same amount. 

Instead the problem is almost wholly that land is too expensive. Reduce the size of a new, modern house by 50%, Rybczynski notes, and houses in metropolitan areas will still cost over $200,000. 

That’s the secret to the extreme affordability of a mobile home—take land out of the equation.

The Extreme Profitability of Mobile Homes

Frank Rolfe is the landlord for tens of thousands of people. Along with a business partner and his backers in private equity, he co-owns mobile home communities in 16 states.

But, he explains, “a mobile home park is by definition a parking lot. Legally, our parks are no different from a parking lot by an airport.”

This is why used mobile homes only cost $10,000-$20,000. They make it possible for someone to buy a home but not the earth it’s parked on. As a Times profile of Rolfe reported, his average tenant pays $250 to $300 in monthly rent. If the tenant doesn’t own her home, she might pay another $200 or $300, with the option to apply half of that toward purchasing a mobile home.

“We’re the cheapest form of detached housing there is,” says Rolfe. “You can’t do cheaper.” 

Harvard's Joint Center for Housing Studies recently reported that one in five renters earns less than $15,000 a year. For this group, paying more than $400 a month in rent is unaffordable. Especially for a family, finding an apartment is tough, and renting a typical house may be impossible—the Harvard report notes that “single-family homes have among the highest median rents of any type of rental housing.”

Buying a mobile home also compares favorably with a mortgage: According to a 2015 U.S. Census survey, only 3.4% of Americans with a mortgage paid less than $600 in monthly housing costs.

The economics of mobile homes has attracted new residents. When Times reporter Gary Rivlin attended Rolfe’s class on the trailer park industry, which Rolfe calls Mobile Home University, a manager explained that his tenants were once all retirees. But the park was now full of two-income families “making minimum wage at Taco Bell.”

So why do investors like Rolfe and Warren Buffett see gold in these cash-strapped Americans?

When a business makes boatloads of money from poor customers, the answer can be unseemly if not illegal. In the subprime auto loan industry, executives offer low-income Americans car loans with such high fees that 1 in 4 customers default. Hidden fees are common, and when a customer fails to pay, companies still profit by repossessing and re-lending the car. The LA Times found that one Kia Optima was lent out and repossessed eight times in under three years. 

Warren Buffett’s mobile home construction business, Clayton Homes, has been accused of using a similar, exploitative business model. According to reporting by the Seattle Times and the Center for Public Integrity, Clayton Homes lent customers money to buy their mobile homes at above-average rates and unexpectedly added fees or changed the terms of the loan. When low income buyers fell behind on their mobile home payments, Clayton Homes profited by repossessing the house and re-selling it.

The situation at mobile home parks like Rolfe’s, however, looks more like the poor’s experience with financial services. Since their meager savings aren’t profitable for banks, they pay for services wealthier Americans get for free: a few dollars to cash a check, a few dollars to send money to an aunt. And since they can’t afford a mortgage, they pay more dearly for the land under their mobile home. 

As Rivlin recounts of his time at Rolfe’s Mobile Home University, he learned that “one of the best things about investing in trailer parks is that ambitious landlords can raise the rent year after year without losing tenants. The typical resident is more likely to endure the increase than pay a trucking company the $3,000 it can easily cost to move even a single-wide trailer to another park.” A 30% increase in rent “might sound steep,” but Rolfe says residents “can always pick up extra hours.” 

This realization has earned Rolfe, his partners, and his fellow mobile home moguls handsome returns: poor Americans could afford to pay much more for a taste of homeownership. 

American Dream; American Reality

Rolfe bought his first trailer park in 1996 from an owner who, he told the Times, “would open the door just wearing his underwear, totally hungover.”

You’ll find more Stanford graduates like Rolfe selling software than mobile homes. But Rolfe saw that the park had unrealized financial potential. His business partner, Dave Reynolds, had the same epiphany in 1993. This has now become a formula they teach at Mobile Home University: identify promising trailer parks and buy them from their mom-and-pop owners.

But when savvy investors like Rolfe and Reynolds buy a mobile home park, they do not just passively collect rent payments. “The parks they take over tend to be in lousy shape,” Rivlin writes, “and they spend hundreds of thousands of dollars fixing them up.” They also hire tough managers who pay equal attention to keeping the grass mowed and kicking out tenants who break the law. 

“There’s more money in decent than slumlording,” Rivlin quotes Rolfe telling a Mobile Home U class. He wants responsible tenants who will see the value in paying increasing rent payments. 

  A mobile home park in the UK. Copyright Clive Perrin and licensed for reuse.

The result is that their tenants have a safe place to live (and a lawn) at a low price point. Some parks are easily identified by the occasional beat up trailer, but others are easy to mistake for a cookie cutter neighborhood of small to mid-size houses. In his visits to Rolfe’s trailer parks, Rivlin was struck by how many tenants said they were satisfied with the landlord and pleased with their home. 

One resident said her only complaint was “the redneck jokes I’ve been hearing since the day I moved in.”


The phrase affordable housing evokes New York City’s projects and government subsidies. But as emeritus architecture professor Witold Rybczynski likes to point out, affordable housing “once meant commercially built houses that ordinary working people could afford.”

A profit motive is pushing Frank Rolfe and Dave Reynolds and Sam Zell and private equity partners to invest in turning rundown trailer parks into safe, decent, affordable housing.

It’s not pretty, but it gets results. In the Times, Rivlin noted that as of 2014, Reynolds and Rolfe’s 10,000 mobile home lots made them “about equivalent in size to a public-housing agency in a midsize city.” Sam Zell’s Equity Lifestyle Properties owns around 140,000 lots. 

They are responding to an unmet need. Limited government funding for affordable housing means that only one in four eligible households receive assistance. California’s governor has noted that it takes a $5 billion subsidy to provide 100 people with affordable housing in San Francisco, where units are distributed by lottery. And according to a Harvard housing report, public, affordable housing is (on average) in the worst shape of all of the country’s rental properties. 

But the market mechanism that drives investors to turn trailer parks into decent, affordable housing is half broken.

One reason Rolfe, Zell, and others invest in mobile home parks is that it’s nearly impossible to build a new one. “Cities fight tooth and nail,” says Rolfe. “In Louisiana, we drew a standing room only crowd [at a public hearing] to fight an expansion of twenty lots to our trailer park.” With residents and city officials blocking construction, investors like Rolfe can raise their tenants’ rents without worrying about a competitor popping up nearby.

Rolfe is the first to say that residents’ concerns about mobile home parks are “valid.” Trailer parks drag down property values, often due to the expectation that they’ll bring crime. (“It would be nice if television showed something positive happening in a mobile home park,” Rolfe says.) Since mobile homes are cheap and taxed like cars, their owners don’t pay much in taxes even as they send their children to the local school. 

“From a financial perspective,” says Rolfe, “it’s simple to see why cities don’t want mobile home parks.” 

But just because it’s understandable, that doesn’t make it a good idea. 

Few Americans would cop to wanting to live in a town without poor people, but that’s the effect of their actions when they oppose a trailer park or dense or affordable housing. And while residents may not want to subsidize newcomers, our tax code is progressive because rich people benefit when their taxes fund the school attended by a poor but brilliant young girl. Saying “not in my backyard” is also the exact reason that land and single-family homes are so expensive in the first place. 

California governor Jerry Brown recently proposed legislation that would limit the ability of locals to block the development of new (and denser) housing. But it’s unlikely someone will champion trailer parks—even though they’ll be immediately affordable. 

There are good reasons. Unlike a typical house, mobile homes decrease in value over time, so many are worth little once families pay off their loans. Nor does a home offer much security if you can be evicted from the land it sits on. It’s also not good politics to suggest Americans settle for housing that is synonymous with “trailer trash.”

That’s too bad. Clinton and Bush’s dream of every American owning a home ended in tragedy. But driven by profits, trailer park moguls are meeting America’s fast food workers and low-income retirees where they live. 

17 Oct 19:13

Like a Movie

by Reza


13 Oct 11:59

Editing Wikipedia for a decade: Gareth Owen – Wikimedia Blog

by brandizzi
Adam Victor Brandizzi

"The problem with Wikipedia is that it only works in practice; in theory it’s a total disaster." Good one :)


Photo via Gareth Owen

Gareth Owen is one of the earliest contributors to Wikipedia—his user ID is 151, and his first edit was to create the “Hobbits” article in March 2001.

He has seen Wikipedia grow “in just a couple of years from a sparse website to something where you could look something up with a reasonable chance of getting a non-terrible response,” as he describes it in his own words.

Owen, a native of North West England who now lives just outside Manchester in the United Kingdom, has been using the internet since he was a student in the early 90s. He discovered Wikipedia in its earliest days, when the site was just a “silly little spin-off” from a less-collaborative wiki called Nupedia. Owen was most active during Wikipedia’s first few years. Collaborating with people from different places on providing information to the public about topics of interest to him was his motivating force.

“I enjoyed doing research on my favourite topics,” Owen explains. “I enjoyed the collaborative process and watching people devote their time to something really worthwhile—essentially altruistically—and expecting little in return.”

So far, Owen has edited Wikipedia over 6,000 times and has created 113 new articles. He has been most interested in editing articles about music, sports and mathematics, his field of study. He has started some important articles about music, covering bands like The Beatles and The Velvet Underground, and he has rewritten articles about  Bob Dylan, Miles Davis, and The Rolling Stones.

The sports category on Wikipedia is rich with articles first written by Owen, such as Manchester United F.C. and Rugby World Cup; he also expanded the existing article on the Summer Olympic Games to include details about the history of the Olympic Games from the beginning until Sydney 2000. These are just a few examples.

“Everything was up for grabs back then and there was so much to be done,” Owen elaborated. “If you started working on an area, you could expand an entry with a few token sentences to something with a larger overview of a big subject. And if you did a good job, Larry Sanger or Jimmy Wales would add your article to the “Brilliant Prose” list, which was a pretty good feeling.”

Wikipedia’s quality standards have changed over the years. The “brilliant prose” selection system has since been replaced with new criteria for selecting Wikipedia’s best articles, which are now called “featured articles.” Some of Owen’s articles that were cited in the “brilliant prose” list in the early 2000s now appear as featured articles, thanks to the efforts of other Wikipedians. Some examples include The Beatles, Sandy Koufax and Babe Ruth.

During this time, Wikipedia was very quiet with a minimal rate of spam and edit wars. Owen remembers that “the rate of editing was slow enough that only a few people would keep an eye on anon edits and correct the most egregious damage manually. Jimmy Wales would arbitrate anything that ran on too long. Obviously that didn’t scale very well, and extra layers of administrative oversight came in by the mid-2000s.”

“The George W. Bush article was a battleground even then, but I shudder to think how much admin time has been devoted to trying to impose a neutral point of view on articles about the Clinton/Trump presidential race,” he adds.

A couple of years ago, Owen and his family were surprised by a show host who quoted Owen on BBC Radio 4. The show had been discussing Wikipedia when the host John Lloyd closed by saying, “In the words of an original Wikipedian, Gareth Owen, ‘The problem with Wikipedia is that it only works in practice; in theory it’s a total disaster.’” The quote has been used several times, including once by the New York Times.

“This remark has cropped up in a number of articles and features (sometimes credited to me, sometimes others). I wonder if in 100 years it’ll be the only trace of me left on the internet.”

Samir Elsharbaty, Digital Content Intern
Wikimedia Foundation

“The decade” is a new blog series that profiles Wikipedians who have spent ten years or more editing Wikipedia. If you know of a long-time editor who we should talk to, send us an email at blogteam[at]

17 Oct 07:01

Doing Stuff

by Doug

Dedicated to Barbara! Happy 30th!!

And here’s more productivity.

17 Oct 00:00

Spider Paleontology

Whenever you see a video of birds doing something weird, remember: Birds are a small subset of dinosaurs, so the weirdness of birds is a small subset of the weirdness of dinosaurs.

06 Oct 19:42

The Massacre at Monkey Hill

Photo from Malcolm Peaker


In 1932, Solly Zuckerman sat down to write a book about a baboon massacre at the London Zoo.

The carnage in question had started seven years earlier when the Zoological Society of London opened a new baboon exhibit. The enclosure, called “Monkey Hill,” was state-of-the-art: an open-air rock cliff with simian-friendly amenities designed to keep the resident primates happy and healthy.

But something had gone terribly wrong.

As soon as the first batch of hamadryas baboons were ushered onto the artificial rock face, war broke out. By the end of the decade, two-thirds had been killed.

The seven-year bloodbath was a hit with the viewing public, and the misdeeds of these murderous monkeys made a splash in papers on both sides of the Atlantic.

For Zuckerman, who dissected and studied animal carcasses at the zoo, the violence provided an important insight into primate behavior. Ape and monkey society is built on violence and sexual dominance, he theorized. Humanity’s closest relatives were creatures of chaos, lust, and slaughter. This insight launched Zuckerman’s career and defined the early scientific field of primatology.

For many, the theory also offered an appealing explanation for human behavior. In political science, psychiatry, and the popular press, the slaughter at Monkey Hill was taken as evidence of the inherent depravity of humanity.

There was just one problem: the science was bunk. Baboons do not routinely butcher one another in the wild, nor do all primates behave like hamadryas baboons.

Primatologists have long since dismissed Zuckerman’s theory of primate behavior. But for decades, the mistaken lessons of Monkey Hill defined primatology, permeated popular culture, and contributed to a widespread view of humanity as a species on the brink of mayhem.

The Benefits of Outdoor Living

Monkey Hill was supposed to save the baboons.

Until the 1920s, the London Zoo housed most of its animals indoors, in the dark, and behind bars. But the sickly, sad-looking creatures depressed the zoo visitors and dropped dead of preventable illnesses at alarming rates. The primates were of particular concern. Pneumonia, tuberculosis, and vitamin-D deficiency were the scourge of the Monkey House.

Taking a cue from German zookeepers, the London Zoological Society designed an outdoor enclosure called Monkey Hill in 1924—a place where primates could roam about and live as if they were in their native habitat. Well, nearly. The zookeepers used shelters, heaters, and UV-lights to combat the perennial gloom of fog-swaddled London.

As science writer Jonathan Burt explains, the open-air exhibit was “intended to showcase the benefits of outdoor living for animal health.” Plus, he adds, from behind the 12-foot outer wall that surrounded the enclosure, zoo visitors were afforded a much better show.

They got one.

In 1925, a shipment of ninety-seven hamadryas baboons arrived by boat from the Horn of Africa. They were all supposed to be male. As Jane Goodall’s biographer Dale Peterson writes, the zoo’s gender preference was “based on the idea that the males—big and dramatically fanged and caped, with pink buttocks—would appeal to a zoo-going public more than the smaller and less gaudy females.” But whether by neglect or indifference, six females were included in the shipment. The result was a bloodbath.

Once the exhibit opened, the males immediately went to war over access to the females. With the former outnumbering the latter 15-to-1, the competition was ferocious. Caught in a constant tug-of-war between dozens of brawny, sharp-fanged males, most of the females were killed over the coming months. Even so, the males fought over their bodies. The violence was sometimes so fierce that zookeepers had to wait days before they could scale Monkey Hill and retrieve a carcass. Within two years, nearly half of the baboons were dead.

A male hamadryas baboon. Photo by Sonja Pauen.

But rather than remove the remaining females from the enclosure, the Zoological Society of London thought that they could quell the violence by adding more. Just as the fighting began to simmer down in 1927, the zoo introduced 30 additional females and five adolescent males. The violence exploded anew.

By 1931, 64% of all the males and 92% of the females that had been brought to Monkey Hill had died. While some of the males had died of disease, virtually all of the females died violent deaths. Of the 15 infants born in the enclosure, only one managed to survive.

This all came as a shock to the zookeepers. Instead of a sanctuary, they had inadvertently created a fighting pit—a gladiator arena with sex. And there was a lot of sex. In 1929, one member of the Zoological Society worried that the constant baboon copulation—violent, polygamous, and occasionally necrophilic—would have a “demoralizing” effect on the public.

Just the opposite was true. The sex and violence on Monkey Hill was a draw for zoo visitors. The tabloid press in England took a prurient interest in the sorry state of the females on the hill, with one describing “meek, subdued, tractable creatures” who “live on the leavings and take life sadly.” In 1930, even Time magazine profiled the “wife-stealer” primates at the London Zoo.

The same year, the zoo superintendent finally removed the surviving females. It was a moral choice, but not a shrewd business decision: the violence dropped along with attendance at the exhibit. To at least one tabloid, the conclusion was obvious: “men are a peaceable race, once you eliminate the women.”

This was only the first of many flawed conclusions to be drawn from Monkey Hill.

Red in Claw and Fang

When Solly Zuckerman was hired by the London Zoo to dissect baboon carcasses, he came with experience. 

Born in South Africa, where baboons were regarded as a pest, he had spent his teenage years shooting the animals for bounty and dissecting their bodies for fun. This odd adolescent hobby notwithstanding, Zuckerman had very little experience observing how baboon troops behaved in the wild. He wasn’t alone. Very little was known about the social lives of nonhuman primates.

The exhibit at Monkey Hill gave Zuckerman an opportunity to watch a group of monkeys being monkeys. What he saw, both on the hill and the autopsy table, was a revelation. After a few years of observation, lab work, and a short stint in the field back in South Africa, Zuckerman published his grand theory on primate society: The Social Life of Monkeys and Apes.

According to Zuckerman, life in the jungle is nasty, brutish, and short. All “sub-human” primate relationships fit into strict, male-dominated hierarchies that are maintained through coercion, intimidation, and bloodshed. The only thing that prevents the chest-thumping testosterone fest from descending into absolute chaos is the prospect of copulation. Whenever conflict arises within a troop, females offer sex to paper things over. When that fails, the alpha males resort to murder. Such is the way not only of baboons, wrote Zuckerman, but all non-human primates. 

Male hamadryas baboons are considerably larger than the females. Photo by Ruben Undheim.

This conclusion was based on his observations at Monkey Hill, but it was also, according to Zuckerman, an inescapable result of primate biological characteristics. Males dominate females because they are bigger, and they keep females as permanent members of their troops because they are fertile year-round. “Reproductive physiology is the fundamental mechanism of society,” he wrote. Violent behavior is baked in.

The Social Life of Monkeys and Apes was published in 1932 (over the objections of the zoo management, who found Zuckerman’s lengthy descriptions of baboon sex unseemly). This Clockwork Orange view of ape and monkey behavior dominated the field of primatology for the next three decades.

To his credit, Zuckerman was careful to say that his theory only explained non-human primate behavior. But not everyone got the memo. Social theorists, journalists, and the public at large took Monkey Hill and Zuckerman’s grand theory as a description of humanity unencumbered by law and social mores.

In their political tract, Personal Aggressiveness and War, the English politician Evan Durbin and psychologist John Bowlby used the story of Monkey Hill as evidence that human beings had a natural propensity towards war. “Fighting is infectious in the highest degree,” they wrote in 1938. “It is one of the most dangerous parts of our animal inheritance.”

This must have been a compelling conclusion at the time. This was the same year that Germany annexed Austria, laying the groundwork for the Second World War. 

In the following decades, scientists came to doubt Zuckerman’s theory. But in popular culture, the idea stuck around. 

In 1957, the popular science magazine New Scientist wrote of the “sadistic society of the totalitarian monkeys” on Monkey Hill and drew a parallel to the mutiny on the Bounty. This was the infamous maritime rebellion in which men, adrift upon the lawless sea, seized the HMS Bounty and settled as outlaws in the South Pacific. Some 30 men took up residence on Pitcairn Island in particular.

“In 10 years all the men had been killed except one,” the article explained. “The principal difference between Pitcairn Island and Monkey Hill seemed to be that the abnormal social conditions resulted more lethally for the males in the human colony and for females in the sub-human colony.”

The public’s infatuation with this theory of the violent primate came at an odd time. As the primatologists Shirley Strum and William Mitchell wrote in the 1980s, “just as specialists were abandoning [Zuckerman’s] baboon model, the popular press and nonspecialists interested in interpreting human evolution adopted and championed that view of primate society.”

These nonspecialists included writers like Robert Ardrey, who began championing the so-called “killer ape theory”—the idea that the evolutionary success of Homo sapiens was the result of our inherent aggression. Thus, 2001: A Space Odyssey depicts the “dawn of man” as an early hominid learning how to use a weapon and beat a cow skeleton to bits in a fit of rage.

The idea that the violence at Monkey Hill reflected our natural state contributed not only to a grim view of human nature, but of the natural world as a whole.

“The hamadryas colony at the London Zoo reinforced a widespread belief in an unconstrained Darwinian struggle for existence,” write Carl Sagan and Ann Druyan in their 1993 book, Shadows of Forgotten Ancestors. “Many people felt that they had now glimpsed Nature in the raw, a brutal Nature, red in claw and fang, a Nature from which we humans are insulated and protected by our civilized institutions and sensibilities.”

There but for the grace of God, law, and authority go we to Monkey Hill.

Lab versus Field 

But, of course, this was all based on bad science.

As anyone who has watched nature documentaries on bonobos can tell you, not all primates are patriarchal killing machines. And even the male-centric hamadryas baboons only live up to Zuckerman’s hyper-violent description in certain, artificial circumstances.

Circumstances like Monkey Hill. 

First, there was the issue of space. A typical troop of 100 baboons in the scrublands of Ethiopia will range over an area of roughly 50,000 square meters. The enclosure at the London Zoo was a little over 500 square meters, nearly one-hundredth the size.

Beyond the crowding, the exhibit’s extreme gender lopsidedness was far from anything that might be observed in the wild. Hamadryas baboons are the only primate species besides gorillas that maintain harems: family units that consist of a single male and up to ten females and infants. These harems form together into clans. Over time, these clans establish a clear hierarchy of dominance. If that hierarchy is violated, the culprit will be severely punished by the various males.

But the London Zoo had penned nearly one hundred males with no prior social ties together with half a dozen unfortunate females.

In short, trying to generalize about primate behavior based on Monkey Hill would be like trying to learn about human nature by watching a prison riot.

Still, Zuckerman’s grand theory of primate behavior had staying power because it tapped into a key bias within the biological sciences: work in the lab outranked research in the field. 

When other primatologists, like Clarence Ray Carpenter, offered contradictory accounts of monkeys in Panama notably not murdering one another with abandon, much of the research community looked down upon these observations as second-class science. How could observations made by a sweaty, exhausted, mosquito-bitten field worker compare to the conclusions of a sober-minded scientist in the lab?

Solly Zuckerman, 1943.

It wasn’t until the early 1960s that Solly Zuckerman’s theoretical superstructure finally came toppling down.

By then, Zuckerman had been knighted, had served as chief scientific adviser to the Ministry of Defense (alongside primate anatomy, he took an interest in ballistics), and was serving as Secretary of the London Zoological Society.

As an elder zoologist, he was particularly dismissive of a young “amateur” scientist named Jane Goodall. With few credentials to her name, Goodall had spent months observing chimpanzees at Gombe National Park in Tanzania. This was an “act of radical immersion” that was unheard of in the primatology community, writes Goodall’s biographer, Dale Peterson. Returning from East Africa, she published findings in which she observed that chimpanzees, unlike their baboon cousins, did not form harems. More generally, they didn’t seem to behave as the violent patriarchs Zuckerman insisted they were.

Though Zuckerman never admitted that he was wrong, in the words of Dale Peterson, this showed that primate life was not solely a “masculine melodrama on the themes of sex and violence.” Through field observations, Goodall and a new generation of primatologists showed that primate behavior is complex, varied, and heavily influenced by environment. It was, in other words, immune to the sweeping generalizations put forward by Zuckerman.

The end of the Monkey Hill era, writes Peterson, marked “the debut of primatology as a modern science.” 

The Moral of Monkey Hill

Humans are suckers for a good animal allegory.

When Ivan Pavlov learned that he could trick his dog into salivating at the sound of a bell, “Pavlov’s dog” became cultural shorthand for the mechanical naiveté of human behavior.

When B.F. Skinner discovered that birds could be taught to bob their heads in a certain way to receive food, “Skinner’s pigeons” likewise became a reference for the universality of human superstition. 

And when animal behaviorist John Calhoun created an overpopulated mouse enclosure at the National Institute of Mental Health—and watched as the rodents descended into a “behavioral sink” of asexualism, cannibalism, and violence—John Calhoun’s “rats of NIMH” became a modern parable about overcrowding in urban centers.

As Eric Michael Johnson writes in Scientific American, Monkey Hill is now its own kind of parable.

The case of Zuckerman and the baboons of London, he writes, is “a zoological case study that reveals the danger of embracing a faulty assumption about ‘natural’ behavior.” The baboons on Monkey Hill were not emblematic of all apes and monkeys. They certainly weren’t emblematic of humans. Trapped, overcrowded, and placed within an unnatural social environment, they weren’t even emblematic of their own species.

If there is a lesson to be learned from the story of Monkey Hill, it is that we are too eager to learn lessons from the animal kingdom.

17 Oct 02:19

The cheapness of campaigns

by Jose Roberto de Toledo
Adam Victor Brandizzi

The changes in spending were quite curious, I'd say.

The end of corporate campaign financing worked. The total cost of the election is not yet known, because the final campaign-finance reports are still to come, but campaigns got cheaper. They may have cost less than half, perhaps even a third, of what the 2012 campaigns cost. And is that good? The less need there is to ask companies for money, the fewer the opportunities for corruption.

Aside from the partners of one or another local construction firm, no big contractors were seen among the 2016 donors, for example. The share of public financing increased, it's true. Small online donations from individuals bordered on the negligible, that's also true. But the weight, and therefore the influence, of corporate money on the ballot box shrank considerably.

On the spending side, electoral marketing got cheaper across the board. Campaign strategists had to work on more campaigns to earn a fraction of what they used to receive. Services and equipment were also slimmed down.

In 2012, campaigns bragged about using the RED – the camera Hollywood uses to shoot feature films – at a rental cost of R$7,500 per day. In 2016, the star was the DJI Osmo, a portable camera with image stabilization for moving shots that uses the photographer's phone as a screen to view the footage. Price: R$2,500. Not to rent it, but to buy it.

The professionals changed too. Until 2012, directors of commercial advertising films were the most sought after to film candidates in the most populous cities for their appearances in electoral broadcasts and TV spots. In 2016, their price became too high for the campaigns. Wedding videographers, who charge a tenth as much, entered the scene. It made no visible difference, not even at the ballot box. Several "grooms" ended up elected.

On the revenue side, the end of corporate donations was even more important. There was nothing like the electoral barbecue thrown by JBS in 2014. The meatpacking giant donated more than R$350 million in that election and helped elect more than 180 federal deputies – by far the largest bloc elected to the Chamber. No universal donor stood out in 2016. Only the parties.

Party leaderships exerted greater influence over the outcome of the election by deciding which candidates would be favored with more funds and which would be left to fend for themselves. Strategies varied from party to party.

The PRB, chaired by a bishop on leave from the Universal Church of the Kingdom of God, concentrated the release of funds in its national directorate. Nearly R$50 million went out, half of it from the Party Fund – that is, public money. Marcelo Crivella, who leads the runoff polls in Rio de Janeiro, received the most. Right behind him was Celso Russomanno, who finished third in the São Paulo mayoral race.

The PMDB, by contrast, spread its funds among hundreds of local directorates, as expected. Unlike the centralized PRB, the PMDB is a confederation of state parties that operates like a condominium, where decisions, including financial ones, depend on agreement among its many chieftains.

Self-financed candidates, like João Doria (PSDB), came out ahead, because the law is still flawed in many respects. Rich candidates can take as much as they want out of their own pockets and put it into their campaigns. That is still a form of corporate financing; the difference is that the company belongs to the candidate.

Another flaw is that there is no absolute cap on individual donations, only a cap proportional to the donor's income. The richer the donor, the more he can give and the more influence he can exert on the campaign.

Even so, it was better than the free-for-all of corporate donations. That is why it is risky when politicians start talking about electoral reform. Who knows what contraband will be smuggled in through it.

17 Oct 08:49

Comic for October 17, 2016

by Scott Adams
15 Oct 11:02

Operations for software developers for beginners

I work as a software developer. A few years ago I had no idea what “operations” was. I had never met anybody whose job it was to operate software. What does that even mean? Now I know a tiny bit more about it so I want to write down what I’ve figured out.

operations: what even is it?

I made up these 3 stages of operating software. These are stages of understanding about operations I am going through.

Stage 1: your software just works. It’s fine.

You’re a software developer. You are running software on computers. When you write your software, it generally works okay – you write tests, you make sure it works on localhost, you push it to production, everything is fine. You’re a good programmer!

Sometimes you push code with bugs to production. Someone tells you about the bugs, you fix them, it’s not a big deal.

I used to work on projects which hardly anyone used. It wasn’t a big deal if there was a small bug! I had no idea what operations was and it didn’t matter too much.

Stage 2: omg anything can break at any time this is impossible

You’re running a site with a lot of traffic. One day, you decide to upgrade your database over the weekend. You have a bad weekend. Charity writes a blog post saying you should have spent more than 3 days on a database upgrade.

I think if in my “what even is operations” state somebody had told me “julia!! your site needs to be up 99.95% of the time” I would have just hidden under the bed.

Like, how can you make sure your site is up 99.95% of the time? ANYTHING CAN HAPPEN. You could have spikes in traffic, or some random piece of critical software you use could just stop working one day, or your database could catch fire. And what if I need to upgrade my database? How do I even do that safely? HELP.

I definitely went from “operations is trivial, whatever, how hard can keeping a site up be?” to “OMG THIS IS IMPOSSIBLE HOW DOES ANYONE EVER DO THIS”.

Stage 2.5: learn to be scared

I think learning to be scared is a really important skill – you should be worried about upgrading a database safely, or about upgrading the version of Ruby you’re using in production. These are dangerous changes!

But you can’t just stop at being scared – you need to learn to have a healthy concern about complex parts of your system, then learn how to take the appropriate precautionary steps, and then confidently make the upgrade or deploy the big change or whatever it is you’re appropriately scared of.

If you stop here then you just end up using a super-old Ruby version for 4 years because you were too scared to upgrade it. That is no good either!

Stage 3: keeping your site up is possible

So, it turns out that there is a huge body of knowledge about keeping your site up!

There are people who, when you show them a large complicated software system running on thousands or tens of thousands of computers and tell them “hey, this needs to be up 99.9% of the time”, will say “yep, that is a normal problem I have worked on! Here’s the first step we can take!”

These people sometimes have the job title “operations engineer” or “SRE” or “devops engineer” or “software engineer” or “system administrator”. Like all things, it’s a skillset that you can learn, not a magical innate quality.

Charity is one of these people! That blog post (“The Accidental DBA”) I linked to before has a bunch of extremely practical advice about how to upgrade a database safely. If you’re running a database and you’re scared – you’re right! But you can learn about how to upgrade it from someone like Charity and then it will go a lot better.

getting started with operations

So, we’ve convinced ourselves that operations is important.

Last year I was on a team that had some software. It mostly ran okay, but infrequently it would stop working or get super slow. There were a bunch of different reasons it had problems! And it wasn’t a disaster, but it also wasn’t as awesome as we wanted it to be.

For me this was a really cool way to get a little bit better at operations! I worked on making the service faster and more reliable. And it worked! I made a couple of good improvements, and I was happy.

Some stuff that helped:

  • work on a dashboard for the service that clearly shows its current state (this is surprisingly hard!)
  • move some complicated code that did a lot of database operations into a separate webservice so we could easily time it out if something went wrong (see the sketch after this list)
  • do some profiling and remove some unnecessarily slow code
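
The timeout part of that is easy to sketch. Something like the following, where the service URL and deadline are made-up placeholders for illustration:

    # Hypothetical sketch: the slow, database-heavy code path now lives
    # behind its own webservice, so the caller can enforce a deadline on it.
    import urllib.request

    def fetch_recommendations(user_id, timeout_seconds=0.5):
        url = "http://recommendation-service.internal/users/%d" % user_id
        try:
            with urllib.request.urlopen(url, timeout=timeout_seconds) as response:
                return response.read()
        except OSError:
            # The service is slow or down: degrade gracefully instead of
            # letting it take the whole request down with it.
            return None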

The coolest part of this, though, is that a much more experienced SRE later came in to work with the team on making the same service operate better, and I got to see what he did and what his process for improving things looked like!

It’s really helped me to realize that you don’t turn into a Magical Operations Person overnight. Instead, I can take whatever I’m working on right now, and make small improvements to make it operate better! That makes me a better programmer.

you can make operations part of your job

As an industry, we used to have “software development” teams who wrote code and threw it over the wall to “operations teams” who ran that code. I feel like we’ve collectively decided that we want a different model (“devops”) – that we should have teams who both write code and know how to operate it. And there are a lot of details of how exactly that works (do you have “SRE”s?).

But as an individual software engineer, what does that mean for you? I thiiink it means that you get to LEARN COOL STUFF. You can learn about how to deploy changes safely, and observe what your code is doing. And then when something has gone wrong in production, you’ll both understand what the code is doing (because you wrote it!!) and you’ll have the skills to figure it out and systematically prevent it in the future (because you are better at operations!).

I have a lot more to say about this (how I really love being a generalist, how doing some operations work has been an awesome way to improve my debugging skills and my ability to reason about complex systems and plan how to build complicated software). And I need to write in the future about super useful Ideas For Operating Software Safely I’ve learned about (like dark reads and circuit breakers). But I’m going to stop here for now. If you want more reading, The Ops Identity Crisis is a good post about software developers doing operations, from the point of view of an ops person.
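
Since “circuit breakers” is easy to gloss over: the idea is that when a dependency keeps failing, you stop calling it for a while and fail fast instead of letting slow requests pile up. A minimal sketch of the shape of it (the thresholds are made up, and a real implementation needs more care):

    import time

    class CircuitBreaker:
        def __init__(self, max_failures=5, reset_after_seconds=30):
            self.max_failures = max_failures
            self.reset_after = reset_after_seconds
            self.failures = 0
            self.opened_at = None

        def call(self, func, *args, **kwargs):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                # Enough time has passed; let one request through to probe.
                self.opened_at = None
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()  # trip the breaker
                raise
            self.failures = 0
            return result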

This is my favorite paragraph from Charity’s “WTF is operations?” blog post (which you should just go read instead of reading me):

The best software engineers I know are the ones who consistently value the impact and lifecycle of the code they ship, and value deployment and instrumentation and observability. In other words, they rock at ops stuff.

16 Oct 05:21

Comic for October 16, 2016

by Scott Adams
16 Oct 08:06

Programming books you might want to consider reading

There are a lot of “12 CS books every programmer must read” lists floating around out there. That’s nonsense. The field is too broad for almost any topic to be required reading for all programmers, and even if a topic is that important, people’s learning preferences differ too much for any book on that topic to be the best book on the topic for all people.

This is a list of topics and books where I’ve read the book, am familiar enough with the topic to say what you might get out of learning more about the topic, and have read other books and can say why you’d want to read one book over another.

Algorithmic game theory / auction theory / mechanism design

Why should you care? Some of the world’s biggest tech companies run on ad revenue, and those ads are sold through auctions. This field explains how and why they work. Additionally, this material is useful any time you’re trying to figure out how to design systems that allocate resources effectively.1

In particular, incentive compatible mechanism design (roughly, how to create systems that provide globally optimal outcomes when people behave in their own selfish best interest) should be required reading for anyone who designs internal incentive systems at companies. If you’ve ever worked at a large company that “gets” this and one that doesn’t, you’ll see that the company that doesn’t get it has giant piles of money that are basically being lit on fire because the people who set up incentives created systems that are hugely wasteful. This field gives you the background to understand what sorts of mechanisms give you what sorts of outcomes; reading case studies gives you a very long (and entertaining) list of mistakes that can cost millions or even billions of dollars.
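
To make “incentive compatible” concrete, the simplest example is the sealed-bid second-price (Vickrey) auction: the winner pays the runner-up’s bid, which makes bidding your true value a dominant strategy – shading your bid can only lose you profitable wins, never lower the price you pay. A toy sketch:

    # Toy second-price (Vickrey) auction: highest bidder wins, but pays the
    # second-highest bid.
    def second_price_auction(bids):
        """bids: dict mapping bidder name -> bid amount."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner, _ = ranked[0]
        price = ranked[1][1] if len(ranked) > 1 else 0
        return winner, price

    winner, price = second_price_auction({"alice": 10, "bob": 7, "carol": 4})
    assert winner == "alice" and price == 7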

Krishna; Auction Theory

The last time I looked, this was the only game in town for a comprehensive, modern introduction to auction theory. Covers the classic second price auction result in the first chapter, and then moves on to cover risk aversion, bidding rings, interdependent values, multiple auctions, asymmetric information, and other real-world issues.

Relatively dry. Unlikely to be motivating unless you’re already interested in the topic. Requires an understanding of basic probability and calculus.

Steiglitz; Snipers, Shills, and Sharks: eBay and Human Behavior

Seems designed as an entertaining introduction to auction theory for the layperson. Requires no mathematical background and relegates math to the small print. Covers maybe 1/10th of the material of Krishna, if that. Fun read.

Cramton, Shoham, and Steinberg; Combinatorial Auctions

Discusses things like how FCC spectrum auctions got to be the way they are and how “bugs” in mechanism design can leave hundreds of millions or billions of dollars on the table. This is one of those books where each chapter is by a different author. Despite that, it still manages to be coherent and I didn’t mind reading it straight through. It’s self-contained enough that you could probably read this without reading Krishna first, but I wouldn’t recommend it.

Shoham and Leyton-Brown; Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations

The title is the worst thing about this book. Otherwise, it’s a nice introduction to algorithmic game theory. The book covers basic game theory, auction theory, and other classic topics that CS folks might not already know, and then covers the intersection of CS with these topics. Assumes no particular background in the topic.

Nisan, Roughgarden, Tardos, and Vazirani; Algorithmic Game Theory

A survey of various results in algorithmic game theory. Requires a fair amount of background (consider reading Shoham and Leyton-Brown first). For example, chapter five is basically Devanur, Papadimitriou, Saberi, and Vazirani’s JACM paper, Market Equilibrium via a Primal-Dual Algorithm for a Convex Program, with a bit more motivation and some related problems thrown in. The exposition is good and the result is interesting (if you’re into that kind of thing), but it’s not necessarily what you want if you want to read a book straight through and get an introduction to the field.

Algorithms / Data Structures / Complexity

Why should you care? Well, there’s the pragmatic argument: even if you never use this stuff in your job, most of the best paying companies will quiz you on this stuff in interviews. On the non-bullshit side of things, I find algorithms to be useful in the same way I find math to be useful. The probability of any particular algorithm being useful for any particular problem is low, but having a general picture of what kinds of problems are solved problems, what kinds of problems are intractable, and when approximations will be effective, is often useful.

McDowell; Cracking the Coding Interview

Some problems and solutions, with explanations, matching the level of questions you see in entry-level interviews at Google, Facebook, Microsoft, etc. I usually recommend this book to people who want to pass interviews but not really learn about algorithms. It has just enough to get by, but doesn’t really teach you the why behind anything. If you want to actually learn about algorithms and data structures, see below.

Dasgupta, Papadimitriou, and Vazirani; Algorithms

Everything about this book seems perfect to me. It breaks up algorithms into classes (e.g., divide and conquer or greedy), and teaches you how to recognize what kind of algorithm should be used to solve a particular problem. It has a good selection of topics for an intro book, it’s the right length to read over a few weekends, and it has exercises that are appropriate for an intro book. Additionally, it has sub-questions in the middle of chapters to make you reflect on non-obvious ideas to make sure you don’t miss anything.
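
As an example of the classify-then-solve approach (my example, not the book’s): once you recognize that interval scheduling admits a greedy solution, the code is nearly trivial.

    # Greedy interval scheduling: to fit the maximum number of
    # non-overlapping intervals, repeatedly take the one that finishes
    # earliest. The hard part is recognizing that greedy is safe here.
    def max_nonoverlapping(intervals):
        """intervals: list of (start, end) pairs."""
        chosen = []
        last_end = float("-inf")
        for start, end in sorted(intervals, key=lambda iv: iv[1]):
            if start >= last_end:
                chosen.append((start, end))
                last_end = end
        return chosen

    assert len(max_nonoverlapping([(1, 3), (2, 4), (3, 5)])) == 2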

I know some folks don’t like it because it’s relatively math-y/proof focused. If that’s you, you’ll probably prefer Skiena.

Skiena; The Algorithm Design Manual

The longer, more comprehensive, more practical, less math-y version of Dasgupta. It’s similar in that it attempts to teach you how to identify problems, use the correct algorithm, and give a clear explanation of the algorithm. The book is well motivated with “war stories” that show the impact of algorithms in real-world programming.

CLRS; Introduction to Algorithms

This book somehow manages to make it into half of these “N books all programmers must read” lists despite being so comprehensive and rigorous that almost no practitioners actually read the entire thing. It’s great as a textbook for an algorithms class, where you get a selection of topics. As a class textbook, it’s a nice bonus that it has exercises that are hard enough that they can be used for graduate level classes (about half the exercises from my grad level algorithms class were pulled from CLRS, and the other half were from Kleinberg & Tardos), but this is wildly impractical as a standalone introduction for most people.

Just for example, there’s an entire chapter on Van Emde Boas trees. They’re really neat – it’s a little surprising that a balanced-tree-like structure with O(lg lg n) insert, delete, as well as find, successor, and predecessor is possible, but a first introduction to algorithms shouldn’t include Van Emde Boas trees.

Kleinberg & Tardos; Algorithm Design

Same comments as for CLRS – it’s widely recommended as an introductory book even though it doesn’t make sense as an introductory book. Personally, I found the exposition in Kleinberg to be much easier to follow than in CLRS, but plenty of people find the opposite.

Demaine; Advanced Data Structures

This is a set of lectures and notes and not a book, but if you want a coherent (but not intractably comprehensive) set of material on data structures that you’re unlikely to see in most undergraduate courses, this is great. The notes aren’t designed to be standalone, so you’ll want to watch the videos if you haven’t already seen this material.

Okasaki; Purely Functional Data Structures

Fun to work through, but, unlike the other algorithms and data structures books, I’ve yet to be able to apply anything from this book to a problem domain where performance really matters.

For a couple years after I read this, when someone would tell me that it’s not that hard to reason about the performance of purely functional lazy data structures, I’d ask them about part of a proof that stumped me in this book. I’m not talking about some obscure super hard exercise, either. I’m talking about something that’s in the main body of the text that was considered too obvious to the author to explain. No one could explain it. Reasoning about this kind of thing is harder than people often claim.

Dominus; Higher Order Perl

A gentle introduction to functional programming that happens to use Perl. You could probably work through this book just as easily in Python or Ruby.

If you keep up with what’s trendy, this book might seem a bit dated today, but only because so many of the ideas have become mainstream. If you’re wondering why you should care about this “functional programming” thing people keep talking about, and some of the slogans you hear don’t speak to you or are even off-putting (types are propositions, it’s great because it’s math, etc.), give this book a chance.
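
As a taste of the book’s style: one of its recurring techniques is memoization, implemented as a function that builds functions (it gets a whole chapter). The same idea in Python:

    # A memoizer as a higher-order function: it takes a function and returns
    # a new function that remembers previous results.
    def memoize(func):
        cache = {}
        def wrapped(*args):
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]
        return wrapped

    @memoize
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    assert fib(50) == 12586269025  # instant, instead of exponential time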

Levitin; Algorithms

I ordered this off amazon after seeing these two blurbs: “Other learning-enhancement features include chapter summaries, hints to the exercises, and a detailed solution manual.” and “Student learning is further supported by exercise hints and chapter summaries.” One of these blurbs is even printed on the book itself, but after getting the book, the only self-study resources I could find were some yahoo answers posts asking where you could find hints or solutions.

I ended up picking up Dasgupta instead, which was available off an author’s website for free.

Mitzenmacher & Upfal; Probability and Computing: Randomized Algorithms and Probabilistic Analysis

I’ve probably gotten more mileage out of this than out of any other algorithms book. A lot of randomized algorithms are trivial to port to other applications and can simplify things a lot.

The text has enough of an intro to probability that you don’t need to have any probability background. Also, the material on tail bounds (e.g., Chernoff bounds) is useful for a lot of CS theory proofs and isn’t covered in the intro probability texts I’ve seen.
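
For a taste of what those tail bounds look like, here is one standard form of the Chernoff bound (roughly the form Mitzenmacher & Upfal give; stated from memory, so check the book for the exact conditions):

    % X = X_1 + ... + X_n, independent 0/1 random variables,
    % mu = E[X], 0 < \delta < 1:
    \Pr[X \ge (1+\delta)\mu] \le e^{-\delta^2 \mu / 3}
    \qquad
    \Pr[X \le (1-\delta)\mu] \le e^{-\delta^2 \mu / 2}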

Sipser; Introduction to the Theory of Computation

Classic intro to theory of computation. Turing machines, etc. Proofs are often given at an intuitive, “proof sketch”, level of detail. A lot of important results (e.g., Rice’s Theorem) are pushed into the exercises, so you really have to do the key exercises. Unfortunately, most of the key exercises don’t have solutions, so you can’t check your work.

For something with a more modern topic selection, maybe see Arora & Barak.

Bernhardt; Computation

Covers a few theory of computation highlights. The explanations are delightful and I’ve watched some of the videos more than once just to watch Bernhardt explain things. Targeted at a general programmer audience with no background in CS.

Kearns & Vazirani; An Introduction to Computational Learning Theory

Classic, but dated and riddled with errors, with no errata available. When I wanted to learn this material, I ended up cobbling together notes from a couple of courses, one by Klivans and one by Blum.

Operating Systems

Why should you care? Having a bit of knowledge about operating systems can save days or weeks of debugging time. This is a regular theme on Julia Evans’s blog, and I’ve found the same thing to be true of my experience. I’m hard pressed to think of anyone who builds practical systems and knows a bit about operating systems who hasn’t found their operating systems knowledge to be a time saver. However, there’s a bias in who reads operating systems books – it tends to be people who do related work! It’s possible you won’t get the same thing out of reading these if you do really high-level stuff.

Silberschatz, Galvin, and Gagne; Operating System Concepts

This was what we used at Wisconsin before the comet book became standard. I guess it’s ok. It covers concepts at a high level and hits the major points, but it’s lacking in technical depth, details on how things work, advanced topics, and clear exposition.

BTW, I’ve heard very good things about the comet book. I just can’t say much about it since I haven’t read it.

Cox, Kaashoek, and Morris; xv6

This book is great! It explains how you can actually implement things in a real system, and it comes with its own implementation of an OS that you can play with. By design, the authors favor simple implementations over optimized ones, so the algorithms and data structures used are often quite different than what you see in production systems.

This book goes well when paired with a book that talks about how more modern operating systems work, like Love’s Linux Kernel Development or Russinovich’s Windows Internals.

Love; Linux Kernel Development

The title can be a bit misleading – this is basically a book about how the Linux kernel works: how things fit together, what algorithms and data structures are used, etc. I read the 2nd edition, which is now quite dated. The 3rd edition has some updates, but introduced some errors and inconsistencies, and is still dated (it was published in 2010, and covers 2.6.34). Even so, it’s a nice introduction into how a relatively modern operating system works.

The other downside of this book is that the author loses all objectivity any time Linux and Windows are compared. Basically every time they’re compared, the author says that Linux has clearly and incontrovertibly made the right choice and that Windows is doing something stupid. On balance, I prefer Linux to Windows, but there are a number of areas where Windows is superior, as well as areas where there’s parity but Windows was ahead for years. You’ll never find out what they are from this book, though.

Russinovich, Solomon, and Ionescu; Windows Internals

The most comprehensive book about how a modern operating system works. It just happens to be about Windows. Coming from a *nix background, I found this interesting to read just to see the differences.

This is definitely not an intro book, and you should have some knowledge of operating systems before reading this. If you’re going to buy a physical copy of this book, you might want to wait until the 7th edition is released (early in 2017).

Downey; The Little Book of Semaphores

Takes a topic that’s normally one or two sections in an operating systems textbook and turns it into its own 300 page book. The book is a series of exercises, a bit like The Little Schemer, but with more exposition. It starts by explaining what a semaphore is, and then has a series of exercises that build up higher-level concurrency primitives.

This book was very helpful when I first started to write threading/concurrency code. I subscribe to the Butler Lampson school of concurrency, which is to say that I prefer to have all the concurrency-related code stuffed into a black box that someone else writes. But sometimes you’re stuck writing the black box, and if so, this book has a nice introduction to the style of thinking required to write maybe possibly not totally wrong concurrent code.
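
To give a flavor of the exercises: an early one asks you to build a barrier out of semaphores, so that no thread proceeds until all n threads arrive. Here’s the classic (single-use) solution, sketched in Python rather than the book’s pseudocode:

    import threading

    n = 4
    count = 0
    mutex = threading.Semaphore(1)    # protects count
    barrier = threading.Semaphore(0)  # closed until everyone arrives

    def worker(i):
        global count
        # ... do work before the rendezvous ...
        with mutex:
            count += 1
            if count == n:
                barrier.release()  # last one in opens the turnstile
        barrier.acquire()          # wait for everyone
        barrier.release()          # turnstile: let the next thread through
        # ... all n threads reach this point together ...

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads: t.start()
    for t in threads: t.join()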

I wish someone would write a book in this style, but both lower level and higher level. I’d love to see exercises like this, but starting with instruction-level primitives for a couple different architectures with different memory models (say, x86 and Alpha) instead of semaphores. If I’m writing grungy low-level threading code today, I’m overwhelmingly likely to be using C++11 threading primitives, so I’d like something that uses those instead of semaphores, which I might have used if I was writing threading code against the Win32 API. But since that book doesn’t exist, this seems like the next best thing.

I’ve heard that Doug Lea’s Concurrent Programming in Java is also quite good, but I’ve only taken a quick look at it.

Computer architecture

Why should you care? The specific facts and trivia you’ll learn will be useful when you’re doing low-level performance optimizations, but the real value is learning how to reason about tradeoffs between performance and other factors, whether that’s power, cost, size, weight, or something else.

In theory, that kind of reasoning should be taught regardless of specialization, but my experience is that comp arch folks are much more likely to “get” that kind of reasoning and do back of the envelope calculations that will save them from throwing away a 2x or 10x (or 100x) factor in performance for no reason. This sounds obvious, but I can think of multiple production systems at large companies that are giving up 10x to 100x in performance which are operating at a scale where even a 2x difference in performance could pay a VP’s salary – all because people didn’t think through the performance implications of their design.
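
To make that concrete, here’s the kind of back-of-the-envelope calculation I mean, with deliberately made-up numbers:

    # Back-of-the-envelope sketch, all numbers hypothetical: a design that
    # costs you 10x in performance means 10x the machines for the same load.
    machines_needed = 100           # fleet size for the efficient design
    cost_per_machine_year = 3000.0  # all-in $/machine/year
    slowdown = 10                   # factor thrown away by the design

    waste = (slowdown - 1) * machines_needed * cost_per_machine_year
    print(f"annual waste: ${waste:,.0f}")  # annual waste: $2,700,000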

Hennessy & Patterson; Computer Architecture: A Quantitative Approach

This book teaches you how to do systems design with multiple constraints (e.g., performance, TCO, and power) and how to reason about tradeoffs. It happens to mostly do so using microprocessors and supercomputers as examples.

New editions of this book have substantive additions and you really want the latest version. For example, the latest version added, among other things, a chapter on data center design that answers questions like: how much opex/capex is spent on power, power distribution, and cooling versus support staff and machines; what’s the effect of using lower-power machines on tail latency and result quality (Bing search results are used as an example); and what other factors you should consider when designing a data center.

Assumes some background, but that background is presented in the appendices (which are available online for free).

Shen & Lipasti; Modern Processor Design

Presents most of what you need to know to architect a high performance Pentium Pro (1995) era microprocessor. That’s no mean feat, considering the complexity involved in such a processor. Additionally, presents some more advanced ideas and bounds on how much parallelism can be extracted from various workloads (and how you might go about doing such a calculation). Has an unusually large section on value prediction, because the authors invented the concept and it was still hot when the first edition was published.

For pure CPU architecture, this is probably the best book available.

Hill, Jouppi, and Sohi; Readings in Computer Architecture

Read for historical reasons and to see how much better we’ve gotten at explaining things. For example, compare Amdahl’s paper on Amdahl’s law (two pages, with a single non-obvious graph presented, and no formulas), vs. the presentation in a modern textbook (one paragraph, one formula, and maybe one graph to clarify, although it’s usually clear enough that no extra graph is needed).
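
(For reference, the entire modern presentation fits in one line: if a fraction p of the work is sped up by a factor s, then

    % Amdahl's law: p = fraction of execution time sped up, s = its speedup.
    \text{speedup} = \frac{1}{(1 - p) + p/s}

and even as s goes to infinity, the overall speedup is capped at 1/(1-p).)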

This seems to be worse the further back you go; since comp arch is a relatively young field, nothing here is really hard to understand. If you want to see a dramatic example of how we’ve gotten better at explaining things, compare Maxwell’s original paper on Maxwell’s equations to a modern treatment of the same material. Fun if you like history, but a bit of a slog if you’re just trying to learn something.


Misc

Beyer, Jones, Petoff, and Murphy; Site Reliability Engineering

A description of how Google handles operations. Has the typical Google tone, which is off-putting to a lot of folks with a “traditional” ops background, and assumes that many things can only be done with the SRE model when they can, in fact, be done without going full SRE.

For a much longer description, see this 22 page set of notes on Google’s SRE book.

Fowler, Beck, Brant, Opdyke, and Roberts; Refactoring

At the time I read it, it was worth the price of admission for the section on code smells alone. But this book has been so successful that the ideas of refactoring and code smells have become mainstream.

Steve Yegge has a great pitch for this book:

When I read this book for the first time, in October 2003, I felt this horrid cold feeling, the way you might feel if you just realized you’ve been coming to work for 5 years with your pants down around your ankles. I asked around casually the next day: “Yeah, uh, you’ve read that, um, Refactoring book, of course, right? Ha, ha, I only ask because I read it a very long time ago, not just now, of course.” Only 1 person of 20 I surveyed had read it. Thank goodness all of us had our pants down, not just me.

If you’re a relatively experienced engineer, you’ll recognize 80% or more of the techniques in the book as things you’ve already figured out and started doing out of habit. But it gives them all names and discusses their pros and cons objectively, which I found very useful. And it debunked two or three practices that I had cherished since my earliest days as a programmer. Don’t comment your code? Local variables are the root of all evil? Is this guy a madman? Read it and decide for yourself!

DeMarco & Lister; Peopleware

This book seemed convincing when I read it in college. It even had all sorts of studies backing up what they said. No deadlines is better than having deadlines. Offices are better than cubicles. Basically all devs I talk to agree with this stuff.

But virtually every successful company is run the opposite way. Even Microsoft is remodeling buildings from individual offices to open plan layouts. Could it be that all of this stuff just doesn’t matter that much? If it really is that important, how come companies that are true believers, like Fog Creek, aren’t running roughshod over their competitors?

This book agrees with my biases and I’d love for this book to be right, but the meta evidence makes me want to re-read this with a critical eye and look up primary sources.

Drummond; Renegades of the Empire

This is the story of the development of DirectX. It also explains how Microsoft’s aggressive culture got to be the way it is today. The intro reads:

Microsoft didn’t necessarily hire clones of Gates (although there were plenty on the corporate campus) so much as recruit those who shared some of Gates’s more notable traits – arrogance, aggressiveness, and high intelligence.

Gates is infamous for ridiculing someone’s idea as “stupid”, or worse, “random”, just to see how he or she defends a position. This hostile managerial technique invariably spread through the chain of command and created a culture of conflict.

Microsoft nurtures a Darwinian order where resources are often plundered and hoarded for power, wealth, and prestige. A manager who leaves on vacation might return to find his turf raided by a rival and his project put under a different command or canceled altogether.

On interviewing at Microsoft:

“What do you like about Microsoft?” “Bill kicks ass”, St. John said. “I like kicking ass. I enjoy the feeling of killing competitors and dominating markets”.

St. John was hired, and he could do no wrong for years. This book tells the story of him and a few others like him. Read this book if you’re considering a job at Microsoft. I wish I’d read this before joining and not after!

Math

Why should you care? From a pure ROI perspective, I doubt learning math is “worth it” for 99% of jobs out there. AFAICT, I use math more often than most programmers, and I don’t use it all that often. But having the right math background sometimes comes in handy and I really enjoy learning math. YMMV.

Bertsekas; Introduction to Probability

Introductory undergrad text that tends towards intuitive explanations over epsilon-delta rigor. For anyone who cares to do more rigorous derivations, there are some exercises at the back of the book that go into more detail.

Has many exercises with available solutions, making this a good text for self-study.

Ross; A First Course in Probability

This is one of those books where they regularly crank out new editions to make students pay for new copies of the book (this is presently priced at a whopping $174 on Amazon)2. This was the standard text when I took probability at Wisconsin, and I literally cannot think of a single person who found it helpful. Avoid.

Brualdi; Introductory Combinatorics

Brualdi is a great lecturer, one of the best I had in undergrad, but this book was full of errors and not particularly clear. There have been two new editions since I used this book, but according to the Amazon reviews the book still has a lot of errors.

For an alternate introductory text, I’ve heard good things about Camina & Lewis’s book, but I haven’t read it myself. Also, Lovasz is a great book on combinatorics, but it’s not exactly introductory.

Apostol; Calculus

Volume 1 covers what you’d expect in a calculus I + calculus II book. Volume 2 covers linear algebra and multivariable calculus. It covers linear algebra before multivariable calculus, which makes multivariable calculus a lot easier to understand.

It also makes a lot of sense from a programming standpoint, since a lot of the value I get out of calculus is its applications to approximations, etc., and that’s a lot clearer when taught in this sequence.
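
A typical example of what I mean by approximations – the first-order Taylor expansion you meet in Volume 1:

    % First-order Taylor approximation around x:
    f(x + h) \approx f(x) + f'(x)\,h
    % e.g., \sqrt{1 + h} \approx 1 + h/2 for small h, the workhorse behind
    % quick numerical estimates.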

This book is probably a rough intro if you don’t have a professor or TA to help you along. The Springer SUMS series tends to be pretty good for self-study introductions to various areas, but I haven’t read their intro calculus book so I can’t actually recommend it.

Stewart; Calculus

Another one of those books where they crank out new editions with trivial changes to make money. This was the standard text for non-honors calculus at Wisconsin, and the result was that I ended up teaching a lot of people to do complex integrals using the methods covered in Apostol, which are much more intuitive to many folks.

This book takes the approach that, for a type of problem, you should pattern match to one of many possible formulas and then apply the formula. Apostol is more about teaching you a few tricks and some intuition that you can apply to a wide variety of problems. I’m not sure why you’d buy this unless you were required to for some class.

Hardware basics

Why should you care? People often claim that, to be a good programmer, you have to understand every abstraction you use. That’s nonsense. Modern computing is too complicated for any human to have a real full-stack understanding of what’s going on. In fact, one reason modern computing can accomplish what it does is that it’s possible to be productive without having a deep understanding of much of the stack that sits below the level you’re operating at.

That being said, if you’re curious about what sits below software, here are a few books that will get you started.

Nisan & Schocken; nand2tetris

If you only want to read one single thing, this should probably be it. It’s a “101” level intro that goes down to gates and boolean logic. As implied by the name, it takes you from NAND gates to a working tetris program.
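
The premise is easy to demo: NAND is universal, so everything else can be built from it. Here are the first few steps of that progression, sketched in Python rather than the course’s hardware description language:

    # NAND is universal: every other boolean gate can be built from it.
    def nand(a, b):
        return 0 if (a and b) else 1

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor_(a, b): return and_(or_(a, b), nand(a, b))

    # Check XOR's truth table: 0^0=0, 0^1=1, 1^0=1, 1^1=0.
    assert [xor_(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]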

Roth; Fundamentals of Logic Design

Much more detail on gates and logic design than you’ll see in nand2tetris. The book is full of exercises and appears to be designed to work for self-study. Note that the link above is to the 5th edition. There are newer editions, but they don’t seem to be much improved, they have a lot of errors in the new material, and they are much more expensive.

Weste, Harris, and Banerjee; CMOS VLSI Design

One level below boolean gates, you get to VLSI, a historical acronym (very large scale integration) that doesn’t really have any meaning today.

Broader and deeper than the alternatives, with clear exposition. Explores the design space (e.g., the section on adders doesn’t just mention a few different types in an ad hoc way, it explores all the tradeoffs you can make). Also, it has both problems and solutions, which makes it great for self-study.

Kang & Leblebici; CMOS Digital Integrated Circuits

This was the standard text at Wisconsin way back in the day. It was hard enough to follow that the TA basically re-explained pretty much everything necessary for the projects and the exams. I find that it’s ok as a reference, but it wasn’t a great book to learn from.

Compared to Kang & Leblebici, Weste et al. spend a lot more effort talking about tradeoffs in design (e.g., when creating a parallel prefix tree adder, what does it really mean to be at some particular point in the design space?).

Pierret; Semiconductor Device Fundamentals

One level below VLSI, you have how transistors actually work.

Really beautiful explanation of solid state devices. The text nails the fundamentals of what you need to know to really understand this stuff (e.g., band diagrams), and then uses those fundamentals along with clear explanations to give you a good mental model of how different types of junctions and devices work.

Streetman & Banerjee; Solid State Electronic Devices

Covers the same material as Pierret, but seems to substitute mathematical formulas for the intuitive understanding that Pierret goes for.

Ida; Engineering Electromagnetics

One level below transistors, you have electromagnetics.

Two to three times thicker than other intro texts because it has more worked examples and diagrams. Breaks things down into types of problems and subproblems, making things easy to follow. For self-study, it’s a much gentler introduction than Griffiths or Purcell.

Shanley; Pentium Pro and Pentium II System Architecture

Unlike the other books in this section, this book is about practice instead of theory. It’s a bit like Windows Internals, in that it goes into the details of a real, working, system. Topics include hardware bus protocols, how I/O actually works (e.g., APIC), etc.

The problem with a practical introduction is that there’s been an exponential increase in complexity ever since the 8080. The further back you go, the easier it is to understand the most important moving parts in the system, and the more irrelevant the knowledge. This book seems like an ok compromise in that the bus and I/O protocols had to handle multiprocessors, and many of the elements that are in modern systems were in these systems, just in a simpler form.

Not covered

Of the books that I’ve liked, I’d say this captures at most 25% of the software books and 5% of the hardware books. On average, the books that have been left off the list are more specialized. This list is also missing many entire topic areas, like PL, practical books on how to learn languages, networking, etc.

The reasons for leaving off topic areas vary; I don’t have any PL books listed because I don’t read PL books. I don’t have any networking books because, although I’ve read a couple, I don’t know enough about the area to really say how useful the books are. The vast majority of hardware books aren’t included because they cover material that you wouldn’t care about unless you were a specialist (e.g., Skew-Tolerant Circuit Design or Ultrafast Optics). The same goes for areas like math and CS theory, where I left off a number of books that I think are great but have basically zero probability of being useful in my day-to-day programming life, e.g., Extremal Combinatorics. I also didn’t include books I didn’t read all or most of, unless I stopped because the book was atrocious. This means that I don’t list classics I haven’t finished like SICP and The Little Schemer, since those books seem fine and I just didn’t finish them for one reason or another.

This list also doesn’t include books on history and culture, like Inside Intel or Masters of Doom. I’ll probably add a category for those at some point, but I’ve been trying an experiment where I try to write more like Julia Evans (stream of consciousness, fewer or no drafts). I’d have to go back and re-read the books I read 10+ years ago to write meaningful comments, which doesn’t exactly fit with the experiment. On that note, since this list is from memory and I got rid of almost all of my books a couple years ago, I’m probably forgetting a lot of books that I meant to add.

If you liked this, you might also like this list of programming blogs, which is written in a similar style.

  1. Also, if you play boardgames, auction theory explains why fixing game imbalance via an auction mechanism is non-trivial and often makes the game worse. [return]
  2. I talked to the author of one of these books. He griped that the used book market destroys revenue from textbooks after a couple years, and that authors don’t get much in royalties, so you have to charge a lot of money and keep producing new editions every couple of years to make money. That griping goes double in cases where a new author picks up a classic book that someone else originally wrote, since the original author often has a much larger share of the royalties than the new author, despite doing no work on the later editions. [return]