Shared posts

09 Jul 15:28

They Called them Computers.

I've told this story before; but it bears repeating in more detail because of some facts I've recently learned.

The year was probably 1967. I was 15 years old, and a freshman in high school. I was sitting in the lunch room watching a man roll in a machine that looked, to me, like the helm of the Starship Enterprise.

The machine was called ECP-18. It was the size and shape of an office desk. Indeed, it was a desk. You could sit in front of it, and it had a large flat writing surface. Protruding up from that surface was a console -- a fascinating array of dozens and dozens of buttons and lights arranged in parallel rows. Indeed, the buttons were lights. I watched, enthralled, as the man pushed those buttons making them light up in a kaleidoscope of patterns.

I was already a computer geek by this time. I had taught myself binary math, and knew the binary representation of the octal digits by heart. I knew boolean algebra, and had been constructing logic circuits from old telephone relays in my basement. I had read book after book on computers; and had an inkling of what a computer language, like Basic, was. But I had never touched a real computer until this day.

I watched as the man toggled in the bootstrap loader. I didn't know it was the bootstrap loader at the time. He didn't tell me what he was doing. I was the annoying kid looking too closely over his shoulder, intently staring at every gesture, listening raptly to every word. I'm sure I weirded him out.

He would push the button/lights in the row marked address and mutter under his breath: "at address two-zero-zero". I could see the octal 0200 in the lights! I knew what he was saying! Egad! I understood!

He would push the button/lights in the row marked Memory Buffer, and mutter under his breath: "Store in two one five". The row of lights read 150215. I saw the 0215 at the end, and inferred that the leading '15' meant store. OH! Those numbers are instructions stored in the memory!

Bit by bit, as I watched that man toggle in diagnostic programs, and execute them, I picked up enough knowledge to formulate a working hypothesis about how this machine worked. When he left for the day, I leapt to the machine and toggled in some simple programs. And, by God, I got them to work!

I cannot recall a moment in my life when I have felt quite so victorious as that moment, when I first touched a real computer, and got that real computer to do my bidding. I was going to be a programmer. At that moment, I knew it in my bones. And from that moment on virtually every thought and every action in my young life was directed towards that goal.

My exposure to that machine was short lived. A week later it was wheeled away, and I never saw it again. But the course of my life had been set.


Judy Allen

I repeated that story so I could tell you this.

A few months ago a man named Tom Polk wrote to me. His letter said:

Hi, Bob, Tom Polk here. I Googled ECP-18 and found you. Below is a picture of a couple of my classmates in Big Spring, Texas around 1968 or so. You can see our school's ECP-18 in it. I'm sorry I don't have a bigger picture, but you'll recognize this. I learned to program on this machine and I really enjoyed your comments.

Tom and I struck up a conversation, and from him I learned much more about that machine, and the people behind it. One thing he said struck a special chord in me:

If your sales tech was a small woman, it was probably the lady who wrote this. She mentions her 40% interest in the company and selling it to a high profile educational company in Texas.

No, the tech was male. I recall nothing more about him. My focus was on those lights, and on his voice, and on his fingers pushing those buttons. But I am absolutely certain he was male.

And it is that certainty that shall be the topic of the rest of this blog.

Go read Judy Allen's story at the link Tom shared above. Read three or four of her short chapters, perhaps as far as "The Perkin's Pub Protest". It won't take you long. You'll know when you can return here. But I warn you, you won't want to stop; and you'll almost certainly return later to her fascinating and inspiring story.

Whoa!

OK, pardon my language, but there's no other way to say this.

This woman was Bad ASS! She wrote a M_____ F____ two-pass assembler, in binary machine language, in 1024 words, in a couple of months, using nothing but the front panel switches, and a 10 character per second teletype with a paper tape reader/punch -- while taking care of a household, a husband, and a gaggle of young kids. God Damn! I dare you to try that!

And she wasn't just a good/great/radical programmer either. Did you read the part where she faced down the Union bullies? Did you read the part where she faced down the executive who couldn't imagine paying a man's salary to a woman? Did you read the Perkin's Pub Story?

No, this was no ordinary woman. Judy Allen was a force to be reckoned with.

But lest you think she was unique, lest you think she was a fluke, let me tell you about my early career.

Almost 50-50

I got my first job as a programmer in 1970 when I was 18 years old. I was hired to help write a large on-line time-sharing union accounting system on a Varian 620i minicomputer. This was a 16 bit machine with 32K x 16 of Core, and two 16 bit registers. Our team consisted of three teenage boys, two men, and three women. That was a pretty typical ratio of men to women in this company. Just under half the programmers were women who were writing COBOL, BAL, PL/1, and minicomputer assembler.

After that time, I watched the ratio of women to men in software plunge. By 1972 I was working for a large company, and the ratio had already dropped to less than 10%. By 1977 the ratio was virtually zero. The women had simply disappeared.

Ironically, in the '60s it was common for women to be programmers. Indeed, in the early '60s there could very well have been more female programmers than male programmers. And the reason for this was simple: To some extent programming was considered to be Women's Work.

Women's Work

Men were engineers. They conceived of machines and built them with their hands. They wielded the creative energies. The drudgery of tabulating and calculating was left to women.

This is a tradition that goes all the way back to Charles Babbage, and probably beyond. Ada Lovelace had been commissioned to translate into English the lecture notes written in French by an Italian student of Babbage's. A drudgery if there ever was one. And yet, in so doing, she conceived of and documented the notion of programming Babbage's machine with algorithms that could deal with non-mathematical topics. She may have been the first person to realize that a computer manipulates symbols, not numbers. She may have been the first person to understand what that implies.

In the 1880s, women were commonly recruited, but seldom paid, to do the painstaking measurements and calculations required by the male scientists of the day. This was especially true in Astronomy. Charles Pickering assembled a rather large squadron of such women (They were called: Pickering's Harem) to analyze the huge quantity of photographic plates being produced by the telescopes.

'Pickering's Computers' standing in front of Building C at the Harvard College Observatory, 13 May 1913.

The women were not allowed to touch the telescopes. That was Man's Work. But the women could do the drudgery of the computations. Computing was Women's Work. Indeed, they called the women: Computers.

Those "computers" did fantastic work. It was the deeply insightful work of Women like Annie Jump Cannon, and Henrietta Levitt, that allowed us to understand and measure nothing less than the scale and composition of the universe.

Bletchley Park

The tradition continued through the early part of the twentieth century, and into the '30s and '40s. At Bletchley Park, where Alan Turing and his team were breaking the German Enigma ciphers, there were perhaps 10,000 people, two thirds of whom were women. Teams of these women were gathered together to do the vital, but painstaking, work of listening, gathering, and collating the German messages. They learned to operate and "program" the machines that Turing and his team had built.

The Women of Bletchley Park

Grace Hopper

After the second World War, this role for women continued. Grace Hopper, for example, worked in the Navy programming the Mark I computer. Later she was hired by EMCC to work on the software for the UNIVAC. In 1952 she conceived of, and wrote, the very first compiler, and coined the term. She was the first Director of Automatic Programming at Remington Rand, and was the visionary behind the development of COBOL.

Grace Murray Hopper at the UNIVAC keyboard, c. 1960 (Uploaded by Jan Arkesteijn)

There were other women in software in those days. And all these women did fantastic, ground-breaking, work. And yet it was Women's Work. It was considered appropriate that Women should deal with the drudgery of programming. Men built the machines!

The Revelation of Symbols.

When did it change? When did programming become Man's work? When did Men invade this traditional role for Women, and drive the women out?

I think there's a clue in one of Grace Hopper's statements. She had developed the very first compiler but she later said:

"Nobody believed that, I had a running compiler and nobody would touch it. They told me computers could only do arithmetic."

Only Arithmetic? They didn't understand, did they? They didn't see the implications. They didn't see what a computer really was.

  • Ada Lovelace had seen it. She had seen that Babbage's engine, and engines like it, were not merely calculators. Ada Lovelace saw that these machines could manipulate symbols.
  • Alan Turing certainly saw it. Indeed, he could be said to have given the notion its mathematical foundation.
  • Grace Hopper saw it for sure. She implemented it by writing the very first compiler.
  • Judy Allen saw it. She wrote a symbolic assembler in binary.
  • Heinlein saw it. The Moon Is a Harsh Mistress was as clear as a clarion.
  • Arthur C. Clarke and Stanley Kubrick saw it. Boy, did they ever.
  • Gene Roddenberry saw it. And he popularized it in a way that nobody else ever had.
  • And I saw it, as I stared over that male technician's shoulder.

These machines manipulated symbols. And if you can manipulate symbols, you can do anything! If you can manipulate symbols, you have power!

Power.

Power has a gender, and that gender is Male.

Judy?

That's when it shifted. At least that's my hypothesis. In the late '60s, and early '70s the popular culture began to see computers as more than just calculators. On Star Trek, Captain Kirk could talk to the computer. In 2001: A Space Odyssey, a computer was simultaneously sympathetic and malevolent. As a society we were beginning to see that computers had capabilities that were almost boundless. And it was becoming clear that it was the software, more than the hardware, from which those boundless possibilities arose.

The change was in the perception of who had the power. And in the late '60s we were realizing that it was the programmers, and not the hardware developers, who would wield the power.

When programmers became powerful, programming became Man's Work.

I felt that power. I knew it for what it was, while looking over that male technician's shoulder, watching him push those buttons. I felt that power, and I knew that I would have that power. I absorbed the power from him as I watched, and learned.

It was a man whose shoulder I was looking over, wasn't it? It had to be! You couldn't absorb that kind of power from a woman, could you? It couldn't have been a woman, it couldn't have been -- Judy Allen, -- could it?

Oh Christ Almighty, could it?

09 Jul 15:08

OO vs FP

A friend of mine posted the following on Facebook. He meant it as a troll; and it worked, because it irked me.

There are many programmers who have said similar things over the years. They consider Object Orientation and Functional Programming to be mutually exclusive forms of programming. From their ivory towers in the clouds some FP super-beings occasionally look down on the poor naive OO programmers and cluck their tongues.

That clucking is echoed by the OO super-beings in their ivory towers, who look askance at the waste and parentheses pollution of functional languages.

These views are based on a deep ignorance of what OO and FP really are.

Let me make a few points:

  • OO is not about state.

    Objects are not data structures. Objects may use data structures; but the manner in which those data structures are used or contained is hidden. This is why data fields are private. From the outside looking in you cannot see any state. All you can see are functions. Therefore Objects are about functions not about state.

    When objects are used as data structures it is a design smell; and it always has been. When tools like Hibernate call themselves object-relational mappers, they are incorrect. ORMs don't map relational data to objects; they map relational data to data structures. Those data structures are not objects.

    Objects are bags of functions, not bags of data. (See the sketch just after this list.)

  • Functional Programs, like OO Programs, are composed of functions that operate on data.

    Every functional program ever written is composed of a set of functions that operate on data. Every OO program ever written is composed of a set of functions that operate on data.

    It is common for OO programmers to define OO as functions and data bound together. This is true so far as it goes; but then it has always been true irrespective of OO. All programs are functions bound to data.

    You might protest and suggest that it is the manner of that binding that matters. But think about it. That's silly. Is there really so much difference between f(o), o.f(), and (f o)? Are we really saying that the difference is just about the syntax of a function call?[0]
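
To make the first point concrete, here's a minimal C# sketch (the names are mine, purely for illustration). The first class is a data structure wearing an object's clothing; the second is an object: its state is invisible from the outside, and all you can reach are functions.

// A data structure: the state is exposed, and any behavior lives somewhere else.
public class AccountRecord
{
    public decimal Balance;
}

// An object: the state is private and cannot be seen from the outside.
// Callers can only invoke functions.
public class Account
{
    private decimal balance;                              // hidden; no caller can depend on it

    public void Deposit(decimal amount) { balance += amount; }
    public bool CanWithdraw(decimal amount) { return amount <= balance; }
}

And whether a caller then writes account.Deposit(10) or Deposit(account, 10) is a detail of call syntax; the substance is that Account exposes nothing but functions.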

The Differences

So what are the differences between OO and FP? What does OO have that FP doesn't, and what does FP have that OO doesn't?

  • FP imposes discipline upon assignment.

    A true functional programming language has no assignment operator. You cannot change the state of a variable. Indeed, the word "variable" is a misnomer in a functional language because you cannot vary them.

    Yes, I know, Functional Programmers often say hi-falutin' things like "Functions are first-class objects in functional languages." That may be true; but functions are first-class objects in Smalltalk, and Smalltalk is an OO language, not a functional language.

    The overriding difference between a functional language and a non-functional language is that functional languages don't have assignment statements.[1]

    Does this mean that you can never change the state of something in a functional language? No, not at all. Functional languages generally offer ceremonies that you can perform in order to change the state of something. F# allows you to declare "mutable variables" for example. Clojure allows you to create special, uh, objects whose values can be changed using various magic incantations.

    The point is that a functional language imposes some kind of ceremony or discipline on changes of state. You have to jump through the right hoops in order to do it.

    And so, for the most part, you don't.

  • OO imposes discipline on function pointers.

    "Huh?" you say. But that, in fact, is what OO comes down to. For all the hi-falutin' rhetoric about OO and "real-world objects" and "programming closer to the way we think", what OO really comes down to is that OO languages replace function pointers with convenient polymorphism. [2]

    How do you implement polymorphism? You implement it with function pointers. OO languages simply do that implementation for you, and hide the function pointers from you. This is nice because function pointers are very difficult to manage well. Trying to write polymorphic code with function pointers (as in C) depends on complex and inconvenient conventions that everyone must follow in every case. This is usually unrealistic.

    In Java, every function you call is polymorphic. There is no way you can call a function that is not polymorphic. And that means that every Java function is called indirectly through a pointer to a function.[3]

    If you wanted polymorphism in C, you'd have to manage those pointers yourself; and that's hard. If you wanted polymorphism in Lisp you'd have to manage those pointers yourself (pass them in as arguments to some higher level algorithm, which, by the way, IS the Strategy pattern). But in an OO language, those pointers are managed for you. The language takes care to initialize them, and marshal them, and call all the functions through them. (The sketch just after this list shows both styles.)
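
Here's a small C# sketch of that claim (the names are illustrative, not from any real codebase). The first Total is the C/Lisp style: the caller hands in the "function pointer" (a delegate) and the algorithm calls through it, which is the Strategy pattern done by hand. The second lets the language manage the pointer behind an interface.

using System;
using System.Collections.Generic;
using System.Linq;

public interface IDiscountPolicy
{
    decimal Apply(decimal price);
}

public static class Pricing
{
    // Strategy by hand: the caller passes a "function pointer" (a delegate) explicitly.
    public static decimal Total(IEnumerable<decimal> prices, Func<decimal, decimal> applyDiscount)
    {
        return prices.Sum(applyDiscount);           // the algorithm calls through the pointer
    }

    // Strategy by polymorphism: the language manages the indirection for you.
    public static decimal Total(IEnumerable<decimal> prices, IDiscountPolicy policy)
    {
        return prices.Sum(p => policy.Apply(p));    // each call to Apply is dispatched indirectly
    }
}

Either way there is an indirect call; the only question is who manages the pointer.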

Mutually Exclusive?

Are these two disciplines mutually exclusive? Can you have a language that imposes discipline on both assignment and pointers to functions? Of course you can. These two things don't have anything to do with each other. And that means that OO and FP are not mutually exclusive at all. It means that you can write OO-Functional programs.

It also means that all the design principles, and design patterns, used by OO programmers can be used by functional programmers if they care to accept the discipline that OO imposes on their pointers to functions.

But why would a functional programmer do that? What benefit does polymorphism have that normal Functional Programs don't have? By the same token, what benefit would OO programmers gain from imposing discipline on assignment?

Benefits of Polymorphism.

There really is only one benefit to Polymorphism; but it's a big one. It is the inversion of source code and run time dependencies.

In most software systems when one function calls another, the runtime dependency and the source code dependency point in the same direction. The calling module depends on the called module. However, when polymorphism is injected between the two there is an inversion of the source code dependency. The calling module still depends on the called module at run time. However, the source code of the calling module does not depend upon the source code of the called module. Rather both modules depend upon a polymorphic interface.

This inversion allows the called module to act like a plugin. Indeed, this is how all plugins work.

Plugin architectures are very robust because stable high value business rules can be kept from depending upon volatile low value modules such as user interfaces and databases.

The net result is that in order to be robust a system must employ polymorphism across significant architectural boundaries.
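
As a sketch of what that inversion looks like (the interface and class names here are mine): the business rule owns the interface, and the database code implements it. At run time the rule still calls into the database, but in the source code the dependency arrow points from the volatile detail toward the stable rule, which is what lets the database act like a plugin.

using System;

public class Order
{
    public DateTime PlacedAt { get; set; }
}

// Owned by the high-value business rule; nothing here mentions a database.
public interface IOrderRepository
{
    Order FindById(int id);
}

public class RefundPolicy
{
    private readonly IOrderRepository orders;       // the rule depends only on the interface

    public RefundPolicy(IOrderRepository orders) { this.orders = orders; }

    public bool CanRefund(int orderId)
    {
        return orders.FindById(orderId).PlacedAt > DateTime.UtcNow.AddDays(-30);
    }
}

// The volatile, low-value detail. Its source code depends on the rule's interface,
// not the other way around, so it can be swapped like a plugin.
public class SqlOrderRepository : IOrderRepository
{
    public Order FindById(int id)
    {
        /* query the database here */
        return new Order { PlacedAt = DateTime.UtcNow };
    }
}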

Benefits of Immutability

The benefit of not using assignment statements should be obvious. You can't have concurrent update problems if you never update anything.

Since functional programming languages do not have assignment statements, programs written in those languages don't change the state of very many variables. Mutation is reserved for very specific sections of the system that can tolerate the high ceremony required. Those sections are inherently safe from multiple threads and multiple cores.

The bottom line is that functional programs are much safer in multiprocessing and multiprocessor environments.
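
Revisiting the earlier Account sketch, here is roughly what it looks like when the discipline is on assignment instead (a C# 9 record, used only to show the shape of the idea): nothing is updated in place, so there is nothing for two threads to corrupt.

// An immutable value: no setters, no assignment after construction.
public sealed record Account(string Owner, decimal Balance)
{
    // "Changing" the balance builds a new Account; the original is untouched
    // and can be shared across threads without locks.
    public Account Deposit(decimal amount) => this with { Balance = Balance + amount };
}

// var before = new Account("Ada", 100m);
// var after  = before.Deposit(25m);      // before.Balance is still 100m

Note that in C# the ceremony runs in the opposite direction from a functional language: mutation is the easy default and immutability is the thing you have to ask for.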

The Deep Philosophies

Of course adherents to both Object Orientation and Functional Programming will protest my reductionist analysis. They will contend that there are deep philosophical, psychological, and mathematical reasons why their favorite style is better than the other.

My reaction to that is: Phooey!

Everybody thinks their way is the best. Everybody is wrong.

What about Design Principles, and Design Patterns?

The passage at the start of this article that irked me suggests that all the design principles and design patterns that we've identified over the last several decades apply only to OO; and that Functional Programming reduces them all down to: functions.

Wow! Talk about being reductionist!

This idea is bonkers in the extreme. The principles of software design still apply, regardless of your programming style. The fact that you've decided to use a language that doesn't have an assignment operator does not mean that you can ignore the Single Responsibility Principle; or that the Open Closed Principle is somehow automatic. The fact that the Strategy pattern makes use of polymorphism does not mean that the pattern cannot be used in a good functional language[4].

The bottom, bottom line here is simply this. OO programming is good, when you know what it is. Functional programming is good when you know what it is. And functional OO programming is also good once you know what it is.


  • [0] I imagine there are a few python programmers who might have something to say about that.
  • [1] This, of course, means that Scala is not a "true" functional language.
  • [2] This, of course, means that C++ is not a "true" OO language.
  • [3] Yeah, don't say it, I know. OK, an "analog" to a pointer to a function. (sigh).
  • [4] A good functional language is one that allows for convenient polymorphism. Clojure is a good example.

02 Apr 12:15

Git and GitHub Resources

Learning Git and GitHub can be daunting if you're new to it. I recently gave a small presentation where I pretty much firehosed a group of people about Git and GitHub for one hour. I felt bad that I could only really scratch the surface.

I thought it might be useful to collect some resources that have helped me understand Git and GitHub better. If you only read one thing, read Think like a git. That'll provide a good understanding and maybe motivate you to read the others.

Git

  • Pro Git If you have time to read a full book, read this one. It's free!
  • The Git Parable This story walks through what it would be like to create a Git like system from the ground up. In the process, you learn a lot about how Git is designed.
  • Think like a git This is for someone who's been using Git, but doesn't feel they really understand it. If you're afraid of the rebase, this is for you. It made Git click for me and inspired me to build SeeGit.
  • The thing about Git This is a bit of a philosophical piece with practical Git workflow suggestions.
  • GitHub Flow Like a Pro with these 13 Git Aliases This is about Git, but also GitHub workflows. It's a useful collection of aliases I put together.

Git on Windows

  • Better Git with PowerShell Introduces Posh-Git. But don't follow the instructions for installing Posh-Git here. Instead use...
  • Introducing GitHub for Windows GitHub for Windows is not only a nice UI client for Git geared towards GitHub, but it also is a great way to get the git command line and Posh-Git onto your machine.

GitHub

This is by no means a comprehensive list, and perhaps not the best list, but it's my list. Happy reading!

02 Apr 11:33

Management Bullshit

A lot of the advice you see about management is bullshit. For example, I recently read some post, probably on some pretentious site like medium.com, about how you shouldn’t send emails late at night if you’re a manager because it sends the wrong message to your people. It creates the impression that your people should be working all the time and destroys the idea of work-life balance.

whaaaaat's happening?

Don’t get me wrong, I get where they’re coming from. The 1990s.

For some reason, this piece of management advice made me angry. Let me describe my team. I have one person in San Francisco, two in Canada, one in Sweden, one in Copenhagen, a couple in Ohio, one in Australia, and I live in Washington. So pray tell me, when exactly can I send an email that won’t be received by someone out of “normal” working hours?

I believe the advice is well meaning, but it’s severely out of date with how distributed modern teams work today. I also think it mythologizes managers. It creates this mindset that managers wield some magical power in the actions they take.

True, there’s an implicit power structure at work between managers and those they manage. But healthy organizations understand that managers are servant leaders. They serve the needs of the team. Managers are not a special class of people. They are beautifully flawed like the rest of us. I sometimes have too much to drink and write tirades like this. Sometimes I get caught up in work and am short with my spouse or children. I say things I don’t mean at work because I’m angry or tired. We have to recognize management as a role, not a status.

The point is, rather than rely on these “rules” of business conduct, we’d be better served by building real trust amongst members of a team. My team understands that I might send an email at night not because I expect a response at night. It’s not because I expect people to work night and day. No, it’s because I understand we all work in different time zones. They know that I sometimes work at night because I took two hours out during the middle of the day to play soccer. And I understand they’ll respond to my emails when they’re damn good and ready to.

17 Feb 07:48

Avoid async void methods

Repeat after me, "Avoid async void!" (Now say that ten times fast!) Ha ha ha. You sound funny.

In C#, async void methods are a scourge upon your code. To understand why, I recommend this detailed Stephen Cleary article, Best Practices in Asynchronous Programming. In short, exceptions thrown in an async void method aren't handled the same way as exceptions from an awaited Task, and they will crash the process. Not a great experience.
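
Here's a minimal console sketch of that difference (the method names are mine). The exception from the Task-returning method is delivered to whoever awaits it; the exception from the async void method has no Task to carry it, so it eventually resurfaces as an unhandled exception and takes the process down.

using System;
using System.Threading.Tasks;

public class Program
{
    static async Task ThrowsTask()
    {
        await Task.Yield();
        throw new InvalidOperationException("boom");
    }

    static async void ThrowsVoid()
    {
        await Task.Yield();
        throw new InvalidOperationException("boom");
    }

    public static async Task Main()
    {
        try { await ThrowsTask(); }
        catch (InvalidOperationException) { Console.WriteLine("Caught: the Task carried the exception to the await."); }

        try { ThrowsVoid(); }   // nothing to await, so there is nothing here to catch
        catch (InvalidOperationException) { Console.WriteLine("Never reached."); }

        await Task.Delay(1000); // the async void exception surfaces as unhandled and crashes the process
    }
}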

Recently, I found another reason to avoid async void methods. While investigating a bug, I noticed that the unit test that should have ostensibly failed because of the bug passed with flying colors. That's odd. There was no logical reason for the test to pass given the bug.

Then I noticed that the return type of the method was async void. On a hunch I changed it to async Task and it started to fail. Ohhhhh snap!

If you write unit tests using XUnit.NET and accidentally mark them as async void instead of async Task, the tests are effectively ignored. I furiously looked for other cases where we did this and fixed them.

Pretty much the only valid reason to use async void methods is in the case where you need an asynchronous event handler. But if you use Reactive Extensions, there's an even better approach that I've written about before, Observable.FromEventPattern.

Because there are valid reasons for async void methods, Code Analysis won't flag them. For example, Code Analysis doesn't flag the following method.

public async void Foo()
{
    await Task.Run(() => {});
}

It's pretty easy to manually search for methods with that signature. You might even catch them in code review. But there are other ways where async void methods crop up that are extremely subtle. For example, take a look at the following code.

new Subject<Unit>().Subscribe(async _ => await Task.Run(() => {}));

Looks legit, right? You are wrong, my friend. Take a shot of whiskey (or tomato juice if you're a teetotaler)! Do it even if you were correct, because, hey! It's whiskey (or tomato juice)!

If you look at all the overloads of Subscribe you'll see that we're calling one that takes in an Action<T> and not a Func<T, Task>. In other words, we've unwittingly passed in an async void lambda. Because of the beauty of type inference and extension methods, it's hard to look at code like this and know whether that's being called correctly. You'd have to know all the overloads as well as any extension methods in play.
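
One way to sidestep the problem, sketched below, is to keep the asynchronous work inside the Rx pipeline instead of inside the subscription callback: Observable.FromAsync wraps the task, and SelectMany flattens it, so completion and errors flow through the sequence rather than disappearing into an async void lambda. (This is a hedged sketch of the general pattern, not the exact code from the bug I describe.)

using System;
using System.Reactive;
using System.Reactive.Linq;
using System.Reactive.Subjects;
using System.Threading.Tasks;

new Subject<Unit>()
    .SelectMany(_ => Observable.FromAsync(() => Task.Run(() => {})))   // the Task is observed, not fired and forgotten
    .Subscribe(
        _  => { /* one unit of work completed */ },
        ex => Console.WriteLine(ex));                                  // failures arrive here instead of crashing the process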

Here I Come To Save The Day

Clearly I should tighten up code reviews to keep an eye out for this problem, right? Hell nah! Let a computer do this crap for you. I wrote some code I'll share here to look out for this problem.

These tests make use of this method I wrote a while back to grab all loadable types from an assembly.

public static IEnumerable<Type> GetLoadableTypes(this Assembly assembly)
{
    if (assembly == null) throw new ArgumentNullException("assembly");
    try
    {
        return assembly.GetTypes();
    }
    catch (ReflectionTypeLoadException e)
    {
        return e.Types.Where(t => t != null);
    }
}

I also wrote this other extension method to make the final result a bit cleaner.

public static bool HasAttribute<TAttribute>(this MethodInfo method) where TAttribute : Attribute
{
    return method.GetCustomAttributes(typeof(TAttribute), false).Any();
}

And the power of Reflection compels you! Here's a method that will return every async void method or lambda in an assembly.

public static IEnumerable<MethodInfo> GetAsyncVoidMethods(this Assembly assembly)
{
    return assembly.GetLoadableTypes()
      .SelectMany(type => type.GetMethods(
        BindingFlags.NonPublic
        | BindingFlags.Public
        | BindingFlags.Instance
        | BindingFlags.Static
        | BindingFlags.DeclaredOnly))
      // The compiler marks async methods (including the compiler-generated methods
      // produced for async lambdas) with AsyncStateMachineAttribute...
      .Where(method => method.HasAttribute<AsyncStateMachineAttribute>())
      // ...so any such method that returns void is an async void method.
      .Where(method => method.ReturnType == typeof(void));
}

And using this method, I can write a helper method for all my unit tests.

public static void AssertNoAsyncVoidMethods(Assembly assembly)
{
    var messages = assembly
        .GetAsyncVoidMethods()
        .Select(method =>
            String.Format("'{0}.{1}' is an async void method.",
                method.DeclaringType.Name,
                method.Name))
        .ToList();
    Assert.False(messages.Any(),
        "Async void methods found!" + Environment.NewLine + String.Join(Environment.NewLine, messages));
}

Here's an example where I use this method.

[Fact]
public void EnsureNoAsyncVoidTests()
{
    AssertExtensions.AssertNoAsyncVoidMethods(GetType().Assembly);
    AssertExtensions.AssertNoAsyncVoidMethods(typeof(Foo).Assembly);
    AssertExtensions.AssertNoAsyncVoidMethods(typeof(Bar).Assembly);
}

Here's an example of the output. In this case, it found two async void lambdas.

------ Test started: Assembly: GitHub.Tests.dll ------

Test 'GitHub.Tests.IntegrityTests.EnsureNoAsyncVoidTests' failed: Async void methods found!
'<>c__DisplayClass10.<RetrievesOrgs>b__d' is an async void method.
'<>c__DisplayClass70.<ClearsExisting>b__6f' is an async void method.
    IntegrityTests.cs(104,0): at GitHub.Tests.IntegrityTests.EnsureNoAsyncVoidTests()

0 passed, 1 failed, 0 skipped, took 0.97 seconds (xUnit.net 1.9.2 build 1705).

These tests will help ensure my team doesn't make this mistake again. It's really subtle and easy to miss during code review if you're not careful. Happy coding!

15 Feb 07:17

Our Programs Are Fun To Use

by Jeff Atwood

These two imaginary guys influenced me heavily as a programmer.

Instead of guaranteeing fancy features or compatibility or error free operation, Beagle Bros software promised something else altogether: fun.

Playing with the Beagle Bros quirky Apple II floppies in middle school and high school, and the smorgasbord of oddball hobbyist ephemera collected on them, was a rite of passage for me.

Here were a bunch of goofballs writing terrible AppleSoft BASIC code like me, but doing it for a living – and clearly having fun in the process. Apparently, the best way to create fun programs for users is to make sure you had fun writing them in the first place.

But more than that, they taught me how much more fun it was to learn by playing with an interactive, dynamic program instead of passively reading about concepts in a book.

That experience is another reason I've always resisted calls to add "intro videos", external documentation, walkthroughs and so forth.

One of the programs on these Beagle Bros floppies, and I can't for the life of me remember which one, or in what context this happened, printed the following on the screen:

One day, all books will be interactive and animated.

I thought, wow. That's it. That's what these floppies were trying to be! Interactive, animated textbooks that taught you about programming and the Apple II! Incredible.

This idea has been burned into my brain for twenty years, ever since I originally read it on that monochrome Apple //c screen. Imagine a world where textbooks didn't just present a wall of text to you, the learner, but actually engaged you, played with you, and invited experimentation. Right there on the page.

(Also, if you can find and screenshot the specific Beagle Bros program that I'm thinking of here, I'd be very grateful: there's a free CODE Keyboard with your name on it.)

Between the maturity of JavaScript, HTML 5, and the latest web browsers, you can deliver exactly the kind of interactive, animated textbook experience the Beagle Bros dreamed about in 1985 to billions of people with nothing more than access to the Internet and a modern web browser.

Here are a few great examples I've collected. Screenshots don't tell the full story, so click through and experiment.

As suggested in the comments, and also excellent:

(There are also native apps that do similar things; the well reviewed Earth Primer, for example. But when it comes to education, I'm not too keen on platform specific apps which seem replicable in common JavaScript and HTML.)

In the bad old days, we learned programming by reading books. But instead of reading this dry old text:

Now we can learn the same concepts interactively, by reading a bit, then experimenting with live code on the same page as the book, and watching the results as we type.

C'mon. Type something. See what happens.

I certainly want my three children to learn from other kids and their teachers, as humans have since time began. But I also want them to have access to a better class of books than I did. Books that are effectively programs. Interactive, animated books that let them play and experiment and create, not just passively read.

I want them to learn, as I did, that our programs are fun to use.

14 Feb 09:47

The Case Against Pay for Performance

If you run a company, stop increasing pay based on performance reviews. No, I'm not taking advantage of all that newly legal weed in my state (Washington). I know this challenges a belief as old as business itself. It challenges something that seems so totally obvious that you're still not convinced I'm not smoking something. But hear me out.

money money money! - by Andrew Magill CC BY 2.0 https://flic.kr/p/68vjKV

This excellent post in the Harvard Business Review Blog, Stop Basing Pay on Performance Reviews, makes a compelling case for this. It won't take long, so please go read it. Here's an excerpt.

If your company is like most, it tries to drive high performance by dangling money in front of employees’ noses. To implement this concept, you sit down with your direct reports every once in a while, assess them on their performance, and give them ratings, which help determine their bonuses or raises.

What a terrible system.

Performance reviews that are tied to compensation create a blame-oriented culture. It’s well known that they reinforce hierarchy, undermine collegiality, work against cooperative problem solving, discourage straight talk, and too easily become politicized. They’re self-defeating and demoralizing for all concerned. Even high performers suffer, because when their pay bumps up against the top of the salary range, their supervisors have to stop giving them raises, regardless of achievement.

The idea that more pay decreases intrinsic motivation is supported by a lot of science. In my one year at GitHub post I highlighted a talk that referred to a set of such studies:

I can pinpoint the moment that was the start of this journey to GitHub, I didn’t know it at the time. It was when I watched the RSA Animate video of Dan Pink’s talk on The surprising truth about what really motivates us.

Hint: It’s not more money.

I recommend this talk to just about everyone I know who works for a living. I'm kind of obsessed with it. It's mind-opening. Dan Pink shows how study after study demonstrates that for work that contains a cognitive element (such as programming), more pay undermines motivation.

More recently, researchers found a neurological basis to support the idea that monetary rewards undermine intrinsic motivation.

This rings true to me personally because of all the open source work I do for which I don't get paid. I do it for the joy of building something useful, for the recognition of my peers, and because I enjoy the process of writing code.

Likewise, at work, the reason I work hard is I love the products I work on. I care about my co-workers. And I enjoy the recognition for the good work I do. The compensation doesn't motivate me to work harder. All it does is give me the means and reason to stay at my company.

Not to mention, should the company dangle a bonus to improve my performance, there are some questions to ask. Why wasn't I already trying to improve my performance? Where will this new performance come from? Often, the extra performance comes from attempting to work long hours, which backfires and is unsustainable.

So what's the alternative?

Pay according to the market

This is what Lear did, emphasis mine:

In 2010, we replaced annual performance reviews with quarterly sessions in which employees talk to their supervisors about their past and future work, with a focus on gaining new skills and mitigating weaknesses. We rolled out the change to our 115,000 employees across 36 countries, some of which had cultures far different from that of our American base.

The quarterly review sessions have no connection to decisions on pay. None. Employees might have been skeptical at first, so to drive the point home, we dropped annual individual raises. Instead we adjust pay only according to changing local markets.

They pay according to the market.

This makes a lot of sense when you consider the purpose of compensation:

  • It's an exchange of money for work.
  • It helps a company attract and hire talent.
  • It helps a company retain talent.

It's not a reward. You wouldn't go to your neighborhood kid and say, "Hey, I'll pay you 50% of what the market would normally offer you, but I'll increase it 4% every year if you do a really good job." The kid would rightfully give you the middle finger. But companies do this to employees all the time. Don't believe me?

A recent study showed,

Staying employed at the same company for over two years on average is going to make you earn less over your lifetime by about 50% or more.

Keep in mind that 50% is a conservative number at the lowest end of the spectrum. This is assuming that your career is only going to last 10 years. The longer you work, the greater the difference will become over your lifetime.

Let that sink in.

If your employees act rationally, they'd be stupid to stay at your company for longer than two years watching their pay drop over the years in comparison to the market for their skills. And if they wise up and leave every two years, the turnover is very costly. The total cost of turnover can be as high as 150% of an employee's salary when you factor in lost opportunity costs and the time and expense in hiring a replacement.

So even if you decide to continue on a pay for performance system, market forces necessitate that you adjust pay to market value. Or continue selling your employees a story about how they should stay out of "loyalty". This story is never bidirectional.

And what should you do if someone tries to take advantage of the system and consistently underperforms? You fire them. They are not upholding their side of the exchange. Most of the time, people want to do good work. Optimize for that scenario. People will have occasional ruts. Help them through it. That's what the separate performance reviews are for. It provides a means of providing candid feedback without the extra charged atmosphere that money can bring to the discussion.

The Netflix model

This is one area where I think the Netflix model is very interesting. They try to pay top of market for each employee using the following test:

  1. What could this person get elsewhere?
  2. What could we pay for replacement?
  3. What would we pay to keep that person (if they had a bigger offer elsewhere)?

After all, when you hire someone, the offer is usually based on the market. So why stop adjusting it after that? This also solves a problem I've seen companies run into when the market is hot: they'll hire a fresh college grad for more than a much more experienced developer makes, because the developer's performance bonuses haven't kept up with the market.

Keep in mind, this is good for employees too. If an employee wants to make more money, they will focus on increasing their value to your company and the market as a whole. This aligns the employee's interest with the company's interest.

Another cool feature of the Netflix model is they give employees a choice to take as much or as little of that compensation as stock instead of cash. I think that's a great way to give employees a choice in how they invest in the company's future.

Conclusion

If you insist on continuing to believe that bonuses for performance is the right approach, I'd be curious to hear what science and data you have that refutes the evidence presented by the various people I've referenced. What do you know that they don't? It'd make for some interesting research.

UPDATE: Based on some comments, there's one thing I want to clarify. I don't think the evidence suggests that all companies should pay absolute top of Market. That's not what I'm suggesting. Many companies can't afford that and offer other compensating factors to lure developers. For example, a desirable company that makes amazing products might be able to get away with paying closer to the market average because of the prestige and excitement of working there.

The point is not that you have to be at 99%. The point is to use the market value for an individual's skills as your index for compensation adjustments. When it goes up, you raise accordingly. When it flatlines or goes down, well, I'm not sure what Lear does. I certainly wouldn't lower salaries. I'd just stop having raises until the market value is above an individual's current pay. I'd be curious to hear what others think.