What I've Learned From Failure
What I've Learned From Failure
Reg “raganwald” Braithwaite


The chapters of this book originally appeared as online essays and blog posts. I decided to publish these essays as an e-book as well as online. This format doesn’t replace the original online essays; it’s a way to present them as a more coherent whole that’s easier to read consecutively. I hope you like it.

Reginald “Raganwald” Braithwaite, Toronto, Christmas 2011

The original versions of these essays are copyright 2004 - 2011 Reginald Braithwaite. This expression of their ideas is copyright 2011 - 2012 Reginald Braithwaite. New material copyright 2012 - 2014 Reginald Braithwaite.

Creative Commons License


“What I’ve Learned From Failure” is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

About The Author

When he’s not shipping Ruby, JavaScript, and Java applications that scale out to millions of users, Reg “Raganwald” Braithwaite has authored libraries for JavaScript and Ruby programming such as Katy, jQuery Combinators, YouAreDaChef, andand, and others.

He writes about programming on his [raganwald.com](http://raganwald.com) un-blog, as well as general-purpose ruminations on braythwayt.com.


Twitter: @raganwald
Email: raganwald@gmail.com


Reginald “Raganwald” Braithwaite

(Photograph of the author (c) 2008 Joseph Hurtado, All Rights Reserved. http://www.flickr.com/photos/trumpetca/)

What I’ve Learned From Failure

I have been fired from more jobs than most people have had.–Mark Cuban

Why does failure matter?

It’s a funny thing. After more than twenty-five years of drawing a paycheque for creating software, people generally want to hire me because they want me to duplicate the successes I’ve had. The model seems to be “Just do the things you’ve done successfully before, and you’ll be successful now.”

My experience is that this has never worked on its own. Success in software development is at least as much about avoiding failure modes as it is about “best practices.” I conjecture it’s because software development on a commercial scale is so hard that almost any mistake will sink a project if left uncorrected or, even worse, actively encouraged.

We tend to seek easy, single-factor explanations of success. For most important things, though, success actually requires avoiding many separate causes of failure.—Jared Diamond

With that in mind, I’ve taken a little time to jot down some thoughts about situations where I’ve personally failed. I’m not going to tell you about some theoretical anti-pattern, or relate some broken thing I’ve fixed, I’m going to share things that caused me to leap from the deck of a burning boat to avoid drowning.

If you decide to run with the ball, just count on fumbling and getting the shit knocked out of you, but never forget how much fun it is just to be able to run with the ball.–Jimmy Buffett

Some of them, in retrospect, would be comical if it wasn’t for the human misery, damaged careers, and money wasted on failed projects. Or worse, in my opinion, the opportunity cost of putting good people to work on things that never end up delighting the world. I weep for what might have been.

The four most important causes of failure

Things which matter most must never be at the mercy of things which matter least.—Johann Wolfgang Von Goethe (1749-1832)

The first thing I’ve learned from failure is that the four things which matter most are:

  1. The quality of the people doing the development
  2. The expected value of the product to its stakeholders
  3. The fitness of the proposed solution
  4. The quality of project management and expectations communication

In my experience, you need all four working to have a successful project. I’ve personally failed when even one of those four things was bad and not corrected immediately. If two, three, or all four were wrong, my discovery is that I’ve been unable to avert disaster. (This list obviously doesn’t cover all of the factors needed for business success: I’m just talking about getting the software to ship).

Now that I’ve learned this, I have four new things to evaluate when placed in charge of a new project. And regardless of what I’m told, I’m going to investigate these four things every time, right away, without fail.

I’ve never seen a project where strength in one area made up for weaknesses in others. I’ve never personally seen a great technology platform, for example, that magically enabled low-quality developers to produce commercial-quality results.

And don’t talk to me about XP being a magic bullet: all of the good XP teams I’ve seen happened to have quality developers, a valuable objective, decent technology, and yes, good project management.


I think the root of your mistake is saying that macros don’t scale to larger groups. The real truth is that macros don’t scale to stupider groups.–Paul Graham on the Lightweight Languages mailing list

I’ve been involved with strong teams and weak teams, and the weak teams always failed. Weak teams have individuals whose performance is weak. The strongest indication of a weak team is the realization that if you were to quit and start your own business, you wouldn’t try to poach any of your colleagues.

Painful experience has taught me some of the signs that a team doesn’t have the chops to perform up to par. The first sign of a weak team is poor hiring practices.

Developing software is a difficult job. It requires a panoply of strengths. Hiring good people is never as simple as interviewing three people with “five years of J2EE” on their résumés and making an offer to the best of the three. Strong teams have almost impossibly high hiring standards. Strong teams will always leave a desk empty rather than settling for less than the best.

Another sign of a weak team is poor development hygiene. There are dozens of development practices that seem trivial to the inexperienced outsider or to the manager focusing on “big wins.” Examples of development hygiene include source code versioning, maintenance of an accurate bug or issue database, significant use of automated testing, continuous integration, and specifications that are kept current (whether incredibly detailed or high-level overviews).
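To make “significant use of automated testing” concrete, here is a minimal sketch in Ruby. The `Invoice` class and its tests are hypothetical, invented for illustration; but even a handful of fast, repeatable tests like these, run on every commit, is the kind of hygiene I mean.

```ruby
require "minitest/autorun"

# A hypothetical domain class: nothing fancy, just behaviour worth pinning down.
class Invoice
  def initialize(line_items)
    @line_items = line_items
  end

  # Total is the sum of quantity × unit price over every line item.
  def total
    @line_items.sum { |item| item[:quantity] * item[:unit_price] }
  end
end

# Tests that run automatically (via minitest/autorun) whenever this file runs.
class InvoiceTest < Minitest::Test
  def test_total_sums_all_line_items
    invoice = Invoice.new([
      { quantity: 2, unit_price: 10 },
      { quantity: 1, unit_price: 5 }
    ])
    assert_equal 25, invoice.total
  end

  def test_empty_invoice_totals_zero
    assert_equal 0, Invoice.new([]).total
  end
end
```

The point isn’t the sophistication of the tests; it’s that they exist, they’re cheap to run, and a weak team will treat even this as optional.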

One team I audited were not just unwilling, but were actually unable to build a product that was in sustaining development. In other words, the product was in the field, in use by customers, and the team were not able to rebuild it from source. They were issuing all of their bug fixes as patches on their existing binaries. This was not a good sign.

(c) 2003 Steve Gregory


Does this mean that nothing can be done if the team is weak? Not exactly. Some of the time I’ve had the authority to replace members of the team. I’ve always had the ability to set an example and suggest practices. But sometimes I’ve thought that an organization would be unreceptive to calls for change. And for want of courage, projects have been lost.

The bottom line is that when I’ve failed to recognize weakness in the team and/or failed to take immediate and decisive action to bring the team up to world-class strength, I’ve failed.

Argue with idiots, and you become an idiot.–Paul Graham

If you compete with slaves, you become a slave.–Norbert Wiener

I’ve mentioned that I’ve failed with weak teams. Would you believe I’ve compounded this failure by failing with weak stakeholders? Whenever I’ve had stakeholders who didn’t have the horsepower or the will to recognize that a project was in trouble, I’ve wound up in the E.R. having the brick dust removed from my forehead.

A chicken and a pig decided to open a diner together. The pig asked the chicken what they should call their new restaurant. The chicken suggested “Ham and Eggs.” The pig thought about it for a while, then decided she didn’t want any part of the venture. “You,” she told the chicken, “would only be interested in serving breakfast. I’d be committed.”–as told by Ken Schwaber

Getting away from weak teams, another source of failure is the omnipresent threat of “chickens.” A chicken is not necessarily a weak individual, but a sign of a weak management structure. A chicken is an individual who has significant authority over your project, but does not make a personal commitment to the success of the project. Significant authority includes the authority to impose constraints on the team.

Even a single chicken can take a project out. Chickens are a special case of “external dependencies.” Special, because they are often politically entrenched. I’ve worked with teams where the pay scale was determined by an edict from H.R. They were literally prevented from hiring top talent, and it wasn’t a question of budget: they did not have the freedom to replace three mediocre programmers with two good programmers for the same price.

Another situation involved a team that were continually pestered to include functionality and architecture for “strategic” reasons by a Business Development person. Although senior management made the importance of the strategic functionality clear, they were unwilling to relax tactical requirements like the ship date or the target revenues. They had to constantly manage the “chicken” in order to succeed.

I’ve managed around chickens here and there, but I’ve failed to deliver a successful project whenever I’ve failed to limit the effect of chickens on the management of projects.


Always dive down into a problem and get your hands on the deepest issue behind the problem. All other considerations are to be dismissed as “engineering details”; they can be sorted out after the basic problem has been solved.–Chris Crawford

The next thing I’ve learned from my failures is something familiar to the test-driven development crowd. It’s mandatory to fail early. You need to know you’re in trouble right away. That’s essential when taking over an existing project or starting something new. You have to find out how you’re doing within weeks. Not quarters, not months. The longer you wait, the more inertia the failure will have.

You have to come in, take over, and establish some incredibly short-term goals and be prepared to take action based on the project’s performance. I’ve learned that there’s no such thing as too little time between milestones. Looking back at projects where I’ve failed, many contained some uncertainty or risk that I didn’t address immediately.

(c) 2010 Jim Pennucci


In one case there was a critical piece of functionality that was so important the entire architecture was designed around it. The CTO and every developer swore it was the greatest thing since sliced bread. I made a back-of-the-envelope risk calculation and scheduled testing of that functionality to begin three months before the project was to be delivered.

A week before the function was to go into test, the technical lead informed me that it didn’t work, had never worked, and that no attempt to fix it would work, because there was a major, glaring flaw that had been overlooked. A rewrite would be required.

The stakeholders agreed and appointed a new team to handle the rewrite. Needless to say, the old team’s job security suffered a major hit.

Today, older and wiser, I would demand immediate proof of feasibility of all critical pieces of the product, no matter how obvious things may be to everyone else. I should have said “Great! It’s a slam dunk! Wonderful, let’s schedule a demo next week.” At least we would have felt the pain early.


Whenever I’ve allowed the details of a project to escape me, I’ve failed. On one project, the technical lead was a Ph.D. and refused to describe his work, saying that although I was managing product development, he wasn’t going to try to explain his rarified code or architecture to a layperson.

No, I’m not responsible for what happened. I’m accountable for how we dealt with it, but I’m not responsible for it.–Julian Fantino

Needless to say, I was unable to ship a successful project. On another project the CEO would ask me the same question every few days: “draw on my whiteboard who’s working on what.” I had no trouble with this on that particular project, but looking back there have been projects where I was not tracking people’s work on a day to day basis.

And I can tell you, whenever the details of a project have slipped from my grasp, the project has started to drift into trouble. I make no apologies for now insisting on knowing exactly who, what, where, when, and why. There’s a big difference between being asked to explain your work in detail and being told how to do your job.

My personal experience is that attention to detail has always accompanied successful projects; losing track of the details has always accompanied failing projects.

The Schedule

In most companies if a good quality project ships late then the managers will still get it in the neck whereas if poor quality project ships on time then the managers say “we did our best - obviously the dev team seem to be of a poor standard”.–Daniel H. Steinberg

Dates are sacred. I’ve learned this lesson in good times and bad. Stakeholders treasure good dates. Stakeholders despise bad dates and the people who make flawed promises. That would have been me, more than once.

Every time, the lesson has been clear. Don’t get the dates wrong. I’ll confess: I don’t really think Scrum is an order of magnitude more effective than anything else at producing beautiful, world-changing software. It may be worse. But it does produce software every month, month after month.

(c)2011 Phillip Pessar


And every time I’ve delivered software on schedule milestone after milestone, my influence and standing with stakeholders has grown. And every time I’ve missed a date, I’ve suffered, regardless of whether the late software was demonstrably better than what was originally planned for the missed date.

If documents don’t serve to avoid stupid things, mitigate risks or calculate budgets then what are they for? They’re to show you have a “process” and a “paper trail” so that you can get ISO certified. That’s all they look for, they don’t care if you read them or not.–Skagg on http://discuss.fogcreek.com/joelonsoftware/

Back to the measurable processes. I’ve learned from failure that stakeholders like to know what’s going on. I hate producing useless documentation. The net result is that I’ve tried to find the happy medium where I generate weekly management reports on projects.

A management report is something that is used to actually make a decision. Everything else is garbage. I’ve learned that when I haven’t had management reports for a project, failure has resulted. Worse, sometimes I’ve had documents and metrics that were used to justify bad decisions that sank the ship.

So my lesson from these failures is that every project needs a set of regular reports that contain information you’ll actually use to make decisions.


Few projects are cancelled because their designs and implementation weren’t complicated enough. Many are cancelled because they become so complicated that no one can understand them anymore; any attempt to change or extend the system results in so many unintended side effects that continuing to extend the system becomes practically impossible.–Steve McConnell

One of the reasons people associate me with “Agile” development approaches is that I’m always trying to simplify, simplify, simplify. This is because almost every time I added something to a milestone, I’ve gotten burned. It seems like it’s always better to say “just finish what we planned, get that 100% functional, and then we’ll add foobars.”

I recently got burned twice in the same project adding functionality in between milestones. Both times I was sure that the changes were low risk. Both times I burned myself. Now I’m licking my wounds and swearing I’ll never, ever break my agile principles of restricting scope changes to between increments.

It’s important to remember that when you start from scratch there is absolutely no reason to believe that you are going to do a better job than you did the first time. First of all, you probably don’t even have the same programming team that worked on version one, so you don’t actually have “more experience”. You’re just going to make most of the old mistakes again, and introduce some new problems that weren’t in the original version.–Joel Spolsky, “Things You Should Never Do, Part I”

Here’s something that I’ve screwed up repeatedly. Sometimes I’ve bounced back, sometimes the project has paid the ultimate price. The grand “this time we’ll get it right” mantra is absolute garbage.

It had taken 3 years of tuning to get code that could read the 60 different types of FTP servers, those 5000 lines of code may have looked ugly, but at least they worked.–Lou Montulli, one of the founding engineers at Netscape

Don’t talk to me about porting to Java, or new design patterns. If you must refactor, refactor here, and there, and there to solve this, and that, and the other specific problem that has a specific feature or bug attached to it. And show me that you had 100% unit testing coverage on the affected code and completed each refactoring in a day or so and then ran all the unit tests and got a green light.
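That discipline can be sketched in a few lines of Ruby. The discount functions below are invented for illustration, but the pattern is the point: keep the old behaviour around as the oracle, make one small structural change, and demand a green light on every case before moving on.

```ruby
# Hypothetical example of a small, test-guarded refactoring.
# Integer prices (in cents) keep the arithmetic exact.

# Before: the clumsy-but-working original.
def legacy_discounted_price(cents, customer_type)
  if customer_type == :vip
    cents - (cents * 20 / 100)
  elsif customer_type == :regular
    cents - (cents * 5 / 100)
  else
    cents
  end
end

# After: the same behaviour, restructured as a lookup table.
# Hash.new(0) gives unknown customer types a zero discount.
DISCOUNT_PERCENT = Hash.new(0).merge(vip: 20, regular: 5)

def discounted_price(cents, customer_type)
  cents - (cents * DISCOUNT_PERCENT[customer_type] / 100)
end

# The "green light": every case the old code handled must agree.
[[10_000, :vip], [10_000, :regular], [10_000, :walk_in]].each do |cents, type|
  unless legacy_discounted_price(cents, type) == discounted_price(cents, type)
    raise "regression for #{type}"
  end
end
```

A refactoring this size fits in a day with room to spare; anything bigger than this should be making you nervous.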

If you can’t do that, you’re going to fail. I know it, because I’ve failed when I didn’t do that. And when I cried on a friend’s shoulder, he told me “I also made that mistake once, and I suffered the same horrible fate.”

Thinking that a major rewrite is going to solve all of your problems is just revisiting my four things that matter most and planning on having one, the fitness of the proposed solution, overpower defects in the people, expected value, and process. It won’t happen.

A major rewrite should produce a major new product that offers an order of magnitude more expected value. And you’ll need to be 100% sure your team has the horsepower to get the job done and is going to use a process that can handle the load. I say this because I’ve tried and failed to rewrite entire applications, and I’ve taken over other people’s rewrite projects and failed there too.


Some days you are the bug, some days you are the windshield.

I’ve learned a little about politics from failing. What I’ve learned is that if you stick your neck out and evangelize change, you will be blamed if you do not achieve results. You may or may not care about that. But be aware of the fact that making changes involves spending your personal credibility. If you don’t want to lose it, don’t ante up: get out of the project.

Don’t have good ideas if you aren’t willing to be responsible for them.–Alan Perlis

And if you decide to make changes, have the courage to go 100% with your gut. I’ve failed more than once when I watered down my convictions in order to appease dissenters. The only thing worse than evangelizing change and failing is looking back and realizing you might have succeeded if you’d held firm on your convictions. What a waste!

Making an employee work and profiting from that work are two different things.–Eliyahu Goldratt

I’ve seen a number of “sweat shops,” and I’ve worked in several places where long hours and rhinoplastic intimacy with the grindstone were demanded of the team. I can honestly say that hard work makes no long-term difference to failing software development teams.

I disagree with those who say that long hours are 100% detrimental to software development: I’ve seen lots of situations where people worked around the clock, motivated by passion. But those were successful projects.

I’ve learned that redoubling effort when a project is in trouble has not fixed the project. The conclusion I draw is that although teams have worked long hours on many successful projects, there is no causal relationship between long hours and success (It’s another example of the fallacy of “best practices”: copying a single element of a successful project does not guarantee that another project will be improved).

My experience with failing projects is that the exhortation “Ahh, I’m going to have to go ahead and ask you to come in on Sunday, too…” has always been interpreted as punishment, not a meaningful way to fix the project. It has had no effect on under-performing members of the team and tends to strongly demotivate the people who are pulling more than their fair share of the weight.

It is impossible to sharpen a pencil with a blunt axe. It is equally vain to try to do it with ten blunt axes instead.—Edsger Dijkstra

Good luck convincing stakeholders of this. One of the reasons people love to hand out overtime like candy is that hours in the office are measurable. The team’s behind? Make them stay until midnight every night. Even if it doesn’t work, the executive handing out this order can be sure she can measure compliance.

The bottom line is, it’s easy to measure how many axes you’re using to sharpen a pencil. When you discover that a blunt axe isn’t sharpening the pencil, how do you propose to measure “sharpness”? How do you measure “working smarter”? As Dijkstra also observed, “if we wish to count lines of code, we should not regard them as lines produced but as lines spent.”

So another thing I’ve learned about failure is that when things start to go wrong, stakeholders want two things:

  1. New processes
  2. A way to measure compliance with the new processes

Overtime meets both criteria nicely, as do other simple panaceas like generating reports with every build or compliance with coding standards.

Fixing failing projects demands lots of things that are easy to measure and some that aren’t. I’ve learned that if you don’t control your stakeholder’s expectations around change, you’ll find yourself fending off demands for things like overtime and reports.


Even when my proposals are seen as significant improvements, they are often rejected on the grounds that they are not intuitive. It is a classic Catch-22: The client wants something that is significantly superior to the competition. But if it is to be superior, it must be different. (Typically, the greater the improvement, the greater the difference.) Therefore, it cannot be intuitive, that is, familiar. What the client wants is an interface with at most marginal differences from current practice… that, somehow, makes a major improvement.—Jef Raskin

When I’ve been brought in specifically to “work out” a failing project I’ve failed when I didn’t have the authority and support to make major changes. This is saying the same thing I’ve said several times already, but it needs to be repeated.

Often, the stakeholders have just finished casting someone into the darkness and think that they’ve cast failure into the darkness with him. Take a moment and look up the definition of the word “scapegoat.” They may have symbolically cleansed the project of sin, but the sins remain, and I have inherited them whenever I’ve allowed the project to continue to do business as usual.

The most damaging phrase in the language is, “It’s always been done that way.”—Rear Admiral Grace Hopper

I’ve heard dozens of variations on the same line (yes, I’ve failed dozens of times!). The line is “Well, so-and-so failed because he didn’t x. But now you’re here, you’ll get x cleaned up, and we’ll start succeeding right away. We were hardly failing, really, a little behind, nothing serious.”

Every time I’ve been told that, things have ended up being seriously dire. No, it wasn’t as simple as implementing monthly sprints or formalizing acceptance tests or nightly builds. The rot went right to the core and the stakeholders were usually (unwittingly) enabling it by not understanding or being in denial of the real problems. When people don’t see the depth of the problem, they don’t accept the importance of making changes.

It has boiled down to something so simple that you’ve probably heard it described in jest as the definition of insanity. If a project has been doing things a certain way, and the stakeholders are not 100% happy with the results, doing things substantially the same way will not produce substantial changes in the results.


There are two kinds of people in the world: those who finish what they started.

Getting back to failing early, I’ve learned it’s important to completely fail. Get fired. Shoot the project, then burn its corpse. Melt the CVS repository and microwave the backup CDs. When things go wrong, I’ve often tried to play the hero from start to finish. Guess what? Some projects are doomed no matter what. Some need skills I don’t possess. And some need a fresh face.

The best way to fix a bad project is to not be part of it.—Norman Nunley & Michael Schwern

I’ve ridden more than one project down in flames, and as painful as it is to ‘fess up and admit defeat, it’s important to know when to fold your cards and quit. Yes, that sounds defeatist. But most success stories are comebacks from personal failures, not wondrous turn-arounds.

Sometimes you shouldn’t finish what you started. Sometimes you shouldn’t finish what somebody else started.

But if you avoid the four key causes of failure—people, value, fitness, and management—you will finish the software and it will be wonderful.

(Originally published in January, 2005. The bmx image is taken from http://www.flickr.com/photos/pennuja/5129137471/in/photostream/. The accident image is taken from http://www.flickr.com/photos/southbeachcars/6205032655/in/photostream/. The race picture is taken from http://www.flickr.com/photos/gasheadsteve/131492751/in/photostream/.)

The Not So Big Software Design

A little less than a decade ago, Sarah Susanka wrote a very provocative book, The Not So Big House. I found out about it one evening while watching PBS. I switched to Channel 17, and there was an interview with her in progress. My partner and I were enthralled. We had been struggling to purchase a new home from “tract” or “subdivision” builders, and we simply couldn’t find anything that spoke to us. In a few short moments, Sarah articulated exactly why we were so frustrated by the builders.

Sarah spoke about a culture of building homes to impress. Of cookie-cutter McMansions, where everything was big, but nothing was warm and inviting. I can give you a very practical example of this syndrome: drive through any subdivision these days. Measure the space between the houses. It’s pitifully small! The reason is that the builders are building the largest homes they possibly can on each lot.

That means that very little light can get into the sides of the houses, and you see this when you look at the floor plans: everything is organized around large picture windows in the front and rear of the house. And no wonder: there is nothing to see to either side except the brick or siding of your neighbour’s house just a few feet away.

(c) 2010 John Hritz


Sarah’s solution to the problems of poorly designed homes is to take a given budget, and instead of buying the largest home for that price, purchase a smaller home but invest in features and details that customize the home for your needs.

Applying this “not so big” thinking to the problem of houses squeezed together in a subdivision, you can try to place a smaller home on the lot and invest the construction savings in windows on three sides instead of having nearly all the windows on just the front and the back.

Everything in Sarah’s philosophy is driven by the owners’ actual lifestyle, not some imagined lifestyle that never comes to pass. So… unless you are a competitive dancer, Sarah is not going to design an impressive ballroom for your home. On a more practical note, she spends quite a bit of time discussing the merits of doing away with the formal dining room.

Very few people want to have company over for dinner in their kitchen, so Sarah often designs an eating area separated from the kitchen by sliding doors. You have an eat-in for everyday dining and a formal spot when you need to throw a dinner party.

This kind of thing is not free: sliding doors are expensive, and that’s why very few “tract” homes have them, even very expensive tract homes. But if you want a home that works, you make the choice to have fewer square feet but make those square feet work for you every day.


The problem with tract houses can be summed up in a phrase: the builders are selling you lemons. I hope Bruce Schneier forgives me quoting wholesale from his excellent article about security problems:

In 1970, American economist George Akerlof wrote a paper called “The Market for Lemons,” which established asymmetrical information theory. He eventually won a Nobel Prize for his work, which looks at markets where the seller knows a lot more about the product than the buyer.

Akerlof illustrated his ideas with a used car market. A used car market includes both good cars and lousy ones (lemons). The seller knows which is which, but the buyer can’t tell the difference — at least until he’s made his purchase. I’ll spare you the math, but what ends up happening is that the buyer bases his purchase price on the value of a used car of average quality.

This means that the best cars don’t get sold; their prices are too high. Which means that the owners of these best cars don’t put their cars on the market. And then this starts spiraling. The removal of the good cars from the market reduces the average price buyers are willing to pay, and then the very good cars no longer sell, and disappear from the market. And then the good cars, and so on until only the lemons are left.

In a market where the seller has more information about the product than the buyer, bad products can drive the good ones out of the market.

Now don’t think about the house builders as bad people trying to sell you a bad house.

In the case of new homes, the bottom line is that if most buyers of homes cannot tell the difference between a home that will suit their lifestyle and one that will not, the builder has very little choice but to offer homes with the superficial features (like gross square footage) that will sway people into buying.

The entire problem is centered around the fact that the average home buyer is unable to tell the difference between a good house and a bad house, so they settle for superficial distinctions.

(c) 2007 havankevin on flickr


Building Better, Not Buzzwordier

Does this sound familiar? There are two obvious blog posts here, one about the fact that the average employer cannot tell the difference between good programmers and bad. The other about the average buyer of custom software. Since someone has already noted the similarity between used cars and programmers, let’s look at the similarity between houses and custom software projects.

I recently had a chance to review an architecture design for a custom software project. The designer was given a telephone-book-sized specification written by the client and asked to put together a high-level architecture plan.

Now right away, I want to say this is a tough spot to be in: it’s all well and good to talk about customizing things for clients, but you really need to talk to them if you want a shot at doing a good job. Whether you are a proponent of Agile or of BDUF, I think you will agree that no amount of documentation can replace communication, ever.

Anyway, I saw right away that the document was… What is the phrase?… Oh yes, as Richard Feynman would say, it was no damn good. It was a lemon.

It is very poor form to criticize this person’s work after establishing that they had very little chance of doing well. So this is my disclaimer: I am writing to talk about why these circumstances conspire to produce a lemon of an architecture plan. Got it? Good person, bad circumstances.

So what was wrong with the design? Quite simply, there was no client in it.

There was a technology stack, there were buzzwords, there was a very popular programming language, there even were some quasi-open source components. Lots to like, and difficult to criticize. Think about how such a conversation might go between two lemon sellers: “Why are you specifying Java, C# is the best language!” Or perhaps, “BizTalk?!?! No way, you want open standards, not lock-in!”

But there was no client in it. Tract houses are designed for the features that all inexperienced clients want to buy, making owners and tract houses interchangeable.

In all my looking at tract houses, I saw just two departures from the norm: one builder offered a two-storey model with the master bedroom on the ground floor, so that when the children moved out and the owners aged they wouldn’t need to go upstairs (bungalows solve that problem as well, but you need a much larger lot). Another style, “New Urbanism,” put garages back behind the house where they belong. But 99% of them were just variations on the same theme.

And this design articulated the features that inexperienced clients (and inexperienced software designers) like to think about. These kinds of designs and clients are equally interchangeable.

Where is the client?

What I saw was a design with such broad strokes (“Database,” “ORM,” “Workflow,” “Templates”) that it could have been presented to hundreds of different clients without change. Now obviously, hundreds of clients need databases and what-not. So the design wasn’t wrong in the sense that none of the decisions it articulated were bad decisions.

But let’s stop for a moment and compare that architecture design to a home design. Imagine you hire an architect. They put together a preliminary design, a kind of sketch, for your consideration. They call you into their office for a presentation. The lights are lowered and the presentation begins.

The Consulting Engineer speaks. “Concrete foundation!” She says. Next slide. “Wood frame.” Next slide. “Brick exterior.” You are getting the same treatment as the clients looking at a design that goes into detail about the technology stack (“Java,” “Oracle,” “Hibernate,” “BizTalk”).

Or maybe the Consulting Engineer sits down and the Junior Architect takes over. “Four bedrooms. Maybe five.” Next slide. “En suite bathroom.” Next slide. These things are all decisions that must be made, but they have little or no connection to the client’s needs.

Isn’t it obvious that a well-designed home with vinyl siding is a better choice than a poorly designed home clad in brick? But in the absence of better information, clients are forced to pick the brick over siding instead of choosing whether to have a formal dining room or whether to separate the eat-in from the kitchen with sliding doors.

And obviously two parents and three children want at least four bedrooms. But there is no talk of whether the master bedroom is on the main floor, or whether the architect has chosen to place the play room adjacent to the children’s bedrooms upstairs where the children can play without disturbing the adults or whether to place it downstairs where it can be seen from the kitchen.

It’s easy to see that the exterior of the house and the number of bedrooms are superficialities: To get at the important details, you have to ask a simple question: How is this different than what every other client is getting?

The really important architectural decisions are the ones that address how each client is unique, not what all clients have in common.

Better Software Architecture

Designing software is not easy. And truthfully, our environment makes it difficult, because our clients are not knowledgeable enough to distinguish the not-so-big applications (“Domain-specific languages,” “Agile development”) from the McMansion applications (“Industry-standard platform!” “Detailed Specifications!!”)

In the context of software developed for clients, good software architecture is, at its heart, architecture that is specific, not general. It isn’t all high-level abstractions that could apply to any client, it’s specific choices that address specific problems for that specific client.

It is easy to say that the cure for the general architecture is to add detail. If the lemon design requires five slides, flesh the design out into fifteen slides. If that isn’t specific enough, triple the length again and go to forty-five slides.

This would be the equivalent of taking the builder’s floor plan of a McMansion and filling in the exact dimensions. Or perhaps selecting the kitchen finishes and whether the shower fixture will be pressure-balanced or not.

Adding detail makes a design more specific, but it only makes it specific for a client if the choices expressed address the most important needs of the client. Naturally every new home buyer has a preference with respect to kitchen cabinetry. But does expressing that decision really reflect a deep understanding of the client’s lifestyle?

When you look at a high-level design for a client, it should be obvious at a glance that the design addresses specific needs. Someone who doesn’t know the client may need an explanation—if you looked at a home design with the master bedroom on the ground floor, would you know instantly that the clients have teen-aged children?—but if you know a little something about the client, you ought to be able to literally see the client in the design.

This should be true at each level of detail. It should never be necessary to drill down into the details to understand how the design solves the client’s specific problems. If you are looking at a five-slide high-level design, it should convey the one or two most important ways the software will solve the client’s most important and pervasive needs.

When you drill down to detail requiring forty-five slides, you should see solutions to problems that are a ninth as important as the solutions evident in the five-slide presentation.

Like Sarah’s approach, this type of design has a cost. When you only have five slides, using one slide to address a client’s specific problem means foregoing a slide full of buzzwords that impress the less-knowledgeable client.

I wish I could tell you that this will outshine the McMansion presentation from the big consulting firm full of buzzwords and no attention to the client. But it will not: most clients will buy the idea that their needs are not-so-unique, and if what they need doesn’t fit the architecture, they must change to adopt “best practices.”

But for the serious practitioner, good design is more important than technology stacks and buzzwords. More important than size and impressiveness. It may be “not so big.”

But it is better.

(Originally published in May, 2007. The image of lemons is taken from http://www.flickr.com/photos/marionenkevin/3291053507.)

Which Theory Fits the Evidence?

There are two schools of thought about the practice of managing software development (the theory of managing software development is of little use to us because “the gap between theory and practice is narrower in theory than it is in practice”).

One school is that everything is fully deterministic in practice (“Theory D”). If development appears, from the outside, to be probabilistic, it is only because we haven’t discovered the “hidden variables” that fully determine the outcome of software development projects. And, since we are talking about development in practice, it is practical to measure the variables that determine the outcome such that we can predict that outcome in advance.

The other school of thought is that development is fully probabilistic in practice (“Theory P”), that there are no hidden variables that could be used to predict with certainty the outcome of a software development project. Theory P states that the time and effort required to measure all of the variables influencing a software development project precisely enough to predict the outcome with certainty and in advance exceeds the time and effort required to develop the software.

(c) 2009 James Bowe


Theory P does not mean that software development cannot be managed in such a way that the desired outcome is nearly certain: the flight of an airplane is fully probabilistic as it encounters atmospheric conditions, yet we have a huge industry built around the idea that airplanes arrive at their destinations and land on the runway as planned.

why do theory p and theory d matter?

Understanding whether software development follows the Theory D (fully deterministic) model or the Theory P (probabilistic) model helps us set our expectation for the relationship between what we plan and what transpires.

If we believe Theory D, we believe that it is possible and practical to plan software development entirely in advance. Therefore, when things do not go as planned, our first reaction is to either blame the planners for faulty planning or to blame the implementers for failing to carry out a reasonable plan. Believing in Theory D, we believe that we ought to have a plan that can be carried out to perfection.

Programming is not complicated because computers are complicated—it’s complicated because your requirements are complicated (even if you don’t know it yet).–Chris Ashton

If we believe Theory P, we believe that it is only possible and practical to plan some part of software development in advance. Therefore, when things do not go as planned, our first reaction is to embrace the new information and update our expectations. Believing in Theory P, we believe we ought to have a process for continually updating a plan that asymptotically approaches a description of reality as the project nears its conclusion.

belief drives behaviour

Our belief about which theory is true drives the way we manage software development projects in almost every way. Here are three examples: the way we manage software design, the way we manage time estimates, and the way we manage selecting people.


Theory D adherents believe you can design software in advance. They believe it is possible to collect all of the information needed about software’s requirements and the technical elements of its construction, such that you can fully specify how to build it before you start. In short, Theory D adherents believe in Big Design Up Front.

Theory P adherents believe that software can only partially be designed in advance. They believe that requirements suffer from observation, that the act of building software causes the requirements to change. Theory P adherents also believe that technical factors cannot be perfectly understood, that only the act of trying to build something with specific components will reveal all of the gotchas and who-knews associated with a chosen technology strategy. They believe that software design is an iterative process, starting with a best guess that is continually refined with experience.

Theory D adherents believe it is possible to estimate the amount of time required to develop software (in both the large and the small) with precision. This is partly a consequence of their belief that you can know the requirements and design in advance, and therefore you can plan the activities required without uncertainty. Theory D adherents do not plan to miss milestones. Theory D adherents do not, in fact, have a process around re-estimating tasks; instead, they have a mechanism for raising exceptions when something goes wrong.

Theory D adherents believe that the normal case for software projects is that tasks are completed within the time estimated. (If extra time is required, people on Theory D projects work nights or weekends, or they cut testing time. They do this because their belief is that if a task takes too long, the fault lies with the estimate or with the worker carrying out the task, and by working overtime they can “make up for their fault.” Theory D managers often “game” their workers by “negotiating” estimates downward in a cruel game of “guess the estimate I’m thinking of.”)

Theory P adherents believe that there are lies, damned lies, and software development estimates. This is partly a consequence of their lack of faith that the requirements are truly fixed and that the technology is fully understood. If you don’t know what you’re doing and how you’ll do it with precision, how can you know when it will be done? Theory P adherents build processes around re-estimating estimates, such as burndown charts and time-boxed iterations.

Theory P adherents are always fussing with an updated view of how long things will take. They talk about “velocity” or “effective vs. actual programmer-hours.” Theory P adherents believe that the normal case for software projects is that tasks are rarely completed exactly as estimated, but that as a project progresses, the aggregate variance from estimates falls.
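The arithmetic behind that continual re-estimation is simple. A minimal sketch, with all numbers hypothetical (real teams measure these each iteration, and the point is that the projection is revised, not promised):

```python
import math

# Re-estimating with "velocity": a toy burndown calculation.
# All figures below are invented for illustration.

completed_points = [18, 22, 20]   # story points finished in each past iteration
remaining_points = 80             # current estimate of the work left

# Velocity is observed throughput, averaged over past iterations.
velocity = sum(completed_points) / len(completed_points)

# Projected iterations remaining: an expectation, updated every
# iteration as new actuals arrive, not a commitment fixed up front.
iterations_left = math.ceil(remaining_points / velocity)

print(velocity)         # 20.0
print(iterations_left)  # 4
```

The Theory P point is that `remaining_points` and `velocity` both change as the project teaches you things, so the projection converges on reality only as the work nears completion.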

Theory D adherents believe that the most important element of successful software development is planning. If a plan is properly constructed for the design and development of a software project, the actual implementation is virtually guaranteed. Theory D adherents invest most of their human capital in “architects” and “managers,” leaving little for “programmers.” They often have architects, senior developers, and other “valuable resources” involved in the early stages of projects and then moved to the early stage of other projects, leaving the team to implement their “vision.” They likewise believe that you can “parachute” rescuers into a troubled project. Since the plan is perfect, it is easy to jump in and be productive.

Theory D adherents believe in “architecture by proxy,” the belief that using frameworks, components, programming languages, libraries, or other golden bullets makes it possible to employ lesser talents to perform the implementation of software, since the difficult decisions have been made by the creators of the pre-packaged software. Theory D adherents also believe in “success by proxy,” the belief that using methodologies, practices, SDLCs, or other buzzwords makes it possible to employ lesser talents to perform the management of software development, since the difficult project management decisions have been made by the “thought leaders” who coined the buzzwords.

Theory P adherents believe that the most important element of successful software development is learning. They invest their human capital more evenly between implementers and architects, often blurring the lines to create a flatter technical structure and a more egalitarian decision-making environment. This is a consequence of the belief that learning is important: if you invest heavily in a few “smart” people, you have a very small learning surface exposed: there is only so much even very bright people can learn at one time. Whereas when the entire team meets a certain standard for competence, there is a very large learning surface exposed and the team is able to absorb more information.


They strongly prefer to have the same team work a single project from start to finish, believing that when a member moves on to another project, crucial knowledge moves on with them. They likewise abhor bringing new members onto a team late in a project, believing that the new people will need experience with the project to “get up to speed.”

Theory P adherents use frameworks (especially testing frameworks), but are skeptical of claims that the framework eliminates technical risk or the need for talented contributors. Theory P adherents, even Agilists, are skeptical of methodology claims as well. They do not believe that a deck of slides and a nicely bound book can capture the work required to learn how to develop software for a particular user community in a particular environment.

Theory D and Theory P adherents are easy to distinguish by their behaviour.

so which theory fits the evidence?

Which theory fits the evidence collected in sixty years of software development?

To date, Theory P is the clear winner on the evidence, and it’s not even close. Like any reasonable theory, it explains what we have observed to date and makes predictions that are tested empirically every day.

Theory D, on the other hand, is the overwhelming winner in the marketplace, and again it’s not even close. The vast majority of software development projects are managed according to Theory D, with large, heavyweight investments in design and planning in advance, very little tolerance for deviation from the plan, and a belief that good planning can make up for poor execution by contributors.

Does Theory D reflect reality? From the perspective of effective software development, I do not believe so. However, from the perspective of organizational culture, Theory D is reality, and you ignore it at your peril.

Do not confuse Computer Science—the study of the properties of computing machines—with Software Development, the employment of humans to build computing machines. The relationship between Computer Science and Software Development parallels the relationship between Engineering, the hard science of the behaviour of constructions, and Project Management, the employment of humans to construct engineered artefacts.

(Originally published in June, 2007. The image of dice is taken from http://www.flickr.com/photos/jamesrbowe/4001776922/in/photostream/.)

d is for “d’oh! we should have gone with p!”

Several people have pointed out that Theory D is more attractive to “stake holders” than Theory P. Here are my unvarnished thoughts:

  • How you structure a business arrangement is not relevant to how the world works in practice: For example, I can hedge against further devaluing of the U.S. Dollar using various derivative instruments. This does not change the fact that the US Dollar moves against the Canadian Dollar. The “Theories” essay was talking about what people believe, not what they negotiate.
  • Regardless of what games you play with commitments and consequences, development teams (including management and stake holders) do not have the authority to bend the laws of space and time. If you press developers to work 90+ hours a week, they make mistakes. If you change the requirements every week but insist that the project stay on schedule, you either get lower total functionality or the team skips QA. As a stake holder, that kind of thing matters to me.
  • If you structure a fixed price contract for development specifically because you want the developer to carry the risk, you believe in Theory P. That’s right, you believe it’s probabilistic, that’s why you hedge against disaster by talking the developer into assuming the risk. So it’s more correct to say that some stake holders prefer to structure fixed commitments from developers than it is to say that stake holders believe in Theory D.
  • On the flip side, if you agree to a fixed price contract, that doesn’t mean you subscribe to Theory D either. Perhaps you fully embrace the probabilistic nature of projects, but you have done a risk analysis and believe this is a profitable deal, just like someone selling an option on Wall Street.

So there are two issues: what people believe about how projects work, and what deals people negotiate with each other. I agree that adversarial negotiation games contribute to fixed schedules. But I think that is often evidence of stake holders who have embraced Theory P, not the opposite.

Interlude: The Programmer’s Dilemma

The Vice-President of Development hitched his chair forward, looked intently at the team lead, and deliberately softened his face. He was a master of body language, and he knew the exact fatherly tone of voice he needed to get the job done. The project had gone down in flames after a miserable Death March, and this meeting was a necessary step towards healing, reconciliation, and gearing up for the next project that would go down exactly the same way.

(c) 2006 Lars Plougmann


“Nash, you’re a terrific programmer and you have a lot of promise here as a team lead. I hired you, I believe in you, I want to see you succeed. I want to help you, but you have to help me help you. You and Pareto, the Marketing Project Manager, are both saying that the Waterfall Methodology just doesn’t work. Fine. If you both stick with that story, you’ll be ok for now, but to be honest, this company values process and blaming the process doesn’t look great.

“And right now, Pareto’s in a meeting with the Director of Marketing just like you’re in here with me. If you say it was the process, but Pareto says that you and the team screwed up, you’re going down hard and I can’t protect you. You’re out on the street. And you know the Director, he’ll do everything in his power to get Pareto to screw you over.”

The VP paused, watching the lead, waiting for the tell-tale tic. Aha, that subtle squirm. He leaned forward and lowered his voice. “Of course, if Pareto blames the process but you can show that he ignored your warnings about delays and misrepresented progress to the board… Well, that would be different. The Director’d let Pareto go in a shot just to cover his own ass, and he’d forget about you in a week. Just one of those things.”

“Of course, if you try to blame Pareto and he blames you… well, I can’t say what would happen. Not great, but with some work you could get back on the fast track. Better than being fired, for sure. Not as good as if you both stick to your theory that Waterfall doesn’t work, but here’s the bottom line: the way I see it, no matter what Pareto says, you’re better off saying Pareto screwed up than saying the methodology was at fault. And thanks to the methodology you say doesn’t work, you have the evidence you need to show that Pareto wasn’t doing his job.

“So it comes to this: If he says it’s the methodology and you say it’s him, he goes down hard and you look like a star. If you both say it’s the methodology, well, that’s not bad. If he says it’s you and you say it’s him, well, not great but better than if you whine about the methodology and he says you screwed up: If you try to blame the methodology and he knifes you, you’re a dead man.”

“So, what’s it going to be? Is Waterfall a broken methodology? Or was Pareto doing it wrong?”

The obvious moral of the story is that in the abstract, it’s easy to blame the process. But once a project fails, blaming people for doing it wrong is always safer than blaming the process and taking a chance that the other people will blame you. And this could be why Waterfall survives: It’s the process that optimizes for blaming other people.
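The VP is staging a textbook prisoner’s dilemma. A toy payoff table (the ranks are invented for illustration, higher being better for Nash) shows why “blame Pareto” dominates no matter what Pareto does:

```python
# Nash the team lead's choices vs. Pareto's, as a prisoner's dilemma.
# Payoffs are invented ordinal ranks for Nash (higher = better outcome).
payoff = {
    # (nash_says, pareto_says): Nash's outcome
    ("blame_pareto",  "blame_process"): 4,  # Pareto goes down; Nash looks like a star
    ("blame_process", "blame_process"): 3,  # both blame Waterfall: ok for now
    ("blame_pareto",  "blame_pareto"):  2,  # mutual blame: not great, recoverable
    ("blame_process", "blame_pareto"):  1,  # Nash blames the process and gets knifed
}

# Whatever Pareto says, Nash does strictly better by blaming Pareto:
for pareto_says in ("blame_process", "blame_pareto"):
    assert payoff[("blame_pareto", pareto_says)] > \
           payoff[("blame_process", pareto_says)]
```

Blaming the person is the dominant strategy, which is exactly why both players end up in the worse-for-everyone mutual-blame cell rather than the cooperative “it was the process” cell.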

A few people have asked whether this story could just as easily be told about Agile. Methodologies are distinguished by whether they conform to Theory D or Theory P.

Theory D methodologies, like Waterfall, make it easy to assign blame later. With Waterfall, marketing can show that the programmers failed to hit their deadlines, and development can show that marketing changed the scope and failed to properly specify their requirements. Theory P methodologies like XP and Scrum constantly recalibrate expectations, so if the result is disappointing, you are left blaming the inelasticity of space-time.

Thus, this story really only applies to Theory D methodologies: They make it more difficult to diagnose failure because they encourage blaming people rather than the process.

(This was published in June, 2011. The picture of a noose on the Thames is from Flickr.)

Project Management acts like a Marketplace for Information

“A social problem!?” Really? This is painfully obvious. Most managers do try to juggle the social issues of a software development project along with the technical issues. Clearly this is not news; clearly people understand the implications of social issues on software development.

And yet, there are two social issues that are often overlooked in project management. In this chapter, we are going to look at one of them: the marketplace for information and its effects on development projects.

Information is the raw material used in every software development activity. Developers turn ideas into code (with caffeine acting as a catalyst, of course), product managers turn problems and opportunities into features, and project managers turn information about the state of the project into a to-do list for what to work on next.



Close your eyes and imagine a development team as a big steam engine. There are boilers, valves, tanks, turbines, pistons, and of course pipes. The pipes don’t do any of the work. But they do the vital job of moving the water and steam around to where it’s needed.

Metaphors are leaky abstractions. So no, information is not like water or steam. But the image of pipes helps me remember how important it is for a development project to convey information within itself without losing or corrupting it along the way.

The trouble is, there is another metaphor for the behaviour of information within a development project: A marketplace.

There is a market for information in a project, with information flowing to where it’s valued and away from where it is not valued. This would be fine if the “value” of information was always its utility for moving the project towards completion. However, markets for information are just like “real-world” markets: Full of uncertainty and conflicting human needs. They are inefficient.

In order for information to flow to where it is most valuable for moving the project forward, the participants in the project must value the information properly. Managers “buy” information. They trade favours like letting you keep your job for information about how well you are doing your job. In order for them to buy it, they have to “price” it.

Fruit Market


Alas, most managers, especially those with limited experience shipping software on a predictable schedule, do not know how to correlate what they’re told about the project with the likelihood that the project will succeed. Thus, they cannot properly price the information.

When they price it wrong, they pay high prices for junk information–like whether a developer is punching the clock consistently or how many lines of code they wrote–and low prices for valuable information–like the thought that a massive infrastructure investment might be avoided if the team pushes back on what seems like an inessential requirement.

Managers also “sell” information, literally: They have to make a report or a presentation to their superiors, or to stake holders, or to their fellow founders at the YCombinator dinner.

When a manager cannot tell the difference between information that is useful for predicting the outcome of a project and information that is not useful for predicting the outcome of a project, she thinks about the next best thing: The “resale value” of the information with people one step removed from the project, like her own manager. So she values things like pretty PowerPoints about the architecture higher than finished pieces of functionality.

(This is why I have always sweated my heart out to give good presentations. My teams have depended on me being able to take good information and sell it upstream just as if it were CMM Level Five Buzzword-Compliant Junk).

Do managers further removed from a project always value pretty junk more than good, solid information? Not always, but often. And that’s enough for people to be pressured to give the bad information that sells to their manager, while hiding the good information that doesn’t sell. Exactly like the owners of good cars taking their treasures off the market.

So the lesson here is that while project managers can directly use information, they sell information, and many times the “resale value” of the information is more important than its intrinsic value to the manager. This creates an incentive for them to buy information with high resale value regardless of its intrinsic value to the problem of shipping the software.

what kind of information sells?

Why does junk information outsell good information? Nice PowerPoint isn’t a good explanation by itself: there are nice PowerPoints explaining Agile, but most managers still prefer Waterfall.

Consider a “Not-So-Big” design. Let’s call completing that design good information: we’ve done a good job finding out what’s really important for the project and making a design that emphasizes the way this project is unique, not the technology stack.

Now consider a typical technology design, emphasizing frameworks and technologies. Fully buzzword-compliant.

Which one sells better? The technology stack does. Why? Well, for starters, managers have been exposed to seventeen billion dollars’ worth of advertising talking about the benefits of technology stacks. Nobody is advertising the specific ways the Not-So-Big design helps the project. How could they? Those are specific to the project; that’s the whole point.

And managers are like anyone else: they compare what you are doing to successful projects they have seen in the past. Once again, the Not-So-Big design doesn’t have anything in common with other projects, but the technology stack does. (There are lots of failed projects with technology stacks, of course. But who cites those when bugging the team about whether they will use Hibernate as their ORM?)

How did this happen? How did things that have no correlation to the success of a project become more attractive than things that do?

Quite simply, people have an incentive to look successful. So they imitate the outward appearances of successful projects. We have a really simple way of completing successful software projects: we put successful people on them. But we have a broken way of thinking about it: we don’t like to think of the people as being special, we think that what the people do is successful.

And by that logic, we can take anyone, have them do the same things as successful people, and our projects will succeed. In a manager’s mind, the measure of whether information is good or not is, Does it measure whether people are doing the same things that successful people have done on projects I’ve been told were successful?

This is not the same thing as measuring whether the project is on its way to success at all. This measures the outward appearance of a project. Things that can be measured easily are rarely the most significant things. Behaviours that can be “gamed,” like how many hours a team is working, will be gamed.

And as above, even if a manager knows better, does her manager know better? If not, good information will be difficult to sell, and she will be under a lot of pressure to come up with information that does sell, whether it’s good, indifferent, or even harmful to the project.

The marketplace for information has a profound effect on the viability of a development project. There are natural incentives in nearly all social organizations for bad information to outsell good information, and for that reason, a key to success in project management is the ability to recognize the existence of this marketplace and ruthlessly work to ensure that the good information is conveyed within the project and the bad information is forced out.

(This chapter adapts a portion of “Still Failing, Still Learning,” originally published in June, 2007. The pipes image can be found at http://www.flickr.com/photos/seeweb/6115445165/in/photostream/ and the fruit market image at http://www.flickr.com/photos/xavitalleda/6168935426/in/photostream/.)


I wrote this essay as I was finishing off some work with a corporate client before moving back to my natural position in product development.

It was a good time to reflect on what was straightforward and what was difficult, what worked and what didn’t. It has been a very positive experience overall, and I have learned a few more things. Here is a hotch-potch of my thoughts at the time about corporate projects, clumsily organized around a single metaphor.

software is not made of bricks

Although very few managers ever express it directly this way, many behave as if developing a piece of software is like building something fairly simple out of bricks. It might be something large. But it’s still fairly simple.

This is tempting. The inexperienced person thinks that bricks are easy to understand: they’re all the same, so if you know how to work with one, you know how to work with them all. You can measure the progress of building something out of bricks by counting the bricks in place. And anyone can contribute, even if only by moving bricks from one place to another using brute force.

When you have a brick by brick mentality, deep in your soul you believe that a project contains a fixed amount of work. It’s just a pile of bricks. There are so many screens, so many lines of code. You think this to be true because when you examine finished applications, you can count these things. So you engage in a discovery step up front where you estimate how many screens and how much code will be needed, then you play some games with numbers of people and the amount of work they can do per day, and out comes an estimated ship date.

You believe that since the finished work contains a fixed number of bricks, it is possible to know in advance how many bricks will be needed, and where they belong in the finished software. (More on this in the chapter “Which Theory Fits the Evidence?”)

This model of software development leads to several natural assumptions about how to organize a project. These assumptions are logical consequences of the belief that software is made of bricks:

assumption: it’s all about moving bricks

The brick by brick mentality thinks of software development as making a pile of bricks. Think of the stereotypical Egyptian Pyramid as an example. There are so many bricks to pile and then you’re done. If it’s all about moving bricks, any work that moves bricks contributes to the success of the project.

(c) 2008 Benjamin Esham

(c) 2008 Benjamin Esham

That’s a comforting thought. Just keep those bricks moving. This helps us with all sorts of problems. Some people debate whether star programmers really are twenty times more productive than doofuses. Who cares? As long as the doofus can move bricks, eventually the work will get done.

So if you have a poor performer, someone who is slow and not very careful, you can use them on a project. Just find the right place for them where they can’t accidentally wreck the whole pyramid, and they can help. Ok, they are not good with the tricky booby traps or aligning the windows to allow light to strike the altar at the solstice. Fine. But what about ferrying bricks from the dhow to the base of the pyramid? Doesn’t that move the project forward?

Can’t you hire almost any warm body with ten working fingers and put them to work somewhere? Perhaps they can fiddle with page layouts, or copy the work of more experienced developers when implementing new features that are similar to existing features. But an extra pair of hands is always helpful, right?

software is more complicated than bricks

This assumption is wrong. The reason it is wrong is that software is deep. It is not a simple pile of bricks. Examining a finished piece of software, it is easy to discern surface forms like patterns, variable names, or rough organization. But the motivations for these choices are often subtle and opaque to the journeyman.

You can observe this the next time you are interviewing developer candidates. Ask them to name a design pattern: perhaps they respond, “Singleton.” Design patterns are surface forms. Now ask them to explain what problem the pattern solves. They respond, “Ensuring there is exactly one of something.” We are still working with the surface form.

Ask why we want just one of something like a database connection pool. What problem are we solving? Why can’t we use class or static methods to solve this problem? What are the real-world issues with having 1,000 threads sharing a single database connection pool? How would you build ten pools? Or share connections without a single pool?
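These interview questions can be made concrete. Here is a minimal, hypothetical Ruby sketch (not from the original essay; all names are invented for illustration) contrasting the surface form of Singleton with one of the alternatives the questions point at:

```ruby
# A hypothetical sketch contrasting Singleton's surface form with one
# deeper question it raises.

require 'singleton'

# Surface form: "ensure there is exactly one of something."
class ConnectionPool
  include Singleton
  def initialize
    @connections = Array.new(5) { Object.new } # stand-ins for real connections
  end

  def size
    @connections.size
  end
end

# Deeper question: what if we need ten pools, or an isolated pool for a test?
# A pool passed in explicitly solves the same sharing problem without a global.
class PlainPool
  attr_reader :size
  def initialize(size)
    @size = size
  end
end

def report(pool)
  "pool with #{pool.size} connections"
end

puts report(ConnectionPool.instance) # the one-and-only instance
puts report(PlainPool.new(10))       # as many pools as we like
```

A candidate who can only name the pattern stops at `ConnectionPool`; a candidate who understands the underlying problem can explain when `PlainPool` is the better choice.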

All of these questions drive at the deeper issues underlying development choices. A developer who treats their work as moving bricks, who simply copies the surface form of code they encounter, is oblivious to the motivations behind the code. They do not know when to use it and when to forgo it. They do not understand alternative ways of solving the same problem. They reproduce it blindly (and often incorrectly).

The result is software that superficially appears to be of acceptable quality because its surface form has things in common with good software. However, just because good software may be constructed out of commands and strategies, this does not mean that software constructed of commands and strategies is good.

What is needed on a software development project are people who understand the nuances, the requirements, the underlying problems. If you think that you are building a pyramid, what you want are architects, not slaves.

When you add people to a project who do not deeply understand their work or the problems the project faces, you create the superficial appearance of progress (look at all the bricks!), but you are slowly building up a mass of unworkable code. You may not see the problems immediately, but in time you will discover that everything they have touched needs to be re-written.

determine the baseline competence required for a project and don’t violate it

Once you understand that software is not a simple pile of bricks, you understand that the minimum level of competence required to contribute positively to a project is non-trivial. You can decide for yourself whether you need the mythical great hackers or not. But there is a minimum level of competence, and if you do not allow persons below that level onto your project, you will succeed.

In fact, you are far better off with a core of competent people and no sub-competent help than you are with the same group of people and the assistance of “helpers” on the side. Those “helpers” require three or four times as much management attention as the core contributors if you are to keep them from breaking things. And as we’ll see below, re-organizing your project so that there are tasks to keep them busy is usually harmful in and of itself.

Protecting yourself from people unlikely to make a positive contribution may require adroit maneuvering on your part. On one project, I explained that we could not complete the work in the time requested by the client. The response was to offer us some part-time assistance by employees of the client. Those particular employees may have been talented, but their experience was not a direct fit for the technical work of the project, and they did not have a full-time commitment to the success of the project.

Rejecting such “assistance” is tricky: other managers may have trouble with the idea that the project will move more slowly with the extra help, rather than move more quickly. Those managers see your project as a pile of bricks, and you’ll need to educate them if you are to avoid disaster.

software development is difficult to parallelize

The metaphor of a pyramid being built, brick by laborious brick is useful for illustrating another principle. When you assume that an application is a pile of a million bricks, you assume that you can move bricks in parallel. You can have one thousand people on the project, and if each places one brick per hour you will move forward at a constant rate of one thousand bricks per hour.

(c) 2007 Lyn Gateley

(c) 2007 Lyn Gateley

Software is not like this. Parallelizing development has serious and sometimes fatal consequences. The main problem is that the pieces are usually coupled in some way. There are techniques for lowering coupling between “bricks,” but when you set out to place two related bricks simultaneously, you must, perforce, do some kind of design or thinking ahead of time as to how they relate so that you can place them properly.

Consider two pieces, A and B. The natural dependency between them is that B depends on A. The right thing to do is to build A, and then build B when you are happy with A. But the zealous manager with bricks on her mind asks, “Why can’t we decide on an interface, I, between A and B, then build both at once?” She wants to build I, then A and B simultaneously.

Of course, this constrains A and B tremendously. Any flaw or shortcoming in I that you discover as you build the pieces will result in rewriting both A and B. Only you are under time constraints, so you just patch and kludge, because the schedule does not have time allocated for redoing things: your motivation in parallelizing A and B was to save time, so the schedule has no room for the possibility that it will take longer to write A and B in parallel than in series.

This makes no sense to the person who thinks software is made of bricks! Looking at the finished bricks, what’s the problem? It takes x hours to make A and y hours to make B; why would making them in parallel take longer than x + y, rather than roughly max(x, y)?
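The arithmetic can be sketched with a toy cost model. All the numbers here are invented for illustration; the point is only the shape of the result:

```ruby
# A toy cost model, with made-up numbers, sketching why parallelizing
# coupled pieces can take longer than building them in series.

x = 40.0 # hours to build A
y = 30.0 # hours to build B

serial = x + y      # build A, then build B on top of a settled A
ideal  = [x, y].max # the brick-minded expectation for parallel work

# In practice, each flaw discovered in the agreed interface I forces
# rework in both A and B. Assume, purely for illustration, three such
# flaws, each costing a quarter of both pieces.
flaws           = 3
rework_per_flaw = 0.25 * (x + y)
parallel        = ideal + flaws * rework_per_flaw

puts "serial:   #{serial} hours"
puts "ideal:    #{ideal} hours"
puts "parallel: #{parallel} hours" # exceeds serial, not just max(x, y)
```

Under these assumptions the “parallel” plan is the slowest of the three, which is exactly the possibility the brick-minded schedule leaves no room for.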

Try the following: give piece A to one person, wait for it to be done, and then give piece B to another. Whoops, when the person working on B has a question about how A works, they have to track down the author and interrupt her. And if working on B teaches you something about A, is the person working on B supposed to change A? Or is the original developer supposed to backtrack and change it?

This explains a well-known nugget of wisdom: One reason adding people to a late project will cause it to slip further is that you are increasing parallelism. If the project was originally at or beyond its natural limit, further parallelism lowers productivity.

Or another example. You have 100 reported bugs to fix. You have 100 people. Do you assign one bug to each person? No way! Experience shows that bugs are rarely fully de-coupled from each other. You have to analyze the bugs as a team and try to guess their causes and relationships. If bug forty-nine is a simple text change on a page, anyone can fix it. But if bugs one, four and nine are all related, you need one contributor to address them simultaneously. Sending three people in to fix them in parallel would be a disaster.
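The triage step above amounts to grouping bugs by suspected root cause before assigning anyone. A minimal Ruby sketch, with invented bug data:

```ruby
# Hypothetical sketch: group reported bugs by suspected root cause, then
# assign each group of related bugs to a single contributor.

bugs = [
  { id: 1,  cause: :date_parsing },
  { id: 4,  cause: :date_parsing },
  { id: 9,  cause: :date_parsing },
  { id: 49, cause: :page_text },
]

assignments = bugs.group_by { |bug| bug[:cause] }

assignments.each do |cause, related|
  ids = related.map { |b| b[:id] }.join(', ')
  puts "#{cause}: bugs #{ids} -> one contributor"
end
```

One-bug-per-person assignment would send three people after the same date-parsing defect; grouping first keeps the related work in one head.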

Any time two or more pieces are strongly related either by design or by coupling in the application, it is a mistake to give each one to different people to build or fix.

In software, you want to minimize dependencies between pieces, which in turn means being very, very careful to minimize parallelism. Obviously, there must be some parallelism on any project with more than one contributor. But every project has a natural maximum amount of parallelism. Gratuitously chopping tasks into bricks to increase parallelism beyond this natural limit lowers productivity rather than increases it.

how to make the team twice as productive without parallelizing everything

What if you need two pieces, A and B, and you can’t wait for the normal amount of time to develop A and then B? Here’s an idea: instead of treating them like bricks and trying to develop A and B in parallel, why not simply hire one person who works twice as quickly? And have them develop A and B in series?

Think about this for a moment. There are a lot of claims out there that good people are three, five, ten, even twenty times as productive as the average. This seems intuitively wrong: when you look at their finished work, it rarely looks that much different from the work of the average person. So you figure the claims can’t be correct.

The finished work of the allegedly great person doesn’t look too outlandish. Ok, it has map and reduce instead of loops, and now that we look at it, the so-called great person seems to deliver fewer bricks, not more. What’s going on?

Let’s think about bricks for a moment. What if this essay is right, and many times building bricks in parallel takes more time than building bricks serially? What if it’s very hard to coördinate the interfaces and contracts between pieces that are built by different people?

If most projects assign related bricks to different people, and most projects further compound this error by trying to “exploit parallelism,” you can get a big productivity win just by bucking the popular choice and asking one person to do all of the related work themselves. They’ll be as productive as a team of other people simply because they aren’t burdened with the heavy cost of parallelism and from the wrong people working on pieces.

Of course, you need someone who is able to keep two pieces in their head at one time. That’s one of the advantages of hiring good people: they don’t necessarily need to build things that are twice as complicated: if they can keep twice as much in their head at one time, they can build related things without incurring the costs of splitting development between people.

software is transfinite

The other wrong assumption about software being like bricks is that you can measure progress on a software development project by examining physical features of the software, by counting bricks.

The underlying thought is that you imagine the finished software as a pyramid of bricks, a pile of them. You count how many bricks will be in the finished application. Now you can measure your progress by counting how many bricks are “done.”

This is very wrong, and it leads to troubled projects. The first problem with this assumption has been given above: if you need a million bricks for your application, you ought to be able to make use of absolutely anyone to move the project forward in some small capacity. As long as they move a brick an hour, they are helping. So, a brick an hour, a million bricks… let’s employ 1,000 sweating slaves for ten hours a day for one hundred days and we’ll have our pyramid. All we need are an architect and a team of overseers with sharp whips to see to it that they work without flagging.

But what happens when the millionth brick is placed and we are nowhere near completion? It turns out that software’s requirements are fluid, so fluid that you could place as many bricks as you like and still not be finished.

Measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs.–Bill Gates

In fact, moving a lot of bricks is counterproductive: the physical manifestations of software, like written code, design specifications, and unit tests, have mass just like bricks. And if you want to redo things, the more mass you have, the harder it is to move and reshape it. In that sense, software is exactly like bricks.

Only, what you want to do is move the minimum number of bricks required to test your assumptions about whether the software is complete enough. It will never be complete, but trying to measure completeness by bricks is wrong.

There are only two meaningful ways to measure progress on a software development project: the first is to ask the team to estimate how much work remains, given the most up-to-date expectation for the form of the finished application. The second is to measure customer satisfaction.

how to measure progress on software development projects with estimated work remaining

Given the most current understanding of what is to be done to complete the application, it is meaningful to ask the team to estimate how much time will be required to complete the work.

This sounds conservative, even traditional. Doesn’t every project do this when they prepare the plan? What’s the catch? The catch is, if you only do it once, you only know your progress once. This differs markedly from the traditional model, where you plan once, estimate once, and thereafter you measure progress against your plan, rather than estimating again.

As the project progresses, the client’s requirements change. This is especially true if the client is given the opportunity to interact with the team and engage in the learning process: Agile’s claim that requirements are subject to change is a self-fulfilling prophesy.

To measure meaningful progress, you must re-estimate on a regular basis. If you wish to give a meaningful progress report every two weeks, you must ask the team to estimate the work remaining every two weeks. If, instead, you simply take how many bricks you thought you needed a few months ago, count how many have been moved to date, and calculate the work remaining through simple subtraction, your reports drift further and further away from reality every fortnight.
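The two reporting styles can be sketched side by side. The numbers here are invented; what matters is that the subtraction report and the team’s re-estimate drift apart:

```ruby
# Hypothetical sketch of two progress reports. "Subtraction" measures
# against the original plan; re-estimation asks the team, each fortnight,
# how much work actually remains.

original_estimate = 100               # units of work planned at kickoff
completed         = [0, 20, 40, 60]   # cumulative work done, per fortnight
reestimates       = [100, 95, 80, 70] # the team's view of work remaining

completed.each_with_index do |done, fortnight|
  by_subtraction = original_estimate - done
  puts "fortnight #{fortnight}: plan says #{by_subtraction} left, " \
       "team says #{reestimates[fortnight]}"
end
```

By the last fortnight the plan claims 40 units remain while the team sees 70: the gap is the technical debt and changed requirements that the subtraction report cannot represent.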

We know that measuring against the original plan does not work, both from experience in the field and from critical thinking.

We also know that as we work with the software itself, we learn more about how much work is required to complete it. For example, if you load a project’s team up with inappropriate contributors, maximize parallelism, and perhaps go three for three by minimizing testing and bug fixing, you will compound a tremendous “technical debt” over time.

Measuring “progress” against the original plan does not include the technical debt in your estimate of the work to be completed. Asking the team to estimate the amount of work to be done gives them an opportunity to factor the consequences of technical debt into their estimates. Or of any other factor that reflects what the team is learning over time.

In essence, you have an opportunity to include off-balance sheet items in your measurement of progress, whereas measuring against bricks would have excluded those factors.

how to measure progress on software development projects with customer satisfaction

Measuring customer satisfaction is easy. All you have to do is ask the customer. A successful project increases satisfaction over time. An unsuccessful project does not. I boldly posit that any project that increases customer satisfaction over time is a successful project, regardless of what was originally written in a specification.

There is no simpler or surer way to increase customer satisfaction over the long term than to let them experience the application as it grows and to rate your progress by how much their satisfaction increases with the software itself.

Customer satisfaction is a key metric because software is not a pile of bricks. It is impossible to predict with certainty the set of requirements that will result in maximum customer satisfaction at the end of the project, so you must measure satisfaction as you go. That being said, there is a pitfall looming when you ask the customer to judge their own satisfaction.

Some customers have difficulty understanding the features and characteristics of software that will meet their needs in a cost-effective way. They have trouble distinguishing good software from bad, good applications from lemons.

Although such a customer may need a not-so-big application, they may demand an Enterprise Solution.

This manifests itself in the customer demanding proof of progress in the form of elaborate documents, plans, and diagrams instead of working software that solves a portion of their problem. It manifests in the customer demanding proof that you’ve “hired up” to meet their needs. Although these things have their place, none of them are working software. They are promises to develop software, and subprime promises at that.

Life is not all project management and wrestling with customers. To Mock a Mockingbird is the most enjoyable text on the subject of combinatory logic ever written. What other textbook features starlings, kestrels, and other songbirds? Relax with a copy and let your mind process the business side of development in the background. You’ll be a better manager for it!

It is easy to obtain short-term success with such a client: deliver what they ask for. This is the exact business model followed by real estate developers specializing in inexperienced, first-time buyers: offer the superficial features that provide short-term excitement at the expense of long-term satisfaction. In the case of software, you can dazzle the inexperienced customer with head counts, power points, and diagrams showing Jenga-piles of technology.

Should you do this? That is up to you, it depends on whether you wish to build short-term customer satisfaction at the expense of long-term satisfaction with the software itself. If you wish to deliver long-term satisfaction with the software, you may need to educate the customer to focus on the software itself.

And that means delivering increments of a functioning application for the client to experience. There is no simpler or surer way to increase customer satisfaction over the long term than to let them experience the application as it grows and to rate your progress by how much their satisfaction increases with the software itself.

building software without treating it like a pile of bricks

Sometimes, it really does boil down to a few simple ideas, working in concert:

  1. Hire people with a minimal competency. Do not be seduced into accepting “help” from people who are not able to contribute to the team at this level.
  2. Minimize parallelism. Exploit the talent of your best developers by giving them chunks of related work.
  3. Measure progress by continually re-estimating the work to be done and by customer satisfaction. Educate the customer to prefer completed work over documentation and promises.

It’s that simple.

(Originally published in August, 2007. The image of lego bricks is from http://www.flickr.com/photos/bdesham/2432400623/in/photostream/. The image of the Great Pyramid of Giza is from http://www.flickr.com/photos/lyng883/2168065634)

Trial-and-error with a feedback cycle

Don’t EVER make the mistake that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle.

That’s giving your intelligence much too much credit.–Linus Torvalds

trial and error

A common mistake when trying to be “Agile” (or trying to appear to be Agile without actually being Agile) is to shorten milestones while still maintaining the idea that we have a plan for every milestone done up at the beginning of the project and we consider changing the plan to be a management failure.

The whole point of trial and error is to make our plan, but to accept the inevitability that our plan will not survive contact with reality. I was working on one project, and the first two-week cycle went badly sideways. The project manager complained that she hated missing milestones early in a project: they foreshadow all sorts of problems. She wanted to impose overtime to hit the date.

I urged restraint. If we were wrong about our velocity, or wrong about some of our technical predictions, or wrong about some of the requirements, it would be far better to adjust our expectations. The two-week cycles were a bonus because they were providing the feedback we needed to adjust our plans.

If you ignore the evidence, if you try to twist the appearance of reality to suit your plan rather than twist your plan to suit reality, the feedback is wasted.


Feedback is so critical, that even if you do everything else right, and even if you think you are gathering and responding to feedback, you can still fail. One major project management anti-pattern is called The Waterfall Trap. This is when you build software incrementally instead of iteratively.

“Increment” and “Iteration” are not synonyms. Increments are components of the finished software that do not provide value in and of themselves, like a database and its associated ORM doo-hicky. When you build an increment, what feedback do you have? Feedback about how well you are executing against your plan.

This is like deciding to march through the jungle and measuring how many steps you take per day. Are you going in the right direction? Who knows, but you know how fast you are going, and that is worth something, isn’t it? I can tell you exactly what it is worth. At some meeting later on, when the project has been judged an expensive failure, you can prove it was a well-managed failure under your watch. Congratulations.

For feedback to really work, you must have feedback about how well the finished software delivers value to its users. The only way to do that is to deliver it to those users and get their feedback. Thus, the other approach: to build the software iteratively. This means to build things that are valuable in and of themselves, so users can judge how well they work.

An iteration is a piece of software with business value. People can touch it, try it, use it, and judge how well it does or doesn’t help the business execute. This is different in every way from an increment. I call iterations “products.” If something cannot be called a product, it probably isn’t really delivering any value.


My Golden Hammer for building software iteratively is to divide it into separate products. We sit down and look at the pile of functionality on the plan and ask, “Which of these pieces could be useful things in and of themselves?”

On one project, there was a complex bit of loan eligibility reasoning that needed to be built into a database application. We decided that although the finished application would have the reasoning built into certain input screens, the reasoning could be a separate product. We imagined this thing sitting in its own screen alongside their existing process.

There would be some double entry involved, but we knew that if we built this as a stand-alone product, it could be used in production and it could provide value. So we did. And sure enough, we got a lot of feedback from the users. We didn’t end up deploying it in production, they wanted to wait for the rest of the pieces to be built, but they could play with it, they could see how it worked, they could give us real feedback.

Did I mention they decided against deploying it in production until the rest of the project was ready? That was the most critical piece of feedback yet. It told us it wasn’t as important as we thought, and we were able to re-prioritize around other things.

That kind of feedback is only possible when you get feedback about the value you are providing. And that is exactly the feedback Linus is talking about: the ruthless feedback of users who don’t care about your plans or your years of experience or your blog or whether you work for a billion-dollar company or whether you use Java or JRuby, or whether your servers are MSFT or Linux.

the scarce resource

In addition to feedback, there’s another reason to develop software in products rather than increments. Strangely enough, it’s the exact reason that software projects naturally “want” to divide themselves up into increments rather than iterations. Once you understand this, you need to ruthlessly fight a project’s natural slide into increments.

Increments deliver what you might call software value. Completing an increment validates architecture or software assumptions, such as “Can we build a functional facade in front of an ORM and expose it to applications via REST-ful web services?” Since a software development team does a lot of software development, it naturally wants to modularize the software along development lines.

If the team is large enough, it can happily develop multiple simultaneous increments of software functionality. It has the software development bandwidth to pull this off. And each increment contributes to several different chunks of business functionality: The database will be used by every feature, as will the system for pushing notifications to browsers, as will the markup for building pages, as will… You get the picture: Working on any one increment of software is really working on many simultaneous products at once.

Unfortunately, the capacity of a team to effectively develop simultaneous products is nearly always one. That’s right, no matter what the size, teams are most effective when they work on one chunk of business functionality at a time. And by teams, I mean the full team, including stake-holders, business analysts, everyone, not just the programmers.

There is nearly always an excess of software development “muscle” compared to the management attention available to analyze the software once a project is underway. Business focus is the “scarce resource.”

Focusing on one business iteration at a time is not about maximizing programmer productivity. In fact, it has nothing to do with programmers. It’s about a very scarce resource: management attention. Whether you are building a commercial application for an ISV or a client project for a government ministry, there is a very limited ability of the stake-holders to digest the project’s progress and contribute with feedback and priorities.

Developing multiple increments simultaneously hits the stake-holders hard, forcing them to try to manage the relationships between the increments in their heads, understand the impact of changes from one product onto another, and juggle twice as many documents and meetings.

The results are invariably poor.

So what to do? The answer is surprisingly easy. Divide large projects up into their natural products, then develop the iterations serially. Work out which product can be developed first, and then start with it, religiously rejecting attempts to extend it horizontally with features from the other products. Resist the siren’s call of “optimizing development time.”

When each product is done and delivering value, build on it by adding another product. Repeat until done.

(Originally published in December, 2007)

Software’s Receding Hairline

The news is out that the Java Programming Language is going to have a clean, simple syntax for lambdas Real Soon Now. It seems that after two or three or maybe even five years of wrangling, the various committees have decided on a syntax, mostly.

Obviously, I’m less than impressed. But let’s cut the designers a little slack. There are factors I don’t understand that go into a feature like this. It must be carefully considered not just for its functionality, but for the subtle ways a revised compiler would interact with billions of lines of existing code. The new feature would interact with thousands of existing features in weird ways. Each and every one of those interactions needs to be carefully considered before adding a new feature. Under the circumstances, it’s kind of amazing the feature was added at all.

But then again, consider the following scenario: I build an application for a client. The application is a little long in the tooth, and the UX is so ancient, it is demonstrably less effective than competing applications. The client asks for a modern feature, something like changing from constantly refreshing pages to having a Single Page Application that fetches data using AJAX, or perhaps the client wants entities in the system to have “walls” with stories and posts the way Facebook pages have walls with stories.

“Well,” I say, rubbing my beard. “That’s going to take two or three years. There are many subtle interactions in the code, many features that interact and are coupled. While this change may look simple to you, it actually requires a massive rewrite to make sure it doesn’t break anything.”

What is my client to think? That I have given them a marvellous, well-architected application? Or that years of bolting a feature on here and hacking a workaround there have created a nightmarish mish-mash where the velocity of progress is asymptotically approaching zero?

(c) 2010 Matt Hutchinson

receding hairlines

I am balding. Fortunately for me, my hair is too nappy to form the dreaded “comb-over.” But other men sport this unfashionable look. How does this happen? How do men wind up with such an obviously unattractive appearance? Don’t they know it looks ridiculous?1

The answer, of course, is that it doesn’t happen overnight. Nobody walks into the barber shop and asks for a comb-over. Nobody carefully grows the hair on one side of their head until it can reach right across their bald pate to the other side. Instead, a comb-over is the accumulation of years of small decisions, until one day there is an unmistakable comb-over. Mercifully, nobody of character is paying attention. Nobody who matters judges a man by his hair, and nobody who judges a man by his hair matters.

This is interesting, because the mechanism of growing a comb-over applies to software development. A comb-over is the accumulation of years of deciding that today is not the day to change things. A comb-over is the result of years of procrastination, years of decisions that seem right when you’re in a hurry to get ready for work but in retrospect one of those days should have included a trip to the barber and a bold decision to accept your baldness or take some other action as you saw fit.

Software is like this. Bad software doesn’t really start with bad developers. It starts with good, decent people who make decisions that seem right on the day but in aggregate, considered over longer timeframes, are indefensible. This is a great paradox. It is difficult to pull out a calendar and tell Smithers that on February 12, 2003 he should have restyled his hair. Why the 12th? Why not the 15th? Why not June 2003? Why not some time in 2004?

Likewise with software, it is sometimes difficult to pull out a calendar and say that on May 5th, 2010, we should have deferred adding new features and refactored this particular piece of code. On that day, adding new features might have been the optimum choice. And the next day. And the next. But over time, it’s clear that some time should have been devoted to something else.

the tyranny of the urgent

This is as true of life as it is of software and hairlines. You cannot make all decisions based on short timeframes. Sometimes you have to do things that are important but not urgent. It is never urgent to read a new book, or learn an unpopular programming language, or refactor code that isn’t blocking you from implementing a new feature. If you are programming in Java, it is never urgent to switch to Scala. If you are implementing Java, it is certainly never urgent to release a feature that would force your legacy users to rewrite some of their code.

But if you make all your decisions according to their urgency, one day you wake up with a receding hairline and a million lines of Java code running on a compiler that simply can’t accommodate new language features without three years of finagling. It’s entirely up to you how to proceed.

For my part, I am going to do the following: The next time I am prioritizing features with a client and tasks with my team, I am going to explicitly ask the group to name three things to do that are important but not urgent. And with any luck, we’ll do some of them, and I won’t wake up one day explaining that what looks like a straightforward change will take years to implement.

(Published September, 2011. The comb-over sketch is from Flickr.)

Interlude: The Mouse Trap

The Mouse Trap

In the board game Mouse Trap, players build an elaborate Rube Goldberg Machine. Wikipedia explains: The player turns the crank (A) which rotates the gears (B) causing the lever (C) to move and push the stop sign against the shoe (D), which tips the bucket holding the metal ball (E) which rolls down the stairs (F) and into the pipe (G) which leads it to hit the rod held by the hands (H), causing the bowling ball (I) to fall from the top of the rod, roll down the groove (J), fall into and then out of the bottom of the bathtub (K), landing on the diving board (L). The weight of the bowling ball catapults the diver (M) through the air and right into the bucket (N), causing the cage (O) to fall from the top of the post (P) and trap the unsuspecting mouse (i.e. the player who occupies the spot on the board at that time).

Software sometimes suffers from a Mouse Trap Architecture: it becomes a chain of fundamentally incompatible components used for purposes far removed from their core competencies, incomprehensibly connected to each other with brittle technologies. And here is the tale of how one such system came about.

The project was originally designed by a Business Analyst who had been a DBA in her younger days. One of the key requirements of the system was that it be completely flexible: it was designed to accommodate almost any change in business requirements without reprogramming. Her approach to designing a program that would be very nearly Turing-complete was to make a data-driven design: nearly none of the business logic was to be written in code, it was to exist in various tables in the database. Changes to business logic would be accomplished by updating the database.

Of course, the application would just be an empty shell, so the actual business analysis would consist of gathering requirements and generating data for the tables. The program was obviously trivial and could be generated in parallel with the business analysis, delivering a huge time saving over her company’s traditional waterfall model where analysis was completed in its entirety before the first line of code was written.

Delighted with this breakthrough methodology, all parties signed off on a seven figure contract and she started by building a large Excel workbook with the business rules, one sheet per table. The plan was to simply export her workbook to CSVs and import them into the database when they were ready to deploy the finished application. And in the meantime, the customer could sign off on the business rules by inspecting the Excel workbook.

Meanwhile, the trivial job of designing a web application for two hundred internal users plus a substantial public site to handle millions of users with unpredictable peak loads was handed off to the Architect. While her Golden Hammer was the database, his was XML and Java. His first order of business was to whistle up a Visual Basic for Applications script that would export the workbook to XML. From there, he wrote another script that would generate the appropriate configuration files for Struts in Java. His work done, he moved along to another project leaving some impressive presentations that delighted the customer immensely.

Implementation being an obvious slam dunk, the company put a few people on the project more-or-less part time while they completed the final easy stages of other delivery projects. Thanks to the company’s signature up-front analysis and rigid waterfall model, they were confident that customer UAT and delivery into production on other projects would not generate any meaningful bugs or requirements changes, so the resources would have plenty of time for this new project.

The New Guy

But just to be sure, they hired The New Guy. The New Guy had a lot of New Ideas. Generally, his ideas fit into one of two categories: Some were sound but unworkable in the company’s environment, and the others were unsound and still unworkable in the company’s environment. His early days were marked by attempts to hook up his own wifi so he could surf on his shiny new Tablet PC during meetings, attempts to disconnect the loud pager that would interrupt all programming in the cubicle farm whenever the receptionist was paging a salesperson, and attempts to get the project to fix all bugs on completed features before moving on to write new features.

When he saw the design of the system, he immediately grasped its deepest flaw: Changes to business requirements in the Excel workbook could cause problems at run time. For example, what if some business logic in Java was written for a Struts action that vaporized when a business rule was rewritten?

Today, we can sympathize with his obsession. He was deeply discouraged by the company’s insistence that development run at full speed developing new features with the actual business of making the features work deferred to UAT at the end of the project. One developer claimed that she had a working dynamic web page switching back and forth between English and Inuktitut, but the English version was generated by a JSP backed by a stub class and the Inuktitut version was actually a static HTML page. Management gave this a green light to be considered “feature complete” after the customer failed to ask any pertinent questions during the Friday-afternoon-after-a-heavy-steak-lunch-and-before-a-long-weekend demonstration.

Depressed and quite pessimistic about the team’s ability to orchestrate Java development in parallel with the rapid changes to the workbook, he came up with the solution: a series of XSLT files that would automatically build Java classes to handle the Struts actions defined by the XML that was built by Visual Basic from the workbook that was written in Excel. Then, any changes that were not properly synchronized with the Java code would cause a compiler error, and the team would be forced to update the Java code immediately instead of hand waving things.

Excel, VBA, XML, XSLT, Java!

The New Guy ripped his phone out of its socket, ignored all emails, and worked around the clock. When management came looking for him, he hid in vacant conference rooms, feverishly tapping away on his tablet. A few days later, he emerged triumphantly with his working build system. Saving a change to the Excel workbook automatically generated the XML, which in turn automatically generated the Java classes and rebuilt the entire application, along with regenerating the database tables in a style that would presage Rails migrations.

He presented this nightmare of dependencies and fragile scripts to management and waited for the response. They had shot down every single one of his ideas without exception, and now he was promoting a daring scheme that owed its existence to the proposition that their management style was craptacular. But he was a man of principle, and was committed to do the right thing in the face of any opposition.

Management wasted no time in responding. Brilliant! He was obviously far too valuable a resource to waste time on implementation. He was promoted to a junior Architect role where he could deliver demonstrations to clients. His very first job? To write a white paper explaining the new technology.

Expecting to get the axe, he was shocked by their warm reception. He had failed to realize that management was indifferent to the idea’s actual development value, but had a keen sense of what played well with clients in Enterprise environments. These companies lived and breathed integration between wildly disparate technologies, many of which didn’t work, had never worked, and never would work.

I suspect this is typical of Mouse Trap architectures everywhere. Built with the best of intentions, they survive for reasons their creators could never have anticipated.


The company adopted the new build scripts immediately and assigned a junior programmer who was working on several other projects to maintain them. Within months they had been dismantled, in no small part because the team hated the idea that every time the business analyst changed the business rules, their Java code that was carefully constructed for a previous version would stop compiling.

The New Guy lasted a few months longer before realizing that his sudden accord with management was illusory, and that nothing really had changed. He has since forsworn his love for static typing and now wanders a production Ruby code base muttering about software development the way the demented wander the streets muttering about government conspiracies.

All that remains of his work are a few XSL files somewhere, like old game pieces that are rolling around the bottom of a drawer at the cottage hoping that someone will open a bottle of wine and call for a game of Mouse Trap to pass the time.

(Published February, 2002)

Duck Programming

“The Mouse Trap” is an amusing anecdote, but buried within it is a potentially dangerous anti-pattern in software development.

prelude: the project

One of the project’s key requirements was that the system be flexible enough to accommodate almost any change in business requirements “without reprogramming.” The team decided to build a data-driven rules engine. Most of the business logic was to be encoded as rows in database tables. Changes to business logic would be accomplished by updating the database. The system would read the rows and use their contents to decide what to do.

The system controlled the auditing of apprenticeship programs such as cooking, automobile repair, and plumbing. The system would track all the apprentices in the various programs as well as the educational institutions and working organizations that trained apprentices on the job.

The “rules” for completing an apprenticeship program are elaborate and vary with each program. Those rules do change from time to time, and the designers of the program imagined that the ministry overseeing the apprenticeships would update the rules on the live system on administration screens, and the system would store the rules in the database.

A similar design was imagined for controlling the ministry’s case workers and offices. Each case worker would be tracked along with the individuals or institutions they were auditing. A workflow system was envisaged that would assign audits to offices, case workers and managers.

For example, when a new restaurant was added to the system, a case would be opened at a nearby office, and assigned to a caseworker. The caseworker would visit the restaurant and check that qualified instructing chefs were employed there. The caseworker would also do an inventory of equipment and facilities, and the system would validate such things as that pastry apprentices work under a proper pastry chef in a kitchen with an oven. And those rules could all be changed at any time in response to changing regulations or practices by the ministry.

The team’s management decided that since the application would just be an empty shell, the actual business analysis would consist of gathering requirements and generating data for the tables. The software itself was obviously trivial2 and could be written in parallel with the business analysis, delivering a huge time saving over the company’s traditional waterfall model where analysis was completed in its entirety before the first line of code was written.

Alas, the project was not the success its customers, managers, and architects expected. There were many reasons it never lived up to their rapturous expectations, but one stands above the others: The success of the system rested on correctly configuring the various tables that controlled its rules engines, but there was very little time, attention, or process devoted to configuration.

The team failed to recognize that they were going to be doing a lot of duck programming.

(c) 2006 Thomas Widmann

what is duck programming?

Duck Programming is any configuration that defines or controls mission-critical behaviour of a software system that is thought to be “not programming” because it doesn’t appear to involve programmers, programming languages, or programming tools.

When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck3

Duck programmed systems walk like programming, swim like programming, and quack like programming. It’s just that people fool themselves into thinking “it’s not programming” because it isn’t code.

As described, the project’s system was designed to be nearly entirely duck programmed through the use of database tables that would be updated live in production. Rules engines aren’t the only software architecture that can be abused through duck programming: “Soft coding” is the practice of extracting values and logic from code and placing it in external resources such as configuration files or database tables. Soft coded systems are also fertile breeding grounds for duck programming.

Duck programming isn’t an architecture or an implementation, it’s a management anti-pattern, the failure to recognize that updating rules or soft coded values is programming just like updating code.

why duck tastes so good

When designing systems, the temptation to include duck programming is seductive. For one thing, it’s easy to ignore or vastly underestimate the amount of work required to do the duck programming. In the project described, the team worked hard to estimate the work required to write the code and implement the various screens. Alas, by “code,” they only meant the shell. The configuration of the system through the various rules was “Left as an exercise for the reader.”

Budgeting time and resources for the “code” programming and hand-waving the effort required for the “duck” programming makes projects appear easier and cheaper than reality will dictate.

Duck programming also exposes projects to “Naked Risk,” the possibility that bad things will happen without safeguards to prevent them or processes for recovering from disaster. Duck programming can be seductive to development teams because it pushes a lot of project risk away from the project team and onto the shoulders of the users. If something goes drastically wrong, the response from the team will be a shrug and the cryptic notation PEBKAC.4 The system “works as designed,” thus any problem is the fault of the users for misusing it.

Finally, duck programmed systems seem more “agile” in that major changes can be made “on the fly” without reprogramming. Let’s speak plainly: “Without reprogramming” doesn’t really mean “Without those pesky and expensive programmers.” It really means “Without all the project overhead of writing requirements, writing tests, managing a small project to implement the changes, testing, and signing off that it has been done as specified.”

Project management overhead is necessary because organizations need to plan and budget for things. Most organizations also realize that changing systems involves substantial risks of doing the right things the wrong way (defects), the wrong things the right way (requirements failures), or the wrong things the wrong way (total disaster). Duck programming avoids overhead at the cost of ignoring planning, budgeting, and risk management.

dangerous but manageable

Duck programming is dangerous, for exactly the same reasons that modifying the code of a live application in production is dangerous. Let’s look at the ways in which programming teams manage the danger. Think about the process for “ordinary” programming in code. Hopefully, you agree with the following common-sense practices:5

  1. Requirements are documented–whether simply or elaborately–before code is written.
  2. Code is reviewed before being deployed.
  3. Automated tests are run to validate that the code behaves as expected and no unexpected defects are present.
  4. Code changes are first placed in a test or staging environment for human testing before being deployed live.
  5. Code can be “reverted” to a previous state. Changes can be quickly highlighted with a “diff” tool.

Now let’s think about a typical duck programmed system or module:

  1. Requirements might be hidden in emails requesting changes, but since these are just actions to be performed on the system rather than formal projects to update the system, they may not have the same gravity as requirements for code changes.
  2. Changes are live immediately, so there is no review other than double-checking a form and clicking “Yes” in response to “Really foobar the financial administration system?”
  3. There are no automated tests, and no way for the end users to write them.
  4. Changes are live. Testing on staging is typically limited to verifying that the duck programmable system can be duck programmed, not testing that the duck programming itself works.
  5. Reverting is typically very challenging, as in many systems it requires reverting part of a database and carefully managing the consequences with respect to all related data.

There are no controls to minimize the possibility of disasters, and no processes for recovering from disasters. Imagine you were interviewing a software team lead and he told you, “We don’t use source control, we work directly on the live system, and we don’t test, we simply fix any bugs our users report. If anything really serious goes wrong, I suppose there’s a system backup somewhere, you should ask the Sysadmins about that, it isn’t my department.” Madness!

how to manage duck programming

Duck programming is manageable. It starts with recognizing that while it may be designed to be carried out by people who are not professional coders, it is still programming, and must be managed with the same processes you use to manage code:

  1. Document requirements for the duck programming using the same system the team uses for programming in code.
  2. Stage changes through testing and/or staging environments, then deploy the validated changes to production.
  3. Build a system that allows users and analysts to write and run automated tests for the duck programmed functionality.
  4. Do not allow live changes to duck programmed systems.
  5. Build reversions and change management into the duck programming system.

These five simple points are not as difficult as they may seem. Most software systems have a ticket application for managing work on bug fixes and features. Teams can use the exact same system for all “duck programming” changes. Some systems are smart enough to tie a feature request or bug report to code changes in the source code repository. Using techniques described below, duck programming changes can also be checked into source control and tied to tickets.

Most programming tools revolve around text files. One way to bring duck programming in line with code programming is to find a way to manifest the duck programming as text files. For example, Domain-Specific Languages can be written in text files that can be checked into the source code control system just like ordinary code.
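For instance, here is a minimal sketch of what such a text-file DSL might look like in Ruby. The `RuleSet` class and the rule names are hypothetical, invented for illustration; the point is only that the rule file is ordinary text that can live in source control and be diffed and reviewed like code:

```ruby
# rules/pastry.rb -- a hypothetical rule file, checked into source
# control like any other code. Names are illustrative only.

class RuleSet
  attr_reader :rules

  def initialize(&block)
    @rules = []
    instance_eval(&block) # run the DSL block in this instance's context
  end

  # Each `requires` call records one auditable rule as plain data.
  def requires(description, &check)
    @rules << { description: description, check: check }
  end

  # Run every rule against a candidate and collect the failures.
  def violations_for(candidate)
    @rules.reject { |rule| rule[:check].call(candidate) }
          .map { |rule| rule[:description] }
  end
end

pastry = RuleSet.new do
  requires("kitchen has an oven")     { |k| k[:equipment].include?(:oven) }
  requires("a qualified pastry chef") { |k| k[:staff].include?(:pastry_chef) }
end

kitchen = { equipment: [:oven], staff: [:line_cook] }
pastry.violations_for(kitchen) # => ["a qualified pastry chef"]
```

A change to the rules is now a change to a file, so it shows up in a diff, gets attached to a ticket, and can be reverted, exactly like a code change.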

Data-driven duck programming can be set up to export to and import from text files. Those same text files can be managed like any other code change. For example, instead of making changes to a live system, changes can be made in staging, validated, and then exported to a text file that is imported into the production system using an automated deploy script.
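A sketch of that export/import round trip, assuming a hypothetical rules table with `id`, `program`, and `requirement` columns (the names are invented for illustration):

```ruby
require "csv"

# Dump a rules table to a canonical CSV so duck-programmed changes
# can be diffed, reviewed, and checked into source control like code.

def export_rules(rows, path)
  # Sort by primary key so the file is deterministic: identical data
  # always produces an identical file, and diffs show only real changes.
  CSV.open(path, "w") do |csv|
    csv << %w[id program requirement]
    rows.sort_by { |row| row[:id] }.each do |row|
      csv << [row[:id], row[:program], row[:requirement]]
    end
  end
end

def import_rules(path)
  CSV.read(path, headers: true).map do |row|
    { id: row["id"].to_i, program: row["program"], requirement: row["requirement"] }
  end
end

rows = [
  { id: 2, program: "pastry",   requirement: "kitchen has an oven" },
  { id: 1, program: "plumbing", requirement: "licensed supervisor on site" }
]
export_rules(rows, "rules.csv")
import_rules("rules.csv").first[:program] # => "plumbing"
```

In practice the export would run against the staging database and the import would be part of an automated deploy script, so production never receives a change that wasn’t first captured in a reviewable file.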

Most automated testing tools can be set up to allow non-programmers to create stories and scenarios in remarkably readable code, such as expect(case).toHaveAnOffice().and.toHaveACaseWorker(). Writing automated test cases has many benefits, so many that it is nearly ludicrous to propose developing a non-trivial software application without a substantial body of testing code. Besides catching regressions, readable test code documents intent. Test code acts like a double-entry system: Changes must be made in the normal or duck programming “code” and in the tests, and the two must match for the tests to pass.
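To show that such a readable style needs no heavy machinery, here is a toy Ruby version of the fluent assertion above, with no test framework at all. The matcher names and the case fields are hypothetical:

```ruby
# A minimal fluent-assertion sketch: each matcher raises on failure
# and returns self, so assertions chain into a readable sentence.

class Expectation
  attr_reader :and # `.and` just returns self, purely for readability

  def initialize(subject)
    @subject = subject
    @and = self
  end

  def to_have_an_office
    raise "expected a case office" unless @subject[:office]
    self
  end

  def to_have_a_case_worker
    raise "expected a case worker" unless @subject[:case_worker]
    self
  end
end

def expect(subject)
  Expectation.new(subject)
end

kase = { office: "Toronto East", case_worker: "A. Singh" }
expect(kase).to_have_an_office.and.to_have_a_case_worker # passes silently
```

A real project would more likely reach for an existing tool in this family (Cucumber- or RSpec-style frameworks), but the principle is the same: the test reads like the requirement it verifies.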

The process for deploying duck programming to production can also be managed like the process for deploying code. Changes to table-driven systems can be made in staging, tested, and then exported to text files and imported into the live system’s database with automated deployment tools.


The project described should not discourage anyone from contemplating building a system around rules engines, programmable workflow or domain-specific languages. There are many successful examples of such systems, and our developer went on to create several duck programmed applications himself. He learned from his experience that with a little common sense and appropriate process, duck programming can be a successful strategy.

(January, 2012. The yellow duckie photograph is from http://www.flickr.com/photos/viralbus/299654876/)

I Can’t Find Good Salespeople

I was meeting Sarah for our regular coffee. Sarah’s one of those born entrepreneurs: she’s built and sold a variety of businesses, and she has a great touch with investments. I pick her brain about business issues, and she picks my brain about tech.

As we took our seats, Sarah sighed with frustration. “I’m trying to get a mobile app put together for the new business, but we simply can’t find any good programmers, do you think—”

I was a little brusque: “Hang on, Sarah, last week we talked about your business the whole time. How about I start?” She laughed, and I took the imaginary talking-stick.

“I can’t find good salespeople! They simply can’t be found anywhere, and I’ve tried.” I went on for a few minutes, and then Sarah gently recited the facts of business life to me:

Why I Can’t Find Good Salespeople

“Reg, you believe in your product, and I believe in you, but to a salesperson the whole setup is very high risk. You’re offering a generous commission, but with a start-up, the salespeople have to convince customers they have a need AND convince them you can satisfy the need AND close them on satisfying the need now. Or they starve.”

“Salespeople are willing to take risks, but only on the things they can control. Your development schedule, your PR, the receptivity of the market to your innovation… Salespeople have no control over that. Generous commissions can’t compensate for risks they can’t mitigate with their skill and experience.”

“Furthermore, the fact that you aren’t a sales manager with your own track record of success is a red flag to them: There’s no social proof that someone who knows sales thinks this is a great opportunity. And if things aren’t going well, they can’t count on your sales management experience to know how to fix the problem. You are a great guy, you talk at programming conferences, but you have zero credibility as a VP-Sales when you pitch a job selling your product.”

“So, you have to pony up a sensible guaranteed compensation package. And even then, if someone comes to your business for a year and the business flames out, their resumé has a big red flag on it: prospective employers may think they couldn’t sell. That doesn’t matter for inexperienced people; they’re happy just to get a paycheque and say they’ve worked in sales for x number of years.”


“Let’s think about the more experienced people, the ones with track records. They don’t need another year or five of experience. If they’re any good, they’re not chasing another gold Rolex, they’re chasing success, bragging rights, or a chance to say they made a difference. Given their perception that they may have to tell people the company failed, you’ll have to find another motivation to get them interested.”

“There’s no single answer to the motivation problem. It may be finding someone who’s a little evangelistic, who wants to be consultative and work more closely with customers. Or helping someone grow into marketing or product development. It may be finding someone who’s already fascinated by the problems your business solves. And it may be different for each person. But you certainly can’t expect good people to sign up just because you find some talent and are willing to pay.”

Solving my Problem

I finished scribbling notes. “So, Sarah, it comes down to this: Good salespeople don’t want to come on board because:”

  1. Their success is gated by risks they can’t control;
  2. I lack management experience in their domain;
  3. Experienced people don’t need money or another couple of years on their resumé.

“So if I want good people, I have to:”

  1. Base their compensation on risks they can control;
  2. Hire a leader with credibility in their domain;
  3. Work with experienced people to tailor the job to their motivation.

“Got it, thanks.”

Solving Sarah’s Problem

I finished my coffee. “Now, Sarah, what exactly were you asking about hiring good programmers to work in a startup for a non-technical founder?”

  1. Paul Graham asked this exact same question about comb-overs in “The Age of the Essay.” It’s a wonderful read.
  2. As noted in Trivialities: “Trivial” is programmer slang for “I understand what to do and I’m ignoring how much time it will take to do it,” much as one might say that adding two large numbers by representing them as piles of pennies, pushing them into one pile, and then counting the pennies is a “trivial” exercise.
  3. James Whitcomb Riley (1849–1916)
  4. “Problem exists between keyboard and chair.”
  5. Many organizations have even more practices, but these are fairly minimal and commonplace in 2012.