Agile For Teams


Welcome to #agileforteams and thank you for purchasing this playbook.

You will get the most value from this playbook by completing the exercises at the end of each chapter. The goal of this playbook is to help teams get to a place where they can agree that agile is actually working for them instead of against them.

Remember, you are entitled to future versions of the playbook every time it’s updated via Leanpub.

Let’s get started.



At the end of this playbook, you will:

  • Be confident and clear on agile, pragmatism, evidence-based tracking and simplicity principles.
  • Understand what a Retrospective is and how it’s the easiest way to begin building trust and fostering a culture of experimentation.
  • Understand the principle of #smallbatches and how that reduces risk and enables forward motion; and why forward motion is the critical element to consistent and predictable performance.
  • Have a clear understanding of the #agileforteams process, and Tracker, a tool which supports this process.
  • Be confident to start using this process and Tracker in an existing team right away.

What is this #agileforteams thing about?

Hi, my name is Matt Kocaj. I’m a dev, team lead, tech lead, hardware hacker, dad, husband and I suck at gardening. I like to fly drones, launch model rockets and write clean code. I’m also furiously passionate about the craft of software development and helping teams get the most from the precious ninja skills and knowledge they have. Like you, I hate building things no one uses, so I want to show you some of the things I’ve learnt over the years building products and delivering projects from different gigs. I’ve been a full-time team player, a contract dev, a consulting team lead and various other team roles. Hopefully I can impart some valuable lessons to you so that you and your team can be more successful in your efforts.

While I’m passionate about “finding the right thing” to build, #leanstartup, #leanUX (Lean User Experience), validation and testing, etc., this playbook is not about those things specifically. I’d call those “higher order ideals” - something to strive for but, in my experience, much harder to realise. This playbook, #agileforteams, is about the patterns and practices which will lead you to building and delivering value to your stakeholders effectively and consistently. It’s also about enabling your team to respond quickly to the inevitable change which you will face on a weekly basis. We call this being Agile.

Right about here is where I start making claims in the first person rather than the collective “we” which would ordinarily refer to the corpus of agile practising colleagues and friends in the development world. It might sound arrogant, but I want you to separate my ego from this - there is no right way to do Agile. I’m just going to share with you what’s worked for me and the teams I’ve been blessed to work with. I’ve seen enough of the same patterns which survived trial and pain. I’ve also seen many things that perhaps might be called the “common practice” fail time and time again. I’ll share this with you in the hope you can take something from it and be spared some of the time-wasting frustration.

Agile by definition is about trying something, and then if it doesn’t work, trying something else. It’s saying to yourself, “The way we work is not set in stone. We can change it when we need to.” Ironically, this is precisely the attitude we should have towards software as well. I want to show you my Agile recipe. This is my flavour if you like. It’s a reliable, composite Scrum/Kanban hybrid of sorts which relies on a heavy dose of pragmatism and simplicity. Nothing about the component parts is particularly unique or special. It’s the collective function that is most interesting.

I will prescribe this recipe to you, knowing that given its past performance, it will work very effectively for your team. I’ll give you all the parts and empower you to reproduce it in your team. But agile is about adapting. So I’ll also be encouraging you, once your practice is up and running, to tune it over time. This is the heart of Agile - taking a process and making it better.

Why should you care about Agile?

Because you care about delivery

Building a sustainable software application is not like constructing a house, a theatre or even a dam designed to hold back millions of tonnes of water. Software is not made of concrete. Things can be changed at any time, and the effort and complexity required to produce any part of the software whole is often not well known. Contrast that with well-established house-building methods, hundreds of years of structurally proven theatre designs, and the huge investment in concrete science. Granted, construction projects run over budget all the time, but each component and each dependency is orders of magnitude better understood than in software. In software, we use patterns, open source libraries and established conventions to make our lives easier, but any given team and team member only occasionally builds something the same way more than once or twice. Conventional project management is not suited to the uncertainty, variability and constant priority changes that the software industry is subject to.

If you’re like me and you care about getting real value into the hands of users, then you care about delivery. Delivery in this context is the constant, consistent rate of valuable product being shipped to and used by the customer. This is not delivery to the internal stakeholders, or the managers who might be injected into your process. This is working, production product. Not even “production ready”! I’m talking about working, fully integrated product deployed into the PRODUCTION environment so that end users can realise genuine day to day benefit from your work.

Perhaps you deliver like this already and you just need a framework to track the work. Perhaps you can’t fathom a world like this and you’d like to educate me as to how impossible such a fairy tale is in your world. That’s fine. I’m here for you. I’m on your side. What’s important is that you care; and that’s all I need to get you started. We’ll do this together, using small steps. We can make things better.

The #agileforteams process will accommodate the roller-coaster of software priorities, changing scope, and hard deadlines provided you apply it with discipline and rigour. Agile is not easier. Let me be clear: if you want to deliver with measurable momentum, it’s going to be work. “Waterfall” project management techniques do to some degree enjoy a special luxury: I call it “finger in the air” estimating. The long and the short of it is this: dates and estimates often have some amount of “guess work” built into them, to varying degrees. I don’t believe in guess work. I believe in data. If you care about delivery then you should care about data. The popular legalese disclaimer reads “past performance is not a good indicator of future result”. However, when one can be disciplined about the methods of measurement, past performance can be a very strong indicator. More on this later.
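To make the data point concrete, here’s a minimal sketch of what “past performance as an indicator” can look like, with entirely made-up numbers: average the points actually delivered over recent iterations and project that rate forward, instead of licking a finger and holding it in the air.

```python
import math
from datetime import date, timedelta

def forecast_delivery(completed, remaining, iteration_days, start):
    """Project a finish date from measured throughput, not guesses.

    completed: points actually delivered in each past iteration.
    remaining: points left on the backlog.
    iteration_days: calendar length of one iteration.
    start: date the next iteration begins.
    """
    velocity = sum(completed) / len(completed)           # average points per iteration
    iterations_needed = math.ceil(remaining / velocity)  # whole iterations to clear the backlog
    return start + timedelta(days=iterations_needed * iteration_days)

# Three past iterations delivered 8, 10 and 9 points; 45 points remain;
# iterations run 14 calendar days. Velocity = 9, so 5 iterations = 70 days.
print(forecast_delivery([8, 10, 9], 45, 14, date(2024, 1, 1)))  # 2024-03-11
```

The forecast inherits whatever discipline went into the measurement - which is exactly the point.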

Because you care about your team’s morale

There’s nothing worse than coming home from work (or popping out of the home office if you’re lucky enough to be able to work from home) and feeling like you’ve achieved nothing that day. Actually, there is: in my opinion, it’s much worse building something no one cares about. But let’s focus on the shorter time frame. Developers work hard and write code but feel like they’re “achieving nothing” all the time. Often they don’t talk about it, and even more rarely will management talk about it.

But you can’t always change your management team - let’s focus on what you can change. You want a team who’s excited to come to work because they’re excited about what they delivered yesterday. How does a member of your team know they’ve delivered? Because they have a mechanism for tracking real value transitioning from the build team into the hands of end users. Most teams don’t have this. Most KPIs are some distance or vague abstraction from defining real value. If you want a team who’s driven and motivated to work and work hard, then you need some way of giving them a scoreboard. The score shouldn’t be in terms of misleading metrics like lines of code or hours worked or even commits pushed (counter point: more and smaller commits are very healthy). The score needs to be value - working production code in the hands of users. Ideally, this is also the features or behaviour the users have asked for.

A team who can “move the needle” on the collective team score will be truly motivated. If they can move the needle (and move it daily if possible) they will be delighted, engaged and happy to get out of bed in the morning. You can do this for your team! You don’t have to be the “leader” or the manager. A healthy team is a self-organising one and it looks like it’s going to start with you. Look around: do you see anyone else on your team taking the time to “level up”? No? Then it’s you!

Software is a team sport if you buy the logic that there’s more in the whole than the sum of the parts. You may not get credit for taking ownership of your team’s morale and workplace happiness, but it will help you work better. That alone might be worth it.

You care about cost, optimisation and efficiency

Some readers may have a management bent. That’s OK. After all, Agile is management! (Something I refused to accept, even long after my Scrum Master certification.) If there’s something to be gained from optimising the machine (clean code, patterns, testability, readability), then there’s arguably more to be gained from optimising the machine that builds the machine - the agile process. Once you have one, you can add more to the delivery of value through your colleagues, tooling and decision making than by simply refactoring a method in code.

A healthy agile process should allow for self-inspection, testing and experimentation, change where change is justified and needed, and a level playing field where everyone’s ideas are equal. Cost optimisation is a function of a good starting point, an adaptable process and the courage to make changes. Notice I didn’t say “smart people”? Your team is smart! Let’s just begin and end there. Go in with the assumption that there’s way more value waiting to be unlocked in your team than you’ve ever realised. With that attitude you will find it. Oh, and that “good starting point” is just a small linear term at the end of the function - it only makes a small difference overall.

“Hey, you didn’t answer my question. I didn’t hear about doubling man-hours, etc, etc” I hear you say? Well no. I’m not going to say that because (while I take particular offence to getting more blood from a stone) it doesn’t matter what technique you want to use - blindly applying it isn’t scientific and results may be misleading, even if they appear good. You need a system to measure optimisation strategies using metrics that matter. You also need a process that hosts these strategies which can weed out the ones that are not performing, no matter how much your management conscience wants to keep them.

You care about building the right thing more than the wrong thing really productively

This is a small extension of the above but it deserves its own soapbox.

I mentioned above that we’re not here to talk about #leanUX, validating a pipeline/backlog of work, or any of those “higher order” activities. We’re not. But you will never get there if you don’t have an adaptable process. Those goals absolutely depend on a culture of experimentation and data-driven testing.

You care because using the #agileforteams process, you stand to gain:

  • A highly scalable, efficient team with little capacity waste and few Single Points Of Failure (SPOFs).
  • Very accurate delivery forecasting (based on real work), with dates many months out landing within +/- 1 week of actuals.
  • Stakeholders/customers in full control of what they take delivery of and when.
  • Very (did I mention very?) happy customers/end users.

Sounds too good to be true? Well, here come the details. You judge for yourself.

Questions and Actions

This is where you get involved. I’ll place these Q&A at the end of certain chapters to get you thinking and even participating. If you want the most from this playbook, you’ll give some of these a try.

  1. How much do you care about your team? Do you simply want to improve your personal contribution, or are you more interested in the collective effort? There is no correct answer. Just think about it and write down some thoughts.
  2. When was the last time you overheard some direct feedback from your end users about how delighted they were in the product/project you’ve been building? What did they say? How did this make you feel? If you’ve never experienced this, what do you imagine it would feel like?

What is Agile?

Agile, in its most general form, should start with the Agile Manifesto. This is essentially a bunch of clever folks pooling their knowledge and experience into some concise insights by which we can test our agile process over time.

The Agile Manifesto home page could be summarised like this:

A is more important than B
Individuals and interactions > processes and tools
Working software > comprehensive documentation
Customer collaboration > contract negotiation
Responding to change > following a plan

This is to say that for each row of this table, items at both ends of the continuum are important. The emphasis is that the items on the left/A are more important than the right/B.

Take for instance this whole playbook and the particular #agileforteams recipe that I’m trying to impart to you - it’s a process. But as you’ll soon see, this process is only made possible by the humans and their human-centric interactions which I will detail. This process I’m about to teach you is very valuable, but the people come first. Remember that.

Now is a good time to have a quick read over the Principles behind the Agile Manifesto. Don’t over analyse it right now. Just have a quick think about how it might relate to the above sections. Those principles will become clearer as we move on. Feel free to come back at any time and revisit them as I unpack more of the #agileforteams recipe.

Agile is open ended, so you need guidance

As you can see, the Agile Manifesto is not particularly prescriptive. It’s open ended and allows for a variety of implementations and only attempts to guide by way of priorities. This also reinforces the fact that agile itself should also be agile, in that your agile process should be adaptable and change over time.

Many don’t know where to get started with agile or find the body of knowledge so vast that the perceived effort is too great. I will detail a flavour of agile which I believe is small enough to learn by doing in the space of weeks. This is something you actually need to practise to learn, like a musical instrument. It’s not a theory activity. It’s a doing activity.

The answer is always: It depends.

Most of the #agileforteams flavour is guidance and goals. This means what I’m going to prescribe or suggest will contain patterns, actions and implementations but you must still apply your judgement as to when something I’m suggesting won’t work for you. Many of these ideas will pay off very quickly. Some might eventually work, but just not right now, or may never work. I don’t know your context so I can’t say. So how does one make such a determination?

Let me teach you a lesson in pragmatic decision making when you want to add something. It might be a line of code, or a new feature or a new element to your process. Test as much as you can with these easy steps:

I encourage you to test the #agileforteams process against the above as you introduce it to your team.

Process first, Tooling second

The first line from the Agile Manifesto reads:

Individuals and interactions over processes and tools

I’ve come to find that of processes and tools, process is far more important than tooling.

Never select a tool to drive your process. Select or create a process first and then try to find tools which support it. Tools here may not be software tools; they could be paper based, or a mental activity, or some kind of game the team plays in order to, for example, resolve disputes or tie-breaking scenarios. Tools often have a point of view, and by using a tool, whether you realise it or not, you do to some degree adopt part of that point of view. It’s best to get your point of view (about your process) straight before selecting one from elsewhere. This will give you clarity and help you make decisions.

When it comes to an agile process, the primary tool for tracking work is pretty important. As already highlighted, the human components are more important, yet the goal with a tool should be to get as much intersection with your process as possible. Everything outside that intersection is pain, frustration and waste. These usually manifest in the form of complicated UX which you don’t agree with, get value from or need. It could also be something you want to do, but the tool makes substantially harder. Many software tools are flexible to some degree. If you can disable a component or feature that you don’t need and the UX becomes better, then that might be an acceptable compromise.

So remember to apply the How-To: To add or not to add steps above when selecting tooling - it’s all going to cost you something. Make sure there is a substantial net gain. In particular with tools which automate a large proportion of your agile process, be sure to select something that’s as close to 1:1 with your own process as possible: Process first, Tooling second.

Questions and Actions

  1. When have you found yourself working in a team with a good process for “getting things done?” What did that look like? What were the important elements, big or small, which made that work so well?
  2. How do you or your team make decisions currently when you want to add something new? Is it even a team discussion or do folks just do their own thing and the rest are informed next time they git pull? What are the pros and cons of this current workflow?
  3. How do you react when you are challenged on your idea? Are you simply never challenged? If so why? Or is this something you avoid, or do you simply relent when confronted? What would you like your response to be? Give this one time. You might need to think about it for a while. Write down your thoughts for your own reference later.

Part 1: The Retrospective

(image: retro tweet - source: Twitter)

If you only implement one thing from this playbook, make it this: launch a Retro (Retrospective) meeting in your team. The core mission of a Retro is to “inspect and adapt” the agile process over time as a team. This is how you do it:

  • It should be a regular meeting which is given priority by all who attend.
    I’m not a fan of meetings, with few exceptions - this is one. This is the single greatest asset your team has, whether using an agile process or not. Everyone should make the time and respect the time of others. You’re not going to magically have this buy-in from all team members. You need to take them on a journey. Over time they will value it and therefore respect the time slot. Shoot for once every two weeks if you can. I’ve found that works best. Weekly is hard for new teams and longer than every 4 weeks can wear away at the trust that we’re trying to create.
  • Invite everyone from your Dev/Build Team: testers/QA, Project Managers, leads, key users/SMEs, architects, designers, product owners, developers and any other stakeholders you can get to come along. Provided this list is not 50 people long, you should manage with anywhere from 3 up to 25 in a Retro, no problem.
  • Ensure that it’s a safe place and a democratic environment.
    Someone will have to facilitate or drive the meeting, and it looks like it’s you again. But over time I suggest you rotate that responsibility around the group. Not everyone will want to do it, but it’s important you try. I’ve seen even the most shy personalities give the Retro facilitation “a shot” after months of meetings.
    The key here is to build trust. It’s building trust first in the Retro, and then the things the Retro stands for: flat hierarchy, team decision making, actionable changes, leadership/management buy-in, experimentation, testing with data, and most of all: human-centric collaboration. Until Skynet writes all the line-of-business/enterprise software, it will be humans creating these things so the emphasis should be on fostering an environment where people can work together, and if you’re really lucky, even like each other!
  • Ensure that actions are acted upon and a follow up is conducted each meeting.
    As you’ll see below in the quick playbook, each Retro should produce zero or more actions. These are usually tasks expressed as Chores (outlined later) on your backlog and generally assigned before the Retro meeting is over. Distribute the responsibility of enacting the Retro’s decisions around the group. Everyone should take part over time in carrying out the decisions of the meeting. The point is not to place blame, but for the team to support each other and find ways to clarify, assist, break up or redefine the priorities as necessary. As you mature you’ll notice sometimes you don’t have any actions: that’s fine, but try to encourage the discovery of actions, even small. Sometimes tasks will “roll over” from meeting to meeting without a resolution - this might mean it’s not really that important, or simply a hurdle the team needs to work around some other way. Remember, your agile process is about finding ways to do things better: inspect and adapt.
  • Try an experiment.
    When a challenge arises or there is some question of “is there a better way?”, invite ideas and allow members to propose solutions. Experimentation is at the heart of agile so try to identify how an idea can be tested. Can one time-box a Spike (short investigation that uncovers more information) of a new way of doing something and report on the findings next Retro? Or perhaps, can the group alter slightly their process or workflow for a potentially better outcome? Review the experiment at the next meeting. Not everything needs qualitative data to support it. A “gut feel” is often validation enough given a quorum.

A great tool to help with structuring Retros is Retromat. Play around with this tool. It will soon become clear what you’re trying to get from a retro and how you can go about it. You may not need all of the components the tool suggests.

Questions and Actions

  • There is only one action and one question for this section. The action is schedule a meeting and have your first Retro. The question is this: did you do it?
    It’s time to put this playbook to real work now. You should be able to convince your leaders to allow a “once off” meeting as an experiment into team building and innovation. You’ll get credit for trying and no boss wants to hear how her subordinate manager turned down an opportunity to “innovate”. If you can gain momentum here with a regular Retro, then you’re on your way to building trust which will set the stage for implementing the beginnings of a meaningful agile process.

Part 2: The Agile “iterative” cycles

Here is where the fun starts. Now we get into the meat of the #agileforteams process. First I’ll introduce the various loops, or iterative constructs. Next I’ll introduce you to some theory which will underpin everything in this section: the principle of #smallbatches. Then I want you to meet my best friend in Agile: Pivotal Tracker. Finally we’ll cover, in depth, each step of the Feature Lifecycle, supported by Tracker, in a practical way so that you can use this section as a reference when (not if) you start your journey with your team.

Loops, feedback and getting up to speed

Agile is a human-centric process. This means that we acknowledge that we’re people working with other people. We’re not primarily machines working together - if we were, it would be so much easier. If you’re anything like me, you know it’s much easier to get machines to communicate. People, on the other hand, are a whole different level of crazy. Conversely, using the principles of agile, we do to some degree want to automate our working style like a machine. More succinctly, we want to become like a production line. A production line scales manufacture of its product by performing the same actions repeatedly, consistently and at speed while minimising waste. In agile we want these things too.

If we can punch out features consistently then we have something we can measure, track and use for forward projection. If we can make our steps simple and easy, then we can make them repeatable and fast. Where an element becomes painful (eg. updating the “hours remaining” each day), we can identify it quickly because we do it so often. Then we kill it or automate it, thus eliminating or minimising the cost. In this way, we receive feedback at a pace set by the duration of the cycle: the shorter the cycle, the sooner the feedback arrives.

The cost of a defective or inefficient element (eg. having to create a Pull Request for even the smallest changes) is proportional to the duration of the cycle too, because we discover it and reduce it faster if we cycle faster. It’s critical then, that we create small loops so that lost effort goes unnoticed for as short a time as possible. Just like a bug in code, I don’t want to know about it next week. I wanted to know yesterday!

When a given cycle or feedback loop matures (that is, it grows, with integrated feedback changes that make it more stable) we can then begin to see it speed up. Now, contrary to some popular management theory, it can’t accelerate indefinitely - these are still people underneath. But what you will find is that over time, the speed naturally increases and you find a rhythm or a pace that is largely free from waste and can be maintained over a long time.
Q. Why might you want this?
A. Predictability.

So what are these “loops” then?

In #agileforteams we subscribe to three main loops or cycles.

The Build Cycle lives within the Feature Cycle which lives within the Iteration Cycle.
the build, feature, iteration nested loops

On the outside (the biggest loop) we have the Iteration. In Scrum, this is called the Sprint. Iteration is a more generic agile term so I’ll use that. The Iteration is the duration over which the team measures delivery to produce a metric that represents their capacity to deliver. This will become clearer later.

Inside the Iteration, we have the Feature Cycle. That is, during a single Iteration we start and deliver many Features. Ideally a feature should be something a pair or single developer can deliver in a single day. But in practice, they range from a few hours to a few days. The team should strive to ensure that a Feature doesn’t exceed the length of the Iteration.

Finally, within the Feature, we have a Build Cycle. That is, once the building of the feature commences, we expect that we show or “release” parts of the Feature to the Product Owner (defined shortly) at least once before completing it. This cycle doesn’t have to loop more than once or twice, but sometimes it does if it’s a big feature. The aim here is the same as before - collecting and integrating feedback as early as possible to avoid heading in the wrong direction, which is just another name for waste.

If you’ve used a variation of these techniques before you’ll probably see a peculiarity emerging. If an Iteration is, say, 1 week, and a median Feature is about 1-2 days, and inside that you’re “releasing” something multiple times, then each Feature is going to be pretty small, right? And you’re going to be deploying pretty frequently as well, right? Great observation! This brings us to our next section and perhaps the most critical thing you’ll learn about in this playbook, and about agile in general: #smallbatches.
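As a rough back-of-the-envelope check on that observation (all numbers here are illustrative assumptions, not prescriptions from the process):

```python
# Hypothetical numbers: a 5-working-day iteration, a 1.5-day median Feature,
# and roughly 2 interim releases per Feature during its Build Cycle.
def deploys_per_iteration(iteration_days, median_feature_days, releases_per_feature):
    features = iteration_days / median_feature_days  # ~3.3 Features per iteration
    return features * releases_per_feature           # ~6.7 releases per iteration

print(deploys_per_iteration(5, 1.5, 2))
```

Even with conservative assumptions you end up releasing something most working days - exactly the cadence #smallbatches is driving you towards.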

The Principle of #smallbatches

Think of #smallbatches like The Force. You have to use The Force. You also have to use small batches! Use them everywhere and with wild abandon.

The Principle of #smallbatches originally comes from Eric Ries’s The Lean Startup. But recently for me, it’s exploded into the underpinning of nearly every successful element to Agile, mature software development and life in general. If you want to just get stuff done, you are doing yourself a favour by breaking large items down into small pieces and then completing those in batches. Why? Well, I’m really glad you asked.

Because Small is Fast

It’s not rocket surgery. The smaller a task is, the faster one can complete it. Done? Move on?

No, it’s more than that. Human motivation for a task has a natural tendency to decay in an exponential fashion.
human drive decays exponentially

You get excited to build this feature, and then by day 3 you hate it! Not only is it no longer motivating; you avoid it, watch YouTube and sometimes even try to hand it off to your fellow developers. The longer a task takes, the higher the likelihood that it will end up being loathed by the one completing it. This is not how you get love into your code. Keep things small and they will get done quickly. People need inspiration, and developers particularly need to see progress and continual momentum. Small helps you get there.

Because Small is Easier

Want to get something done? Make it easy.

“Oh but Matt, life is not easy. Some things are complicated and tricky and you can’t live all your days in fairy-tale-agile-land and expect to be able to decompose everything into small, easy tasks.” Um…
Yes, you can.

It’s not easy making things easy, just like it’s not simple making things simple. But I’ll teach you how and you’ll begin to see the invisible lines where things can be divided, cut in half and broken into smaller chunks. Let someone else do it the hard way.

Because Smaller is Clearer

Or rather, having something with less breadth makes it easier to define clearly.

Clarity and quantity have a kinda logarithmic relationship I’d say. So it stands to reason that with less, making something clear and easy to understand becomes an easier task. You will see below that our Features are so small, the scope can usually be captured in a few bullet points.

Because Small == Low Risk

Using the #agileforteams process, you will learn to deploy Features into PROD (production environment) as small, thin vertical slices of value. That means that each Feature offers the end user something real and useful. It might not be the finished workflow or complete behaviour, but we phrase Features as something completely integrated so that it at least adds some value. This is the test for the lower bounds of how we define “small”. It’s too small if it’s no longer delivering value to the user. Sometimes this can be as small as a new select list on a form, or a new piece of copy in a footer. If that copy in the footer gives the user some needed context that improves the experience then it could be adding value and so it’s big enough. This sets the bar very low and allows us to cut things up many, many ways.

Pushing small vertical Features into PROD is a low risk exercise. Now provided that you have an automated build pipeline, a staged-deployment strategy and other development best-practices like automated testing and production monitoring, etc (I’ll help you get there too), you should not feel hesitant about putting small things into PROD. A small change into PROD can be rolled back (or forward!) quickly and easily. If it’s wrong, the impact surface is less. The risk is less. Integrating and deploying many changes and days worth of work at once is risky and dangerous. Free yourself and your managers from anxiety by using #smallbatches to reduce risk.

Because Small gives you Fast Feedback

We’ve talked about feedback and feedback loops and this is entirely enabled by keeping things small. If you’re heading off in the wrong direction because the scope wasn’t clear enough, or you’ve made the wrong assumptions or you’re just building it “how I think it should be done” then you’re going to spend less time on that wasteful activity if the average Feature size is smaller.

This happens less with Pair Programming (pairing) as the other party usually detects the deviation to “wrong town” much sooner given they’re watching you type. But even if you don’t use pairing, you can expect members of your team to churn on something that later turns out to be the wrong choice a lot less often.

The other (more important?) feedback that you receive faster by having things small is the feedback from the customer or Product Owner. Often large whole features are broken down into smaller (yet still value delivering) features and delivered to the customer for their validation. It’s this fast customer feedback that keeps your team on track and the smaller the tasks, the less time you waste in the wrong direction.

For the model aircraft aficionados out there, you can think of it like a well-tuned PID loop: small, frequent corrections keep you on course.

Small often removes dependencies

This is not always the case, but many times, we’ve found that by breaking large items into smaller pieces where we can express the value of each component independently, tight coupling becomes less tight. Some things still make sense to be done in a certain order. eg. The first part of the signup form should have the email. Later stories can capture less critical user data, but if the email is the user’s key in the database then there really can’t be a story which delivers a valuable signup form without the email field.

Often the Product Owner can worry about things like this, instead of the team, and maintain implicit dependencies using the order of items in the backlog. This often alleviates the need to use first-class dependency features in tools. It’s just another thing not to use unless it delivers more value than it costs.

So how do you get to Small?

Half, then half again is a great read but I want you to take this away: you can always make something smaller than you think.

You want to break things down by identifying the most important elements first and then seeing how small you can make those while they still make sense and deliver some value. You’re not necessarily cutting the rest (although, that’s often a great idea!) but you start to draw lines around sub-parts and these boundaries help you to find your collection of small Features that make up the whole.

It’s another human-centric activity to coach a customer or Product Owner through practising #smallbatches. It’s not something folks immediately grok. If I can teach it to you, then you can teach it to your teammates and then they can coach your customers and stakeholders through it too. Sooner or later you’ll naturally see those magic lines of separation and be breaking things down into small parts all day long.

I want to sell you an idea first and then show you how to achieve it. This is why I explained all the benefits of #smallbatches first and then explained how it’s done (and there’s more examples to come). It’s not easy. Breaking things up and making it part of your team’s customer interaction and standard practice takes work, time and practice. But it’s worth it! Just like many of the following tools and techniques are worth it. You need to decide for yourself however. Use the How-To: To add or not to add steps above and weigh the benefits and the costs. In my experience, the benefits and joy of having things small far outweigh what it takes to get there.

Pivotal Tracker

Pivotal Tracker (Tracker) is an agile work tracking tool. Tracker takes the science of Scrum, the flexibility of Kanban and formalises many of the vague and open-ended agile principles into a concrete and straightforward system. Put simply, Tracker gives teams a digital wall for storing work, tracking the rate of delivery and using said rate to forward project new work.

Compared to other tools

Tracker, the tool, has about an 85-95% intersection with the #agileforteams point of view. This means that there is little waste and distraction when using a tool such as this to support our process. Here are a few reasons why other tools don’t carry as great a net gain in comparison:

  • Physical card walls are great and many would argue they’re all a team needs. I also agree they have their advantages over software/digital walls, especially from a human-centric point of view. The problem is that teams are almost always distributed. Even a team who has the luxury of sitting together on the same office floor in the same bullpen will have members working from home sometimes, and others who should be included in the “team” participating from elsewhere. It’s simply not practical to impose the limitations of a physical wall on your team any more. You get people’s best work by allowing them to work how they want to work. If that’s at home on the couch or down in the cafe then (provided your boss allows it) let them be. Physical walls don’t cater for this well and the workarounds often hinder effective parallelisation of work.
  • Trello is more lightweight than Tracker and has some handy features that Tracker does not. Trello however does not give first-class support to tracking delivery capacity and forward predicting dates with it. I understand there is a Chrome extension but the UX isn’t great.
  • VSTS (sometimes called TFS) is a tool from Microsoft which includes build/CI and deployment features in the same place. This is great if you use the MS stack and therefore have free licences, and if cheap tooling is more important than the right tooling. But as a work tracking tool, its basic design hasn’t changed in a decade and many of its interfaces still reflect the MS Excel/Access interfaces it was originally born with. Yes, it has some first-class support for tracking delivery capacity, but the rest of the experience is mediocre, largely I’d say because it’s a heavily integrated tool trying to do way too many things.
  • JIRA is similar to VSTS in many regards - it’s trying to be all things to all people. This is a curse: as energy is split across many outcomes, the time and love for each is diminished. JIRA was birthed as an issue-tracking tool and even today, many of the first-class agile features are still backed by “issues” which seem to surface every now and then creating (yeah, you guessed it) more issues for your team who are just trying to track their work. Much of the capacity tracking and estimate projection is available in JIRA through reports but it’s a little indirect. Because JIRA doesn’t want to take on one particular point of view, it has a complex workflow engine which is almost infinitely customisable. Now if you know anything about enterprise systems, then you will know that as you approach the far end of the flexibility continuum, you tend to find said flexibility injecting unnecessary complexity throughout the rest of the application. This is certainly the case with JIRA. While JIRA appears to be very flexible in many ways, in others it’s not. For example, last time I checked, you still had to “plan sprints”. This is a clunky process and one I highly discourage as it inflicts substantial damage on team morale.

Thanks to its customisability, JIRA is probably the tool which you could bend to the #agileforteams point of view most closely. The problem is in the bending, and the UX - it would take a lot of work and there would still be many parts of the tool you don’t want, that you can’t turn off, that would get in your way. Tracker was designed to do one thing really well - track work, and surface the rate of delivery using an experience optimised for just getting on with delivering. None of the above tools can boast this.

Finally, do not underestimate the power of the little things in your tooling that frustrate your team mates. Those poor UX elements, or the “extra hoops” or clicks a user needs to make just to create a Feature, are going to result in disengagement with the tool. I’ve seen it over and over. What good is a tool if your team doesn’t participate? The delta between your process’ point of view and your tool’s point of view represents how likely your process is to fail. I’ve found Tracker to be simple enough that team members are happy to engage with it and often remark what a joy it is to use. When was the last time you heard a project management tool spoken about like that?

The Killer Feature

Tracker knows how much you can deliver in a given Iteration, so rather than just telling you that number, it tries to answer your next question: So when can I have Feature D or Feature K?
tracker backlog with capacity planning

Those Features (yellow star) have a cost to them and that cost is expressed in the same terms as the computed capacity of your team. I’ll explain later how Tracker calculates this, but for now assume Tracker knows it. It’s 7 points in this example.

Now that Tracker knows that in each iteration your team can deliver 7 points worth of work, answering the question “When can I have x?” becomes trivial. Tracker simply breaks up the backlog into iterations and their associated weeks, and groups the Features into the iteration they’re likely to be delivered in, based on that capacity.

This is not a special view for managers or team leads - everyone sees the backlog like this every day and so everyone knows what’s happening next and when it’s likely to get delivered.
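To make the mechanics concrete, here is a minimal sketch of that projection. It is an assumption on my part about how the grouping works (Tracker’s real algorithm is its own): walk the backlog in priority order, filling each iteration up to the team’s velocity, rolling stories over when they don’t fit.

```python
def plan_iterations(backlog, velocity):
    """Group a point-estimated backlog into iterations (a sketch, not
    Tracker's actual algorithm).

    backlog:  ordered list of (story_name, points) tuples
    velocity: points the team historically delivers per iteration
    Returns a list of iterations, each a list of story names.
    """
    iterations, current, remaining = [], [], velocity
    for name, points in backlog:
        # A story that doesn't fit rolls over to the next iteration.
        if points > remaining and current:
            iterations.append(current)
            current, remaining = [], velocity
        current.append(name)
        remaining -= points
    if current:
        iterations.append(current)
    return iterations


# With a velocity of 7, "When can I have E?" answers itself:
# it lands in the fourth iteration.
backlog = [("A", 3), ("B", 5), ("C", 2), ("D", 5), ("E", 8)]
print(plan_iterations(backlog, 7))  # [['A'], ['B', 'C'], ['D'], ['E']]
```

Note that an oversized story (like the 8-pointer here) still gets its own iteration rather than blocking the plan, which mirrors why we break such stories down.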

Questions and Actions

  1. When was the last time you used an enterprise tool that was fun or a “delight to use” every day? Write the number of elapsed months and the names down or reply to this tweet and tell me about them.
  2. Watch this short Intro to Tracker video and take a glance over the basic features.

Part 3: The Story Lifecycle

So what do #smallbatches actually look like? They usually manifest as User Stories (stories).

You may have used User Stories in your team before, but I’m going to give you a few more constraints, tips and suggestions to get more out of them.

The first thing we need to get straight is the Story Lifecycle. This is a cycle defined in our #agileforteams process first, and then supported by our tool, Tracker, second.
story flow

The above direction of flow indicates that stories must evolve in this sequence. For example: we cannot start a story without first putting an estimate against it.

The other thing you’ll notice is that I prefer not to use the term “done” very much. Done is overused in the Agile space and in my opinion is often ambiguous: code complete? pushed to the server? built? integrated? deployed (to where)? signed off or just waiting around? To solve this ambiguity, teams often adopt a documented Definition Of Done; while I’ve used those in the past and agree they’re valuable, you don’t always need them. “Done” in the above image means the end of life of a story: it’s in PRODUCTION or a PROD-like environment (heavily, if not completely, integrated) and the customer or Product Owner has accepted it. When the Product Owner accepts something, they transfer the ownership from the Build Team to themselves. This is where value is created.

The #agileforteams process is a very “customer direct”, hands-on, human-centric and collaborative way of working.
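The ordering constraint above can be sketched as a tiny state machine. The state names here are my assumptions (derived from the lifecycle described in this part: create, define, estimate, start, finish, accept); the point is simply that each transition is only legal from the state immediately before it.

```python
class Story:
    """Sketch of the Story Lifecycle ordering. State names are assumptions:
    created -> defined -> estimated -> started -> finished -> accepted."""

    ORDER = ["created", "defined", "estimated", "started", "finished", "accepted"]

    def __init__(self, title):
        self.title = title
        self.state = "created"

    def advance(self, to):
        # Stories must evolve in sequence: e.g. no starting without an estimate.
        i, j = self.ORDER.index(self.state), self.ORDER.index(to)
        if j != i + 1:
            raise ValueError(f"can't go from {self.state!r} to {to!r}")
        self.state = to
```

For example, calling `advance("started")` on a story that has been defined but not yet estimated raises an error, which is exactly the rule the lifecycle enforces.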

Writing a high-quality Story

In the same way that there is no “right way to do agile”, there is no perfect way to write a story either. Here’s how I’ve been doing it for the past few years. Start with this and then adapt to suit your team over time.

The conventional User Story syntax looks like this:

As a <user role>, I want to <what to build> so that <value to be realised>

An example story might be:

As an Admin, I want to see the last page requested for each user session, so that I can see where users abandoned


As a new customer, I want to hide the optional sign up fields, so that there is a less confronting form and we get less churn

The important elements here are:

  • The <user role> or the point of view for the story. Who is this for, or what is the role of the user who will be experiencing this feature or behaviour?
  • The thing you want to do or build: <what to build>. This is a summary of the body of the story. It’s the work to be done.
  • Lastly we have the <value to be realised>. This is the most important element, in my experience. It conveys to the team why we should be doing this work.

With that last point in mind, #agileforteams adopts the following syntax:

STI <value to be realised>[, AA <user role>], IWT <what to build>

As you can see, I’ve changed the order, made the <user role> optional and created some shorthand so that we’re largely reading important words rather than “convention bloat”.

  • STI: So that I..
  • AA: As a..
  • IWT: I want to..

Some examples using the new form:

So that we have a better UX and get less signup churn, IWT hide optional form fields


STI can better track user abandonment, AA Admin, IWT list the last page requested for a Session


STI can tell if the documentation has been updated, IWT publish a version token in the footer

These are the story names or titles. These would be on the front of the physical card (if we used a physical wall) and so are contextualised at a “high level” and omit most of the detail.

Many teams trying Scrum or other agile patterns often omit the <value to be realised> when it’s at the end of the line. It’s also sometimes harder to form this part. Compare that with Add first part of commit hash to page footer - easy! I know what to do. But maybe we don’t know why. Or maybe we think we know why, and odds are that as a developer, I want something for different reasons than the PO or even the end user. The core value of a story can only be understood when we define the why explicitly. This is why the <value to be realised> is required and comes first. It also helps one justify whether we should even have the story at all - if you can’t answer why? then maybe you don’t need the story. The PO “owns” the product, but I encourage all members of the Build Team to occasionally challenge a story as a humble conversation with the PO. If done well, we are sometimes able to influence a PO to self discovery and cutting a feature out entirely. This is always a positive thing.

In my experience, many more stories using the conventional syntax start with “As a user”. This bothered me over time, and like any good code refactor, after a lot of usage and waste, I decided to cut it out. And by “cut it out” I mean, make the <user role> optional.

Feel free to expand the “STI” and “IWT” shorthand as necessary to make the story title readable, as I did in the examples above.
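Since the shorthand is a simple word-for-word substitution, a tool (or a reading aid) can expand it mechanically. This is an illustrative sketch, not part of any real tooling; the function name and dictionary are mine.

```python
# The STI/AA/IWT shorthand and its expansions, per the syntax above.
EXPANSIONS = {"STI": "So that I", "AA": "As a", "IWT": "I want to"}

def expand_title(title):
    """Expand the STI/AA/IWT shorthand in a story title for readability."""
    return " ".join(EXPANSIONS.get(word, word) for word in title.split(" "))


print(expand_title("STI reduce signup churn, IWT hide optional form fields"))
# So that I reduce signup churn, I want to hide optional form fields
```

A naive token substitution like this won’t handle every grammatical variation (e.g. “So that we…”), which is exactly why the text above says to expand the shorthand by hand as necessary.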

Any member of the Build Team can create a story, but it’s the PO and Build Team together who define it. That is to say that the second step in the Story Lifecycle, Define Acceptance Criteria, must be completed by the PO and at least one other member of the Build Team. We want to avoid wasted communications, so having a developer and the PO (and others) together to Define the story will likely avoid unnecessary to and fro.

Defining a high-quality Story

On the back of a story card, we usually have the Acceptance Criteria (AC). These are the conditions that need to be met for the work to be considered complete. Usually they are statements expressed in the positive future tense, like so:

  • Address, Nickname and Social Media Handle fields are hidden on initial load
  • We include a “Show optional fields” link above the hidden area
  • When link is clicked, the hidden section and fields are shown
    • “Show optional fields” is removed after first click. There is no “re-hiding” behaviour

As a clarity tool and reinforcement mechanism for PO and developer, we use the expression “this is done when” instead of Acceptance Criteria. It’s more “plain language” and readable. It also makes more explicit the notion that “we are finished” when we satisfy these conditions. I’m sure I don’t have to tell you developers about the pain of POs changing their minds and scope creep. A developer wants to be confident about when they are finished and confident that the PO will accept the work they’ve done.

Most stories, if small enough, will be completely defined with a good title, a few bullet points/notes and sometimes a sketch or marked-up screenshot. As an example, the completely defined story might contain only the following:

So that we have a better UX and get less signup churn, IWT hide optional form fields

this is done when:

  • Address, Nickname and Social Media Handle fields are hidden on initial load
  • We include a “Show optional fields” link above the hidden area
  • When link is clicked, the hidden section and fields are shown
    • “Show optional fields” is removed after first click. There is no “re-hiding” behaviour

NOTE: might want to use visibility: hidden; instead of display: none; due to existing JS complexity on that page.

This would be enough for the team to begin estimation and get it delivered.

As you can see, the team uncovered a potential implementation detail in the course of discussing the requirements with the PO and added it to the story as a note. This is fine but should be kept to a minimum. Sometimes this happens at the definition stage, sometimes at estimating; but be sure to keep it separate from the formal this is done when. Try to phrase the story definition as user-oriented and implementation-agnostic as possible. This gives the developers the freedom to innovate and the PO confidence about meeting their objectives.

Estimating a Story

Remember how I said that if you only take one action from this playbook to your team it should be the Retro? Well, the second most critical action to your #agileforteams process is the Estimation; specifically consistent and repeatable estimation. Sometimes this is called Backlog Grooming or Backlog Refinement. Those terms don’t work for me because they suggest that it’s kinda optional, and for us, it isn’t. Other teams call it “pointing [a story]” and you’ll see why below.

Not every Agile process will be scientific. Many of the common agile patterns don’t need to be. Scrum practitioners will generally apply the following but, as usual, I’m going to refine it a little more for you so that it’s not only easier to use, but substantially more valuable.

Don’t get stressed or hung up on pulling off estimation activities “by the book” just because I’ve said it’s so important. Just carry these tips with you and try to throw a new one in each time and eventually, over time, you’ll find a consistent groove.

Being able to reliably predict future delivery rates based on historic delivery rates requires control. Controlling the inputs ensures more accuracy for the outputs. It’s just science.

The rules for our estimation are the following:

  • We use “points” instead of hours or person-days. Our points are a Fibonacci sub-set: 1, 2, 3, 5, 8 and “too big”.
  • We use Planning Poker for “throwing down a number” and we use our hands to do it. Avoid apps and playing cards if possible. Like the Retro voting, we don’t reveal votes until everyone has silently selected a number. Everyone reveals together.
  • When there is only a single size difference (eg. min is 3 and max is 5) we “go large” (select the 5).
  • Use all 10 fingers to indicate “too big” and as early as possible, break these down into 2 or more smaller stories.
  • When there is wider deviation, a “low ball” team member gives reasons for their low vote and the “high ball” member explains theirs too. Then discuss.
  • Limit discussions (strictly 2 mins if you have a lot of stories to estimate; use a stopwatch) and invite a second vote to zoom in on a number. Two, maybe three, rounds of voting should be enough to get to the same number or a single size difference. If not, split the story or create a Chore to investigate the requirements further.
  • Log blocking questions for the PO, update the AC/this is done when as you better understand the story, and add notes/reference links to the description while you’re discussing the work.
  • Log the agreed estimate/points number at the end and quickly move on to the next story.
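The vote-resolution rules above can be sketched as a small function. This is an illustrative sketch of the rules as written (consensus, “go large” on a single size difference, discuss and re-vote on anything wider), not a real tool; the function name is mine.

```python
POINTS = [1, 2, 3, 5, 8]  # our Fibonacci subset; all 10 fingers means "too big"

def resolve_votes(votes):
    """Apply the estimation rules above to one round of Planning Poker votes.

    Returns the agreed estimate, or None when the spread is wide and the
    low- and high-ballers should explain their reasoning before a re-vote.
    """
    lo, hi = min(votes), max(votes)
    if lo == hi:
        return lo  # consensus: log it and move on
    if POINTS.index(hi) - POINTS.index(lo) == 1:
        return hi  # single size difference: "go large"
    return None    # wider deviation: discuss (2 min max), then vote again
```

For example, `resolve_votes([3, 5, 3])` lands on 5, while `resolve_votes([1, 3, 8])` returns `None` and sends the team back to a timed discussion.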

Have a look at the following scenario and then after, I’ll unpack some of the details.

I know this example doesn’t cover all the scenarios and variables but I’ll try to cover many questions I hear frequently and some of the more common variations.

  • Let’s start with story points.
    Developers are notoriously inaccurate when estimating in hours and days. Just ask any project manager. It’s also a recipe for bias - I want to show my team and myself that I have a handle on something and I can do it (quickly), so I’m mentally incentivised to offer a smaller than reasonable number. Most developers’ estimates come in at half the actual time, or less, when using linear time.
    Points push all that to one side and free a technical team member to think more clearly. We usually say that points are in terms of “relative complexity or effort”. Complexity works for some personality types. Effort for others. The other thing to note is that a 2 is not necessarily twice as much as a 1. Think of 1, 2, 3, 5 and 8 simply as containers. 5 is the “middle of the road” container and 1 is the smallest thing the team will ever do. In that “1 container” might go a one line change that takes 10 mins or a re-styling effort of 3 hours. But it’s the smallest so the smallest items of relatively the same size get a 1.
    8 on the other hand is absolutely the biggest thing any member of the team wants to commit to in an iteration. If your iterations are 1 week and Bob thinks the story can be done in a week but Sarah thinks longer, then it’s more than an 8, and should be broken down.
  • Fewer tools is more.
    Avoid the Planning Poker cards and apps and just stick to your hands. This also influenced the selection of Fibonacci numbers.
    We don’t use the size 0 because it’s somewhat confusing and dangerous (like a Chore - more later).
    We don’t use more numbers because you get diminishing returns by increasing the resolution of estimates much further. Tracker also offers “t-shirt sizes” S, M and L, which I’ve found helpful for planning heaps of work up front very quickly, but not particularly practical for day-to-day work or for teams new to this process.
  • At all times, remember: we’re just estimating.
    I’m preaching to myself here. I often have to remind myself that these are all just estimates. It helps you relax a little and not apply as much pressure to yourself (and your team mates) and sometimes get to a number quicker.
  • Use the Done column to reset your base of reference.
    I didn’t include this in the narrative above because if you’re estimating regularly (every 2-3 days), each team member subconsciously knows the relative scale of each point container. But more often than not, it’s helpful to pull up the Done column in Tracker and refresh the team on what each size means, using recently completed work:
    use the Done column for estimating
    Scroll to the bottom of the Done column (most recent completed iteration) and work back until you find one or two stories of the sizes the team is debating that match the container for the current story being estimated. This is another mechanism to ensure consistency over time for the estimation inputs. You’re essentially reminding each team member “what a 3 looks like” or “what a 5 looks like” prior to each silently forming their estimate. This historic reference is vital in ensuring that work of the same relative size receives the same estimate over and over again.
  • PO does not vote.
    After a story is defined, the developers usually estimate it. Sometimes other members of the build team can help but this activity must absolutely exclude the PO. The PO has a conflict of interest when it comes to estimating how big a story is, so they need to stay out. If your PO can sit in on the discussion to answer questions and further clarify, then that can be helpful but I’ve found they often influence the estimate, sometimes simply by being present. If you can find a way to meet with the PO and clarify the AC separately and estimate without the PO, I think you will have more success. Discuss in your Retro if you have issues.

This, like many practices, simply gets better over time. It’s ok for it to be a little slow at first. Like commits, stories, iterations and release phases, the smaller you can batch things, the faster you’ll cycle and the quicker you and your team will learn.


Train yourself to get over the fear of starting!
you need to click this Start button asap

For some reason, even folks who are engaged to use Tracker seem hesitant to click Start. I think I’ve done it too and here’s why:
Aside from the automatic ownership assignment the tool makes when a user clicks Start, there is a psychological commitment you’re making. In a working environment where everyone is cross-skilled (ideally) and can pick up any story, it’s easy to think “Oh, I’ll leave that one for Sarah and get the next one.”
Please don’t do this!

Grab your lightsaber, adopt your attack posture, click the Start button and then get it shipped!

There is this unknown time where developers implicitly “start” a story before clicking the button. They’re looking at code, trying to understand the requirements and to some degree planning how they will execute the build. This is great and real work that needs to be done. But rather than “evaluating the work” (coz that’s what you’re really doing), indicate to the rest of the team that you’ve taken it on by clicking Start. Then the story moves from the Backlog to the Current (Current Iteration - the work that is now in progress) column and everyone can see what everyone else is working on. During that “evaluation period”, your team mate might become blocked or idle and she might pick up the same story because according to Tracker, no one is currently working on it - now it becomes waste. So please, click Start.

The other reason clicking Start is so important is because #agileforteams doesn’t subscribe to (the conventional implementation of) Standups. Unless you’ve found a way to make the daily Stand Up meeting more valuable, its usual goals are to surface to the team what each member (a) did yesterday, (b) will do today and (c) whether there are any blockers/impediments.

I can see what you're all doing
(here Current and Backlog are combined, on top and bottom respectively, rather than in separate columns)

From the above I can see that:

  • Sarah has a Chore and a Story she’s not yet finished.
  • Barry also has a single story he’s working on.
  • There’s a story there which is complete from the Build Team’s point of view - it’s simply waiting on the PO to Accept it.
  • The next piece of work to be started is listing various details for machines.
  • After that one, there’s a 5 point story that should get started next.

If the current Velocity (rate of delivery) is correct then those week boundaries should be accurate too. So chances are, those un-started stories won’t get started tomorrow. You can already see that I’m now getting even more context than a regular standup meeting and I didn’t have to ask anyone anything or waste valuable time. If I wanted to see what was Accepted yesterday I just open up the Done column.

You might notice that in the screenshot above both Sarah and Barry have assigned the next few stories to themselves in advance. Unless you agree to do this as a team, please try to avoid it. Whether you’re there or not right now, you should want to move to a “cross functional” style of working where every team member can start any piece of work. This way you ensure maximum parallelisation of work and minimise specialisation and SPOFs.

Can you see now why it’s so crucial to keep the context in Tracker as up to date as possible? Tracker works best when each member of the team is adding comments to stories, updating the status of stories and reading comments from others on a daily basis. Free everyone to work how they prefer and where they prefer by using the collaboration features of Tracker to their fullest potential.

Another thing I try to suggest when a story is stretching out over multiple days (which some do, and that’s fine) is inviting the owner of the story to add a quick progress note to its comment thread before leaving for home. If those working on the story are already leaving comments throughout the day (communicating with the PO, other team members, or simply logging context because they’re particularly verbose) then obviously skip this. But it’s hard for a lead, PO or manager to know whether work has stalled or someone needs extra support if a story has sat in the Started state for 3 consecutive days. Use the tool and reap the rewards.


Now I’m going to talk a bit more about software development patterns. I’m going to insist on a few things in this section and for some of you this might be scary. Maybe you do these already, and if so, power to you! It will make this whole agile thing much easier. But for the others, you may have only heard of these things, and have wanted to do them for years. Yet still others will start to freak out as I unroll this picnic blanket of surprises.

These are not “Matt’s rules”; these are the building blocks of high-performing software and product teams, and many if not all of these patterns and practices enable the #agileforteams process. I’m going to explain what the tool or pattern is, why you should use it and how it supports our process.

“But I can’t change all the things Matt! My team is old school..” or “doesn’t like change..” or “too invested in its current way of doing things..” or some other excuse. Look, nothing good comes easy, okay? I already know you want to; that’s why you’re here, right? You’ve already launched a major piece of change: The Retro. You did that, right? Like I asked you to (if not, go back to Part 1). That’s the beginning, and it’s already forming trust and a greater degree of rapport and respect with your team. Take each step small and you’ll get there. Like my old workout buddy used to say with a cheeky smile, “Eh Matty, that pain you feelin now cuz, that’s your body getting stronger!”

The key here is having the ability to push code and partially completed work to your PO after the Started event and before the Finished event of the story. My team frequently starts a story, gets something that’s not quite right but mostly working, deploys it hidden behind a Feature Toggle, and then sends the PO a link in STAGING or PROD inviting their feedback. “Hey Bruce, how’s this form look?” or “Mary, I’ve got this drag and drop behaviour mostly working, should the touch area be bigger on your mobile device?” Then the work is refined and completed with a high degree of confidence in acceptance after formal delivery. Sounds lovely right? It is. But it takes many of the below strategic decisions to make it possible.

Mainline Development

I’m going to define clearly what I mean here because this term, “trunk-based development” and Github Flow share many elements but there are some small differences which will be incompatible with our process.

So the rules:

  • Everything in master can and will go into PRODUCTION. This means that if it’s in master, then it’s ready for PROD.
  • Use simple Feature Toggles or Feature Flags (like DevCookie) to enable CI and partially-completed stories to be integrated all the way to PROD.
  • Use branches only for Features and story-related dev work. We don’t create “release branches”; we don’t create “dev branches”.
  • Use Pull Requests for said feature branches if it’s a significant change. 95% of them are significant. The small 1 or 2 line changes, or very well understood larger changes, can be committed directly to master. This is fine.
  • Other than developer workstations, use only TWO/2/II deployment environments: STAGING and PRODUCTION. You simply don’t need more.
  • Use a CI/CD pipeline to automatically integrate everything from master into the STAGING environment. Then have a “single click” deployment step to promote what’s in STAGING, into PRODUCTION.
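To make the Feature Toggle rule above concrete, here’s a minimal sketch of a DevCookie-style toggle. All the names (`toggles`, `isEnabled`, `renderSignup`, the cookie naming scheme) are hypothetical, just to show the shape of the idea: WIP code sits dark in master and PROD by default, and a cookie override lets the PO preview it.

```javascript
// Minimal feature-toggle sketch (all names hypothetical).
// Toggle defaults come from config per environment; a DevCookie-style
// override lets reviewers see hidden WIP in STAGING/PROD without
// exposing it to end users.
const toggles = { newSignupForm: false }; // default state

function isEnabled(name, request = {}) {
  // A dev cookie overrides the default so the PO can preview WIP.
  const cookie = request.cookies && request.cookies[`toggle-${name}`];
  if (cookie === "on") return true;
  if (cookie === "off") return false;
  return Boolean(toggles[name]);
}

// In the rendering code, the incomplete feature stays dark by default:
function renderSignup(request) {
  return isEnabled("newSignupForm", request)
    ? "new form (WIP)"
    : "old form";
}
```

The point is how small this can be: no toggle framework needed to start, just a conditional that your WIP hides behind until the story is Accepted.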

So why?

  • Because merging sux, and integrating everything into master with small commits and git rebase ensures you don’t suffer scary merge pain. I’ve done the hour-long merge many times and that was using rebase! Some merges are so bad you copy and paste the changes in from scratch - I’ve done those too.
  • Because having everything integrated in PROD is awesome, is the ultimate test of “readiness” and just saves time messing around with other silly ideas like TEST, QA, SIT, STAND, CROUCH and whatever else weird environment names we use today.
  • Pull Requests are great, even if only for drawing a logical container around multiple commits - something that comes in handy when making smaller, and therefore many more commits. They’re also a great code review tool and one I highly recommend over automated code quality tools like SonarQube, etc.

And this enables #agileforteams how?

  • The Building step of our story wants feedback as soon as possible from the PO, ideally with the work in PROD (but STAGING will do now and then). So we need a way to push partially complete work (WIP) into master without preventing other work from passing by it, like an emergency bug fix. Feature Flags and mainline dev work well for this but require a little more discipline.
  • It’s simpler, so it makes automation many factors less complex and therefore easier to scale.
    My CI to STAGING config in TeamCity generally has Build, Run Tests, and Deploy to STAGING steps. The only other thing in there is the deploy to PROD which I can do in Octopus Deploy with a single click. Just to be clever, I sometimes duplicate that click in TeamCity with a second config Promote STAGING to PROD which just calls the Octopus API for the above click anyway.
    simple automation examples with TC and Octo (This is seriously as complex as my TeamCity/Octo setup gets most of the time. Config as code becomes hard to justify when it’s that simple.. but still do it if you can)

Automated Testing

Testing in the generic sense affords your product two things: confidence that you’re building the right thing; and confidence that you’re not breaking anything. The path to this confidence, I will break into two categories:

  1. Proactive Automated Testing
  2. Continuous Monitoring in Production.

Proactive Automated Testing is familiar to many, simply termed Unit Testing. I take issue with the term Unit Testing because it’s actually only a very specific category in the wider corpus of automated testing. A healthy application should be employing some browser automation and js testing (more js tests if your app is SPA/interaction heavy), some unit testing, but most of the tests should be heavily-integrated tests, probably Subcutaneous tests (Sub-C tests). I’m aware this inverts the Testing Pyramid and I’m fine with that. The Testing Pyramid is wrong. You want “value-delivering tests” and this means each test should be “tested” with the How-To: To add or not to add steps above. Every test needs to justify itself and have a good Net Return.

Some tests are low cost and low value (unit tests). Some tests are high cost and high value (like browser-automation/UI tests). Then still, there are some tests which are low cost and high value (Sub-C tests) - you should have more of these, because they’re the best value for money. To be clear, “cost” here is the up front writing cost and the maintenance of the test over time. Unit Tests suffer in particular when it comes to ongoing costs as they are often heavily coupled to production code, which then causes constant refactoring. I’ve seen teams with 1000s of Unit Tests get so bogged down by test maintenance each iteration that they almost give up on testing altogether (yes, turning huge swaths of them off). Automated testing is a discipline in its own right. It takes work to get value from it.
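To illustrate what a Sub-C test looks like versus a unit test, here’s a hypothetical sketch. The service, store and function names are all invented for this example; the point is that the test enters just below the UI, exercising validation and persistence together through the same entry point a controller would call.

```javascript
// Sketch of a subcutaneous (Sub-C) test: exercise the app just below
// the UI, through the service layer, with real wiring but no browser.
// All names here are hypothetical.
function createSignupService(userStore) {
  return {
    signup(email) {
      if (!email.includes("@")) return { ok: false, error: "invalid email" };
      userStore.push(email);
      return { ok: true };
    },
  };
}

// The test hits the same entry point the controller would call, so it
// covers validation + persistence in one cheap, stable test.
function testSignupPersistsValidUser() {
  const store = [];
  const service = createSignupService(store);
  const result = service.signup("mary@example.com");
  if (!result.ok || store.length !== 1) throw new Error("Sub-C test failed");
}
testSignupPersistsValidUser();
```

One test like this covers a slice that might otherwise need several coupled unit tests, which is why Sub-C tests land in the low-cost, high-value quadrant.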

Your software should have as many valuable tests (of varying types), as is necessary for your team to feel confident that the application is performing as expected, and that given changes over time, you’re not going to break the most crucial elements of that application.

I’m very passionate about testing, testable code, TDD, and using testing to drive clarity, simplicity and delivery. Sometimes I think I could write a playbook on that by itself. So instead, please read the articles in the links above.

Continuous Monitoring in Production

Active production monitoring, application telemetry and profiling tools are best practice and have been around for a while. Continuous Monitoring in Production builds on that by asserting that a system where many of the integration risks can’t be exercised fully (or even understood) until PRODUCTION, requires automating testing in the live production environment all the time.

With the above definition, this knowledge is fairly new. All I’ll say is that you can’t catch everything before you deploy and many current best practices try to teach us that we can. These include coding standards, code review processes, automated testing, manual testing/QA, acceptance testing, etc, etc. These often result in way more process, tooling and automation which eventually becomes bloat and cost, leading to diminishing returns. There is a happy place, and you and your team should be constantly tuning your process and tooling, trying to find it. Err on the side of simplicity at all times, and remember we are Agile, so if something goofy goes into PROD, it can be unwound easily. Especially if you have..

Fast Deployments

In my simple feature/bug example above, once the code is done and tests are written and passing, the waiting is now on the automated pipeline. In simple terms, the total time from my ENTER key to said change being live in PROD in front of end users is the sum of:

  • Pulling dependencies
  • Building
  • Running automated tests
  • Deploying
  • Network latencies and other tooling limitations (polling windows, concurrent agents, queuing, CPU/IO performance, etc)

In a recent project, excluding the human step to check the STAGING environment before pushing the Promote STAGING to PROD button, the total time was about 2-2.5 minutes. You want to optimise your automation to ensure that build and deploy cycles are as fast as possible. Unlike many other software efforts, these automation activities are very linear in terms of costs but have huge scaling value. You can generally afford to spend many hours on (learning) automation knowing that over time you will get that back many times over. But please, use your pragmatic decision making skills - don’t do too much too early.

Some things that might offend the Software Police, but get your team more automation agility include:

  • Storing your packages in source control.
    I know it’s not ideal for many reasons but stop saying “what if?” and start small. You may never encounter those issues because many of them require scale which you simply don’t have now. This is not so bad that you can’t change it later if your team decides you need to. npm is particularly slow these days and can add anywhere from a couple of minutes to double-digit minutes to a build step. Think about what you stand to gain by relaxing this constraint.
  • Adding more hardware is a perfectly acceptable, and often very inexpensive way to cut seconds and sometimes single-digit minutes from a pipeline. A hyped up Xeon, with RAID0 SSDs, under your desk will often be faster and cheaper than many of the cloud alternatives. Consider the whole “value”: sure there are wicked-fast VMs but they are expensive. Even reserved instances running build agents incur warm-up costs so try to put everything in perspective.
  • Hack your build tooling like Stackoverflow did to get more parallelism than the tools provided natively. This option takes a dependency on the one above - you need more cores and fast IO. But as you can read in the article linked above: it worked!
  • Run your tests in parallel. This is especially important as you move to more heavily-integrated tests like Sub-C and browser tests. But it does work, and you can still have 24 months’ worth of test portfolio complete in 3-4 minutes. Compounding this with more hardware is a relatively easy win.
  • Get creative and challenge the status quo
    Do your tests have to complete and pass before you deploy to PROD?
    Can you roll PROD back/forward if tests break, after it’s deployed already?
    Can you automate this conditional “unwinding”?
    I don’t know about you, but my team’s tests don’t break very often and when they do it’s usually new work, and when that happens it’s only a small part of the new work thanks to #smallbatches. The point is - you can do some very clever and seemingly scary things to get stuff shipped faster while still managing risk. Go think about it.
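The parallel-tests idea above can be sketched in a few lines. This is not any real test runner’s API, just a hypothetical shape: suites are sharded round-robin across workers, each shard runs sequentially, and the shards run concurrently.

```javascript
// Hypothetical sketch: shard a slow test suite across parallel workers.
// A real runner would spawn processes or agents; here each "suite" is
// just an async function returning its passed-test count.
async function runInParallel(suites, shards = 4) {
  const buckets = Array.from({ length: shards }, () => []);
  suites.forEach((suite, i) => buckets[i % shards].push(suite)); // round-robin

  // Each shard runs its suites sequentially; shards run concurrently.
  const results = await Promise.all(
    buckets.map(async (bucket) => {
      let passed = 0;
      for (const suite of bucket) passed += await suite();
      return passed;
    })
  );
  return results.reduce((a, b) => a + b, 0); // total tests passed
}
```

With enough cores and fast IO (the hardware point above), wall-clock time approaches the slowest shard rather than the sum of all suites.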

Simple Architecture

Simple is like Small. They’re somewhat interchangeable. Simple is low risk and low ongoing effort, but not necessarily low cost, especially to set up. Why? Because saying “no” is hard.

If you can avoid an extra package or dependency, then over time, as versions change and you would have used that dependency more (because it’s there), you’re saving hours and hours of work and potential pain.

If you can get away with a MPA or MVC UI for now instead of the latest React/Angular/SPA/#moarJS chaos, then you’re winning in my book. There are absolutely genuine reasons to use SPA frameworks and adding another js package, but most of us don’t make said dependencies justify their membership to our day to day lives. Go back to the How-To: To add or not to add steps above.

Have fewer deployment environments. Integrating in a pre-PROD or STAGING environment is step one - you need to see all the things working together before you do it in front of end users. The only other remaining step is then giving it to users. Remember that designing both STAGING and PRODUCTION to mirror each other as much as possible serves to further reduce risk. Use CCA tools and Blue/Green techniques to deploy consistently and with confidence. If you’re working in the cloud, often the only thing you might have different between STAGING and PROD are the auto-scaling settings: STAGING might start with two nodes and PROD with 10, for example. This keeps costs low but ensures that an integration or environment-related issue can likely be discovered before PROD.

Become the “wall” that team mates must hurdle before new “shiny” things get added to the mass of your product. Use pragmatic decision making skills and encourage team discussions to use these principles all the time.

Your agility as a team and product is inversely proportional to the mass of your product and process. If you value agility and adapting quickly then you’ll ruthlessly oppose unnecessary mass. Keeping things simple ensures less mass.

So what do you get in return?

Of our three agile cycles, the most time and effort is spent in this Building step, from within the story, inside the current iteration. It’s important then that we optimise for building the real value rather than extraneous tasks that could be dropped or automated. If you can pull some or all of these things off, you stand to gain:

  • Fast feedback, mid-story
    • Minimise wasted dev effort by keeping “on track” with PO
    • Low risk of story Rejection
    • Less communication waste
  • Quick and granular identification of issues
    • Thanks to low risk integration
    • Because it happens frequently, quickly and in small, easy to manage pieces
  • Shipped value, very fast and very frequently
  • Highly engaged and excited POs, who are actively involved with daily progress and input
  • Reliable automation: it’s exercised many times a day
  • Responsive remediation flexibility
    • Lots of options: roll forward, back, toggle code on/off
  • Generally a pleasant development experience
  • Generally an awesome PO experience
  • Oh, and very few scary merges (git pull --rebase FTW!).


Clicking the Finish button signals the completion of the dev/code work most of the time. But you can adapt this state to represent whatever you need to in your context.
click finish

In some teams, we had the internal tester take ownership after Finish. They would conduct their exploratory and manual testing steps and then, if happy, Deliver it to the PO.

More commonly however, the action Finish or the state Finished in Tracker, simply denotes that the developer(s) working on that story can now work on something else. A story will often remain Finished but not yet Delivered because a pull request (PR) still needs to be merged by another team member, which is best practice.

Sometimes a story becomes blocked in a way that releases the developer to pick up a new story but the old one can’t be Finished. We usually just add [BLOCKED] or [WAITING] to the beginning of the story title and add a comment with more details (example in Done column, see Estimating a Story). We don’t set the state to Finished, and then back to Started again. Kanban encourages the minimisation of Work In Progress (WIP), and so do we. Just keep in mind that Tracker should surface all of this context too. If I can see 4 stories assigned to one person in progress without some kind of [BLOCKED] or [WAITING] prefix, then most likely that person needs support in engaging with Tracker - she’s not keeping it up to date. If Tracker is up to date, there could be a WIP problem.


pass the ball to the PO

Sometimes it doesn’t make sense to hide something behind a Feature Toggle - if it’s unnecessary work, don’t do it. In the case above I know my PO is responsive and he will see this comment in the next couple of hours. I’m also 95% sure he’ll Accept the work because he was at my desk earlier when I showed him most of it and he was pretty happy (mid-story feedback doesn’t always have to use the Tracker comments). So I’m delivering this into the STAGING environment knowing that if a bug fix needs to go through, this will end up in PROD, because my PR is merged into master. More than likely, the PO will see this before the next PROD deployment, Accept the story and then I’ll make the next PROD deploy to set it live. If he Rejected the work because I did something really bad, then it’s not a big deal to wrap up the changes in the Feature Toggle (conditionally hide it) after the fact and re-deploy. Optimise for the happy paths and be prepared for the others - this minimises waste.

After I click the orange Deliver button, I’m signalling that all the work is complete, merged into master, built, passing tests and deployed to either STAGING or PROD. The PO is now welcome to exercise the story according to the this is done when criteria and then either Accept or Reject the story.

Generally we don’t Deliver a story without adding a comment for the PO using the @tagging syntax (this sends them a notification). In a highly engaged team you might not need this as the PO is eager to test and hit green Accept buttons all day long! After all, this is their magic product forming around them.

Lastly, a tip on “hand holding”. You want the PO to click that green Accept button as much as they do: you get to score points, and they get their awesome product. So try to word your final Deliver comment or conversation in a way where you are leading them to their next action - clicking the Accept:

  • Try to forecast questions they might have and answer them in advance.
  • Gently remind them that this isn’t the whole block of work, just one valuable part. Point them back to the AC/this is done when so they can see what they need to validate.
  • If in exercising the AC they discover that they want to change their mind, then encourage them to create a new story - additional effort needs to be tracked and this work is clearly defined. If what they have will get them 80% of the way there, then encourage them to Accept it according to the agreed AC and create a variation by way of a new story. The odds of it being “completely off” are slim given our mid-story feedback.

Acceptance (sign off) and Rejections

click that Accept button Mr. PO

Once a story is Delivered, the PO has the opportunity to test the delivered behaviour in PROD/STAGING against the this is done when criteria. Of course the PO can Reject the work, but we’ve already shown how this is unlikely thanks to the highly collaborative Build Cycle. If we do receive a rejection, then a comment should be added by the PO and the story can be restarted.

The ideal and most frequent outcome is Acceptance. This moves the story into the Done column. Tracker will also show Accepted stories above the Current Iteration section as shown in the below image:
turning green means Done
You will also notice that Tracker is now recording those 3 points as done and counting them towards the total for the Current Iteration (circled in red). Tracker will try and forecast how much work, in points, will get completed each iteration. The 8 points circled above is the forecast for the Current Iteration, based on the current Velocity/capacity. Your team may complete more or less than this predicted quantity.

We have now completed a full story cycle. This is a great feeling the whole team can share. The smaller you can make stories, the more frequently the team can experience this forward momentum. Remember, this is not a vanity metric. Those 3 points are real value, validated by the PO/customer, shipped to PRODUCTION as working, completely integrated code. Boom!

Rinse and Repeat

Endeavour to Start stories from the top of the Backlog. As always, there are cases where someone might start something that’s not absolutely the top item, but try to avoid this as much as possible. The team is responsible for giving the PO everything they can so that the PO can make informed prioritisation decisions and maintain the order of the Backlog continually. Encourage your PO to check the next week or two of Backlog items and verify they’re in the right order. If you have these estimated, then the PO can usually make a good decision, and drag+drop something to a different position if needed.

Try to keep enough stories estimated to “feed the team”. This is obviously more work if your team is bigger. I generally suggest that 2-3 weeks of estimated work is enough. It’s this JIT approach which assumes that the near future is fairly well known and supports the PO with re-ordering activities. It also prevents wasted estimation for the odd story which gets moved out of the Backlog to the Ice Box (like a backlog “later list” of things we may do, but aren’t planning to do) or deleted.
burnup showing JIT estimation
The above image shows a Burn Up chart which compares the total points of scope over time to the points that have been Accepted. The light-blue delta is how much estimated head-room we maintain. Our Velocity over this period was about 7 points a week so you can see we were averaging about 2 weeks of work ready to go. If you do find your team with someone who wants to start something new and there’s nothing estimated, then just grab a colleague or two and estimate it on the spot.

Does it matter if Story A with the highest position in the Backlog is Started, then Story B which comes next, is Started and then Accepted first? No. Stories are supposed to be no bigger than an 8 point estimate which should be <= one Iteration’s duration. Worst case, the PO takes delivery of Story A one iteration later than expected. These are, after all, still just estimates. It’s good practice, as a self-organising team member, to ask if I can help out when I see a team mate taking a particularly large story across many days. Maybe by pairing up, the two of us can get that big 8 pointer done quicker.

Hopefully you can now see that by combining:

  • Small stories with few dependencies
  • Parallelisation of story delivery
  • Asynchronous ordering/prioritisation by PO
  • Asynchronous build/deliver/acceptance

We can get to a place where we enjoy:

  • A highly scalable/efficient team with little capacity waste and few Single Points Of Failure.
  • Very accurate delivery forecasting (based on real work), with dates as far as many months away landing within +/- 1 week of actuals.
  • Stakeholders/customers in full control of what they take delivery of and when.
  • Very (did I mention very?) happy customers/end users.

Questions and Actions

  1. Run an experiment for yourself: try to say no to everything and everyone for a week (for things that don’t get you immediately fired). You’ll have more time, get more work done and hopefully have avoided some unnecessary complexity. Train yourself to value “simple” and reinforce that value by saying “no”.
  2. You should now have enough context on the cycles and the details of Tracker to begin entering stories and introducing it to your team. If you’d like the signup discount shown below, please request it by sending me a DM on twitter.
    Tracker $90 off code Full disclosure: I don’t receive affiliate credits or anything in return from Tracker for offering you these discount codes, except for anonymous analytics data and happy readers. Please thank Tracker on twitter using the hashtag #agileforteams. Start thinking about what area of your project or product could be small enough and complete enough, as to make a good launchpad for this new process and tooling.

Part 4: Points on the Scoreboard

I’ve mentioned Velocity a few times now alongside this idea of “team capacity” or rate of delivery. That’s exactly what Velocity is. It’s simply a moving average of the total points delivered over the last 3 iterations.

a Velocity chart

Now in Tracker, you can tweak the finer points of how this works, but in short, this is the one metric that matters.

You score points in the current iteration by having stories Accepted. As you move from Iteration to Iteration, those per-iteration totals start to form a trend about your approximate rate of delivery. The moving average is a simple way to flatten this out over a long enough sampling of time to ignore weekends, etc, but a short enough window to show changes without too much delay. You can pull up the above Velocity chart in Tracker manually, but I’d encourage you not to obsess over it. Tracker knows what to do with it intuitively.
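The calculation itself is tiny. This sketch just restates the definition above: average the points Accepted over the last three iterations (the function and window parameter names are mine, not Tracker’s).

```javascript
// Velocity as defined above: a moving average of points Accepted over
// the last three iterations. Names here are illustrative, not Tracker's.
function velocity(pointsPerIteration, window = 3) {
  const recent = pointsPerIteration.slice(-window); // last N iterations only
  if (recent.length === 0) return 0;                // no history yet
  const total = recent.reduce((a, b) => a + b, 0);
  return total / recent.length;
}

// e.g. velocity([5, 9, 7, 8, 6]) averages the last three: (7 + 8 + 6) / 3 = 7
```

Note how the window does the work: an unusually good or bad iteration moves the average a little, then ages out, which is exactly the "show changes without too much delay" behaviour described above.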

When is it going to happen?

POs want to know, managers want to know, PMs, stakeholders and developers want to know: When is story x going to happen?
Tracker uses Velocity for planning each Iteration

The above image shows Tracker taking a Velocity of 7 points and splitting up the Backlog into the (1 week) Iterations and showing the “planned delivery”, in points, for each iteration (red rectangle). Why are they not all 7? Because not every grouping of stories fits perfectly into your iteration’s predicted capacity. Trust Tracker - it knows what it’s doing. It also magically adjusts Velocity and planning if you tell it (in advance or in retrospect) that some members have been off sick/away for a given week. It also lets you cleverly adjust the Velocity in this mysterious “warping of time” fashion so you can see how much your team could do, and by when, had the Velocity been greater or smaller, for example.
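You can get a feel for why the planned totals aren’t all 7 with a rough sketch. This is not Tracker’s actual algorithm, just an illustration of the shape: walk the ordered Backlog, filling each iteration up to the Velocity; a story that doesn’t fit rolls into the next one.

```javascript
// Rough, hypothetical sketch of velocity-based iteration planning
// (Tracker's real algorithm is its own). Walk the ordered Backlog,
// filling each iteration up to the Velocity; a story that doesn't
// fit starts the next iteration.
function planIterations(backlogPoints, velocity) {
  const iterations = [[]];
  let current = 0;
  for (const points of backlogPoints) {
    if (current + points > velocity && current > 0) {
      iterations.push([]); // story doesn't fit; roll to next iteration
      current = 0;
    }
    iterations[iterations.length - 1].push(points);
    current += points;
  }
  return iterations.map((it) => it.reduce((a, b) => a + b, 0));
}

// With Velocity 7: planIterations([3, 3, 2, 5, 8, 1], 7) → [6, 7, 8, 1]
```

Notice in the example how stories straddle the boundaries: one iteration plans only 6 points because the next story wouldn’t fit, and a lone 8-pointer still occupies a whole iteration by itself.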

Why is Velocity so important?

Tracker won’t work without it. The #agileforteams process won’t work without it. So unless you have some other more accurate and easy to apply technique for predicting delivery for software projects (which I haven’t yet seen in 10 years, by the way), you’re back to “finger in the air” estimating, missed deadlines, grumpy managers, depressed developers, high churn rates, micromanagement as a result of lack of delivery and all those other horrible things you’ve probably all seen at one time or another.

A stable Velocity is core to a “well oiled” Agile team. It becomes stable by keeping the inputs consistent. If you start with a new team, then you should only start trusting the Velocity after ~3 iterations of work. Also, if your team changes substantially in either members, or their regular weekly commitment to the project, then you will require another ~3 iterations of lead time before the Velocity is useful again. Yes, this does mean that the #agileforteams process requires a small “leap of faith”. But I’ll let the #smallbatches and getting working software very quickly into the hands of users do the talking for me. No one complains about receiving working software!

An Agile process needs to be sustainable

You’ve probably noticed by now that there is no notion of manual iteration planning (Sprint Planning in Scrum). This is because it’s just not necessary and, in my opinion, a very dangerous activity. I’ve worked on teams where the Planning Meeting was fairly depressing most weeks. The reason is that most teams want to “push themselves” and plan slightly more than their Velocity. This most often sets an unattainable target and, unsurprisingly, the target is missed more weeks than it’s made. This creates a morale black hole and one I will avoid for the rest of my career if I have any say in it. Teams who delivered on their planned number of points were substantially more chipper and up-beat. You absolutely need to care about the morale of your team if you’re going to get their best work and a sustainable Velocity.

So let’s assume every member in your team is working as hard as they can all the time and be done with it. If this is the case, and I highly encourage you to assume that it is, then there is no need to plan iterations. Tracker takes this view also - we just take the next story off the top of the Backlog and Start it the moment the next developer has time. In this way, and using a moving average for Velocity which hides stories that roll over iteration boundaries, we have a clear and actionable delivery metric. Over time, if the team is consistent and respected, the Velocity will “flatten out” and this Velocity or capacity then becomes very powerful, enabling longer term planning. It’s not unreasonable then to take a large batch of stories and estimate them over an afternoon of pizza and snacks, and then use a 3-4 month timeline as a very reliable estimate. I’ve personally seen stable Velocities and a consistent team produce many months of estimates and deliver them with +/- 1 week. It’s almost a shame to call this “estimating” when the data can be so reliable.

Thoughts on Bugs and Chores

I’m not going to detail every feature of the Pivotal Tracker tool because, like I’ve said before, the process is more important than the tool. #agileforteams is about the process. However, Tracker does have two other major items in a given Backlog that are worth covering as they get used a lot: Bugs and Chores.


Every software application will have bugs, so you should track them alongside your feature work. In Tracker we interleave Bugs with stories/Features in the Backlog. Bugs however don’t get assigned points as they are a sunk cost. Their estimate of value (points) was part of the story that created the bug in the first place. This means that if we did assign Bugs points, we’d be double-counting that effort, which throws the Velocity off.

Just as any member of the team can create a story, any member can also create a bug. Defining a good bug is probably a discipline unto itself so I won’t go crazy on that. Apply pragmatism: What’s the minimum a Bug needs in order to be understood, in terms of this is resolved when, and what’s the impact of said bug if left unresolved?

Signup form doesn’t hide optional fields on IE6

Impact: Only IE6 users won’t have a shorter form. All fields will be present and visible. It just makes for a slightly poor UX.

this is resolved when: * the optional fields section loads as hidden on a new/empty signup form load

The example above is usually enough for a PO to prioritise the bug and the developer to get it to Finished. A PO or the person who raised the Bug should Accept/Reject it.

Lastly, try not to assign a dedicated team member to Bugs at the expense of Feature work. I’ve seen this create specialisations and damage morale. Ideally, these patterns and practices work best with cross-functional teams and self-organising players. If you must dedicate someone, ensure that it’s for a limited season and, if it happens regularly, rotate the responsibility around the team.


A Chore is something that needs to be done in the course of the product/project that makes sense to track with all of the other work because it consumes capacity of one or more team members. If you don’t include these things somewhere, then if this kind of work begins to make a measurable difference to Velocity, you won’t be able to track down the root cause.
So the first tip is: Better to create a Chore than not.

Then after you’ve created the Chore, think about what that work adds to the team/product and whether it can be expressed as “value delivering” and therefore in the form of a story. Often infrastructure work (setting up VMs, build pipelines, automation, etc) seems like an obvious choice for Chores, but these tasks can consume considerable team capacity. Instead, rephrase the work into stories - after all, most of them will deliver value to the product. It can be difficult equating these types of work to the regular development work when you open up the Done column, but nevertheless try. A corpus will form over time, making this base of reference for estimating easier. Where something can’t be phrased as a story, or it’s taking too long to do so, just leave it as a Chore. If over time you find too many Chores and suspect a meaningful loss to Velocity, then bring it up at the Retro: inspect and adapt.

Just like the Acceptance of Bugs isn’t as strict as Features, stories that come out of Chores sometimes make sense to be Accepted by a non-PO. If the PO has the technical understanding and can exercise the AC then great: prefer to have the PO Accept it. But if not, then another technical team member could Accept it also. Just try to avoid the author of the work doing the Acceptance. It’s just like the author of a major Pull Request merging it - it’s not a good practice.

Part 5: How can I get this happening in my team today?

So I’ve given you a lot of knowledge and wisdom here but how do you actually get started on these things in your team?

You have to start small.

You can’t win until you Start and often you won’t Start unless you Start Small. #smallbatches creates an end goal within reach. It’s that attainable conclusion which gives one the drive to set out in the first place. I forced myself to put the first version of this playbook together in a week. As I type this line I’m somewhere in between day 4/5 and day 5/5 (yes, it’s late at night). I feel like about here I should be starting to dislike writing but the surprising thing is that I’m still driven to push through. Why? I think it’s because I know I’ll be done soon. I’ll be getting this thing online and for sale. I’ll be getting it in your hands and hopefully helping some of you succeed better. I’m really looking forward to that! If I gave myself 4 weeks off and set my goals 4 times more lofty, I can almost guarantee that I’d quit before finishing. Don’t leave failure to chance. Better the odds by starting small.

Start with the most important and critical cornerstone of an agile process - the Retro. Go back and re-read the above section The Retrospective. Then make a start on it. Discuss it over lunch with your team; or copy this playbook for your team and have them read it too. Then schedule the first meeting and make it launch with a bang - get those beers and maybe even a meal happening. Take your manager out to lunch and get her excited and passionate about it. She might then see the vision and help support you. Get creative or crazy if you need to: buy everyone a little seedling plant and hand-write a label “I want to grow, please come to the Retro!” or something else satisfactorily lame. Get it done!

Do your stakeholders need convincing?

Did you know you can actually measure the value of a process? I’ve used a pulse survey tool called 6q before with small teams. It lets you survey team members at a weekly interval with a very low barrier to entry: 6 questions. Each question is a one-liner and the answer is a click of an emoticon image: sad face, meh or a happy face. It takes two minutes to complete, and if you can get this happening before introducing any aspects of this process then your data is even better - you have a baseline! I could go into this more, but the point is that qualitative data like this can be just as useful for bringing over the line those management types who need the “hard numbers”, so to speak. You can measure team morale, engagement, happiness and perceived productivity before and after process changes, and the data will speak for itself.
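To make the before-and-after comparison concrete, here is a minimal sketch of how you might score that kind of emoticon survey data yourself. The 6q tool is real, but the 0-2 scoring scheme, function names and sample numbers below are my own illustrative assumptions, not anything 6q prescribes:

```python
# Hypothetical sketch: comparing pulse-survey morale before and after a
# process change. Scoring scheme and sample data are illustrative only.
from statistics import mean

# Map each emoticon answer to a numeric score (assumed scale, not 6q's).
SCORES = {"sad": 0, "meh": 1, "happy": 2}

def weekly_score(answers):
    """Average score for one week's survey (a list of emoticon answers)."""
    return mean(SCORES[a] for a in answers)

def trend(baseline_weeks, post_change_weeks):
    """Change in average morale between two periods of weekly surveys."""
    before = mean(weekly_score(week) for week in baseline_weeks)
    after = mean(weekly_score(week) for week in post_change_weeks)
    return after - before

# Illustrative data: each inner list is one week's collected answers.
baseline = [
    ["meh", "sad", "meh", "happy", "meh", "sad"],
    ["meh", "meh", "sad", "meh", "happy", "meh"],
]
after_retros = [
    ["happy", "meh", "happy", "happy", "meh", "happy"],
    ["happy", "happy", "meh", "happy", "meh", "meh"],
]

delta = trend(baseline, after_retros)
print(f"Morale shift: {delta:+.2f} (on a 0-2 scale)")
```

Even a crude summary like this gives you a single number to put in front of management: morale went up (or down) by so much after the change, relative to the baseline you collected first.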

What new part of your project is Low Risk?

Taking a small vertical might be straightforward - but which small vertical? Try to find one that most agree is low risk, and therefore low impact if it doesn’t go to plan. This makes things easier if, for example, you try a hybrid approach: one where you continue your existing process and tooling, but divide off this new low-risk work and try #agileforteams on it. Small is low-risk by default because of its reduced scope. But if you can find something that your stakeholders also see as low-risk, then you’ve got even better odds of getting some traction.

Just bite the bullet and do a Hard Cutover.

Choose a new phase and do a Hard Cutover - it’s just an experiment, no one will die, and you can always go back. You can even time-box it, e.g. “we will try this for the next 10 weeks of work and then switch back”. You’re already doing Retros, so that’s a good opportunity to review things. If you haven’t yet launched your first Retro, then the end of this time-boxed experiment is a great justification for your pilot Retro. If the outcome of the Retro is that the new process didn’t help, then revert. “No big deal,” you will say, knowing that the odds are it will be something the team wants to continue with. Just make sure all of the stakeholders and the team are present - you want management to acknowledge the different types of value a process like this can bring, so don’t exclude the folks who are able to share those benefits.

Final thoughts, housekeeping and parting inspiration

Thank you for having the courage to learn something new and try something different. It’s not easy and certainly shows the qualities of a good leader. I trust you have enough knowledge, inspiration and examples to go and apply this process in your team. I’d like to know what you think and potentially how this playbook could be improved. So feel free to tweet me or ping me on LinkedIn. Remember, any feedback I do integrate will be available to you forever.

I’d also like to invite you to share/copy this content within your team (only), and to encourage your team and others who you think might be interested to buy it for themselves. Each purchase and piece of feedback inspires me to continue working on the content and make it even more valuable to you. It’s also a useful reference, as many of these tips, how-tos and pieces of wisdom are “the details”: the small bits that you sometimes forget but that are very powerful.

Special thanks to Jason B, Andrew, Justin Tan, Jason Fowler and Matt Willis for reading drafts and contributing to the clarity and ultimate success of this playbook.

Apply what you’ve learnt as soon as you can! Every day you don’t apply this material increases the likelihood that you never will. Don’t let this time investment of yours go to waste. Be a great leader.

Cover image courtesy of SpaceX and

© 2018 Matt Kocaj,