Chapter 3: Agile Architecture Practices

Architectural Inception

Agile Inception is a widely used technique for creating a shared vision of a product among the Development Team and the rest of the stakeholders. First documented by Jonathan Rasmusson in “The Agile Samurai” (Pragmatic Bookshelf, 2010), it involves 10 activities that can be run in a single workshop, sometimes selecting only some of them or complementing them with others, depending on the project or product at hand.

I have facilitated dozens of these workshops over the years, for organizations ranging from small startups to corporations in different industries, and they have always been key to creating a collaborative context among all involved parties.

In short, the 10 activities I run are:

1. Why are we here?

A check-in round with a twist, where every participant states not only their name and role, but also what they understand their involvement with the product to be.

2. Elevator Pitch

The group designs a very short phrase explaining what the product is, who its users are, its main features, its competitive advantage, and other relevant details. As an alternative I may use a Product Vision Board (PVB) instead, depending on the audience.

3. Vision Box

We use a physical box (cereal-type) to design a fictitious package for the product, thinking about it as a retail product (even if it’s not).

4. In/Out/Maybe

The group creates a simple chart of the very high-level scope of the product, stating what it should include, deciding what it shouldn’t, and leaving some features in a third group to be reconsidered later.

5. Community

A visual diagram of the different groups that need to be involved in the product development and their different levels of participation.

6. The Solution

This is the activity where we talk about the high-level technical decisions we can define up-front to build the product. As some architectural discussion takes place here, I will expand on it later in this section.

7. Fears

We chart all the important risks related to product development and delivery from all the perspectives represented in the workshop, such as Technology, Market, Cost, Team, Legal issues, Security, and others.

8. Size

Here the group discusses the perceived size of the project from several angles: team size, including whether there is a need for several teams; and estimated duration or deadline, sometimes including an expected release plan.

9. Trade-Off

This activity involves prioritizing the main non-functional concerns of the project, debating which ones will take precedence in situations where a design decision targeting one of them conflicts with others. This is the other activity related to architecture strategy, so I will expand on it later in this section, too.

10. How much?

At this point the group makes a shopping list of all the important expected costs, including salaries and fees, hardware and software, services, facilities, and anything else that can have an impact on the total cost, and perhaps when or how each cost will hit (initial investment, weekly or monthly, tied to releases, etc.). We don’t typically try to include the actual figures, but to have a clear list from which to produce an actual budget later.

As you can expect, there is a lot more involved in this workshop, but my goal in this section is to provide you with a general idea and go slightly deeper into the two highlighted activities:

The Solution

While the technical participants will lead this activity, its main purpose is for them to clearly explain to the rest of the group several things that everyone should understand and agree on as soon as possible. For example:

  • Types of clients for the application (web, mobile, desktop, APIs, special devices)
  • Supported platforms (browsers, desktop or mobile OS, form factors)
  • Integration (with services, corporate applications, databases, sensors or networks)
  • Dependencies (with other products, projects, technologies)
  • Expected behavior for particular scenarios (should the mobile apps support offline usage? are there any distributed transactions? are there extreme load situations? any critical time-sensitive operations?)

What we try to achieve here is a shared understanding of implementation/architectural details and how they will support the main business requirements. As the group sketches diagrams and talks about the implications of different parts, a lot of assumptions from the different participant areas come to light, and new ideas feed back into this early draft of the architecture.

Trade-Off

The starting point for this activity is to agree on a set of 5 to 8 attributes representing the main non-functional concerns about the product. Most of these are the typical Quality Attributes we use in Software Architecture, but I often let the group add some they find important. For example, I have seen these in real-life Inceptions I facilitated: Total Cost of Ownership, Time-to-Market, and even Team Life Quality4.

As I like to run this, once we have agreed on the list, I have the group write each attribute on a sheet of paper (big block letters filling the whole page) and ask one person per attribute to hold the sheet, showing the name to the others. Then they stand in line, one next to the other, like in a police suspect lineup (which is a fun moment), and we discuss the relative positions from left to right (after choosing which side means MOST important). The group keeps discussing and swapping the people in the line to reflect the priorities. As a facilitator I help them come up with potential conflict scenarios between most attribute pairs, so everyone understands they can’t have their cake and eat it, too.

Once there is agreement that the line represents their best bet, I usually ask the people in the line to turn slightly so I can snap a picture where faces and signs are clearly seen, in order. I have found over time that this picture, showing what I call the “Quality Strategy” for the product, with faces the actual team members building the product (and most stakeholders) can recognize, is far more powerful for aligning decisions than any standard document.
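The ordering that comes out of this activity can also be captured in a form the team can consult later. As a minimal sketch (the attribute names and helper function are illustrative, not part of any formal method), the “Quality Strategy” becomes an ordered list, and a conflict between two attributes is resolved by their relative positions:

```python
# Hypothetical sketch: the outcome of the Trade-Off activity captured as
# an ordered list, most important first. Attribute names are invented.
QUALITY_STRATEGY = [
    "Time-to-Market",
    "Reliability",
    "Security",
    "Performance",
    "Total Cost of Ownership",
]

def takes_precedence(a: str, b: str) -> str:
    """Return whichever attribute ranks higher in the agreed strategy."""
    return a if QUALITY_STRATEGY.index(a) < QUALITY_STRATEGY.index(b) else b

# When a design decision pits Security against Performance, the agreed
# ordering settles it without reopening the debate.
print(takes_precedence("Security", "Performance"))  # Security
```

The picture of the lineup carries the emotional weight; a small artifact like this just keeps the agreed order handy when decisions are made far from the workshop.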

Slicing

The concept of slicing or breaking down User Stories (although the general idea can be applied to any other unit of work you use) into smaller ones is one of the most important aspects of Agile Software Development.

As a community, we agree that the smaller (but valuable) a User Story is, the better, for several reasons:

  • More understandable
  • Simpler to prioritize (or re-prioritize)
  • Easier to estimate
  • Easier to decompose in implementation tasks
  • Fewer acceptance criteria; easier to test

Above all, when we break down a User Story into several smaller ones, we have more chances to find less valuable items that can be deferred for later, and eventually discarded if they prove not valuable enough as development goes on. This is exactly the Agile Principle of Simplicity, defined as “the art of maximizing the amount of work not done”.

Functional Splitting Patterns

One of my favorite resources to help teams slice (or split) User Stories is Richard Lawrence’s guide “Patterns for Splitting User Stories”5.

The guide proposes a three-step approach:

  1. Check the User Story quality and fix other problems before splitting
  2. Try a series of patterns to find ways to split it
  3. Evaluate the resulting User Stories to be sure everything is fine

What’s most important for us now are the patterns themselves. I recommend you read the original post and even discover additional patterns based on your experience, but let’s list the 9 original patterns before extending them.

  1. Workflow Steps
  2. Business Rule Variation
  3. Major Effort
  4. Simple / Complex
  5. Variations in Data
  6. Data Entry Methods
  7. Defer Performance
  8. Operations
  9. Break Out a Spike

Even without getting into details, you can see the focus is on breaking functionality down into more granular parts: taking the most valuable, and ideally thinnest, slice of each need; providing the earliest valuable implementation; trying to prove or discard the hypothesis behind each User Story; and leveraging the learning from this early delivery to better select what comes next, the best approach to implement it, and even what’s not needed at all.

Architectural Splitting Patterns

Once you have broken down your User Stories into smaller ones, you can still apply slicing strategies based on architectural concerns.

You may have smelled some architectural patterns in the list above. For example, Variations in Data can also be read as “variations in protocols, message formats, encodings, and other messaging attributes”. Also, Defer Performance, which solves the functional aspects of a User Story first and leaves a potential optimization as a later Story, can be applied to several other Quality Attributes.

You can then apply further splitting patterns when breaking each User Story down into its Tasks. What’s important is to reflect this restriction in the resulting User Stories, and to keep being explicit about the remaining value.

Some of the extended splitting patterns I have found useful from experience are:

User Experience level

This approach is actually a huge topic in itself, but it basically focuses on providing the simplest UX approach first, iteratively testing it with real users, and refining it for as long as needed, depending on the product.

The UX field is so large and important right now that I prefer to point you elsewhere, probably starting with the books “Lean UX”, by Jeff Gothelf and Josh Seiden, and “User Story Mapping”, by Jeff Patton.

Reliability Constraints

User Stories related to reliability (when properly analyzed) tend to have strong acceptance criteria about accurate data input and data transformations, error-free state management, and non-corrupting recovery from detected failure conditions.

These are not typically the User Stories you get at first; they appear through Backlog Refinement and usually come from observed behavior after the application is already in use (or under specific testing conditions).

The different constraints involved normally have very different business value and implementation costs, so separating each constraint based on its estimated Return on Investment can be a smart move, and just doing this analysis can provide many insights for both the involved stakeholders and the development team.
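The analysis above can be as simple as listing each constraint with its estimated value and cost and ranking by the ratio. A minimal sketch, with entirely invented constraints and numbers:

```python
# Hypothetical sketch: ranking reliability constraints by a rough ROI proxy
# (estimated value divided by estimated cost). All data is illustrative.
constraints = [
    # (constraint, estimated business value, estimated implementation cost)
    ("Validate all data inputs at the API boundary", 8, 2),
    ("Guarantee idempotent retry on payment submission", 9, 5),
    ("Auto-recover corrupted session state", 4, 6),
]

# Highest value-per-cost first: each entry can become its own User Story,
# and the bottom of the list is a natural candidate for deferral.
ranked = sorted(constraints, key=lambda c: c[1] / c[2], reverse=True)

for name, value, cost in ranked:
    print(f"{name}: ROI ~ {value / cost:.1f}")
```

The numbers matter far less than the conversation they force: stakeholders argue the value column, developers argue the cost column, and the ordering falls out.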

Manageability / Security levels

Configuration aspects can be split over time, deferring complexity from (hopefully short-term) hard-coded values to config files/resources, admin dashboards, cold or hot reconfiguration mechanics, and more.
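Each of those stages can ship as its own small Story. A minimal sketch of what the staging looks like in code, assuming an invented setting name, file path, and environment-variable convention:

```python
# Hypothetical sketch of staged configurability. Stage 1 is a hard-coded
# default; stage 2 honors a config file; stage 3 allows a runtime override
# via an environment variable. Names and paths are illustrative.
import json
import os

DEFAULTS = {"page_size": 20}  # stage 1: hard-coded value

def get_setting(name: str, config_path: str = "app_config.json") -> int:
    # Stage 3: environment variable override, if set.
    env_value = os.environ.get(f"APP_{name.upper()}")
    if env_value is not None:
        return int(env_value)
    # Stage 2: config file, if one has been deployed.
    if os.path.exists(config_path):
        with open(config_path) as f:
            file_config = json.load(f)
        if name in file_config:
            return file_config[name]
    # Stage 1: fall back to the hard-coded default.
    return DEFAULTS[name]
```

Each later stage only adds a lookup in front of the previous one, so the earlier Stories never need to be rewritten, just extended.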

In a similar way, Security can be split along different axes like internal/external, authentication/authorization, user-level, API-level, component-level, code-access-level, resource-level (like DB and service accounts), and much more. Security is always complex and sometimes very restrictive, so the ability to spread various security concerns over a long period tends to be a good idea. In very sensitive scenarios, each User Story can have very specific Security Acceptance Criteria, or can be checked against a specific set of Security concerns gradually added to the Definition of Done.

Defer Performance / Availability / Efficiency / Scalability

As important as these factors can be in different scenarios, they are usually pretty difficult to plan ahead without over-engineering. As we will see in other sections, a recurring pattern is to agree on some baseline value as part of the User Story Acceptance Criteria and build an automated test to verify it. Of course, such tests are not always trivial, and that encourages good conversations with stakeholders about the real importance of these attributes in a given context.
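Such a baseline test can start very small. A minimal sketch, where the operation, the 200 ms threshold, and the function names are all invented for illustration:

```python
# Hypothetical sketch: an automated check for a performance baseline agreed
# in a Story's Acceptance Criteria ("search must respond in under 200 ms").
# The operation under test and the threshold are illustrative.
import time

BASELINE_SECONDS = 0.2  # the value agreed with stakeholders

def search(query: str) -> list:
    # Placeholder for the real operation under test.
    return [item for item in ["alpha", "beta", "gamma"] if query in item]

def test_search_meets_baseline():
    start = time.perf_counter()
    search("a")
    elapsed = time.perf_counter() - start
    assert elapsed < BASELINE_SECONDS, f"search took {elapsed:.3f}s"

test_search_meets_baseline()
```

Even a crude check like this makes the baseline executable instead of aspirational, and it can be tightened (warm-up runs, percentiles, realistic data) in later Stories as the attribute proves its importance.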

It should also be noted that building the first few testing scenarios for these Quality Attributes takes more effort, but the required environments, tools, and scripts tend to stabilize over time, and the further automation effort usually decreases.

Staged Portability / Interoperability / Internationalization

User Stories dealing with these attributes tend to have breadth goals like:

  • target platforms
  • systems, protocols, messaging, data sources
  • languages or locales to support

But each of these dimensions also has depth goals:

  • versions (or version ranges) within each platform
  • versions, encodings, message formats
  • static text, dynamic text, localized media, date and currency formats, calendars

In many cases there can be a complex matrix for each of these Quality Attributes that can be split over time. Staging these changes usually breaks the work down into both functional and technical implementation.
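For the internationalization case, that breadth × depth matrix can be made explicit so that each empty cell becomes a candidate Story. A minimal sketch, with invented locales and depth goals:

```python
# Hypothetical sketch: a breadth (locales) x depth (i18n features) support
# matrix, where each missing cell can be scheduled as its own Story.
# Locales and feature names are illustrative.
SUPPORT_MATRIX = {
    # locale -> depth goals already delivered
    "en-US": {"static_text", "dynamic_text", "date_formats", "currency"},
    "es-ES": {"static_text", "dynamic_text"},
    "ja-JP": {"static_text"},
}

ALL_DEPTH_GOALS = {"static_text", "dynamic_text", "date_formats", "currency"}

def remaining_work(locale: str) -> set:
    """Depth goals still pending for a locale (each a candidate Story)."""
    return ALL_DEPTH_GOALS - SUPPORT_MATRIX.get(locale, set())

print(sorted(remaining_work("es-ES")))  # ['currency', 'date_formats']
```

The same shape works for portability (platforms × versions) or interoperability (systems × protocols/encodings): the matrix makes visible how much of each attribute is being deliberately deferred.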

Taking / Paying Technical Debt

I like to treat this as a separate pattern that can encompass any of the previous attributes; it uses the Technical Debt metaphor coined by Ward Cunningham6.

Following the initial idea, when the team takes on Technical Debt, it should be explicit and managed. This is very different from implementing “just what is needed”. If you write clean code without extensibility points you don’t need yet, that is not Debt at all: you will just have to spend a bit more time refactoring if (and when) the extensibility is needed later on. This kind of decision does not incur any interest.

Now, if the team is explicitly leaving a substandard or partial implementation of a component (one with known risks and issues), then they have to track it. My favorite way to do this is to think about the problem or risk it creates for the users, discuss it with the stakeholders, and then come up with a User Story expressing that need: not the technical detail, but the need to prevent the unwanted circumstance the current implementation leaves open.

Design and Validation Levels

Symmetry and Integral Consistency

Validation Techniques

Validation Technologies

Quality Stream Mapping

DevOps