The Case for Automated Testing in Agile Teams

The “Agile-ish” Team

One of the most common challenges an agile software team will experience is in maintaining both quality and velocity throughout a project’s development lifecycle. Imagine this scenario:

The “Code Monkeys” team just began its 6th Sprint. At the last retrospective, the team observed that there was not enough time to fully test their application and decided to set a ‘dev freeze’ date three days before the end of their 2-week iteration.

On the surface, this sounds like a solid plan (and a win for Agile?). The ‘Code Monkeys’ identified an issue with their development process, brainstormed a solution, and self-organized to remedy the problem. What more could we want?

Let’s fast-forward a few weeks:

The Code Monkeys just began its 12th Sprint. At the last retrospective, the team observed that production bugs seemed to be increasing. Some team members complained that three days was no longer enough time to completely test the application. The team decided to switch to three-week iterations and to dedicate the additional week to testing.

Once again, our team has self-organized a solution to a process issue … but this is not good! Imagine what the team might do on its 18th Sprint, when even a full week isn’t enough time to test its growing product. Will they add another week? Maybe they’ll hire another developer. These approaches only delay the problem; they don’t fix it.

The Code Monkeys have slipped into the classic agile trap of “mini-waterfall”. Development and testing activities are separated by a codified process—and as a software project grows over time, so does the effort required to test it. The result is an unsustainable cycle that will eventually collapse under its own weight.

By The Numbers

Do the “Code Monkeys” sound like an agile team you’ve seen before? Or perhaps even been on yourself? The trap of mini-waterfall is real. To understand how teams get there, let’s dissect the root problem using a little math.

Sprint Capacity

A team’s Sprint Capacity is a measure of work the team is able to contribute to a given iteration. For the sake of illustration, let us assume that a given development team spends this capacity on two primary types of activities:

  1. Development: Any work needed to deliver some value.
  2. Testing: Any work needed to ensure some value was delivered correctly.

A team’s total capacity for a sprint can then be represented as the sum of capacity spent on these two activities:

Equation 1

$$\text{Sprint Capacity}_{\text{Total}} = \text{Sprint Capacity}_{\text{Development}} + \text{Sprint Capacity}_{\text{Testing}}$$

Sprint Capacity is a constant, though a cross-functional team should be able to focus that capacity on either development or testing activities as needed.

Team Velocity

A team’s Velocity is a measure of actual value the team delivers for a given iteration. This can be represented as:

Equation 2

$$\text{Velocity} = \text{Sprint Capacity}_{\text{Development}} \times V_c$$

In this equation, $V_c$ is a coefficient that converts a team’s development capacity into a measure of delivered value (its numerical value isn’t really important, so don’t get hung up on it). The important thing this equation communicates is that a team’s velocity is directly proportional to its development capacity.

Velocity and Testing

If we do a little algebra, we can combine the two previous equations to get:

Equation 3

$$\text{Velocity} = \left(\text{Sprint Capacity}_{\text{Total}} - \text{Sprint Capacity}_{\text{Testing}}\right) \times V_c$$

This tells us what should be intuitively clear: as a team spends more of its capacity on testing, less remains for development, and velocity decreases.
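
To make Equation 3 concrete, here’s a minimal Python sketch (the capacity numbers are invented purely for illustration):

```python
def velocity(total_capacity: float, testing_capacity: float, v_c: float = 1.0) -> float:
    """Equation 3: whatever capacity isn't spent testing drives velocity."""
    return (total_capacity - testing_capacity) * v_c

# As testing consumes more of a fixed 10-point capacity, velocity falls linearly.
for testing in (2, 4, 6, 8, 10):
    print(f"testing={testing} -> velocity={velocity(10, testing)}")
```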

Regression Testing Over Time

As The Code Monkeys discovered, the effort required to fully test a software project continuously increases over time as new features and functionality are added. This looks something like:

Equation 4

$$\text{Full Regression Testing Effort} = \left(\sum \text{Previous Sprints' Testing Efforts}\right) + \text{Current Sprint Testing Effort}$$
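
In code, Equation 4 is just a running sum. A tiny sketch, using hypothetical per-sprint testing efforts:

```python
# Hypothetical effort (in points) to test each sprint's new functionality.
sprint_testing_efforts = [3, 2, 4, 3, 2, 5]

# Equation 4: a full regression re-tests every previous sprint plus the current one.
full_regression = sum(sprint_testing_efforts[:-1]) + sprint_testing_efforts[-1]
print(full_regression)  # 19 -- and it only grows as more sprints are added
```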

A Visual Analysis

If we assume that a team’s capacity is constant, the team is then left with two approaches to development. They can operate under a rule of either:

  1. Maintained Quality or
  2. Maintained Velocity

Option 1: Maintained Quality

If a team wants to maintain its quality standards by completing a full set of regression tests before closing out each Sprint, Equation 3 tells us this will result in an ever-decreasing velocity. Eventually, this will completely paralyze the team, as the effort required to complete a full regression pass exceeds the team’s total capacity.

Option 2: Maintained Velocity

A completely paralyzed team is not useful to anyone. Rather than allow that paralysis, teams will inevitably choose to maintain their velocity instead. With a constant velocity and sprint capacity, Equation 3 tells us that testing capacity will remain constant as well—but it won’t be enough to test everything! Over time, a constant testing capacity means that more and more features will go untested from sprint to sprint, eventually leading to degraded quality in the form of increased bugs and defects.
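
To see where each rule leads, here’s a hedged simulation with invented numbers: 10 points of total capacity per sprint, 2 points of new regression effort added each sprint, and (for Option 2) a fixed 4-point testing budget:

```python
CAPACITY = 10    # total sprint capacity in points (hypothetical)
NEW_TESTS = 2    # new regression effort added each sprint (hypothetical)

# Option 1: run the full, ever-growing regression suite every sprint.
print("Maintained Quality:")
regression = 0
for sprint in range(1, 8):
    regression += NEW_TESTS                   # Equation 4: effort accumulates
    velocity = max(CAPACITY - regression, 0)  # Equation 3: development gets the rest
    print(f"  Sprint {sprint}: regression effort={regression}, velocity={velocity}")

# Option 2: cap testing at a fixed budget and let coverage slip.
print("Maintained Velocity:")
TESTING_BUDGET = 4
untested = 0
for sprint in range(1, 8):
    regression = sprint * NEW_TESTS
    untested += max(regression - TESTING_BUDGET, 0)  # effort we never spend
    print(f"  Sprint {sprint}: untested backlog={untested}")
```

With these made-up numbers, Option 1 hits zero velocity by Sprint 5 (and by Sprint 6 the regression alone exceeds total capacity), while Option 2’s pile of untested work grows without bound.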

Changing the Rules with Test Automation

Frankly, I don’t find either of these approaches acceptable. Agile teams shouldn’t have to sacrifice quality to maintain velocity and, fortunately, they don’t have to!

The key is test automation.

Test automation lets us change the rules by reducing the effort needed to fully test previous sprints’ delivered value to virtually zero. This effectively changes Equation 4 to:

Equation 5

$$\text{Full Regression Testing Effort} = \text{Current Sprint Testing Effort}$$

With the cumulative element of the equation removed, the root cause of our original problem goes with it.

Of course, nothing is free, and test automation comes with a price. Namely, the effort needed to write automated tests can easily match (or even exceed) the effort of actual development. The result is a much lower (though sustainable) velocity that doesn’t sacrifice quality. I’ll take that any day!

Sustainable Pace vs. Immediate Velocity

Due to the high up-front cost of writing automated tests, it may take some time for a team to realize a return on their investment. Consider the following chart representing a team’s possible velocity over time:

At first glance, you may think that Sprint 5 is where the team would ‘break even’, but that isn’t quite right. When considering total value delivered over time, it isn’t until Sprint 10 that this team would start realizing a long-term velocity benefit from having automated tests.
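
Here’s a hypothetical sketch of that break-even reasoning in code (every number below is invented for illustration): without automation, per-sprint velocity starts high but decays; with automation, it starts lower but holds steady.

```python
from itertools import accumulate

# Hypothetical per-sprint velocities over 12 sprints.
without_automation = [10, 9, 8, 7, 6, 5, 5, 4, 3, 2, 2, 2]  # fast start, steady decay
with_automation = [6] * 12                                   # slower but sustainable

# Per-sprint velocity crosses around Sprint 5, but cumulative delivered value
# doesn't cross until later -- that's the real break-even point.
crossed = False
for sprint, (cum_without, cum_with) in enumerate(
        zip(accumulate(without_automation), accumulate(with_automation)), start=1):
    note = ""
    if cum_with >= cum_without and not crossed:
        note = "  <- cumulative break-even"
        crossed = True
    print(f"Sprint {sprint:2}: without={cum_without:3}  with={cum_with:3}{note}")
```

With these invented numbers, the automated-testing team matches per-sprint velocity at Sprint 5 but doesn’t catch up in total value delivered until Sprint 10.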

While this example is completely imaginary and theoretical, I think it communicates my point. In an industry already famous for being behind schedule and over budget, it can be difficult to justify the cost of automated tests to a stakeholder who really wants that one extra feature by next week. 

Making the Case

“When you’re up to your neck in alligators, it’s hard to remember your original intention was to drain the swamp.” – Unknown

For sustained, long-term agile projects, the need for test automation is clear. Automated testing is the only way agile teams can maintain quality standards over time while continuing to deliver new value. For an agile team to succeed, this needs to be understood by both its members and stakeholders. How companies can foster a Culture of Quality is a topic for another time, but for now I can offer a few ideas to help you grow test automation practices on your team:

  • Include automated tests in your definition of done. As The Pragmatic Programmer puts it: “Coding ain’t done ’til all the tests run”!
  • If you’re going to write automated tests anyway, why not take it a step further and make them part of your requirements-gathering process? ATDD, anyone? (See the sketch after this list.)
  • Include a demonstration of your automated tests in your Sprint Review. Stakeholders may appreciate your team’s focus on continued quality.
  • Run your tests as often as makes sense (at least daily). Prioritize fixing failed tests over new development.
  • Make writing automated tests the responsibility of the whole team—not just testers or dedicated “automation engineers”, and definitely not a separate team entirely.
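
To ground the ATDD idea, here’s a minimal sketch of an acceptance-style automated test using Python and pytest. The ShoppingCart class and its behavior are entirely hypothetical; the point is the shape of a requirement expressed as an executable test:

```python
import pytest

# Hypothetical system under test.
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, sku: str, price: float) -> None:
        self._items.append((sku, price))

    def total(self) -> float:
        return sum(price for _, price in self._items)


# Acceptance criterion in Given/When/Then form:
# Given an empty cart, when two items are added, then the total reflects both.
def test_cart_totals_multiple_items():
    cart = ShoppingCart()                        # Given an empty cart
    cart.add("apple", price=0.50)                # When items are added
    cart.add("bread", price=2.25)
    assert cart.total() == pytest.approx(2.75)   # Then the total is correct
```

Because the test is written from the requirement rather than from the implementation, it doubles as living documentation your stakeholders can actually read during a Sprint Review.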

About The Author

App Dev Consultant

Adam is a developer and ALM coach at Cardinal Solutions. He is passionate about helping teams develop quality software using modern tools and best practices.