
The Death of Automation Part 2 - Doubt


Why do so many automation projects get discarded in the face of challenges? Trust is the only currency that spends in Test Automation.

The survival of any software project, in the end, relies on stakeholders seeing and knowing the value that they are getting from their investment. Today, we’ll focus on the business stakeholders, and leave the technical ones for a different post.

Product owners, executives, sales and marketing, and support and deployment teams all have one basic need from test automation, and that need is confidence. They need to trust the automation’s report on the state of the application. Specifically, they want to be able to make expensive decisions about release schedules, project planning, support staffing, industry messaging, sales roadmaps, and more, knowing the automation protects them from unexpected, high-impact problems. Metrics on test counts and code coverage feel good, but they do not speak to the real need. Stakeholders want to know that high-impact areas are covered with tests and that those tests provide adequate validation.

The definition of high-impact areas varies across those same stakeholders. Users want to know that the features they most value and use will keep working, or they will abandon the product. Support wants to know that their deployment teams and support engineers will still get to go home to their families after new software goes out. Sales wants to know that they will not look foolish in front of potential customers, whether one on one or on a stage at a trade show.

In the abstract, if we are building product features from the most valuable on down, then automating tests of the most important features would have to start back at the beginning and work forward toward the current release. If we do not have full coverage up to this point (and few teams do), starting with only the current features is, by definition, starting with the lowest-value features to date. In practice, feature value is not fixed over time and priorities shift constantly, which means there is an automation backlog to build and manage. I have never seen a test automation product owner role, but I would love to.

Adequate validation is a different question. I explain it like this: for your green, passing tests to be meaningful, you need to have seen them fail. Otherwise, with all the best intentions, we easily rack up lovely metrics showing that we are exercising lots of parts of the product, but we cannot be sure the outcomes are correct. The green bars become a placebo that quickly loses trust as issues sneak past it. I personally have had success with test-first approaches here, because the tests always start out failing.
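To make that concrete, here is a minimal sketch of the test-first idea in Python with pytest. The file names, the apply_discount function, and the pricing scenario are all hypothetical, invented purely for illustration; the point is only the order of events: the test is written and seen to fail before the code that makes it pass exists.

    # test_pricing.py -- written FIRST, so the initial run fails
    # (ImportError, then AssertionError) and proves the check is real.
    from pricing import apply_discount

    def test_ten_percent_discount():
        # 100.00 with a 10% discount should come back as 90.00
        assert apply_discount(100.00, 0.10) == 90.00

    # pricing.py -- the implementation added afterward to turn the test green.
    def apply_discount(price, rate):
        """Return price reduced by the given fractional rate."""
        return round(price * (1 - rate), 2)

A green result on that test now carries information, because the same test has already demonstrated that it can go red.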

The funny thing about both of these points, like much in software development, is that stakeholders often have only rough visibility into them. They build their trust or doubt on the effects of these pieces more than on the causes. When application problems pop up, get fixed, and then come back, trust falters. When automation passes and deployments still go badly, trust falters. If your business is scared to pull or to deploy a new product release, that shows the automation is not trusted. Trust in automation is built by making sure that stakeholders are not surprised. They may decide to roll out a release with a lot of failing tests, but then they know they are choosing an uphill battle and can plan and staff accordingly.

Automation must be trusted to be valued.

If your team wants to take their test automation game to another level, check out our public workshops here or contact us to bring us into your shop.
