Does this sound familiar? Each morning when I got into my office and opened my mailbox, there would be a report of the nightly test run: mainly Selenium and TestStack White tests run through TeamCity on dedicated test agents. Most days at least something had failed, and most days we didn't worry too much about it. After a while, a voice in the back of my head started nagging me about there being a pattern, but I couldn't pinpoint it by looking at each individual test run.
When I put all the historic data into a model and started twisting and turning it, a number of patterns stood out clear as day, and by using them I could start working on long-term continuous improvement instead of putting out fires.
During this talk we will look at a number of perspectives you can explore and how they might give you important insights:
- Tests that are always green: Do we need them? Why are they never failing?
- Tests that are always failing: Do they add value? Can we remove them?
- Tests that fail a lot: Is there an underlying issue? Are we addressing the problem, or just re-running them locally and blaming the environment?
- Are the tests run when we need them, or are we running them nightly because we see no other option? Can they be sped up to a point where they can provide a faster feedback loop? No? Are we sure?
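The categories above fall out naturally once you aggregate pass/fail history per test instead of reading one run at a time. As a minimal sketch, assuming the history is available as one dict of test results per nightly run (the data shape and the 80% threshold here are illustrative, not from the talk):

```python
from collections import defaultdict

def classify_tests(runs):
    """Classify tests by historical pass rate.

    `runs` is a list of dicts, one per nightly run,
    mapping test name -> True (passed) / False (failed).
    """
    passes = defaultdict(int)
    totals = defaultdict(int)
    for run in runs:
        for test, passed in run.items():
            totals[test] += 1
            if passed:
                passes[test] += 1

    categories = {}
    for test, total in totals.items():
        rate = passes[test] / total
        if rate == 1.0:
            categories[test] = "always green"   # do we need it?
        elif rate == 0.0:
            categories[test] = "always failing" # does it add value?
        elif rate < 0.8:
            categories[test] = "fails a lot"    # underlying issue?
        else:
            categories[test] = "mostly stable"
    return categories
```

Even a rough bucketing like this turns "something failed again" into a concrete list of tests worth investigating, removing, or speeding up.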
We will also address the problem of trying to use the same collection of tests to serve multiple competing needs: fast feedback to developers about recent changes, feedback to testers on where to explore, feedback to the team about releasing to production, and feedback to managers who want to know they are spending their money in the right places. Can it be done? Should it be? Are there alternatives?
- Looking at your test results over a longer time period gives you additional insights
- What to look for in your historical test data
- How trends can be analyzed for continuous improvement