Alfa 1+2

Automation - the Good, the Bad and the Ugly

When I first heard of automation (in the context of continuous delivery), I thought it was the holy grail of testing that would save me time and make testing better. Although both of those statements are true, what I have learned over the last 6 years is that it can be both good and bad, and sometimes ugly. I will show you what I have learned through doing it every day (and sometimes in my sleep), what mistakes I’ve made and also what success looks like. It's the story of the journey we embarked on when creating an automation library: how we used it in our tests, how it improved our daily lives, but also what we didn't do so well along the way.

Key takeaways:

  • Automation will challenge you like no other but it will be fun and rewarding to overcome those challenges
  • Automation done badly can do more harm than good 
  • How to start automation on a project and where that leads you


Come and have fun at our afterparty! Besides various fun activities, there will be food and drinks as well!

KEYNOTE: The Reality of Testing in an Artificial World

Our world is changing. Artificial intelligence is being employed in just about all walks of life - from virtual assistants to self-driving cars. How does this new way of life impact software testing? What is our role in it - is there even one?

Of course there is! While the future of artificial intelligence is mostly unknown, remember that as testers, one of our strongest assets is being able to discover and report the unknown. Many software developers jump at the opportunity to learn and implement the new skills required for artificial intelligence. Let’s get on board and take our place in making history!

In this talk, we’ll discuss the tester’s role in an artificial world, the skills needed to test such applications, and the impact that a tester can make in this space.

Key Takeaways:

  • You will gain a better understanding of the testable features of AI
  • You will examine the skills needed to test AI
  • You will explore the impact that testers can have on developing AI applications

KEYNOTE: From Being a Tester to Changing The Way QA is Done: My Trials and Tribulations

Kristel was just 23 when she quit her job as a QA tester and started down the path to building Testlio – an end-to-end QA management platform and worldwide community of vetted testers.

She started the company primarily with the thought of changing the way QA is done. She wanted to elevate the importance of testing and create a place where testers felt truly appreciated.

But building companies is tough – even when things are going well it comes with its own challenges. It’s easy to get caught second-guessing decisions.

In this fast-moving technology landscape, you are only as good as your product and its ability to solve your customers’ pain points. So over time, she’s learned to let things go, grow a thicker skin and face every challenge with positivity. During this session, she shares her experience and struggles as a young idealistic tester set on changing the industry.

Key takeaways:

  • What it takes to bring a new product / service to market
  • Mistakes to avoid when scaling a team

KEYNOTE: Stress Test - the Anatomy of a Burnout

Stress testing is essential for mission critical software, making sure the systems tested are robust, available and can handle errors under a heavy load - and that they recover gracefully. When working in highly driven environments, with projects where deadlines are tight and clients as well as interdepartmental stakeholders are breathing down everyone’s neck to get stuff done, we are constantly putting ourselves under a stress test as well. And this, in the end, leads to burnout - when we not only fail to recover gracefully, but might not recover at all.

What is burnout and why do people sustain it, keeping up the activities that lead to it? How long does it take to recover from one? What are organisational and personal symptoms of burnout? What are the stats in Europe and Estonia especially with regards to workplace stress levels and management styles? These together with my own burnout story - working on Star Wars: The Force Awakens and Fast & Furious 7 in the desert of Abu Dhabi three years ago - are all topics that will be explored within the keynote.

Research shows that burnout is not a talent issue - despite most HR departments and management viewing it as such - it is instead an organisational issue. Research also shows that the wellness programmes companies design precisely to ward off burnout are not effective in preventing it, and that the descent into this condition is usually rapid, whilst recovery comparatively takes years.

The people who are most likely to burn out are the high achievers and the most engaged and driven employees, i.e. the really valuable ones. It is therefore understandable why HR departments are often especially anxious about this topic - it leads to a high churn rate, creates knowledge gaps and, of course, more work for them. Harvard Business Review has reported that compared to roughly 20 years ago, people are twice as likely to report feeling constantly exhausted, and that there is a significant correlation between feeling lonely and work exhaustion - not because of social isolation, but because of emotional exhaustion due to burnout at work. Loneliness reduces longevity more than drinking, smoking or being obese.

The keynote will conclude by summarising key trends that are being used to address workplace stress levels, what to do when panic bites (I quit Star Wars - sometimes it really is best to vote with one’s feet!) and why all of this is relevant in software testing.

Build and Test with Empathy

Did you know that more than 57 million Americans have one or more forms of disability, and so do about 75 million users in the EU? That’s roughly one-sixth of the country’s population. With such a large user base, there are a lot of websites and applications that could be more accessible. How do you know if the product you are building serves their needs as well?

With the above preface, I'd like to bring more awareness to why one should care about building an accessible product, how one can develop and test for it, and how one can scale the development of features by having robust automated tests in place.


  • Why should we care? What are some common accessibility features that you can help develop for your product?
      • Color and contrast
      • Keyboard navigation
      • Screen reading capabilities for your website
  • How do we test for the above?
      • Introduction to powerful tools to leverage
  • How do we scale along?
      • Automated tests FTW (for the win)
      • Introduction to powerful frameworks to leverage
      • Identification of when to test for what
  • Where can we learn more?
      • Resources to go after to sharpen your technical acumen


Key Takeaways:

  • To empower people with building inclusive products
  • Test with empathy
  • Become an advocate for accessibility
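One of the accessibility checks mentioned above, color and contrast, can already be automated. As a minimal sketch (not from the talk itself), here is the WCAG 2.x contrast-ratio formula in Python:

```python
def relative_luminance(r, g, b):
    """WCAG 2.x relative luminance of an sRGB colour (channels 0-255)."""
    def channel(c):
        c = c / 255.0
        # sRGB linearisation per the WCAG definition
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, ranging from 1:1 up to 21:1."""
    l1, l2 = relative_luminance(*fg), relative_luminance(*bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA requires at least 4.5:1 for normal body text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # 21.0 (the maximum)
```

A check like this can run in an automated suite against the colours extracted from a page, flagging any text/background pair below the 4.5:1 AA threshold.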

From Robotium to Appium: Choose Your Journey

Mobile testing is challenging. It combines the complexity of testing web applications, as support for hybrid mobile applications continues to grow, with that of native mobile applications running on different mobile operating systems. In other words, mobile user interface testing is often twice as involved as regular web application testing. The high demand for reliable UI testing in the mobile domain has resulted in the creation of many UI test frameworks. In the open source community, two projects are responsible for the majority of UI testing: Robotium and Appium.

In this talk, the speaker will take his audience on a journey of UI testing, starting with an introduction to Robotium and its main principles, and later moving on to Appium while highlighting why one would choose Robotium over Appium and vice versa. Ultimately, listeners should be able to choose the UI test framework that is most applicable to their use case. The talk will include demonstrations of basic and advanced functionality of Robotium and Appium using Java. To conclude the presentation, TestObject and Firebase will be used to demonstrate how both frameworks can be scaled with the cloud as a testing ground.

Key takeaways:

This talk’s takeaways can be summarized in a few bullet points:

  • Main functionalities of Appium and Robotium in practice (using Java)
  • What’s and How’s of both frameworks
  • Scaling Appium with TestObject and Robotium with Firebase
  • Differences and use cases for both test frameworks: Appium and Robotium
  • When to use and when not to use either framework

Ultimately, the audience should be able to choose their own “journey”; in other words, they will choose what test framework best fits their use case.

How-To Guide: Statistics Based on the Test Data

Each of us has a project: a favourite, dear one that we wish to see grow and prosper. We write many manual tests, automate repetitive actions, report hundreds of issues in Jira or some other bug management tool, and as a result we generate a lot of data that we never use. However, how do you assess your project's prosperity if there are no criteria for this very prosperity?

How can you react quickly to problems before they become irreparable, if you are not gathering any information that can hint that something is going wrong?

How do you understand what should be improved, if you don't know that problems even exist in your project?

I have an answer: “Statistics!” Yes, when you hear this word in the context of testing, you might think it applies much better to sales or some other marketing field, but definitely not to the testing process itself. That's why, instead of formulas and a list of metrics, I will tell you about my experience of collecting and analyzing statistics - and the results that I have achieved since I started using them.

Key Takeaways:

Statistics are needed to manage a project effectively: to diagnose problems, localize them, fix them, and verify whether the solutions you have chosen actually helped. The goal is to extract the key values and present them in a compact, straightforward way.

During the presentation I will provide the following information:

  • why test statistics gathering is important
  • how and where to collect the statistics
  • what value the test results can bring into your daily workflow
  • how to make decisions based on the information you can get from the test execution statistics
  • how to find a root cause of failures and solve testing-related problems
  • samples of stats you can start gathering right now
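As a minimal, hypothetical sketch of the kind of statistic the talk alludes to, here is per-test pass-rate aggregation over raw CI results, sorted so the most problematic tests surface first (the test names and data are invented for illustration):

```python
from collections import defaultdict

# Hypothetical per-run test results: (test name, passed?) records
# pulled from a CI system or reporting tool.
runs = [
    ("test_login", True), ("test_checkout", False), ("test_search", True),
    ("test_login", True), ("test_checkout", True), ("test_search", False),
    ("test_login", True), ("test_checkout", False), ("test_search", True),
]

stats = defaultdict(lambda: {"passed": 0, "failed": 0})
for name, passed in runs:
    stats[name]["passed" if passed else "failed"] += 1

# Sort by failure count so the flakiest/most broken tests come first.
for name, s in sorted(stats.items(), key=lambda kv: -kv[1]["failed"]):
    total = s["passed"] + s["failed"]
    print(f"{name}: {s['passed']}/{total} passed ({100 * s['passed'] / total:.0f}%)")
```

Even a simple aggregation like this turns otherwise unused test data into a signal: a test that fails in two out of three runs is either flaky or guarding a real regression, and either way deserves attention first.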

A QA’s Role in a DevOps World – Quality Initiatives in the DevOps Tool Chain

There is a lot of talk about DevOps and the death of testing (again). The role of the tester might change with the faster and heavily automated approach to development and operations but the need for testers still exists. The presentation is based on actual experiences and will de-mystify the planning and execution of the quality work within a DevOps organization.

I will cover how you can identify QA initiatives along the DevOps tool chain and provide an easily adaptable five step model to plan and implement these initiatives as boundaries of job responsibilities between developers and testers become blurred.

Among other things this presentation will touch on the pros and cons of automated checks vs. manual tests and testing vs. monitoring, guarded commits, non-functional requirements, roll-back processes, up & down stream dependencies, quality coaching, A-B testing and full circle testing.

Key Takeaways:

  • Tips on how to identify quality initiatives in a DevOps tool chain
  • A real-life model for applying test strategies
  • An understanding of the changing role of a tester

Harnessing the Power of Learning

The size of the software industry doubles every five years, meaning that half of us have less than five years of experience. How do those with little experience get up to speed while working in a team of more seasoned professionals? Given the pace at which kids learn before someone kills their enthusiasm, how long does it take to train a 15-yo to be a more valuable tester than a 40-yo? And what would it take to give a 40-yo the curiosity of a 15-yo?

In this talk, we share our lessons learned while working together in a team, sharing tasks for a year. We show you a variety of induced learning approaches and their impact: learning while pairing, learning while doing (with / without immediate help), learning in school, learning on industry courses, and learning by reading. In particular, we will share our lessons on the importance of early and continued pairing for enhanced learning and for establishing basic testing skills. This is our shared story, with the 15-yo and the 40-yo. Can a year of learning be enough to outperform a seasoned tester?

Key Takeaways:

  • How to create a mix of learning approaches to grow someone's knowledge and skills while working
  • What knowledge and skills we recommend new testers to start from based on our experience
  • How new and old testers get easily fooled by the unknown unknowns in delivering product quality information
  • What kinds of things kill the enthusiasm, and how can we bring it back to testing?

KEYNOTE: Screw Testing, Let’s Talk Quality

Many of us have heard the phrase ‘Quality is a team responsibility’, meaning that instead of quality being owned by one person (typically a tester), it’s something many are responsible for.

But what does the word quality mean for testers, for your team and, importantly, for your stakeholders? Will the role of tester no longer exist? Is quality simply another word for testing, or is there more to quality than we think?

This talk explores the importance of understanding quality in the context of contemporary engineering practices such as Extreme Programming, CI/CD and DevOps. When quality is owned collectively, it can facilitate testability and shift testing closer to the design process.

Do I believe testers have a place in this future? Absolutely! In fact, I think the role of a tester is needed more now than ever but perhaps not in a way we have traditionally seen our roles.

A must for anyone moving to contemporary engineering approaches.

Key Takeaways:

  • Learn how testing fits into quality
  • How testing might fit into contemporary engineering practices
  • A way to have a conversation about quality at your workplace

Discovering Logic in Testing

We all test in different ways and sometimes it can be hard to explain the thought processes behind how we test. What leads us to try certain things, and how do we draw conclusions? Surely there is more going on here than intuition and luck. After working in games testing for almost a decade, I will draw from my personal experience to explain how games testers develop advanced logical reasoning skills. Using practical examples that will make you think, I will demonstrate logical patterns, rules and concepts that can help all of us gain a deeper understanding of what is actually happening in our minds when we test.

Key takeaways:

  • See how testing looks and feels from the perspective of a games tester, and hear about some of the challenges games testers face
  • Learn about the differences between deductive, inductive and abductive reasoning, along with the theory of falsificationism
  • Identify some of the biases we encounter when using personal observations and how logical reasoning can be applied when testing

How to Win with Automation and Influence People

Choosing an automation framework can be hard. When Gwen started in her current role, there were nine different test automation frameworks in use for acceptance testing, and many of the tests had been abandoned and were not running as part of the CI solution. If test automation is not running, what value can it add? The tests that were being run were labeled only as functional tests and replaced unit tests. These tests covered component, integration and sometimes even end-to-end testing. Entire layers of testing were missing, which made refactoring and receiving quick feedback difficult.

This is an experience report from when Gwen joined a large organisation and how, with the help of other members of the team, she created a clear multi-team automation solution. By implementing practices such as pairing, cross-team code reviews and clear descriptions of which layers of testing covered what, the teams came together to write clear, useful automation.

If you have a team working on multiple products, implementing a framework that can be picked up easily when moving between teams is essential. Within this talk, Gwen will explain how to present these ideas not only to members of the team, but also how to get senior management on board with delivering an easy-to-use, multi-layered framework.

Key Takeaways:

  • Attendees will understand the different layers of testing - and how to sell that idea not only within the team but also to senior management.
  • They will understand how to solve the problem of frameworks not covering all layers of automation.
  • Attendees will find out how to get all members of the team on board to create tests at all layers, not just the testers or the developers.

How This Tester Learned to Write Code

Every few months the same old question pops up: should testers learn how to code? I don't think they have to. You can spend a full career in testing, improving your skills every step of the way, without ever feeling the need or desire to add coding to your skill set. However, if you are thinking about learning how to write code, I'd like to share three stories with you about how I learned.

The titles of the three stories are: how I got started, how I impressed myself for the first time, and how I finally learned some dev skills. More important than the stories themselves are the lessons I learned, so I will share some practical advice and some interesting resources. And perhaps most importantly, I will show how two testing skills give you a great advantage when learning how to code.


Key takeaways:

  • Writing a bit of code that's useful to you is a perfect first step in learning.
  • Iterative development, it works!
  • Developers have interesting heuristics about clean code.
  • Testing skills help you tackle the big and complex task of learning to write code.

Test Automation in Python

Here Kyle provides a look at test automation in Python. Python continues to show strong growth in general language adoption and in the jobs market. Python’s emphasis on simplicity and pragmatism has brought about a vibrant and supportive community, making for a powerful language with a shallow learning curve. It’s an excellent language for test tooling, and Kyle hopes to give you a simple overview that allows you to build a practical, maintainable and scalable test infrastructure for your client-facing and system-level integration test needs.

Kyle will give you an overview of Pytest, a simple open source test framework, which contains very powerful features to help you construct concise tests quickly. He will also show you the rough design structure implemented at FanDuel, the leading daily fantasy sports company in the US and Canada, that aims to foster stability, ease of use and ease of contribution.

Even if you currently have a solution for your project or organisation, Kyle hopes you will have takeaways from the approaches above and wise tokens of hard lessons learned in test automation efforts.


Key takeaways:

  • An understanding of what Python and Pytest have to offer for test automation/tooling needs.
  • An insight into dependency injection as a means of test setup and teardown, and the advantages/disadvantages of this test automation design structure.
  • Wise tokens of hard lessons learned in test automation efforts.
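The dependency injection mentioned above refers to Pytest's fixture mechanism: a test names a fixture as a parameter, and Pytest injects its value, running setup before the test and teardown after it. A minimal sketch (the `db_session` dictionary is a hypothetical stand-in for a real resource, not FanDuel's actual design):

```python
import pytest

@pytest.fixture
def db_session():
    """Set up a (hypothetical) database session; tear it down afterwards."""
    session = {"connected": True, "rows": []}  # stand-in for a real connection
    yield session                              # value injected into the test
    session["connected"] = False               # teardown runs after the test

def test_insert(db_session):
    # Pytest matches the argument name to the fixture and injects its value.
    db_session["rows"].append("alice")
    assert db_session["rows"] == ["alice"]
```

Because setup and teardown live in one place and tests merely declare what they need, fixtures compose naturally and keep individual tests concise.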

10 Mobile App Testing Mistakes to Avoid

In this talk I will share 10 common mobile app testing mistakes and how to avoid them. I will share my knowledge in the field of mobile testing and will present practical examples about the mobile testing mistakes that I have seen in the past 9 years while working with different mobile teams across several mobile apps. The content of the talk will cover topics from manual and automated mobile testing, mobile guidelines, mobile testing techniques and how to release a mobile app without making the customer unhappy.

Key takeaways:

Each of the 10 mistakes will help the audience avoid the mistakes I have seen in the past and improve their mobile testing.

1. Avoid easy and common mobile testing mistakes.

2. A list of testing ideas to build up a powerful mobile testing strategy.

3. Useful resources to get your mobile testing started.

Final Frontier? Testing in Production

No tester wants to hear a developer say "it works on my machine!", because what is really being said is: "since it worked on my development environment, I assume it also works on your test environment, hence you cannot possibly have found a bug". We know this not to be true, yet we make the same assumption between environments at a later stage: we test our software on staging environments and assume that our test results carry over to production. We are not testing the software in the setting where our users are facing it. To top it off, we spend a considerable amount of money trying to copy production, and managing test environments is often hard, complex and needs a lot of maintenance effort.

A lot of people are already using techniques that take testing into production, like beta testing, A/B testing or monitoring as testing. We intend to push the envelope a little further and additionally move running automated checks or exploration to the production stage. To do so, we need to take several things into consideration, such as making sure test data does not mess up production data and analytics, as well as hiding untested features from customers.

In this talk, Marcel Gehlen will show you some popular techniques for testing in production. He will also present various strategies that help tackle common constraints faced when testing in production, and he'll provide you with an approach to gradually shift your testing to production.
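Two of the constraints mentioned above, hiding untested features from customers and keeping test traffic out of production analytics, are commonly handled with feature flags and tagged test accounts. A minimal, hypothetical sketch (the flag names and configuration shape are invented for illustration):

```python
import random

# Hypothetical flag configuration: untested features stay hidden from real
# users, while testers and a small canary cohort see them in production.
FLAGS = {"new_checkout": {"enabled_for_testers": True, "canary_percent": 5}}

def is_enabled(flag, user):
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False          # unknown flags are safely off
    if user.get("is_tester"):
        # Test accounts are tagged, so their traffic can also be
        # filtered out of production analytics downstream.
        return cfg["enabled_for_testers"]
    # Real users only see the feature within the small canary percentage.
    return random.random() * 100 < cfg["canary_percent"]

print(is_enabled("new_checkout", {"is_tester": True}))   # True
print(is_enabled("new_checkout", {"is_tester": False}))  # usually False (5% canary)
```

The same `is_tester` tag that gates the flag can be attached to orders, events and logs, so automated checks running in production can be excluded from business metrics.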
