
Alfa 1+2

Automation - the Good, the Bad and the Ugly

When I first heard of automation (in the context of continuous delivery), I thought it was the holy grail of testing that would save me time and make testing better. Although both of those statements are true, what I have learned over the last six years is that automation can be good and bad, and sometimes ugly. I will show you what I have learned by doing it every day (and sometimes in my sleep), what mistakes I’ve made, and what success looks like. It's the story of the journey we embarked on when creating an automation library: how we used it in our tests, how it improved our daily lives, but also what we didn't do so well along the way.

Key takeaways:

  • Automation will challenge you like no other, but overcoming those challenges is fun and rewarding
  • Automation done badly can do more harm than good
  • How to start automation on a project and where that leads you

DINNER & PARTY

Come and have fun at our afterparty! Besides various fun activities, there will be food and drinks as well!

Build and Test with Empathy

Did you know that more than 57 million Americans have one or more forms of disability, as do about 75 million users in the EU? That’s roughly one-sixth of the country’s population. With such a large user base, there are a lot of websites and applications that could be more accessible. How do you know if the product you are building serves their needs as well?

With the above preface, I'd like to bring more awareness and a sense of responsibility for building better, more accessible products. This talk covers why one should care about building an accessible product, how one can develop and test for it, and how one can scale the development of features by having robust automated tests in place.

Agenda:

  • Why should we care? What are some common accessibility features that you can help develop for your product?
      ◦ Color and contrast play
      ◦ Keyboard navigation
      ◦ Screen reading capabilities for your website
  • How do we test for the above?
      ◦ Introduction to powerful tools to leverage
  • How do we scale along?
      ◦ Automated tests FTW (for the win); a minimal sketch follows this agenda
      ◦ Introduction to a powerful framework to leverage
      ◦ Identification of when to test for what
  • Where can we learn more?
      ◦ Resources to go after to sharpen your technical acumen
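
To give a concrete feel for the "Automated tests FTW" item, here is a minimal sketch of one automated accessibility check, assuming Python with Selenium; the page URL and the alt-text rule are illustrative assumptions on my part, not material from the talk.

```python
# Minimal sketch: flag <img> elements that are missing alt text.
# The URL and local Chrome setup are hypothetical, purely for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")  # hypothetical page under test
    images = driver.find_elements(By.TAG_NAME, "img")
    # Screen readers rely on alt text, so a missing or empty alt attribute
    # is a red flag (unless the image is purely decorative).
    missing_alt = [img.get_attribute("src") for img in images
                   if not img.get_attribute("alt")]
    assert not missing_alt, f"Images missing alt text: {missing_alt}"
finally:
    driver.quit()
```

A check like this can run in CI on every build, which is how automated tests help accessibility work scale alongside feature development.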


Key Takeaways:

  • To empower people to build inclusive products
  • Test with empathy
  • Become an advocate for accessibility

From Robotium to Appium: Choose Your Journey

Mobile testing is challenging. It combines the complexity of testing web applications (as support for hybrid mobile applications continues to grow) with that of testing native mobile applications, which run on different mobile operating systems. In other words, mobile user interface testing is often twice as involved as regular web application testing. The high demand for reliable UI testing in the mobile domain has resulted in the creation of many UI test frameworks. In the open source community, two projects are responsible for the majority of UI testing: Robotium and Appium.

In this talk, the speaker will take his audience on a journey of UI testing, starting with an introduction to Robotium and its main principles, and later moving on to Appium, while highlighting why one would choose Robotium over Appium and vice versa. Ultimately, listeners should be able to choose the UI test framework that is most applicable to their use case. The talk will include demonstrations of basic and advanced functionality of Robotium and Appium using Java. To conclude the presentation, TestObject and Firebase will be used to demonstrate how both frameworks can be scaled with the cloud as a testing ground.
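
To give a feel for the shape of an Appium test ahead of the talk's Java demos, here is a minimal sketch using Appium's Python client instead; the server URL, device name, app path, and element id are assumptions for illustration only.

```python
# Minimal sketch of an Appium session, written against the classic
# Appium-Python-Client 1.x desired-capabilities style (the talk's own
# demos use Java). All values below are hypothetical.
from appium import webdriver

caps = {
    "platformName": "Android",
    "deviceName": "emulator-5554",          # hypothetical emulator
    "app": "/path/to/app-under-test.apk",   # hypothetical app binary
    "automationName": "UiAutomator2",
}
driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
try:
    # Find an element by its accessibility id and tap it.
    driver.find_element_by_accessibility_id("login_button").click()
finally:
    driver.quit()
```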

Key takeaways:

This talk’s takeaways can be summarized in a few bullet points:

  • Main functionalities of Appium and Robotium in practice (using Java)
  • The what’s and how’s of both frameworks
  • Scaling Appium with TestObject and Robotium with Firebase
  • Differences and use cases for both test frameworks: Appium and Robotium
  • When to use and when not to use either framework

Ultimately, the audience should be able to choose their own “journey”; in other words, they will choose what test framework best fits their use case.

How-To Guide: Statistics Based on the Test Data

Each of us has a project: a favorite, dear one that we wish to see grow and prosper. So we write many manual tests, automate the repetitive actions, and report hundreds of issues in Jira or another bug management tool, and as a result we generate a lot of data that we do not use. However, how do you assess your project's prosperity if there are no criteria for this very prosperity?

How can you react quickly to problems before they become irreparable, if you are not gathering any information that can hint that something is going wrong?

How do you understand what should be improved, if you don't even know that problems exist in your project?

I have an answer: “Statistics!" Yes, when you hear this word in the context of testing, you might think that it applies much better to sales or any other marketing field, and is definitely not related to the testing process itself. That's why, instead of formulas and a list of metrics, I will tell you about my experience of collecting and analyzing statistics, and the results that I have achieved since I started using them.

Key Takeaways:

Statistics are needed to manage a project effectively: to diagnose problems, localize them, correct them, and verify whether the methods you chose to solve the problem have helped. The goal is to extract the key values and present them in a compact, straightforward way.

During the presentation I will provide the following information:

  • why gathering test statistics is important
  • how and where to collect the statistics
  • what value the test results can bring to your daily workflow
  • how to make decisions based on the information you can get from test execution statistics
  • how to find the root cause of failures and solve testing-related problems
  • samples of stats you can start gathering right now (a minimal sketch follows this list)
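
As an illustration of that last bullet (my own sketch, not material from the talk), here is one way to start extracting simple statistics, assuming Python and hypothetical test result records:

```python
# Minimal sketch: compute a pass rate and the most frequent failures
# from raw test results. The records and field names are hypothetical;
# in practice you would load them from your CI server or test reports.
from collections import Counter

results = [
    {"test": "test_login",    "status": "passed"},
    {"test": "test_checkout", "status": "failed"},
    {"test": "test_checkout", "status": "failed"},
    {"test": "test_search",   "status": "passed"},
]

total = len(results)
passed = sum(1 for r in results if r["status"] == "passed")
print(f"Pass rate: {passed / total:.0%}")

# Tests that fail most often are good candidates for root-cause analysis.
failures = Counter(r["test"] for r in results if r["status"] == "failed")
for test, count in failures.most_common(3):
    print(f"{test}: failed {count} times")
```

Even statistics this simple, tracked over time, can reveal trends long before a problem becomes irreparable.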

A QA’s Role in a DevOps World – Quality Initiatives in the DevOps Tool Chain

There is a lot of talk about DevOps and the death of testing (again). The role of the tester might change with the faster, heavily automated approach to development and operations, but the need for testers still exists. The presentation is based on actual experiences and will demystify the planning and execution of quality work within a DevOps organization.

I will cover how you can identify QA initiatives along the DevOps tool chain and provide an easily adaptable five-step model to plan and implement these initiatives as the boundaries of job responsibilities between developers and testers become blurred.

Among other things, this presentation will touch on the pros and cons of automated checks vs. manual tests and testing vs. monitoring, as well as guarded commits, non-functional requirements, roll-back processes, upstream and downstream dependencies, quality coaching, A/B testing and full circle testing.

Key Takeaways:

  • Tips on how to identify quality initiatives in a DevOps tool chain
  • A real-life model for applying test strategies
  • An understanding of the changing role of a tester

Harnessing the Power of Learning

The software industry doubles in size every five years, meaning that half of us have less than five years of experience. How do those with little experience get up to speed while working in a team of more seasoned professionals? Given the pace at which kids learn before someone kills their enthusiasm, how long does it take to train a 15-yo to be a more valuable tester than a 40-yo? What would it take to give a 40-yo the curiosity of a 15-yo?

In this talk, we share the lessons we learned while working together in a team, sharing tasks for a year. We show you a variety of induced learning approaches and their impact: learning while pairing, learning while doing (with and without immediate help), learning in school, learning on industry courses, and learning by reading. In particular, we will share our lessons on the importance of early and continued pairing for enhancing learning and establishing basic testing skills. This is our shared story, told by the 15-yo and the 40-yo. Can a year of learning be enough to outperform a seasoned tester?

Key Takeaways:

  • How to create a mix of learning approaches to grow someone's knowledge and skills while working
  • What knowledge and skills we recommend new testers start from, based on our experience
  • How new and old testers get easily fooled by the unknown unknowns in delivering product quality information
  • What kinds of things kill enthusiasm, and how we can bring it back to testing

Discovering Logic in Testing

We all test in different ways, and sometimes it can be hard to explain the thought processes behind how we test. What leads us to try certain things, and how do we draw conclusions? Surely there is more going on here than intuition and luck. After working in games testing for almost a decade, I will draw on my personal experience to explain how games testers develop advanced logical reasoning skills. Using practical examples that will make you think, I will demonstrate logical patterns, rules and concepts that can help all of us gain a deeper understanding of what is actually happening in our minds when we test.

Key takeaways:

  • See how testing looks and feels from the perspective of a games tester, and hear about some of the challenges games testers face.
  • Learn about the differences between Deductive, Inductive and Abductive reasoning, along with the theory of Falsificationism.
  • Identify some of the biases we encounter when using personal observations, and how logical reasoning can be applied when testing.

How to Win with Automation and Influence People

Choosing an automation framework can be hard. When Gwen started in her current role, there were nine different test automation frameworks in use for acceptance testing, and many of the tests had been abandoned and were not running as part of the CI solution. If test automation is not running, what value can it add? The tests that were being run were labeled only as functional tests and replaced unit tests. These tests covered component, integration and sometimes even end-to-end testing. Entire layers of testing were missing, which made refactoring and receiving quick feedback difficult.

This is an experience report from when Gwen joined a large organisation and how, with the help of other members of the team, she created a clear multi-team automation solution. By implementing practices such as pairing, cross-team code reviews and clear descriptions of which layers of testing covered what, the teams came together to write clear, useful automation.

If you have a team working on multiple products, implementing a framework that can be picked up easily when moving between teams is essential. In this talk, Gwen will explain how to present these ideas not only to members of the team, but also how to get senior management on board with delivering an easy-to-use, multi-layered framework.

Key Takeaways:

  • Attendees will understand the different layers of testing, and how to sell that idea not only within the team but also to senior management.
  • They will understand how to solve the problem of frameworks not covering all layers of automation.
  • Attendees will find out how to get all members of the team on board to create tests at all layers, not just the testers or the developers.

How This Tester Learned to Write Code

Every few months the same old question pops up: should testers learn how to code? I don't think they have to. You can spend a full career in testing, improving your skills every step of the way, without ever feeling the need or desire to add coding to your skill set. However, if you are thinking about learning how to write code, I'd like to share three stories with you about how I learned.

The three stories are: how I got started, how I impressed myself for the first time, and how I finally learned some dev skills. More important than the stories themselves are the lessons I learned, so I will share some practical advice and some interesting resources. And perhaps most importantly, I will show how two testing skills give you a great advantage when learning how to code.


Key takeaways:

  • Writing a bit of code that's useful to you is a perfect first step in learning.
  • Iterative development, it works!
  • Developers have interesting heuristics about clean code.
  • Testing skills help you tackle the big and complex task of learning to write code.

Test Automation in Python

Here Kyle provides a look at test automation in Python. Python continues to show strong growth in general language adoption and in the jobs market. Python’s urge for simplicity and pragmatism has brought about a vibrant and supportive community, making for a powerful language with a shallow learning curve. It’s an excellent language for test tooling, and Kyle hopes to give you a simple overview that allows you to build practical, maintainable and scalable test infrastructure for your client-facing and systems-level integration test needs.

Kyle will give you an overview of pytest, a simple open source test framework that contains very powerful features to help you construct concise tests quickly. He will also show the rough design structure implemented at FanDuel, the leading daily fantasy sports company in the US and Canada, which aims to foster stability, ease of use and ease of contribution.
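
As a flavour of the fixture mechanism pytest uses for setup and teardown via dependency injection (a minimal sketch of my own, not FanDuel's actual design), consider the following; the `api_client` fixture and `FakeApiClient` class are hypothetical:

```python
# Minimal pytest sketch: a fixture injects a dependency into a test and
# handles teardown. FakeApiClient is a hypothetical stand-in.
import pytest

class FakeApiClient:
    def connect(self):
        self.connected = True

    def get_balance(self, user_id):
        return 100  # canned response for the sketch

    def close(self):
        self.connected = False

@pytest.fixture
def api_client():
    client = FakeApiClient()
    client.connect()   # setup
    yield client       # injected into any test that names this fixture
    client.close()     # teardown runs even if the test fails

def test_balance_is_positive(api_client):
    assert api_client.get_balance(user_id=42) > 0
```

Because tests simply name the fixtures they need, setup and teardown live in one place and tests stay concise, which is one of the advantages of this structure discussed in the takeaways below.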

Even if you currently have a solution for your project or organisation, Kyle hopes you will take away something from the approaches above, along with some wise tokens of hard lessons learned in test automation efforts.


Key takeaways:

  • An understanding of what Python and pytest have to offer for test automation/tooling needs.
  • An insight into dependency injection as a means of test setup and teardown within a test automation design structure, and the advantages/disadvantages of this approach.
  • Wise tokens of hard lessons learned in test automation efforts.

10 Mobile App Testing Mistakes to Avoid

In this talk, I will share 10 common mobile app testing mistakes and how to avoid them. I will share my knowledge in the field of mobile testing and present practical examples of the mobile testing mistakes I have seen over the past 9 years while working with different mobile teams across several mobile apps. The talk will cover topics from manual and automated mobile testing, mobile guidelines and mobile testing techniques, to how to release a mobile app without making the customer unhappy.

Key takeaways:

Each of the 10 mistakes will help the audience avoid the pitfalls I have seen in the past and improve their mobile testing.

1. Avoid easy and common mobile testing mistakes.

2. A list of testing ideas to build up a powerful mobile testing strategy.

3. Useful resources to get your mobile testing started.

Lessons Learned from Testing Machine Learning Software

The models will learn what you teach them to learn. This phrase describes the main problem you face when testing machine learning software, since you must test not only the software but also the learning model. Over the past three years, my team has learned that software built around machine learning algorithms presents a number of challenges and peculiarities for testers, ranging from the application of statistical methods to the knowledge and understanding of neural learning models. I therefore see a new type of tester starting to emerge, the "Experimental Data Scientist", who, as in any other scientific discipline, is responsible for validating, checking and ensuring the accuracy and legitimacy of what the theoretical models predict.

But that is not all: the ecosystem and infrastructure necessary to conduct experiments are also peculiar. This is where testers have to use a variety of technologies and tools capable of supporting everything involved in exercising this kind of software, ranging from containers to the most specialized math libraries.

With all this as a preamble, in this talk we will look at all the ways we failed when faced with this kind of software, as well as how to build the necessary ecosystem, and we will show you some examples of challenges and practical solutions. We'll see how, through many failures, errors and mistakes, we have learned some lessons on how to deal with such problems. Some of the things you will see are:

  • Experimental Data Scientist
  • Knowing and learning from the model (everything revolves around training data, objective functions and metrics)
  • When the model is wrong (examples with spurious correlations and adversarial examples)
  • Property-based testing applied to models (a minimal sketch follows this list)
  • Supervised versus unsupervised learning, or both together
  • Big Data: how much of it do we really use?
  • Dependencies: the output of one model is always the input of the next (example with a real model)
  • Infrastructure: when your laptop is no longer enough
  • Managing expectations: this is not magic!
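
As an illustration of the property-based testing item above (my own sketch, not code from the talk), here is a minimal example using the Hypothesis library in Python; the `predict` function is a hypothetical stand-in for a trained model.

```python
# Minimal sketch of property-based testing applied to a model, using
# Hypothesis. `predict` is a hypothetical stand-in for a real model.
from hypothesis import given, strategies as st

def predict(features):
    # Stand-in model: an average score clamped to [0, 1].
    score = sum(features) / len(features)
    return min(max(score, 0.0), 1.0)

@given(st.lists(st.floats(min_value=0.0, max_value=1.0),
                min_size=1, max_size=20))
def test_prediction_is_a_valid_probability(features):
    # Property: whatever the input, the output must stay inside [0, 1].
    assert 0.0 <= predict(features) <= 1.0
```

Instead of asserting exact outputs (which a learning model rarely guarantees), the test states a property that must hold for every generated input, and Hypothesis searches for counterexamples.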


Key takeaways:

  • Basic knowledge about machine learning
  • Stories about real problems that we faced in testing machine learning software
  • Some software testing techniques applied to machine learning