
Beta&Gamma

Learning Programming by Osmosis

Many different roles contribute to building software: product owners, business specialists, testers. Yet knowledge of programming keeps these roles at a distance. In this talk, I will share how I came to programming: not through wanting to program and taking courses on it, but through working with programmers in a style called mob programming. This talk serves as inspiration for programmers to invite non-programmers to learn code a layer at a time, immersed in the experience of creating software together, transforming the team's ability to deliver. Lessons specific to each skillset rub off in both directions, leaving everyone better off after the experience.

 

In this talk, you will learn:

  • What mob programming is and why you should care about working in that style
  • How to use strong-style pairing as a means of connecting everyone, regardless of programming skill level
  • What contributions non-programmers make in a mob before they learn to program
  • How I became a programmer through working in mobs at work and at community meetups, rather than by studying programming

Integration Testing: You Keep Using That Word, I Do Not Think It Means What You Think It Means

Very few projects have a substantial or well-written integration test layer. A major reason for this is that “integration testing” is a pretty poorly defined term that covers a range of testing types. We’re often tempted to skip this layer, but unit tests and functional end-to-end tests are not enough by themselves to ensure our code is working properly. In this talk, we will learn why integration testing matters and how to clarify what that term actually means (hint: there's no right answer). Once we can define what integration testing actually means for our project, we can build a test harness that is easier for both developers and testers to extend and trust.
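One way to make the term concrete is to write a test that exercises two real components together against a real dependency, rather than mocking everything away. The sketch below is a minimal, hypothetical Python example (the `UserRepository`/`SignupService` names are invented for illustration, not taken from the talk): the "integration" here is service plus repository plus an actual in-memory SQLite database.

```python
import sqlite3
import unittest

# Hypothetical components -- stand-ins for two real units in your codebase.
class UserRepository:
    """Persists users in SQLite: a real dependency, not a mock."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

class SignupService:
    """Business logic that depends on the repository."""
    def __init__(self, repo):
        self.repo = repo

    def register(self, name):
        if not name:
            raise ValueError("name required")
        self.repo.add(name)

class SignupIntegrationTest(unittest.TestCase):
    """Integration test: exercises service + repository + a real
    (in-memory) database together, where a unit test would mock
    the repository instead."""
    def test_register_persists_user(self):
        repo = UserRepository(sqlite3.connect(":memory:"))
        SignupService(repo).register("alice")
        self.assertEqual(repo.count(), 1)

# Run explicitly so the sketch also works outside a test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SignupIntegrationTest)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Whether that counts as an "integration test" on your project is exactly the definitional question the talk raises; the point is that once the team agrees on the boundary, tests like this become cheap to add.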

Determining Your Application's Heartbeat Through Monitoring and Logging

You’ve deployed to live, but can you actually tell that your application is healthy? This talk discusses how monitoring and logging can provide vital information that enables the live team to support the application, and how monitoring and logging can be an invaluable resource for the test team. Looking at different layers of monitoring and different levels of logging, this talk will provide information on how to create logging and monitoring that build a system whose state is known, and how to use these resources to help you test.

 

Key takeaways: 

  • Basic understanding of monitoring layers
  • Basic understanding of logging levels
  • Understanding of how to use monitoring and logging to support testing
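The "logging levels" in the takeaways can be illustrated with Python's standard logging module (the logger name and messages below are invented examples, not from the talk): each call site states its severity, and the configured level decides what reaches the logs.

```python
import logging

# Configure a logger; in production the level usually comes from config.
logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)  # DEBUG messages are suppressed at this level

logger.debug("cart contents: %s", ["sku-1", "sku-2"])  # hidden at INFO
logger.info("order %s placed", "A-1001")               # normal heartbeat
logger.warning("payment retry %d of 3", 2)             # degraded but alive
logger.error("payment gateway unreachable")            # needs attention
```

Monitoring can then alert on the rate of ERROR-level events, while testers turn the level down to DEBUG locally to see the detail the live system deliberately suppresses.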

A Story of a Tester Building His First Mobile App

A new idea may cause some sleepless nights for you, just like a newborn baby. Which ideas are actually worth pursuing? How do you begin developing an idea? How do you transform an idea into a real solution that will make a difference? I faced these questions and even more challenges when I started building TesterKey (http://testerkey.com/), a mobile app to boost testers’ productivity.

I will discuss the challenges of finding and developing an idea that makes an impact. It turns out that sometimes, when you need a specific tool, it is worth trying to build one yourself. Coming from a tester’s background, I will share my insights and experience from developing for the mobile environment.

 

Key takeaways:

  • How to get inspiration for an awesome idea and what to do if you think you have found one
  • Simple steps, based on my personal experience, showing how to get started with mobile app development
  • Quick overview of a useful tool for testing web and native applications on mobile devices

How to Give More Value to Business as a Tester

Testers are eager to find bugs! That is something we are so proud of. We can find even those bugs which nobody else can find. As testers, though, we do not really concentrate on the value our bugs give to the business. If a bunch of bugs is closed with the notification "Will Not Fix", it is a sign that our work is not valued as much as we hope.

In this 15-minute track I will cover how to find bugs which give value to the business, how to find out what the business really values, and how we can help to raise this value.

This topic is important because sometimes testers are not really valued. Testers are seen as an expense, and some project managers sometimes try to skip testing to avoid that expense (of course, if they do this, they soon learn that it was not a good idea).

 

Key takeaways: 

After the track attendees will know:

  • How to find out what the critical values of a specific business are
  • How to give more value as a tester
  • How to convince others that testing is really needed

The Trials and Tribulations of a Non-Functional Test Consultant

I've been part of the Test Community for nearly 3 years, and I don't know many others who work for large consultancies. I mean, I know these people do exist - I've worked with hundreds of them - but you don't often bump into them at conferences or on Twitter. Now maybe this says more about the kind of company I like to keep, or perhaps, how insular big companies can be sometimes, but it got me thinking. I want to help demystify the work that the big consultancies do - specifically around Non-Functional Test. There seems to be a feeling amongst the Test Community (having been on the receiving end of this discrimination) that the big consultancies don't 'do Test properly', and while I'll admit that I do disagree with the approach they sometimes take, I also want to show people that working for a large consultancy can create some amazing opportunities for personal growth and development.

Through this talk, I'll explain why Non-Functional Test is different to Functional Test from a consultancy perspective (running tests is only half the battle, I spent most of my time justifying my existence as a Non-Functional Tester!). I'll also look at why working for a big consultancy tends to be different to working for smaller companies - huge teams, offshore working, JFDI syndrome - and why these can be good things!

Consultancy was one of the most challenging, enjoyable and exciting roles I've ever done, and I want to show people that Consultants can be 'proper' Testers too.

 

Key takeaways: 

  • Things to consider before working for a big Consultancy.
  • The importance of truly understanding the business reasons for testing.
  • Why working as a Consultant helped make me a better Tester.

Using Versatile Power-Tools for Testing Embedded Systems Efficiently

The investment of project time and money into buying and learning advanced systems for testing an embedded system with mechatronics can often discourage teams from trying to automate with mechanics, sensors, switches, or other hardware. In recent years, simple platforms such as Arduino and Raspberry Pi have emerged and proven to be easy to use, fast to develop on, and very versatile. I will talk about how using such low-investment, recyclable tools makes it easier and faster to set up new tests, to adapt your tests to what your testing discovers, and to worry less about criticism of your spending on unused equipment.

 

Key takeaways: 

  • What you can do with equipment for less than €100, or even €50
  • How to get started with Arduino
  • How you can use Arduino to test better and more
  • How quick and easy it is to change a test setup

Not Making a Drama Out of a Crisis: How we Survived Losing One Third of our Testers Overnight

So how would your test team cope if you lost one third of your testers overnight?  Hopefully you’ll never have to find out, but for us it really happened.  We went from 12 testers to 8 testers overnight (with no warning), covering the same number of feature teams and developers, and with no hope of replacing them.  So how did we cope?

In this talk we’ll be looking at what we did to survive.

We’ll be looking at some of the improvements that we made, and how those same improvements can help any Test Team be stronger and work better.

Some of the things that we had to do included:

  • Analysing all our Smoke, Regression and Release Support testing to identify those tests that truly added value, and eliminating the rest.
  • Keeping communication channels open with the rest of the department so we could identify additional needs and requirements as soon as they appeared.
  • Re-jigging our testers across the feature teams to ensure that only the more senior testers were left in the more stressful roles.
  • Identifying all existing (and previous) single points of knowledge (‘bus factors’) and ensuring this information was documented and shared.
  • Looking for areas for automation that we hadn’t previously considered.

 

Key takeaways: 

  • Eliminating the ‘bus factor’.
  • Automating things you’d never considered automating.
  • Trimming unnecessary and low-risk testing and support.
  • Identifying your stake-holders and including them in your decision-making process.

Analysis and Modification of Mobile Applications Traffic

My talk is about using a proxy (I use Burp Suite for this purpose) during mobile applications testing (I work with iOS and Android apps, an example that I provide covers iOS apps) to analyze and manipulate network traffic.

The main idea is to show the audience why it is essential to test client-server interaction during mobile apps testing.

In my talk, I will use real examples of such manipulations. You can also check out an article on this topic: https://stanfy.com/blog/monitor-mobile-app-traffic-with-sniffers/, which covers the basics (though in the presentation I will talk less about setting up a proxy and more about why to use this approach).
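In Burp Suite this kind of manipulation is typically configured as an intercept or match-and-replace rule; from the client's point of view, the effect is simply that the server's response arrives modified. The self-contained Python sketch below simulates that effect with a toy HTTP server (the `premium`/`balance` JSON fields are invented for illustration), to show why testing client-server interaction matters: the app must cope with responses it did not expect.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy "backend" response that a proxy rewrite rule might tamper with.
ORIGINAL = {"premium": False, "balance": 10}

class TamperingHandler(BaseHTTPRequestHandler):
    """Simulates a proxy rewrite rule: flip a flag in the server's JSON
    before the mobile client sees it -- the kind of edit you would
    configure in an intercepting proxy such as Burp Suite."""
    def do_GET(self):
        body = dict(ORIGINAL)
        body["premium"] = True  # the manipulation under test
        data = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep output quiet
        pass

def fetch_once():
    """Serve exactly one request and return what the 'client' received."""
    server = HTTPServer(("127.0.0.1", 0), TamperingHandler)
    threading.Thread(target=server.handle_request, daemon=True).start()
    url = "http://127.0.0.1:%d/" % server.server_port
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())
```

With a real proxy you would of course not change the server at all: the rewrite happens in transit, which is exactly what makes it such a powerful testing technique.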

Test Trend Analysis: Towards Robust, Reliable and Timely UI Tests

Writing good UI Automation is challenging. Slow, unreliable tests are typical problems that people can face. In this talk you will get ideas about how you can instrument your test result information to provide valuable insights, paving the way for more robust, reliable and timely test results.

By capturing this information over time, and when combined with visualization tools, we can answer different questions than with existing solutions (Allure / CI tool build history). Some examples of these are:

  • Which tests are consistently flaky
  • What are the common causes of failure across tests
  • Which tests consistently take a long time to run

Using this information, we can move away from the ‘re-run’ culture and better support the continuous integration goal of having quick, reliable, deterministic tests.
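One way to capture and query this history is sketched below in plain Python (the data model, test names, and thresholds are illustrative assumptions, not the speaker's implementation): store a pass/fail record per test per CI run, then classify each test by its pass rate over time.

```python
from collections import defaultdict

# Each record: (test_name, passed) from one CI run. In practice this
# history would be loaded from stored test-result data, not hard-coded.
history = [
    ("test_login", True), ("test_login", False), ("test_login", True),
    ("test_search", True), ("test_search", True), ("test_search", True),
    ("test_upload", False), ("test_upload", False), ("test_upload", False),
]

def classify(history, failing_max=0.2, stable_min=0.8):
    """Label each test 'stable', 'failing', or 'flaky' by pass rate.
    Thresholds are arbitrary illustrative cut-offs."""
    runs = defaultdict(list)
    for name, passed in history:
        runs[name].append(passed)
    labels = {}
    for name, results in runs.items():
        rate = sum(results) / len(results)
        if rate >= stable_min:
            labels[name] = "stable"
        elif rate <= failing_max:
            labels[name] = "failing"
        else:
            labels[name] = "flaky"  # intermittent: the re-run temptation
    return labels

print(classify(history))
# → {'test_login': 'flaky', 'test_search': 'stable', 'test_upload': 'failing'}
```

Feeding labels like these into a dashboard distinguishes a genuinely broken test from an intermittent one, which is the question a single CI build history cannot answer.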

Key takeaways: 

  • Why slow and non-deterministic tests are a problem
  • How visualization of test result information will help you have insights
  • Why capturing test result information over time is important