
Tracks

Consistent and Repeatable Metrics to Manage Test Activities

Implementing more and more new functionality within ever shorter time frames is one of the biggest challenges for software developers today. Most development teams have backlogs of new features waiting for release. The bottleneck is often the full system test: embedded hardware in particular is slow and has very limited resources, so a full system test can take days or weeks. Unfortunately, functional safety requires proof of testing on the real hardware. Long test times lead to delays between implementation and test activities, defects are found very late in the release cycle, and the result is stress for the development team, schedule slips and unhappy customers because of buggy software.

In this talk I will discuss different metrics for managing testing activities that allow teams and team members to coordinate their testing work and provide up-to-date information on current release readiness.

Important metrics are code complexity, frequency of code changes, test case status and code coverage. The right analysis of the collected data, presented in a graphical report, lets you find bottlenecks earlier in the development process. With the right information, team leaders can put resources in the right places and make realistic assessments.
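To make the idea concrete, here is a minimal sketch (my illustration, not part of the talk) of the kind of analysis such a report could be built on: combining change frequency, complexity and coverage into a simple hotspot ranking. All file names, fields and the weighting are illustrative assumptions.

```python
# Hypothetical sketch: rank source files as test "hotspots" by combining
# change frequency, complexity and coverage. All names and data are assumed.
from dataclasses import dataclass

@dataclass
class FileMetrics:
    path: str
    complexity: float        # e.g. average cyclomatic complexity
    changes_last_month: int  # commits touching the file
    coverage: float          # statement coverage, 0.0 - 1.0

def hotspot_score(m: FileMetrics) -> float:
    """Higher score: changes often, is complex, and is poorly covered."""
    return m.changes_last_month * m.complexity * (1.0 - m.coverage)

def rank_hotspots(metrics: list[FileMetrics]) -> list[FileMetrics]:
    return sorted(metrics, key=hotspot_score, reverse=True)

if __name__ == "__main__":
    data = [
        FileMetrics("motor_control.c", complexity=18, changes_last_month=7, coverage=0.55),
        FileMetrics("display.c", complexity=6, changes_last_month=2, coverage=0.90),
        FileMetrics("can_driver.c", complexity=12, changes_last_month=5, coverage=0.40),
    ]
    for m in rank_hotspots(data):
        print(f"{m.path:16s} score={hotspot_score(m):6.1f}")
```

A ranking like this is one way to decide where to spend scarce system-test time first.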

The talk shows how this works in practice, gives key insights into software quality, and explains how to use test collaboration, code coverage, change-based testing, change impact analysis, test case maintenance and continuous testing to solve the problems you will probably recognise in the reports. The talk includes a live demo.

 

Key takeaways:

  • Key insights into different test methodologies.
  • How to check where you are in your testing activities, where to look first, and which parts are most important.
  • How to get a useful report on your test activities.

The Trials and Tribulations of a Non-Functional Test Consultant

I've been part of the Test Community for nearly 3 years, and I don't know many others who work for large consultancies. I mean, I know these people do exist - I've worked with hundreds of them - but you don't often bump into them at conferences or on Twitter. Now maybe this says more about the kind of company I like to keep, or perhaps, how insular big companies can be sometimes, but it got me thinking. I want to help demystify the work that the big consultancies do - specifically around Non-Functional Test. There seems to be a feeling amongst the Test Community (having been on the receiving end of this discrimination) that the big consultancies don't 'do Test properly', and while I'll admit that I do disagree with the approach they sometimes take, I also want to show people that working for a large consultancy can create some amazing opportunities for personal growth and development.

Through this talk, I'll explain why Non-Functional Test is different to Functional Test from a consultancy perspective (running tests is only half the battle; I spent most of my time justifying my existence as a Non-Functional Tester!). I'll also look at why working for a big consultancy tends to be different to working for smaller companies - huge teams, offshore working, JFDI syndrome - and why these can be good things!

Consultancy was one of the most challenging, enjoyable and exciting roles I've ever done, and I want to show people that Consultants can be 'proper' Testers too.

 

Key takeaways: 

  • Things to consider before working for a big Consultancy.
  • The importance of truly understanding the business reasons for testing.
  • Why working as a Consultant helped make me a better Tester.

Using Versatile Power-Tools for Testing Embedded Systems Efficiently

The investment of project time and money in buying and learning advanced systems for testing embedded systems with mechatronics can often discourage teams from trying to automate with mechanics, sensors, switches or other hardware. In recent years, simple platforms such as Arduino and Raspberry Pi have emerged and proven to be easy to use, fast to develop on and very versatile. I will talk about how using such low-investment, recyclable tools makes it easier and faster to set up new tests, to adapt them to what your testing discovers, and to worry less about criticism of money spent on unused equipment.
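To give a flavour of what such a setup can look like, here is a minimal sketch (my illustration, not the speaker's) of a Python test driving a hypothetical Arduino test jig over a serial link with pyserial; the port name and the single-character command protocol are assumptions.

```python
# Sketch: drive a hypothetical Arduino test jig from a Python test.
# Assumes an Arduino sketch that listens on serial for 'P' (press a button
# on the device under test) and 'R' (release it), replying "OK" each time.
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # assumption: adjust to your machine

def send_command(jig: serial.Serial, command: bytes) -> str:
    jig.write(command)
    return jig.readline().decode().strip()

def test_power_button_press():
    with serial.Serial(PORT, 9600, timeout=2) as jig:
        assert send_command(jig, b"P") == "OK"  # actuate the button
        assert send_command(jig, b"R") == "OK"  # release it again
        # ...here the real test would check how the device under test reacted
```

The same few euros of hardware can be rewired into a different jig for the next test idea, which is the recyclability the talk refers to.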

 

Key takeaways: 

  • What you can do with equipment for less than €100, or even €50
  • How to get started with Arduino
  • How you can use Arduino to test better and more
  • How quick and easy it is to change a test setup

Up and Away? How Moving Into Software Management Gave Me a Different View Of Testing

Until two years ago I had a fairly typical career in testing; perhaps you recognise it as similar to your own. I’ve been a Tester, a Test Manager, a Testing Coach and various other testing roles in between. I’ve lived my life in the software development industry within the friendly, inclusive world of software testing and the software testing community.

For most of the time I was happy in my bubble. But at times I was frustrated; frustrated with the way that testing is often perceived, frustrated that testers are frequently seen as second class citizens on development teams, and that the value that comes from good testing is not effectively recognised.

As the famous saying goes - “If you can’t beat em, join em”. So I did.

This talk is about different perspectives and how we, as testers are viewed. It’s about change and transition, and about the opportunities that we can all exploit to make us better testers, if only we are aware that they exist. And it’s a personal story of why the view of testing from outside our community may pleasantly surprise us all.

Main Points

  • A personal story of how moving from a traditional testing career to taking ownership for the whole software development process has influenced my views on testing.
  • How a testing career is invaluable when building and running teams who operate with quality at their heart.
  • Why we need to look outside of our own community in order to drive testing forward.
  • Why Agile and Lean are driving fundamental changes to our roles within the testing industry and advice on how to positively manage this change.
  • Experience based advice on how testers can work effectively with their managers and senior stakeholders for mutual benefit.

 

Key takeaways:

  • Different perspectives on testing, based on my experience of both testing and whole team management.
  • Advice on how to make the most of your place in your team, and how to maximise the value of your testing.
  • A better understanding of how to frame your ‘testing message’ in a way that is relevant for your stakeholders.
  • An alternative view of testing careers which may inspire you to consider alternative approaches.
  • The confidence to influence quality from more angles than merely the traditional tester role.

Design Thinking in a Nutshell

What is that strange animal called design thinking? Is it the new savior of IT or just hype? You decide, after listening to this introduction to how to build things successfully.

Design thinking is based on a few key principles:

1. In order to be able to create really good stuff we need to understand users and business deeply. There is a big difference between collecting requirements and building empathy!

2. We need to model and define our problems in a way that everybody understands. Our models need to be simple enough but still useful.

3. "The first draft of anything is shit", as Ernest Hemingway is supposed to have said. In order to find really good solutions to problems we need to experiment a lot. One draft is simply not good enough!

4. We will make mistakes, and we need to learn fast so we can quickly get off the bad route to failure. User testing - early, often and rapid - is an incredibly effective way of getting on track.

 

Key takeaways: 

We fail too often. In order to succeed we need to:

1. Truly understand business and users by building empathy.

2. Model the user journey in a way that puts everyone literally on the same page.

3. Understand the need for lots of experimentation in order to find the best solution.

4. Plan and execute really cheap, yet valuable, user testing.

A Chase of Incremental Improvement

When our ways of working make us feel distressed, we're inclined to hope for a quick, even radical, remedy. Missing relevant skills, we hire an expert rather than gradually teaching people who might be unwilling and unable. Having trouble with release quality and frequency, we turn to Agile and hope it magically improves the state of things.

For most of my career, I've personally lived by the principle that every day is a learning opportunity, incrementally making me better. As years in the industry accrue, my principle has extended from individual development to helping organizations evolve. I don't believe in radical change, but in the radical impact of small, incremental changes made continuously. Getting a little better all the time helps you get a lot better over time, and the investment in being better earns interest.

In this talk, I will present the change one senior tester can bring to a medium-sized organization in less than a year. The story to be told does not exist yet, as I joined a new organization only about a month ago. Instead of talking about past experiences, I will share an honest view of a recent one: the experience of someone who has significant past experience as a tester and believes that empirical evidence and collaboration are keys to successful software development - and empirical evidence sounds like a job for a tester. The recent past matters, and the future is where we're heading through continuous experimentation and learning.

 

Key takeaways: 

  • What incremental test improvement looks like over a timeframe of nine months
  • Which experiments and approaches to introducing them I could try
  • How to make good enough still better

10 Non-Obvious Tips to Testers

In this presentation I will share some of the many tips I’ve learned while training, coaching and mentoring testers from around the world. I call the presentation “10 non-obvious tips” because I’ve tried to identify the things most testers I’ve met do wrong, don’t know about, or could improve the most. The goal is to extract the key lessons from big topics and present them in a compact, straightforward way.

Content:

  1. Risk catalogs
  2. Quick tests
  3. Creating a test strategy is actually quite simple
  4. Practice test techniques
  5. Visualize, visualize and visualize
  6. Many relevant oracles exist outside your company
  7. Learning is part of the work
  8. Good testability in the product is half the work
  9. Focus on value
  10. Coaching developers doesn’t have to be complicated
  11. Dare to break rules (but don’t go behind the backs of people)
  12. Dare to speak up
  13. Explain your testing
  14. Semi-automation is great
  15. Having fun as a measurement of good testing

 

Key takeaways: 

Each of the tips above is a prepackaged takeaway, and which ones are "key" will differ from participant to participant, but the ones I will focus a little extra on are:

  1. Do check out test catalogs.
  2. Start creating test strategies; it's actually quite simple.
  3. There are so many oracles out there; using them will save you from many "that's just your opinion" accusations.

Not Making a Drama Out of a Crisis: How we Survived Losing One Third of our Testers Overnight

So how would your test team cope if you lost one third of your testers overnight?  Hopefully you’ll never have to find out, but for us it really happened.  We went from 12 testers to 8 testers overnight (with no warning), covering the same number of feature teams and developers, and with no hope of replacing them.  So how did we cope?

In this talk we’ll be looking at what we did to survive.

We’ll be looking at some of the improvements that we made, and how those same improvements can help any Test Team be stronger and work better.

Some of the things that we had to do included:

  • Analysing all our Smoke, Regression and Release Support testing to identify those tests that truly added value, and eliminating the rest.
  • Keeping communication channels open with the rest of the department so we could identify additional needs and requirements as soon as they appeared.
  • Re-jigging our testers across the feature teams to ensure that only the more senior testers were left in the more stressful roles.
  • Identifying all existing (and previous) single points of knowledge (‘bus factors’) and ensuring this information was documented and shared.
  • Looking for areas for automation that we hadn’t previously considered.

 

Key takeaways: 

  • Eliminating the ‘bus factor’.
  • Automating things you’d never considered automating.
  • Trimming unnecessary and low-risk testing and support.
  • Identifying your stake-holders and including them in your decision-making process.

Analysis and Modification of Mobile Applications Traffic

My talk is about using a proxy (I use Burp Suite for this purpose) during mobile application testing to analyze and manipulate network traffic. I work with iOS and Android apps; the example I provide covers iOS apps.

The main idea is to show the audience why it is essential to test client-server interaction during mobile app testing.

In my talk, I will use real examples of such manipulations. You can also check an article on this topic, https://stanfy.com/blog/monitor-mobile-app-traffic-with-sniffers/, which covers the basics (though in the presentation I will talk less about setting up a proxy and more about why to use this approach).
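To illustrate the kind of manipulation being described, here is a minimal sketch of mine (the talk itself uses Burp Suite; this stand-in uses mitmproxy, another scriptable proxy) that forces an API error so you can observe how the app handles it. The host name and response body are assumptions.

```python
# Minimal mitmproxy addon sketch: force a server error for one backend host
# to observe how the mobile app handles the failure.
# Run with: mitmproxy -s force_error.py
from mitmproxy import http

API_HOST = "api.example.com"  # assumption: the backend the app talks to

def response(flow: http.HTTPFlow) -> None:
    if API_HOST in flow.request.pretty_host:
        flow.response.status_code = 500
        flow.response.headers["content-type"] = "application/json"
        flow.response.text = '{"error": "simulated server failure"}'
```

With the device configured to use the proxy, every reply from that host is rewritten before the app sees it, which is the kind of client-server behaviour the talk argues should be tested.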

Test Trend Analysis: Towards Robust, Reliable and Timely UI Tests

Writing good UI Automation is challenging. Slow, unreliable tests are typical problems that people can face. In this talk you will get ideas about how you can instrument your test result information to provide valuable insights, paving the way for more robust, reliable and timely test results.

By capturing this information over time and combining it with visualization tools, we can answer different questions than we can with existing solutions (Allure, CI tool build history). Some examples of these are:

  • Which tests are consistently flaky
  • What are the common causes of failure across tests
  • Which tests consistently take a long time to run

Using this information we can move away from the ‘re-run’ culture and better support the continuous integration goal of quick, reliable, deterministic tests.
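As an illustration of the kind of instrumentation being described (a sketch of mine, not the speaker's tooling), the snippet below derives a simple flakiness and duration summary from a stored history of test runs; the record layout is an assumption.

```python
# Sketch: summarise flakiness and duration from a history of UI test results.
# Each record is (test_name, passed, duration_seconds); the layout is assumed.
from collections import defaultdict
from statistics import mean

def summarise(history: list[tuple[str, bool, float]]) -> dict[str, dict]:
    runs = defaultdict(list)
    for name, passed, duration in history:
        runs[name].append((passed, duration))
    summary = {}
    for name, results in runs.items():
        outcomes = [passed for passed, _ in results]
        summary[name] = {
            "runs": len(results),
            "fail_rate": 1.0 - sum(outcomes) / len(outcomes),
            # a test that sometimes passes and sometimes fails is flaky
            "flaky": any(outcomes) and not all(outcomes),
            "avg_duration_s": mean(duration for _, duration in results),
        }
    return summary

if __name__ == "__main__":
    history = [
        ("login_test", True, 42.0), ("login_test", False, 55.0),
        ("search_test", True, 12.0), ("search_test", True, 11.5),
    ]
    for name, stats in sorted(summarise(history).items()):
        print(name, stats)
```

Feeding a summary like this into a dashboard over many builds is one way to spot consistently flaky or slow tests before they erode trust in the suite.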

Key takeaways: 

  • Why slow and non-deterministic tests are a problem
  • How visualization of test result information helps you gain insights
  • Why capturing test result information over time is important

Do Testers Need a Thick Skin? Or Should We Be Proud of Our Humanity?

As a profession we are constantly striving to move away from the stereotype that QA means no bugs in production, and towards testing being considered throughout the development process rather than just at the end. When a bug is found in released software we aim to look at how that bug escaped detection and improve the team's testing and coding processes going forward. However, the statement that "there's a bug in production" still drives a lightning bolt of self-doubt and blame through me. I start questioning internally whether I'm being blamed, and kicking myself for missing the bug, as I feel the adrenaline flow and my heart beat faster. Even in the best agile project team where blame is never mentioned, I still blame myself for letting the bug through (even though logically I know that I did no such thing). I'm sure I'm not alone.

The truth is that the adrenaline and self-doubt can be great drivers for understanding problems, finding or confirming fixes and staying alert until the situation is resolved. However, when these situations occur frequently, or in batches without sufficient recovery time, the blame and self-doubt can take hold.

It is hard to maintain a healthy and happy sense of self whilst also being the 'bad guy' pointing out the problems. Advice to deal with the situation by describing oneself as merely an "information provider" can exacerbate difficulties by separating the tester's personality and self from their role and the team that they work within. At the end of the day the information we provide is often bad news for one (or more) of our colleagues - let's acknowledge that fact.

I don't have a perfect solution but I have a few ideas to share from my own personal experience. Hopefully this talk will inspire others to share their stories and experiences about how we are all human and about how embracing our humanity can help us work better together.

Key takeaways: 

  1. We can change or reinforce stereotypes of testers through our own actions
  2. Get involved and be part of your teams - regardless of job title we are human beings that need to work together
  3. Healthy, happy and empowered teams build awesome products

How to Scale Mobile Testing Across Several Teams

It all started in 2010, when XING, the largest business network in German-speaking countries, decided to go mobile and staffed a team with 2 iOS and Android developers, 2 software testers, 1 product manager and a freelance mobile designer. Back then the mobile team developed against a non-public API and tried to catch up with features from the web platform that had been developed over the previous 7 years. In the first 2 years everything worked more or less fine, but mobile traffic grew until it exceeded half of the overall traffic of XING.com across iOS, Android and Windows Phone. Alongside the increased traffic, customers requested more mobile features, but the feature development speed of the single mobile team was too slow.

The development approach with only one mobile team did not scale compared to over 200 web developers in more than 15 teams. Therefore, the company decided to scale mobile development across the whole organization and open it up to the web teams.

As of early 2015, XING has 7 mobile teams with iOS and Android developers as well as software testers. These so-called domain teams are now responsible for feature development on web and mobile. However, scaling to multiple mobile development teams totalling more than 50 people brought new challenges that had to be solved.

In this talk Daniel will show you how XING is scaling mobile development and testing efforts to 7 mobile teams with more than 20 mobile developers and 12 (mobile) software testers. He will explain how the bi-weekly releases are coordinated and organized and how real users play an important role in the release process. The second part of this talk will concentrate on the mobile test automation solutions that are in use within the XING mobile teams and how an internal device cloud was established to provide several devices to all the mobile teams across different locations.

Key takeaways: 

  • How to scale mobile testing across several teams.
  • How to organize bi-weekly native app releases for iOS and Android.
  • How to set up a mobile test automation environment, including a private device cloud.

Big Ships are Hard to Turn (Quickly): Navigating Towards the Automation Promised Land

Since the fall of 2014, I have been leading the first team of testers dedicated to building and promoting the use of automation at our company of over 2000 people. Sure, there have been developers writing unit tests for years, but no one has been trying to use software to test our software much beyond that. Not before my team that is.

In this talk, I will give a condensed view of the high(and low)lights of our journey. I will cover the lessons we learned, many of them the hard way, during our push to expand the intelligent use of automation across Test and Development roles.

This talk is targeted at anyone in an environment that currently does not make use of automation but could benefit from it, especially if you have divided Test and Development departments.

I will share the key learnings I have distilled from my team’s experiences so that you can hopefully avoid, or at least be prepared for, some of the hurdles we faced.

Key takeaways: 

  • You are adding work, which takes time from somewhere
  • Top level promotion is a must for teams that aren’t self-motivating
  • Befriend your IS team, or find ways to avoid them
  • Start small, but think big
  • Find your friends

Threat Modeling - Masking Testing with Big Words

Hang around those security guys for a bit, and very soon you’ll encounter the term “threat modeling”.

It sounds cool and quite heavy, so you nod your head and let them go on rambling in their special lingo. What you don’t know is that a large part of the terms go over your head only because of a translation problem: they are using different words to describe something you are already familiar with.

In this talk we’ll see how threat modeling works and how it is similar to activities we do on a daily basis such as design reviews or risk analysis.

Key takeaways: 

  • Learn what threat modeling is and how to do it.
  • Get a glimpse into the language security people speak.
  • Understand that security testing (a.k.a. penetration testing) is more about testing than it is about security, and that testers can and should contribute to this effort and even lead it.

 

Test Your Java Applications with Spock

Remember the old days when you tested using JUnit? How boring it was! You made a lot of excuses to avoid testing your code. Luckily those dark days now belong to the past, because Spock is with us. Spock is a Groovy-based testing and specification framework for Java and Groovy applications that makes writing tests fun again. We can write beautiful and highly expressive tests thanks to its DSL and all the power that Groovy gives us.

In this live-coding session you'll learn the basics of Spock and you'll see how easily you can test a Java application. After the talk you won't have any excuse not to test your applications, so consider yourself warned before coming!

Key takeaways: 

  • Write better and more expressive tests
  • Improve your knowledge about how to test Java code
  • Learn a little bit of Groovy