Tracks

PARTNER TRACK: Automated Non-Functional Testing and Quantitative Estimation

Please note that this is a sponsored track.

Some common questions trouble QA and Scrum teams: How do you run automated non-functional testing in the stabilization phase of development? Can you control the failure rate when there are quantitative requirements?

During the presentation, I would like to share our experience of running automated non-functional testing and quantitative estimation.

Key takeaways:

  • How to run automated non-functional testing in the stabilization phase of development.
  • How to do quantitative estimation based on automated non-functional testing results (see the sketch after this list).
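
Neither the track abstract nor this listing specifies a method; purely as a hedged illustration of quantitative estimation from automated test runs, here is a minimal Python sketch that bounds a failure rate with a Wilson score interval (the counts and the 3% requirement are hypothetical):

    import math

    def wilson_interval(failures: int, runs: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score interval for the true failure rate behind observed counts."""
        if runs == 0:
            raise ValueError("need at least one run")
        p = failures / runs
        denom = 1 + z**2 / runs
        centre = (p + z**2 / (2 * runs)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / runs + z**2 / (4 * runs**2))
        return max(0.0, centre - half), min(1.0, centre + half)

    # Hypothetical stabilization-phase data: 7 failures in 500 automated runs.
    low, high = wilson_interval(7, 500)
    print(f"95% interval for failure rate: {low:.2%} to {high:.2%}")
    # Compare 'high' against a hypothetical quantitative requirement such as "< 3%".

If the interval's upper bound stays under the requirement, the observed failure rate is under control; if not, more runs or more fixes are needed before release.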

We’re in This Together - Mentoring a New Tester as a New Tester

Picture this situation: You’re the lone tester in your team. You find out that the company has just hired another tester. After a little digging you discover this is someone with little or no testing experience. You want to help the new hire, but maybe you aren’t all that experienced yourself. A bigger hurdle is that you’re also new to mentoring.

Not so long ago, I was a junior tester at a company that hired someone with no previous experience of the tech industry or of testing. That mentee is now a successful lone tester. I will share what did and did not work in our mentor/mentee relationship. We will explore together the various stages of learning for a person with no testing experience, from day one right up to the day they were using ZAP and JMeter to successfully identify issues in the application. I'll also talk about how I felt when my mentee became better than me at using some of these tools, and how the roles became somewhat reversed.

We’ll look at how to build mentor/mentee rapport and create an environment that enables people to feel safe if they fail.

Participants will get to hear the good, the bad and the ugly that can happen in a mentoring process, from the perspectives of both the mentor and the mentee.

In this session, I’ll share my own experiences with mentoring, with tips you can use to be an effective mentor, regardless of your experience level or situation.

Key takeaways:

  • Resources both mentor and mentee can use for the mentor/mentee relationship
  • Guidance to help both the lone mentor and the mentor who is a member of a larger team
  • Structuring and scheduling retrospectives to enhance learning
  • Understanding that you don’t need to know everything to be a good mentor

Pros of Proactive iOS App Profiling as a QA Pro

Imagine an application that suits you perfectly but turns your phone into a hot potato, literally. Or another case: in the middle of the day you find that the battery is critically low, even though it was fully charged just a few hours before. Looking for the reason, you spot an app that was backgrounded yet continued to actively exchange data with the server. I bet that as a user you would be irritated by either scenario.

I believe that a QA engineer must care about all the app characteristics that are crucial for users, and can do much more than just tap the device screen and explore how the UI responds.

So let's take an iOS app, open Xcode with Instruments, and see what information can be retrieved. Energy Log, Time Profiler, Activity Monitor... all these items might sound messy at first, but let me show you how beneficial they can be for testing and for the quality of the product. I'll also show how we created and adjusted tools that help us with some specific checks.

After that, let's also discuss how we can cooperate with developers in such investigations. That's important, because only together can we find the root causes rather than just the symptoms.

Key takeaways:

  • We will learn what can irritate users and how an app lives on the user's device
  • We will explore how to collaborate with developers and help them
  • We will come to understand some of the developers' pains

Automation - the Good, the Bad and the Ugly

When I first heard of automation (in the context of continuous delivery) I thought it was the holy grail of testing that would save me time and make testing better. Although both of those statements are true, what I have learned over the last six years is that automation can be both good and bad, and sometimes ugly. I will show you what I have learned by doing it every day (and sometimes in my sleep), the mistakes I've made, and what success looks like. It's the story of the journey we embarked on when creating an automation library: how we used it in our tests, how it improved our daily lives, and what we didn't do so well along the way.

Key takeaways:

  • Automation will challenge you like no other, but overcoming those challenges is fun and rewarding
  • Automation done badly can do more harm than good
  • How to start automation on a project, and where that leads you

Red Teaming on Production

Description

We are all familiar with the notion of penetration testing, where security experts try to find all of the vulnerabilities present within a small surface area, such as a public-facing web application. This narrow-scope approach to security testing is of course necessary. But real attackers often do not care to attack such hardened targets, and will instead leverage social engineering and other techniques, which are harder to defend against and can yield a greater payoff.

Introducing Red Teaming

Red Teaming is real adversary simulation, where the “hired guns” will try to get to your crown jewels by almost any means necessary, including social engineering and physical intrusions. A Red Team does not attempt to find every vulnerability that you have, but rather “the path of least resistance” to your company’s most treasured assets. In summary, Red Teaming is the experience of “really getting hacked”, but without the inflicted damage.

The key question that needs answering is how to execute Red Teaming on a production environment safely. Come and hear "Tales from the Trenches: Red Teaming on Production" to find the answer.

Key Takeaways

When we are done, you will understand what it means to do Red Teaming on a production environment, what its advantages are over classical penetration testing, and what Red Teaming cannot replace. Most importantly, you will understand why this is the future!

Build and Test with Empathy

Did you know that more than 57 million Americans have one or more forms of disability, as do about 75 million users in the EU? That's roughly one-sixth of the US population. With such a large user base, there are a lot of websites and applications that could be more accessible. How do you know whether the product you are building serves these users' needs as well?

With that preface, I'd like to bring more awareness and a sense of responsibility to the topic: why one should care about building an accessible product, how one can develop and test for accessibility, and how one can scale the development of features by having robust automated tests in place.

Agenda:

Why should we care? What are some common accessibility features that you can help develop for your product?

  • Color and contrast play
  • Keyboard navigation
  • Screen reading capabilities for your website

How do we test for the above?

  • Introduction to powerful tools to leverage (see the sketch below)

How do we scale?

  • Automated tests FTW (for the win)
  • Introduction to powerful frameworks to leverage
  • Identification of when to test for what

Where can we learn more?

  • Resources to sharpen your technical acumen
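
As a hedged illustration of one automated check from this space (not taken from the talk), here is a minimal Python sketch using Selenium that tabs through a page and asserts that every focus stop is visible and identifiable; the URL and the ten-stop limit are arbitrary assumptions:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    # Hypothetical target page; substitute your own application URL.
    driver = webdriver.Chrome()
    driver.get("https://example.com/")
    driver.find_element(By.TAG_NAME, "body").click()  # give the page focus

    focus_order = []
    for _ in range(10):  # walk the first ten tab stops
        driver.switch_to.active_element.send_keys(Keys.TAB)
        el = driver.switch_to.active_element
        # A keyboard-only user needs every stop to be visible and labelled.
        assert el.is_displayed(), "focus landed on an invisible element"
        label = el.get_attribute("aria-label") or el.text or el.get_attribute("title")
        focus_order.append((el.tag_name, label))

    print(focus_order)  # inspect or assert on the tab order
    driver.quit()

Dedicated tools such as axe-core go much further; this only shows the shape of turning an accessibility expectation into a repeatable check.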

Key Takeaways:

  • To empower people to build inclusive products
  • Test with empathy
  • Become an advocate for accessibility

From Robotium to Appium: Choose Your Journey

Mobile testing is challenging. It combines the complexity of testing web applications, as support for hybrid mobile applications continues to grow, with that of native mobile applications running on different mobile operating systems. In other words, mobile user interface testing is often twice as involved as regular web application testing. The high demand for reliable UI testing in the mobile domain has resulted in the creation of many UI test frameworks. In the open-source community, two projects are responsible for the majority of UI testing: Robotium and Appium.

In this talk, the speaker will take his audience on a journey through UI testing, starting with an introduction to Robotium and its main principles, then moving on to Appium while highlighting why one would choose Robotium over Appium and vice versa. Ultimately, listeners should be able to choose the UI test framework most applicable to their use case. The talk will include demonstrations of basic and advanced functionality of Robotium and Appium using Java. To conclude the presentation, TestObject and Firebase will be used to demonstrate how both frameworks can be scaled with the cloud as a testing ground.
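
The demonstrations in the talk are in Java; purely to show the shape of an Appium session, here is a hedged sketch using the Appium Python client instead (the server URL, app path, and locator are hypothetical, and an Appium server is assumed to be running):

    from appium import webdriver
    from appium.options.android import UiAutomator2Options
    from appium.webdriver.common.appiumby import AppiumBy

    options = UiAutomator2Options()
    options.app = "/path/to/app.apk"          # hypothetical app under test
    options.device_name = "Android Emulator"

    driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
    try:
        # Locate a control by its accessibility id and interact with it.
        driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login").click()
        assert "Welcome" in driver.page_source  # hypothetical expectation
    finally:
        driver.quit()

Robotium tests follow a similar pattern but run on-device as Android instrumentation, which is one of the trade-offs the talk compares.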

Key takeaways:

This talk's takeaways can be summarized in a few bullet points:

  • Main functionality of Appium and Robotium in practice (using Java)
  • The whats and hows of both frameworks
  • Scaling Appium with TestObject and Robotium with Firebase
  • Differences and use cases for the two test frameworks
  • When to use, and when not to use, each framework

Ultimately, the audience should be able to choose their own “journey”; in other words, they will choose what test framework best fits their use case.

How-To Guide: Statistics Based on the Test Data

Each of us has a project: a favorite, dear one that we wish to see grow and prosper. So we write many manual tests, automate the repetitive actions, and report hundreds of issues in Jira or another bug-management tool, and as a result we generate a lot of data that we never use. But how do you assess your project's prosperity if there are no criteria for that very prosperity?

How can you react quickly to problems before they become irreparable if you are not gathering any information that could hint that something is going wrong?

How do you know what should be improved if you don't even know what problems exist in your project?

I have an answer: "Statistics!" Yes, when you hear this word in the context of testing, you might think it applies far better to sales or some other field of marketing, and has nothing to do with the testing process itself. That's why, instead of formulas and a list of metrics, I will tell you about my experience of collecting and analyzing statistics, and the results I have achieved since I started using them.

Key Takeaways:

Statistics are needed to manage a project effectively: to diagnose problems, localize them, correct them, and verify whether the solutions you chose actually helped. The goal is to extract the key values and present them in a compact, straightforward way, as in the sketch below.
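
The talk deliberately avoids formula lists, so purely as a hedged illustration of "extracting key values": a minimal Python sketch that derives a pass rate and the most frequent failures from a batch of results (the records and field names are hypothetical; real data would come from your test runner or bug tracker):

    from collections import Counter

    runs = [  # hypothetical test-run records
        {"test": "login", "passed": True},
        {"test": "checkout", "passed": False},
        {"test": "login", "passed": True},
        {"test": "checkout", "passed": False},
        {"test": "search", "passed": True},
    ]

    failures = [r["test"] for r in runs if not r["passed"]]
    pass_rate = (len(runs) - len(failures)) / len(runs)

    print(f"Pass rate: {pass_rate:.0%} over {len(runs)} runs")
    # The most frequent failures suggest where to dig for root causes.
    for test, count in Counter(failures).most_common(3):
        print(f"{test}: failed {count} time(s)")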

During the presentation I will provide the following information:

  • why test statistics gathering is important
  • how and where to collect the statistics
  • what value the test results can bring into your daily workflow
  • how to make decisions based on the information you can get from the test execution statistics
  • how to find a root cause of failures and solve testing-related problems
  • samples of stats you can start gathering right now

A QA's Role in a DevOps World – Quality Initiatives in the DevOps Tool Chain

There is a lot of talk about DevOps and the death of testing (again). The role of the tester may change with this faster, heavily automated approach to development and operations, but the need for testers still exists. This presentation is based on actual experience and will demystify the planning and execution of quality work within a DevOps organization.

I will cover how you can identify QA initiatives along the DevOps tool chain, and provide an easily adaptable five-step model for planning and implementing these initiatives as the boundaries of job responsibility between developers and testers blur.

Among other things, this presentation will touch on the pros and cons of automated checks vs. manual tests and of testing vs. monitoring, as well as guarded commits, non-functional requirements, roll-back processes, upstream and downstream dependencies, quality coaching, A/B testing and full-circle testing.

Key Takeaways:

  • Tips on how to identify quality initiatives in a DevOps tool chain
  • A real-life model for applying test strategies
  • An understanding of the changing role of a tester

Harnessing the Power of Learning

The software industry doubles in size every five years, meaning that half of us have less than five years of experience. How do those with little experience get up to speed while working in a team of more seasoned professionals? Given the pace at which kids learn before someone kills their enthusiasm, how long does it take to train a 15-year-old to be a more valuable tester than a 40-year-old? And what would it take to give a 40-year-old the curiosity of a 15-year-old?

In this talk, we share the lessons we learned while working together in a team and sharing tasks for a year. We show you a variety of induced learning approaches and their impact: learning while pairing, learning while doing (with and without immediate help), learning in school, learning on industry courses, and learning by reading. In particular, we share what we learned about the importance of early and continued pairing for enhancing learning and establishing basic testing skills. This is our shared story, told by the 15-year-old and the 40-year-old. Can a year of learning be enough to outperform a seasoned tester?

Key Takeaways:

  • How to create a mix of learning approaches to grow someone's knowledge and skills while working
  • What knowledge and skills we recommend new testers to start from based on our experience
  • How new and old testers get easily fooled by the unknown unknowns in delivering product quality information
  • What kinds of things kill enthusiasm, and how we can bring it back to testing

Discovering Logic in Testing

We all test in different ways, and sometimes it can be hard to explain the thought processes behind how we test. What leads us to try certain things, and how do we draw conclusions? Surely there is more going on here than intuition and luck. After working in games testing for almost a decade, I will draw on my personal experience to explain how games testers develop advanced logical-reasoning skills. Using practical examples that will make you think, I will demonstrate logical patterns, rules and concepts that can help all of us gain a deeper understanding of what is actually happening in our minds when we test.

Key takeaways:

  • See how testing looks and feels from the perspective of a games tester, and hear about some of the challenges games testers face
  • Learn the differences between deductive, inductive and abductive reasoning, along with the theory of falsificationism
  • Identify some of the biases we encounter when relying on personal observations, and see how logical reasoning can be applied when testing

How to Win with Automation and Influence People

Choosing an automation framework can be hard. When Gwen started in her current role there were nine different test automation frameworks in use for acceptance testing, and many of the tests had been abandoned and were not running as part of the CI solution. If test automation is not running, what value can it add? The tests that were running were labelled only as functional tests and replaced unit tests. These tests covered component, integration and sometimes even end-to-end testing. Entire layers of testing were missing, which made refactoring and receiving quick feedback difficult.

This is an experience report from when Gwen joined a large organisation and how, with the help of other members of the team, she created a clear multi-team automation solution. By implementing practices such as pairing, cross-team code reviews and clear descriptions of which layers of testing covered what, the teams came together to write clear, useful automation.

If you have a team working on multiple products, implementing a framework that can be picked up easily when moving between teams is essential. In this talk, Gwen will explain how to present these ideas not only to members of the team but also to senior management, getting them on board with delivering an easy-to-use, multi-layered framework.

Key Takeaways:

  • Attendees will understand the different layers of testing, and how to sell that idea not only within the team but also to senior management.
  • They will understand how to solve the problem of frameworks not covering all layers of automation.
  • Attendees will find out how to get all members of the team on board to create tests at all layers, not just the testers or the developers.

How This Tester Learned to Write Code

Every few months the same old question pops up: should testers learn how to code? And I don't think they need to. You can spend a full career in testing, improving your skills every step along the way, without ever feeling the need or desire to add coding to your skill set. However, if you are thinking about learning how to write code, I'd like to share three stories with you about how I learned.

The three stories are: how I got started, how I impressed myself for the first time, and how I finally learned some dev skills. More important than the stories themselves are the lessons I learned, so I will share some practical advice and some interesting resources. And perhaps most importantly, I will show how two testing skills give you a great advantage when learning how to code.

Key takeaways:

  • Writing a bit of code that's useful to you is a perfect first step in learning.
  • Iterative development, it works!
  • Developers have interesting heuristics about clean code.
  • Testing skills help you tackle the big and complex task of learning to write code.

Test Automation in Python

Here Kyle provides a look at test automation in Python. Python continues to show strong growth in language adoption and in the job market. Python's urge for simplicity and pragmatism has fostered a vibrant and supportive community, making for a powerful language with a shallow learning curve. It's an excellent language for test tooling, and Kyle hopes to give you a simple overview that lets you build a practical, maintainable and scalable test infrastructure for your client-facing and system-level integration testing needs.

Kyle will give you an overview of Pytest, a simple open-source test framework with very powerful features for constructing concise tests quickly. He will also show you the rough design structure implemented at FanDuel, the leading daily fantasy sports company in the US and Canada, which aims to foster stability, ease of use and ease of contribution.
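
As a hedged taste of the fixture-based dependency injection the takeaways mention (a minimal sketch, not FanDuel's actual design; all names are hypothetical):

    import pytest

    @pytest.fixture
    def api_client():
        # Setup: build whatever the test declares it needs.
        client = {"base_url": "https://staging.example.com", "token": "fake"}
        yield client      # injected into any test that names this fixture
        client.clear()    # teardown runs after the test finishes

    def test_client_is_configured(api_client):
        # Pytest sees the parameter name and injects the fixture's value.
        assert api_client["base_url"].startswith("https://")

Because tests declare their dependencies by parameter name, setup and teardown live in one place and the tests themselves stay concise; the takeaways below also weigh the disadvantages of this structure.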

Even if you already have a solution for your project or organisation, Kyle hopes you will take something away from the approaches above, along with wise tokens of hard lessons learned in test automation efforts.

Key takeaways:

  • An understanding of what Python and Pytest have to offer for test automation and tooling needs.
  • An insight into dependency injection as a design structure for test setup and teardown, and the advantages and disadvantages of this structure.
  • Wise tokens of hard lessons learned in test automation efforts.

10 Mobile App Testing Mistakes to Avoid

In this talk I will share 10 common mobile app testing mistakes and how to avoid them. I will share my knowledge in the field of mobile testing and present practical examples of the mistakes I have seen over the past nine years while working with different mobile teams across several mobile apps. The talk will cover manual and automated mobile testing, mobile guidelines, mobile testing techniques, and how to release a mobile app without making customers unhappy.

Key takeaways:

Each of the 10 mistakes will help the audience avoid the errors I have seen in the past and improve their mobile testing.

1. Avoid easy and common mobile testing mistakes.

2. A list of testing ideas to build up a powerful mobile testing strategy.

3. Useful resources to get your mobile testing started.

The Fraud Squad - Learning to Manage Impostor Syndrome as a Tester

"I've just got where I am though luck!" "I'm going to be found out at any moment!" "I don't deserve the success I've achieved!" "I'm a giant fraud!" These are all pretty common thoughts someone suffering from Impostor Syndrome might have. I hear Impostor Syndrome mentioned frequently as something common in our industry, but hadn't even heard of it until a couple of years ago. It's something a lot of successful and really intelligent people suffer from. Its effects can be debilitating, and it shouldn't be dismissed as non-existent or unimportant. We need to acknowledge it, talk about it, and make it ok for people to admit they are affected by it. I'll talk about how my own feelings of Impostor Syndrome have affected me throughout career, even when I didn't know what it was, what I've done to manage these negative feelings, and how it's enabled me to start making a positive contribution to the testing community.

Key takeaways:

  • An understanding of what Impostor Syndrome is, why people suffer from it and what it feels like
  • If you suffer from Impostor syndrome, you aren't alone
  • If you suffer from Impostor syndrome, it isn't a sign of weakness
  • If you suffer from Impostor syndrome, you can manage it, and do amazing things

Final Frontier? Testing in Production

No tester wants to hear a developer say "it works on my machine!", because what is really being said is: "since it worked on my development environment, I assume it also works on your test environment, hence you cannot possibly have found a bug". We know this is not true, yet we make the same assumption between environments at a later stage: we test our software on staging environments and assume that our test results carry over to production. We are not testing the software in the setting where our users actually face it. To top it off, we spend a considerable amount of money trying to copy production, and managing test environments is often hard, complex, and maintenance-heavy.

A lot of people are already using techniques that take testing into production, such as beta testing, A/B testing, or monitoring as testing. We intend to push the envelope a little further and additionally move automated checks and exploration to the production stage. To do so we need to take several things into consideration, such as making sure test data does not mess up production data and analytics, and hiding untested features from customers (see the sketch below).
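
The talk does not prescribe an implementation, but as one hedged illustration of those two safeguards, here is a minimal Python sketch that tags synthetic test traffic so analytics can ignore it and hides an untested feature behind a flag (every name here is hypothetical):

    TEST_TRAFFIC_HEADER = "X-Synthetic-Test"            # marks our automated checks
    FEATURE_FLAGS = {"new_checkout": {"beta-testers"}}  # feature -> allowed groups

    def record_analytics_event(name: str) -> None:
        print(f"analytics: {name}")

    def is_test_traffic(headers: dict) -> bool:
        """Synthetic checks tag themselves so analytics can filter them out."""
        return headers.get(TEST_TRAFFIC_HEADER) == "true"

    def feature_enabled(feature: str, user_groups: set) -> bool:
        """Untested features stay hidden from everyone outside the allowed groups."""
        return bool(FEATURE_FLAGS.get(feature, set()) & user_groups)

    def handle_order(headers: dict, user_groups: set) -> str:
        if not is_test_traffic(headers):
            record_analytics_event("order_started")     # count real users only
        if feature_enabled("new_checkout", user_groups):
            return "new checkout flow"
        return "stable checkout flow"

    # A production check run by the test suite: tagged, and routed to the new flow.
    print(handle_order({TEST_TRAFFIC_HEADER: "true"}, {"beta-testers"}))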

In this talk, Marcel Gehlen will show you some popular techniques for testing in production. He will also present various strategies that help tackle common constraints faced when testing in production, and he'll provide you with an approach to gradually shift your testing to production.
