Learning to Learn in a Mob

Prerequisites for attending the workshop: You will need to bring your laptop.

Mob programming, mob testing, or generally mobbing is a wonderful approach to uncovering implicit knowledge and learning from each other. It’s “all the brilliant people, working on the same thing, at the same time, in the same space, and at the same computer”. But what if you introduce a new technology no one in the mob has worked with before? What if you suddenly need knowledge in the team that nobody has? Does mobbing still prove an efficient way of learning in that case? Join this experiment and learn something nobody in the mob has done before. Let’s see how far we get in mastering a new skill together!

Key takeaways:

  • Learn the basics of the mob approach and practice them hands-on
  • Experience the benefits of uncovering and sharing implicit knowledge to help everyone learn
  • See how having all brains in on a problem helps solve unknown challenges and gets the best out of everybody

Don't Take It Personally

Receiving feedback can be tough. It can be hard to remember that feedback is meant to help improve future work, not to point out current flaws. This is even more pronounced in an industry packed with introverts, and especially in roles where it is your job to find issues and help fix them. It can be incredibly easy to take such feedback or comments personally in the workplace, but what is the impact when we do?

When we personalize situations, we tend to lose sight of the bigger picture. It becomes easier to focus on minute details rather than the overall context in which the feedback is being given. The impact of this narrowed focus can be wasted time: chasing the wrong issues, laying blame, making excuses, refusing to ask for help, and ultimately avoiding discussion of the root cause and of ways to improve.

This workshop will draw on experiences and examples of situations such as testing debriefs (tester-to-tester interactions), bug discovery (tester-to-developer interactions), and inter-team projects (team-to-team interactions), and discuss tactics for staying objective and productive in each. After going through what makes feedback and comments personal, we will break into small groups to practice identifying the linguistic traits that can make feedback personal, and to work on ways to bring our conversations back to a more productive and objective trajectory. When we look at feedback for what it truly is, a way to improve, we can build better relationships between communities and teams and make both stronger as a result.

Key takeaways:

  • Tactics for identifying situations where you may be personalizing
  • Tactics for reorienting your thinking from a personalized view back to an objective one
  • Tactics for improving communication to avoid negatively received feedback, both in one-on-one conversations and in group settings
  • Practical, hands-on experience with each of these tactics

Pairing Is Caring - Doing Quick Tours on Your Applications with the Power of Paired Exploratory Testing

Prerequisites for attending the workshop:

  • Everyone is requested to bring their laptop, with a text editor installed, to take notes during the exploratory testing sessions
  • Everyone is requested to bring a smartphone or tablet


Have you ever been in a situation where stakeholders come to you or your QA team and say: “We have about 2 hours before we push the new version of the application into production. Could you do some high-level acceptance tests and ensure our app is stable before we do?” You have no idea where to start or what to do in those 2 hours.

Or you and your QA team have 3 days to test the new version of the application. You have all these test ideas but do not know which to do first, how to prioritize your testing, or what kinds of vulnerabilities to look out for.

I was one such tester, who was in the above situations many times. Based on my experience testing various desktop, mobile and headless applications for several years now, I started categorizing the defects I had found and realized that there are some common testing approaches you can follow to quickly find vulnerabilities in your applications.

To take this one step further, I did a lot of research on Session-Based Exploratory Testing (SBET) and realized the power of paired testing. In this session, you will learn different approaches to breaking applications by pairing up and doing SBET on live applications.
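As a taste of how a timeboxed paired session can be structured, here is a minimal Python sketch of a driver/navigator rotation schedule. This is a generic pairing pattern, not the author's template; the session length, swap interval, and tester names are illustrative assumptions.

```python
def pairing_rotation(session_minutes=90, swap_minutes=10,
                     pair=("Tester A", "Tester B")):
    """Split a timeboxed SBET session into driver/navigator slots,
    swapping roles at a fixed interval (a common pairing pattern)."""
    slots = []
    for i, start in enumerate(range(0, session_minutes, swap_minutes)):
        driver = pair[i % 2]            # who is at the keyboard
        navigator = pair[(i + 1) % 2]   # who observes and takes notes
        end = min(start + swap_minutes, session_minutes)
        slots.append((f"{start:02d}-{end:02d} min", driver, navigator))
    return slots

for slot, driver, navigator in pairing_rotation():
    print(f"{slot}: {driver} drives, {navigator} navigates")
```

Swapping often keeps both testers engaged, and the navigator's notes become the session record afterwards.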

Key takeaways: 

  • Different testing approaches to breaking applications
  • What Session-Based Exploratory Testing (SBET) is
  • How to use the template I created to do paired exploratory testing on a live application

Resilience Testing: Let the chaos begin!

Nowadays we build applications following microservice principles to make them easier to maintain, deploy, test and change. These microservices can easily be deployed on cloud platforms, and multiple microservices together form one application. But is that application resilient? What happens if one of the microservices fails? What happens if one microservice gets slower?

A resilient service is stable and reliable, has high availability, and does not compromise the integrity of the service or the consistency of its data. But how do you test this?

That is what we will do during this workshop. Together we will test the resilience of a cloud application by creating chaos in the form of failures and disruptions, and observe what happens to the application.

During this workshop we will tell you more about:

  • What resilience is and how to test it
  • Microservices & cloud platforms
  • How to perform a load test
  • How to create chaos manually
  • How to create chaos automatically
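The idea of manual chaos can be sketched in a few lines: wrap a service call in a proxy that injects random latency and failures, then check that the client survives. Below is a minimal Python sketch with a hypothetical retrying client; the workshop itself targets real cloud deployments and tools such as Chaos Monkey, which apply the same idea at the infrastructure level.

```python
import random
import time

class ChaosProxy:
    """Wraps a service call and randomly injects latency and failures,
    mimicking what a chaos tool does to a microservice."""
    def __init__(self, func, failure_rate=0.3, max_delay=0.05, seed=42):
        self.func = func
        self.failure_rate = failure_rate
        self.max_delay = max_delay
        self.rng = random.Random(seed)  # seeded for reproducible chaos

    def __call__(self, *args, **kwargs):
        time.sleep(self.rng.uniform(0, self.max_delay))  # injected latency
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("chaos: injected failure")
        return self.func(*args, **kwargs)

def resilient_call(service, retries=5):
    """A client that tolerates transient failures by retrying."""
    for _ in range(retries):
        try:
            return service()
        except ConnectionError:
            continue  # transient failure, try again
    raise RuntimeError("service unavailable after retries")

flaky = ChaosProxy(lambda: "inventory: 42 items", failure_rate=0.5)
print(resilient_call(flaky))
```

In a real setup the failures come from killing containers or degrading the network rather than from a wrapper, which is exactly what automated chaos tools do for you.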

Key takeaways: 

  • Main statement: Resilience, Stress & Performance test your cloud environment!
  • Key learning 1: What is Resilience testing
  • Key learning 2: Executing your own Performance/Stress Tests
  • Key learning 3: Executing your own Resilience Tests
  • Key learning 4: Automated Resilience testing with Chaos Monkey

Efficient Selenium Infrastructure with Selenoid


Selenoid is an alternative, lightning-fast, open-source implementation of the Selenium protocol that runs browsers and Android emulators inside Docker containers. It is distributed with a set of ready-to-use Docker images covering the majority of popular browser versions, has a one-command installation utility, and works more efficiently than a traditional Selenium Grid.

This tutorial shows how to build efficient and scalable browser automation infrastructure using Selenoid and related tools. You will learn why running browsers in containers is so efficient, how to install Selenoid easily, and how to use its powerful features.
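For a sense of what Selenoid's configuration looks like, a minimal browsers.json (the config file created during manual installation) maps browser names to Docker images. The image name and version below are illustrative; real files are typically generated for you by the cm tool.

```json
{
  "chrome": {
    "default": "latest",
    "versions": {
      "latest": {
        "image": "selenoid/chrome:latest",
        "port": "4444",
        "path": "/"
      }
    }
  }
}
```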

The tutorial covers:

  • Selenium:
    • 10 minutes of theory: brief Selenium history
    • Current WebDriver architecture
    • How Selenoid works and its motivation
  • Selenoid installation for test development
    • What is required to start Selenoid
    • Manual installation: creating a config file, pulling browser images, starting Selenoid
    • Shorter way: fully automated installation via the cm tool
    • Selenoid UI installation and features
  • Basic Selenoid features:
    • Custom screen resolution
    • Looking at the live browser screen
    • Recording and downloading video
    • Custom test name
    • Updating browsers
  • Selenium clusters theory
    • Why Selenium Grid is not suitable
    • Client-side load balancing
    • Server-side load-balancing
  • How to share state
    • Session ID magic
  • Ggr server
    • How it works
    • Setting up Ggr: creating the users file
    • Creating quota file
    • Starting Ggr
    • Running tests against Ggr
  • Cluster Maintenance
    • Changing available browsers with no downtime
    • Adding users with no downtime
    • How to deal with multiple quota files
    • Proxying to external commercial Selenium services
    • Proxying video, logs, downloaded files
    • Adding more Ggr instances
    • Health checking instances
  • Ggr UI
    • How it works
    • Setting up and linking with Selenoid UI
  • Advanced Selenoid features for big clusters
  • Advanced browsers configuration file fields
    • Volumes
    • Environment variables
    • Tmpfs
    • Hosts entries
    • ShmSize
  • Sending logs to centralized logs storage
    • Why?
    • Centralized logging storages
    • How to configure Selenoid to send logs
  • Sending statistics to centralized metrics storage
    • The /status API
    • Configuring Telegraf to upload statistics
    • Creating statistics dashboard with Grafana
  • Sending logs to centralized logs storage (ELK-stack)
    • What is ELK stack
    • Configuring Selenoid to send logs to ELK-stack
    • Searching across logs
  • Building custom browser images
    • What's inside browser image
    • Ready to use browser images
    • How to build custom image
  • Selenoid for Windows browsers
    • How it works without Docker
    • Difference in configuration file
    • How to run multiple isolated sessions under Windows
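The per-session features listed above (VNC, video recording, custom test names, custom screen resolution) are requested through capabilities. Here is a minimal Python sketch in plain-dict form, following Selenoid's documented `selenoid:options` vendor capability; the test name is made up, and no Selenoid server is actually contacted.

```python
# Capabilities for one Selenoid browser session. Selenoid-specific
# switches travel in the "selenoid:options" vendor capability.
capabilities = {
    "browserName": "chrome",
    "browserVersion": "latest",
    "selenoid:options": {
        "enableVNC": True,                   # watch the live browser screen
        "enableVideo": True,                 # record video for later download
        "name": "checkout smoke test",       # custom test name in Selenoid UI
        "screenResolution": "1280x1024x24",  # custom screen resolution
    },
}

# With Selenoid running, these would be handed to a Selenium client,
# e.g. webdriver.Remote("http://localhost:4444/wd/hub", ...).
print(sorted(capabilities["selenoid:options"].keys()))
```

The same capabilities work unchanged against a Ggr endpoint, since Ggr simply routes sessions to Selenoid instances behind it.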


Key takeaways:

  • Why Selenium should always be run in Docker containers
  • How to forget about Selenium issues with Selenoid
  • How to efficiently scale a Selenium cluster to easily have thousands of browsers running in parallel
  • Where to get ready-to-use browser images
  • How to use Selenoid's powerful browser-test debugging features


Come and play some fun (board) games!
