
Vahetänava saal

Learning to Learn in a Mob

Mob programming, mob testing, or generally mobbing is a wonderful approach to uncovering implicit knowledge and learning from each other. It’s “all the brilliant people, working on the same thing, at the same time, in the same space, and at the same computer”. But what if you introduce a new technology no one in the mob has worked with before? What if you suddenly need knowledge in the team that nobody has? Does mobbing still prove to be an efficient way of learning in that case? Join this experiment and learn something nobody in the mob has done before. Let’s see how far we get in mastering a new skill together!

Key takeaways:

  • Learn the basics of the mob approach and practice them hands-on
  • Experience the benefits of uncovering and sharing implicit knowledge to help everyone learn
  • See how having all brains in on the problem helps solve unknown challenges and brings out the best in everybody

Don't Take It Personally

Receiving feedback can be tough. It can be hard to remember that it is meant to help improve work going forward, not to point out current flaws. This is even more true in an industry packed full of introverts, and especially in roles where it is your job to find issues and help fix them. It can be incredibly easy to take this kind of feedback or commentary personally in the workplace, but what is the impact when we do so?

When we personalize situations, we tend to lose sight of the bigger picture. It becomes easier to focus on minute details and ignore the overall context in which the feedback is being given. This narrower focus can result in wasted time: chasing the wrong issues, laying blame, making up excuses, refusing to ask for help, and ultimately avoiding discussions about the root cause and ways to improve.

This workshop will draw on experiences and examples of situations such as testing debriefs (tester-to-tester interactions), bug discovery (tester-to-developer interactions), and inter-team projects (team-to-team interactions), and will discuss tactics for staying objective and productive in each. After going through what makes feedback and comments feel personal, we will break into small groups to practice identifying the linguistic traits that can make feedback personal and to work on ways to bring our conversations back to a more productive and objective trajectory. When we look at feedback for what it truly is - a way to improve - we can build better relationships between communities and teams and make them stronger as a result.

Key takeaways:

  • Tactics to better identify situations where you may be taking feedback personally
  • Tactics to reorient your thinking back to an objective view rather than a personalized one
  • Tactics to improve communication so feedback is not received negatively, both in one-on-one conversations and in group settings
  • Practical, hands-on experience with each of these tactics

Pairing is Caring - Doing quick tours of your applications with the power of paired exploratory testing

Have you ever been in a situation like these?

  • Stakeholders come to you or your QA team and say, "We have about 2 hours before we push the new version of the application into production. Could you do some high-level acceptance tests and ensure our app is stable before we do it?" You have no idea where to start or what to do in those 2 hours.
  • You or your QA team have 3 days to test the new version of the application. You have all these test ideas but do not know which one to do first, how to prioritize your testing, or what kinds of vulnerabilities to look out for.

I was one such tester, and I have been in the situations above many times. Based on my experience testing various desktop, mobile, and headless applications over several years, I started categorizing the defects I found and realized that there are some common testing approaches you can follow to quickly find vulnerabilities in your applications.

To take this one step further, I did a lot of research on Session Based Exploratory Testing (SBET) and realized the power of paired testing. In this session, you will learn different approaches to breaking applications by pairing up and doing SBET on live applications.

Key takeaways: 

  • Different testing approaches to break applications
  • What Session Based Exploratory Testing (SBET) is
  • How to use the template I created to do paired exploratory testing on a live application

Resilience Testing: Let the chaos begin!

Nowadays we build applications following microservice principles to make them easier to maintain, deploy, test, and change. These microservices can easily be deployed on cloud platforms, and multiple microservices together form one application. But is that application resilient? What happens if one of the microservices fails? What happens if one microservice gets slower?

So a resilient service is stable and reliable, has high availability, and does not compromise the integrity of the service or the consistency of its data. But how do you test this?

That is what we will do during this workshop. Together with you, we will test the resilience of a cloud application by creating chaos in the form of failures and disruptions, to see what happens to our application.
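To make the idea of manual chaos a bit more concrete, here is a minimal sketch (an illustration, not the workshop's own material): a small Python load generator that keeps hitting a hypothetical endpoint of the application under test while you disrupt one of its microservices by hand, so you can compare failure rate and latency before and during the disruption. The URL, duration, and worker count are assumptions chosen for illustration.

```python
"""Minimal resilience-check sketch.

Assumptions: a demo application exposing a hypothetical endpoint at
http://localhost:8080/orders, and the 'requests' library installed.
While this script generates steady load, create chaos manually, e.g.
stop one microservice instance or add artificial latency, and compare
the numbers before and during the disruption.
"""

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "http://localhost:8080/orders"  # hypothetical endpoint of the app under test
DURATION_S = 60                            # how long to keep the load running
WORKERS = 10                               # number of parallel "virtual users"


def one_request():
    """Send a single request and return (succeeded, latency_in_seconds)."""
    start = time.monotonic()
    try:
        response = requests.get(BASE_URL, timeout=2)
        return response.status_code == 200, time.monotonic() - start
    except requests.RequestException:
        return False, time.monotonic() - start


def run_load():
    """Fire batches of parallel requests until the deadline, then report."""
    results = []
    deadline = time.monotonic() + DURATION_S
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        while time.monotonic() < deadline:
            batch = [pool.submit(one_request) for _ in range(WORKERS)]
            results.extend(future.result() for future in batch)

    failures = sum(1 for ok, _ in results if not ok)
    latencies = sorted(latency for _, latency in results)
    print(f"requests sent : {len(results)}")
    print(f"failed        : {failures}")
    print(f"median latency: {statistics.median(latencies):.3f}s")
    print(f"p95 latency   : {latencies[int(len(latencies) * 0.95)]:.3f}s")


if __name__ == "__main__":
    run_load()
```

A resilient setup should keep the failure count near zero and the latency percentiles stable even while one instance is down; a sharp jump in either is exactly the kind of finding this workshop explores.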

During this workshop we will tell you more about:

  • What resilience is and how you test it
  • Microservices & cloud platforms
  • How to perform a load test
  • How to create chaos manually
  • How to create chaos automatically

Key takeaways:
Main statement: Resilience, stress, and performance test your cloud environment!

Key learning 1: What resilience testing is

Key learning 2: Executing your own performance/stress tests

Key learning 3: Executing your own resilience tests

Key learning 4: Automated resilience testing with Chaos Monkey
