This post is part of a series:

Introducing quality, Part 1: Software testing

Introducing quality, Part 2: Continuous Integration and Deployment Automation

Introducing quality, Part 3: QA team and internal processes


Remember the testing pyramid from the first post of the series?

It is a good guide for automating your testing process, no doubt, but automation is still written by hand and, like all code, it is prone to human error and untested functionality. Hell, sometimes it is not even worth going for that 100% test coverage! Not to worry though, because there is indeed a last line of defense: the QA team. Here at Tombola we have welcomed quite a few testers over the last year, embedded into the various teams. In addition to the usual task of checking every piece of functionality that we push into the code base, they have also helped transform our team processes for the better. So in your mind’s eye, carve out a little space at the tip of the pyramid for the work that they do. It sits at the top because it is not as extensive, rigorous and repeatable as automated tests can be, but it is invaluable in the way it uncovers gaps in quality and improves our overall velocity.


Once upon a time

Tombola has undergone a lot of transformation over the past couple of years, and there was indeed a time when testers were not involved in the process of putting out new features. Even though the functionality was successfully delivered to users, it is a lot more stressful to fix issues when they appear in production. And unfortunately, sometimes they did. The reason was that developers would test their own and each other’s features as much as possible, but testing is a very different mindset to coding and switching between the two effectively (if at all 😛 ) is very hard. The last quality gate was the product owners and other stakeholders, who did their best to test and verify all aspects of what was delivered. But again, proper QA testing needs a very different mindset, and their schedules were already busy with all kinds of obligations. It was obvious that a skill was missing.


Embedding QA in the team

The QA tester entering the room on his first day [heavenly choir sounds]

Mark joined the team and, after a very short settling-in period, he started contributing immensely to feature delivery. We started with the basics: putting down some baseline regression tests for when a full regression was needed, and manually testing every completed feature.

Regression testing

For a platform that has been developed over many years, creating a comprehensive regression suite with complete scenarios covering all the edge cases is a daunting task, to say the least. So again, we tried to get the most out of a few quick wins in order to tackle this QA beast. Thankfully, most of the commonly used functionality is already covered by our automated e2e tests, so we had covered a bit of ground there already. But those tests were mostly developer-generated and had no input from QA. A lot more needs to be verified to reach a level of confidence that will allow us to move code closer to production faster without much manual intervention (maybe continuously deploying to inactive often, and then even to live production).

One of the ongoing projects our QA tester works on is creating a comprehensive list of features to test before we push everything live: a full regression of the site. This is being done manually and ad hoc at the moment, but once we have a good list of user journeys to follow, all of it can be automated with the help of the developers and take even more manual labor out of the deployment process. That would free up our tester’s time to focus on verifying new features (there are a lot of devs to look after 🙂) and also do exploratory testing to reveal bugs and/or missing functionality.
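To make that a bit more concrete, here is a minimal sketch of what one of those automated user journeys could look like, assuming a Playwright-style e2e setup; the URL, selectors and test account below are illustrative placeholders, not anything from our actual suite.

```typescript
// A minimal sketch of a regression "user journey" test, assuming Playwright.
// The URL, selectors and credentials are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('registered player can log in and reach the bingo lobby', async ({ page }) => {
  // Start from the home page of the environment under test.
  await page.goto('https://staging.example.com/');

  // Log in with a known test account (password injected via the environment).
  await page.getByRole('link', { name: 'Log in' }).click();
  await page.getByLabel('Username').fill('qa-test-user');
  await page.getByLabel('Password').fill(process.env.QA_TEST_PASSWORD ?? '');
  await page.getByRole('button', { name: 'Log in' }).click();

  // The journey passes when the lobby loads and the account balance is shown.
  await expect(page.getByRole('heading', { name: 'Bingo lobby' })).toBeVisible();
  await expect(page.getByTestId('account-balance')).toBeVisible();
});
```

A list of journeys like this, run against the inactive environment on every deployment, is what would eventually replace the manual, ad hoc regression pass.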

Testing of new features

Another way QA was integrated into our team process is the manual testing of all the new features that move into our code base. We had been using a customized and efficient “scrumban”-type process for quite a while, which looked a bit like this (and of course our physical and JIRA boards reflected that):

We then transformed that process like so:

Each user story is approved from a technical point of view, but that does not necessarily cover all edge cases. Developers are also naturally biased when it comes to their own work, sometimes missing things when verifying it. There are also cases where the user stories are not clear on all the edge cases and clarification is needed. So the QA tester is the quality gate before the functionality moves on to product verification.

The reason we inserted him at that point in the process is that it makes no sense for QA to focus on testing a feature that is not complete from a coding point of view. There is also no point in QA being involved after product has verified the solution, because they can find so many things to improve that could affect the acceptance criteria. All of these cases, even though they do not occur systematically, could cause longer feedback loops back to the developers, and we wanted to minimize that as much as possible. The faster we can identify problems in the functionality, the faster the developers can respond, code and deploy, the faster we can re-test, and nobody’s time is wasted verifying or deploying things that were not working as expected in the first place.

The QA tester got his own column on the ticket boards, and practically everything goes through him. That can be a bit overwhelming at times, but we have built a good communication channel with the developers, and our quick deployment pipeline ensures we can get fixes to him really quickly (so that neither the developers nor QA need to context-switch all the time while waiting for fixes to come through to our various environments).


Before QA joined our team, we relied on developers, product people and internal/external users of our live services to find bugs so we could address them. Now, with QA scanning the site literally every day and trying out new features in multiple places, we discover these bugs a lot sooner and fix them before anyone can encounter them on our production environment. The bugs are logged and evaluated, with the most critical ones being addressed immediately and the rest finding their way into our backlog, where they are prioritized along with the rest of our work. This makes a huge difference to our service quality, not to mention the reputation hit we avoid by not having issues discovered by end users.


The QA guild

At Tombola, every team that works with code has a QA tester. Each tester mainly focuses on what their team does and is closely aligned with that team’s processes. So the testers from all over the company came together and formed an informal, Spotify-style guild to catch up on what they are doing, the tools they are using, and the problems they have faced and how they solved them.

The guild started off when there were fewer QA testers than teams and they had to coordinate the testing of the various parts of our services. With the addition of more members, the meetings grew and transformed into knowledge-exchange forums. Exchanging all this knowledge has proven very important: sharing solutions has helped standardize some QA tools across the teams, and known problems are less likely to repeat themselves in different contexts. The guild now meets weekly.


The happy side effects

The benefits that QA brought are not limited to user stories and the discovery of bugs. QA has had a cross-cutting effect on every aspect of the SDLC and our team processes.

We have already covered how the flow of stories during the scrum sprint was normalized by avoiding too much back and forth between development and validation. The user stories themselves have become more concise and templated, as this helps the tester validate them: the clearer and more precise they are, the easier they are to validate and move forward. Our JIRA workflow has been updated to accommodate QA, which has simplified it quite a bit. The developers make sure they always have some time for the tester’s questions and feedback, so everyone is mindful of each other’s time and makes their best effort to build in as much quality from the start as possible. Difficulty testing on the staging environment has led us to take steps to make it resemble live even more closely than it did. Overall, everyone has shifted their focus more towards quality, whether that is in product design, development, deployment, communication or work culture.


–   –   –



There have been many transformations here at Tombola over the past year. The company has grown a lot, and that has exposed some pain points in the development process. But it has responded impressively, with very good results to show for it. The team process here at team Bingo has changed and adapted over many iterations to focus squarely on quality and effectiveness. Tackling technical debt, building quality into the code from the beginning, deploying as fast as possible and adjusting our team processes to respond to change quickly and effectively have prepared the ground for our ambitious future. We are in a good position to support the vision of the business and can’t wait to help create it.