Gamesys | Usability Testing
While working at Gamesys, I was part of a usability testing team conducting studies in our in-house lab.
Over the course of a year, we ran 5 usability testing projects to validate concepts and evaluate the ease of use of the company's products:
For each project, we recruited 8-12 users.
Each session ran 1-on-1, with a moderator guiding the user through the tasks and an assistant recording the session and collecting data for later analysis.
Each session lasted 60-90 minutes, depending on the number of tasks.
Since most of our customers are mobile users, the majority of sessions ran on Android and iOS devices.
Throughout all projects, we worked closely with stakeholders and product owners to define the major objectives and flows to test. Once those were identified, we created a thorough script and an evaluation matrix for each task and user.
After running the tasks and collecting feedback, we built a prioritisation matrix showing task completion trends.
We then classified findings as critical, major, or minor problems, alongside observations, ideas, and positive feedback.
We also scored each task per user on a 0-3 scale, so we could report an average mark for how easy each task was to complete.
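As an illustration, the per-task averaging described above can be sketched in a few lines of Python. The task names and scores here are invented for the example; in practice the matrix lived in our evaluation spreadsheet:

```python
# Hypothetical 0-3 ease scores, one entry per recruited user (3 = easiest).
scores = {
    "register_account": [3, 2, 3, 1, 2, 3, 2, 3],
    "deposit_funds":    [1, 0, 2, 1, 1, 2, 0, 1],
    "find_a_game":      [3, 3, 2, 3, 3, 2, 3, 3],
}

def average_ease(task_scores):
    """Mean 0-3 ease mark for a task across all users."""
    return sum(task_scores) / len(task_scores)

# Listing tasks from hardest to easiest surfaces the priorities:
# the lowest-scoring tasks are the first candidates for UX fixes.
for task, marks in sorted(scores.items(), key=lambda kv: average_ease(kv[1])):
    print(f"{task}: {average_ease(marks):.2f}")
```

Sorting by the average mark gives a quick ranking of which flows caused users the most difficulty.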
Based on the problems and user feedback, we would provide UX recommendations to the product owners.
Each project produced 40-50 usability findings. For some of them, product owners suggested running an A/B test comparing the control version of a product against a version incorporating the usability findings. The versions with the usability iterations showed 13-21% more engagement and wagering.
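A lift like the one reported above is typically checked for statistical significance with a standard two-proportion z-test. A minimal sketch using only the Python standard library; the sample sizes and conversion counts below are invented purely for illustration, not the actual test data:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's rate significantly
    different from control A's? Returns (relative lift, z, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b / p_a - 1, z, p_value

# Invented numbers: 1,000 users per variant, control engages 40%,
# the usability-fix variant 46% (a 15% relative lift).
lift, z, p = two_proportion_z(400, 1000, 460, 1000)
print(f"relative lift: {lift:.1%}, z = {z:.2f}, p = {p:.4f}")
```

With numbers of this magnitude the p-value falls well below 0.05, which is the kind of check that would support reporting the lift as a real effect rather than noise.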