Why we care about participant quality
Helping our clients get high-quality feedback from honest participants is one of our top goals at BetaTesting. We constantly measure and rate tester participation in both automated and manual ways to improve the quality of our participant pool.
We do this because BetaTesting is all about connecting product makers to real-world people to help bring better products into the world. Higher-quality participation and feedback bring more user insights, which increases the speed at which our customers can improve their products.
We also want to reward our best testers by connecting them to more testing opportunities that match their interests and profiles, while also encouraging testers to improve and penalizing those that put forth minimal effort.
How ratings are calculated for each test
For each test on BetaTesting, we rate each participant in a wide variety of ways, both automated and manual:
Ratings from the product owners: Each product owner reviews all the feedback provided for every test and can rate every individual piece of feedback (low quality, helpful, excellent) in addition to providing an overall tester rating.
Bonuses: Did the product owner provide the tester with a bonus for high quality participation?
Survey-based feedback: The quality of text-based survey feedback - how much feedback did each tester provide, and what was its quality (e.g. did they thoroughly communicate their thoughts with examples, specific suggestions, and details)?
Bugs: Quality and quantity of bugs reported (e.g. do bug reports include videos, high-quality descriptions, device information, etc.).
Video-based feedback: Does the video length match the duration that was requested, or were the recordings much shorter? What is the quality of the feedback provided through audio (did they thoroughly communicate their thoughts out loud in the recording with examples, specific suggestions, and details)? Were there long gaps where the tester was not speaking? Did they fail to record audio?
On-time submissions: Did the tester miss any deadlines or fail to submit any surveys?
Tester quality vs. other testers: We compare the results and participation of all testers in each test. Any testers that perform significantly better or worse than other testers will get higher (or lower) ratings.
Messages and follow-ups with product owners: Did the tester respond to questions and follow-ups sent by the product owners?
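To make the idea of combining these signals concrete, here is a minimal sketch of how several per-test signals could be rolled into one score. The signal names, the 0-5 scale, and the weights are illustrative assumptions for this example only, not BetaTesting's actual formula.

```python
# Hypothetical sketch: combine per-test rating signals into one score.
# Signal names, scales (0-5), and weights are illustrative assumptions.

WEIGHTS = {
    "owner_rating": 0.35,    # product owner's overall tester rating
    "survey_quality": 0.25,  # depth and detail of text survey answers
    "bug_quality": 0.15,     # completeness of bug reports
    "video_quality": 0.15,   # video/audio feedback quality and length
    "timeliness": 0.10,      # on-time submissions
}

def combined_rating(signals: dict) -> float:
    """Weighted average over whichever signals are present for a tester."""
    used = {k: w for k, w in WEIGHTS.items() if k in signals}
    total_weight = sum(used.values())
    score = sum(signals[k] * w for k, w in used.items())
    return round(score / total_weight, 2)

print(combined_rating({
    "owner_rating": 5, "survey_quality": 4,
    "bug_quality": 4, "video_quality": 5, "timeliness": 5,
}))  # → 4.6
```

Normalizing by the weights actually present means a test with no bug-reporting step, for example, simply drops that signal instead of penalizing the tester for it.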
At the end of each test, each participant that earned a reward can review their ratings for that test.
Other Ratings: Application & Participation Rates
In addition to tester ratings for users that finish each test, we also track invite and participation rates.
Application rates: If we see that a specific tester is not responding to private invites (typically delivered via email) or applying to any tests, we won't invite them as often in the future, because our understanding is that they are not as interested as other testers.
Participation rates: Likewise, we track each tester's "participation rate", which is how often they successfully finish tests that they are accepted into. We are more likely to invite testers to future opportunities that finish a high percentage of the tests they start in comparison to testers that always drop off during a test.
What do ratings mean and how are they used?
Ratings are used to improve the quality of our participant pool so our clients get high-quality feedback from honest participants. They let us reward our best testers by connecting them to more testing opportunities that match their interests and profiles, while encouraging testers to improve and penalizing those that put forth minimal effort.
Testers with high quality participation that finish most tests they start are more likely to get invited first with private test invites in the future.