How we Measure Quality Tester Participation & Tips for Success

How are tester feedback ratings determined, and what do they mean?


Why we care about participant quality

Helping our clients get high quality feedback from honest participants is one of our top goals at BetaTesting. We constantly measure and rate tester participation in automated and manual ways to improve the quality of our participant pool.

We do this because BetaTesting is all about connecting product makers to real-world people to help bring better products into the world. Higher quality participation and feedback bring more user insights, which increases the speed at which our customers can improve their products.

We also want to reward our best testers by connecting them to more testing opportunities that match their interests and profiles, while also encouraging testers to improve and penalizing those that put forth minimal effort.

How ratings are calculated for each test

For each test on BetaTesting, we rate each participant in a wide variety of ways, both automated and manual:

  • Ratings from the product owners: Each product owner reviews all the feedback provided for every test and can rate every individual piece of feedback (low quality, helpful, excellent) in addition to providing an overall tester rating.

  • Survey-based feedback: How much text-based survey feedback did each tester provide, and what was its quality (e.g. did they thoroughly communicate their thoughts with examples, specific suggestions, and details)?

  • Bugs: The quality and quantity of bugs reported (e.g. do bug reports include videos, clear descriptions, device information, etc.).

  • Video-based feedback: Does the video length match the duration that was requested, or is it much shorter? What is the quality of the feedback provided through audio (did they thoroughly communicate their thoughts out loud in the recording with examples, specific suggestions, and details)? Were there long gaps where the tester was not speaking? Did they fail to record audio?

  • On-time submissions: Did the tester miss any deadlines or fail to submit any surveys?

  • Tester quality vs. other testers: We compare the results and participation of all testers in each test. Testers that perform significantly better or worse than others will receive correspondingly higher or lower ratings.

  • Messages and follow-ups with product owners: Did the tester respond to questions and follow-ups sent by the product owners?

  • Bonuses: Did the product owner provide the tester with a bonus for high quality participation?

At the end of each test, each participant that earned a reward can review their ratings for that test.

Other Ratings: Application & Participation Rates

In addition to tester ratings for users that finish each test, we also track application and participation rates.

Application rates: If we see that a specific tester is not responding to private invites (typically delivered via email) or applying to any tests, we won't invite them as often in the future, because we understand that they are not as interested as other testers.

Participation rates: Likewise, we track each tester's "participation rate", which is how often they successfully finish tests that they are accepted into. We are more likely to invite testers that finish a high percentage of the tests they start to future opportunities, compared to testers that frequently drop out partway through a test.

What do ratings mean and how are they used?

Ratings are used to improve the quality of our participant pool to help our clients get high quality feedback from honest participants.

We want to reward our best testers by connecting them to more testing opportunities that match their interests and profiles, while also encouraging testers to improve and penalizing those that put forth minimal effort.

Testers with high quality participation that finish most of the tests they start are more likely to receive private test invites first in the future.


How to be a great tester and provide quality feedback

Read below to learn what to do and what not to do when testing and providing feedback!

Complete your profile with accurate data about you

  1. Make sure your core profile data is accurate (e.g. name, location, education, age).

    1. If you don't want to answer a profile question, please leave it blank. Note that our clients do target based on profile data, so if a field is blank, you won't be considered for tests that target that specific criterion.

    2. Don't provide fake data. If we detect that a participant has provided fake profile data (e.g. an inaccurate country, age, or education), we'll remove them from our community.

  2. Complete all the profile surveys to be considered for more tests.

Feedback should be clear, specific, actionable, and constructive

  1. Clear. Make sure your feedback is clear, and illustrate it with specific examples. Don't provide general statements or vague ideas.

  2. Specific. Focus on the what and the why. For long-text feedback questions, provide full sentences, not single words. Explain your unique perspective.

  3. Actionable. What could be done to make it better?

  4. Constructive. Be honest, but also respectful. When giving negative feedback, focus on the specific details of the issue, and avoid attacking the product or the people behind it.

Submit a bug report through BetaTesting any time you encounter an issue

  1. It's critical to submit quality bug reports with videos and/or screenshots to demonstrate any issues you encounter, along with the steps to reproduce the issue.

  2. If we detect that you encountered bugs that you didn't report, you will be rated poorly and you won't be invited to as many tests in the future. This applies both on a per-test basis and over time: we measure the number of bugs reported per test, and if your history is dramatically below average, it will impact your tester score.

Don't use AI

  1. Don't use AI to help generate your responses. The most valuable part of BetaTesting is the fact that we connect real people to give real feedback. If our clients want AI, they'll ask AI and save money in the process. If they want feedback from real humans, they'll ask you. If we know or suspect that a user is using AI to formulate their responses, we'll remove them from our platform. So don't risk it, and don't use AI.

  2. Don't copy/paste. Copy-pasted responses are associated with using AI or otherwise being a bot, so don't do it unless it's requested or necessary (e.g. pasting logs from an app).
