Testing Strategy

A/B Testing with User Feedback

Metrics tell you what happened. Feedback tells you why. Combine both for confident product decisions.

Quick Overview

Don't just track metrics: ask users what they think. When you're testing a new feature, show a brief survey to users in each variant. This gives you the "why" behind your numbers and prevents you from making decisions based on incomplete data.

Why metrics alone aren't enough

Metrics show what, not why

Variant B has 10% higher conversion. But why? Did users prefer it, or did they simply not notice the change?

Statistical significance isn't everything

A statistically significant result could still be wrong for your users if it's causing frustration you can't see.

Hidden negative effects

A feature might boost one metric while quietly damaging brand perception or user trust.

How to add feedback to A/B tests

1. Create variant-specific surveys

Create separate surveys for each variant. This lets you compare not just behavior, but sentiment.

Example: For a checkout redesign test, create "Checkout Survey A" and "Checkout Survey B" in FeedbackWall.

2. Trigger at the same moment

Show surveys at the same point in the user journey for both variants, typically right after users have experienced the feature.

Example: Trigger the survey right after checkout completion, regardless of which checkout flow they saw.

3. Ask simple, comparable questions

Use the same questions for both variants so you can directly compare responses.

Example: "How easy was the checkout process?" with a 1-5 rating scale for both variants.

4. Compare both data sources

Look at your quantitative metrics AND the qualitative feedback. They should tell a consistent story.

Example: If Variant B has higher conversion but lower satisfaction scores, dig deeper before declaring a winner.
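
Putting the two sources side by side can be as simple as a small consistency check. A minimal sketch in Swift; the `VariantResult` struct and all numbers below are illustrative, not real FeedbackWall output:

```swift
// Illustrative results for a checkout test; values are made up.
struct VariantResult {
    let name: String
    let conversionRate: Double   // from your analytics tool
    let avgSatisfaction: Double  // mean of the 1-5 survey ratings
}

let control = VariantResult(name: "checkout_v1", conversionRate: 0.041, avgSatisfaction: 4.2)
let treatment = VariantResult(name: "checkout_v2", conversionRate: 0.045, avgSatisfaction: 3.6)

let metricsFavorTreatment = treatment.conversionRate > control.conversionRate
let feedbackFavorsTreatment = treatment.avgSatisfaction > control.avgSatisfaction

if metricsFavorTreatment && !feedbackFavorsTreatment {
    // The two sources disagree: don't declare a winner yet.
    print("Conversion is up but satisfaction is down: dig deeper before shipping.")
}
```

When the two booleans disagree, that disagreement is the finding, not a bug in your data.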

What to ask in A/B test surveys

Satisfaction

"How would you rate this experience?" (1-5 stars)

Ease of use

"How easy was it to complete this task?" (Very hard to Very easy)

Expectations

"Did this meet your expectations?" (Yes / Partially / No)

Open feedback

"What could we improve?" (Optional text field)

When to use this approach

UI redesigns

New layouts can affect usability in ways metrics don't capture. Ask users if the new design is easier to use.

Pricing experiments

Higher conversion doesn't mean users are happy. Check if they feel the pricing is fair.

New features

A feature that gets used isn't necessarily liked. Ask users what they think about it.

Messaging changes

Different copy can affect trust and perception. Survey for sentiment, not just clicks.

What to do with the results

Metrics up, feedback positive

Ship it. You have both quantitative and qualitative evidence that the change is good.

Metrics up, feedback negative

Investigate. You might be optimizing for a metric at the cost of user experience.

Metrics flat, feedback positive

Consider shipping anyway. User satisfaction matters even if it doesn't show in short-term metrics.

Metrics down, feedback negative

Don't ship. Both sources agree this isn't working.
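
The four outcomes above can be encoded as a small decision table. A sketch only; `MetricTrend` and the returned labels are made up for illustration, and a real decision still needs human judgment:

```swift
enum MetricTrend { case up, flat, down }

// Encodes the four-cell matrix above; combinations the matrix
// doesn't cover fall through to "gather more data".
func decide(trend: MetricTrend, feedbackPositive: Bool) -> String {
    switch (trend, feedbackPositive) {
    case (.up, true):    return "Ship it"
    case (.up, false):   return "Investigate"
    case (.flat, true):  return "Consider shipping anyway"
    case (.down, false): return "Don't ship"
    default:             return "Gather more data"
    }
}

let verdict = decide(trend: .up, feedbackPositive: false)  // "Investigate"
```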

How to set this up with FeedbackWall

1. Create two surveys in the FeedbackWall dashboard with identical questions

2. In your app code, trigger the appropriate survey based on which variant the user is in

3. Compare response distributions in the dashboard after your test reaches significance

// In your A/B test code: trigger the matching survey at the same
// point in the journey, whichever checkout flow the user saw
if user.isInVariant("checkout_v2") {
    FeedbackWall.showIfAvailable(trigger: "checkout_survey_v2")
} else {
    FeedbackWall.showIfAvailable(trigger: "checkout_survey_v1")
}
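
Before comparing response distributions, you can sanity-check that the behavioral metric itself has reached significance. A minimal two-proportion z-test sketch (normal approximation; the counts are illustrative, and a proper stats library is the right tool for real decisions):

```swift
import Foundation

// Two-proportion z-test (normal approximation), comparing
// conversion counts between variants A and B.
func zScore(conversionsA: Int, totalA: Int,
            conversionsB: Int, totalB: Int) -> Double {
    let pA = Double(conversionsA) / Double(totalA)
    let pB = Double(conversionsB) / Double(totalB)
    let pooled = Double(conversionsA + conversionsB) / Double(totalA + totalB)
    let standardError = sqrt(pooled * (1 - pooled)
        * (1.0 / Double(totalA) + 1.0 / Double(totalB)))
    return (pB - pA) / standardError
}

// Illustrative counts: 410/10,000 vs 480/10,000 conversions.
let z = zScore(conversionsA: 410, totalA: 10_000,
               conversionsB: 480, totalB: 10_000)
// |z| > 1.96 is roughly significant at the 5% level.
```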

Common questions

Won't surveys affect test results?

If you show the same survey at the same rate to both variants, the impact is equal and cancels out in your comparison.

How many responses do I need?

Aim for at least 50-100 responses per variant to see meaningful patterns in feedback.

Should I survey every user?

No. Use sample rates (10-20%) to get enough data without over-surveying. FeedbackWall makes this easy.
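
If you gate surveys in your own code rather than in the dashboard, a deterministic sample keeps the same user in or out across sessions. A sketch assuming a stable string user ID; the FNV-1a hash here is just one stable choice (Swift's built-in `hashValue` is randomized per launch, so it can't be used for this):

```swift
// Hash the user ID into 100 stable buckets with FNV-1a.
func stableBucket(_ id: String) -> Int {
    var hash: UInt64 = 0xcbf29ce484222325
    for byte in id.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3
    }
    return Int(hash % 100)
}

// Survey roughly sampleRatePercent% of users, deterministically per user.
func shouldSurvey(userId: String, sampleRatePercent: Int = 15) -> Bool {
    stableBucket(userId) < sampleRatePercent
}
```

Because the bucket depends only on the ID, a user who was sampled once stays sampled for the whole test.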

What if feedback contradicts metrics?

That's valuable information. Dig deeper with follow-up questions or user interviews before deciding.

Make better A/B testing decisions

Add qualitative feedback to your quantitative tests. Understand the full picture.

Start free trial →

14-day free trial. Better testing starts now.