Metrics tell you what happened. Feedback tells you why. Combine both for confident product decisions.
Don't just track metrics; ask users what they think. When you're testing a new feature, show a brief survey to users in each variant. This gives you the "why" behind your numbers and prevents you from making decisions based on incomplete data.
Variant B has 10% higher conversion. But why? Did users like it better, or did they not notice a change?
A statistically significant result could still be wrong for your users if it's causing frustration you can't see.
A feature might boost one metric while quietly damaging brand perception or user trust.
Create separate surveys for each variant. This lets you compare not just behavior, but sentiment.
Show surveys at the same point in the user journey for both variants, typically right after they've experienced the feature.
Use the same questions for both variants so you can directly compare responses.
Look at your quantitative metrics AND the qualitative feedback. They should tell a consistent story.
"How would you rate this experience?" (1-5 stars)
"How easy was it to complete this task?" (Very hard to Very easy)
"Did this meet your expectations?" (Yes / Partially / No)
"What could we improve?" (Optional text field)
Layout redesigns: New layouts can affect usability in ways metrics don't capture. Ask users if the new design is easier to use.
Pricing changes: Higher conversion doesn't mean users are happy. Check if they feel the pricing is fair.
New features: A feature that gets used isn't necessarily liked. Ask users what they think of it.
Copy and messaging: Different copy can affect trust and perception. Survey for sentiment, not just clicks.
Metrics up, sentiment up: Ship it. You have both quantitative and qualitative evidence that the change is good.
Metrics up, sentiment down: Investigate. You might be optimizing for a metric at the cost of user experience.
Metrics flat or down, sentiment up: Consider shipping anyway. User satisfaction matters even if it doesn't show up in short-term metrics.
Metrics down, sentiment down: Don't ship. Both sources agree this isn't working.
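The four outcomes above can be read as a small decision table. A sketch (the signal names and recommendation labels are illustrative, not part of any API):

```typescript
// Combine the metric result and the survey sentiment into a
// recommendation. "up" / "down" stand in for whatever thresholds
// your test actually uses.
type Signal = "up" | "down";

function recommend(metrics: Signal, sentiment: Signal): string {
  if (metrics === "up" && sentiment === "up") return "ship";
  if (metrics === "up" && sentiment === "down") return "investigate";
  if (metrics === "down" && sentiment === "up") return "consider shipping";
  return "don't ship";
}
```

The point of the table is that no single signal decides on its own: every recommendation depends on both inputs.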
1. Create two surveys in the FeedbackWall dashboard with identical questions.
2. In your app code, trigger the appropriate survey based on which variant the user is in.
3. Compare response distributions in the dashboard after your test reaches significance.
// In your A/B test code
if user.isInVariant("checkout_v2") {
    FeedbackWall.showIfAvailable(trigger: "checkout_survey_v2")
} else {
    FeedbackWall.showIfAvailable(trigger: "checkout_survey_v1")
}

Won't the survey itself affect the test results? If you show the same survey at the same rate to both variants, any impact is equal across groups and cancels out in your comparison.
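When responses come in, "compare response distributions" can be as simple as looking at the mean rating alongside the share of unhappy answers. An illustrative sketch with made-up ratings (the dashboard would do this summarization for you):

```typescript
// Summarize 1-5 star ratings from one variant's survey.
function summarize(ratings: number[]) {
  const mean = ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
  // Share of clearly unhappy responses (1-2 stars).
  const lowShare = ratings.filter((r) => r <= 2).length / ratings.length;
  return { mean, lowShare };
}

const v1 = summarize([4, 5, 3, 4, 4]); // mean 4.0, no 1-2 star ratings
const v2 = summarize([2, 5, 1, 5, 2]); // mean 3.0, 60% unhappy
```

Similar averages can hide very different distributions, which is why the low-rating share is worth checking alongside the mean.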
How many responses do I need? Aim for at least 50-100 responses per variant to see meaningful patterns in the feedback.
Should every user see the survey? No. Use sample rates (10-20%) to get enough data without over-surveying. FeedbackWall makes this easy.
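A rough way to sanity-check a sample rate against a response target, as a sketch (the 25% response rate here is an assumed number for illustration, not a FeedbackWall figure):

```typescript
// How many users must enter a variant before you can expect
// `targetResponses` completed surveys, given the sample rate (share of
// users shown the survey) and the response rate among those shown it.
function usersNeeded(targetResponses: number, sampleRate: number, responseRate: number): number {
  return Math.ceil(targetResponses / (sampleRate * responseRate));
}

// 100 responses at a 10% sample rate and a 25% response rate:
const perVariant = usersNeeded(100, 0.1, 0.25); // 4000 users per variant
```

If your test won't reach that much traffic per variant, raise the sample rate rather than settling for too few responses.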
What if the feedback contradicts the metrics? That's valuable information. Dig deeper with follow-up questions or user interviews before deciding.
Add qualitative feedback to your quantitative tests. Understand the full picture.
Start free trial →
14-day free trial. Better testing starts now.