AI is rewriting the rules of research. It’s time we agreed on how to measure it.
The AI in UX Research Benchmark is a joint framework by Looppanel and Sprig, created with input from Microsoft’s Jess Holbrook and researchers across the community.
It defines what ‘good’ looks like when humans and AI work together, measuring not just speed, but trust, control, and insight quality.

Why this matters
AI has made research faster, but it remains uncharted territory for teams. With workflows varying widely and no clean data to compare them, judgments about which tools actually work better are largely subjective.
Research teams need a shared language for evaluating the AI tools they use.
We built the first transparent benchmark for evaluating AI tools in research, starting with Looppanel’s own data. This report shows how teams are actually working with AI.
What you’ll find inside
A living benchmark
This is just the first step. We’ll soon release an open dataset so other teams can contribute their own benchmarks and help define how AI should be measured across the industry.