Expert Contributors

Kritika Oberoi
CEO, Looppanel
Ryan Glasgow
CEO, Sprig
Samantha Tu
Research Partner, Sprig
Jess Holbrook
Head of UXR, Microsoft AI

Why this matters

AI has made research faster, but it remains uncharted territory for teams. With workflows varying widely and no clean data to compare against, deciding which tools actually serve teams better is largely subjective.

Research teams need a shared language for measuring the AI tools they use.
We built the first transparent benchmark for evaluating AI tools in research, starting with Looppanel's own data. This report shows how teams are working with AI in practice.

What you’ll find inside

  • A clear framework for measuring AI’s impact in research
  • Benchmarks for metrics like task efficiency, quality, and researcher confidence
  • Real data from teams using Looppanel
  • Expert insights from Sprig and Microsoft on working with AI without losing control

A living benchmark

This is just the first step. Inside the report, you'll find a link to contribute to the benchmark dataset and help define how AI in research should be measured.
