Why this matters

AI has made research faster, but it’s still uncharted territory for teams. Workflows vary, clean data is scarce, and judgments about which tools work better remain largely subjective.

Research teams need a shared language for measuring the AI tools they use.
We built the first transparent benchmark for evaluating AI tools in research, starting with Looppanel’s own data. This report shows how teams are working with AI in practice.

What you’ll find inside

  • A clear framework for measuring AI’s impact in research
  • Benchmarks for key metrics: task efficiency, quality, and researcher confidence
  • Real data from teams using Looppanel
  • Expert insights from Sprig and Microsoft on working with AI without losing control

A living benchmark

This is just the first step. We’ll soon release an open dataset so other teams can contribute their own benchmarks and help define how AI should be measured across the industry.
