Case Study

Reframing AI comparison into
a shared-insight experience

Turning multiple AI outputs into a clear, trustworthy answer.

📌 Context

The product originally positioned itself as a translation review service, allowing users to validate outputs through AI or human reviewers at different price points.


Over time, the product's direction shifted toward a multi-AI comparison tool where users could:

• View outputs from multiple AI systems

• Understand where those systems agree

• Identify a shared or consensus result quickly


This required more than a visual update. It required a new way of structuring results and conveying trust.

🚩 The Problem

The existing review-based mental model did not scale well to multi-AI comparison.

Key challenges:


• Multiple outputs created visual noise

• Users struggled to identify what mattered most

• Confidence needed to be communicated without implying absolute correctness

The interface needed to help users synthesize, not just compare.

😰 Tension

Showing everything preserved transparency, but overwhelmed users.


Simplifying too aggressively risked hiding valuable context.


The challenge was to provide clarity first, depth second.

🤔 Options considered

1. Flat comparison of all AI outputs

• Maximum transparency

• High cognitive load

• Slower decision-making

2. Single “best” answer only

• Fast to consume

• Lacked justification

• Reduced trust

3. Layered results model (chosen)

• A primary shared result

• Individual outputs available below

• Clear hierarchy between synthesis and sources

✨ The decision

I proposed reframing the experience around a shared or focused result, supported by individual AI outputs through progressive disclosure.


This shifted the product from review to insight.

🎨 Design approach

The refreshed experience introduced two layers:

1. Shared result

• Synthesizes commonalities across AI outputs

• Serves as the primary reading surface

• Includes an agreement score to indicate alignment

2. Individual AI outputs

• Displayed in separate blocks below

• Secondary to the shared result

• Available for verification and deeper inspection

The agreement score communicated confidence without claiming certainty.
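The case study does not specify how the agreement score is calculated. As a purely hypothetical illustration of the concept, one simple approach is to average the pairwise token overlap (Jaccard similarity) across the individual AI outputs and scale it to a percentage; the function name and method below are assumptions, not the product's actual implementation:

```python
from itertools import combinations

def agreement_score(outputs: list[str]) -> float:
    """Hypothetical agreement score: mean pairwise Jaccard
    similarity of the outputs' token sets, scaled to 0-100."""
    token_sets = [set(text.lower().split()) for text in outputs]
    if len(token_sets) < 2:
        return 100.0  # a single output trivially agrees with itself
    sims = []
    for a, b in combinations(token_sets, 2):
        union = a | b
        sims.append(len(a & b) / len(union) if union else 1.0)
    return 100.0 * sum(sims) / len(sims)

# Identical outputs score 100; fully disjoint outputs score 0.
print(agreement_score(["the cat sat", "the cat sat"]))  # 100.0
print(agreement_score(["alpha beta", "gamma delta"]))   # 0.0
```

A score derived this way supports the design intent: it signals how strongly the sources align without ever asserting that the shared result is correct.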

⭐ What I did

• Led the UX direction for the product refresh

• Defined the information hierarchy for multi-source results

• Designed the shared result and agreement score presentation

• Adapted a proven internal pattern to a new context

• Ensured continuity for returning users

Execution required minimal iteration due to strong alignment between UX and product goals.

📈 Outcome

• Users navigated the refreshed experience with minimal friction

• The shared result provided faster clarity

• The interface supported both quick decisions and deeper inspection

• The product established a clearer identity as an AI comparison tool

The experience continues to evolve through usage and feedback.

"What’s dangerous is not to evolve."

— Jeff Bezos


Got thoughts? I’m all ears.

I’m always up for thoughtful conversations.

