UX + Data · 2024–2025

Product Health Score

A quarterly process I built from scratch to help the team read product data and act on it together.

Team: UX, PM, Product Analyst
Cadence: Every quarter
Data source: Pendo
Status: First cycle complete

01 The problem

Data existed. Nobody was reading it.

Pendo tracked everything. But the team made decisions based on research sessions and gut feel. There was no shared process to look at the numbers, agree on what they meant, and decide what to do.

I started building a framework to fix that. Not to replace research — but to catch what sessions miss: features that break silently, features nobody uses, features users love in testing but abandon in real life.

"I wanted to give the team a shared language for the data — a number that starts a conversation, not ends one."
02 How I built it

Started alone. Became a team process.

I built the first version by myself. I defined which metrics to track, why each one mattered, and how to combine them into a single score. Once the logic was solid, I brought in the product analyst to connect it to the Pendo pipeline and make it repeatable.

It grew into a quarterly process with PM, UX, and the analyst each playing a clear role. Each cycle takes two weeks from data export to shared review.

The hardest part was deciding what questions the framework should answer. Adoption rate alone misses features that are deep but narrow. Success rate alone misses features that are broken but tolerated. The score had to hold all of it at once.

The Playbook sheet documents every formula and every decision behind the weighting — so anyone on the team can challenge the logic, not just read the output.

03 How a feature gets scored

Three dimensions. One number.

Every feature is scored on depth of use, quality of interaction, and how many users reach it. The model runs from a Pendo CSV export in under 30 minutes.

  • Depth (40%): how intensely it's used
  • Quality (40%): success-rate gate at >80%
  • Adoption (20%): % of users who touch it
  • Health Score (0–100): final weighted result

A feature with a success rate below 80% is flagged as a Critical Quality Issue automatically — it skips the strategic scoring entirely. A broken feature should never look strategically healthy.
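As a minimal sketch, the scoring rule above could be expressed like this. The 80% gate and the 40/40/20 weights come from the framework itself; the function name and the assumption that all three inputs arrive as 0–100 values are mine, not the Playbook's actual formulas:

```python
def health_score(depth, quality, adoption):
    """Blend three 0-100 dimension scores into one Health Score.

    depth    -- how intensely the feature is used
    quality  -- success rate, as a percentage
    adoption -- % of users who touch the feature
    """
    # Quality gate: below 80% success the feature is flagged as a
    # Critical Quality Issue and skips strategic scoring entirely.
    if quality < 80:
        return None, "Critical Quality Issue"
    # Weighted blend: 40% depth, 40% quality, 20% adoption.
    score = 0.4 * depth + 0.4 * quality + 0.2 * adoption
    return round(score, 1), None
```

For example, `health_score(85, 99.9, 40)` returns `(82.0, None)`, while anything under the gate short-circuits straight to the 🚨 flag without a strategic score.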

  • 🏆 Core Asset: High depth + high adoption. Protect it.
  • 💎 Specialized Utility: High depth, smaller audience. Nurture.
  • Balanced: Healthy middle ground. Monitor.
  • 📉 Low Engagement: Low depth and reach. Audit.
  • 🚨 Critical Quality Issue: Broken. Fix before anything else.
  • 💤 Inactive: Zero interactions. Consider removing.
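The six labels could then be assigned with simple threshold rules. The cut-offs below (60 for "high" depth, 50 and 20 for adoption, 30 for low depth) are illustrative placeholders only; the framework's real anchors are product-relative and documented in the Playbook sheet:

```python
def classify(depth, quality, adoption, interactions):
    """Map a feature's metrics to one of the six category labels.

    All thresholds here are illustrative stand-ins; the real
    framework anchors them per product.
    """
    if interactions == 0:
        return "💤 Inactive"
    if quality < 80:                       # the 80% quality gate
        return "🚨 Critical Quality Issue"
    if depth >= 60 and adoption >= 50:
        return "🏆 Core Asset"
    if depth >= 60:
        return "💎 Specialized Utility"
    if depth < 30 and adoption < 20:
        return "📉 Low Engagement"
    return "Balanced"
```

Ordering matters here: the Inactive and quality-gate checks run first, so a broken or dead feature can never fall through to a strategic label.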
04 Real results — first cycle

What the data actually showed.

The first full cycle scored every tracked feature in the product. Here is the distribution across categories, and a sample of the most actionable findings.

  • Core Assets: 2
  • Balanced: 14
  • Critical Issues: 7
  • Need Audit: 38
  • Low Engagement: 3
  • Inactive: 7
| Feature | Assessment | Health Score | Key signal |
| --- | --- | --- | --- |
| Word — Enter Trademark Input | 🏆 Core Asset | 79.9 | 433 users, 99.9% success rate |
| Word — Enter International Classes | 🏆 Core Asset | 77.1 | 373 users, 99.7% success rate |
| Image — Upload Image | 🚨 Critical Issue | 68.1 | 1,280 dead clicks — 54% failure rate |
| Industrial Design — Drag and Drop | 🚨 Critical Issue | 44.3 | 266 dead clicks — 72% failure rate |
| Word — Enter Reference Input | 💎 Specialized | 74.1 | High depth, used by a loyal subset |
| Industrial Design — Upload Designs | 🚨 Critical Issue | 40.0 | 100% failure rate — 0 successful clicks |

Data from Pendo — first cycle, April 2026. Total app users: 8,245.

05 The quarterly report

Five sections. Three people. One meeting.

Each person writes their section before the meeting. The shared review takes 45 minutes. Decisions are recorded and checked at the start of the next cycle.

1. Snapshot (Analyst): How many features in each category this quarter, compared to last. One number per category.
2. Critical Issues (UX + PM): Every 🚨 feature listed with its failure rate and a suspected cause. Reviewed first — nothing else moves until these are assigned.
3. UX Spotlight (UX): 3 features where the score contradicts what research says. These become candidates for the next qualitative study.
4. What changed (Analyst): Features that moved categories since last cycle. Any surprises, any jumps after a release.
5. Decisions made (PM): What the team agreed to act on. Feature name, owner, target quarter. Checked at the start of next cycle.
Analyst
  • Exports and cleans Pendo data
  • Runs the spreadsheet
  • Writes sections 1 and 4
  • Flags data anomalies
UX Designer
  • Interprets labels in context
  • Cross-references with research
  • Writes section 3 (UX spotlight)
  • Nominates features for next study
PM
  • Maps findings to the roadmap
  • Fast-tracks Critical Issues
  • Writes section 5 (decisions)
  • Closes the loop next cycle
06 What I learned

What this project is really about.

What worked
  • Building the logic first, before involving the analyst — having a clear design intent made the technical refinement much faster.
  • The quality gate (80% rule) was the most important design decision. It stops broken features from hiding behind high adoption numbers.
  • Splitting report ownership by role meant no one person could be a bottleneck. It also made the output more honest.
Still open
  • Quarter-on-quarter comparison needs two full cycles before it becomes meaningful.
  • Scaling to other products requires resetting the benchmark anchors — the framework is product-relative, not absolute.
  • The real test: does seeing a 🚨 label change what gets prioritised, or do other factors override the data?