🔬 UX Research · Data Analytics · Strategic Design

How I Used 1.78M
Agent Sessions to
Redesign a Dashboard

A data-driven investigation into agent behaviour, analysing login frequency, task prioritisation, and click efficiency to deliver strategic UX recommendations that directly shaped the dashboard roadmap.

1.78M
Sessions Analysed
3
Teams × 4 Months
85%
Task Completion
3
Design Changes
My Role
Senior Product Designer (Contractor)
Company
Fortem Services · PPH Platform
Duration
4 Months · 2024
Team
UX · BA · PO · CS (4 functions)
Tools
Tableau · Matomo · Figma
01 The Problem

Agents were frustrating our support team. We didn't know why.

The PPH (Pay Per Head) platform's customer service team was handling a recurring surge of support tickets every Sunday night and Monday, while system maintenance windows were regularly disrupting agents during their most critical work moments.

No one had mapped when agents actually used the platform, what they did when they got there, or why certain moments felt broken. As a contract Senior Product Designer at Fortem Services, I proposed and led a structured behavioural analytics investigation to answer three core questions.

The 3 Questions We Set Out to Answer
1
When do agents log in?
Login Frequency Analysis: peak days, hours, and patterns across quarters
2
What do they do first?
Task Prioritisation Analysis: the first 3 interactions of the agent journey, mapped through Matomo
3
How efficiently can they do it?
Click Efficiency Analysis: average clicks per action and exit behaviour
02 My Specific Contribution

What I personally led and owned

This was a team effort, but these are the decisions and actions I was directly responsible for.

Led
the research framework: defined the 3 focus areas, coordinated 4 teams (UX, BA, PO, CS), and structured the investigation methodology
Built
the data visualisations in Tableau and interpreted the Matomo user flow reports to translate raw numbers into design insights
Owned
the final recommendations: presented findings to product leadership and drove 3 dashboard design changes into the roadmap
03 Method & Approach

Data-first. People-second. In that order.

Before interviewing anyone, I started with the data. I knew that qualitative research alone wouldn't be credible with a product and engineering team that was sceptical of "soft" UX feedback. So I anchored everything in numbers, then layered in human context.

1
Tableau Login Frequency Analysis
Pulled 5 quarters of login data (Q2 2023 → Q2 2024). Analysed login days, hours, and average sessions per agent per day to identify consistent peak patterns across the year.
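The aggregation in this step was done in Tableau, but the underlying computation is simple to express. A minimal pandas sketch, with hypothetical column names (`agent_id`, `login_ts` are assumptions, not the real schema):

```python
# Hypothetical sketch of the peak-login aggregation done in Tableau.
# Column names (agent_id, login_ts) are illustrative assumptions.
import pandas as pd

def peak_login_patterns(logins: pd.DataFrame) -> pd.DataFrame:
    """Count logins per (weekday, hour) across the full data window."""
    ts = pd.to_datetime(logins["login_ts"])
    return (
        logins.assign(weekday=ts.dt.day_name(), hour=ts.dt.hour)
        .groupby(["weekday", "hour"])
        .size()
        .reset_index(name="logins")
        .sort_values("logins", ascending=False)
        .reset_index(drop=True)
    )

# Toy data: two Sunday 21:00 logins, one Tuesday-morning login.
df = pd.DataFrame({
    "agent_id": [1, 2, 3],
    "login_ts": ["2024-06-02 21:05", "2024-06-09 21:40", "2024-06-04 09:15"],
})
print(peak_login_patterns(df).head(1))  # Sunday, hour 21 tops the ranking
```

Ranking (weekday, hour) pairs by raw login count is what surfaces the Sunday 21:00 and Monday 22:00 peaks once run over real quarters of data.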
2
Stakeholder Interviews CS Team & POs
Took the quantitative peak data to the CS team and Product Owners to ask "why." This revealed the 3 behavioural factors: billing cycles, user routines, and Monday Night Football preparation.
3
Matomo Task Prioritisation & User Flows
Used Matomo's user flow visualisation to map the 1st, 2nd, and 3rd interactions across 4.7M visits, identifying where agents went and in what sequence after login.
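Matomo produces this user-flow report directly; the equivalent computation on raw pageview events can be sketched in pandas. Column names (`session_id`, `order`, `page`) are illustrative assumptions:

```python
# Hypothetical reconstruction of Matomo's user-flow report: the share of
# sessions whose 1st/2nd/3rd pageview lands on each section. Column names
# (session_id, order, page) are illustrative assumptions.
import pandas as pd

def nth_click_shares(events: pd.DataFrame, n: int) -> pd.Series:
    """Distribution of the n-th pageview (1-indexed) across sessions."""
    nth = (
        events.sort_values(["session_id", "order"])
        .groupby("session_id")
        .nth(n - 1)  # n-th event of each session
    )
    return nth["page"].value_counts(normalize=True)

events = pd.DataFrame({
    "session_id": [1, 1, 1, 2, 2, 2],
    "order":      [1, 2, 3, 1, 2, 3],
    "page": ["Dashboard", "Bettor Balance", "Dashboard",
             "Dashboard", "Bettor Balance", "Bettor Details"],
})
print(nth_click_shares(events, 1))  # Dashboard: 1.0 on this toy data
```

Running this for n = 1, 2, 3 over the full event log yields the click-sequence breakdown reported in the findings.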
4
Click Efficiency Analysis
Mapped the average clicks required for each core agent action, using the dashboard as the starting-point baseline. Identified exits (14.68%) as the key area for future investigation.
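The completion/exit split itself is simple arithmetic over visit counts. A minimal sketch with toy numbers (the function is illustrative, not the actual analysis pipeline):

```python
# Minimal sketch of the click-efficiency split: of all dashboard visits,
# what share proceeded to another section vs. exited. Toy numbers below;
# the function is illustrative, not the actual analysis pipeline.

def completion_and_exit_rates(dashboard_visits: int, exits: int) -> tuple[float, float]:
    """Return (completion_rate, exit_rate) as percentages."""
    exit_rate = exits / dashboard_visits * 100
    return 100 - exit_rate, exit_rate

completion, exit_pct = completion_and_exit_rates(1000, 147)
print(f"{completion:.2f}% completed, {exit_pct:.2f}% exited")
```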
5
Synthesis & Recommendations to Product
Compiled all 3 analyses into a structured report and presented it to the product leadership team. Led the session and fielded questions from engineering and business stakeholders.
04 Key Findings

What the data revealed

Finding 1 Login Frequency

Agents log in most on Sundays at 21:00 and Mondays at 22:00, consistent across 5 consecutive quarters of data.

Login Volume by Day (Relative)
Mon High · Tue Low · Wed Low · Thu High · Fri Mid · Sat Mid · Sun Peak
💳
The Billing Factor
The weekly billing cycle runs Mon–Sun. Agents check figures on Sunday and settle payments by Thursday, creating 2 peak login days driven by financial deadlines.
🏈
Monday Night Football Prep
Bookies log in Monday evenings to adjust odds, manage player injuries, and settle weekend accounts before the NFL game kicks off.
Finding 2 Task Prioritisation

Across 4.7M visits, agents' journeys are remarkably predictable: Dashboard → Bettor Balance → Bettor Details.

1st Click: Dashboard 95.76% · Other 4.24%
2nd Click: Bettor Balance 56.59% · Dashboard 19.19% · Other 6.63%
3rd Click: Dashboard 46.40% · Bettor Details 31.75% · Bettor Balance 8.90%
💡
Key Insight: The Dashboard is the control room.
Agents return to the Dashboard between actions (46.4% of 3rd clicks), which means it's being used as a navigation hub, not just a summary screen. This validated making quick links a priority recommendation.
Finding 3 Click Efficiency

Of 1,782,473 total dashboard visits, 85.32% of agents proceeded to complete their intended journey. The 14.68% exit rate is the key opportunity area.

Completion Rate
85.3%
Agents successfully proceed from Dashboard to their intended next destination
✓ Positive signal
Exit Rate
14.7%
Agents exit from the Dashboard without completing a task: the priority investigation area
→ Needs investigation
Total Sessions
1.78M
Dashboard visits analysed across the full data window, a sample large enough to support high-confidence conclusions
✓ High confidence data
05 Recommendations

Three changes I drove into the roadmap

These weren't just suggestions: I presented these findings to product leadership and advocated for each recommendation specifically, tying them directly back to the data.

01
Reschedule all maintenance windows
Data shows peak logins are Sunday 21:00 and Monday 22:00. Maintenance should run only between 9:00am and 12:00pm on Tuesdays and Wednesdays, when usage is lowest, and agents must receive advance notifications.
High Priority
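The proposed policy is easy to encode as a guard. A sketch of the rule as stated (the function and its thresholds are illustrative, not production scheduling code):

```python
# Illustrative guard derived from the recommendation: allow a maintenance
# window only on Tue/Wed starting between 09:00 and 12:00. A sketch of
# the stated policy, not production scheduling code.
from datetime import datetime

LOW_USAGE_HOURS = range(9, 12)  # window may start 09:00-11:59

def maintenance_allowed(start: datetime) -> bool:
    # weekday(): Mon=0 ... Sun=6; Tuesday=1, Wednesday=2
    return start.weekday() in {1, 2} and start.hour in LOW_USAGE_HOURS

print(maintenance_allowed(datetime(2024, 6, 4, 10, 0)))  # Tue 10:00 -> True
print(maintenance_allowed(datetime(2024, 6, 2, 21, 0)))  # Sun 21:00 (peak) -> False
```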
02
Add quick links to the Dashboard
Since agents return to the Dashboard as a navigation hub (46.4% of 3rd clicks), surface the 3 most-accessed sections (Bettor Balance, Bettor Details, and Manage Payment) as quick links directly on the dashboard.
High Priority
03
User interviews to understand exits
The 14.68% exit rate from the dashboard needs qualitative research to understand intent. Some exits may represent satisfied task completions; others may signal friction. We need to know which before designing a solution.
Medium Priority
06 Reflection

What I'd do differently & what I learned

What worked well
Leading with data built credibility
By anchoring every finding in Tableau and Matomo numbers before introducing qualitative context, I was able to earn buy-in from a sceptical engineering and business audience. The data made the "soft" UX recommendations feel rigorous and actionable.
What I'd do differently
Tie recommendations to business metrics earlier
I'd want to estimate the revenue or retention impact of each recommendation upfront, e.g. "reducing exits by 5% equates to X fewer support tickets/week." This would have strengthened the prioritisation argument and made the business case more compelling to leadership.
What I learned about leadership
Research is only valuable if it drives action
The research was thorough, but the real skill was in presenting it persuasively. I learned to tailor the narrative for each stakeholder audience: engineers heard data, product heard prioritisation rationale, and business heard revenue protection.
Senior-level takeaway
Define the question before collecting the data
The most valuable decision I made was framing 3 specific research questions before opening Tableau. This kept the investigation focused, prevented analysis paralysis, and made it clear when we had enough data to make a confident recommendation.
Next Case Study
Tracka · Civic Technology for Government Accountability
View Next Project →