🟣 HubSpot Service Practical Textbook — 2026 Edition
Chapter 10

Service analysis
Dashboard design/KPI tracking/Report utilization

“You can't improve what you can't measure.” When support teams are driven by data rather than intuition, improvement cycles speed up dramatically. Which KPIs should you track? Which dashboards should you build, and for whom? Which reports should you bring to regular meetings? Getting this design right lets you demonstrate support ROI to management, prioritize your team's work with numbers, and detect problems before they surface. This chapter systematically covers hierarchical KPI design, dashboard mockups, a standard report catalog, and analysis cadences.

📖 Estimated reading time: 30 minutes
🎯 Target: CS managers, RevOps, and managers in charge of reporting
📅 March 2026 edition

📋 Contents of this chapter

  1. 10-1 Hierarchical KPI design: three layers for management, managers, and agents
  2. 10-2 Core KPI definitions, calculation formulas, and benchmarks
  3. 10-3 Dashboard design (for whom, what to show, and how to show it)
  4. 10-4 Standard report catalog (8 types)
  5. 10-5 Analysis cadence and translation into improvement actions
Section 10-1

Hierarchical KPI design: three layers for management, managers, and agents

Not everyone needs to see every KPI. Management doesn't need to check ticket counts every day, and agents don't need to track NRR (net revenue retention). By splitting the design into three layers that define who looks at which indicators, and why, each role can focus only on the data it needs for decision-making. A flood of KPIs is the same as seeing nothing at all.

📊 Three tiers of KPIs: metrics to track by role

Layer 1: Management (monthly/quarterly)
Business-impact indicators: NPS, CSAT achievement rate, renewal rate, NRR (net revenue retention), AI resolution rate, support cost per ticket, ticket volume trend
Viewed by: CEO / CFO / CS leader / RevOps

Layer 2: Manager (weekly)
Team management quality and efficiency indicators: FRT (first response time), TTR (time to resolution), SLA achievement rate, CSAT by agent, ticket count by channel, ticket count by category, unresolved (aging) ticket count
Viewed by: Support Manager / CS Manager / Team Lead

Layer 3: Agent (daily)
Individual productivity and quality indicators: tickets you own, your CSAT score, time remaining until SLA deadline, tasks due today, average reply time (individual)
Viewed by: each agent (Help Desk personal view)
The “not helpful” rate on KB articles, ticket categories with low CSAT, and comments from NPS detractors are the best data for showing where to invest in improvements. Establish a cycle of analyzing them monthly and converting the findings into improvement actions.

No one will use a dashboard crammed with KPIs. Management wants to know how support contributes to company revenue, not the average ticket response time. Before designing each dashboard, write a one-sentence statement: “What do you decide when you look at this dashboard?” This naturally filters out unnecessary indicators.

Section 10-2

Core KPI definitions, calculation formulas, and benchmarks

If KPIs are not precisely defined, “resolution time” can mean different things to different teams, or the measurement start point can drift, making the data incomparable. Below are the definitions, calculation formulas, and industry benchmarks for the core KPIs to track with Service Hub.

FRT — First Response Time
First reply time
The time from when a ticket is created until the agent sends the first reply. The first touchpoint that determines whether a customer feels cared for.
FRT = first reply time − ticket creation time (optionally counted within business hours only)
Goal: many teams target email replies within 4 hours and chat replies within 1 minute; within 30 minutes is standard for urgent tickets.
⚠️ FRT deteriorates → Check routing design, staff utilization rate, and AI primary response settings
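The FRT formula above can be sketched in a few lines of Python. The ticket records and field names (`created_at`, `first_reply_at`) are illustrative only; in practice they would come from a Service Hub ticket export, and business-hours clipping is omitted for brevity.

```python
from datetime import datetime

def frt_hours(created_at: datetime, first_reply_at: datetime) -> float:
    """Raw first-response time in hours (no business-hours clipping)."""
    return (first_reply_at - created_at).total_seconds() / 3600

# Made-up sample tickets for illustration.
tickets = [
    {"created_at": datetime(2026, 3, 2, 9, 0),  "first_reply_at": datetime(2026, 3, 2, 10, 30)},
    {"created_at": datetime(2026, 3, 2, 11, 0), "first_reply_at": datetime(2026, 3, 2, 17, 0)},
]

frts = [frt_hours(t["created_at"], t["first_reply_at"]) for t in tickets]
avg_frt = sum(frts) / len(frts)
breaches = sum(1 for h in frts if h > 4)  # 4h e-mail target from the text
print(f"avg FRT: {avg_frt:.1f}h, over 4h target: {breaches}")
```

Counting only business hours would replace `frt_hours` with a function that walks the interval and sums time inside the support window.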
TTR — Time to Resolution
Resolution time (average)
The time from ticket creation until the ticket is closed; the total wait a customer experiences before final resolution.
TTR = Closing time − Ticket creation time (measured by category)
Goal: general inquiries within 24 hours, technical issues within 72 hours. Setting separate targets per category is important.
⚠️ Categories with long TTR → Consider enhancing KB, strengthening staff skills, and reviewing escalation routes
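Because the text recommends measuring TTR per category, a minimal sketch might group resolution durations before averaging. The categories and timestamps below are made-up sample data, not Service Hub fields.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative tickets: TTR = closing time - ticket creation time.
tickets = [
    {"category": "how-to",    "created": datetime(2026, 3, 1, 9), "closed": datetime(2026, 3, 1, 20)},
    {"category": "how-to",    "created": datetime(2026, 3, 2, 9), "closed": datetime(2026, 3, 3, 9)},
    {"category": "technical", "created": datetime(2026, 3, 1, 9), "closed": datetime(2026, 3, 4, 9)},
]

# Group resolution times (in hours) by category, then average each group.
ttr_by_cat = defaultdict(list)
for t in tickets:
    ttr_by_cat[t["category"]].append((t["closed"] - t["created"]).total_seconds() / 3600)

avg_ttr = {cat: sum(v) / len(v) for cat, v in ttr_by_cat.items()}
print(avg_ttr)  # average TTR in hours per category
```

Categories whose averages exceed their targets (24h general, 72h technical) are the ones to investigate first.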
SLA — Service Level Agreement
SLA achievement rate
Percentage of tickets that met the configured FRT/TTR targets (SLA) within the deadline. Tracks, in numbers, the promises the team has committed to keep.
SLA achievement rate = Number of tickets responded within the SLA deadline ÷ Total number of tickets × 100 (%)
Goal: 95% or more is the standard for many SaaS teams. If the rate starts dropping below 95%, immediate root-cause analysis is required.
⚠️ Decreased SLA achievement rate → Check overworked staff, backlog of unrouted tickets, and the reality of the SLA settings themselves
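A minimal sketch of the SLA achievement rate, including the 95% review threshold mentioned above. The `met_sla` flag is a stand-in for however your ticket export marks SLA compliance.

```python
# Illustrative tickets: True means the ticket was answered within its SLA deadline.
tickets = [
    {"id": 1, "met_sla": True},
    {"id": 2, "met_sla": True},
    {"id": 3, "met_sla": False},
    {"id": 4, "met_sla": True},
]

# SLA achievement rate = tickets within deadline / total tickets * 100.
met = sum(1 for t in tickets if t["met_sla"])
sla_rate = met / len(tickets) * 100

ALERT_THRESHOLD = 95  # the text's standard value for SaaS teams
needs_review = sla_rate < ALERT_THRESHOLD
print(f"SLA achievement rate: {sla_rate:.0f}% (review needed: {needs_review})")
```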
CSAT — Customer Satisfaction
Customer satisfaction achievement rate
Percentage of respondents who answered “satisfied” or “very satisfied” in the survey sent after ticket closure. The most actionable indicator, as it directly reflects the quality of each agent's responses.
CSAT achievement rate = (number of 4-point + 5-point answers) ÷ total number of answers × 100 (on a 5-point scale)
Goal: 85% or more. Breaking the score down by category and agent pinpoints where to improve.
⚠️ Agents with low CSAT by person in charge → Check coaching opportunities and Copilot usage status
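The CSAT formula, broken down per agent as the text suggests. Agent names and scores below are invented sample data.

```python
from collections import defaultdict

# Illustrative (agent, score) survey responses on a 5-point scale.
responses = [
    ("tanaka", 5), ("tanaka", 4), ("tanaka", 5),
    ("suzuki", 3), ("suzuki", 5), ("suzuki", 2),
]

by_agent = defaultdict(list)
for agent, score in responses:
    by_agent[agent].append(score)

# CSAT achievement rate = (#4s + #5s) / total answers * 100, per agent.
csat = {a: sum(1 for s in v if s >= 4) / len(v) * 100 for a, v in by_agent.items()}
low_performers = [a for a, rate in csat.items() if rate < 85]  # 85% goal from the text
print(csat, low_performers)
```

Agents in `low_performers` are candidates for the coaching check described above, not for blame.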
AI Resolution Rate
AI self-solving rate
Percentage of conversations that Breeze Customer Agent completes without escalating to a human. Reflects the richness of KB and the quality of learning sources.
AI resolution rate = Number of conversations completed by AI ÷ Total number of conversations reached by AI × 100 (%)
It typically starts at 20-30% right after rollout; many teams aim for 50-70% as the KB matures.
⚠️ AI resolution rate is sluggish → Identify frequently-occurring topics that are not covered and add/update KB articles
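A sketch of the AI resolution rate formula. The `resolved_by_ai` flag is hypothetical; in practice it would reflect whether Breeze Customer Agent closed the conversation or escalated it to a human.

```python
# Illustrative conversation log: True = AI closed it, False = escalated.
conversations = [
    {"resolved_by_ai": True},
    {"resolved_by_ai": True},
    {"resolved_by_ai": False},
    {"resolved_by_ai": True},
    {"resolved_by_ai": False},
]

# AI resolution rate = conversations completed by AI / total reaching AI * 100.
ai_rate = sum(c["resolved_by_ai"] for c in conversations) / len(conversations) * 100
print(f"AI resolution rate: {ai_rate:.0f}%")
```

Tracking this rate over time against KB update dates shows whether KB investment is paying off.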
Ticket Volume Trend
Ticket trend
Track not just the raw ticket count but its rate of change versus the previous month and year. It is important to distinguish whether an increase comes from “more customers” or from “a product problem.”
Ticket trend = this month's tickets ÷ last month's tickets × 100 (%), plus tickets per customer (ticket count ÷ customer count)
Even if the total ticket count rises, a falling “tickets per customer” ratio is evidence of quality improvement.
⚠️ Rapid increase in specific categories → Check the impact of product bugs, KB holes, and new feature releases
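The two trend metrics side by side, with invented numbers, showing how a rising ticket count can coexist with an improving per-customer ratio:

```python
# Illustrative monthly figures (not real benchmarks).
this_month_tickets, last_month_tickets = 360, 300
customers_now, customers_prev = 1200, 900

# Month-over-month ticket trend in percent.
mom_pct = this_month_tickets / last_month_tickets * 100

# Tickets per customer separates "more customers" from "more problems".
per_customer_now = this_month_tickets / customers_now
per_customer_prev = last_month_tickets / customers_prev
quality_improving = per_customer_now < per_customer_prev

print(f"MoM: {mom_pct:.0f}%, per-customer: {per_customer_now:.2f} "
      f"(was {per_customer_prev:.2f}), improving: {quality_improving}")
```

Here volume grew 20%, yet each customer filed fewer tickets, so the growth is customer-driven rather than a product problem.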
Section 10-3

Dashboard design (for whom, what to show, and how to show it)

HubSpot's dashboards are a “canvas for reporting.” They are created under Report → Dashboard → Create new. You can add each report as a widget, set automatic refresh, and schedule recurring email delivery. A completed mockup of a manager dashboard is shown below.

Service Team Manager Dashboard — March 2026 (Weekly View)
FRT average: 2.4h (▲ improved −0.6h)
TTR average: 18.2h (▲ improved −3.1h)
SLA achievement rate: 91% (▼ −4pt, needs review)
CSAT: 88% (▲ +3pt)
AI resolution rate: 54% (▲ +7pt)
Number of tickets by category (this month)
How to use / features: 142
Technical malfunctions: 90
Billing / contracts: 55
Integrations / API: 47
Onboarding: 33
Weekly Ticket Trend (Past 8 Weeks)
[Bar chart: weeks W1–W8] Spike in W4–W5: technical inquiries concentrated immediately after the v3.2 release
Performance by agent (this month)
Agent | Tickets owned | CSAT | FRT avg | TTR avg | SLA
Keiko Tanaka | 87 | 94% | 1.8h | 14.2h | Met
Taro Yamada | 92 | 89% | 2.6h | 19.8h | Met
Sakura Suzuki | 74 | 76% | 3.9h | 28.4h | Not met
Ken Sato | 68 | 91% | 2.1h | 16.7h | Partially missed
⚡ Use the CSAT display by person as a coaching tool—for growth, not competition.

When disclosing per-agent CSAT scores, state the purpose clearly: it is “to quickly identify and coach members who need support,” not “to rank agents against each other.” For agents with low CSAT, the productive approach is to check whether they use Copilot's reply suggestions and habitually reference the KB, and then give them opportunities to build those skills.

Section 10-4

Standard report catalog (8 types)

HubSpot Service Hub ships with many standard reports, but a common complaint is not knowing which one to use. Below is a summary of 8 ready-to-use standard reports and the questions they answer. All of them can be selected under Report → Report, in the Service category.

Ticket volume/trend
① Number of tickets (time series)
Graph the trends in the number of newly created tickets on a daily, weekly, and monthly basis. Ideal for visualizing spikes in numbers after a product release or campaign.
“When and why did the number of tickets increase?”
response quality
② Average FRT/TTR (period comparison)
Displays monthly FRT/TTR trends as a line graph. Drill down by category, agent, or channel to locate problem areas.
“Which categories or agents are slow to reply or resolve?”
SLA management
③ SLA achievement rate report
Track FRT-SLA and TTR-SLA achievement rates weekly and monthly. Break the data down by priority and agent to see exactly where violations occur.
“For which agents, priorities, and time periods are SLAs being missed?”
customer satisfaction
④ CSAT score (by agent/category)
In addition to overall CSAT, displays the score breakdown by agent and category as a bar graph, so you can instantly spot where low scores cluster.
“Which agents or categories have low customer satisfaction?”
AI utilization
⑤ Customer Agent Resolution Rate Report
Displays the number of conversations resolved by AI, the number of escalated conversations, and the resolution rate over time. The effectiveness of KB investment can be measured by comparing changes in AI resolution rate with the timing of KB updates.
“Which KB categories contribute the most to AI resolution rates?”
knowledge base
⑥ KB article performance
Lists page views, feedback score, and post-view ticket creation rate for each article. “Articles with a high ticket-creation rate after viewing” are the ones that most need improvement.
“Which KB articles contribute to self-solving and which ones are not working?”
agent productivity
⑦ Performance summary by agent
Lists ticket count, FRT, TTR, CSAT, and SLA achievement rate for each agent. Serves as the factual basis for 1-on-1s and coaching plans.
“Who on the team needs support?”
CS/cancellation prevention
⑧ Health score distribution report
Tracks the monthly health-score distribution (counts and percentages of danger, caution, and healthy accounts) across the accounts you own. Measures the results of CS activities by whether the number of danger accounts is rising or falling.
“Are the number of accounts with high cancellation risk increasing? Are CS activities working?”
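As a rough sketch of the distribution this report shows, assuming each account carries a custom health-score band (the band names and data here are illustrative):

```python
from collections import Counter

# Hypothetical per-account health bands; in HubSpot these would come
# from a custom health-score property on the company record.
accounts = ["healthy", "healthy", "caution", "danger", "healthy", "caution"]

dist = Counter(accounts)
pct = {band: n / len(accounts) * 100 for band, n in dist.items()}
print(dist, pct)  # counts and percentages per band
```

Comparing this month's "danger" count against last month's answers the report's question directly.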
Section 10-5

Analysis cadence and translation into improvement actions

Nothing changes just by looking at the data. The only way to turn analysis into improvement actions is to build the cadence of “who sees which report, when, and decides what” into the organization's calendar.

every day
morning
Agents: See today's priorities on your personal dashboard
Check SLA time remaining, unanswered tickets, and tasks due today for the tickets you own. Handle tickets in the SLA danger zone first. Time required: 5-10 minutes.
weekly
monday morning
Manager: team weekly review (FRT/TTR/SLA/aging tickets)
Check last week's FRT/TTR/SLA achievement rates. Follow up with agents whose individual CSAT is low. If any category shows a sudden spike, analyze the cause and create tasks for KB updates and workflow adjustments. Time required: 30 minutes.
monthly
1st Monday of the month
CS leader: Monthly service review (all KPIs, NPS, AI resolution rate, health distribution)
Review of all KPI trends, month-to-month comparison of NPS/CSAT, correlation between AI resolution rate and KB fulfillment level, trend of number of health "danger" cases, creation of VoC feedback summary for product. Action: Set the top 3 improvement priorities as next month's improvement themes. Time required: 2 hours.
quarter
beginning of period
For management: Service ROI report (renewal rate, NRR, support cost, AI return on investment)
Present a one-page executive summary of NPS trends, renewal rate, NRR (net revenue retention), AI-driven support cost reductions, and the tickets-per-customer ratio at the company-wide meeting. Action: agree with management on next quarter's CS investment priorities. Time required: 3 hours to prepare the document + 15 minutes to present.
✅ Incorporate “what to do after seeing the data” into the meeting agenda

Instead of merely reporting “SLA was 91%, down from last week” in the weekly review, structure the agenda so the meeting decides: What caused the SLA decline? What will we change within the next week? Who owns the fix? By confirming the data and committing to improvement actions in the same meeting, analysis becomes a tool for decision-making rather than reporting.

📌 Chapter 10 Summary

KPIs are designed in three layers: management, manager, and agent.

Management tracks only NPS, renewal rate, and NRR; managers track FRT, TTR, SLA, and CSAT; agents track only their own tickets and SLA time remaining. “All metrics for everyone” produces dashboards that no one uses.

Precisely define core KPIs and unify measurement points

FRT, TTR, SLA, CSAT, AI resolution rate: analysis is only as accurate as the team's shared understanding of these five definitions and formulas. If “resolution time” means different things to different members, you accumulate data that cannot be compared.

Dashboards are designed after writing in one sentence what to look at and decide.

No one uses a dashboard crammed with KPIs. Writing down “what decision will be made by looking at this dashboard” before designing each one naturally eliminates unnecessary indicators.

Use 8 types of standard reports and choose according to the question you want to answer.

Ticket volume, FRT/TTR, SLA, CSAT, AI resolution rate, KB performance, per-agent performance, and health distribution: once you know the “question” each report answers, you can pick the right report for any meeting without hesitation.

Fix the analysis cadence on a calendar and decide on "actions after viewing" as a set

Put the daily 5-minute, weekly 30-minute, monthly 2-hour, and quarterly executive reporting cycles on the organization's calendar. Don't stop at “looking at the data”: decide in the same meeting who will change what, and by when.

Person-specific CSAT is a coaching tool—use it for growth, not competition.

Rather than blaming agents with low CSAT, check their Copilot usage, KB reference habits, and skill gaps, and offer support. A culture where data functions as a “tool for growth” rather than a source of fear leads to long-term team improvement.

Next Chapter
Chapter 11: Collaboration design with Sales and Marketing — Turn support data into profit →