Onboarding Metrics That Predict Revenue & Activation

Feb 13, 2026

Christophe Barre

co-founder of Tandem

Data-driven onboarding metrics that predict revenue: Time-to-Value, Activation Rate, Intent Resolution, and PQL velocity matter more than tour completion.

Updated February 13, 2026

TL;DR: Tour completion rates are vanity metrics that mask poor activation, and they vary heavily with tour length: three-step tours achieve 72% completion while seven-step tours drop to just 16%. The metrics that predict revenue are Time-to-Value (how fast users reach their first win), Activation Rate (percentage hitting key milestones), and Intent Resolution (did they accomplish what they came to do). We built Tandem to track outcomes and intent, not just tooltip views, giving you cleaner data signals without engineering dependencies.

High tour completion often signals the opposite of engagement. Users click through to make the interruption stop, not because they're learning. If someone completes your product tour in a few seconds, they haven't had the chance to understand how your product works.

I've seen product teams obsess over optimizing the wrong numbers. They A/B test tooltip copy while the real problem sits invisible in their analytics stack. Traditional platforms measure compliance (did they finish the tour?), not activation (did they do the work?).

We built Tandem to create measurement possibilities that traditional tools can't provide. You can finally track whether users accomplished their intent, not just whether they dismissed your guidance. This article breaks down the four metrics that actually predict revenue and shows how modern AI Agents change what you can measure without writing custom tracking code.

Why "tour completion rate" is the ultimate vanity metric

Tour completion numbers lie. The behavioral reality: users experience tooltip fatigue and dismiss guidance to continue working. When you force interaction with tooltips just to proceed, you create friction dressed up as help. Too many tooltips cause desensitization, where users dismiss everything as quickly as possible just to get back to work.

Three-step tours achieve 72% completion, but add just four more steps and completion drops to 16%. Yet teams celebrate when 60% of users "complete" their onboarding sequence.

Think about what "completion" actually measures. A user can click through seven steps in 15 seconds without reading a single word. Your analytics record a completed tour. Your user still has no idea how to connect their CRM or invite their team. You optimized for compliance when you needed activation.

The correlation breaks down completely when you look at retention. Tour completion and activation rates move independently, because one measures dismissal behavior while the other measures value realization. The average B2B SaaS activation rate sits between 34% and 37.5%, yet median tour completion for five-step tours is also 34%. These numbers align by coincidence, not correlation.

Traditional Digital Adoption Platforms compound this problem by tracking their own UI elements rather than your product's outcomes. Pendo provides detailed analytics on user interactions with in-app guides, but can't verify whether the underlying business task actually happened. They know a user saw your "invite teammates" guide but not whether any teammates were invited.

Old Way (Vanity Metrics)   | New Way (Actionable Metrics)
---------------------------|----------------------------------------------
Tour completion rate       | Activation rate by cohort
Time spent on tooltip      | Time-to-Value (signup to first value action)
Tooltip views              | Intent Resolution (task completed)
Step drop-off analysis     | PQL velocity (days to qualification)

Stop celebrating green completion percentages. Start measuring whether users accomplished their goals.

The 4 onboarding metrics that actually predict revenue

1. Time-to-Value (TTV) vs. Time-to-Aha

These terms get used interchangeably, but the distinction matters. Time-to-Value is the measurable period it takes a customer to realize the value of your product, while the Aha Moment is the instant users experience that value for the first time.

TTV is a clock metric. You can measure it: days from signup to first successful workflow completion, hours from login to first integration connected, minutes from landing page to first report generated.

We designed our AI Agent to accelerate TTV without adding headcount. At Qonto, Tandem helped direct over 100,000 users to discover and activate paid features by understanding context and completing configuration steps users got stuck on. When our AI executes tasks (filling forms, connecting integrations, configuring settings), TTV drops from days to minutes because the friction disappears.

Track this: Median hours from signup to completion of your defined "first value" action, segmented by user role, company size, and acquisition channel.
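As a concrete sketch of that tracking (the event log, field names, and timestamps below are hypothetical, not Tandem's schema), median TTV per segment can be computed with the standard library:

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical records: (user_id, segment, signup_ts, first_value_ts).
events = [
    ("u1", "admin",  datetime(2026, 2, 1, 9, 0),  datetime(2026, 2, 1, 11, 0)),
    ("u2", "admin",  datetime(2026, 2, 1, 10, 0), datetime(2026, 2, 2, 10, 0)),
    ("u3", "member", datetime(2026, 2, 2, 8, 0),  datetime(2026, 2, 2, 9, 30)),
]

def ttv_hours_by_segment(rows):
    """Median hours from signup to the first-value action, per segment."""
    buckets = defaultdict(list)
    for _, segment, signup, first_value in rows:
        buckets[segment].append((first_value - signup).total_seconds() / 3600)
    return {seg: median(hours) for seg, hours in buckets.items()}

print(ttv_hours_by_segment(events))  # → {'admin': 13.0, 'member': 1.5}
```

The same function works for any segmentation column (role, company size, channel); only the tuple field you bucket on changes.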

2. Activation Rate by cohort

Activation rate refers to the percentage of new users who reach a key milestone that signals they've experienced core product value. This is your north star onboarding metric because it directly predicts retention and expansion revenue.

The benchmark reality: For B2B SaaS products, the average activation rate is 36%, and the median is 30%. Leading PLG companies maintain activation rates between 20% and 40%. If you're below 30%, you have a systemic onboarding problem.

Define activation carefully. It's not a universal fixed point but a carefully defined event unique to each product. For a project management tool, activation might be "created first project and added two collaborators within seven days." For a marketing platform, it could be "connected data source and viewed first dashboard within 48 hours."

At Aircall, Tandem lifted adoption of advanced features by 10 to 20% by providing contextual guidance that helped users complete phone system configurations they previously abandoned. Our AI understood what each user was trying to accomplish, explaining features when clarity was needed, guiding them through workflows when direction was needed, and executing configuration tasks when speed was needed.

Track this: Percentage of signups completing your defined activation event within your timeframe, broken down by cohort (trial vs. freemium, SMB vs. enterprise, organic vs. paid).
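A minimal sketch of that cohort breakdown, assuming you have already resolved each signup to a boolean "activated within the window" flag (the data below is made up):

```python
from collections import defaultdict

# Hypothetical signups: (user_id, cohort, activated_within_window).
signups = [
    ("u1", "trial",    True),
    ("u2", "trial",    False),
    ("u3", "trial",    True),
    ("u4", "freemium", False),
    ("u5", "freemium", True),
]

def activation_rate_by_cohort(rows):
    """Share of signups per cohort that hit the activation event in the window."""
    totals, activated = defaultdict(int), defaultdict(int)
    for _, cohort, done in rows:
        totals[cohort] += 1
        activated[cohort] += int(done)
    return {c: activated[c] / totals[c] for c in totals}

print(activation_rate_by_cohort(signups))
```

Comparing these ratios across cohorts is what surfaces a systemic problem: a 60% trial rate next to a 20% freemium rate points at the funnel, not the product.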

3. Intent Resolution Rate (The new AI metric)

This metric didn't exist before conversational AI onboarding. Intent Resolution Rate measures whether users who asked for help (or received proactive assistance) successfully completed the task they came to do.

Traditional platforms can't measure this. They track guide views and clicks on their own UI elements, not whether users accomplished their underlying business goal. AI chatbots face a different problem: They may technically "contain" a conversation by providing any response, but that doesn't mean the customer received actual help.

The distinction is critical. For a conversation to count as "AI agent-handled," the AI has to understand what the customer wants and resolve it without escalating. This is task completion, not conversation closure.

Think about the attribution clarity. When Tandem executes a task for a user (filling a multi-field form, connecting a Salesforce integration, configuring team permissions), you have 100% certainty that the activation step completed successfully. When it guides a user through a workflow and the user completes the final action, you know their intent was resolved. We give you cleaner data than "user viewed tooltip for 3.2 seconds."

At Qonto, in just two months, over 10,000 users engaged with insurance products and premium card offerings, revenue streams that were previously dormant. Each engagement represented a resolved intent: users wanted to understand or activate a feature, our AI helped them accomplish it, and revenue increased.

Track this: For users who interact with your AI Agent, what percentage complete the task they started? Segment by task complexity (explain vs. guide vs. execute) and user tenure.
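A sketch of that segmentation, assuming a hypothetical interaction log where each row records the assistance mode and whether the task actually finished (the records are illustrative):

```python
from collections import defaultdict

# Hypothetical AI interactions: (user_id, mode, task_completed).
# Modes follow the explain / guide / execute split mentioned above.
interactions = [
    ("u1", "explain", True),
    ("u2", "guide",   True),
    ("u3", "guide",   False),
    ("u4", "execute", True),
]

def intent_resolution(rows):
    """Return (overall resolution rate, per-mode resolution rates)."""
    by_mode = defaultdict(lambda: [0, 0])  # mode -> [started, resolved]
    for _, mode, done in rows:
        by_mode[mode][0] += 1
        by_mode[mode][1] += int(done)
    overall = sum(r for _, r in by_mode.values()) / sum(s for s, _ in by_mode.values())
    return overall, {m: r / s for m, (s, r) in by_mode.items()}

print(intent_resolution(interactions))
```

The per-mode split is the useful part: a low guide-mode rate with a high execute-mode rate suggests the workflow is too complex to teach and should simply be automated.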

4. Product-Qualified Lead (PQL) velocity

Product-Qualified Leads are users who demonstrated meaningful engagement with your product, typically by completing multiple activation milestones or using premium features during trial. PQL velocity measures how quickly users cross this threshold.

The speed matters. Users who qualify faster convert at higher rates, signaling stronger intent and better product-market fit for that segment.

We designed our AI Agent to accelerate PQL velocity by removing friction points that slow adoption. Instead of users getting stuck at complex configuration steps and abandoning until "later" (which never comes), the AI guides or executes those steps in the moment. At Sellsy, activation increased after deploying Tandem to guide complex onboarding flows.

Track this: Median days from signup to PQL qualification, broken down by acquisition source and user segment.
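As a sketch of that calculation (users who never qualify are excluded from the median rather than counted as zero; all records below are hypothetical):

```python
from datetime import date
from statistics import median

# Hypothetical users: (user_id, source, signup_date, pql_date or None).
users = [
    ("u1", "organic", date(2026, 2, 1), date(2026, 2, 4)),
    ("u2", "organic", date(2026, 2, 1), date(2026, 2, 10)),
    ("u3", "paid",    date(2026, 2, 2), None),             # never qualified
    ("u4", "paid",    date(2026, 2, 3), date(2026, 2, 8)),
]

def pql_velocity_days(rows, source):
    """Median days from signup to PQL qualification for one acquisition source."""
    days = [(pql - signup).days
            for _, src, signup, pql in rows
            if src == source and pql is not None]
    return median(days) if days else None

print(pql_velocity_days(users, "organic"))  # → 6.0
```

Pair the median with the qualification *rate* per source, since a fast median over a tiny qualified fraction is not a healthy signal.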

How AI agents change what you can measure

From "Did they see it?" to "Did they do it?"

The measurement gap in traditional onboarding tools is fundamental. They track interactions with their own interface but remain blind to whether users accomplished anything meaningful in your actual product.

We built Tandem differently. Our AI Agent sees what's on the user's screen to provide contextual help, and it can fill forms, click buttons, validate inputs, catch errors, navigate users through flows, pull data from your interface, and complete multi-step workflows.

This gives you attribution certainty. When Tandem executes a task, you know with 100% confidence that the action completed successfully. When it guides a user and they complete the final step themselves, you can attribute that activation to the assistance provided. Compare this to a traditional tooltip that might have been viewed for 0.8 seconds before dismissal.

We built the explain, guide, and execute framework because different user intents require different help, and this matters for measurement. At Carta, employees need explanations about equity value with no task execution required. At Aircall, some users need step-by-step guidance through phone system setup, while others need the AI to handle configuration automatically. Tracking which mode resolved each intent gives you product intelligence that tooltips can't provide.

Capturing Voice of Customer at scale

Every interaction with an AI Agent generates qualitative data alongside the quantitative metrics. Tandem captures what users are asking for, revealing what features they're struggling with and providing direct insight into what they actually want.

This is voice-of-customer research happening automatically at scale. When 500 users ask "How do I invite my team?" in their first session, you've learned that your "Add Team Members" button is invisible or confusingly labeled, even if your tour about team invitations shows high completion rates. When users repeatedly ask about Salesforce integration during trial, you've identified your most important activation feature.

Traditional platforms show you where users drop off. We show you why. This qualitative layer transforms how you prioritize product improvements. You're no longer guessing why activation is stuck at 35%. You have transcripts showing exactly where users get confused, what features they can't find, and which workflows feel too complex.

Quick-start guide: Tracking onboarding success without engineering

You don't need six months and two engineers to start measuring what matters. Here's how to ship better onboarding metrics in days.

Step 1: Install the AI Agent (under 1 hour). Add Tandem's JavaScript snippet to your application. No backend changes required, no API integrations needed. The AI appears as a side panel in your interface and works with any modern web framework.

Step 2: Define your "Aha" moment in the dashboard. What's the specific action that signals a user got value? For Qonto, it was activating paid features like insurance products. For Aircall, it was successfully configuring advanced phone system features. Choose one clear milestone that predicts retention. Make it specific and verifiable.

Step 3: Deploy an AI Agent for that specific workflow. Use the no-code interface to create guidance for your activation milestone. Build playbooks that teach Tandem about your product. Decide which mode makes sense: Explain when users need to understand concepts, Guide when they need step-by-step direction, Execute when repetitive configuration creates abandonment. Deploy it surgically to the exact page or workflow where users get stuck.

Step 4: Measure lift in event completion, ignore "views." Stop tracking tooltip impressions. Start tracking the event itself. How many users completed Salesforce connection this week versus last week? What percentage of users who started the workflow finished it? Your dashboard now shows outcomes (tasks completed, activations achieved, intents resolved) instead of interactions (tours viewed, tooltips dismissed, steps clicked).
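The week-over-week comparison in Step 4 reduces to a one-liner over completed events, not views (the weekly counts below are made up):

```python
# Hypothetical weekly counts of the activation event itself
# (e.g. "Salesforce connection completed"), not tooltip impressions.
last_week, this_week = 80, 104

def completion_lift(before, after):
    """Relative week-over-week change in completed events."""
    return (after - before) / before

print(f"{completion_lift(last_week, this_week):.0%}")  # → 30%
```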

Move from "did they see it?" to "did they do it?"

Tour completion percentages are compliance metrics dressed up as engagement. The metrics that predict revenue are Time-to-Value (how fast users reach first value), Activation Rate (percentage hitting key milestones), Intent Resolution (did they accomplish their goal), and PQL velocity (how quickly they cross conversion thresholds). These measure outcomes, not views.

We built Tandem to change what you can track because it understands context and intent. When our AI explains a feature, guides through a workflow, or executes a configuration task, you get clear attribution data about what helped users activate. You're no longer inferring engagement from "time spent on tooltip" or guessing whether users learned anything from your seven-step tour.

The shift is from passive measurement to active assistance. Traditional platforms report on problems after users abandon. Our AI Agent resolves issues in the moment and tells you what happened. Your analytics become clearer because the AI creates structured interaction data rather than fuzzy behavior signals.

If your activation rate sits below 40% and you're still celebrating tour completion numbers, schedule a demo to see how Tandem tracks Intent Resolution and task completion instead of tooltip views.

Frequently asked questions

What is the difference between Activation and Retention?

Activation measures the percentage of new users who reach a key milestone signaling they've experienced core product value, typically within the first 7-30 days. Retention measures whether activated users continue using the product over time (30, 60, 90 days), making activation a prerequisite for strong retention.

How do I calculate Time-to-Value?

Define the specific event that represents "first value" for your product (first report generated, first integration connected, first workflow completed), then track median hours or days from account creation to that event completion.

Can AI measure "Aha" moments automatically?

AI Agents can track when users complete the actions that trigger Aha moments, but you must define what those actions are. The Aha moment varies by product, making automatic detection impossible without product-specific configuration.

Why is tour completion rate considered a vanity metric?

High completion often measures dismissal behavior rather than learning or activation. Users click through in seconds without reading, creating false engagement signals while your actual activation rate remains stuck.

How does Intent Resolution differ from conversation closure?

Intent Resolution verifies the underlying task was completed successfully. Conversation closure simply means the chat ended, which can happen even when the user's problem remains unsolved.

Key terminology

Activation Rate: The percentage of new users who reach a key milestone signaling they've experienced core product value, typically measured within the first 7-30 days after signup.

Time-to-Value (TTV): The period it takes users to complete a defined "first value" action, measured from signup or first login.

Intent Resolution Rate: The percentage of users who interact with an AI Agent and successfully complete the task they came to accomplish, measuring outcome rather than conversation closure or guide views.

AI Agent: AI systems that pull information from multiple sources, understand user intent, and deliver contextual assistance with the ability to see user screens and take actions within the product interface.

Product-Qualified Lead (PQL): Users who demonstrated meaningful engagement with your product by completing multiple activation milestones or using premium features during trial, indicating strong conversion likelihood.

Aha Moment: The situation when users experience product or feature value for the first time, realizing how the product solves their specific pain point or improves their workflow.

Product-Led Growth (PLG): A go-to-market strategy where the product itself drives customer acquisition, activation, and retention, typically through freemium models or free trials enabling self-service adoption.

Keep reading

Feb 13, 2026 · 8 min · AI Segmentation for Personalized User Onboarding Flow
Segment-specific AI guidance adapts onboarding by role and intent, lifting activation 10-20% versus generic product tours.
Christophe Barre

Feb 13, 2026 · 7 min · A/B Test AI Onboarding Flows for Better Activation
Testing and iterating onboarding with AI lets product teams run experiments in minutes instead of sprints to boost activation rates.
Christophe Barre

Feb 13, 2026 · 8 min · Configure AI Onboarding Without Engineering in Minutes
Self-serve onboarding configuration lets you build and deploy AI flows in under 10 minutes without engineering using no-code tools.
Christophe Barre

Feb 13, 2026 · 7 min · Building an AI Onboarding Flow in Minutes: What Happened vs. The Old Tour
I built an AI onboarding flow in 10 minutes and increased completion rates from 11% to 64% versus our static product tour.
Christophe Barre