Close Your 85% PLG Conversion Gap: The PQL Playbook for Sales & Product
Christophe Barre
co-founder of Tandem
Close your 85% PLG conversion gap by defining strict PQL thresholds and timing sales engagement to genuine activation signals.
Updated April 23, 2026
TL;DR: The majority of your trial signups never pay. The fix is not more tooltips or faster sales outreach. It is defining strict Product-Qualified Lead (PQL) thresholds based on real usage milestones, then replacing passive product tours with an AI Agent that explains features, guides users through complex workflows, and executes tasks directly on their behalf. This playbook shows you how to define those signals, build the handoff, and use contextual AI to fill the activation gap.
Most product teams obsess over driving trial signups while ignoring the complex setup workflows that cause the majority of those users to abandon before ever reaching first value. Only 36-38% activate, meaning the majority never see what the product can do. Adding another tooltip to your existing tour will not close that gap. This playbook shows you how to define PQL thresholds that predict conversion, fix the onboarding experience so more users reach those thresholds, and build a sales motion that engages at exactly the right moment.
Identifying PLG's trial user drop-off
You face high trial drop-off because of activation failure, not lead quantity problems. Understanding where and why users abandon tells you which levers to pull first.
Where trial users abandon
B2B SaaS products that require real setup (connecting CRM integrations, assigning permissions, importing data, or configuring multi-step workflows) create friction that simple consumer apps never face. Users arrive with high intent and leave when they hit the first genuine decision point they do not understand.
Passive product tours make this worse, not better. Long tours see abandonment rates exceeding 84%, and static tooltips across B2B applications are frequently dismissed within seconds. The average user who does interact with a tooltip spends 2.1 seconds on it, which is not enough time to read and process guidance content of more than 15 words. Across the industry, multi-step tours see low completion rates, meaning the vast majority abandon without finishing the guided experience.
You will see drop-off concentrate at predictable points: multi-field configuration forms, permission assignment screens, CRM connection flows, and data import steps. These are moments where users need contextual help, not a tooltip pointing at a button they can already see. Our onboarding metrics guide covers which specific drop-off points correlate most strongly with 30-day retention across B2B SaaS categories.
Pure PLG's conversion limits
A 100% self-serve motion works for products with simple setup paths and low ACV. For complex B2B SaaS at $10K+ ACV, pure self-serve leaves significant revenue on the table. PLG motions reach activation rates of 25-40% when time-to-value falls between one and seven days, but that window closes fast. Companies closing that gap are not adding more tours. They are replacing passive guidance with an AI Agent that actually completes the work users cannot do themselves.
PQL signals: Pinpointing sales engagement
Getting the PQL definition right is the highest-leverage work in your trial conversion strategy. A definition that is too broad floods sales with unqualified leads. A definition that is too narrow starves the pipeline.
Defining a product-qualified lead
A Product-Qualified Lead (PQL) is a prospect who has experienced meaningful value in your product through a free trial or freemium model, determined by behavioral usage data rather than demographic signals. As Appcues defines it, a PQL is "a lead who has experienced their 'aha moment'... they've seen the value first-hand." This is a fundamentally different signal than an MQL, qualified by marketing engagement like a webinar download, or an SQL, qualified by a sales rep's judgment after a discovery call.
The conversion advantage is substantial. PQLs close at materially higher rates than MQLs and SQLs. That gap makes precise definition worth significant investment.
PQL, MQL, SQL: When sales engages
The funnel transition should be driven by product behavior, not time elapsed since signup:
Signup (PAL): User creates an account. You have no engagement signal yet. Run marketing nurture only.
Early activity: User logs in multiple times and explores core features. Deploy automated in-app messaging.
Activation threshold (PQL): User completes core setup, activates a key integration, invites a team member, or hits a defined feature-usage milestone. Route target accounts to sales.
Account-level signal (PQA): Multiple users in the same account cross usage thresholds. This is the strongest buying signal you have, a Product-Qualified Account showing organizational buy-in rather than individual exploration.
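The staging logic above can be sketched in code. This is a minimal illustration, not a production lead-routing system; the `AccountUsage` fields and thresholds are hypothetical stand-ins for whatever your analytics stack actually tracks.

```python
from dataclasses import dataclass

@dataclass
class AccountUsage:
    """Hypothetical usage snapshot for one trial account."""
    sessions: int = 0
    features_activated: int = 0
    core_setup_complete: bool = False
    integration_connected: bool = False
    team_invites: int = 0
    users_over_threshold: int = 0  # distinct users past the PQL bar

def funnel_stage(u: AccountUsage) -> str:
    """Classify the account by product behavior, not time since signup."""
    if u.users_over_threshold >= 2:
        return "PQA"             # organizational buy-in: strongest signal
    if u.core_setup_complete or u.integration_connected or u.team_invites > 0:
        return "PQL"             # activation threshold: route target accounts to sales
    if u.sessions > 1 and u.features_activated > 0:
        return "early_activity"  # automated in-app messaging only
    return "signup"              # marketing nurture only

print(funnel_stage(AccountUsage(sessions=6, features_activated=3,
                                core_setup_complete=True)))  # → PQL
```

Note the ordering: account-level signals are checked first, so a multi-user account is routed as a PQA even when no individual user has hit every criterion.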
Identifying high-intent product usage
Specific PQL engagement triggers fall into three categories:
Usage thresholds:
3 or more distinct features activated within the trial period
5 or more active sessions across different days
Core workflow completed end-to-end at least once
Expansion signals:
Team member invited (collaboration intent)
CRM, billing, or data integration connected
Custom configuration saved (investment signal)
API key generated or webhook configured
Stuck signals (escalate to CS or sales immediately):
Same workflow attempted and abandoned 2 or more times
Support ticket submitted during trial
Repeated form validation errors without completion
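The three signal categories above translate naturally into a bucketing function. This is a sketch with illustrative field names and the thresholds quoted in the lists, not a real event schema:

```python
def classify_signals(events: dict) -> list:
    """Bucket raw trial telemetry into usage, expansion, and stuck signals.
    Field names and thresholds are illustrative, not a real schema."""
    signals = []
    if (events.get("distinct_features", 0) >= 3
            or events.get("active_days", 0) >= 5
            or events.get("core_workflow_completions", 0) >= 1):
        signals.append("usage_threshold")
    if (events.get("team_invites", 0) > 0
            or events.get("integrations_connected", 0) > 0
            or events.get("custom_configs_saved", 0) > 0
            or events.get("api_keys_generated", 0) > 0):
        signals.append("expansion")
    if (events.get("workflow_abandons", 0) >= 2
            or events.get("support_tickets", 0) > 0
            or events.get("repeated_validation_errors", False)):
        signals.append("stuck")  # escalate to CS or sales immediately
    return signals

print(classify_signals({"active_days": 5, "workflow_abandons": 2}))
# → ['usage_threshold', 'stuck']
```

A single account can carry multiple buckets at once; a user who is both past the usage threshold and stuck is exactly the kind of PQL that warrants immediate, context-rich outreach.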
For high-ACV accounts, AI Agent analytics can surface these engagement signals, including context of what users asked and where they got stuck, so reps arrive knowing exactly what the user tried and what they need.
Setting PQL thresholds that predict conversion
Thresholds should map to moments that correlate with long-term retention, not just activity volume.
PQL feature activation
Not all feature usage is equal. Track completion of workflows that map to your product's core value proposition, not page views or login counts. For a spend management tool, the trigger is a first receipt uploaded and approved. For a CRM, it is a first pipeline stage moved. As Appcues documents, the right trigger varies by product category. Define yours against the moment users first experience the outcome your product promises, then instrument that event explicitly in your analytics stack.
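Instrumenting the value moment explicitly might look like the following. This is a minimal sketch; `track_activation`, the event name, and the identifiers are hypothetical, and in production the payload would be forwarded to your analytics stack rather than returned:

```python
def track_activation(event: str, properties: dict) -> dict:
    """Build an explicit, named activation event (sketch only).
    Naming the value moment as its own event avoids inferring it
    later from page views or login counts."""
    return {"event": event, "properties": properties}

# Spend management example from the text: first receipt uploaded and approved.
payload = track_activation("activation.first_receipt_approved", {
    "account_id": "acct_123",  # hypothetical identifiers
    "trial_day": 4,
})
print(payload["event"])  # → activation.first_receipt_approved
```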
Our guide to increasing product adoption in 30 days covers practical steps for identifying these moments across different product categories and compressing time-to-first-value to drive higher trial-to-paid conversion.
PQLs from team invites and collaboration
Account-level signals are stronger conversion predictors than individual user signals. When two or more users in the same account cross your usage threshold, you have a Product-Qualified Account (PQA), a much stronger buying signal than a single active user. Research on PQL definitions identifies multi-user activation as a clear indicator of organizational buy-in rather than individual exploration. Route these accounts to sales immediately, even if individual users have not yet hit every threshold criterion.
A critical data quality note: single-user account activity that looks healthy in aggregate can mask significant churn risk. Pace's PQL analysis recommends alerting sellers when usage originates from a small set of users within an account, since broad organizational activation is a stronger predictor of retention than power-user engagement from one person.
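The concentration check described above is straightforward to compute from per-user event logs. A minimal sketch, assuming a simple list of user IDs (one entry per event) and an illustrative 80% alert threshold:

```python
from collections import Counter

def top_user_share(events_by_user: list) -> float:
    """Fraction of account activity from the single most active user."""
    if not events_by_user:
        return 0.0
    return max(Counter(events_by_user).values()) / len(events_by_user)

def concentration_alert(events_by_user: list, threshold: float = 0.8) -> bool:
    """Alert sellers when account usage concentrates in one user."""
    return top_user_share(events_by_user) >= threshold

# 9 of 10 events from one user: healthy in aggregate, risky in distribution.
print(concentration_alert(["alice"] * 9 + ["bob"]))  # → True
```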
Close the PLG-sales gap: Your handoff blueprint
The handoff from self-serve to sales-assisted is where the most conversion value is lost or gained.
PQL handoff criteria and triggers
Sales should engage when accounts show these specific signals, not simply because a trial clock is running out:
ACV potential above $10K based on company size or usage volume
Enterprise buying signals such as multiple stakeholders logged in or admin configuration attempted
Competitive displacement indicators like importing data from a competitor's export format
Explicit expansion intent such as an admin requesting more seats or a higher-tier feature
Clear role division matters as much as the criteria themselves:
Product team: Defines PQL thresholds, monitors activation metrics, owns onboarding playbooks
CS team: Handles stuck signals and complex configuration questions, passes expansion-ready accounts to sales
Sales team: Engages only on high-ACV PQLs and PQAs showing expansion or displacement signals
Marketing: Runs automated nurture for active users below PQL threshold
Essential data for PQL sales handoff
Pass these data points to sales with every PQL: which specific workflows the user attempted, which features they activated, where they got stuck, how many sessions they completed, and which users in the account were involved. Without that context, reps default to feature pitching. With it, they open with the user's specific goal and the exact point where self-serve failed.
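Enforcing that context contract programmatically keeps incomplete handoffs out of the sales queue. A sketch with hypothetical field names mirroring the list above:

```python
def build_handoff_context(account: dict) -> dict:
    """Assemble the sales handoff payload; reject incomplete records
    so reps never receive a PQL without usage context."""
    required = ["workflows_attempted", "features_activated",
                "stuck_points", "session_count", "active_users"]
    missing = [f for f in required if f not in account]
    if missing:
        raise ValueError(f"incomplete handoff, missing: {missing}")
    return {f: account[f] for f in required}

ctx = build_handoff_context({
    "workflows_attempted": ["crm_import"],       # hypothetical values
    "features_activated": ["pipeline_view"],
    "stuck_points": ["field_mapping_step"],
    "session_count": 7,
    "active_users": ["user_456"],
})
print(ctx["stuck_points"])  # → ['field_mapping_step']
```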
Tandem's human escalation capability passes the full context of every AI conversation directly. The rep knows what the user tried to do, what the AI explained or executed, and precisely where the user needs human help, so no conversation starts from zero.
Sales outreach at PQL should focus on value validation, not feature demonstration. The user has already seen the product. The question is whether it solves their specific workflow. Reference the workflow the user attempted, frame the conversation around the business outcome, and offer to complete a complex setup step together rather than scheduling a full demo.
PQL activation strategies: Drive 45%+ conversion
The highest-performing PLG companies run a hybrid motion where self-serve AI handles routine activation and human sales engages only on high-value signals.
High-engagement PQL outreach sequence
Opt-in trials convert at 18.2%, while trials requiring a credit card show materially higher conversion rates. Sales-assisted PQLs, when engaged at the right signal, can achieve strong conversion rates. A structured outreach sequence captures that lift:
Day 0-2: Automated in-app welcome, contextual onboarding AI active
Day 3-5: Behavioral email triggered by specific feature interaction, not a generic drip
Day 5-7: PQL threshold hit, sales alert fires with full usage context attached
Day 7: Sales outreach referencing specific product activity, opening with the user's goal rather than product capabilities
Day 10-14: Value validation conversation focused on completing the specific workflow where self-serve stalled
For users who are active but below PQL threshold, context-triggered in-app messaging achieves higher engagement than static alternatives and shows direct correlation to feature adoption. The goal is to surface the right feature at the moment the user's behavior indicates they need it, not to describe it in a tooltip they will dismiss in 2.1 seconds.
Unblocking stalled PQLs for activation
This is where the explain/guide/execute framework directly drives PQL threshold attainment. Users stall at complex workflows not because they lack intent but because they lack context. An AI Agent embedded in your product addresses this at three levels based on user needs:
Explain (when users lack conceptual understanding): Use explanation when users need to understand why a feature exists or what a decision means before they can proceed confidently. This mode resolves problems where the barrier is not execution complexity but missing context about business implications or technical concepts. At Carta, employees viewing their equity compensation need to understand vesting schedules, strike prices, and tax implications to make informed decisions about exercising options.
Guide (when the path, not the concept, is the barrier): Use guidance when users know what they want to accomplish and understand the underlying concept, but the workflow is non-linear or context-dependent and they need step-by-step direction through a path they should own.
Execute (when time cost exceeds learning benefit): Use execution when users understand the desired outcome, the task is procedurally complex or repetitive, and speed matters more than learning the underlying process.
This applies to multi-field configuration forms, bulk permission assignments, API connection sequences, and repetitive data entry tasks where the user's time is better spent on strategic decisions than mechanical steps. The AI Agent completes these tasks directly on the user's behalf, navigating forms, triggering integrations, and configuring settings while the user focuses on outcomes rather than procedure.
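The explain/guide/execute decision reduces to a small heuristic. This is a simplified sketch of the framework as described above, not Tandem's actual routing logic; in practice the inputs would come from conversation and screen context rather than booleans:

```python
def assist_mode(understands_concept: bool, knows_path: bool,
                mechanical_task: bool) -> str:
    """Pick an assistance mode per the explain/guide/execute framework."""
    if not understands_concept:
        return "explain"  # barrier is missing context, not execution
    if mechanical_task:
        return "execute"  # time cost exceeds learning benefit
    if not knows_path:
        return "guide"    # user owns the path, needs step-by-step direction
    return "none"         # user is unblocked; stay out of the way

print(assist_mode(True, False, False))  # → guide
```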
At Qonto, this approach helped 10,000+ users engage with paid features like insurance and card upgrades in just two months, with account aggregation jumping from 8% to 16% activation. At Aircall, the same AI-assisted onboarding lifted activation for self-serve accounts by 20%, and advanced features that previously required human explanation now complete through self-serve flows. At Sellsy, which serves 22,000 companies, activation lifted 18% after deployment to guide complex onboarding flows.
PQL feedback for better conversions
Every user conversation with Tandem's AI Agent generates voice of the customer data: what features users cannot find, which workflows they attempt and abandon, which explanations they request repeatedly. Product sees which workflows need better onboarding investment. Sales sees which objections surface during trial before the conversion call. Our 90-day CX transformation guide includes the measurement structure for tracking activation lift across cohorts and calculating the revenue impact of improved activation rates.
Avoiding PQL program pitfalls
Common PQL mistakes
The most common PQL errors follow a predictable pattern, and each one has a direct fix:

Setting a threshold that is too low: If your PQL definition is anyone who signed up for a free trial, your team optimizes for signup volume rather than user success. ProductLed's analysis is direct: this causes everyone to focus on the wrong outcome. Tie your threshold to a specific value moment, not activity volume.
Engaging too early: You damage trials fastest when you engage before users hit their activation moment. A rep calling a user who has not completed core setup interrupts someone who needs product help, not a sales pitch, and research consistently shows this accelerates abandonment for high-intent users.
Treating all PQLs as the same sales cycle: GrowthMentor's PLG hybrid analysis confirms that kicking off a high-touch sales cycle when the user needed a workflow question answered is one of the top reasons PQL programs underperform. Route lower-ACV PQLs to automated contextual help first, not a full sales cycle.
Ignoring expired trials with partial engagement: Users who expired without hitting PQL thresholds but showed meaningful engagement during their trial period may respond to targeted re-engagement. Reference their actual product activity rather than sending a generic discount offer, and target the specific workflow where they stalled.
PQL sales engagement: When to offer help?
The tool you use to guide users through complex workflows directly determines how many reach PQL threshold. Here is how the primary options compare:
| Dimension | Tandem | Pendo | Intercom Fin |
|---|---|---|---|
| Core strength | Product integration, action execution, context-aware | Product analytics and guided tours | Customer support and conversational AI |
| Primary use case | Activation and workflow completion | Analytics and passive guidance | Customer support and chatbot |
| Action execution | Yes, fills forms, clicks, configures | No | No |
| Context awareness | Sees user screen and goals in real time | Page-based triggers | Conversational context |
All DAPs require ongoing content work. Just as you manage email campaigns or help doc libraries, you will manage in-app messages, targeting rules, and playbook content regardless of which platform you choose. The operational difference with Tandem is that product teams handle this through a no-code interface without requiring engineering input for technical maintenance, so iteration stays fast when your UI changes.
For a deeper comparison of execution-first versus guidance-only architectures, our Tandem vs. CommandBar breakdown covers the activation impact in detail.
How long does it take to see conversion lift?
Technical setup requires adding a single JavaScript snippet and takes under an hour with no backend changes. Product teams then configure playbooks and write content through a no-code interface before first deployment. Measurable activation lift appears within the first cohort. At Aircall, the team saw a 20% activation lift for self-serve accounts. For teams evaluating whether to build or buy this capability, our in-app AI agent guide covers the build vs. buy tradeoff including the real engineering cost on either side.
Calculate your current activation rate and trial drop-off. If users abandon complex setup workflows and your PQL pool is smaller than your signup volume suggests, schedule a demo to see how Tandem lifts activation for products with real setup complexity. We will show you exactly how Aircall and Qonto moved from passive tours to contextual execution, and the activation numbers that followed.
FAQs
What is a good trial conversion rate for B2B SaaS?
Opt-in trials without a credit card typically convert at around 15-25% according to industry benchmarks, with top-quartile performers reaching higher rates. Trials requiring a credit card typically show higher conversion rates due to reduced friction at the payment step.
How does a free trial differ from a freemium model in conversion strategy?
Free trials create urgency with a time limit that forces a purchase decision before access expires, driving conversion through scarcity. Freemium models rely on feature gates or usage caps to drive upgrades over months, with typical free-to-paid conversion rates of 2.6-5%.
How long does it take to implement an AI onboarding assistant?
Technical setup requires adding a single JavaScript snippet and takes under an hour with no backend changes required. Product teams then configure playbooks and content through a no-code interface before deploying to users, with the timeline depending on the complexity of the workflows they want to support.
What is the difference between a PQL and a PQA?
A PQL is an individual user who has crossed your defined usage threshold and experienced your product's core value. A Product-Qualified Account (PQA) is an account where multiple users have crossed that threshold, which is a materially stronger buying signal indicating organizational buy-in rather than individual exploration.
When should sales not engage a PQL?
Hold off when the user is actively progressing through setup without friction, when ACV potential falls below your threshold for assisted motion, or when usage is driven by a single user with no collaboration signals.
Key terms glossary
Trial conversion rate: The percentage of users who transition from a free trial to a paid subscription, measuring how effectively onboarding delivers the product's core value before the trial expires.
Product Qualified Lead (PQL): A prospect who has experienced meaningful value in your product through defined usage milestones such as completing core setup or inviting team members, rather than through marketing engagement or demographic signals.
Product-Led Growth (PLG): A business methodology where the product itself is the primary driver of acquisition, engagement, and retention, relying on users experiencing value before interacting with a sales team.
Activation rate: The percentage of new users who reach a predefined milestone or "aha moment" within the product. Strong PLG motions target 25-40% activation within the first seven days for complex B2B SaaS products.
Time-to-Value (TTV): The duration between signup and the user's first "aha moment" with the core feature set. For complex B2B SaaS, target TTV under seven days to prevent trial drop-off before activation.
Drop-off rate: The percentage of users who abandon a specific process or funnel step such as a multi-field setup form, identifying the friction points in the user journey that suppress PQL threshold attainment.
Interactive product tour: Software that guides users through an application using UI overlays like tooltips and hotspots, typically passive in nature with an industry-wide completion rate under 5% for multi-step tours.
AI Agent: An embedded agent that understands user context and screen state to explain features, guide through workflows, or execute tasks directly in the product, unlike static tours that point at buttons without completing the work.