Feb 13, 2026
Building an AI Onboarding Flow in Minutes: What Happened vs. The Old Tour
Christophe Barre
co-founder of Tandem
I built an AI onboarding flow in 10 minutes and increased completion rates from 11% to 64% versus our static product tour.
TL;DR: The rigid onboarding flow achieved low completion because it couldn't adapt to individual user needs. It was replaced with a contextual AI Agent built with Tandem. The new experience understands user context: it explains features when users need clarity, guides them through workflows when they need direction, and executes tasks when they need speed. Result: completion jumped from 11% to 64% in the first week, very little engineering time was required, and users actually finish setup.
The problem: Why traditional onboarding flows fail to activate users
Traditional onboarding experiences struggle to get users through critical setup steps. Industry data shows that median completion rates for multi-step onboarding hover around 34%, and analysis of millions of user interactions found that while many users start onboarding, complex B2B workflows see far worse numbers. The team's integration onboarding was completing at just 11%.
Users hit friction at decision points rigid flows don't anticipate. They need to understand why a step matters, not just where to click. A step says "Enter your API key" but doesn't explain where to find it. Traditional onboarding can't adapt to user context. Technical users and non-technical users need different help, but the flow shows everyone the same seven steps.
The integration setup required users to understand basic OAuth concepts, find credentials in a third-party platform, and map custom fields. The team needed something that could explain concepts, guide through decisions, and execute configuration work based on what each user needed.
The experiment: Building a contextual AI copilot in 10 minutes
The goal was to create an AI Agent that helps users complete Salesforce integration. The agent needed to explain OAuth requirements when users were confused, guide through the authentication flow, and auto-fill technical fields when users granted permission. The objective was to determine if this was possible without engineering involvement.
Salesforce integration was selected because it represented the highest-friction workflow. Users needed it for activation, but many abandoned before completing the connection. If the AI Assistant could handle this complexity, it could handle anything in the product.
The timer started. Access was already available to a Tandem workspace with the JavaScript snippet installed (that technical setup took several minutes). Now the experience just needed to be configured.
Minutes 0-2: Installation and defining the "Aha" moment
The Tandem dashboard was accessed. The platform showed a list of pages in the application where the agent could be deployed. Navigation to the integrations page allowed placement of an AI Assistant there.
First decision: What's the aha moment? For this use case, it's when the user sees their Salesforce data flowing into the product. That meant the agent needed to help users complete every step between "I want to connect Salesforce" and "My contacts are syncing."
The core workflow was defined: Authenticate with Salesforce, select which objects to sync (Contacts, Leads, Opportunities), map custom fields, and verify the connection. Multiple decision points, some of which typically caused users to abandon.
The platform asked for a description of what the agent should help users accomplish. The input was: "Help users connect their Salesforce account, explain OAuth authentication, guide field mapping decisions, and auto-complete technical configuration when they need speed."
Minutes 3-7: Configuring Explain, Guide, and Execute modes
This is where Tandem's approach differs from traditional tours. Instead of scripting "show tooltip on button X, then tooltip on field Y," configuration focused on how the AI should respond based on user intent.
The platform uses three interaction modes, and each needed to be defined based on when it applies:
Explain mode handles conceptual questions. When a user asks "Why do you need OAuth access?" the agent explains: "OAuth lets us securely access your Salesforce data without storing your password. We'll request read access to Contacts and Leads, which you can revoke anytime in Salesforce settings." Context was added about what permissions are requested and why, giving the AI the knowledge to explain in plain language.
Guide mode provides step-by-step direction through complex decisions. For field mapping, users needed to understand which Salesforce fields corresponded to the data model. The agent was configured to show a visual guide: highlight the Salesforce field dropdown, explain what data is being looked for, then highlight the corresponding field. The agent adapts based on what's visible on screen, so if a user already completed step one, it picks up at step two.
Execute mode is the differentiator. Tandem's agents can fill forms, click buttons, and complete multi-step workflows by directly manipulating the DOM. For technical fields like API endpoints and webhook URLs, execution was enabled. For technical setup steps, the agent can take action directly, filling forms, clicking through configuration flows, and applying default settings so users don't have to navigate complex screens themselves.
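Tandem's internals aren't public, but the core idea of Execute mode can be sketched in a few lines: the agent emits a declarative action plan, and a small runner applies each step to form fields. In this sketch a plain object stands in for the DOM so the logic is self-contained; every name and value below is an illustrative assumption, not Tandem's API.

```javascript
// Illustrative sketch of Execute mode: a declarative action plan applied
// to form fields. A plain object stands in for the DOM here; in a browser
// the runner would use document.querySelector and set input.value instead.
const plan = [
  { action: "fill", field: "callbackUrl", value: "https://app.example.com/oauth/callback" },
  { action: "fill", field: "syncObjects", value: "Contacts,Leads,Opportunities" },
  { action: "toggle", field: "syncEnabled" },
];

function executePlan(form, plan) {
  for (const step of plan) {
    if (step.action === "fill") {
      form[step.field] = step.value;        // browser equivalent: input.value = ...
    } else if (step.action === "toggle") {
      form[step.field] = !form[step.field]; // browser equivalent: checkbox.click()
    }
  }
  return form;
}
```

Separating the plan from the runner is what makes the "Should I proceed?" permission prompt possible: the agent can show the user the plan before a single field is touched.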
Different user questions were tested in preview mode. "How do I find my Salesforce credentials?" triggered Explain mode. "Connect my Salesforce account" triggered Execute mode with permission prompts.
Most of these five minutes were spent writing explanations for common confusion points. As with any digital adoption platform, this content work is required. The difference: the focus was on what users need help with, not on technical targeting.
Minutes 8-10: Testing the conversational flow live
Time to see if it worked. The product was opened in a new incognito window with navigation to the integrations page. A "Do it with AI" button appeared in the corner when hovering over an interface card.
The button was clicked and a test query was typed: "I want to connect Salesforce but I'm not sure what permissions you need."
The agent opened in a side panel and explained which Salesforce objects would be accessed and why, giving the user full transparency before proceeding.
Good. Explain mode worked. Now Guide mode was tested: "Help me connect it."
The agent walked through authentication, highlighting the "Connect Salesforce" button, then explaining what would happen when clicked. After authentication (using a test Salesforce org), it guided through field mapping with visual cues showing which dropdowns to use.
Finally, Execute mode was tested: "Just set this up with standard settings."
The agent asked: "I can auto-configure your Salesforce connection with our recommended field mappings. This will connect Contacts, Leads, and Opportunities using standard Salesforce field names. Should I proceed?"
Yes was clicked. The agent handled the technical steps, filling in the callback URL, selecting field mappings, enabling sync, and completing what typically takes users several minutes. A success message confirmed the Salesforce connection was active and data had begun syncing.
The timer hit 10 minutes. A working AI Agent had been created that could explain, guide, or execute based on user needs. No engineering was required. Just describing what users needed help with and letting the AI figure out how to provide it.
The following section shows exactly how this experience differs from the old approach.
The showdown: AI Agent vs. traditional onboarding flow
A comparison of what users experienced before and after:
| Feature | Traditional Onboarding Flow | Tandem AI Agent |
|---|---|---|
| User interaction | Passive (Next, Next, Next) | Conversational (ask questions, get contextual answers) |
| Flexibility | Linear (breaks if user deviates) | Non-linear (adapts to user path and current screen state) |
| Capability | Show and point at UI elements | Explain concepts, guide through decisions, or execute tasks |
| Setup time | Weeks (design, build, test, deploy) | Minutes (describe workflow, configure responses) |
The experience difference is stark. The team's old onboarding flow showed a tooltip: "Click the OAuth button to authenticate." If users didn't understand OAuth, too bad. If they clicked away to check something, the onboarding broke. If they needed help with a specific field mapping decision the flow didn't cover, they were stuck.
The AI Assistant met users where they were. Confused about permissions? It explained. Ready to authenticate but unsure about the steps? It guided. Just wanted the connection done? It executed. Same workflow, completely different paradigm.
One architectural note: When your product changes, Tandem adapts automatically to most interface updates. Product teams focus on refining content rather than technical maintenance.
The results: How conversational onboarding impacts activation
The team's results were modest in scale compared to what other companies have achieved with this approach. Aircall saw a 20% increase in user activation for self-serve accounts after deploying Tandem. Their challenge was similar: a cloud phone system with complex configuration that smaller teams couldn't figure out alone. "Features that required human explanation are now self-serve," Aircall's team reported. The AI copilot helped users make technical decisions (like choosing number types) by understanding their business context and recommending appropriate options.
Qonto achieved even more dramatic results at scale: 375,000 users were guided through their new interface, and feature activation rates doubled for multi-step workflows. Account aggregation jumped from 8% to 16% activation. In two months, over 10,000 users engaged with insurance products and premium card upgrades (revenue streams that were previously dormant).
The pattern across these implementations is consistent: contextual AI Agents drive higher activation because they provide the right type of help at the right moment. Sometimes users need explanation (understanding why a feature matters). Sometimes they need guidance (step-by-step direction through decisions). Sometimes they need execution (completing repetitive technical steps). A static tour can only do one thing. An AI Agent trained on your product adapts based on what each user actually needs.
How to replicate this build for your product
You can build a similar experience for your highest-friction workflow. Here's your implementation checklist:
Identify your activation blocker: Find the workflow where users abandon most frequently. Look for setup tasks requiring technical knowledge, multi-step configurations with decision points, or features users need but can't figure out.
Define the aha moment: What outcome signals successful activation? For a collaboration tool it might be first teammate invited. For analytics software it could be first dashboard created.
Map the explain/guide/execute decision points: Walk through the workflow and identify where users need different help:
Explain mode for conceptual confusion (What is OAuth? Why these permissions?)
Guide mode for decision complexity (Which fields to map? What sync settings?)
Execute mode for technical repetition (Fill API endpoints, complete authentication)
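This explain/guide/execute triage can be sketched as a routing function. A real agent would classify intent with an LLM; the keyword heuristics and function name below are illustrative assumptions, not Tandem's implementation.

```javascript
// Illustrative sketch: route a user message to explain/guide/execute mode.
// Keyword heuristics stand in for the LLM-based intent classification a
// production agent would use; none of these names come from Tandem's API.
function routeMode(message) {
  const text = message.toLowerCase();
  // Imperative "do it for me" phrasing suggests Execute mode.
  if (/\b(just|auto|set (this|it) up|do it|connect my)\b/.test(text)) {
    return "execute";
  }
  // Conceptual questions ("why", "what is") suggest Explain mode.
  if (/\b(why|what is|what's|how does|permissions?)\b/.test(text)) {
    return "explain";
  }
  // Everything else defaults to step-by-step guidance.
  return "guide";
}
```

Running the article's three test queries through this sketch gives "explain" for the OAuth question, "execute" for "Just set this up with standard settings," and "guide" for "Help me connect it."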
Configure the agent: Install Tandem's JavaScript snippet (engineering does this once, under an hour), then configure through the no-code interface. Describe what users are trying to accomplish, write explanations for common questions, and define which tasks can be automated.
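For context, "installing the JavaScript snippet" typically means dropping one script tag into your app's HTML shell. Tandem's actual embed code isn't shown in this post, so the URL and attribute below are placeholders, not the real snippet; the real one comes from the Tandem dashboard.

```html
<!-- Hypothetical example only: the real snippet and workspace key come
     from the Tandem dashboard. URL and attribute names are placeholders. -->
<script async src="https://cdn.example.com/tandem.js" data-workspace="YOUR_WORKSPACE_ID"></script>
```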
Test with real users: Deploy to a small segment first. Watch session recordings, read feedback, iterate. The agent learns from interactions and context.
Measure activation impact: Track completion rates before and after. Measure time to first value. Monitor support ticket volume for the workflow you automated. At Aircall, deployment took days, and the product team manages ongoing updates without engineering support.
Quick ROI calculation: If your product has 10,000 annual signups at 35% baseline activation and $800 ACV, lifting activation to 42% (a 20% relative improvement like Aircall achieved) generates 700 incremental activations worth $560,000 in new ARR annually. Implementation takes days, not months.
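The ROI math above can be checked directly by plugging the paragraph's figures into a few lines of JavaScript (variable names are ours; ACV is annual contract value):

```javascript
// ROI check using the figures from the paragraph above.
const signups = 10_000;
const baselineRate = 0.35;
const liftedRate = 0.42; // 20% relative lift over the 35% baseline
const acv = 800;         // annual contract value in dollars

const baselineActivations = Math.round(signups * baselineRate); // 3,500
const liftedActivations = Math.round(signups * liftedRate);     // 4,200
const incrementalActivations = liftedActivations - baselineActivations; // 700
const newArr = incrementalActivations * acv; // $560,000 in new ARR
```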
Schedule a 20-minute demo to see Tandem guide users through your actual onboarding workflow. You'll watch the agent understand user context and provide appropriate help through explain, guide, and execute modes. Bring your highest-friction workflow and we'll show you what's possible.
Specific FAQs
How long does technical setup actually take?
JavaScript snippet installation takes under an hour. Configuration work (defining workflows, writing content, setting up experiences) takes days depending on complexity.
Can the AI Agent break my product?
No. Execution mode requires explicit permission from users and operates within defined constraints you control. If something breaks, the experience reverts to your standard UI and you get notified.
What happens when users ask questions the agent can't answer?
The agent escalates to human support with full context of what's been tried, so your team picks up exactly where the AI left off.
Do I need engineering resources after initial setup?
Minimal. Product teams manage content updates through the no-code interface. As noted in the Aircall case study: "It was ready to run directly out of the box. Tandem just understood our interface."
Is this only for complex B2B products?
Tandem works best for products with real setup requirements (integrations, workflows, permissions, data imports). If your onboarding is genuinely simple, you probably don't need this.
How does pricing compare to traditional DAPs?
Tandem doesn't publish pricing. Backed by $3.8M in funding from Tribe Capital, it positions itself competitively with mid-market digital adoption platforms. Talk to sales for a custom quote based on your user volume.
Key terms glossary
Activation Rate: Percentage of new users who reach their first value moment (complete setup, use core feature, achieve intended outcome).
AI Agent: Software that understands context, makes decisions, and takes action on behalf of users. Also called an AI copilot or AI Assistant. Different from chatbots (which only answer questions) and tours (which only show steps).
Time to First Value: How long it takes a new user to experience the core benefit of your product after signing up.
Contextual Intelligence: Understanding what a user sees on screen, what they're trying to accomplish, and what help they need in that specific moment.
Execute Mode: AI capability to perform actions directly (fill forms, click buttons, complete workflows) rather than just explaining or guiding.
DOM Manipulation: Programmatically changing a web page's content through JavaScript. Enables AI Agents to complete tasks by filling forms and clicking buttons.