Sales Law 2

Demo Against Their Own Work, Not Sample Data

75% of benchmark companies

The old playbook: build a polished demo environment with representative sample data and show it to everyone. What the best companies did instead: use the prospect's own work as the demo input. The moment prospects see their own documents analyzed, their own words reflected back, their own workflow accelerated, they stop evaluating and start planning.

Harvey (PACER tactic): Winston Weinberg describes the exact sequence — cold LinkedIn message to a corporate litigation partner, download their last PACER filing before the call, prompt Harvey to attack their own brief. The moment the brief appeared on screen, they would "instantly read the screen." The prospect stopped evaluating Harvey and started thinking about which workflows to migrate first. The demo took minutes; the cognitive gap closed instantly.

Hebbia: PE fund demos used the prospect's actual deal documents, not generic financial analysis. When the fund partner sees their own deal thesis being cross-referenced against a document set in real time, the product is no longer abstract.

Listen Labs: showed prospects their own marketing performance data analyzed live — specific creative decisions from their actual campaigns surfaced in the UI. The demo was not about what the product does; it was a replay of decisions the prospect had already made.

Gong: early team demos played recordings from the prospect's own sales calls. The sales manager heard their own reps being coached by the product in real time. Generic conversation intelligence becomes personal feedback within the first meeting.

Cognition (Devin): Deployed Engineers run demos against the prospect's actual codebase — the product literally cannot function without connecting to real repositories. The demo is not a simulation; it is Devin working on the prospect's code in real time. This is the strongest structural version of this principle: the product's architecture makes demo-on-own-work the only mode of operation, not a sales tactic.
The mechanism: demos on fictional data ask the prospect to perform an imaginative leap — "imagine if that were your contract." Demos on real data eliminate the leap. The prospect's pattern-matching is already engaged with content they know. Their skepticism has no surface to land on.

Key examples
Harvey · Hebbia · Listen Labs · Gong · Abridge · Cognition
Anti-pattern
Requiring the prospect to "imagine how this would look with your data." Showing a standard demo environment with fictional company names and generic workflows. Saving the custom demo for a follow-up meeting after the prospect is "interested." By then, they are not interested — they are evaluating other vendors who already showed them something real.

Cross-Company Comparison

The exact tactic each company used to demo against the prospect's own work — what data they used, how they obtained it, and what happened in the room

Company: Harvey
Demo tactic: PACER tactic — downloaded the prospect's most recent federal court filing before the call and ran prompts attacking their own brief
Data used: the prospect's own public court filings from PACER (the federal court document repository), obtained before the meeting, not shared by the prospect
Result: "They would instantly read the screen." The prospect stopped evaluating and started thinking about which workflows to migrate. When Harvey hallucinated, the meeting ended; when it was right, the deal was over.

Company: Hebbia
Demo tactic: PE fund demos used the prospect's actual deal documents from current or recent transactions, not generic financial analysis templates
Data used: the prospect's own deal documents, portfolio data, or financial analysis materials — typically sourced from publicly available filings or brought to the session by the prospect
Result: Fund partners saw their own deal thesis cross-referenced against a live document set; the product became concrete, not hypothetical. Tasks that previously required 2–3 hours completed in 2–3 minutes in the room.

Company: Listen Labs
Demo tactic: showed prospects their own marketing performance data analyzed live — specific creative decisions from their actual campaigns surfaced in the platform UI
Data used: the prospect's own campaign data and marketing performance history, brought into the demo environment before or during the meeting
Result: The demo was not about what the product does; it was a replay of decisions the prospect had already made. Cognitive distance collapsed because the prospect recognized the content immediately.

Company: Gong
Demo tactic: early team demos played recordings from the prospect's own sales calls — the sales manager heard their own reps being coached in real time
Data used: the prospect's actual sales call recordings, obtained through a trial recording setup or brought into the demo by the prospect's team
Result: Generic conversation intelligence became personal feedback about the prospect's own team within the first meeting; the "imagine if that were your data" leap was eliminated entirely.

Company: Abridge
Demo tactic: live ambient recording demonstration in clinical settings — a physician spoke a patient encounter aloud and the system generated a draft note in real time
Data used: live physician speech — the physician's own words, clinical vocabulary, and documentation style — captured and structured in real time during the demo
Result: Physicians experienced the product on their own clinical language and workflow, not on a scripted example; the 5–10 minutes saved per encounter became personally visible rather than abstractly stated.

How This Law Worked in Practice

Evidence from each benchmark company where this law was observed — how it manifested, what the mechanism was, and what sources confirm it.

Harvey

Winston Weinberg's PACER demo tactic is the most precisely documented instance of demo-against-own-work in this cohort. The mechanics: before every call with a corporate litigator, Weinberg went to PACER — the public federal court document repository that every Big Law attorney must use — and downloaded their most recent filing. He then built prompts designed to attack the brief: find weaknesses in the argument, identify citation risks, surface counter-arguments. He ran these prompts through Harvey before the call began. In the meeting, he showed the output on screen.

Weinberg's description: "I would basically download the last thing that they submitted to court. And then I would try to come up with prompts that were like, 'This is bad.' And because they're a litigator and I'm basically attacking something that they just wrote — they would instantly read the screen. It was risky because sometimes Harvey would hallucinate and then it would just be over. But the times that they got it right, it was over." (Weinberg, Long Strange Trip, January 2026.)

The mechanism is precise: the prospect has professional and reputational investment in the document on screen. They are not evaluating a demo environment — they are looking at their own work being analyzed. The cognitive engagement is immediate and visceral because the content is already deeply familiar. The imaginative leap required by a sample-data demo — "imagine if that were your contract" — is eliminated entirely. Either the analysis is correct, in which case the deal is effectively done, or it is wrong, in which case the meeting ends. The binary nature of the outcome is itself the point: it tests Harvey in conditions that matter, not in controlled demo conditions.

Weinberg ran this approach cold — he explicitly noted that his own former firm (O'Melveny & Myers) was "customer 200 or something like that," confirming that the PACER tactic was deployed on strangers, not warm relationships.
The tactic was a trust accelerant, not a relationship-closure mechanism. It compressed weeks of vendor evaluation into a single Zoom session.
Key evidence
PACER tactic verbatim: 'I would basically download the last thing that they submitted to court...they would instantly read the screen. It was risky because sometimes Harvey would hallucinate and then it would just be over. But the times that they got it right, it was over.'
Demo ran cold on strangers — own firm was customer #200, confirming tactic was not dependent on prior relationship
Hyper-personalized demo stated as a non-negotiable standard: 'I don't think there is any excuse for someone who is building an AI product and trying to sell to not do hyper-personalized demos.'
Demo rebuilt around prospect's actual work for every partner meeting — recent cases, contract templates, M&A deals; prospects invited to 'fight with the model'
First 50 enterprise customers were all referrals — demo quality was sufficient to generate word-of-mouth rather than requiring broad outbound

Hebbia

Hebbia's demo approach was structurally identical to Harvey's, adapted for the private equity due diligence context. PE fund partners and their analysts manage data rooms of thousands of documents per deal. Hebbia demos used documents that the fund had worked with — their own deal materials, portfolio company financials, or publicly available filings from transactions they knew — rather than generic financial analysis examples. The prospect encountered the product performing analysis on content they already understood deeply.

The effect was the same as the Harvey PACER tactic, but operating at the scale of institutional document processing rather than individual brief analysis. A PE partner seeing their own deal thesis cross-referenced against a real document set in real time experiences a qualitatively different evaluation than a partner watching a demo on fictional company data. The question shifts from "would this work on our documents?" to "how do we deploy this on our next deal?" George Sivulka's framing of finance as a buyer: "Finance is the slowest moving, most lethargic Leviathan...unless you're providing outsized alpha or real value, in which case, the minute that there's something real, finance moves faster than any other industry." (Sivulka, 20VC, January 2025.) The demo on real documents was the mechanism for proving that the value was real.

The SVB crisis in March 2023 produced the most compressed version of this dynamic at scale. When Silicon Valley Bank collapsed, Hebbia helped PE clients map their entire portfolio's banking exposure across thousands of documents within hours. This was not a demo — it was the product working on the most urgent real-world analytical problem those clients had ever faced. The resulting word-of-mouth within the densely networked PE community drove ARR growth from $900K to $10M in calendar year 2023. The "demo on own work" principle, when deployed in a live crisis, becomes indistinguishable from a reference.
Hebbia's "cite first, generate second" methodology — surfacing verbatim source documents and specific evidence before producing synthesis — is a direct response to the trust requirements of showing prospects their own data. A fund partner evaluating Hebbia on their own deal documents needs to be able to trace every output claim to a specific passage in a specific document. The methodology makes the demo auditable, which is the only kind of demo that works in a community where professional accountability is the purchase criterion.
Key evidence
PE fund demos used prospect's actual deal documents — not generic financial analysis — so the product became concrete rather than hypothetical
Tasks previously requiring 2–3 hours completed in 2–3 minutes in real document sets — labor replacement visible within the demo itself
SVB crisis March 2023: Hebbia helped PE clients map portfolio banking exposure across thousands of documents within hours — demo-on-real-work at maximum urgency
ARR 11x in calendar year 2023 ($900K → $10M) — growth driven by word-of-mouth in tightly networked PE community after real-work proof events
'Cite first, generate second' methodology: verbatim evidence surfaced before synthesis — makes demos on real documents auditable for high-accountability buyers

Listen Labs

Listen Labs' demo approach exploits a structural property of their product: the platform analyzes qualitative research data, and enterprises always have past research data sitting in agency reports, survey files, and campaign post-mortems. Showing a prospect their own marketing performance data analyzed in the platform UI is therefore not a technical feat — it is a product demonstration that happens to use the most persuasive possible dataset: content the prospect recognizes, decisions they remember making, findings they can validate against their own recollection.

Alfred Wahlforss described the conversion dynamic in practical terms: once a marketing insights director sees their own recent campaign research surfaced and cross-referenced in minutes — work that previously required a 6–12 week agency engagement — the question is not "does this work?" but "which agency contracts can we cancel first?" Romani Patel of Microsoft described the pre-product reality: "It takes 4 to 6 weeks to get to insights. By the time we get to them, either the decision has been made or we lose out on the opportunity." (Microsoft case study, Listen Labs.) The demo on the prospect's own data makes that 6-week latency viscerally visible because the prospect recognizes the data and knows exactly how long it actually took to produce.

The "cite first, generate second" methodology that Alfred Wahlforss articulated on the Greenbook Podcast (September 2024) is directly applicable to the demo-against-own-work principle. Listen Labs surfaces verbatim customer quotes and grounding evidence before generating synthesis or recommendations. When the prospect's own data is the input, this methodology produces a demo where every AI-generated insight is immediately traceable to something the prospect said, wrote, or commissioned. The epistemological requirement of professional researchers — "how do I know this finding is real?" — is answered within the demo itself, using evidence the prospect can verify from memory.

The demo's speed is itself the conversion mechanism. Listen Labs' Microsoft case study documented a 50th anniversary customer story project that moved from 6–8 weeks to one day, at one-third of traditional cost. That compression ratio — shown live on the prospect's own data — collapses the imaginative leap that generic demos require. The prospect is not imagining what the product would do with their data; they are watching it do it, faster than they believed was possible.
Key evidence
Demo used prospect's own marketing performance data — specific creative decisions from actual campaigns surfaced in the platform UI
Romani Patel (Microsoft Senior Research Manager) verbatim: 'It takes 4 to 6 weeks to get to insights. By the time we get to them, either the decision has been made or we lose out on the opportunity.'
Microsoft case study: 50th anniversary project from 6–8 weeks to 1 day; 100+ interviews at one-third of traditional cost — speed visible in demo on real data
'Cite first, generate second' methodology: verbatim evidence before synthesis — every insight traceable to specific customer content the prospect can verify from memory
Demo as conversion event: 'A live demo of the AI conducting an interview is viscerally convincing. It removes the imagination leap required when selling software.'

Gong

Gong's demo-against-own-work mechanic was embedded in the product's design, not just the sales motion. The product recorded sales calls and analyzed them. Demonstrating this to a sales manager required only one thing: playing a recording of their own team's calls and showing the analysis in real time. No sample data. No fictional rep with a fictional objection. The sales manager heard their own people, recognized the specific dynamics, and watched the product surface patterns they had only intuited before.

The effect on the demo experience was structural. Sales managers evaluating Gong were not comparing call recording and transcription features on a spec sheet — they were watching their own reps being coached by a product that had just processed conversations from their own organization. The question "does this work?" was answered within the first ten minutes of the demo by content the sales manager knew was real. Eilon Reshef's PMF signal from the alpha cohort — "9 out of 10 complaints were how come you didn't even record this call?" — reflects the same dynamic: once the product was demonstrated on real calls, users became angry when it was absent, not skeptical about whether it worked.

Chris Orlob's extension of this principle to Gong Labs was equally deliberate. The first viral piece of content — an analysis of 25,537 sales calls from 17 anonymous customer organizations — worked because it was analysis of real sales conversations, not hypothetical models of sales behavior. Orlob's framing: "Sales leaders had never seen any data validating what actually works in sales" — nothing beyond research from the 1980s. The content converted because it reflected patterns from real calls that sales leaders recognized from their own experience.

The product-led demo was also a qualification mechanism. Gong required prospects to bring a small number of actual call recordings to evaluation sessions. Prospects who would not allow their calls to be analyzed were not the ICP. Prospects who agreed — and then watched the analysis surface patterns they recognized — converted at high rates because the product had already proven itself on data they trusted.
Key evidence
Demo used prospect's own sales call recordings — sales manager heard their own reps being coached by the product in real time, eliminating the 'imagine if that were your data' requirement
Eilon Reshef PMF signal: '9 out of 10 complaints were how come you didn't even record this call?' — product had become essential from first exposure to real-data demos
Gong Labs first viral piece: analysis of 25,537 actual sales calls from 17 customer organizations — real data content converted because sales leaders recognized the patterns from experience
Amit Bendov: trial close — 11 of 12 alpha customers agreed to pay; the trial put Gong into production on real calls, producing the 'instantly essential' response
Category name 'Revenue Intelligence': 'revenue' was literally in the CRO's job title — demo content on real revenue calls landed in a buyer's own domain vocabulary

Abridge

Abridge's demo-against-own-work mechanism is built into the product architecture. Ambient AI documentation works by listening to a physician's actual patient encounter and generating a draft note. There is no sample data equivalent for this use case — demonstrating the product inherently requires recording real speech. When Shiv Rao or a health system implementation team demonstrated Abridge to a physician group, the demo consisted of a clinician speaking a realistic clinical scenario aloud, in their own clinical vocabulary, using their own documentation habits, and watching the system generate a draft note in real time.

This structure eliminates the evaluation friction present in every other enterprise AI demo. The physician evaluating Abridge does not need to project their workflow onto fictional data — they watch their own words become structured documentation. The immediate feedback is: does this note capture what I actually said? Do the clinical terms appear correctly? Is the SOAP structure right for my specialty? These are questions the physician can answer instantly because they generated the input. Shiv Rao's description of the ideal product state as "good air conditioning — when it's set right you're not aware of it. You're just comfortable. You're just in the moment focused on other things" is a description of what the demo should feel like: the technology becomes invisible and the physician's attention stays with the patient-facing portion of the encounter.

The pilot structure that Abridge uses — 1–3 months, 15–160 clinicians, measured outcomes — is a large-scale version of the same principle. It is not a controlled experiment on synthetic data. It is a deployment on the actual clinical documentation workflow of real physicians at a real health system, measuring real outcomes (time saved per encounter, after-hours documentation reduction, note quality ratings) that the health system's own clinical leaders can verify. Sutter Health CDO Laura Wilt described the implementation: mid-March start to mid-April go-live, 100+ clinicians across all specialties and markets — three weeks from start to production across the real clinical environment. That timeline was possible because the product was being evaluated on exactly the work it was designed to perform.

The emotional proof that Abridge generates — physicians texting about eating dinner with their families, writing "love letters" to their CMIOs, reversing resignation decisions — comes from real clinical encounters, not from a controlled demo environment. Alastair Erskine at Emory reported receiving love letters from 500 doctors. That evidence base is real-work data generated at scale, and it is the strongest possible version of demo-against-own-work: the customer's own daily experience becomes the proof.
Key evidence
Demo inherently required real physician speech — ambient documentation has no sample-data equivalent; clinician watched their own words become structured documentation in real time
Sutter Health CDO Laura Wilt: mid-March start to mid-April go-live, 100+ clinicians, all specialties — real-work pilot at scale
Emory 'love letters': Alastair Erskine wrote that 500 doctors sent love letters saying Abridge was safe to their practice, their marriage, and their mental health — real-work proof at scale
UNC Health CMIO: physician who had written a resignation letter chose not to submit it after using Abridge — individual real-work proof at highest emotional intensity
Shiv Rao on product ideal: 'good air conditioning — when it's set right you're not aware of it' — demo should feel like the technology is invisible within the physician's own workflow