Sales Engineers: How to Use Claude Code to Build Presentations That Matter

March 31, 2026

If you have spent any time in cybersecurity sales engineering, you have witnessed this: a smart SE opens ChatGPT and types “create a 12-slide presentation on zero trust architecture,” only to receive a response that sounds polished, yet will bore a technical audience inside three minutes.

The problem is not that AI struggles with writing, but rather the misconception that generating a deck is equivalent to designing one.

If you have ever engaged in vibe coding (developing functionality by articulating intent and letting an AI agent execute the implementation across multiple files), the underlying concept will feel intuitive. Expect resistance from senior SEs along familiar lines: everything must be designed and written by hand. They are not wrong about the craft. But just as software developers resisted AI coding tools before embracing them, SEs will have to adapt and start embracing the new tools. Used well, AI helps you create more effective presentations in less time.

Why the “Generate a Deck with ChatGPT” Approach Fails

Asking a language model to create a comprehensive presentation on a topic is asking it to determine your argument, select your evidence, structure the narrative, tailor your message to your audience, establish your product positioning, and polish the writing, all in one pass. Even the most skilled human would struggle to juggle those tasks simultaneously. As a result, the model’s output often lacks substance, uniqueness, and impact.

Neuroscience offers an explanation for why this approach falls flat with the audience, and why it matters even more when a deck is built with ChatGPT. The neuroscientist Paul D. MacLean proposed a hierarchical model of the brain with three distinct levels. In my own words, applied to our topic: the reptilian brain explains why polished-but-generic decks die in the room.

Your deck was written by a neocortex (the neomammalian brain) and it speaks to the neocortex: complex arguments, nuanced positioning, detailed evidence. But the audience’s primitive reptilian brain, which I like to think of as the ‘crocodile brain’, receives the message first and decides whether the neocortex should spend energy processing it. The croc brain has three possible reactions: ignore the message, label it as dangerous, or send it up to the higher thinking areas for deeper analysis.

A deck full of abstractions, category overviews, and smooth transitions triggers the first response. The croc brain says “nothing new here, nothing threatening, nothing I want,” and the audience starts thinking about dinner or their next meeting. That deck that opens by telling the CISO why cybersecurity is important and why he should care about securing his data with an EDR? Forgotten.

The deck needs to trigger desire and tension within the first few minutes, or the higher brain never gets the message. And triggering desire and tension requires novelty: novelty and surprise are the fastest way to get the neocortex to process and remember your pitch.

At its core, ChatGPT functions by predicting the most probable next word. This means it is fundamentally wired to generate the safest, most generic presentation possible. It is the exact opposite of what we actually need.

The Three-Layer Architecture

My system has three layers, and each one does what it is actually good at.

• Obsidian is the project memory. It stores the brief, the research, the claims, the slide cards, the reviews, and the rehearsal materials as structured markdown files with YAML frontmatter. Properties give every note a type and status. Backlinks and graph view show you which claims connect to which evidence. Canvas gives you a visual argument map. Nothing lives only in chat.

• Claude Code is the automation layer. It reads and writes files in the Obsidian vault, runs multiple agents in parallel, and manages the workflow across stages. This is where the multi-agent architecture earns its keep: you are not having a conversation with one model, you are dispatching specialized workers to handle different jobs simultaneously.

• The human is the editor-in-chief. You own the editorial line, the thesis, the positioning, the final cuts, and every “should we say this?” judgment. The agents propose. You decide.

This separation matters. When one tool tries to do everything (research, structure, draft, review, remember, and build), it does all of them at about 60%. When each layer handles what it is built for, the compound result is significantly better. Getting to 80% or 90% is what wins the deal and earns you “trusted advisor” status.

The Operating Philosophy

Position → Discover → Prove → Control

This workflow runs on a simple sequence: position the problem correctly, discover what this audience actually needs to believe, prove it with enough specificity to change their mind, and control the frame so the proof actually lands.

If you speak with a psychologist, they will tell you that the frame is the location, the context, the emotional undertones, and the presentation of the facts. The same principle applies whether you are trying to convince someone that the water is safe and they will not drown, or that deploying an EDR will not suddenly make all their laptops slow.

This is where most weak technical presentations fail. They start proving before they have positioned the conversation. They pitch before they understand the audience. They dump information without managing attention. The result is a deck that sounds competent, covers the right territory, and still does not move the room.

• Position comes first. Before you build slides, you need to decide what kind of story you are telling, what market frame you are operating in, and why this conversation matters now. That decision shapes everything downstream: which competitors matter, which strengths are relevant, and which proof points will actually register. Get the frame wrong and even strong material feels generic. Get it right and the audience understands the stakes almost immediately.

• Discovery comes next. Not generic discovery. Audience discovery. What does this room already believe? Where are they skeptical? What would a CISO need to hear that an architect would not? What would a practitioner reject on sight? Until you know the audience’s current mental model, you are mostly guessing at what counts as proof.

• Prove it. This is where most decks need more discipline. Every section should earn its place by changing a belief, resolving a tension, or making a risk feel concrete. A slide that cannot answer “what does this prove?” is usually just taking up space and wasting time. The goal of a presentation is not to cover a topic. The goal is to move the audience from one conclusion (I don’t need this product) to another.

• Control is what makes the rest of it work. You can have the right thesis and the right evidence and still lose the room if the pacing goes flat, the tension disappears, or the talk slips into vendor-safe abstraction. Attention has to be managed. Energy has to be managed. Frame has to be managed. The audience needs a reason to care before they are willing to process the details.

The workflow is where those ideas stop being theory and start becoming a build process.

How Claude Code’s Multi-Agent System Works (and Why It Matters Here)

If you have used Claude Code for software engineering, you know it can spawn subagents: specialized workers that handle focused tasks and report back to the main session. The same capability transforms presentation work.

Claude Code gives you three agent types that map cleanly onto presentation workflow:

• Explore agents are fast, read-only workers optimized for scanning files and finding information. Use them to survey your research folder, locate specific evidence, or audit the state of your project.

• Plan agents gather context and draft implementation strategies. Use them to analyze your brief and propose argument structures, or to map the gap between your audience’s current beliefs and where you need them to land.

• General-purpose agents handle complex multi-step tasks with full tool access. Use them for drafting slide batches, running hostile reviews, or normalizing messy source material into structured notes.

The key architectural move is parallel execution. You can spawn multiple agents simultaneously to work on independent tasks. While one agent is drafting slides 4–6, another can be running a skeptic review on slides 1–3, and a third can be auditing your claims folder for unsupported assertions. The main session aggregates the results, and you decide what to keep.

This is the vibe coding analogy made concrete. In software, you describe the intent (“add authentication to this endpoint”) and the agent handles the implementation across files. In presentation work, you describe the intent (“draft the core argument batch with evidence from the research folder, no product positioning yet”) and the agent handles the structured labor. You stay the editor. The agents are your team.

Good vibe coders will tell you what those of us who have built cloud architecture and CI/CD pipelines already discovered: build your frame first (the unit test) and iterate. Without it, the AI has no way to know whether what it is building is AI slop or good content.

The Project Folder: Your Presentation as a Repository

Every presentation gets its own folder in your Obsidian vault. Treat it like a codebase.

```
talks/
  cloud-runtime-security/
    00-brief.md
    01-thesis.md
    02-audience-delta.md
    03-not-this-talk.md
    04-product-bridge.md
    story.canvas
    sources/
      gartner-cloud-security-2025.md
      field-observations-runtime-gap.md
      ...
    claims/
      claim-01-visibility-gap.md
      claim-02-agentic-identity.md
      ...
    slides/
      slide-01.md
      slide-02.md
      ...
    reviews/
      skeptic-practitioner.md
      anti-pitch-detector.md
      clarity-editor.md
      synthesis.md
    rehearsal/
      opening-options.md
      qa-bank.md
      rehearsal-script.md
      cut-to-20.md
    export/
```
Every file has YAML frontmatter with a type and status property. This metadata is not busywork: it is what makes the agents effective and lets them pull only what they need into their context window. When you tell Claude Code to “find all claims with status: proposed that lack linked evidence,” it can actually do that without reading every single file, because the metadata is structured and machine-readable.

The Obsidian vault is the single source of truth. Not the chat. Not your memory. The files.
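The metadata query described above can be sketched as a small script. This is a minimal illustration, not part of the workflow’s tooling: the property names (`type`, `status`, `evidence`) and the flat key–value frontmatter parser are assumptions based on this article’s conventions, and a real vault would use a proper YAML library.

```python
# Sketch: flag claim notes with status "proposed" that list no evidence.
# Property names and the minimal frontmatter parser are illustrative.
import re
from pathlib import Path

def parse_frontmatter(text: str) -> dict:
    """Parse a minimal YAML frontmatter block (flat key: value pairs only)."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}
    props = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            props[key.strip()] = value.strip()
    return props

def unsupported_claims(claims_dir: Path) -> list[str]:
    """Return claim filenames marked 'proposed' whose 'evidence' field is empty."""
    flagged = []
    for note in sorted(claims_dir.glob("*.md")):
        props = parse_frontmatter(note.read_text(encoding="utf-8"))
        if props.get("type") == "claim" and props.get("status") == "proposed":
            if not props.get("evidence") or props["evidence"] == "[]":
                flagged.append(note.name)
    return flagged
```

An Explore agent does the equivalent of this in seconds, which is why the frontmatter discipline pays off.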

The Nine Gates: A Staged Workflow with Agent Orchestration

The workflow forces your presentation through nine gates. Each gate has a clear deliverable, and the model cannot skip ahead. This is the discipline that prevents the “polished but generic” failure mode.

Here is how the gates work, and where multi-agent orchestration makes each one faster without sacrificing quality.

What Happens When You Skip the Gates

Here is a pattern that repeats across SE teams. Someone has a conference talk due in ten days. They open a chat, paste their topic, and ask for a deck. The model produces twelve slides that cover the right territory: attack surface evolution, identity gaps in agent workflows, detection challenges. The slides read well. The SE rehearses twice and walks on stage.

The talk lands at about a six out of ten. The audience nods but does not engage. The Q&A is thin. The post-talk conversations are polite rather than substantive. When the SE reviews the recording later, they notice the problem: the deck explains a category instead of advancing an argument. It sounds like three different analyst reports merged into one narrative. There is no moment where the audience’s assumptions get challenged. There is no “I never thought about it that way.”

That is the cost of skipping the gates. The deck was fluent… and forgettable.

Gate 1: The Brief (Positioning Baseline)

Nothing happens until 00-brief.md exists and you have approved it. The brief locks your audience, setting, desired outcome, thesis, current audience beliefs, target beliefs, product-positioning rules, banned tropes, and failure modes.

The brief must include a deliberate market frame selection — which category does this talk position your product within? The frame determines your competitive set, which features matter, and which proof points are relevant. A runtime security product framed as “next-gen endpoint protection” competes against CrowdStrike. The same product framed as “cloud workload defense” competes against Wiz. Different frame, different competitors, different arguments. The brief is where you make that choice consciously instead of letting the model default to whatever frame sounds most natural.

The brief should also answer the “why now?” question using three market forces: what economic shift, social behavior change, or technology inflection makes this talk urgent right now rather than interesting-in-general? “Cloud runtime security” is a topic. “Agentic AI workloads are creating runtime identity problems that did not exist eighteen months ago, and the attacker tooling has already adapted” is a why-now frame built on technology and threat-landscape forces. If your brief cannot answer “why now?” with specific forces, the talk will feel informational rather than urgent… and urgency is what gets past the crocodile brain.
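Pulling those requirements together, a brief skeleton for this article’s running example might look like the following. The frontmatter properties and bullet labels are illustrative, not a fixed schema:

```markdown
---
type: brief
status: draft
---

# 00-brief: Cloud Runtime Security Talk

- Audience: security architects and engineering leads
- Setting: 25-minute conference slot
- Desired outcome: follow-up conversations about runtime identity gaps
- Thesis: agentic AI workloads create runtime identity problems that did not exist eighteen months ago
- Market frame: cloud workload defense, not next-gen endpoint protection
- Why now (forces): agentic workloads (technology), adapted attacker tooling (threat landscape)
- Product-positioning rules: no product mention before the core argument lands
- Banned tropes: "AI is changing everything", threat hype without governance
- Failure modes: category overview, analyst-report mashup
```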

Gate 2: Source Normalization

Every piece of external input — analyst reports, competitor docs, customer notes, field observations, conference themes — becomes a structured source note in sources/. Each note gets frontmatter properties: type: source, source_kind, confidence, relevance, and supports (linking to claims).
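For example, the Gartner note from the folder tree above might carry frontmatter like this. The values are illustrative, and the link syntax assumes Obsidian wikilinks:

```markdown
---
type: source
source_kind: analyst-report
confidence: high
relevance: core
supports:
  - "[[claim-01-visibility-gap]]"
---

# Gartner Cloud Security 2025

Key excerpts relevant to the runtime visibility claim go here, each with a page reference.
```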

Gate 3: Thesis Memo

Before any outline, produce two notes: 01-thesis.md (the sharpest version of your argument) and 03-not-this-talk.md (what the talk explicitly is not).

That second note is the anti-drift mechanism. For a cybersecurity talk, 03-not-this-talk.md might reject: “AI is changing everything” framing, model benchmarks as the core story, threat hype without governance, and product demo disguised as thought leadership.

Gate 4: Audience Delta (Discovery Integration)

Create 02-audience-delta.md with the audience’s default mental model, your target mental model, expected objections, and the specific belief shifts required.

This is where the discovery framework transforms the gate from a guessing exercise into a structured analysis. Map the audience by role: the CISO in the room cares about risk quantification and board-level language, the architect cares about integration complexity and operational overhead, the practitioner cares about daily workflow and alert fatigue. Each role has different objections, and those objections fall into four categories — clarification (they need more info), objection (they disagree), stall (they are not ready), and test (they are probing your credibility). Your audience delta should anticipate all four types, because your talk needs to handle them structurally, not just in Q&A.
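A compact way to capture this in 02-audience-delta.md is one section per role, with the four objection types called out explicitly. A sketch, with illustrative content:

```markdown
## CISO
- Current belief: EDR coverage means runtime risk is handled
- Target belief: agentic workloads create identity gaps that EDR does not see
- Clarification: "How is this different from workload protection?"
- Objection: "Our board metrics already cover cloud risk"
- Stall: "We revisit tooling next fiscal year"
- Test: "What attacker tradecraft have you actually seen in the field?"
```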

Gate 5: Argument Map

Only now do you create story.canvas — the visual story map. Use Canvas with constrained node types: thesis, claim, evidence, example, objection, slide candidate, and product bridge. Canvas stores its data as .canvas JSON files, which means the argument map is machine-readable.
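Because the canvas is plain JSON (Obsidian follows the open JSON Canvas format), an agent can read and edit the argument map directly. A minimal sketch with one claim node linked to the thesis; the exact fields follow the JSON Canvas spec, and the contents here are illustrative:

```json
{
  "nodes": [
    { "id": "n-thesis", "type": "text", "x": 0, "y": 0, "width": 320, "height": 80,
      "text": "THESIS: runtime identity is the new visibility gap" },
    { "id": "n-claim1", "type": "file", "x": 0, "y": 160, "width": 320, "height": 80,
      "file": "talks/cloud-runtime-security/claims/claim-01-visibility-gap.md" }
  ],
  "edges": [
    { "id": "e1", "fromNode": "n-claim1", "toNode": "n-thesis", "label": "supports" }
  ]
}
```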

Gate 6: Slide Cards (Proof Structure)

Before writing any polished copy, create one note per slide in slides/. Each slide note must declare its purpose, proof, memory hook, visual concept, and product role. It must also specify what to cut if time is short.

This is the key anti-slop move. The model cannot hide weak thinking behind fluent prose because every slide has to declare its job before it gets to write anything.

This is also where Cohan’s Great Demo! structure pays off. Each slide card should follow a proof logic: what claim does this slide make, what evidence supports it, and what should the audience believe differently after seeing it? Cohan’s sequence — opening context, capability demonstration, key message synthesis — maps directly to how you order your slide cards. The opening batch establishes context and makes the problem personal (Dunford’s market frame in action). The middle batch proves your claims with specifics. The closing batch synthesizes what was proven and bridges to action. Every slide card that cannot answer “what does this prove?” is a candidate for cutting.

Cohan’s “key situation” concept is especially powerful here: instead of showing a generic capability, bridge it to a scenario the audience recognizes from their own work. A slide that says “our agent detects privilege escalation” is a feature. A slide that says “here is what happens when a compromised service account requests admin credentials at 2 AM and your on-call engineer has six minutes to respond” is a proof event.

Each slide card should also pass Klaff’s hot cognition test: does this slide create desire, tension, or both? Desire is “I want that outcome.” Tension is “I need to know what happens next.” Slides that create neither — data summaries, category definitions, landscape overviews — activate cold, analytical processing and lose the room. The fix is not to remove analytical content but to sequence it so it always follows a hot-cognition moment. The audience’s croc brain says “I need to understand this” only after you have made them want something or worry about something. In your slide card template, add a field: cognition_type: hot | cold | bridge. No more than two cold slides in a row, and never open or close a batch with one.
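Putting Gate 6’s requirements together, a slide card might look like this. The field names, including cognition_type, are the template this article proposes rather than any standard; the slide content reuses the 2 AM scenario from above:

```markdown
---
type: slide
status: proposed
purpose: make the runtime identity gap feel personal
proof: "[[claim-02-agentic-identity]]"
memory_hook: "the 2 AM service-account escalation"
visual_concept: timeline of a six-minute response window
product_role: none
cognition_type: hot
cut_if_short: no
---

# Slide 07: The 2 AM escalation

A compromised service account requests admin credentials at 2 AM.
Your on-call engineer has six minutes to respond.
```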

Gate 7: Batch Drafting

Draft only 2–4 slides at a time. Never the whole deck at once.

Batch sequence: framing first, then core argument, then technical proof, then action/close. After each batch, you choose: approve, revise, cut, or replace.

This is also where Klaff’s push/pull pacing becomes structural. A deck that only pushes (here is why this matters, here is what you should do, here is why we are the answer) creates a one-directional energy that the audience’s croc brain reads as “someone is selling me something.” Push/pull alternates: you advance a strong claim (push), then introduce a complication, an honest limitation, or a question that creates uncertainty (pull). The pull creates tension — the audience leans in because they need to know how it resolves. Klaff calls this the intrigue frame. In practice, this means your core argument batch should not be four slides of escalating proof. It should be: claim → complication → deeper proof that resolves the complication → next claim. Each tension loop resets the audience’s attention clock.

Gate 8: Hostile Review

Before finalization, run three mandatory review passes:

• reviews/skeptic-practitioner.md — “Where does this sound generic, weak, or under-evidenced?”

• reviews/conference-reviewer.md — “What is genuinely distinct versus standard vendor content?”

• reviews/anti-pitch-detector.md — “Where does the deck start sounding commercial instead of educational?”

These three review passes are the minimum. Experienced SEs will likely add more, such as a competitive-analysis pass.

Gate 9: Rehearsal and Compression

Only after the deck is structurally approved do you generate rehearsal materials: opening options, closing, Q&A prep, and compressed variants (cut-to-30, cut-to-20).

Eradicating neediness is the single most important delivery concept most SEs never practice. The audience’s croc brain is exquisitely tuned to detect neediness — the subtle signals that you want their approval, their deal, their positive evaluation. When the croc brain detects neediness, it categorizes you as low-status, and low-status information gets filtered out. The formula is: want nothing, focus on doing excellent work, and be willing to withdraw.

In practice, this means your rehearsal should explicitly practice the moments where you are most tempted to seek validation (the product bridge, the closing ask, the Q&A) and replace approval-seeking language with confident, take-it-or-leave-it framing. “We think you might find this useful” is needy. “This is what it does. Here is who it is for. You know whether that is you” is not.

Situational status matters here too. You are not trying to dominate the room; that is a different kind of failure. You are establishing local star power: for the next 25 minutes, on this specific topic, you are the most informed person in the room. Your rehearsal should practice owning that status without arrogance.

The rehearsal agents can help: have one generate the three most status-challenging questions an audience member could ask (the “analyst frame” that tries to drag you into granular data defense) and practice redirecting to your frame instead of getting pulled into theirs.

The rehearsal gate is where many SEs cut corners because the argument feels “done.” It is not done until you know your opening cold, can handle the three hardest questions without fumbling, have a plan for what to cut when the moderator tells you that you have five fewer minutes than expected, and can deliver with a posture that is confident without being needy.

What a Real Session Looks Like

Here is a concrete example of how this plays out in practice.

You are building a 25-minute conference talk on cloud runtime security for an audience of security architects and engineering leads. You have a vault with your brief, 12 source notes, and 6 approved claims. Now that the groundwork is laid, you can bring in AI to help build the rest.

You open Claude Code in the project folder and say:

Read the brief, audience delta, and all claim files. Then do three things in parallel:

1. Propose a 10-slide sequence based on the approved claims

2. Run an Explore agent to find which source notes are not linked to any claim

3. Check if any claims lack supporting evidence

Claude Code spawns three agents. Within a couple of minutes you have: a proposed slide sequence, a list of orphaned sources you might be under-using, and a list of claims that need stronger evidence before they earn a slide.

You approve the sequence with two changes, then say:

Draft slides 1–3 (framing batch). Keep the thesis from 01-thesis.md intact. No product positioning. Use evidence from the linked source notes only.

While that agent drafts, you could also ask:

In parallel, review slides 7–10 from the last session against the skeptic-practitioner persona. Flag anything that sounds like vendor messaging.

Two agents, working simultaneously. One creating, one reviewing. You are the editor-in-chief, reading the outputs and making the calls.

The Takeaway

The SE teams that will build the best presentations with AI are not the ones who generate the fastest. They are the ones who treat presentations like engineering projects: structured inputs, staged gates, parallel workers, human judgment at every decision point, and a project folder that remembers everything the chat window forgets. This is where our technical background shines.

The operating philosophy is simple: Position → Discover → Prove → Control. The nine gates enforce the sequence. The custom agents provide the cognitive diversity that a single perspective cannot.

The model is not your ghostwriter. It is your research team, your structural editor, your hostile reviewer, your competitive analyst, your frame-control auditor, and your compression engine.

Use it that way, and your talks will be sharper than anything you could build alone or anything the model could generate on its own. This is the way.

