# 01 · Course Design — Strategy Document

> The bible. Read this before touching any other file in the package.
> Every other artefact (slide deck, runbook, workbook, companion) derives from decisions made here.

---

## 1. Why this document exists

The existing impress.js deck is a strong **content** artefact. What it isn't yet is a **course** — meaning a designed sequence of experiences that produces a specific change in the participant by the end. This document is the bridge. It does four things:

1. Defines what "results-driven" means in concrete, measurable terms for this workshop.
2. Audits the current deck against May 2026 reality and lists the patches needed.
3. Designs the live interaction structure (labs, polls, transitions) that turns a deck into a workshop.
4. Captures the reasoning behind tier pricing, pre-work, and the 30-day follow-up so future versions can revise the decisions instead of guessing.

---

## 2. Course philosophy: what "results-driven" means

"Results-driven" is overused. For this workshop it has a specific definition:

> A participant has gotten "results" if, **within 7 days** of the session, they have used an LLM to do something they would have done another way before — and the result was at least as good as the old way.

That bar is intentionally low. It is also surprisingly hard to clear. Most AI-curious adults attend a workshop, feel inspired, and never change their behaviour. The course is designed to bend that curve. Three implications:

**(a) Aim for one behaviour, not five.** If a participant adopts even one workflow — e.g. always pasting first drafts of important emails into an LLM before sending — that compounds. Five half-adopted ideas compound to zero. The course narrows toward **the RTFC prompting framework + one applied use case the participant picks themselves**.

**(b) Knowledge serves the behaviour.** Every concept slide must answer: "Why does this make the participant more likely to use AI well on Monday afternoon?" If it doesn't, it's interesting trivia and goes to pre-work or the advanced course.

**(c) Trust calibration is the second deliverable.** A participant who uses AI badly is worse off than one who doesn't use it at all. The second non-negotiable result is: the participant can tell when to trust an LLM output and when to verify. The hallucinations and human-in-the-loop slides carry this weight; they must not be cut.

---

## 3. Learning outcomes

Stated in the language of *what the participant can do*, not *what they know*. Three tiers, mapped to the Bloom verbs that matter for adult learners:

### Must-have outcomes (everyone leaves with these)

By the end of the session, the participant can:

- **Distinguish** AI / ML / DL / LLM in plain language, and explain to a colleague why an LLM "hallucinates."
- **Construct** a well-formed prompt using the RTFC pattern (Role · Task · Format · Constraints) from a vague starting question.
- **Iterate** on a weak first response across at least two follow-up turns to reach a usable output.
- **Decide** whether a given LLM output is safe to use as-is, needs verification, or should not be used at all.
- **Name** one workflow in their own work where they will apply AI in the next 7 days.

### Stretch outcomes (most participants leave with these)

- Choose between a chat interface, a search-augmented tool, and an embedded copilot for a given task.
- Recognise three categories of information that should not be pasted into a consumer LLM.
- Articulate the difference between context window and knowledge cutoff, and how each one breaks LLM responses in different ways.

### Coaching-tier outcomes (the extra $100 buys these)

- Maintain a personal prompt library tailored to their role.
- Operate with a custom system-prompt template that bakes in their preferences (tone, format, audience).
- Have a calibrated 1:1 conversation with a domain expert about their specific use case.

If timing ever forces a choice between content blocks, the *must-haves* are the floor. Drop stretch outcomes first.

---

## 4. Audience analysis

The marketing copy promises "From Zero to Confident — For Everyone." That's the public signal. The actual audience clusters into three groups, each needing slightly different facilitation:

**(A) The genuinely new (≈50% of seats).** No prior structured AI use. May have tried ChatGPT once. Anxious about looking dumb. Needs the metaphors (autocomplete, contractor, stale expert) and a low-stakes first lab to break the ice.

**(B) The casual user (≈35%).** Uses ChatGPT a few times a week. Doesn't know why their outputs feel mediocre. The RTFC framework is the highest-value moment for this group — they will *feel* the difference. Make sure they get a chance to articulate "I'd been doing this wrong."

**(C) The technically curious (≈15%).** Engineers, founders, knowledge workers who've read about transformers but haven't put the ideas into practice. Will derail the session with technical questions if not managed. The Q&A protocol (deep technical questions go in the chat and are batched at the end) keeps them engaged without losing groups A and B.

**Facilitation implication:** Calibrate to group B. Group A gets the metaphors as scaffolding; group C gets the chat for depth. Never calibrate to group C in the live narration or you lose the room.

**The hidden 4th group:** people who paid for the Coaching tier specifically because they want personal attention. Treat their post-session 1:1 as the actual product they bought — the workshop is included.

---

## 5. The 2.5-hour time budget

The current 35 slides at ~4 minutes each come to 140 minutes, nearly the whole 150-minute session, with zero time for labs, Q&A, breaks, or technical hiccups. That cannot work. The honest budget is:

| Block | Minutes | What happens |
|---|---|---|
| Welcome + expectations | 10 | Why we're here, the one promise, ground rules, ice-breaker poll |
| Module 1: How AI works | 35 | Compressed from 13 content slides to ~9. Ends with the trust-calibration mini-lab. |
| Break + capture | 10 | Genuine break. Participants write one thing they learned in the workbook. |
| Module 2: How to use AI | 70 | RTFC, three live labs interleaved, use-case tour, privacy & human-in-the-loop |
| Close + 30-day plan | 15 | Each participant types their one-week commitment into the companion. Q&A. |
| Buffer / overrun | 10 | Use it or end early. Ending 10 minutes early on a 2.5-hour Zoom is a feature. |

**Total content slides used:** approximately 22, not 35. The remaining 13 are either pre-work, post-session reference, or cut entirely. The slide-deck document specifies which.

---

## 6. Pedagogical principles applied

A few opinionated calls baked into the design:

**Interleave theory and practice, don't sequence them.** The current deck does Module 1 (theory) → Module 2 (practice). That's the wrong shape for a 2.5-hour workshop with mixed-experience adults. The redesigned arc does theory → mini-lab → theory → mini-lab → application. Each lab cements the concept that preceded it.
**Cold-call only for opinions, never for facts.** "Has anyone here used Claude?" — fine, low risk. "What's a token?" — never. Use polls, not direct calls, for knowledge checks.

**Make the first prompt cheap.** The first live lab is "rewrite this vague prompt to be specific." Participants get a starter and a target. No one has to invent from scratch. Confidence accumulates across the three labs.

**Capture commitment in writing, in public-ish.** The companion's takeaway tracker is the mechanism. Saying "I will use AI for X this week" in front of the cohort (even silently, into a shared screen counter) is a behavioural commitment that radically outperforms a private intention.

**Don't pretend to be exhaustive.** The course-design temptation is to mention every model, every use case, every caveat. Resist. Mention enough to orient; defer depth to the advanced course. "We're not going to cover X today, but it's in the workbook resources" is a feature, not a failure.

---

## 7. Fact-check audit — patches needed before Monday

The current deck reflects a snapshot from roughly mid-2024 to early 2025. The world has moved. This section lists the specific patches needed. Apply at minimum the **P0 patches**; the P1 patches are quality-of-life upgrades.

### P0 — Wrong on the facts, must fix

**Slide 7 — "GPT-4 training cost":** The text reads "Estimated 00M+ in compute" — the leading `$1` of `$100M+` has been swallowed in rendering, leaving `00M+`. Replace with: *"GPT-4 training cost: estimated **~$100M** in compute. Frontier 2026 models (GPT-5.5, Claude Opus 4.7) are estimated in the **$500M–$1B** range — which is why you don't train your own."*

**Slide 8 — "GPT-4 has ~1.8 trillion parameters":** This figure was always an unconfirmed leak (attributed to George Hotz). Anthropic and OpenAI have never disclosed parameter counts for frontier models. Replace with: *"Frontier LLMs are estimated to have **hundreds of billions to trillions of parameters** — exact counts are not publicly disclosed for GPT-5.5, Claude Opus 4.7, or Gemini 3.1 Pro."* Removing the false precision is more honest and doesn't weaken the point.

**Slide 12 — "The Models You'll Encounter":** Every named model is one to three generations behind. Replace:

- **OpenAI:** GPT-4o / o1 → **GPT-5.5** (April 2026, default in ChatGPT; 1M-token context) and **GPT-5.5 Pro** for harder reasoning. ChatGPT crossed **900 million weekly active users in February 2026**.
- **Anthropic:** Claude 3.5 / 4 → **Claude Opus 4.7** (April 2026, frontier; 1M-token context) and **Claude Sonnet 4.6** (the practical workhorse). Safety-focused, strong on long context and following instructions.
- **Google:** Gemini 1.5 / 2 → **Gemini 3.1 Pro** (February 2026, 1M-token context). Deeply integrated with Google Workspace.
- **Meta:** Llama 3 → **Llama 4** (Scout & Maverick, April 2025; Scout has a 10M-token context window — the largest of any openly available model). Note for the deck: in April 2026, Meta also released **Muse Spark**, their first proprietary closed-weight model — strategically interesting but secondary to Llama 4 for this audience.
- Worth adding: **DeepSeek V3/V4** (Chinese open-weight, increasingly competitive) and **xAI Grok** for completeness, though only mention them if asked.

**Slide 14 — "Context window limits":** All four numbers are wrong.
Replace:

- GPT-4o 128K → **GPT-5.5: 1M tokens**
- Claude 3.5 200K → **Claude Opus 4.7: 1M tokens**
- Gemini 1.5 1M → **Gemini 3.1 Pro: 1M tokens**
- Llama 3.1 128K → **Llama 4 Scout: 10M tokens**

The headline insight has shifted: context windows are no longer the bottleneck for most everyday use. Update the surrounding narrative to match — "even your longest documents fit" is now accurate for most participants' needs.

**Slide 15 — "Knowledge cutoffs":** All three dates are wrong. Replace:

- GPT-4 April 2023 → **GPT-5.5: December 2025**
- Claude 3.5 April 2024 → **Claude Opus 4.7: early-to-mid 2025** (Anthropic does not publish a precise date; on the public deck, say "early 2025" if asked)
- Gemini 1.5 November 2023 → **Gemini 3.1 Pro: January 2025**

The "18-month sabbatical colleague" analogy weakens with shorter cutoffs. Update to: *"a brilliant colleague who went on sabbatical 6–12 months ago — sharp, well-read, but hasn't seen the latest news."*

**Slide 5 — Timeline "Today":** The "Today" milestone should now read **"Reasoning models, agents, computer use, MCP"** — these are the 2025–2026 frontier shifts that participants will see referenced in the wild and should at least recognise.

### P1 — Right but stale or thin, nice to fix

**Slide 11 — Transformers:** Solid as-is. Optional addition: "These same transformer foundations now power image generation (Midjourney, Imagen), video (Sora, Veo), music (Suno), and protein folding (AlphaFold 3)." Helps participants connect the conceptual pieces they've heard about in the news.

**Slide 13 — Hallucinations:** Add one sentence: "Hallucinations are getting less common with reasoning models, but they have **not** been eliminated, and the most plausible-sounding ones are the most dangerous." This matters because some participants will have heard that "GPT-5 doesn't hallucinate" — which is false but widely repeated.

**Slide 29 — Copilot 55% faster:** The figure is correct (Peng et al. 2023 controlled experiment; the exact figure is 55.8%). Strengthening the citation is optional. Worth adding: "Pull-request cycle time also dropped from 9.6 days to 2.4 days in the GitHub/Accenture follow-up study (2024)." Makes the productivity story more concrete.

**Slide 18 — "The AI tools landscape":** The categories are good. Within each, the tool names need a refresh:

- *Chat:* add Claude.ai prominently (Anthropic now has 10% of US daily mobile AI app share, up from <2% earlier in 2026).
- *Search-augmented:* Perplexity is still right; add ChatGPT's own search and Claude's web tools.
- *Embedded:* add **Claude for Excel, Claude for PowerPoint, Claude in Chrome** (in Anthropic's product line) and Notion AI / Linear AI / Granola.
- *Autonomous agents:* Claude Code is now mainstream — call it out. Add Devin, Cursor, Cline, Replit Agent. Mention **MCP (Model Context Protocol)** as the emerging standard for connecting LLMs to tools — participants will hear this acronym a lot in 2026.

### P2 — Optional polish

- Slide 6 (neural network image of brain neurons): the "86 billion neurons" figure is correct (~86.1B, Azevedo et al. 2009). No change needed.
- Slide 10 (temperature): consider replacing "temperature" jargon with "creativity dial" for group A. The technical term can live in a tooltip.
- Slide 31 (privacy): the "default behaviour" warning is accurate, but worth noting that Claude Enterprise and ChatGPT Enterprise/Team plans contractually exclude training-data use. Many participants will be on Pro/Plus consumer plans where this exclusion isn't the default.

---
## 8. Content improvements beyond fact-checking

What the deck is missing entirely, ranked by importance:

**(a) A "Day 1" use-case slide.** Between Module 1 and Module 2, insert one slide: *"What's a good first task to try?"* Concrete suggestions: rewriting a difficult email, summarising a long document the participant already has open, drafting a meeting agenda, generating interview questions. This is the bridge from "I get it conceptually" to "I'll do it tonight." Currently the deck assumes participants will figure this out on their own. They won't.

**(b) The "verify in three ways" pattern.** Currently slide 13 says "verify any specific fact." That's not specific enough to act on. Replace with a 3-step protocol: *(1) ask the model to cite its source; (2) check the source actually exists; (3) check the source actually says what the model claims.* Participants who memorise that protocol will avoid 90% of the hallucination problems they'll encounter.

**(c) The "MCP / connectors" moment.** This is the biggest shift in how people will use LLMs in 2026 — connecting them to Gmail, Calendar, Slack, Drive, Notion, etc. The deck doesn't mention it. Worth a single slide near the end of Module 2: *"What's next: AI that does, not just answers."*

**(d) A diversity of examples.** The current examples lean heavily B2B / SaaS / finance. The audience is mixed. Add examples from healthcare ops, education, hospitality, non-profit, and creative work. Increases relatability for non-tech participants.

**(e) An explicit "what we're NOT covering today" slide.** Sets honest expectations: image generation, voice, video, agents, embeddings, RAG, fine-tuning. Reinforces the advanced-course upsell without being pushy.

What to cut or compress:

- Slide 4 (AI family tree): keep, but condense to 30 seconds — the bullet hierarchy is over-elaborate for a verbal medium.
- Slide 5 (history): condense to 60 seconds total. Most adults don't care about Turing in the 1950s; they care about ChatGPT in 2022 and what's happening now.
- Slide 11 (transformers detail): the "with vs. without attention" example is too technical for group A and trivial for group C. Cut to one sentence; save the depth for the advanced course.

---

## 9. Live lab design — three labs, interleaved

The single highest-leverage change vs. the current course shape: three short labs of 8, 12, and 20 minutes, embedded inside Module 2 rather than parked at the end.

### Lab 1 · "Vague → Specific" (8 minutes, after the RTFC slide)

Each participant sees a vague prompt on the companion: *"Write something about marketing."* Task: rewrite it using RTFC (Role / Task / Format / Constraints). Submit into the chat or the companion. The facilitator pulls 2–3 examples to read aloud (with credit) and shows the model output for the strongest one. Result: participants *feel* the lift from RTFC instead of being told about it.
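For the runbook, it helps to have one worked answer in your back pocket in case submissions stall. The rewrite below is purely illustrative (one possible RTFC answer, not a prescribed one); the specifics, from the advisor role to the word cap, are invented for this example:

```text
Vague:  Write something about marketing.

RTFC:   Role        — You are a marketing advisor for small local businesses.
        Task        — Draft a one-page plan for announcing a new product to existing customers.
        Format      — Three short sections: audience, message, channel.
        Constraints — Plain language, no jargon, under 300 words.
```

Reading a before/after pair like this aloud makes the RTFC lift audible even for participants whose own rewrite stalled.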
### Lab 2 · "Three-turn improvement" (12 minutes, after the iteration slide)

Each participant takes their RTFC-improved prompt and now runs three follow-up turns: make it shorter, change the tone, add an example. They submit the final output and one sentence on what changed. Result: participants experience iteration as a skill, not as a sign the AI failed.

### Lab 3 · "Your real task" (20 minutes, after the use-cases tour)

Each participant identifies one real task from their actual work and runs a complete prompt-and-iterate cycle on it, alone. The facilitator floats between breakout rooms (or monitors the broadcast Q&A if no breakouts) to help unstick people. Result: participants leave with **one usable output produced in the session itself**. This is the single strongest predictor that they'll use AI again in the following week. The whole course design pivots on this lab.

**Backstop:** if a participant arrives without a real task in mind, the companion offers five "starter task" templates (rewrite an email, summarise a long document, generate meeting questions, draft a job description, prepare for a difficult conversation).

---

## 10. Tier strategy & pricing

The current pricing structure (Community $29 / Coaching $129) is sound. The Coaching tier needs a clearer "what am I actually buying" signal, because right now "30-min 1:1 + lifetime recording" is undersold.

**Reframe the Coaching tier around the personalised system prompt.** That artefact is genuinely valuable, persistent, and hard to produce alone. The 1:1 call is the consultation that produces it. Pitch order:

1. *You'll leave the call with a personalised CLAUDE.md / system-prompt template tailored to your role, audience, and writing style.*
2. *Use it from day one in any LLM that accepts system prompts.*
3. *Your call slot includes a written review of three prompts from your real work.*

This is the same product, framed around the durable artefact rather than the ephemeral call.

**Don't introduce a third tier.** Two tiers are a clear, comparable decision. Three tiers trigger analysis paralysis at this price point. If demand for more depth emerges, that's the advanced course.

**Pricing the advanced course.** When the time comes, anchor against this one. $29 / $129 here implies the advanced course slots into the $299–$499 range for a multi-session cohort, or $1,499–$2,500 for a small-group programme. Don't go below $299 — that signal-jams the perceived seriousness of the offering.

---

## 11. Pre-work strategy

20–25 minutes total, sent at T–7 days. Three goals: (a) get tool access set up before the session, (b) establish a baseline so participants notice the change after RTFC, (c) reduce facilitator time spent on definitions.

**Pre-work components:**

1. **Sign up for one LLM tool of your choice** (claude.ai, chatgpt.com, or gemini.google.com). Free tiers are sufficient. Confirm you can send one message and receive a reply.
2. **Send the LLM this prompt:** *"In one paragraph, what's the difference between AI, machine learning, and a large language model?"* Read the answer. Save it to paste into the workbook.
3. **Bring one real task** you'd like to try AI on during the session. It doesn't have to be impressive. The most banal example is the most valuable one.
4. **(Optional) Read one short article:** Ethan Mollick's "Co-Intelligence" excerpt or a one-pager from learnprompting.org. The workbook includes 2–3 picks.

The pre-work is **not graded, not checked, not required**. It's a signal, not a gate. People who don't do it can still benefit — they just won't have the baseline to compare against.

---

## 12. Companion.html — design intent

The companion is the artefact participants keep. It does five things, in this priority order:

1. **Reveals each lab prompt on cue.** The facilitator says "Lab 1" — the companion shows the Lab 1 panel. Prevents read-ahead, keeps the cohort synchronised.
2. **Hosts the takeaway tracker.** Each participant types their one-week commitment by the end. Persists in localStorage so they can revisit it from the same browser. (No login, no privacy concerns.)
3. **Builds a starter system prompt.** Participants fill in role, audience, format preferences, and tone notes. Output is a copy-pasteable system prompt they can use in any LLM. Coaching-tier participants get a more elaborate version reviewed personally.
4. **Tracks live session timing.** A visible timer so participants know where they are in the 2.5 hours. Reduces "are we almost done" anxiety.
5. **Houses the resource list and pre-work links.** One place to find everything, before and after the session.

Technical notes (a minimal sketch of the two core mechanics follows below):

- Single HTML file. No build step. Loads from a static URL.
- localStorage for persistence (no shared backend needed).
- Works offline once loaded — important for participants on weak hotel WiFi.
- The "prompt-library builder" output is plain text the user copies. No API calls, no cost, no breakage risk.
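To make those notes concrete, here is a minimal sketch of the takeaway tracker's persistence and the system-prompt builder, assuming vanilla JavaScript inside the single HTML file. Every name in it (the element IDs, the `companion.commitment` storage key, the field list) is illustrative, not a spec for the real companion.html:

```html
<!-- Markup the sketch assumes; all IDs are illustrative -->
<textarea id="commitment" placeholder="My one-week commitment…"></textarea>
<input id="role"> <input id="audience"> <input id="format"> <input id="tone">
<button id="copy-prompt">Copy my starter system prompt</button>

<script>
  // Takeaway tracker: restore any saved commitment on load,
  // then save on every keystroke. No backend, no login.
  const box = document.querySelector('#commitment');
  box.value = localStorage.getItem('companion.commitment') || '';
  box.addEventListener('input', () => {
    localStorage.setItem('companion.commitment', box.value);
  });

  // System-prompt builder: assemble plain copy-pasteable text
  // from the four form fields named above.
  function buildSystemPrompt() {
    const get = (id) => document.querySelector('#' + id).value.trim();
    return [
      `You are assisting a ${get('role')}.`,
      `Your audience is ${get('audience')}.`,
      `Default output format: ${get('format')}.`,
      `Tone and style notes: ${get('tone')}.`,
    ].join('\n');
  }

  // Copy to clipboard so the participant can paste it into any LLM.
  document.querySelector('#copy-prompt').addEventListener('click', () => {
    navigator.clipboard.writeText(buildSystemPrompt());
  });
</script>
```

Because it is all DOM plus localStorage, the worst failure mode is a participant retyping one sentence, which is consistent with the "no breakage risk" goal above. Note that `navigator.clipboard` requires a secure (HTTPS) page; a static hosting URL normally provides that.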
---

## 13. Post-workshop / 30-day plan

The workshop ends at T+0. The course ends 30 days later. Most of the learning happens in those 30 days, if anything happens at all.

**Automated touchpoints:**

- **T+1 day:** Send the recording link, slide PDF, workbook, and the calendar link for Coaching 1:1s. Single email.
- **T+7 days:** Send a 1-question check-in: *"Did you use AI for something this week? Reply with one sentence."* Replies become testimonial fodder (with permission) and tell Florin who's stuck.
- **T+21 days:** Send a small prompt-of-the-week pack — 5 useful prompts in domains people actually work in.
- **T+30 days:** Invite to a free 30-minute cohort retrospective call. Soft pitch for the advanced course at the end.

**Coaching-tier touchpoints, additional:**

- **T+3 days:** book their 1:1 slot if not already booked.
- **T+7 days (target):** the 1:1 happens; the written prompt review and personalised system prompt are delivered within 48 hours of the call.

---

## 14. Risk register

| Risk | Likelihood | Severity | Mitigation |
|---|---|---|---|
| Tool outage during a lab (Claude/ChatGPT down) | Medium | High | Open two tools before the session; if you must swap mid-demo, show the model name on screen. Have a screenshot fallback for the lab demo. |
| Audio/screen-share failure on Zoom | Low | High | Co-host on standby. Pre-record the highest-stakes 3 minutes (RTFC slide) as a fallback. |
| A technical participant derails with deep Q&A | Medium | Medium | Set the chat-for-depth norm in the first 5 minutes. Batch technical Qs at the end. |
| A participant pastes sensitive data into an LLM during a lab | Medium | High | Privacy slide moved earlier (before Lab 3). Lab 3 prompt says "use anonymised data." |
| Running over time | High | Medium | The buffer block exists. Cut stretch-outcome content first. Always close on time. |
| Running under time (rare but happens) | Low | Low | Have one optional deep-dive slide ready (transformer attention, RAG preview). |
| Tier confusion: a Community attendee asks for the Coaching deliverable | Medium | Low | The 1-pager differentiating tiers goes in the welcome email. Polite redirect during the session. |
| Recording fails | Low | High | Two recording targets (Zoom cloud + local). Confirm both before opening the room. |
| The fact-check audit isn't applied | Medium | High | This is the highest-leverage facilitator action. Block 30 minutes for it the morning of the session. |

---

## 15. Success metrics

Stated in honest, measurable terms.

**Leading indicators (visible immediately):**

- ≥85% of participants stay to the end (low attrition on a paid live session).
- ≥70% submit a Lab 3 output.
- ≥80% type a written commitment into the companion.

**Lagging indicators (visible at T+30):**

- ≥40% of participants reply "yes" to the T+7 check-in. (The industry benchmark for behavioural change after a 2.5-hour workshop is much lower; this is an aspirational target.)
- ≥10% of Community-tier attendees convert to the advanced course within 90 days.
- Net Promoter Score ≥40 on the post-session survey. Below 20 = redesign needed; above 60 = move pricing up.

**The single most important metric:** how many participants, at T+90 days, can name a specific work output that exists because of what they learned in this session. If that number isn't at least one in three, the course design has failed regardless of how the live session felt.

---

## 16. Open questions for Florin

These are decisions the design document deliberately doesn't make — they require facilitator judgement.

1. **Breakout rooms or single-broadcast for Lab 3?** Breakouts are better pedagogically but technically heavier and require a co-host. The runbook is written for single-broadcast as the default; toggle if you have help.
2. **Translation?** English-only is assumed. If you anticipate ≥3 native Romanian speakers, consider translating the workbook (cheapest, highest impact).
3. **Live demo of Claude Code, Cowork, or Claude in Excel?** These are emerging tools that would land well with the technical group C, but consume time from the must-have content. The runbook leaves a 3-minute optional slot in case you want to show one.
4. **Use your existing impress.js deck, or convert to a more standard slide tool?** Impress is visually striking but technically fragile on shared screens (some browsers misrender the transforms). Plan B is a flat PDF export. Test both at T–24h.
5. **Cohort group chat (Slack/Discord) for the 30 days after?** High effort, high value for the Coaching tier. Not in the base package. Worth considering for the next cohort if this one goes well.

---

## Appendix · One-page TL;DR for the morning of the session

If you only re-read one page at 7 a.m. on Monday, this is it.

- **Promise:** participants leave able to construct an RTFC prompt, having used AI on a real task during the session.
- **Three labs:** 1 (rewrite a vague prompt, 8 min), 2 (iterate three turns, 12 min), 3 (their real task, 20 min).
- **Apply the P0 fact patches:** slide 7 (training-cost typo), 8 (parameter count), 12 (model names), 14 (context windows), 15 (knowledge cutoffs), 5 (timeline "Today").
- **Calibrate to group B** (casual users). Group A gets metaphors; group C gets the chat.
- **Don't run over.** The buffer block is for you. Ending on time is a feature.
- **The companion is the artefact they keep.** Make sure they have the link from the welcome email before they log in.

Good luck. The course is in good shape.