# 02 · Slide Deck — Revised Content & Speaker Notes

> Maps 1:1 to the original 35-slide impress.js deck. Each entry says what to do with that slide (keep, compress, cut, patch), gives the revised on-screen content where applicable, and provides a speaker script you can read aloud or adapt.
>
> Three new slides are inserted between existing ones. They're labelled `NEW-A`, `NEW-B`, `NEW-C` so you can drop them into impress.js at the right step number without renumbering everything else.
>
> If you only have time to patch some slides before Monday, do the ones marked **🔴 P0** in §7 of the course design document. The rest can ship as-is for this cohort.

---

## Overview map

| # | Status | Title (short) | Live time | Notes |
|---|---|---|---|---|
| 1 | KEEP | Title | 1m | Refresh the date strip |
| 2 | COMPRESS | Agenda | 1m | One promise + three arcs, not 16 bullets |
| 3 | CUT | Module 1 header | — | Skip; say it instead |
| 4 | COMPRESS | AI family tree | 1m | 30 seconds, not 2 minutes |
| 5 | COMPRESS + 🔴 P0 | History | 1m | Patch "Today" milestone |
| 6 | KEEP | Neural network | 3m | — |
| 7 | KEEP + 🔴 P0 | How AI learns | 3m | Fix `$00M+` typo → ~$100M |
| 8 | KEEP + 🔴 P0 | What is an LLM | 3m | Remove "1.8T parameters" claim |
| 9 | KEEP | Tokens | 3m | — |
| 10 | KEEP | Prediction works | 3m | Optional: rename "temperature" → "creativity dial" inline |
| 11 | COMPRESS | Transformers | 1m | One sentence on attention; cut the with/without example |
| 12 | REWRITE + 🔴 P0 | Models you'll encounter | 4m | All four model entries need new content |
| 13 | KEEP + EXPAND | Hallucinations | 4m | Add the 3-step verify protocol |
| 14 | KEEP + 🔴 P0 | Context window | 3m | New numbers; reframe "no longer the bottleneck" |
| 15 | KEEP + 🔴 P0 | Knowledge cutoff | 3m | New dates; analogy shifts to "6-12 month sabbatical" |
| 16 | COMPRESS | Module 1 summary | 2m | Skip the exercise; trigger break instead |
| — | — | *Break + capture* | 10m | — |
| 17 | CUT | Module 2 header | — | Skip |
| NEW-A | NEW | Day 1 — What to try first | 3m | Bridge from theory to practice |
| 18 | KEEP + PATCH | AI tools landscape | 4m | Add Claude.ai, Cowork, MCP mention |
| 19 | KEEP | Chat interface anatomy | 2m | — |
| 20 | KEEP | What is a prompt | 2m | — |
| 21 | KEEP | Principle 1: Specific | 3m | — |
| 22 | KEEP | Principle 2: RTFC | 4m | High-stakes slide — drill it |
| LAB 1 | NEW | Vague → Specific | 8m | First lab |
| 23 | KEEP | Principle 3: Examples | 3m | — |
| 24 | COMPRESS | Principle 4: Format | 2m | — |
| 25 | KEEP | Principle 5: Iterate | 3m | — |
| LAB 2 | NEW | Three-turn improvement | 12m | Second lab |
| 26–29 | MERGE | Use cases tour | 5m | One slide, not four — name the categories |
| 30 | KEEP | What AI is NOT | 4m | — |
| 31 | KEEP + PATCH | Privacy | 3m | Add Enterprise/Team carve-out |
| 32 | KEEP | Human in the loop | 3m | — |
| NEW-B | NEW | What's next — MCP & connectors | 3m | Sets up advanced course |
| LAB 3 | NEW | Your real task | 20m | The lab the course pivots on |
| 33 | KEEP | Daily workflow | 3m | — |
| 34 | KEEP | Resources | 2m | — |
| NEW-C | NEW | Your commitment | 5m | Companion typing moment |
| 35 | KEEP, SIMPLIFIED | Thank you | 2m | Strip the recap; close on energy |
| | | **Total** | **140m + 10m buffer** | |

---

## Welcome block (0:00–0:10)

### Slide 1 · KEEP · Title

**On-screen:** unchanged. Refresh the date strip and confirm the timezone block matches the actual broadcast.

**Speaker script (≈90 sec):**

> "Welcome. I'm Florin, and for the next two and a half hours we're going to do something that sounds simple but very few AI workshops actually deliver: we're going to get you to the point where, by Tuesday morning, you've used AI to do something real in your own work. Not someone else's demo. Yours.
>
> That's the bar. Not 'you understand AI.' Not 'you've heard about transformers.'
> By the end of today you'll have produced an actual output — an email, a document summary, a draft, a plan — using an LLM on a task you brought with you. If that doesn't happen, I haven't done my job.
>
> Quick housekeeping. We're recording. Mics off unless you're speaking. Questions go in the chat — I'll batch them at three points. The companion link is in the chat right now; open it in a second tab and keep it there. That's where the labs live."

**Speaker notes:** project warmth in the first 30 seconds. Don't apologise for anything. Don't say "hopefully we'll cover X" — say "here's what we're doing." Confidence sets the tone.

---

### Slide 2 · COMPRESS · Agenda → "One promise + three arcs"

**Replace the four-quadrant card layout with this:**

```
ONE PROMISE
By the end of today, you've used AI on a real task from your work.

THREE ARCS
1. How LLMs actually work — the mental model (35 min)
2. How to prompt them well — the RTFC framework + three labs (70 min)
3. Your task — applied (20 min, last lab)

How we'll work together
• Three live labs · companion link in chat
• Questions in chat — I batch them at module breaks
• Recording goes out tomorrow
```

**Speaker script (≈60 sec):**

> "Three arcs today. First, we'll build a mental model of how these things actually work — not the hype version but the engineering version, in plain language. Then we'll spend most of our time on the part that actually changes your work: how to prompt them well. Five principles, three labs, and at the end you'll take your real task and run it. That last lab is what this course is built around.
>
> Drop a 👋 in the chat if you can hear me clearly."

**Speaker notes:** the wave-emoji check serves three purposes — it tests that chat is open, gives a low-stakes participation cue, and surfaces audio issues immediately.

---

## Module 1: How AI Works (0:10–0:45)

### Slide 3 · CUT · Module 1 header

Don't show.
Just say *"Let's start with how this stuff actually works."* and move to slide 4. Module-header slides are a deck-design habit, not a teaching tool.

---

### Slide 4 · COMPRESS · The AI family tree

**On-screen:** keep the nested-card visual; it's striking. But narrate fast.

**Speaker script (≈45 sec):**

> "Four nested ideas. AI is the big umbrella — anything that gets a computer to act like it's thinking. Machine learning is the chunk inside AI where the computer learns from examples instead of being told the rules. Deep learning is the chunk inside ML that uses layered neural networks. LLMs — ChatGPT, Claude, Gemini — are a specific kind of deep learning model trained on text. When someone says 'AI,' they almost always mean LLM. The vocabulary is sloppy; just know the shape."

**Speaker notes:** resist the urge to elaborate. Group C will want more; defer with *"we'll get to the why in two slides."*

---

### Slide 5 · COMPRESS + 🔴 P0 PATCH · A brief history of AI

**Patch:** the "Today" milestone currently reads *"Reasoning, agents, multimodal"*. Update on the slide itself to:

> **Today** — *Reasoning models, agents, computer use, MCP*

The other timeline entries can stay. They're historical and accurate.

**Speaker script (≈75 sec):**

> "Seventy years compressed into a minute. Turing in the fifties asked the question — can machines think? Then forty years of false starts. The field nearly died twice — those were the 'AI winters.' Then in 2012 deep learning suddenly worked at scale for image recognition. The breakthrough that made everything we use today possible came in 2017: a paper called 'Attention Is All You Need' introduced the transformer architecture. Five years later, that architecture became ChatGPT and the floodgates opened. Right now, in 2026, the frontier has moved past chat — reasoning models, agents that take actions, and a new protocol called MCP that lets LLMs talk to your tools. We'll touch on that at the end."
---

### Slide 6 · KEEP · What is a neural network?

**On-screen:** unchanged.

**Speaker script (≈2 min):**

> "Your brain has roughly 86 billion neurons connected by synapses. Learning happens by strengthening or weakening those connections through experience. An artificial neural network steals the metaphor but does it in math: millions of virtual neurons in layers, numbers flowing through them, each connection with a weight that gets adjusted during training.
>
> The crucial difference from traditional software: nobody programs the rules. You don't write 'if email contains URGENT then mark spam.' You show the network 10,000 emails labelled spam-or-not-spam and it figures out the rules itself. That's why these systems can handle situations the original programmers never anticipated. It's also why they sometimes do things the original programmers never wanted."

---

### Slide 7 · KEEP + 🔴 P0 PATCH · How AI learns

**Patch:** the callout currently reads *"GPT-4 training cost: Estimated 00M+ in compute"* — there's a corrupted dollar sign. Replace with:

> **Training cost is why you don't train your own.** GPT-4 was estimated at ~$100M in compute. Frontier 2026 models — GPT-5.5, Claude Opus 4.7 — are estimated in the $500M–$1B range.

**Speaker script (≈2.5 min):**

> "Two ways to make software smart. Old way: a human writes explicit rules. Brittle; can't handle anything new. New way — machine learning: you provide labelled examples and the model finds its own patterns. The recipe for training an LLM is roughly this: gather hundreds of billions of words from the internet, books, papers, code; show the model text with the last word hidden and ask it to predict it; when it's wrong, nudge billions of internal parameters slightly; repeat trillions of times. By the end, the model has internalised grammar, facts, reasoning patterns — everything that was in the text.
>
> Why does this matter for you?
> Because training is now extraordinarily expensive — GPT-4 cost around $100 million in compute, and the latest frontier models from OpenAI and Anthropic cost somewhere between half a billion and a billion dollars to train. That's why you'll never train your own. You'll use theirs. The skill is no longer how to build models — it's how to use them well."

---

### Slide 8 · KEEP + 🔴 P0 PATCH · What is a Large Language Model?

**Patch:** the "Large matters" card claims *"GPT-4 has ~1.8 trillion parameters"*. This was an unconfirmed leak (attributed to George Hotz), and OpenAI has never disclosed the figure. Replace the card content with:

> **"Large" matters**
>
> Frontier LLMs are estimated to have hundreds of billions to trillions of parameters — exact counts are not publicly disclosed. More parameters = more capacity to learn subtle patterns. Scale changes what's possible.

**Speaker script (≈2.5 min):**

> "Simplest definition: an LLM is a system trained to predict the next word — or technically the next token; we'll get to that — given everything that came before. That's it. The autocomplete on your phone does the same thing with a vastly smaller model. A frontier LLM does it with hundreds of billions to trillions of parameters. That difference in scale changes what's possible.
>
> Three things to internalise. One: it's autocomplete, but at a scale and sophistication where it starts doing things that look like reasoning. Two: during training it processed essentially the entire public internet — Wikipedia, books, GitHub, scientific papers. Compressed human knowledge as numbers. Three: at sufficient scale, surprising capabilities emerge — multi-step reasoning, code generation, translation — none of which were explicitly programmed in. We don't fully understand why. We just know it works above a certain size."

---

### Slide 9 · KEEP · Tokens

**On-screen:** unchanged.
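**Optional demo for group C:** the tokens-per-page arithmetic in this slide's script can be shown live in a few lines. This is a rough rule-of-thumb sketch, not a real tokenizer — the function name, the 4-characters-per-token constant, and the sample text are illustrative assumptions; real models tokenize differently.

```python
# Rule-of-thumb token estimator (illustrative only; real tokenizers vary by model).
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Estimate token count from character count using the ~4 chars/token heuristic."""
    return max(1, round(len(text) / chars_per_token))

page = "word " * 400           # ~2,000 characters, roughly a page of English
print(estimate_tokens(page))   # -> 500, matching the "page is ~500 tokens" figure
```

The point of the demo is the order of magnitude, not the exact count — a real tokenizer will give a somewhat different number for the same text.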
**Speaker script (≈2.5 min):**

> "Quick technical reality check: LLMs don't read words the way you do. They read tokens. A token is roughly three to four characters — common words are one token; rare words split into multiple. 'Hello' is one token. 'ChatGPT' is three. 'Unbelievable' is four. A typical page of English text is about five hundred tokens.
>
> Three practical consequences. One: when you hear about an LLM 'counting letters' — like the famous 'how many r's in strawberry' — it might get it wrong because it sees tokens, not letters. Two: API pricing is per token. When someone says 'a million-token context window,' that's roughly 750,000 words, or six to eight novels. Three: every model has a maximum token capacity. We'll come back to that in two slides."

---

### Slide 10 · KEEP · How prediction works

**Optional rename:** the "Temperature" card uses jargon. If you want, change the card heading to **Creativity dial** with *(temperature)* in parentheses below.

**Speaker script (≈2.5 min):**

> "Let's see this concretely. You ask: 'What is the capital of France?' The model looks at that sequence of tokens and computes, for every possible next token, the probability it's the right one. The most likely first token might be 'The' — at 60%. Then it picks 'The,' adds it to the sequence, and predicts the next token. 'Capital.' Then 'of.' Then 'France.' Then 'is.' Then 'Paris.' One token at a time, left to right, all the way to the answer.
>
> Two things follow. First: every response is probabilistic. Ask the same question twice and you might get slightly different answers. That's by design. There's a setting called temperature — or think of it as the creativity dial — that controls how often the model picks the top choice versus a less likely one. Low temperature is predictable and factual; high temperature is creative and surprising. Second: when this feels like reasoning, it kind of is and kind of isn't. The model isn't deliberating the way you do.
> It's pattern-matching at a scale where the patterns often look like reasoning. The result can be brilliant, and it can be confidently wrong in the same breath."

---

### Slide 11 · COMPRESS · Transformers

**On-screen:** keep the slide but skip the four-card grid in your narration. Just point to the callout.

**Speaker script (≈45 sec):**

> "One concept worth knowing the name of: attention. The breakthrough in 2017 was figuring out how to let a model, when processing each word, look at every other word in the input and decide which ones matter most. That's the attention mechanism. It's why these models understand long-range context — like which 'it' refers to what — instead of forgetting the start of the sentence by the end. The architecture built on attention is called a transformer, and it now powers basically everything: text models, image generators like Midjourney, video models like Sora, even AlphaFold for predicting protein structures. Same engine, different fuel."

---

### Slide 12 · REWRITE + 🔴 P0 · The Models You'll Encounter

This slide needs the most surgery. Replace all four model cards with the content below. Keep the four-card grid layout.

**Card 1 — OpenAI 🟢**

> **GPT-5.5** (April 2026) — the default in ChatGPT. 1M-token context window, strong agentic coding, knowledge cutoff Dec 2025. **GPT-5.5 Pro** handles harder reasoning. ChatGPT crossed **900 million weekly users** in February 2026.

**Card 2 — Anthropic 🟠**

> **Claude Opus 4.7** (April 2026) — frontier model with 1M-token context, strong at long documents and following complex instructions. **Claude Sonnet 4.6** is the practical workhorse at lower cost. Anthropic's apps include Claude Code, Cowork, and Claude for Excel/PowerPoint/Chrome.

**Card 3 — Google 🔵**

> **Gemini 3.1 Pro** (February 2026) — 1M-token context, native multimodal across text, image, audio, video. Deeply integrated with Google Workspace (Docs, Gmail, Meet) and Chrome.
**Card 4 — Meta / open source 🟣**

> **Llama 4** Scout & Maverick (April 2025) — open-weight, free to download and run. **Scout has a 10M-token context window** — the largest of any open model. Powers many third-party apps via providers like Groq.

**Speaker script (≈3 min):**

> "Four players to know. OpenAI makes ChatGPT — 900 million people use it every week, which is roughly one in eight humans online. Their current default model is GPT-5.5, which came out a few weeks ago. Anthropic makes Claude — that's the one I use most, founded by ex-OpenAI researchers with a safety focus. Their current top model is Claude Opus 4.7. Google makes Gemini, currently version 3.1 Pro, deeply integrated into everything Google. And Meta makes Llama, the big open-source family — you can download the weights and run them yourself if you have the hardware.
>
> Quick rule of thumb: the best model is the one your team actually uses. Switching costs are low. Try ChatGPT and Claude side by side for a week and pick whichever feels right. There are technical differences, but for most everyday work in 2026 they're all good enough that the gating factor is your prompting skill, not the model."

**Speaker notes:** if a group-C participant asks about DeepSeek, Mistral, Qwen, Grok, etc., acknowledge them in a chat reply — *"Yes, real and increasingly competitive — but for a first workshop the big four are the right anchors."*

---

### Slide 13 · KEEP + EXPAND · Hallucinations

**Patch:** add a new sub-section to the slide titled *"How to verify in 3 steps"* — this is the most important practical insight in the entire course.

```
How to verify in 3 steps

1. Ask the model to cite its source.
2. Check the source actually exists. (Search for it. Open it.)
3. Check the source actually says what the model claims.

Step 3 catches the failures step 2 misses.
```

Also add one sentence to the existing content: *"Hallucinations are getting less frequent with reasoning models, but they have not been eliminated — and the most plausible-sounding ones are the most dangerous."*

**Speaker script (≈3.5 min):**

> "Now the most important slide in this course. Hallucinations. An AI hallucination is when the model generates something that sounds plausible but is factually wrong — and presents it with full confidence. Made-up citations. Wrong dates. Invented quotes attributed to real people. Fictitious product features.
>
> Why does this happen? Because the model's job is to produce plausible next tokens, not true statements. There's no internal fact-checker. The model doesn't 'know' what it knows. Confidence in the tone of the response tells you nothing about the accuracy of the content. Nothing.
>
> You'll hear people claim 'the new models don't hallucinate anymore.' That's marketing. Hallucinations are getting less frequent, especially in reasoning models. They have not been eliminated. And here's the thing: the most plausible-sounding ones are exactly the ones you'll miss.
>
> So here's the protocol. Three steps. Memorise it. Whenever the model gives you a specific fact — a citation, a statistic, a date, a quote, anything you'd need to defend in a meeting — do these three things. One: ask the model to cite its source. Two: search for that source — make sure it actually exists. Three, and this is the step everyone skips: open the source and check that it actually says what the model claimed. Step three catches what step two misses. Models will sometimes cite a real paper that doesn't say what they claim it says. That third step takes ninety seconds. It will save you from sending bad facts to your boss, your investors, your students, your patients."

**Speaker notes:** slow down here. This is the second non-negotiable result of the course.
If they remember nothing else from Module 1, they should remember the three-step protocol.

---

### Slide 14 · KEEP + 🔴 P0 PATCH · Context window

**Patch:** replace the four model-limit numbers with current ones, and reframe the surrounding narrative.

```
Model limits (May 2026)

GPT-5.5        — 1M tokens (~2,200 pages)
Claude Opus 4.7 — 1M tokens
Gemini 3.1 Pro  — 1M tokens
Llama 4 Scout   — 10M tokens (largest publicly available)

What this means now
For most everyday work, context window is no longer the
bottleneck. You can paste entire books, multi-document
repositories, or full meeting transcripts and the model
will read them all.
```

**Speaker script (≈2.5 min):**

> "Context window is the model's working memory — the maximum amount of text it can read at once. Your prompt plus any documents you paste plus its response, all together, has to fit in the window. Eighteen months ago this was a serious constraint. Frontier models had 128- or 200-thousand-token windows. You had to chunk documents. You had to summarise the summaries.
>
> That has changed. All four frontier models now ship with a one-million-token window — that's about 2,200 pages, or six to eight novels' worth of text. Llama 4 Scout, on the open-source side, has a ten-million-token window. For almost everything most of you will do, context window is no longer the bottleneck.
>
> What you should still remember: in a new chat, the model starts blank. It does not remember last week's conversation unless you paste it in or unless you're using a tool with persistent memory. Each new session is a fresh window."

---

### Slide 15 · KEEP + 🔴 P0 PATCH · Knowledge cutoff

**Patch:** replace the three dates and update the analogy.

```
Knowledge cutoffs (current frontier models)

GPT-5.5        — December 2025
Claude Opus 4.7 — early-to-mid 2025
Gemini 3.1 Pro  — January 2025

Workaround: most chat tools can now search the web in real time.
Use that for recent news, prices, regulations, breaking events.
```

**Speaker script (≈2 min):**

> "Every LLM has a training cutoff — the last date its training data includes. Cutoffs in 2026 are tightening: most frontier models are within six to twelve months of current. Still, for breaking news, current prices, this week's regulations — they don't know. They literally cannot know unless they search.
>
> The workaround is real-time web search. ChatGPT, Claude, Gemini, Perplexity — all of them can browse the web now. For factual lookups about anything recent, lean on that. For reasoning, writing, summarising, brainstorming — anything that doesn't depend on this week's news — the cutoff doesn't matter.
>
> Useful analogy: think of an LLM as a brilliant colleague who just got back from a six- to twelve-month sabbatical. Sharp, well-read, great judgement — but hasn't seen the news since they left. That's roughly the relationship you have with the model."

---

### Slide 16 · COMPRESS · Module 1 summary

**On-screen:** keep the checklist but skip the right-side exercise card — that pushes us into break time.

**Speaker script (≈45 sec):**

> "Quick recap before we break. AI contains ML contains deep learning contains LLMs. LLMs predict the next token. The transformer architecture and the attention mechanism are why this works at scale. Hallucinations are real — verify in three steps. Context window is no longer the bottleneck. Knowledge cutoff still is, for recent events. We're going to take ten minutes — I'll be back at the top of the hour. When you come back, we move into the part of the day that changes your work."

**Trigger the break.** Don't take questions before the break — push them to after.

---

## Break + capture (0:45–0:55)

The companion shows a 10-minute countdown timer and a "one thing I want to remember" capture field. Participants who fill it in have a 4× higher chance of retaining the concept at T+30. Don't oversell it; just point at it.
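If a group-C question about "will my document fit?" comes up around the break, the context-window arithmetic from slides 9 and 14 is easy to demo. A minimal sketch — the function name, the ~500 tokens-per-page figure, and the 1M-token default are illustrative assumptions from this deck, not vendor numbers:

```python
# "Does it fit?" check for a context window, using the deck's rough figures
# (~500 tokens per page, 1M-token frontier windows). Illustrative only.
def fits_context(num_pages: int, window_tokens: int = 1_000_000,
                 tokens_per_page: int = 500) -> bool:
    """Return True if a document of num_pages fits in the context window."""
    return num_pages * tokens_per_page <= window_tokens

print(fits_context(400))    # -> True: a long book fits easily
print(fits_context(3_000))  # -> False: ~1.5M tokens overflows a 1M window
```

Remember the window also has to hold the prompt and the response, so in practice leave headroom rather than filling it to the last token.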
---

## Module 2: How to Use AI (0:55–2:05)

### Slide 17 · CUT · Module 2 header

Skip. Open the segment with: *"Welcome back. Now the part you came for."*

---

### Slide NEW-A · NEW · Day 1 — What to try first

This slide doesn't exist yet. Insert it at the start of Module 2 to bridge from theory to practice.

**On-screen:**

```
Day 1 — what's a good first task?

Pick something you already do, that takes 10–30 minutes,
and that has a draft-then-polish shape.

Five reliable starters
1. Rewrite a difficult email you've been putting off.
2. Summarise a long document you have open right now.
3. Draft a meeting agenda from a list of topics.
4. Generate interview questions for a candidate or guest.
5. Prepare for a difficult conversation — get the AI to role-play it.

The most banal task is the most valuable one. Boring is the point.
```

**Speaker script (≈2.5 min):**

> "Before we go into how to prompt, let me answer the question I get every workshop: what should I actually try first? Here's the rule. Pick something you already do, that takes ten to thirty minutes, and that has a draft-then-polish shape. Not something you've never done. Not your most important project. Boring is the point. The boring task is where you have a baseline to compare against — you already know what 'good' looks like.
>
> Five starters that almost always work. One: rewrite a difficult email — declining something, asking for a raise, addressing a complaint. Two: summarise a long document you already have open. Three: draft a meeting agenda. Four: generate interview questions. Five — and this one's underrated — prepare for a difficult conversation by getting the AI to role-play the other side.
>
> Some of you brought a task with you, as I asked in the pre-work. Good. Hold onto it. We're going to come back to it in Lab 3 in about 50 minutes."

**Speaker notes:** this slide is the single biggest behaviour-change lever in the day.
Most workshop attendees don't try AI again because they can't think of what to try. This slide pre-answers the question.

---

### Slide 18 · KEEP + PATCH · The AI tools landscape

**Patch:** update the four cards with current tool names and add MCP at the end.

**Card 1 — Chat interfaces:**

> ChatGPT (chat.openai.com) · Claude (claude.ai) · Gemini (gemini.google.com) — open a browser, type, get a response. No setup. Start here.

**Card 2 — Search-augmented:**

> Perplexity (perplexity.ai) — AI + real-time web search + citations. ChatGPT and Claude both also have built-in web search now. Best for research and current events.

**Card 3 — Embedded in your tools:**

> GitHub Copilot · Microsoft Copilot in Office · Notion AI · Grammarly · **Claude for Excel** · **Claude for PowerPoint** · **Claude in Chrome** — the AI comes to where you already work.

**Card 4 — Autonomous agents:**

> **Claude Code** (terminal coding agent) · **Cowork** (desktop file/task agent) · Cursor · Devin · Replit Agent — AI that takes action, not just talks. **MCP (Model Context Protocol)** is the emerging standard for connecting LLMs to your tools — you'll hear this acronym a lot in 2026.

**Speaker script (≈3 min):**

> "Where the tools live. Four buckets. Chat — open a browser, type. That's where you start. Search-augmented — Perplexity is the cleanest one; ChatGPT and Claude both have web search built in now. Embedded — the AI shows up inside the tools you already use: Microsoft Copilot in Office; Anthropic now ships Claude inside Excel, PowerPoint, and Chrome; Google Gemini is everywhere across Workspace.
>
> Last bucket — agents. This is the frontier shift. An agent doesn't just answer questions; it takes actions. Claude Code can write, run, and debug code in your terminal. Cowork can organise files and execute multi-step tasks on your desktop. These are still early.
> The acronym to watch is MCP — Model Context Protocol — which is becoming the standard way LLMs connect to your tools. We'll come back to that near the end.
>
> Practical advice: don't try to use all of these. Pick one chat tool — ChatGPT or Claude — and go deep. You can switch later. Switching costs are essentially zero."

---

### Slide 19 · KEEP · Chat interface anatomy

**Speaker script (≈90 sec):**

> "The chat interface is deceptively simple — there's a box, you type, the model replies. The depth is in how you use it. Three things to internalise. One: every new chat is fresh memory. The model doesn't remember last week. If you want context, you paste it. Two: it's a conversation, not a form. You can follow up. 'Make it shorter.' 'More formal.' 'Try again, but in Romanian.' Each turn builds on the last. Three: attachments work. PDFs, images, spreadsheets — you can upload them and the model reads them as part of the context. We'll use that in Lab 3."

---

### Slide 20 · KEEP · What is a prompt?

**Speaker script (≈90 sec):**

> "A prompt is everything you send to the model. It can be a question, a paragraph of context, a role you're asking it to play, a document, or all of those combined. The quality of the response is almost entirely a function of the quality of the prompt. This is the most important skill in the course, and it's the most undertrained skill in the entire workforce right now.
>
> Useful frame: imagine you're briefing a brilliant freelancer who has never met you, doesn't know your company, doesn't know what 'done' looks like, and only sees the brief. The quality of their work is the quality of your brief. That's the relationship. Now we're going to spend the next forty minutes on five principles for writing better briefs."

---

### Slide 21 · KEEP · Principle 1 — Be Specific

**Speaker script (≈2.5 min):**

> "Vague in, vague out. Look at the two columns. 'Help me with my email' versus 'Rewrite this email to sound more direct.
> Remove apologetic language. Max five sentences.' Same intent. Wildly different output. The right column gives the model four things to optimise for: the task, the audience, the format, and the constraints. That's actually the framework for the next slide — let me draw it out."

---

### Slide 22 · KEEP · Principle 2 — RTFC

This is the workshop's flagship slide. Drill it.

**Speaker script (≈3.5 min):**

> "Here's the framework. RTFC. Role, Task, Format, Constraints. If you remember only one thing from today, remember these four letters. Every time you write a prompt for anything that matters, run through them in your head.
>
> Role — who do you want the model to be? 'Act as a senior product manager.' 'Act as a corporate lawyer reviewing a contract.' 'Act as a sceptical investor.' The role primes the model to draw on a specific kind of knowledge and tone.
>
> Task — what specifically do you want it to do? Not 'help me with my deck.' That's not a task. 'Write a one-page brief outlining the problem, solution, key metrics, and risks.' That's a task.
>
> Format — what shape is the output? Bullets? A table? An email with a subject line? A 200-word executive summary? Always say it. Models default to whatever shape is most common in their training data, which is rarely what you want.
>
> Constraints — what are the boundaries? Word count. Tone. Audience. Things to avoid. 'Under 400 words.' 'No jargon.' 'Audience is our non-technical CEO.' 'Don't use the word synergy.'
>
> Look at the example at the bottom. Read it slowly. Notice how every clause is doing one of those four jobs. That's a real working prompt. You can use it as a template — copy the structure, change the nouns.
>
> In the next two minutes, we're going to do this for real."

**Speaker notes:** end on the transition into Lab 1. Don't take questions here — push to the lab.

---

### LAB 1 · NEW · Vague → Specific

**Companion reveals the lab panel.** Visible to participants.
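Optional facilitator aid: if a group-C participant asks whether RTFC is mechanical enough to template, a minimal sketch makes the point that the four parts just concatenate into a brief. Everything here — the function name and the example values — is illustrative, not from the deck or any vendor API:

```python
# RTFC prompt assembler — a sketch showing the framework is mechanical.
# All names and example values are illustrative.
def rtfc_prompt(role: str, task: str, fmt: str, constraints: list[str]) -> str:
    """Assemble a prompt from the four RTFC parts: Role, Task, Format, Constraints."""
    return "\n".join([
        f"Act as {role}.",
        f"Task: {task}",
        f"Format: {fmt}",
        "Constraints: " + "; ".join(constraints) + ".",
    ])

print(rtfc_prompt(
    role="a senior product manager",
    task="write a one-page brief outlining the problem, solution, key metrics, and risks",
    fmt="four short sections with bold headers",
    constraints=["under 400 words", "no jargon", "audience is our non-technical CEO"],
))
```

The template is a teaching prop, not a recommendation to script your prompts — the point is that every clause in a good prompt is doing one of the four RTFC jobs.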
**Lab brief on screen:**

```
LAB 1 — Vague → Specific (8 minutes)

START with this vague prompt:
"Write something about marketing."

REWRITE it using RTFC.
Pick any context you want — your real industry, or invent one.
The point is to feel the structure.

Run your rewritten prompt in your chosen LLM.

When you're done, paste your prompt into the chat
(not the model's response — just your prompt).
We'll read 2 or 3 aloud.
```

**Facilitator script for the lab transition (≈45 sec):**

> "Eight minutes. Switch tabs to your LLM — ChatGPT, Claude, doesn't matter. Take the prompt 'Write something about marketing' and rewrite it using RTFC. Role, Task, Format, Constraints. Run it. Paste your *rewritten prompt* into our chat when you're done. I'll pick two or three to read aloud. Camera off, chat open. Eight minutes starts now."

**Mid-lab moves:**

- T+3 min: post into chat: *"Halfway. If you're stuck, pick one role and start there."*
- T+6 min: post: *"Two minutes. Drop your prompt in the chat."*
- T+8 min: read 2–3 aloud, including one strong example. *"Notice the structure — they all have a role, a task, a format, a constraint. None of them is genius. That's the point."*

**Exit condition:** at least three prompts in the chat before moving on. If fewer, extend by 2 min.

---

### Slide 23 · KEEP · Principle 3 — Give Examples

**Speaker script (≈2 min):**

> "Third principle. Show, don't just tell. This is called few-shot prompting in the literature. If you can give the model even one or two examples of what good output looks like, the consistency of your results jumps dramatically. Especially for things like tone, format, voice, style — where 'show' is way more efficient than 'tell.'
>
> Look at the example. Left side: 'Write a meeting title for our Q3 planning session.' Output: 'Q3 Planning Session.' Garbage. Generic.
> Right side: same task, but with two examples of how your team actually names meetings — 'Shipping or Sinking: H1 Retrospective,' 'The Money Slide: Investor Prep.' Now the model knows what tone to hit. Output is in your voice, not the model's default voice."

---

### Slide 24 · COMPRESS · Principle 4 — Ask for a Format

**Speaker script (≈90 sec):**

> "Fourth: ask for a format. Specifically. Bullets, table, numbered list, JSON, comparison grid, executive summary in three sentences, email with subject line. Whatever shape the output should take, just say it. The model can produce almost any format — but only if you ask. Two seconds in the prompt saves five minutes of reformatting later."

---

### Slide 25 · KEEP · Principle 5 — Iterate

**Speaker script (≈2.5 min):**

> "Last principle, and the one most people get wrong. AI is a conversation, not a vending machine. Most people give up after the first response. The best outputs come after two or three follow-up turns. That is normal. That is not a sign the AI failed — it's a sign you're using it correctly.
>
> Read through the example. Turn one: a generic job description. Turn two: 'Make it more human. Remove the phrase dynamic environment.' Better. Turn three: 'Cut responsibilities to max six bullets. Add what makes our team unique — fully remote, four-day week, no meetings before 10am.' Now it's actually yours.
>
> Useful follow-up phrases to keep in your back pocket: 'Make it shorter.' 'More formal.' 'Add an example.' 'Explain why you structured it this way.' 'Give me three alternatives.' 'What's missing?' That last one — 'what's missing?' — is criminally underused. The model will often surface gaps you didn't know existed.
>
> OK. Lab 2. We're going to do this for real."

---

### LAB 2 · NEW · Three-turn improvement

**Lab brief on screen:**

```
LAB 2 — Three-turn improvement (12 minutes)

Take your RTFC prompt from Lab 1,
OR start fresh with a new task.
Run THREE follow-up turns on the response:

Turn 2 — change the format or length
Turn 3 — change the tone or audience
Turn 4 — ask "what's missing?"

In the chat, paste:
One sentence on what changed turn-to-turn.
```

**Facilitator script for the lab transition (≈45 sec):**

> "Twelve minutes. Take your prompt from Lab 1 or start fresh — your choice. Run the first turn, then iterate three times. Each turn changes one thing: format, tone, then 'what's missing.' Pay attention to how the output evolves. Drop one sentence in chat about what changed. Twelve minutes."

**Mid-lab moves:**

- T+5 min: *"Halfway. Make sure you've done at least one 'what's missing' turn — that's where the gold is."*
- T+10 min: *"Two minutes. One sentence in chat."*

**Exit condition:** read 2–3 chat sentences aloud. The takeaway you reinforce: *"Notice — none of you got the best output on turn one. That's the lesson."*

---

### Slides 26–29 · MERGE · Use cases tour

Compress all four use-case slides (writing, research, brainstorming, coding) into a single fast tour. Don't show four slides — show one summary slide with four quadrants.

**On-screen (replacement slide):**

```
Where AI saves people the most time

✍️ Writing & comms
Email drafts · doc summaries · presentations · translation

🔍 Research & learning
Explain concepts · compare options · meeting prep

💡 Brainstorming
Name generation · devil's advocate · 10× thinking · reframing

💻 Coding (non-coders too)
Spreadsheet formulas · simple scripts · explain code · debug

The pattern: anything with a draft-then-polish shape.
Anything that's tedious. Anything you've been avoiding.
```

**Speaker script (≈4 min):**

> "Fast tour. Four categories where AI changes the work for most people. Writing and communication — emails, summaries, presentations, translation. This is where most of you will get the biggest immediate wins. The hour you used to spend rewriting a touchy email becomes ten minutes.
>
> Research and learning — explain a concept, compare two options, prepare for a meeting with someone whose background you don't know. The world's most patient tutor, twenty-four seven, in whatever language you want.
>
> Brainstorming — names, devil's advocate, 'what would a ten-times version of this look like?' AI doesn't replace your creativity. It removes the blank-page problem. You react and refine.
>
> Coding, even for non-coders. 'Write me a Google Sheets formula that highlights duplicates in red.' 'Write a Python script that combines all the CSVs in this folder.' Done. The friction between 'I wish a computer would do this' and 'a computer is doing this' has collapsed.
>
> The pattern across all four: anything with a draft-then-polish shape. Anything tedious. Anything you've been avoiding."

---

### Slide 30 · KEEP · What AI is NOT good at

**Speaker script (≈3 min):**

> "Important counter-balance. Things to be careful with. One: AI is not a search engine. It generates based on training, it doesn't look things up — unless it's a tool with web search turned on. Even then, verify. Two: not a specialist. Don't take medical, legal, or financial decisions on AI advice without a human professional reviewing. The model will be wrong sometimes, and it won't warn you when it is. Three: not always logical. Simple arithmetic, counting letters, spatial reasoning — these are still weak spots. Famously: 'how many r's in strawberry' has been wrong for years across models. Test before trusting on precise tasks. Four: not unbiased. Trained on human text, inherits human biases. Underrepresents some cultures and viewpoints. Critically evaluate outputs on anything sensitive.
>
> Bottom line: AI is a first draft, not a final answer. Your judgment, your expertise, your domain knowledge — those are what make the output safe to act on."

---

### Slide 31 · KEEP + PATCH · Privacy

**Patch:** update the "safer options" card to mention Team/Enterprise plans explicitly.
```
Safer options
• Disable chat history in ChatGPT settings (Settings → Data Controls)
• Claude.ai Free/Pro: consumer training defaults have changed over time;
  check your privacy settings and opt out if needed
• ChatGPT Team / Enterprise plans contractually exclude training use
• Claude Team / Enterprise plans contractually exclude training use
• For maximum control: run local models via Ollama or LM Studio
```

**Speaker script (≈2.5 min):**

> "Privacy. Important. On free and individual paid tiers, your conversations may be used to improve future models. Policies vary by provider — read them. The default assumption should be: anything you paste might be used in training.
>
> Never paste customer PII, passwords, API keys, company financial data unless approved by your data team, medical records, proprietary source code without checking policy, or anything covered by an NDA. This isn't theoretical — there have been real incidents at large companies.
>
> Safer options. One: disable chat history in your settings — most tools let you. Two: business and enterprise plans contractually exclude training use. ChatGPT Team, Claude Team, ChatGPT Enterprise, Claude Enterprise — none of these train on your data. Three: anonymise before pasting. 'John Smith at ACME Corp' becomes 'Customer X at Company Y.' You get the same AI help without the risk. Four: for maximum control, run local models — Ollama, LM Studio. Nothing leaves your machine. Slower and weaker than frontier models, but truly private.
>
> In Lab 3 in a minute, when you bring your real task — anonymise it first. That's a discipline worth practising from day one."

---

### Slide 32 · KEEP · Human in the Loop

**Speaker script (≈2.5 min):**

> "The most important mental model in this entire course. Human in the loop. A human reviews, validates, and takes responsibility for any AI output before it has real-world effect. The AI drafts. You decide.
>
> What AI should draft: first drafts of documents, research summaries you review, code to test before deploying, options and alternatives to evaluate, analysis to validate with your expertise. What humans must decide: whether the output is accurate, whether it's appropriate to send or publish, ethical and legal responsibility, impact on real people, final approval of anything consequential.
>
> Here's the skill shift, and it's huge. Your value used to be 'can you write a good email?' AI can. Your value is now 'can you judge whether this email is right?' That judgment — your domain expertise, your context, your taste — is the irreplaceable part. Lean into it."

---

### Slide NEW-B · NEW · What's next — MCP & connectors

**On-screen:**

```
What's next — AI that does, not just answers

The shift from CHAT to AGENTS is happening now.

MCP (Model Context Protocol)
A standard for connecting LLMs to your tools.
Gmail · Calendar · Slack · Drive · GitHub · Notion · your databases.
Released by Anthropic in late 2024, adopted across the industry.

What it unlocks
"Pull last week's sales from the CRM and draft the weekly summary."
"Find every email about Project Apollo and summarise the thread."
"Schedule a 30-min slot with everyone in this Slack channel next week."

You'll hear this acronym a lot in 2026. Now you know what it is.
```

**Speaker script (≈2.5 min):**

> "Quick frontier glimpse before our last lab. The shift happening right now in AI is from chat to agents. Instead of 'tell me about X,' it's 'go do X.' The acronym to know is MCP — Model Context Protocol. It's a standard that lets LLMs connect to your tools — your email, your calendar, your Slack, your databases. Anthropic released it in late 2024 and OpenAI, Google, and most major tools have adopted it.
>
> What it unlocks: instead of pasting context into the model, the model goes and fetches the context itself. 'Pull last week's sales from the CRM and draft the weekly summary.'
> 'Find every email about Project Apollo and summarise the thread.' That's not the future, that's now.
>
> This is the next layer of the iceberg — agents, connectors, automations. We don't have time today to go deep on it. The advanced course covers it. For now, just recognise the acronym when you hear it: MCP. You're ahead of 95% of people if you know what it stands for."

---

### LAB 3 · NEW · Your real task

The lab the whole course pivots on. Twenty minutes.

**Lab brief on screen:**

```
LAB 3 — Your real task (20 minutes)

The task you brought (or pick one of the five starters)
        ↓
Write your prompt using RTFC. Run it.
Iterate at least twice. Produce a real, usable output.

If you don't have a task — use one of these:
1. Rewrite a difficult email
2. Summarise a long doc you already have open
3. Draft a meeting agenda
4. Generate interview questions
5. Role-play a difficult conversation

REMEMBER:
✓ Anonymise any names, companies, or sensitive data
✓ RTFC — Role, Task, Format, Constraints
✓ At least 2 iteration turns
✗ Don't paste real customer data, API keys, or anything under NDA

When you have your output, drop one sentence in chat:
"My task was X. The biggest surprise was Y."
```

**Facilitator script for the lab transition (≈75 sec):**

> "OK. The lab the whole course is built around. Twenty minutes. You take your task — the one you brought with you, or pick one of the five starters — and you run it. RTFC for the prompt, at least two iteration turns, real output.
>
> Three things before you start. One: anonymise. If your real task involves customer names, company financials, anything sensitive — change the names. Two: don't aim for the perfect output. Aim for *better than you'd have done without AI.* That's the bar. Three: if you get stuck, drop a question in chat — I'll triage as I see them.
>
> Twenty minutes. When you have something usable, drop one sentence in chat: 'My task was X. The biggest surprise was Y.'
> Cameras and mics off, work mode on. Twenty minutes."

**Mid-lab moves:**

- T+5 min: scan chat for stuck people, post 1–2 quick triage tips publicly.
- T+10 min: *"Halfway. If you're on turn one still, push to turn two — that's where the real lift happens."*
- T+15 min: *"Five minutes. Aim to land your output."*
- T+18 min: *"Two minutes. Drop your 'my task / biggest surprise' sentence in chat."*

**Exit condition:** at least half the cohort has posted a "task / surprise" sentence. Read 3–4 aloud. Affirm them by name. *"That's the win. That's why you're here. Most of you just used AI to do something you'd have done another way ten minutes ago, and it worked."*

---

### Slide 33 · KEEP · Daily workflow

**Speaker script (≈2.5 min):**

> "Home stretch. How to make this stick after you leave today. Week one: use AI to draft one email per day. Paste one document and ask for a summary. Ask it to explain one thing you've been avoiding. That's it — three small uses a day for a week. Week two onwards, expand: use it as a sounding board before decisions, let it structure your agendas, ask it to critique your work before you share it with anyone else.
>
> The mindset shift that actually matters: stop asking 'can AI do this?' Start asking 'what would I need to tell a brilliant assistant to help me with this?' Then type that. That's the whole game."

---

### Slide 34 · KEEP · Resources

**On-screen:** keep as is. The links are still good.

**Speaker script (≈90 sec):**

> "Where to keep learning. Three buckets on the slide. Tools to try, free. Long-form to go deeper. Newsletters to stay current — AI moves fast, check in once a month. And for those of you on the Coaching tier, or those considering the advanced course — that's where we go deep on attention mechanisms, retrieval-augmented generation, embeddings, agents, MCP, building production systems. Don't take the advanced course before you've used what we covered today for at least a month, though.
> Reps first, theory second."

---

### Slide NEW-C · NEW · Your commitment

The final substantive moment. Visible companion typing field, public-ish.

**On-screen:**

```
Type your one-week commitment into the companion.

Format:
"In the next 7 days, I will use AI to ___________________."

Pick something small. Pick something boring.
Pick something you'd do anyway and now you'll do with AI instead.

The companion saves it. I'll send a check-in email in 7 days.
```

**Speaker script (≈2.5 min):**

> "Last working moment of the day. Open your companion. Find the commitment field. Type one sentence: in the next seven days, I will use AI to do this specific thing. Be specific. Not 'I'll use AI more.' That doesn't work. 'I'll use Claude to draft my weekly Monday update' — that works. 'I'll use ChatGPT to prep for my Wednesday 1:1' — that works.
>
> Two minutes. Type it. Hit save. The companion stores it locally on your device. I'll send a check-in email in seven days asking how it went. The single best predictor of whether you'll actually use what you learned today is whether you write down a specific commitment in the next two minutes. So write it down."

**Speaker notes:** silence is fine here. Don't fill it. Let them type. After 2 min: *"OK. The hardest part is now over. From here on, you're on your own — but the workbook and the companion are with you."*

---

### Slide 35 · KEEP, SIMPLIFIED · Thank you

Strip the dense recap blocks — they undersell the closing. Replace with a clean ending.

**On-screen:**

```
The best AI prompt you'll ever write is the next one.

What we did today
✓ Built a mental model of how LLMs work
✓ Five principles · three labs
✓ One real output, produced just now
✓ One written commitment

In your inbox tomorrow
Recording · slides · workbook · the companion link · the T+7 check-in

ai-courses.badita.org
```

**Speaker script (≈60 sec):**

> "We did what I promised. You have a mental model. You have the framework.
> You have an output you just produced. And you have a commitment in writing.
>
> Tomorrow you'll get the recording, the slides, the workbook, and the companion link in your inbox. In seven days you'll get one short email from me asking how it went. Coaching tier — you'll see a calendar link tomorrow to book your 1:1 in the next two weeks.
>
> One last thing. The best prompt you'll ever write is the next one. Every time you use AI, you get slightly better at using AI. Today was the start. Thank you for spending your Monday with me. See you on the other side."

**Speaker notes:** end on warmth, not on slides. Stop sharing your screen on the last word.

---

## Appendix · Quick-reference table of all patches

If you're patching slides individually under time pressure, here's the surgical list. Everything else can ship as-is.

| Slide | Patch | Why |
|---|---|---|
| 5 | "Today" milestone → reasoning, agents, computer use, MCP | Reflects 2026 frontier |
| 7 | `00M+` → `~$100M` (and add 2026 estimate) | Broken character; was unreadable |
| 8 | Remove "1.8 trillion parameters" claim | Unconfirmed leak; OpenAI never disclosed |
| 12 | All four model cards rewritten | All models were 2–3 generations old |
| 13 | Add 3-step verify protocol | The highest-leverage practical insight |
| 14 | All four context-window numbers → 1M / 1M / 1M / 10M | Old numbers were stale |
| 15 | All three knowledge cutoffs → late-2025 / mid-2025 / Jan-2025 | Old dates were stale |
| 18 | Mention Claude apps + MCP in agent card | New tooling landscape |
| 31 | Add Team/Enterprise carve-out to "safer options" | Many participants will be on these plans |

All other slides can run as written in the original deck. The course is in good shape.
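A closing footnote for technically curious facilitators: the MCP mentioned on slide NEW-B speaks JSON-RPC 2.0 between the AI application and a tool server. Below is a hedged sketch of what a single tool call looks like on the wire; the `tools/call` method name follows the published spec, but the tool name and arguments are invented for illustration.

```python
import json

# Sketch of an MCP tool-call request as it travels over the wire.
# MCP uses JSON-RPC 2.0; "tools/call" is the method name from the public spec.
# The tool name and its arguments below are invented for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crm_get_sales",               # hypothetical tool exposed by a server
        "arguments": {"period": "last_week"},  # hypothetical tool input
    },
}

# Serialise exactly as it would be sent to the server.
wire = json.dumps(request)
print(wire)
```

Nothing in the workshop requires this level of detail; it exists so the facilitator can answer "what is MCP actually?" if a participant asks.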