Workshop Β· Live online Β· 2.5 hours
Understanding

AI & Large
Language Models

From zero to confident β€” for everyone.
By the end of today, you'll have used AI on a real task from your work.

Florin Bădiță Mon 18 May 2026 ai-courses.badita.org
Welcome Β· 0:00–0:10 02 / 37
One promise

By the end of today, you'll have used AI
on a real task from your work.

01 Β· Mental model

How LLMs actually work β€” the engineering version, in plain language.

35 MIN

02 Β· How to prompt

The RTFC framework. Five principles. Three live labs.

70 MIN

03 Β· Your task

Applied β€” the lab the whole course is built around.

20 MIN

Β· Companion link in chat Β· Questions β†’ chat, batched at module breaks Β· Recording goes out tomorrow
Module 1 Β· How AI works 03 / 37
Vocabulary check

Four nested ideas.
When people say AI, they almost always mean LLM.

The vocabulary is sloppy β€” just know the shape.

AI βŠƒ ML βŠƒ DL βŠƒ LLM Β· 45 seconds of narration Β·
Module 1 Β· How AI works 04 / 37
70 years, compressed

A brief history of AI.

1950s
Turing asks the question
"Can machines think?"
1970s–90s
Two AI winters
The field nearly dies. Twice.
2012
Deep learning works at scale
Image recognition breakthrough.
2017
"Attention is All You Need"
Transformer architecture is born.
2022
ChatGPT launches
The floodgates open.
Today Β· 2026
Reasoning, agents, computer use, MCP
The frontier moves past chat.
Module 1 Β· How AI works 05 / 37
The metaphor

A neural network steals
from your brain β€” in math.

Your brain: ~86 billion neurons, connected by synapses. Learning = strengthening or weakening connections through experience.

The crucial difference

Nobody programs the rules. You don't write "if email contains URGENT then mark spam" β€” you show 10,000 labelled emails and the network figures out the rules itself.

Module 1 Β· How AI works 06 / 37
Two ways to make software smart

From "writing rules" to learning from examples.

The old way Β· brittle

if email contains "URGENT"
  and sender not in contacts
  then mark as spam

Can't handle anything the programmer didn't anticipate.

The new way Β· machine learning

1. Show 10,000 labelled emails
2. Model finds its own patterns
3. Handles examples it's never seen

Same idea, scaled to internet-sized data, makes an LLM.
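For the curious, here is the "learn from labelled examples" idea shrunk to a toy: count which words show up in spam vs not-spam, then score new emails. This is a sketch I'm adding for illustration, not how a real model works β€” real ML is vastly more sophisticated β€” but the shape (train on labels, generalise to unseen input) is the same.

```python
# Toy "machine learning": the rules come from the examples, not the programmer.
from collections import Counter

def train(examples: list[tuple[str, bool]]) -> tuple[Counter, Counter]:
    """Count word frequencies separately for spam and not-spam examples."""
    spam, ham = Counter(), Counter()
    for text, is_spam in examples:
        (spam if is_spam else ham).update(text.lower().split())
    return spam, ham

def score(text: str, spam: Counter, ham: Counter) -> float:
    """Positive score = looks more like the spam examples."""
    return sum(spam[w] - ham[w] for w in text.lower().split())

spam, ham = train([
    ("URGENT claim your prize now", True),
    ("urgent wire transfer needed", True),
    ("lunch meeting tomorrow", False),
    ("quarterly report attached", False),
])
print(score("urgent prize inside", spam, ham) > 0)  # True: scores as spammy
```

Note that nobody wrote "urgent means spam" β€” the counts encode it. Scale the same move up to internet-sized text and next-token prediction, and you get an LLM.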

Training cost Β· why you don't train your own (2026)
~$100M Β· GPT-4, 2023 estimate
$500M–$1B Β· frontier 2026 models (GPT-5.5, Claude Opus 4.7)

The skill is no longer how to build models.
It's how to use them well.

Module 1 Β· How AI works 07 / 37
The simplest definition

An LLM is a system trained
to predict the next token,
given everything that came before.

01 Β· It's autocomplete

…at a scale and sophistication where it starts doing things that look like reasoning.

02 Β· Trained on (almost) everything

Wikipedia, books, GitHub, papers β€” essentially the public internet, compressed as numbers.

03 Β· Capabilities emerge at scale

Multi-step reasoning, translation, code β€” none explicitly programmed in. Above a certain size, they just appear.

"Large" matters

Frontier LLMs are estimated to have hundreds of billions to trillions of parameters β€” exact counts are not publicly disclosed. More parameters = more capacity to learn subtle patterns. Scale changes what's possible.

Module 1 Β· How AI works 08 / 37
Technical reality check

LLMs don't read words. They read tokens.

"Understanding AI is a superpower in 2026"

Under Β· standing Β· AI Β· is Β· a Β· super Β· power Β· in Β· 202 Β· 6

~3–4 characters per token Β· "ChatGPT" = 3 tokens Β· 1 page English β‰ˆ 500 tokens

01 Β· Counts letters badly

"How many r's in strawberry?" β€” the model sees tokens, not letters.

02 Β· Pricing is per token

"1M-token context" β‰ˆ 750,000 words β‰ˆ 6–8 novels.

03 Β· Every model has a ceiling

Maximum tokens per conversation. Coming up in two slides.
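The ~3–4 characters-per-token rule of thumb is enough for back-of-envelope budgeting. A minimal sketch of it β€” real tokenizers give exact counts, and the function name here is mine:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token count from character length (~4 chars per English token)."""
    return round(len(text) / chars_per_token)

# The sentence from this slide: the heuristic lands on 10,
# matching the 10 tokens shown above.
print(estimate_tokens("Understanding AI is a superpower in 2026"))  # 10
```

Good enough for "will this document fit in the context window?"; use the provider's own tokenizer when the exact count matters (pricing, limits).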

Module 1 Β· How AI works 09 / 37
One token at a time, left to right

"What is the capital of France?"

The (0.60) β†’ capital (0.88) β†’ of (0.95) β†’ France (0.98) β†’ is (0.92) β†’ Paris (0.99)

probability of next token at each step

Every response is probabilistic

Ask the same question twice β†’ slightly different answers. By design. The creativity dial (technically: temperature) controls how often the model picks the top choice vs a less likely one.

It both is and isn't reasoning

Not deliberation. Pattern-matching at a scale where the patterns look like reasoning. Brilliant and confidently wrong β€” sometimes in the same breath.

Module 1 Β· How AI works 10 / 37
One concept worth knowing the name of

Attention lets a model, when processing each word,
look at every other word and decide which ones matter.

The architecture built on attention is called a transformer. It now powers basically everything: text, image generators (Midjourney), video (Sora), audio, and AlphaFold for protein structures. Same engine, different fuel.
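The scoring step inside attention is, at heart, small arithmetic: dot products, scaled, pushed through a softmax. A toy sketch with 2-dimensional vectors (real models use hundreds of dimensions and learned projections; the function name and numbers are illustrative):

```python
import math

def attention_weights(query: list[float], keys: list[list[float]]) -> list[float]:
    """Softmax of scaled dot products: how much the query attends to each key."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# The query aligns with the first key, so most attention lands there.
w = attention_weights([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
```

The weights always sum to 1 β€” a budget of attention the model spreads over every other token, recomputed for every word.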

2017 Β· "Attention is All You Need"
Module 1 Β· How AI works 11 / 37
Four players to know

The models you'll encounter.

Snapshot Β· May 2026

OpenAI

GPT-5.5

1M context Β· Cutoff Dec 2025

Default in ChatGPT. Strong agentic coding. GPT-5.5 Pro for harder reasoning. ChatGPT crossed 900M weekly users in Feb 2026.

Anthropic

Claude Opus 4.7

1M context Β· Cutoff early–mid 2025

Frontier. Strong on long docs + following complex instructions. Sonnet 4.6 is the workhorse. Apps: Claude Code, Cowork, Excel, PowerPoint, Chrome.

Google

Gemini 3.1 Pro

1M context Β· Cutoff Jan 2025

Native multimodal: text, image, audio, video. Deeply integrated with Google Workspace (Docs, Gmail, Meet) and Chrome.

Meta Β· open source

Llama 4 (Scout & Maverick)

10M context (Scout) Β· Open weights

Free to download and run. Scout has the largest context window of any open model. Powers many third-party apps via providers like Groq.

Rule of thumb: the best model is the one your team actually uses. Switching costs are essentially zero.

Module 1 Β· How AI works Β· non-negotiable 12 / 37
The most important slide in this course

Hallucinations.

verify before you trust
What it is

The model generates something that sounds plausible but is factually wrong β€” and presents it with full confidence. Made-up citations. Wrong dates. Invented quotes. Fictitious features.

Why it happens

The model's job is plausible next tokens, not true statements.
There is no internal fact-checker.
Confidence in tone tells you nothing about accuracy of content.

Hallucinations are getting less frequent with reasoning models β€” but they have not been eliminated. The most plausible-sounding ones are the most dangerous.

How to verify Β· 3 steps
  1. Ask the model to cite its source. "Where did this number come from?"
  2. Check the source actually exists. Open a tab. Search for it.
  3. Check the source actually says what the model claims. This is the step everyone skips. ~90 seconds. Catches the failures step 2 misses.
Module 1 Β· How AI works 13 / 37
The model's working memory

Context window β€” no longer the bottleneck.

May 2026
max tokens per conversation

GPT-5.5 OpenAI
1M Β· ~2,200 pages
Claude Opus 4.7 Anthropic
1M tokens
Gemini 3.1 Pro Google
1M tokens
Llama 4 Scout Meta Β· open
10M Β· largest open

Unlocks: paste entire books, multi-doc repos, full meeting transcripts β€” the model reads them all.

Still remember: each new chat is fresh. The model does not remember last week unless you paste it in.

Module 1 Β· How AI works 14 / 37
The last date their training data includes

Knowledge cutoff β€” 6 to 12 months ago.

GPT-5.5

Dec 2025

Claude Opus 4.7

Early–mid 2025

Gemini 3.1 Pro

Jan 2025

The workaround

Most chat tools now search the web in real time. Use that for recent news, prices, regulations, breaking events. For everything else, the cutoff doesn't matter.

Useful analogy

Think of an LLM as a brilliant colleague who just got back from a 6–12 month sabbatical. Sharp, well-read, great judgement β€” but hasn't seen the news since they left.

End of Module 1 Β· 45 min 15 / 37
Recap before the break

You now have a mental model.

  • AI βŠƒ ML βŠƒ DL βŠƒ LLM. When people say β€œAI” they mean LLM.
  • LLMs predict the next token, trained on internet-scale text.
  • Transformers + attention β€” same engine, also powers image, video, AlphaFold.
  • Hallucinations are real. Verify in 3 steps: source β†’ exists β†’ says what it claims.
  • Context window is no longer the bottleneck. 1M tokens is standard.
  • Knowledge cutoff still bites for recent events. Use web search.

10-minute break. When you come back: the part that changes your work.

Break Β· 10 minutes
10 MIN

Open the companion β†’ "one thing I want to remember"

BACK AT THE TOP OF THE HOUR Β· QUESTIONS AFTER THE BREAK

Module 2 Β· How to use AI 17 / 37
Welcome back Β· the question I get every workshop

Day 1 β€” what's a good first task?

Pick something you already do, 10–30 minutes, with a draft-then-polish shape.
The boring task is where you have a baseline.

  1. Rewrite a difficult email you've been putting off.
  2. Summarise a long document you have open right now.
  3. Draft a meeting agenda from a list of topics.
  4. Generate interview questions for a candidate or guest.
  5. Prepare for a difficult conversation β€” get the AI to role-play it.
Hold onto your task

Some of you brought a real task from your work, as I asked in the pre-work. Good. We'll come back to it in Lab 3 in about 50 minutes.

The most banal task is the most valuable one. Boring is the point.

Module 2 Β· How to use AI 18 / 37
Where the tools live Β· 2026

The AI tools landscape.

01 Β· Chat

ChatGPT Β· Claude.ai Β· Gemini. Open browser, type, get a response. No setup. Start here.

02 Β· Search-augmented

Perplexity. ChatGPT and Claude also have built-in web search. Best for research and current events.

03 Β· Embedded

GitHub Copilot Β· Microsoft Copilot Β· Notion AI Β· Claude for Excel Β· PowerPoint Β· Chrome. The AI comes to where you already work.

04 Β· Agents

Claude Code Β· Cowork Β· Cursor Β· Devin.
AI that takes action, not just talks.
The acronym: MCP.

Practical advice: don't use all of these. Pick one chat tool and go deep. Switching costs are zero.

Module 2 Β· How to use AI 19 / 37
Anatomy of a chat

Deceptively simple.
The depth is in how you use it.

01 Β· Every new chat is fresh memory

The model doesn't remember last week. If you want context, you paste it.

02 Β· Conversation, not a form

"Make it shorter." "More formal." "Try again, in Romanian." Each turn builds on the last.

03 Β· Attachments work

PDFs, images, spreadsheets β€” upload and the model reads them as context. We'll use that in Lab 3.

Module 2 Β· How to use AI 20 / 37
The most undertrained skill in the workforce right now

A prompt is everything you send to the model.
The quality of the response is the quality of the prompt.

Useful frame

Imagine you're briefing a brilliant freelancer who has never met you, doesn't know your company, doesn't know what "done" looks like, and only sees the brief. The quality of their work is the quality of your brief.

NEXT 40 MIN Β· FIVE PRINCIPLES FOR WRITING BETTER BRIEFS

Module 2 Β· Principle 1 of 5 21 / 37
Principle 1

Be specific.

Vague in, vague out.

01 / 05
βœ• Vague
Help me with my email.

β†’ Generic response. Doesn't know who, what, why, or how long.

βœ“ Specific
Rewrite this email to sound more direct. Remove apologetic language. Max five sentences.

β†’ Four things to optimise for: role, task, format, constraints.

Those four words are actually the framework. Let me draw it out.

Module 2 Β· Principle 2 Β· flagship 22 / 37
If you remember one thing from today

RTFC.

02 / 05
R
Role

Who do you want the model to be? "Act as a senior PM." "Act as a corporate lawyer." Primes a tone and a body of knowledge.

T
Task

What specifically should it do? Not "help with my deck" β€” "Write a one-page brief: problem, solution, metrics, risks."

F
Format

Shape of the output. Bullets? Table? Email with subject line? 200-word exec summary? Say it.

C
Constraints

Boundaries. Word count, tone, audience, things to avoid. "Under 400 words. No jargon. Don't use 'synergy.'"

A real working prompt

[Role] Act as a senior product manager at a B2B SaaS startup. [Task] Write a one-page brief outlining the problem, solution, key metrics, and risks for a new onboarding flow. [Format] Markdown with four numbered sections. [Constraints] Under 400 words. Audience is our non-technical CEO. No jargon.
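If you write many RTFC prompts, the assembly is mechanical enough to script. A minimal sketch (the helper name and parameter names are mine, purely illustrative):

```python
def rtfc_prompt(role: str, task: str, fmt: str, constraints: str) -> str:
    """Assemble the four RTFC parts into one prompt string."""
    return (
        f"Act as {role}. "
        f"{task} "
        f"Format: {fmt}. "
        f"Constraints: {constraints}"
    )

# The working prompt from this slide, rebuilt from its four parts:
prompt = rtfc_prompt(
    role="a senior product manager at a B2B SaaS startup",
    task="Write a one-page brief outlining the problem, solution, "
         "key metrics, and risks for a new onboarding flow.",
    fmt="Markdown with four numbered sections",
    constraints="Under 400 words. Audience is our non-technical CEO. No jargon.",
)
print(prompt)
```

The point isn't the code β€” it's that a good prompt has named, swappable parts. Change one slot, keep the rest.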

Lab 1 of 3 23 / 37
Lab 1

Vague β†’ Specific.

⏱ 8 MIN
START With this vague prompt: Write something about marketing.
REWRITE Use RTFC. Any context β€” your real industry or invented. The point is to feel the structure.
RUN In your chosen LLM (ChatGPT or Claude β€” doesn't matter).
PASTE Your rewritten prompt into chat. (Not the model's response β€” just your prompt.)
β†’ We'll read 2 or 3 aloud. Camera off, chat open. Eight minutes starts now.
Module 2 Β· Principle 3 of 5 24 / 37
Principle 3

Give examples.

Show, don't just tell. (few-shot prompting)

03 / 05
βœ• Tell only
Write a meeting title for our Q3 planning session.

β†’ "Q3 Planning Session."

Generic. Doesn't sound like your team.

βœ“ Show + tell
Write a meeting title for our Q3 planning session.

Examples of how we name meetings:
Β· "Shipping or Sinking: H1 Retrospective"
Β· "The Money Slide: Investor Prep"

β†’ Now the model knows your voice, not its default.

Especially for tone, format, voice, style β€” show is more efficient than tell.
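Few-shot prompting is just concatenation: instruction first, worked examples after. A sketch of the meeting-title prompt above as a reusable helper (function and parameter names are mine):

```python
def few_shot_prompt(instruction: str, header: str, examples: list[str]) -> str:
    """Prepend worked examples so the model imitates your style, not its default."""
    lines = [instruction, "", header]
    lines += [f'Β· "{e}"' for e in examples]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Write a meeting title for our Q3 planning session.",
    "Examples of how we name meetings:",
    ["Shipping or Sinking: H1 Retrospective", "The Money Slide: Investor Prep"],
)
print(prompt)
```

Two or three examples is usually enough; past that, you're spending tokens for diminishing returns.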

Module 2 Β· Principle 4 of 5 25 / 37
Principle 4

Ask for a format.

04 / 05

Bullets Β· table Β· numbered list Β· JSON Β·
comparison grid Β· three-sentence exec summary Β·
email with subject line.

The model can produce almost any format β€” but only if you ask.

Two seconds in the prompt saves five minutes of reformatting later.

Module 2 Β· Principle 5 of 5 26 / 37
Principle 5 Β· the one most people get wrong

Iterate. AI is a conversation, not a vending machine.

05 / 05
Turn 1 A generic job description. "Sounds like every LinkedIn ad."
Turn 2 "Make it more human. Remove the phrase dynamic environment." Better tone, still generic.
Turn 3 "Cut responsibilities to max 6 bullets. Add what makes our team unique β€” remote, 4-day week, no meetings before 10am." Now it's actually yours.
Follow-ups to keep in your pocket

"Make it shorter." Β· "More formal." Β· "Add an example." Β· "Give me three alternatives."

"What's missing?" β€” criminally underused. The model surfaces gaps you didn't know existed.

Lab 2 of 3 27 / 37
Lab 2

Three-turn improvement.

⏱ 12 MIN
START Take your RTFC prompt from Lab 1 β€” or start fresh with a new task.
TURN 2 Run a follow-up that changes the format or length.
TURN 3 Run a follow-up that changes the tone or audience.
TURN 4 Ask "what's missing?" β€” that's where the gold is.
DROP One sentence in chat about what changed turn-to-turn.
β†’ Twelve minutes. Pay attention to how the output evolves.
Module 2 Β· How to use AI 28 / 37
Where AI saves people the most time

Four buckets. Same pattern.

Writing & comms
Email drafts Β· doc summaries Β· presentations Β· translation

Biggest immediate wins for most of you.

Research & learning
Explain concepts Β· compare options Β· meeting prep

World's most patient tutor.

Brainstorming
Name generation Β· devil's advocate Β· 10Γ— thinking Β· reframing

Removes the blank-page problem.

Coding (non-coders too)
Spreadsheet formulas Β· simple scripts Β· explain code Β· debug

"I wish a computer would do this" β†’ it does.

The pattern: anything with a draft-then-polish shape.
Anything tedious. Anything you've been avoiding.

Module 2 Β· Counter-balance 29 / 37
Counter-balance Β· know the limits

What AI is NOT good at.

01 Β· Not a search engine

It generates based on training, doesn't look up β€” unless web search is on. Even then, verify.

02 Β· Not a specialist

Don't take medical, legal, or financial decisions on AI advice without a human professional reviewing.

03 Β· Not always logical

Simple arithmetic, counting letters, spatial reasoning. "How many r's in strawberry?" β€” still weak.

04 Β· Not unbiased

Trained on human text β†’ inherits human biases. Underrepresents some cultures and viewpoints.

First draft, not final answer.
Your judgement makes the output safe to act on.

Module 2 Β· Privacy 30 / 37
Default assumption

Anything you paste might be used in training.

βœ• Never paste
  • Customer PII
  • Passwords or API keys
  • Company financials (without sign-off)
  • Medical records
  • Proprietary source code (check policy)
  • Anything covered by an NDA
βœ“ Safer options
  • Disable chat history (Settings β†’ Data Controls)
  • Claude Free/Pro doesn't train on chats by default
  • ChatGPT Team / Enterprise β€” contractually excluded
  • Claude Team / Enterprise β€” contractually excluded
  • Anonymise before pasting ("Customer X at Company Y")
  • Run local: Ollama, LM Studio

In Lab 3 β€” anonymise. Practise the discipline from day one.

Module 2 Β· The mental model that matters most 31 / 37
Human in the loop

AI drafts. You decide.

What AI should DRAFT
  • First drafts of documents
  • Research summaries you review
  • Code you test before deploying
  • Options and alternatives
  • Analysis you validate with expertise
What humans DECIDE
  • Whether the output is accurate
  • Whether it's appropriate to send
  • Ethical and legal responsibility
  • Impact on real people
  • Final approval of anything consequential

Your value used to be "can you write a good email?" β€” AI can.
Your value is now "can you judge whether this email is right?"

Module 2 Β· Frontier glimpse 32 / 37
What's next

AI that does, not just answers.

The shift from chat to agents is happening now.

MCP
Model Context Protocol

An open standard for connecting LLMs to your tools.

Gmail Calendar Slack Drive GitHub Notion your databases

Released by Anthropic late 2024. Adopted across the industry.

What it unlocks

β†’ Pull last week's sales from the CRM and draft the weekly summary.

β†’ Find every email about Project Apollo and summarise the thread.

β†’ Schedule a 30-min slot with everyone in this Slack channel next week.

You'll hear this acronym a lot in 2026. Now you know what it is.

Lab 3 of 3 Β· the one the course pivots on 33 / 37
Lab 3

Your real task.

⏱ 20 MIN
PICK The task you brought, or one of the five starters from earlier.
PROMPT Write your prompt using RTFC β€” Role, Task, Format, Constraints.
ITERATE At least 2 follow-up turns. Include one what's missing?
SHIP Produce a real, usable output. Aim for better than you'd have done without AI β€” not perfect.
SHARE Drop in chat: "My task was X. The biggest surprise was Y."
βœ• No real customer data, keys, or NDA material Β· βœ“ Anonymise names & companies
β†’ Questions in chat β€” I'll triage as I see them.
Closing Β· Make it stick 34 / 37
How to make it stick after you leave today

A small daily workflow.

Week 1
  • One email per day, drafted with AI
  • One document summarised
  • Ask it to explain one thing you've been avoiding

Three small uses a day. That's it.

Week 2 onwards
  • Sounding board before decisions
  • Structure your agendas
  • Critique your work before sharing

Expand from there.

The mindset shift

Stop asking "can AI do this?"

Start asking "what would I need to tell a brilliant assistant to help me with this?"

Then type that. That's the whole game.

Closing Β· Resources 35 / 37
Where to keep learning

Three buckets.

Tools to try Β· free
  • claude.ai
  • chatgpt.com
  • gemini.google.com
  • perplexity.ai
  • Free tiers are enough to start
Long-form Β· go deeper
  • Co-Intelligence β€” Ethan Mollick
  • learnprompting.org
  • Anthropic's prompt engineering guide
  • OpenAI cookbook (technical)
Stay current Β· monthly
  • One Useful Thing β€” Mollick newsletter
  • The Pragmatic Engineer AI section
  • Latent Space podcast
  • Check in once a month, not daily

Reps first, theory second. Use what we covered today for a month before you take an advanced course.

Closing Β· The last working moment 36 / 37
Open the companion Β· find the commitment field

Type your one-week commitment.

"In the next 7 days,
I will use AI to __________________________."

βœ• DOESN'T WORK

"I'll use AI more."

βœ“ WORKS

"I'll use Claude to draft my weekly Monday update."
"I'll use ChatGPT to prep for my Wednesday 1:1."

The companion saves it locally. I'll send a check-in email in 7 days.

Closing 37 / 37

The best AI prompt you'll ever write
is the next one.

WHAT WE DID TODAY

  • Built a mental model of how LLMs work
  • Five principles Β· three labs
  • One real output, produced just now
  • One written commitment

In your inbox tomorrow: recording Β· slides Β· workbook Β· companion link Β· T+7 check-in

AI-COURSES.BADITA.ORG Β· THANK YOU