Incorrect password
Jonas Juhl Nielsen · 2026
A full-day workshop on building real things with AI agents.
From sketch to ship.

As a kid, I couldn't stop making
my own video games
(including playing them)
Computer Graphics Artist
& Software Engineer


Pipeline Manager &
Technical Director
(Oscar-nominated Flee)
Technical Producer &
Project Manager


Head of Production
& Technology
Founder, Animation Studio.
Fully freelance & remote-based
With one throughline: intention.
Break — 15 min
Lunch — 1 hour
Break — 15 min
Break — 15 min
Open discussion — share with the room
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
— Andrej Karpathy, Feb 2025
Programming by conversation — describe what you want in plain language, and the AI writes the code.
— General definition
The fastest path from idea to working software — no syntax required, just intention.
— Our definition
The intention is what makes what we build human.
"Build me a house"
→ 1,000,000 possible outcomes
"Build me a lamp"
→ 1,000,000 possible lamps
You iterate on option #4,030,252
Low probability it matches what you actually wanted
The human is out of the loop — the result becomes generic
"Build me a brutalist concrete house with an inner courtyard"
→ 10 possible outcomes
"Build me a minimal Japanese paper floor lamp"
→ 10 possible lamps
You iterate on option #3, then option #2
Each step closer to your initial intention
The human stays in the loop — the result becomes yours
Each extra sense gets it closer to understanding what you want.
... growing
An AI agent that lives in your terminal. You instruct — it executes.
It should have been called Claude Instruct.
25 minutes — vibe code anything from scratch
We regroup at 10:25
npm i -g @anthropic-ai/claude-code
mkdir my-project && cd my-project
claude

Using a different tool? That's fine — the theory is the same.
Need help? Ask your agent.
You just proved you can build. But can you control it?
Next up — Prompt Engineering: Taking Control
Open discussion — share with the room
The process of writing effective instructions for a model, such that it consistently generates content that meets your requirements.
— OpenAI
A relatively new discipline for developing and optimizing prompts to efficiently build with large language models.
— DAIR.AI Prompt Engineering Guide
Prompt engineering is the art of encoding your intention so precisely that the AI has no room to guess.
— Our definition
> Make me a landing page
> Make me a landing page for a coffee shop called Brew. Dark colors, works on phones, single page.
> Make me a landing page for a coffee shop called Brew. Dark colors, works on phones, single page.
I need a hero section, a menu, and a contact form.
> Make me a landing page for a coffee shop called Brew. Dark colors, works on phones, single page.
I need a hero section, a menu, and a contact form.
Put everything in one HTML file. Keep it simple, no frameworks.
> Make me a landing page for a coffee shop called Brew. Dark colors, works on phones, single page.
I need a hero section, a menu, and a contact form.
Put everything in one HTML file. Keep it simple, no frameworks.
Here's a screenshot of the style I'm going for: [image attached]
> Make me a landing page for a coffee shop called Brew. Dark colors, works on phones, single page.
I need a hero section, a menu, and a contact form.
Put everything in one HTML file. Keep it simple, no frameworks.
Here's a screenshot of the style I'm going for: [image attached]
Think through the layout before you start writing code.
> You're a frontend developer who cares about clean, readable code.
Make me a landing page for a coffee shop called Brew. Dark colors, works on phones, single page.
I need a hero section, a menu, and a contact form.
Put everything in one HTML file. Keep it simple, no frameworks.
Here's a screenshot of the style I'm going for: [image attached]
Think through the layout before you start writing code.
| Vague | Intentional |
|---|---|
| "Fix this bug" | "This function returns undefined when the input is empty. Find the cause, fix it, and write a test." |
| "Review this code" | "Review for SQL injection and missing input validation." |
| "Build a form" | "Build a registration form with email validation and inline error messages." |
Map the symptom to the technique.
| Symptom | Fix |
|---|---|
| Too generic or vague | Add specificity & constraints |
| Messy or hard to use | Define output format |
| Missing nuance or edge cases | Add examples (few-shot) |
| Wrong approach or logic errors | Add chain-of-thought |
| Wrong tone or perspective | Assign a role |
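Two of those fixes applied to the Brew prompt from earlier: the lines below are a hedged sketch where the output-format rule and the example menu line are invented for illustration, showing "define output format" plus "add examples (few-shot)".

```
> Make me a landing page for a coffee shop called Brew. Dark colors, works on phones, single page.
Return exactly one HTML file and nothing else, so I can save it directly.
Format every menu item like this example:
Espresso | 32 kr | short and strong
```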
You know what to say. But before you say it — think about how to work.
Think of it as the architect phase.
Think of it as the builder phase.
Plan → iterate → execute → review → fresh agent
Don't steer a sinking ship. Start a new one with a better plan.
Tools are external capabilities the AI can call — file system, terminal, browser, APIs.
bash — run any command in your terminal
read / edit / write — file operations
grep / glob — search the codebase
gh — push code, create PRs, manage issues

Tools are what turn a chatbot into an agent. We'll go deep on MCP in Ch 3 & 4.
25 minutes — rebuild your Ch 1 project with structured prompts
We regroup at 11:40
Need help? Ask your agent.
Next up — Context Engineering: Building the Brain
Open discussion — share with the room
The art of providing all the context for the task to be plausibly solvable by the LLM.
— Tobi Lütke, CEO of Shopify
Context engineering is the discipline of shaping what the AI sees — writing, selecting, compressing, and isolating context for each task.
— Sequoia Capital
A perfect prompt with zero context is still a guess. Context engineering is how you give AI the knowledge to get it right.
— Our definition
Context is not one thing — it's layers. Some are always present, others load on demand.
Context Stack
Context Stack
│
└─ System Prompt ← built into the tool
Context Stack
│
├─ System Prompt ← built into the tool
│
└─ CLAUDE.md ← project root (you write this)
Context Stack
│
├─ System Prompt ← built into the tool
│
├─ CLAUDE.md ← project root (you write this)
│ ├─ src/
│ │ └─ CLAUDE.md ← component rules
│ ├─ tests/
│ │ └─ CLAUDE.md ← test conventions
│ └─ docs/
│ └─ CLAUDE.md ← writing style
Context Stack
│
├─ System Prompt ← built into the tool
│
├─ CLAUDE.md ← project root (you write this)
│ ├─ src/CLAUDE.md
│ ├─ tests/CLAUDE.md
│ └─ docs/CLAUDE.md
│
└─ User Prompt ← your message
Context Stack
│
├─ System Prompt ← built into the tool
│
├─ CLAUDE.md ← project root (you write this)
│ ├─ src/CLAUDE.md
│ ├─ tests/CLAUDE.md
│ └─ docs/CLAUDE.md
│
├─ User Prompt ← your message
│
└─ Tool Results ← lazy-loaded
├─ file reads
├─ search results
└─ web fetches
Context Stack
│
├─ System Prompt ← built into the tool
│
├─ CLAUDE.md ← project root (you write this)
│ ├─ src/CLAUDE.md
│ ├─ tests/CLAUDE.md
│ └─ docs/CLAUDE.md
│
├─ User Prompt ← your message
│
├─ Tool Results ← lazy-loaded
│ ├─ file reads
│ ├─ search results
│ └─ web fetches
│
└─ Conversation History ← prior messages
Not everything is loaded at once — context is lazy-loaded as needed.
Your codebase, way more than the prompt, is the biggest influence on AI's output.
Software quality matters more than ever.
You see architecture. The AI sees tokens.
You know how the pieces connect.
No memory. No map. Every session starts from zero.
What happens when an agent starts a session?
Most agent failures aren't model failures — they're context failures.
Prompt engineering → one message
Context engineering → a system
Every agent follows the same cycle.
1. Read — observe the current state (files, errors, output)
2. Plan — decide what to do next
3. Act — call a tool (write code, run a command, fetch data)
4. Observe — check the result — did it work?
Then repeat. The loop is what makes an agent an agent.
This is why agents can self-correct — they see their own mistakes and try again.
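A toy sketch of that self-correction loop in shell. The numbers are illustrative, not from a real agent run: the "tests" here are faked to pass on the third attempt, and in a real agent the model does the planning.

```shell
# Toy Read → Plan → Act → Observe loop: retry until the check passes.
attempt=0
status="failing"
while [ "$status" = "failing" ] && [ "$attempt" -lt 5 ]; do
  attempt=$((attempt + 1))        # Act: apply a candidate fix
  if [ "$attempt" -ge 3 ]; then   # Observe: fake tests pass on try 3
    status="passing"
  fi
done
echo "done after $attempt attempts: $status"
```

The loop exits as soon as the observation succeeds, which is exactly why a failed tool call isn't fatal: the agent sees the failure and tries again.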
Define what each piece does — and how they connect.
your-project/
└─ ...
~/.claude/
└─ ...
your-project/
└─ CLAUDE.md
~/.claude/
└─ CLAUDE.md ← personal, per employee
your-project/
├─ CLAUDE.md
└─ .claude/
└─ specs/
├─ feature-a.spec.md
└─ feature-b.spec.md
~/.claude/
└─ CLAUDE.md ← personal, per employee
your-project/
├─ CLAUDE.md
└─ .claude/
├─ specs/
│ ├─ feature-a.spec.md
│ └─ feature-b.spec.md
└─ skills/
├─ deploy.md
└─ test.md
~/.claude/
└─ CLAUDE.md ← personal, per employee
your-project/
├─ CLAUDE.md
├─ src/
│ └─ CLAUDE.md ← scoped rules
├─ tests/
│ └─ CLAUDE.md ← test conventions
└─ .claude/
├─ specs/
│ ├─ feature-a.spec.md
│ └─ feature-b.spec.md
└─ skills/
├─ deploy.md
└─ test.md
~/.claude/
└─ CLAUDE.md ← personal, per employee
your-project/
├─ CLAUDE.md
├─ src/
│ └─ CLAUDE.md ← scoped rules
├─ tests/
│ └─ CLAUDE.md ← test conventions
└─ .claude/
├─ specs/
│ ├─ feature-a.spec.md
│ └─ feature-b.spec.md
├─ skills/
│ ├─ deploy.md
│ └─ test.md
└─ settings.json ← tool permissions
~/.claude/
└─ CLAUDE.md ← personal, per employee
From the Claude Code team themselves.
Our team shares a single CLAUDE.md for the entire repo. We check it into Git, and the whole team contributes multiple times a week. Anytime Claude does something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time.
— Boris Cherny, Claude Code team @ Anthropic
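What such a shared CLAUDE.md might look like, as a minimal sketch. All names, commands, and rules below are illustrative, not from a real project:

```markdown
# Project: Brew landing page

## Commands
- `pnpm dev`: start the dev server
- `pnpm test`: run tests before every commit

## Conventions
- Single HTML file, no frameworks
- Dark color palette, mobile-first

## Things Claude got wrong before
- Do not add a build step; this project ships raw HTML
```

The last section is the habit Boris describes: every time the agent misbehaves, the rule that prevents it next time gets checked into Git.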
A skill is a prompt injection — a reusable procedure, loaded on demand when you invoke it.
# Simple skill — single file
.claude/skills/
└─ deploy.md ← one markdown file
You type: /deploy
Claude sees: the full contents of deploy.md
injected into the conversation
# Full skill — folder with resources
.claude/skills/
└─ review-pr/
├─ skill.md ← the prompt (entry point)
├─ checklist.md ← loaded on demand
└─ examples/
├─ good-pr.md ← reference material
└─ bad-pr.md ← anti-patterns
# Inside skill.md
---
description: Review a pull request
tools: [Bash, Read, Grep]
---
You are a code reviewer. Follow these steps:
1. Read the diff with `gh pr diff`
2. Read checklist.md and check each file against it
3. Flag security issues, missing tests
4. Look at examples/good-pr.md for tone & format
5. Post a structured review comment
# ↑ Sibling files aren't auto-loaded.
# The prompt tells Claude when to read them.
# This IS the prompt — injected when you
# type /review-pr
/skill-name — write once, invoke forever
A role + instructions + tools, loaded just-in-time.
You don't have to write every skill from scratch.
# Browse & install from the registry
claude skill install @anthropic/review-pr
claude skill install @company/deploy-aws
claude skill install @community/migrate-db
Skills = what to do. MCP & CLI = the hands to do it with.
An agent without tools is just a chatbot. Tools are how it acts.
Read — read files
Write / Edit — create & modify code
Bash — run shell commands
Grep / Glob — search the codebase
WebFetch — pull content from the web
Task — launch subagents

Every tool call is a turn in the loop. More tools = more capability.
CLI = Command Line Interface — the agent's built-in hands.
git, npm, gh, curl, etc.

npm test
gh pr view 42
psql -c "SELECT count(*) FROM users"
fly deploy --app my-app
If it runs in your terminal, the agent can run it too.
MCP = Model Context Protocol — a standard for connecting agents to external services.
The agent is isolated.
The agent can reach the real world.
MCP turns an LLM into a full-stack engineer with access to your entire toolchain.
Both give the agent hands. They cost differently.
Same task. Same agent. MCP: 114K tokens. CLI: 26.8K.
— Playwright team benchmark
Prefer CLI when both can do the job.
CLAUDE.md, spec.md, and skill.md files
CLAUDE.md files can lazy-load specific related spec.md files for that section of the codebase
These files are your intention — expressed as instructions.
30 minutes — architect your project's context, then one-shot rebuild
We regroup at 13:55
CLAUDE.md for your project
spec file
skill.md

Need help? Ask your agent.
Your AI has a brain and hands. Can you make them work together?
Next up — Agentic Engineering: Building the System
Open discussion — share with the room
"Agentic" because you are not writing code directly — you are orchestrating agents and acting as oversight. "Engineering" to emphasize there is an art and science to it.
— Andrej Karpathy
Agents plan and operate independently, potentially returning to the human for further information or judgement.
— Anthropic, "Building Effective Agents"
Context engineering gave it a brain. Agentic engineering is how you work together.
— Our definition
Why a single agent isn't enough.
One thread of thought. One task at a time. One set of files it can hold.
Context fills up. Specialization is needed. Parallelism speeds things up.
Multiple agents, each with their own brain, coordinating on a shared goal.
Solo developer → development team. Same shift, same reason.
One agent can only do one thing at a time. Subagents unlock parallelism.
Think of it as a lead developer who delegates to juniors — each focused on their own task.
How agents work together.
One agent plans, delegates tasks to specialized agents, collects results.
Example: "Build this feature" → Research agent + Code agent + Test agent
Launch N agents in parallel, gather all results, synthesize.
Example: "Explore 3 approaches" → 3 agents → best solution wins
Agent A → Agent B → Agent C. Each transforms the output of the previous.
Example: Design → Implement → Review → Deploy
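The fan-out pattern can be sketched from the terminal. This is a hypothetical sequence that assumes the `claude -p` headless (print) flag; the prompts and file names are invented for illustration, and real orchestration would usually go through the Task tool instead:

```
claude -p "Explore approach A for the cache layer" > a.md &
claude -p "Explore approach B for the cache layer" > b.md &
claude -p "Explore approach C for the cache layer" > c.md &
wait
claude -p "Compare a.md, b.md and c.md, then recommend one approach"
```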
You are the architect. The agents are the builders.
Agents are fast. Your intention is what makes them good.
Everything you learned today — in one picture.
Vibe Coding
Describe what you want. Iterate
Prompt Eng.
Structure, constraints, precision
Context Eng.
Persistent context and tools
Agentic Eng.
Multi-agent coordination
Each layer builds on the last. Together, they turn a chatbot into a collaborator.
30 minutes — coordinate agents and ship a feature
We regroup at 15:25
Tip: Task tool launches subagents. Plan mode coordinates.
Need help? Ask your agent.
You've built the full stack — brain, hands, and system. Time to look back at what you made.
Next up — Takeaways and the Bonus round.
The tool is fast. Your intention is what makes it good.
Now go build something Monday.
For those who want to go deeper.
Scripts are deterministic — same input, same output. Every time.
Agent outputs are stochastic — same prompt, different result. Every time.
So how do we enforce specific outcomes?
Deterministic guardrails for stochastic agents.
Adding rules to CLAUDE.md reduces the chance of bad behavior — but doesn't prevent it.
Every instruction burns your instruction budget.
Hooks run deterministic code at key points in the execution cycle:
pre-tool-use — before a tool call executes (can block it)
post-tool-use — after a tool call completes
session-start — when a session begins

Move rules from CLAUDE.md → deterministic hooks.
# CLAUDE.md
**Always use `pnpm`, not `npm`.**
**Never run `git push`.**
Burns instruction budget. Still just a suggestion.
# .claude/settings.json
{
  "hooks": {
    "pre-tool-use": [{
      "matcher": "Bash",
      "command": "./hooks/block-npm.sh"
    }]
  }
}
Zero instruction budget. Impossible to bypass.
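What block-npm.sh might contain, as a minimal sketch. The matching logic, messages, and calling convention are assumptions for illustration: here the hook receives the proposed shell command as its first argument, and a non-zero exit code blocks the tool call.

```shell
#!/usr/bin/env sh
# Hypothetical contents of hooks/block-npm.sh.
check_command() {
  case "$1" in
    npm\ *)      echo "blocked: use pnpm instead of npm"; return 2 ;;
    *git\ push*) echo "blocked: git push is not allowed"; return 2 ;;
    *)           echo "ok"; return 0 ;;
  esac
}

check_command "npm install" || true   # prints: blocked: use pnpm instead of npm
check_command "pnpm install"          # prints: ok
```

Unlike a rule in CLAUDE.md, this runs the same way every time: the agent can't talk its way past an exit code.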
Take instructions out of your instruction budget and enforce them deterministically.