Autocomplete tools are fine for writing standard boilerplate, but they fall short when you need them to follow your team's specific architecture guidelines. Dumping a 500-line style guide into a system prompt consumes tokens and degrades the model's reasoning capability. You need a way to feed context to the LLM only when the task requires it.
Agent Skills are a lightweight, open format for extending AI agent capabilities with specialized knowledge and workflows. The agentskills.io specification is standardizing this so skills remain portable across different AI products.
At its core, a skill is a folder containing a SKILL.md file. This file has YAML frontmatter (name and description) followed by the instructions that tell the agent how to perform a specific task. Skills can also bundle scripts, reference materials, and templates.
```
my-skill/
├── SKILL.md        # Required: metadata + instructions
├── scripts/        # Optional: executable code
├── references/     # Optional: documentation
├── assets/         # Optional: templates, resources
└── ...
```
The directory depends on which AI tool you're using:
| Tool | Skills directory |
|---|---|
| Cursor | .cursor/skills/<skill-name>/SKILL.md |
| Claude Code | .claude/skills/<skill-name>/SKILL.md |
| GitHub Copilot | .github/skills/<skill-name>/SKILL.md |
| Antigravity | .antigravity/skills/<skill-name>/SKILL.md |
| Generic (agentskills.io) | .skills/<skill-name>/SKILL.md |
AGENTS.md, CLAUDE.md, and Cursor rules all solve the same problem: telling the agent how to behave. The difference is when the context loads.
AGENTS.md is read at the start of every session. Every token in that file counts against your context window for every task, whether you're writing docs, debugging a race condition, or refactoring a database layer.
Skills load on demand. The agent reads only the metadata (name and description) at startup; the full skill body is fetched only when your prompt matches the description. This means:

- Skills that never activate in a session cost almost nothing in tokens.
- Portability: a `.cursor/skills/` folder works in Cursor today and in any future tool that adopts the agentskills.io spec.

The right mental model: AGENTS.md is a standing order; a skill is a specialist you call in when you need them.
The agent works in three stages:
Stage 1 — Discovery: When the agent starts, it scans the skills directory and reads only the YAML frontmatter (name and description) from every SKILL.md. This is a tiny token cost even with dozens of skills.
Stage 2 — Activation: When you send a prompt, the agent compares your request against the description of each skill. If it matches, the full SKILL.md body is loaded and injected into the LLM context alongside your prompt.
Stage 3 — Execution: The LLM now has the skill instructions and follows them. If the skill references scripts or resource files, those are fetched or run on demand as the task progresses.
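The first two stages can be sketched in a few lines of Python. This is an illustration, not any tool's actual implementation: the frontmatter parser is deliberately minimal, and real agents use the LLM itself (not keyword overlap) to decide whether a description matches the prompt.

```python
import pathlib

def parse_frontmatter(skill_md: str) -> dict:
    """Extract simple key: value pairs from the YAML frontmatter block."""
    meta = {}
    lines = skill_md.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def discover(skills_dir: str) -> dict:
    """Stage 1: read only the frontmatter of every SKILL.md."""
    index = {}
    for path in pathlib.Path(skills_dir).glob("*/SKILL.md"):
        meta = parse_frontmatter(path.read_text())
        index[meta["name"]] = {"description": meta["description"], "path": path}
    return index

def activate(index: dict, prompt: str):
    """Stage 2: match the prompt against each description (naive word
    overlap here, purely for illustration) and load the full body on a hit."""
    words = set(prompt.lower().split())
    for entry in index.values():
        if words & set(entry["description"].lower().split()):
            return entry["path"].read_text()  # full body injected into context
    return None
```

Note that until `activate` returns a body, only the few frontmatter lines of each skill have entered the context window.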
First create the directory and file:
```
mkdir -p .agent/skills/frontend-component
touch .agent/skills/frontend-component/SKILL.md
```
The SKILL.md needs YAML frontmatter for discovery, followed by the instructions:
```
---
name: frontend-component
description: Use this skill when building or modifying a UI component.
---
# Frontend component skill example

1. Use our internal CSS Modules system. Do not use Tailwind CSS.
2. Ensure every interactive element includes an `aria-label`.
3. Read `tokens.json` in this directory to get the exact hex codes for our brand colors before styling.
4. After implementation, run `npm run lint:css` to verify compliance.
```
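Step 3 of the skill references a `tokens.json` bundled in the skill directory. A hypothetical version might look like this (the token names and hex values are illustrative, not real brand colors):

```json
{
  "color-primary": "#1a73e8",
  "color-surface": "#ffffff",
  "color-error": "#d93025"
}
```

Bundling data files like this keeps hard facts out of the prose instructions, so the agent reads exact values instead of paraphrasing them.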
The description field is what the agent matches against your prompt at activation time. Make it specific. "Use this skill when building or modifying a UI component" is better than "frontend skill" because the agent needs enough context to trigger the skill when it's relevant and to skip it when it's not.
Here are a few example skills you can create for each stage of the SDLC:
**Specification (`/spec`):** Instruct the agent to interview you before writing code. It clarifies edge cases, API boundaries, and performance constraints, then outputs a strict technical specification.
**Planning (`/plan`):** The agent takes the specification and breaks it down into small, verifiable units, mapping out exact file changes.
**Implementation (`/frontend` or `/backend`):** Enforce the use of specific internal component libraries and architectural rules. This prevents the LLM from generating generic, outdated solutions.
**Testing (`/test`):** Instruct the agent to write unit tests, run the test suite via a shell command, and debug failures before finalizing the feature.
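A testing skill of this kind might be sketched like so. The skill name, file, and `npm test` command are assumptions for illustration; substitute your own test runner:

```
---
name: test-runner
description: Use this skill when writing or fixing tests for a feature.
---
# Testing skill

1. Write unit tests covering the new behavior, including edge cases.
2. Run the suite with `npm test` and capture the output.
3. If any test fails, debug and fix the code before marking the task complete.
```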
Treat skills like third-party dependencies. Because Agent Skills give LLMs the ability to execute scripts and read files, a malicious .py script could exfiltrate environment variables or run destructive bash commands.
⚠️ Always audit skills from external sources before adding them to your `.agent/skills` directory. Check network access settings if you run agents in a sandbox.
Instead of writing everything from scratch, you can start from one of the many open-source skill repositories. I have used a few of the ones below recently.
Start with the Anthropic examples to understand baseline capabilities, then layer in the engineering constraints from Addy Osmani's repository.