What is SKILL.md? The Complete Guide to AI Skill Files for Claude Code and OpenClaw
If you've used AI coding assistants like Claude Code or OpenClaw, you've probably noticed something: you keep explaining the same tasks over and over. "Generate a commit message in this format." "Write tests following this pattern." "Review code checking for these issues."
SKILL.md files solve this. They're reusable instruction files that teach AI agents how to perform specific tasks — consistently, every time, without re-explaining.
Think of it this way: skills are to AI agents what plugins are to browsers. They extend capability without modifying the core system.
This guide covers everything you need to know — from anatomy to security to writing your first skill file from scratch.
The Anatomy of a SKILL.md File
A SKILL.md file has two parts: YAML frontmatter (metadata) and a Markdown body (instructions).
Here's a complete, annotated example:
```markdown
---
name: commit-message-generator
version: 1.0.0
description: Generate conventional commit messages from staged changes
author: Your Name
platforms: [claude-code]
category: development
tools: [Bash, Read]
---

# Commit Message Generator

Generate clear, conventional commit messages by analyzing staged git changes.

## Instructions

When the user asks you to generate a commit message:

1. Run `git diff --cached --stat` to see which files are staged
2. Run `git diff --cached` to read the actual changes
3. Determine the commit type (feat, fix, refactor, docs, test, chore)
4. Generate a message following Conventional Commits format

## Rules

- ONLY read staged changes (`--cached` flag)
- Do NOT execute any commands other than `git diff` and `git log`
- Do NOT modify any files
- Keep subjects under 72 characters

## Output Format

<type>(<scope>): <subject>

- Bullet points explaining what changed and why
```
Frontmatter Fields Explained
| Field | Required | Description |
|-------|----------|-------------|
| name | Yes | Unique identifier, kebab-case |
| version | Recommended | Semantic versioning (1.0.0) |
| description | Yes | One-line summary of what the skill does |
| author | Recommended | Creator name or handle |
| platforms | Yes | Which AI agents support this (claude-code, openclaw, or both) |
| category | Recommended | Grouping: development, testing, documentation, devops, etc. |
| tools | Important | Which tools the skill needs access to (Bash, Read, Write, WebFetch) |
Body Structure
The Markdown body is where the real instructions live. Good skill files follow this pattern:
- Title — What this skill does (H1)
- Instructions — Step-by-step workflow the AI should follow
- Rules — Explicit constraints and boundaries (what NOT to do)
- Output Format — Expected structure of the result
The rules section is arguably the most important. Without explicit boundaries, an AI agent will use whatever tools and approaches it deems best — which may include actions you never intended.
Platforms That Support Skill Files
Claude Code (Anthropic)
Claude Code looks for SKILL.md files in your project directory. When it finds one, the skill becomes available as a command. Claude Code skills have access to tools like:
- Bash — Execute shell commands
- Read — Read files from the filesystem
- Write — Create or modify files
- WebFetch — Make HTTP requests
The tool access is what makes skills powerful — and what makes security critical.
OpenClaw
OpenClaw supports a similar skill file format with some differences in frontmatter fields. Skills designed for OpenClaw use the openclaw platform tag and may reference different tool names.
Cross-Platform Skills
You can write skills that work on both platforms by specifying `platforms: [claude-code, openclaw]` and using tool instructions that are platform-agnostic. The instructions themselves are natural language, so they translate well across platforms.
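For example, a cross-platform skill's frontmatter might look like this (the skill name and field values here are illustrative):

```yaml
---
name: changelog-writer
version: 1.0.0
description: Draft changelog entries from recent commits
platforms: [claude-code, openclaw]
category: documentation
tools: [Bash, Read]
---
```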
How Skill Files Work Under the Hood
When you place a SKILL.md file in your project root (or a .skills/ directory), here's what happens:
- Discovery — The AI agent scans for SKILL.md files when it starts a session
- Context Injection — The skill's content is loaded into the agent's system context
- Activation — When your request matches the skill's purpose, the agent follows the skill's instructions instead of improvising
- Tool Execution — The agent uses the tools specified in the skill (Bash, Read, Write, etc.) to complete the task
This is why the `tools` field matters. A skill that declares `tools: [Read]` is telling the agent "you only need to read files for this task." A skill that declares `tools: [Bash, Read, Write, WebFetch]` is giving the agent access to your shell, filesystem, and network.
The tools declaration is a trust boundary. More tools = more capability = more risk.
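The discovery step described above can be sketched in a few lines. This is a simplified illustration assuming the two layouts the article mentions (a root-level SKILL.md or a `.skills/` directory); real agents have their own loaders:

```python
from pathlib import Path

def discover_skills(project_root: str) -> list[Path]:
    """Find SKILL.md files in the project root and any .skills/ subdirectories."""
    root = Path(project_root)
    found = []
    top = root / "SKILL.md"  # a single skill placed directly in the project root
    if top.is_file():
        found.append(top)
    # Nested skills, e.g. .skills/code-reviewer/SKILL.md
    found += sorted((root / ".skills").glob("**/SKILL.md"))
    return found
```

Each discovered file's content would then be injected into the agent's system context, which is step two of the lifecycle.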
Writing Your First Skill File
Let's build a practical skill from scratch: a code reviewer that checks staged git changes for common issues.
Step 1: Create the file
Create a file called SKILL.md in your project root (or in a folder like .skills/code-reviewer/SKILL.md).
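Assuming the nested layout, the setup is two commands:

```shell
mkdir -p .skills/code-reviewer
touch .skills/code-reviewer/SKILL.md
```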
Step 2: Define the frontmatter
```yaml
---
name: code-reviewer
version: 1.0.0
description: Review staged code changes for bugs, style issues, and security risks
platforms: [claude-code]
category: development
tools: [Bash, Read]
---
```
We're only requesting Bash (to run git diff) and Read (to examine files for context). No Write — this skill should never modify code. No WebFetch — it doesn't need network access.
Step 3: Write the instructions
```markdown
# Code Reviewer

Review staged code changes and provide actionable feedback.

## Instructions

1. Run `git diff --cached` to see all staged changes
2. For each changed file, analyze:
   - Logic errors and potential bugs
   - Missing error handling
   - Security issues (SQL injection, XSS, unsanitized input)
   - Performance concerns
3. Format findings by severity

## Rules

- ONLY use `git diff` and `git log` commands
- Do NOT modify any files
- Do NOT run the application or tests
- Do NOT access environment variables
- Provide suggestions, never auto-apply changes
```
Step 4: Define the output format
```markdown
## Output Format

For each finding:

- File: path/to/file.ts (line number)
- Severity: HIGH / MEDIUM / LOW
- Category: Bug / Security / Performance / Style
- Issue: Description of what's wrong
- Fix: Suggested resolution
```
Step 5: Test it
Place the file in your project, start a Claude Code session, and ask: "Review my staged changes." The agent will follow your skill's instructions instead of improvising its own review approach.
5 Best Practices for Production Skills
1. Minimal Permissions
Only request the tools your skill actually needs. A commit message generator needs Bash and Read. It does NOT need Write or WebFetch. Every unnecessary tool is an unnecessary attack surface.
2. Explicit Boundaries
Always include a Rules section with clear "Do NOT" statements:
```markdown
## Rules

- Do NOT access files outside the current repository
- Do NOT execute commands other than git operations
- Do NOT read .env files or environment variables
- Do NOT make network requests
```
Explicit negation is more reliable than implicit assumption.
3. Clear Output Formats
Define exactly what the output should look like. Without a format specification, the AI will produce inconsistent results across sessions. Templates with examples work best.
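For instance, the commit-message skill from earlier could pin its output down with a template plus a filled-in example (the example content here is illustrative):

```markdown
## Output Format

<type>(<scope>): <subject>

- Bullet points explaining what changed and why

Example:

fix(parser): handle empty frontmatter blocks

- Return an empty field map instead of raising
- Add a regression test for zero-length frontmatter
```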
4. Version Your Skills
Use semantic versioning in the frontmatter. When you improve a skill, bump the version. This helps when sharing skills with a team or publishing to community hubs.
5. Test Before Trusting
Run your skill on real tasks before relying on it. Check that it:
- Produces consistent output
- Respects the boundaries you defined
- Handles edge cases (empty input, large files, binary files)
The Security Dimension
Here's the uncomfortable truth: a SKILL.md file has the same access as the AI agent itself.
If a skill requests Bash and WebFetch, it can:
- Execute arbitrary shell commands on your machine
- Read your SSH keys and environment variables
- Send data to any external server
- Modify or delete files
Most skill files are benign. But community-shared skills from unknown authors? Those deserve scrutiny.
We wrote a detailed breakdown of 9 security categories every developer should check before installing third-party skills. The short version:
- Check the `tools` field — does it request more than it needs?
- Read the instructions — do they access sensitive files or directories?
- Look for outbound network calls — does it `curl` or `fetch` anywhere?
- Check for obfuscated content — base64 encoding, hex strings, URL-shortened links
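That checklist can be roughed out in code. Here is a naive sketch of a red-flag scanner; the pattern names and rules are illustrative, not SkillForge's actual categories:

```python
import re

RED_FLAGS = {
    "network call": re.compile(r"\b(curl|wget|fetch|WebFetch)\b"),
    "sensitive file": re.compile(r"\.env\b|id_rsa|\.ssh/"),
    "obfuscation": re.compile(r"base64|\\x[0-9a-fA-F]{2}|bit\.ly|tinyurl"),
}

def scan_skill(text: str) -> list[str]:
    """Return the red-flag categories whose patterns appear in a skill file."""
    findings = [name for name, pattern in RED_FLAGS.items() if pattern.search(text)]
    # A tools line granting both shell and network access is a flag of its own
    if re.search(r"tools:\s*\[[^\]]*Bash[^\]]*WebFetch", text) or \
       re.search(r"tools:\s*\[[^\]]*WebFetch[^\]]*Bash", text):
        findings.append("broad tool grant (Bash + WebFetch)")
    return findings
```

A pattern scanner like this only catches the obvious cases; obfuscated or indirect instructions still require reading the skill yourself.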
If reviewing skills manually sounds tedious, automated scanning tools can help. SkillForge's Security Scanner analyzes skill files across 9 security categories and produces a scored report with reasoning for every finding.
Generating Skills Automatically
Writing skill files manually works fine for simple tasks. But for complex skills with detailed instructions, error handling, and security rules, it can take 30-60 minutes per file.
If you'd rather describe what you need in plain English and get a production-ready skill file back, that's exactly what SkillForge does. You describe the task, pick a format (Claude Code, OpenClaw, or both), and get a complete SKILL.md with frontmatter, instructions, rules, and output format — ready to drop into your project.
There's also a catalog of 21 pre-built skills covering common tasks like commit messages, test generation, code review, PR descriptions, and more. Twelve are free to download.
What's Next
The SKILL.md ecosystem is still early. As more developers adopt AI coding assistants, skill files will become as common as .eslintrc or tsconfig.json — a standard part of every project's configuration.
The developers who start building and curating their skill libraries now will have a significant productivity advantage. One well-written skill file can save hours of repetitive explanation across hundreds of coding sessions.
Start with one skill for your most repeated task. Test it. Refine it. Then build your next one.
Want to generate skills from plain English or scan existing ones for security issues? Try SkillForge — free to start.