
Claude Code Skills — Reusable Capability Packs

Claude Code skills package a repeatable workflow as a markdown description and a tool manifest, letting you load the same capability in every session without retyping long instructions — and share it with every engineer on the team.

In practice

If you find yourself typing the same multi-step instruction more than twice in a week, that workflow is a candidate for a skill. The authoring overhead is low — a markdown file and a short JSON manifest — and the payoff is a one-word invocation that every team member can use.

What a Claude Code skill is

A skill is a self-contained capability extension for the Claude Code CLI. It tells the model what the capability does, when to use it, and what actions are available to carry it out. The model reads the skill description during session startup and can invoke its tools during the conversation, the same way it uses built-in tools like file read or shell execute.

The key difference from a plain prompt is persistence and structure. A prompt lives in your head or in a notes file; a skill lives in the repository, has a defined interface, and travels with the code. When a new engineer joins the team and clones the repository, they get the same skills the rest of the team uses. When a skill is updated — say, the release checklist gains a new step — everyone picks up the change on their next pull.

Skills are also inspectable. Because the manifest is a plain JSON file checked in alongside the source, it is reviewable in a pull request, auditable in a compliance check, and testable like any other configuration artifact. That property matters more than it first appears for teams operating in regulated environments where the exact sequence of AI-assisted actions needs to be traceable.

How skills are structured

The two required files are the description document and the tool manifest. A third file, a schema, is optional but recommended for skills that accept structured input from the user.

The description document is a markdown file. Its first section is a short explanation of what the skill does; subsequent sections can include usage examples, prerequisite checks, and notes on edge cases. The model reads this document when it decides whether to invoke the skill and how to use it, so clarity here pays off directly in invocation accuracy. Keep the description specific: a skill that claims to handle "all release tasks" will be invoked in situations where it does not apply; a skill that describes exactly which steps it performs and under what conditions will be invoked precisely.
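To make that specificity concrete, here is one way the opening of a description document for a changelog skill could look. The structure follows the sections named above (explanation, prerequisites, edge cases); the wording and section titles are illustrative, not a required format.

```markdown
# Changelog generator

Drafts CHANGELOG.md entries from commits made since the last git tag.
Use this skill only when the user asks for release notes or a changelog;
it does not create tags or bump versions.

## Prerequisites
- The repository has at least one tag.
- The working tree is clean.

## Edge cases
- If no tag exists, stop and ask the user for a starting commit.
```

A description scoped this tightly tells the model both when to invoke the skill and, just as importantly, when not to.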

The tool manifest is a JSON file that declares the tools the skill exposes. Each tool has a name, a description, an input schema, and an implementation — which is typically a shell command the CLI runs when the model calls that tool. A minimal manifest for a changelog-generation skill might declare two tools: one that reads the git log since the last tag, and one that writes the formatted output to CHANGELOG.md. The model calls them in sequence; the CLI executes the shell commands; the results flow back into the session.
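Following the fields named above (name, description, input schema, and a shell-command implementation), a manifest for that changelog skill might look like the sketch below. The exact schema and field names are assumptions for illustration; treat this as the shape of a manifest, not a verbatim format.

```json
{
  "skill": "changelog",
  "tools": [
    {
      "name": "read_git_log",
      "description": "List commits made since the most recent tag, one line each.",
      "input_schema": { "type": "object", "properties": {} },
      "implementation": "git log $(git describe --tags --abbrev=0)..HEAD --oneline"
    },
    {
      "name": "write_changelog",
      "description": "Write formatted release notes to CHANGELOG.md.",
      "input_schema": {
        "type": "object",
        "properties": { "content": { "type": "string" } },
        "required": ["content"]
      },
      "implementation": "tee CHANGELOG.md"
    }
  ]
}
```

Note that each tool's description is written for the model, not for a human reader: it is the text the model weighs when deciding which tool to call next.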

Authoring your first skill

Start with a workflow you already run manually and know well. The authoring process is essentially transcription: write down what you do, in the order you do it, and then translate each step into a tool declaration. The description document is the narrative version; the manifest is the structured version. If the two are consistent, the skill will work. If they diverge — the description says the skill sends a Slack notification but the manifest has no tool for that — the model will attempt to describe the gap rather than silently failing, which is a useful early signal during development.

Skill directories conventionally live under .claude/skills/ in the project root, though the path is configurable. Each skill occupies its own subdirectory with the description and manifest files inside. The CLI discovers skills automatically when it starts a session in a repository that contains the skills directory, so there is no registration step beyond placing the files in the right location.
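As a sketch of that layout, the commands below scaffold a directory for a hypothetical changelog skill. Only the .claude/skills/ path is the stated convention; the file names description.md and manifest.json are assumptions for illustration.

```shell
# Conventional location: one subdirectory per skill under .claude/skills/.
mkdir -p .claude/skills/changelog

# Hypothetical file names for the two required files:
# the markdown description and the JSON tool manifest.
touch .claude/skills/changelog/description.md
touch .claude/skills/changelog/manifest.json

# The CLI discovers the skill automatically at session start;
# there is no registration step beyond this layout.
ls .claude/skills/changelog
```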

Testing a new skill is as simple as starting a Claude Code session and asking it to perform the workflow the skill covers. Watch whether it invokes the skill's tools or falls back to ad-hoc steps; if it falls back, the description probably needs to be more specific about the trigger condition. Experience with agentic AI systems consistently shows that ambiguous capability descriptions are a leading cause of incorrect tool selection. Be concrete.

Skill types and examples

The table below shows common skill categories, their typical use in a development workflow, and a concrete example for each.

Skill type | Typical use | Example
Release automation | Automate version bumping, changelog writing, and tag creation | A skill that reads the git log, drafts CHANGELOG.md, and proposes a semver bump
Code review helper | Run pre-review checks before opening a pull request | A skill that lints, runs tests, checks for debug statements, and summarises findings
Migration runner | Apply database or config migrations safely with validation steps | A skill that checks for pending migrations, applies them, and verifies the schema diff
Incident playbook | Walk through a defined incident response sequence | A skill that collects logs, checks service health endpoints, and drafts a status update
Onboarding guide | Walk a new engineer through project setup checks | A skill that verifies dependencies, checks env vars, and confirms the test suite passes

Sharing skills across a team

Because skills live in the repository, sharing them is a pull request. The team reviews the skill definition the same way they would review application code: is the tool manifest accurate? Does the description match what the tools actually do? Are there edge cases the implementation does not handle?

For organisations that want a cross-repository skills registry — a single source of truth that every project can pull from — the enterprise configuration supports a shared registry path that the CLI checks before the project-local directory. Engineers can still author project-specific skills locally; the shared registry provides the vetted baseline. See the enterprise reference for how that registry is administered and how skill invocations are logged.

"We packaged our entire deployment checklist as a Claude Code skill. What used to be a shared Confluence page that nobody updated is now a versioned tool that runs the checks automatically. First time I ran it on a Friday deploy I caught a missing migration I would have missed manually."
— Camille E. Szabo, Applied ML Engineer, Anrova Payments, Toronto

Common questions about Claude Code skills

What is a Claude Code skill?

A skill is a packaged capability extension — a markdown description file and a JSON tool manifest — that the CLI loads on demand. Once loaded, the skill's tools are available in the session just like built-in tools. Skills are stored in the repository, version-controlled, and shareable across a team without any registration step beyond placing the files in the correct directory.

How do I author a Claude Code skill?

Write a markdown description of the workflow and a JSON manifest that declares the tools the skill exposes. Each tool in the manifest has a name, description, input schema, and a shell command implementation. Place both files in .claude/skills/<skill-name>/ in your repository. The CLI discovers skills automatically at session start.

Can Claude Code skills call external APIs?

Yes. A skill's tool manifest can use shell commands as implementations, so a skill can invoke curl, a Python script, or any CLI program that reaches an external service. The model calls the tool, the CLI runs the shell command, and the result returns to the session context. This works for internal REST APIs, database CLIs, and third-party services.
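For instance, a tool declaration whose implementation shells out to curl could look like the fragment below. The endpoint URL is a placeholder, and the manifest field names follow the same assumed shape used throughout this page.

```json
{
  "name": "check_service_health",
  "description": "Fetch the health endpoint of the internal deployments service.",
  "input_schema": { "type": "object", "properties": {} },
  "implementation": "curl -s https://status.example.internal/api/health"
}
```

Anything the shell can reach, a skill tool can reach, so authentication for external calls is handled the same way it would be in a script: environment variables, credential helpers, or a CLI that manages its own tokens.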

How are skills different from MCP servers?

Skills live in the repository and are project-specific. MCP servers are external processes any Claude Code session can connect to, regardless of which repository is open. Skills suit per-project workflows; MCP suits shared infrastructure. The two complement each other and can be used together in the same session.

Do Claude Code skills work in the enterprise tier?

Yes. In the enterprise configuration, skills can be published to a shared registry so every engineer loads the same approved set. Administrators can restrict skills loadable from untrusted sources, and skill invocations appear in the audit log alongside regular shell commands.

Related topics

The Claude Code overview explains the CLI and where skills fit within the broader capability set. The features reference covers skill loading commands and maturity status alongside MCP and sub-agents. For teams standardising on a shared skill library, the teams page explains shared registries and per-repository configuration. The enterprise reference covers centrally managed skill approval and audit logging for regulated environments.

If you are new to Claude Code skills and want a hands-on introduction before reading the authoring details, the Claude Code AI primer gives a fast orientation to the whole product. The install guide is the prerequisite if you have not yet set up the CLI. For the model layer that skills run on top of, the models overview and the API reference provide the necessary background. Find all reference pages through the docs hub.

Start building your first skill

Install Claude Code, identify one workflow you run every week, and spend twenty minutes turning it into a skill. The investment pays back within a day.
