“We were stuck on a stubborn Windows install loop for two days. The claude code windows walkthrough was the one page that actually named the shell-profile edit we were missing.” — Alina K. Petrov, Staff Engineer · Nimbora Labs · Denver
Independent reference for Claude AI and the Claude Code toolchain
Plain-language walkthroughs for installing Claude Code on Windows, macOS, and Linux, side-by-side notes on Opus, Sonnet, and Haiku, and a library of Claude Code skills — kept in one place so setup does not eat your afternoon.
Install in under five minutes
Step-by-step claude code install routes for every supported operating system, with a single command path per platform.
Compare every model
Claude Opus, Sonnet, and Haiku side by side, with pricing, context window, and typical latency presented as one table.
Ship AI-assisted code
Configure the CLI for real projects, from first prompt to the pull request — no magic, just the flags that matter.
Cited in the developer press
Who this reference is for
A short read before the walkthroughs — so you know whether this is the hub you came looking for.
This site is written for developers who want to get productive with an AI-assisted coding workflow without spending half a day stitching together blog posts from seven different release dates. We assume you can open a terminal, edit a dotfile, and read a JSON error message — nothing beyond that. If you are new to command-line tooling altogether, the getting-started walkthrough takes the long route and explains each step; if you are an experienced engineer, the install hub will feel fast and you can jump straight to the flag table.
The content here is deliberately narrow. We do not try to cover every framework integration, every chat interface, or every prompt-engineering school of thought. What we keep close is the developer toolchain — install, configure, run, extend — plus the model and pricing reference you need to explain a decision to your tech lead. Everything else links out to the vendor documentation or to a cited academic source such as MIT CSAIL when we want to anchor a claim about system behaviour.
Finally, a word on independence: this hub is not operated by Anthropic, is not an official property, and does not have privileged access to internal roadmaps. What you get is an editorial take — the kind of reference a senior engineer writes for the junior one joining their team — kept consistent across every page so the vocabulary does not drift between one walkthrough and the next. Every page has been reviewed against the vendor's current public documentation, and where the two diverge, we err on the side of linking out rather than paraphrasing; release notes move faster than a static reference can, and pretending otherwise only costs you troubleshooting time later.
Install Claude Code on any desktop
One command per operating system is the ambition, and in practice the CLI gets very close: it ships as a Node package, so a single npm or brew line puts the binary on your path. The longer story lives in the install walkthroughs, which cover shell profile edits, keychain handling, and the rare Windows-specific quirk you will hit when setting up the tooling on an unfamiliar machine. Each walkthrough also lists the minimum Node version that currently works, because the runtime matters more than the release notes usually admit — a slightly old Node install can silently break the auth flow, and that single detail fuels the longest troubleshooting threads on the forums.
Windows ships two practical routes: the native terminal path for developers already on PowerShell, and a WSL-based route that avoids line-ending headaches for teams that prefer a Linux shell. macOS ships one clean path through the standard terminal, with a second shortcut for Homebrew users who like a package manager to own their binaries. Linux works on any distribution with a recent Node runtime — the Debian, Fedora, and Arch instructions all converge on the same core command. The desktop clients round out coverage for users who prefer a windowed experience over the terminal, and those installers are cross-referenced from this page so you are not chasing stale download links.
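As a concrete sketch of the npm route, the snippet below checks the Node runtime before installing. The package name matches Anthropic's public install instructions at the time of writing, and the version floor of 18 is illustrative, so confirm both against the install walkthrough for your platform:

```shell
# Confirm the Node runtime meets a minimum major version before installing.
# The floor of 18 is illustrative; the install walkthrough lists the real one.
node_major_ok() {
  major=$(printf '%s\n' "$1" | sed 's/^v//' | cut -d. -f1)
  if [ -n "$major" ] && [ "$major" -ge 18 ] 2>/dev/null; then
    echo "ok"
  else
    echo "too old"
  fi
}

node_major_ok "$(node --version 2>/dev/null)"   # "ok" on a current runtime

# Global install via npm (package name per the vendor's public docs):
# npm install -g @anthropic-ai/claude-code
```

The install line is left commented out here so the check can run on its own; Homebrew users would substitute their own package-manager invocation.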
Start with the install reference →
Choose the model that matches your budget
Opus leads on long-context reasoning and intricate cross-file edits; Sonnet is the balanced default that handles most day-to-day development cleanly; Haiku runs at a fraction of the price and is a natural fit for lightweight classification or smaller utility flows. The models overview keeps those three in one table, alongside the legacy checkpoints you may still see referenced in older pipelines. For teams moving from another vendor, there is also a short migration note that flags the common gotchas: different tokenisation, different tool-call formats, and a couple of parameter names that rarely survive a straight copy-paste.
Pricing and context windows shift with each release, so this reference deliberately cites the vendor limits rather than quoting stale numbers — the goal is to help you pick a model, not memorise prices that go out of date in a month. Where there is a meaningful architecture difference — say, the extended thinking path that only larger models carry, or the caching behaviour that changes the effective price of repeated context — we call it out inline rather than burying it in a footnote. Your cost model should not rest on a number we wrote last quarter; it should rest on the structural trade-offs that do not change as often.
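The structural trade-offs above can be sketched as a tiny routing helper. The tier names are the marketing names rather than API model identifiers, and the workload categories are illustrative assumptions, not a vendor-recommended taxonomy:

```shell
# Map a workload category to a model tier. Categories and defaults are
# illustrative; profile your own workload before adopting them.
pick_tier() {
  case "$1" in
    interactive)   echo "sonnet" ;;  # balanced default for day-to-day work
    bulk-batch)    echo "haiku"  ;;  # cheap, fast classification and utility flows
    deep-refactor) echo "opus"   ;;  # long-context, cross-file reasoning
    *)             echo "sonnet" ;;  # when in doubt, start balanced
  esac
}

pick_tier deep-refactor   # prints "opus"
```

The point of writing it down is that the routing logic, not the price sheet, is the part worth keeping in version control.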
Read the model comparison →
Extend Claude Code with skills and team tooling
The skills system ships capability packs as markdown with a tool manifest — once installed, a skill loads on demand and extends what the CLI can do inside a coding session. A skill might wrap a database migration, orchestrate a release checklist, or hand context off to a specialised sub-agent. Teams get their own skills registry plus shared configuration, which is where cowork workflows start to matter: the same lint rules, the same release checklist, and the same incident playbook can travel with the repository rather than the individual. That property is what lets smaller teams ship the kind of polished developer-experience flow that used to require a dedicated platform engineer.
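To make the "markdown with a tool manifest" shape concrete, here is a minimal hypothetical skill file. The exact frontmatter fields and directory layout are defined by the skills reference, so treat the names below as placeholders rather than the canonical schema:

```markdown
---
name: release-checklist
description: Walks the session through the team's release checklist.
---

# Release checklist

1. Confirm the changelog entry exists.
2. Run the migration dry-run before tagging.
3. Hand off to the deploy sub-agent once checks pass.
```

Because the skill is a plain file, it can live in the repository and travel with the codebase, which is the cowork property the paragraph above describes.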
For larger rollouts, the enterprise path adds centrally-managed SSO, audit trails, and prompt caching controls — all documented on the enterprise reference page with the flag names you will actually need. Audit logs integrate with the usual suspects: Splunk, Datadog, and any OIDC-compatible identity provider you already run. The enterprise notes also cover how to bound agent autonomy for regulated industries where a human must sign off on every shell command, so you do not have to reverse-engineer the permission model from config files.
Browse the skill reference →
What engineers say after the first week
Featured quotes from developers who used this reference to onboard a team to Claude AI.
“A clear comparison of Opus against Sonnet finally let us standardise our Claude API defaults. We stopped over-spending on the larger model for tasks Sonnet handled well.” — Marcus T. Vogel, DevEx Lead · Forlane Interactive · Austin
“The skills section was the surprise — I did not realise how much of our internal workflow could be packaged as a reusable Claude Code skill until I read the walkthrough.” — Sahana R. Iyengar, CTO · Cadentra Health · Boston
“Cross-referencing Anthropic’s own docs with the notes here cut our onboarding runbook in half. The spelling-variant pages even cover what our search team was typing.” — Joaquín P. Castellanos, Platform Architect · Velastreet Systems · Barcelona
“The claude code for teams notes saved us a long policy meeting. We went from ad-hoc installs to a signed-off rollout the same week.” — Priya N. Achari, Engineering Manager · Pilothouse Data · Dublin
Common questions about Claude AI and Claude Code
A short summary sits under each question so you can answer it without scrolling further.
What is Claude AI, and how is this reference site related to Anthropic?
Claude AI is the family of foundation models built by Anthropic, with Claude Code as the companion developer CLI. This reference hub is operated independently of Anthropic and bundles install walkthroughs, model comparisons, and configuration notes in one place. Nothing on this site should be read as an official Anthropic statement; for authoritative pricing and product terms, check the vendor's own documentation.
Is Claude Code the same product as Claude AI?
No. Claude AI refers to the underlying model family — Opus, Sonnet, Haiku, and the supporting checkpoints — while Claude Code is the command-line assistant that uses those models to read, edit, and run code inside your project. Installing Claude Code lets you reach the model from a terminal, but you can also use the Claude API directly or the web client with no CLI at all. The two products share branding but serve different layers of the developer stack.
How do I install Claude Code on Windows, macOS, or Linux?
On every supported platform, the install walkthrough starts with a recent Node.js runtime and then runs a single package install. Windows users can follow the dedicated Windows variant, macOS users rely on the standard terminal path, and Linux works from any modern distribution with a Node runtime available. The install hub bundles the three routes and lists the environment variables worth setting before the first run. In practice, the most common stumble is a Node version that is a release or two behind the required floor — when an install fails with a cryptic module-loading error, run your package manager's version check first. If you work on a locked-down corporate laptop, there is a short section on the install page covering proxy configuration and certificate trust-stores, which tends to be the second-most-common blocker.
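For the locked-down-laptop case, the environment variables involved are the standard Node and npm conventions rather than CLI-specific flags; the host name and certificate path below are placeholders for whatever your IT team issues:

```shell
# Route npm and the CLI's HTTPS traffic through the corporate proxy.
export HTTPS_PROXY="http://proxy.corp.example:8080"

# Let Node trust the corporate root certificate alongside its bundled CAs.
export NODE_EXTRA_CA_CERTS="/etc/ssl/certs/corp-root.pem"
```

Setting these in your shell profile before the first run avoids the opaque TLS errors that otherwise look like an auth failure.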
Which model should I use — Opus, Sonnet, or Haiku?
Opus is tuned for long-context reasoning and intricate multi-file edits; Sonnet balances quality and speed for day-to-day development; Haiku runs fastest and is the best fit for lightweight automation or classification. The models overview page keeps side-by-side notes on context window, output tokens, and typical throughput so you do not have to rebuild the table yourself. A reasonable default, for a team that has not yet profiled its own workload, is to start on Sonnet for interactive work, drop to Haiku for bulk batch jobs where latency matters more than quality, and promote to Opus only for the tasks where an extra layer of reasoning justifies the cost. Nothing stops you from switching mid-session, and the CLI exposes the flag with a one-line override.
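The one-line override mentioned above looks like this in practice; the flag name reflects the published CLI help at the time of writing, but verify it with `claude --help`, since option names can change between releases:

```shell
# Start a session pinned to a specific model for this run only.
claude --model opus
```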
Is there a free tier for Claude AI and Claude Code?
Anthropic offers a free tier through the web client, with daily usage caps. Paid plans remove those caps and add session features such as longer context. The CLI itself is a client, so it inherits whatever quota your account carries — the terminal tool does not add a separate bill on top. Our free-tier reference tracks the current limits alongside the paid comparisons. For most engineers evaluating the tooling, the free tier is enough to complete a realistic pair-programming session; teams adopting the workflow across a squad usually graduate to a paid plan within the first week simply because the session length and usage caps become the active constraint, not because any single feature forces the change.
Where can I download Claude for desktop, and which platforms are supported?
Anthropic ships a Claude desktop client on macOS and Windows; the Claude Code CLI runs wherever a terminal and Node.js are available, so Linux is covered on the developer path. The Claude Code download and Claude AI download reference pages summarise where each artefact lives, which version tags to expect, and how to verify the installer before running it.
How do Claude Code skills differ from the Claude API?
Skills are reusable capability packs the CLI loads on demand — typically stored as markdown with a tool manifest. A skill extends what the CLI can do during a coding session, like a migration helper or a release checklist runner. The API is the underlying HTTP interface. You can build a skill that calls the API directly, but most skills run at a higher level and handle the API transparently. In short: the API is the pipe, the skill is the specialised tool that flows through it. A team that wants repeatability ships skills; a team that wants maximum flexibility calls the API directly and writes its own orchestration layer. Most teams end up doing both, with the split settling once the daily workflow stabilises.
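As a sketch of the layer difference, this is the general shape of a Messages API request body. The field values are placeholders and the model identifier is an assumption, so check the API reference for current model names and the required request headers:

```json
{
  "model": "claude-sonnet-placeholder",
  "max_tokens": 1024,
  "messages": [
    { "role": "user", "content": "Summarise the failing test output." }
  ]
}
```

A skill never asks you to write this payload by hand; the CLI assembles it on your behalf, which is exactly the "higher level" the answer above describes.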
Start with the install guide that matches your setup
Pick the walkthrough for your operating system, land on a working CLI, then come back for the model and skills references when you are ready to ship.
Open the install hub
Related topics across this reference
Most visitors land here after a search for claude code or install claude code, then need a view on the rest of the line-up. The claude opus page walks through the largest model, the claude api reference covers the HTTP surface, and the anthropic claude notes provide brand and model-naming context for teams that still use the older short forms. Spelling variations — claudeai written as a single word, or the misspelling cloudcode that shows up in support tickets — resolve here too, as does the cloud ai fallback for users who typed the phrase that way.
If you are weighing the desktop route, the claude code windows and claude code desktop pages cover the windowed clients; the standalone claude ai download and claude code download references list which artefacts to expect. For skills work, the claude code skills page is the jumping-off point, while the claude code ai overview is a good onboarding read for anyone new to the product altogether. International readers can follow the Chinese-language entry point at claude code 使用教程 (a usage tutorial), and Spanish-speaking teams will recognise the claude ia framing. Budget-sensitive evaluators usually start on the claude ai free page to confirm what the current tier covers before committing to a plan.