"The cadence table on the about page was the thing that sold my team on using this reference. We knew exactly when to double-check an install step versus when we could trust it cold."— Deon A. LaskarisPrincipal SRE · Loomberg Commerce · Chicago
About This Claude AI Reference
An independent editorial hub for Claude AI documentation, install guides, and model notes — maintained by engineers, not by Anthropic.
Worth reading first
This page explains the scope, maintenance approach, and editorial independence of this hub so you can judge how much weight to give any page here.
Why this reference exists
When the Claude Code CLI launched, developer questions scattered across forum threads, release notes, and blog posts written at different points in the product's life. A single search could surface three conflicting install commands — each correct at time of writing, none labelled with the version they applied to. This hub started as a personal attempt to consolidate the working notes that a small team of engineers kept updating every time a CLI release broke a step in their onboarding checklist. It grew because others had the same problem.
The goal is narrow on purpose. This is not a general AI commentary site. It does not track every LLM vendor or every paper on transformer architectures. What it tracks is the Claude AI developer toolchain: how to install it, how to configure it, what the models can and cannot do relative to each other, and how to extend the CLI with skills. That narrowness is a feature. When a page here says the minimum Node version is X, it means someone checked that recently, not merely that it was true when the page was first published.
What the hub covers
The main sections break down into four areas. Installation covers Windows, macOS, and Linux paths for the CLI, plus the desktop clients. Model notes cover Opus, Sonnet, and Haiku with side-by-side guidance on context windows, pricing, and typical latency. Skills and teams covers the CLI extension system and the configuration flags teams need to share a working setup across a repository. Finally, the API and free-tier references help developers who are evaluating pricing or building directly on the HTTP interface rather than the CLI.
What the site does not cover: proprietary integrations that are not publicly documented, speculative roadmap items, or third-party plugins not distributed through the main registry. When an area moves fast enough that a static page would be stale within days, the editors link out to the vendor's own changelog rather than paraphrase. The NIST AI Risk Management Framework is one external anchor this hub cites for risk framing; for software engineering research context, MIT CSAIL is occasionally referenced.
How content stays current
The review cadence varies by section. Install guides sit closest to code, so they are re-verified against the current CLI version on a monthly cycle. A reviewer runs the documented commands in a clean environment — a fresh VM or container — checks that the output matches what the page describes, and updates any flag or step that has diverged. Model comparison tables are updated whenever a pricing or context window change is announced publicly; otherwise they follow the same monthly cycle. Editorial pages like this one are reviewed quarterly, or sooner when a significant structural change affects the hub.
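The re-verification pass described above can be sketched as a small script. This is a hypothetical illustration of the kind of check a reviewer might script, not the hub's actual tooling: the function names and the example version numbers are assumptions, and the only real dependency is `sort -V` from coreutils.

```shell
#!/usr/bin/env sh
# Sketch of one check from a clean-environment review pass (hypothetical).
# Compares the runtime version found in the fresh container against the
# minimum the install page documents, and flags the page as stale if the
# documented minimum no longer holds.

# version_gte A B: succeeds if dotted version A >= B (relies on sort -V).
version_gte() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# check_node_minimum INSTALLED REQUIRED: prints "ok" or "stale".
check_node_minimum() {
  installed="$1"   # e.g. output of `node --version` with the leading "v" stripped
  required="$2"    # the minimum version the install page currently documents
  if version_gte "$installed" "$required"; then
    echo "ok: node $installed meets documented minimum $required"
  else
    echo "stale: page documents minimum $required, container has $installed" >&2
    return 1
  fi
}

# Example invocation with placeholder versions:
check_node_minimum "20.11.1" "18.0.0"
```

A real pass would run this inside a throwaway container so that nothing cached on the reviewer's machine can mask a divergence between the page and a clean install.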
The data table below lists the current review schedule by section. "Last substantive update" refers to a change that altered instructions or data, not a CSS fix or a typo correction.
| Section | Review cadence | Last substantive update |
|---|---|---|
| Install guides (all platforms) | Monthly | April 2026 |
| Model comparison tables | Monthly / on change | March 2026 |
| Skills and teams reference | Bi-monthly | February 2026 |
| API and pricing notes | Monthly / on change | April 2026 |
| Free-tier summary | Monthly / on change | March 2026 |
| Editorial and about pages | Quarterly | April 2026 |
Who maintains this hub
The editorial team is a small group of developers with backgrounds in infrastructure engineering, developer experience, and technical writing. They are listed individually on the editors and contributors page with their coverage areas and review windows. No member of the team is employed by or affiliated with Anthropic. The team does not receive advance access to releases, does not have a commercial relationship with the vendor, and does not earn revenue from any links to vendor products. The site is funded by standard display advertising unrelated to the content.
If you find an error, the fastest route to a correction is the contact page. Include the specific URL, what is wrong, and what you believe the correct answer is, along with a source if you have one. The editors triage corrections in the order they arrive. A correction that includes a source is prioritized over one that does not, simply because it takes less time to verify.
Independence and editorial standards
Editorial decisions on this hub are made solely by the editorial team. No advertiser, sponsor, or third party has input into which topics are covered, how products are described, or what content is linked. When the team links to a vendor page, it is because that page is the authoritative source for the claim being made, not because of any commercial arrangement. Links to external research, such as academic AI safety work, are included where they help anchor a claim that could otherwise look like unsupported opinion.
The vocabulary used across this site is kept consistent deliberately. When a page says "context window" it means the same thing on every page; when it says "session" it distinguishes that from a "conversation" the same way each time. That consistency is harder to maintain than it looks and is one of the main reasons this hub exists as a curated reference rather than a search-engine summary.
Questions about this reference hub
Who runs this Claude AI reference hub?
This hub is operated by an independent editorial team with no affiliation with Anthropic. The editors review public vendor documentation, developer community feedback, and cited academic sources. For AI research context, the Stanford HAI initiative publishes relevant independent work on AI system design and evaluation that the team uses as a reference point when anchoring safety-adjacent claims.
How often is content on this reference updated?
Install guides are reviewed monthly because CLI flags and runtime requirements shift with each release. Model comparison tables are reviewed on a rolling basis whenever pricing or context window changes are announced publicly. Editorial pages run quarterly. The cadence table earlier on this page lists the current schedule by section, so you can see exactly when each area was last touched.
Is this site affiliated with Anthropic or endorsed by them?
No. This is an independent reference hub. It is not affiliated with, sponsored by, or endorsed by Anthropic. All content is the editorial team's own synthesis of public documentation, community reports, and cited research. For authoritative guidance on AI risk and compliance frameworks, consult NIST or the vendor's own channels directly.
How do I report an error or outdated information?
Use the contact page to reach the editorial team. Include the URL of the page with the error, a brief description of what is wrong, and a link to the source you believe is correct. The editors aim to triage corrections within five business days. Corrections that arrive with a supporting source are prioritized because they can be verified in one step rather than two.
Related topics
The editors and contributors page lists who covers each section and their typical review window. For questions about responsible use of the toolchain, the trust and safety reference covers privacy considerations, prompt-injection basics, and mitigation notes. If you want a curated starting point rather than the full reference, the resource hub organizes entry points by user type. New to the toolchain entirely? The getting started guide takes you through the first ten minutes without assuming prior experience with the CLI.
For the technical content itself: the claude code overview is the right starting point for the CLI, while install claude code is the fastest path to a working setup. The models overview covers Opus, Sonnet, and Haiku side by side. Teams extending the CLI should read the claude code skills reference. For API integrators, the claude api reference covers the HTTP surface and authentication basics. Budget-conscious evaluators usually start on the free tier summary.
Ready to dig into the reference?
Start with the install guide that matches your operating system, or browse the resource hub for a curated path by user type.
Open the resource hub