Most developers treat AI assistants as autocomplete on steroids – you write some code, the model fills in a function, and life goes on. That's leaving most of the value on the table. The workflow described here is different: the LLM is involved before a single line of production code is written, shaping the architecture, the documentation, and the development plan. By the time you open a code editor with intent to implement, the project is already half-built – structurally, intellectually, and organizationally.
This is a personal, battle-tested approach to starting new software projects using a combination of Claude.ai and Claude Code. It is deliberate, document-heavy, and surprisingly fast once internalized.
Phase 1: Extended Exploration in Claude.ai
Everything begins not with code, but with conversation.
Before touching a repository, before writing a single spec, you spend time – sometimes days – talking through the project in Claude.ai. This isn't a quick "what should I build?" prompt. It's a sustained, exploratory dialogue: What problem am I solving? What are the viable approaches? What are the trade-offs between option A and option B? What have other people tried and where did they get stuck? What do I want to avoid?
The goal is to externalize thinking. When you're building something non-trivial, your mental model is full of fuzzy assumptions, half-formed preferences, and unresolved contradictions. Conversation forces you to make those explicit. The model asks clarifying questions, pushes back on weak reasoning, and surfaces options you hadn't considered. Over the course of this dialogue, you naturally arrive at decisions: what the project is, what it isn't, what tech to use, what patterns to follow, what to defer.
By the end of this phase, you've accumulated a long conversation – typically 4,000 to 8,000 lines of raw dialogue.
Phase 2: Exporting and Committing the Raw Dialogue
This might seem counterintuitive, but the messy, unedited conversation is treated as a first-class artifact. You export the entire dialogue as a Markdown file and drop it into an otherwise empty repository.
This serves as the raw material for everything that follows. It captures not just conclusions but reasoning – including the dead ends, the rejected options, and the "I tried X but didn't like it because..." moments that almost never make it into polished documentation. Having this context available to the model in the next step is what makes the subsequent spec so rich.
One practical note: the raw dialogue is an artifact of the design phase and doesn't need to live in main forever. After Phase 3 is complete, it can be archived to a separate branch or moved to an archive/ folder – accessible but out of the way, so it doesn't confuse anyone reading the repository later.
Phase 3: Generating the Specification in Claude Code
Now you switch environments. In a fresh Claude Code session, you point the model at the exported dialogue and give it a single, high-leverage instruction:
Read this entire conversation. Summarize it in exhaustive detail. Create a specification: the decisions made, the reasoning behind them, the rejected alternatives, what I want to build, what I explicitly don't want to build, potential pitfalls, open questions. Be thorough β this will be the source of truth for everything else. And as you work through it, ask me questions: flag any contradictions, anything unresolved, anything I've said that conflicts with something I said elsewhere.
This step produces a dense, structured specification – often 2,000 to 4,000 lines. The interactive element is crucial: because you were doing exploratory thinking in Phase 1, you almost certainly contradicted yourself at some point, or explored a direction only to abandon it without being explicit about why. The model surfaces these, and you resolve them in real time.
A word of caution: summarization is a lossy process. The model decides what is important and what isn't, and subtle nuances – "I considered X but rejected it because of Y" – can be simplified or dropped entirely. After the spec is generated, spend twenty minutes skimming the original dialogue with one question in mind: is there anything I remember as important that didn't make it into the spec? This is a small investment that closes a real gap.
Phase 4: Reviewing the Spec – Adversarially
A separate, fresh Claude Code session is opened with just the specification – not the raw dialogue. The instruction this time is adversarial:
Read this specification. Find the contradictions. Find the gaps. Find the things that were never resolved or specified. Find assumptions that are implicit but should be explicit. Find anything that, if left unaddressed, would cause problems later.
Starting fresh matters. A model that participated in creating the spec has a form of anchoring bias toward it. A clean session approaches it as a reader and critic rather than as an author.
To get more out of this pass, push the model into an uncomfortable role:
You are a senior engineer who thinks this project is overscoped and the tech choices are questionable. Attack the spec.
This surfaces a different class of problems – not logical inconsistencies, but questions of feasibility, scope realism, and whether the stated approach actually serves the stated goals. After addressing the output of this round, the specification is considered stable.
This is also a natural moment to ask yourself: do I still want to build this, in this way? Asking the question before writing code is infinitely cheaper than asking it at M3.
Phase 5: Scaffolding the Repository β No Code Yet
Here's where the approach diverges sharply from how most people work: you don't write any production code yet. Instead, the same second-pass session is used to build out the structural and organizational layer of the repository.
CLAUDE.md – The Model's Operating Manual
The first file created is CLAUDE.md. This lives at the root of the repository and tells Claude Code what it needs to know to operate effectively in this codebase. It is loaded into context at the start of every Claude Code session and eliminates a large class of repeated orientation questions.
A well-structured CLAUDE.md covers:
- What the project is – one or two sentences
- Quick reference – the commands needed to build, test, lint, and format
- Workspace structure – what each crate, package, or module is responsible for
- Development rules – conventions, constraints, and things that are explicitly off-limits
- Testing expectations – what kinds of tests are required, what must be covered, what tools to use
- Performance guidelines – if relevant to the project
- A documentation table – a mapping from topic to file, so the model knows exactly where to look for any given concern
Here is an example of what these instructions can look like in practice (from a Rust chat-application project):

```markdown
## Rules for Development

- Always discuss architectural changes before implementing them.
- **Before touching any crate: read its `docs/` file first.** Use the Documentation table below. Only explore source if docs are insufficient.
- Keep crates focused – do not add unrelated functionality.
- Prefer small, incremental changes over large rewrites.
- Run `cargo check` after changes to verify compilation.
- When adding new functionality, update the corresponding `docs/` file.
- If the docs were missing important details or out of sync, update them before finishing the task.
- No shell scripts – all automation via `xtask/` (Rust).
- `chat_protocol` uses `thiserror` for typed errors. `chat_client` and `chat_server` use `anyhow` for application errors.
- All IDs in the wire protocol are `i64`. External string user IDs are mapped to internal `i64` on the server.

## Testing

- Write unit tests (`#[cfg(test)]`) for non-trivial pure logic.
- Cover integration scenarios.
- Think in corner cases: empty collections, zero/max values, missing values, race conditions.
- Use `proptest` for codec roundtrip testing and logic that must hold over ranges.
- Do not mock the database – use `:memory:` SQLite so the real schema and queries are exercised.
- Property-based tests for all codec encode/decode paths.

## Performance

- Never block the main thread. Use `spawn_blocking` for CPU-heavy work.
- Use batch queries and transactions for database access.
- Wrap shared resources in `Arc` – cloning is just an atomic increment.
- Profile before optimizing. Use `cargo flamegraph`.
- Write `criterion` benchmarks for bulk data processing.
- Avoid heap allocation in hot paths.
```

docs/ – The Specification, Decomposed
The specification from Phase 3 is broken apart and redistributed into a structured docs/ directory. Rather than one large file, it becomes a set of focused references, each covering a specific concern: architecture, protocol, database design, testing approach, performance guidelines, cross-platform considerations, build automation, contribution workflow. The documentation table in CLAUDE.md maps every topic to its file, so neither you nor the model ever has to guess where something lives.
These aren't abstract guidelines. They reflect actual decisions made in Phase 1 and codified in Phase 3 – so when the model is implementing something, it has targeted, project-specific documentation to reference rather than having to infer conventions from the surrounding code.
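To make the mapping concrete, here is a sketch of what the documentation table in CLAUDE.md can look like. The architecture and database entries echo files mentioned later in this article; the remaining topic and file names are hypothetical.

```markdown
## Documentation

| Topic        | File                 |
|--------------|----------------------|
| Architecture | docs/architecture.md |
| Protocol     | docs/protocol.md     |
| Database     | docs/database.md     |
| Testing      | docs/testing.md      |
| Performance  | docs/performance.md  |
```

With this table in place, "read the docs for the crate you're touching" becomes a deterministic lookup rather than a search.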
goals.md – The Vision Document
This file captures ambition without constraint. It's a complete description of what the project is supposed to become: the full feature set, the long-term goals, the motivations behind building it at all. It's the "if everything goes right" picture β written down and not apologized for.
goals.md is not a roadmap. It doesn't specify order or timeline. It's a statement of intent: this is what I want this thing to be. It serves as a north star when making scope decisions later and as context for the model when evaluating whether a proposed approach aligns with the overall vision.
milestones.md – The Roadmap to Pre-Alpha
This is the operational backbone of the project. Development is broken into numbered milestones – M1.1, M1.2, M1.3, M2.1, M2.2, and so on – each representing a self-contained, completable unit of work.
Each milestone entry follows a consistent structure:
- A description of what is being built
- Specific acceptance criteria: which tests must pass, what benchmarks must be met, what integration scenarios must work, what must successfully build and run
The acceptance criteria are non-negotiable. Every milestone must be done, not just written. "The server handles reconnection" is not an acceptance criterion. "The integration test test_reconnect_under_load passes on both Linux and Windows" is. Each milestone is a stable checkpoint.
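Putting the two parts together, a milestone entry can look like the following sketch. The milestone content itself is a hypothetical illustration; only the test name comes from the example above.

```markdown
## M2.3 – Reconnection Handling

Implement automatic client reconnection with exponential backoff.

Acceptance criteria:
- [ ] Integration test `test_reconnect_under_load` passes on both Linux and Windows
- [ ] Client reconnects within the configured backoff window in the loopback benchmark
- [ ] `cargo test --workspace` passes with no warnings
```

The checkboxes are what get ticked as the milestone progresses, so the file doubles as a status board.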
milestones.md is the document you live in during implementation. When starting a new milestone, you copy the milestone block into a new Claude Code session with a sentence or two of additional context, and begin. No lengthy re-orientation – the milestone is self-contained enough to drive a full session. As milestones are completed, their checkboxes get ticked.
Phase 6: Tooling and Automation
Cross-Platform Scripting
All automation is written in the project's primary language β never in shell scripts, Bash, PowerShell, or Batch. For Rust projects, this means xtask. For TypeScript projects, TypeScript scripts. If a script works on Linux, it works on Windows and macOS. Scripts that rely on platform-specific shell features break silently across environments and waste debugging time.
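As a minimal sketch of the xtask pattern: a tiny binary crate in the workspace dispatches named tasks, and path handling goes through the standard library so separators are correct on every OS. The task names and the dist_dir helper here are illustrative, not prescribed by the workflow.

```rust
// Sketch of an xtask command module (in a real project, xtask/src/main.rs
// would call run_task with the first CLI argument).
use std::path::PathBuf;
use std::process::Command;

/// Build a workspace-relative path without hard-coding `/` or `\`:
/// PathBuf::join uses the correct separator on every platform,
/// which is exactly what shell scripts tend to get wrong.
fn dist_dir(workspace_root: &str) -> PathBuf {
    PathBuf::from(workspace_root).join("target").join("dist")
}

/// Dispatch a named task; returns a status message or an error.
fn run_task(task: &str) -> Result<String, String> {
    match task {
        "ci" => {
            // Run the same checks locally that CI would run.
            let status = Command::new("cargo")
                .args(["check", "--workspace"])
                .status()
                .map_err(|e| e.to_string())?;
            if status.success() {
                Ok("ci passed".to_string())
            } else {
                Err("ci failed".to_string())
            }
        }
        "dist" => Ok(format!("would package into {}", dist_dir(".").display())),
        other => Err(format!("unknown task: {other} (expected ci|dist)")),
    }
}
```

A common convention is to wire this up as `cargo xtask <task>` via an alias such as `xtask = "run --package xtask --"` in `.cargo/config.toml`, so contributors need nothing beyond a Rust toolchain.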
VS Code Integration
VS Code tasks are configured for every common operation, with keybindings assigned to the frequently used ones. Launch configurations (launch.json) are set up for debugging. Workspace settings and linter/formatter rules are configured β often copied from a previous project and adjusted, since these rarely need to be invented from scratch.
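For illustration, a minimal tasks.json for a Rust project might look like this; the labels and the `$rustc` problem matcher (provided by the rust-analyzer extension) are assumptions, not part of the workflow itself.

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "check",
      "type": "shell",
      "command": "cargo check --workspace",
      "group": { "kind": "build", "isDefault": true },
      "problemMatcher": ["$rustc"]
    },
    {
      "label": "test",
      "type": "shell",
      "command": "cargo test --workspace",
      "group": "test",
      "problemMatcher": ["$rustc"]
    }
  ]
}
```

Keybindings for the frequently used tasks go in keybindings.json, bound to `workbench.action.tasks.runTask` with the task's label as the argument.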
The config/ Directory
A config/ directory holds environment configuration, keys, and runtime parameters. Anything sensitive goes into .gitignore. The repository contains only templates: .env.template, config.example.toml, and similar. The committed files show the shape of the configuration; actual secrets never enter version control.
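A committed template might look like the following sketch; all section and key names here are hypothetical, since they depend entirely on the project.

```toml
# config/config.example.toml — committed template.
# Copy to config/config.toml (gitignored) and fill in real values.

[server]
bind_addr = "127.0.0.1:8080"

[database]
# Path to the SQLite file; tests use ":memory:" instead.
path = "data/app.db"

[auth]
# Fill in locally; the real key never enters version control.
api_key = "YOUR_KEY_HERE"
```

The template documents the shape of the configuration, so a new machine or a new contributor can be set up by copying one file and editing a handful of values.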
Phase 7: CI and Workflow
GitHub Actions workflows or other CI configurations may be added – source analysis, security audits, automated test runs. This step is explicitly optional and not prioritized early. When it does happen, it's usually by adapting configurations from previous projects.
Phase 8: Workspace Initialization
The final scaffolding step is organizing the repository's dependency structure – a Cargo workspace for Rust, a root package.json for TypeScript. The goal is to establish the relationships between components of the project before any of those components contain real code.
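For the Rust project used in the earlier examples, the root Cargo.toml might start out like this – the member names mirror the crates from the CLAUDE.md example, and the comments are illustrative.

```toml
# Root Cargo.toml: the workspace exists before any member contains real code.
[workspace]
resolver = "2"
members = [
    "chat_protocol",  # shared wire types and codecs
    "chat_client",
    "chat_server",
    "xtask",          # cross-platform automation (see Phase 6)
]
```

Each member starts as an empty crate, so inter-crate dependencies can be declared and checked long before there is anything to compile inside them.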
Phase 9: Initial Commit
With all of the above in place, the work is committed. The repository now contains:
- The raw dialogue export (or an archive of it)
- The full specification (SPEC.md)
- CLAUDE.md
- The complete docs/ directory
- goals.md
- milestones.md
- VS Code configuration
- Cross-platform build and automation scripts
- Configuration templates
- Workspace structure
There is no production code. There is also no ambiguity about what gets built next.
Phase 10: Implementation by Milestone
Development proceeds milestone by milestone. For each one:
- A feature branch is created off of main.
- The milestone block is copied from milestones.md and pasted into a new Claude Code session with a sentence or two of additional context.
- Implementation happens in that session, guided by the milestone's acceptance criteria and the docs/ references.
- When all acceptance criteria are met – tests pass, benchmarks are within bounds, everything builds – the milestone is considered done.
- A pull request is opened, reviewed, and merged.
- The next milestone begins.
The Documentation Update Step – Non-Negotiable
There is one required step before every PR is merged: update any documentation that no longer reflects reality.
This is where discipline matters most. Implementation almost always teaches you something the spec didn't anticipate: a library works differently than expected, a performance characteristic changes the design, an architectural decision has an edge case that requires a different approach. If that knowledge doesn't make it back into docs/, it exists only in the code – and is effectively invisible to every future Claude Code session.
The update doesn't need to be large. It might be a single paragraph added to docs/architecture.md, or a revised section in docs/database.md. What matters is that documentation and code stay synchronized. This rule is already baked into CLAUDE.md as a standing instruction, so the model is already nudged in this direction during every session. The PR review is the final checkpoint.
After merging, it's also worth a brief look at the upcoming milestones: does what you learned during this milestone change anything about M2 or M3? If so, update them now, while the context is fresh.
Why This Works
Thinking before building. The extended exploration phase is not overhead – it's the work. Decisions made through sustained conversation are more considered than decisions made in the middle of writing code.
The model as collaborator, not code monkey. By giving the model access to rich context – the spec, the docs, the milestone – it operates with genuine understanding of the project. The quality of output scales directly with the quality of the context you provide.
Front-loaded documentation. Documentation written after the fact is almost always incomplete, because the author has forgotten the reasoning. Documentation written as part of the design process captures the reasoning while it's fresh. The docs/ directory here isn't retrospective – it is the design.
Documentation that stays alive. Front-loaded documentation only remains valuable if it's maintained. The workflow enforces this through two mechanisms: CLAUDE.md contains an explicit instruction to update docs as part of every task, and the end-of-milestone review makes documentation updates a required step before merging. Without these mechanisms, even the best initial documentation degrades into a liability – the model reads it, trusts it, and produces incorrect output.
Self-contained milestones. Explicit acceptance criteria force clarity about what "done" means. A well-scoped milestone produces better output than an open-ended instruction.
Cross-platform from the start. Writing automation in the project's primary language is a small decision that pays out repeatedly, especially when working across machines and operating systems.
Summary
| Phase | What Happens |
|---|---|
| 1 | Extended dialogue in Claude.ai to explore the project space |
| 2 | Export dialogue as Markdown; commit to empty repository |
| 3 | Claude Code reads dialogue, generates detailed spec with interactive Q&A |
| 4 | Fresh Claude Code session reviews spec adversarially for gaps and contradictions |
| 5 | Repository scaffolded: CLAUDE.md, docs/, goals.md, milestones.md |
| 6 | Tooling: cross-platform scripts, VS Code config, linters, config templates |
| 7 | CI/CD workflows |
| 8 | Workspace structure initialized |
| 9 | Initial commit |
| 10 | Implementation: branch → milestone → tests pass → update docs → PR → merge → repeat |
The code comes last. The documentation never stops.