How I Structure Projects and Repos for AI Agent Collaboration

Best practices for setting up projects so autonomous AI agents can work on them effectively and safely. Covers PRDs, repo guidelines, testing, and CI/CD.

Aditya Bawankule
AI Agents · DevOps · Best Practices · CI/CD

Recently, I've been letting autonomous agents like the Cursor Background Agent, Google Jules, and OpenAI's Codex contribute more and more code to my projects. It's genuinely impressive how large a task these agents can tackle independently, as long as the project is structured properly. Here is how I set up my repos so these agents can work on them effectively and safely.

PRD, MVP Spec, and Build Plan

Whenever I'm planning and brainstorming a project, I start by writing out (assisted by AI, of course) a PRD and an MVP spec to define the project concretely. Once I have those, I hand them to a strong reasoning model (like OpenAI's o3) to turn into a build plan: the model acts as the software architect, defining the framework, the repository layout, and their high-level details.

Build Plan Prompt

"You are my senior staff engineer. Goal: Produce a complete, self-contained build plan for the project described below. Requirements for your answer:

  1. Context recap — 2-3 sentences that restate what we're building (so I can confirm you understood).
  2. Architecture diagram (ASCII) — high-level components and data flow.
  3. Tech stack choices — front-end, back-end, database, hosting, third-party APIs, with one-line justifications.
  4. Database schema — table / collection definitions with keys, types, relationships.
  5. API surface — list every route or function (method, path, purpose, auth).
  6. Incremental build steps — ordered checklist; each step ends with a test or acceptance criterion.
  7. Risks & mitigations — at least three.
  8. Definition of Done — what user can do, performance targets, success metric.

Formatting: Use markdown headers, code blocks for schemas, and tables where helpful. No external references — everything needed must live in this answer."

Then I'll have an agent autonomously execute this build plan and verify the output; ideally, it writes tests as it completes each step. Make sure these documents live in the repo so future agents can reference them.
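
For example, these can live in a docs/ folder at the repo root (the file names here are illustrative):

```
docs/
  PRD.md         # product requirements document
  MVP_SPEC.md    # minimum viable product spec
  BUILD_PLAN.md  # the build plan, updated as steps complete
```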


Repo Guidelines and Rules

Different agents look for these guidelines in different locations. You may see AGENTS.md, Cursor Rules (the .cursor/rules folder), and CLAUDE.md, alongside the more traditional README.md, CONTRIBUTING.md, TESTING.md, and ARCHITECTURE.md.

These all serve as both guidelines and rules for working on the repo.

You can even ask the agent to write this itself; here is an example prompt to create AGENTS.md:

"Explain the codebase to a newcomer. What is the general structure, what are the important things to know, and what are some pointers for things to learn next? Save what would be relevant to a coding agent working on this repo to AGENTS.md. (Look in the .cursor/rules folder and CLAUDE.md for more info)"

Example AGENTS.md

A good AGENTS.md covers:

  • Repository Structure — Where pages, components, contexts, and utilities live (app/, components/, context/, lib/).
  • Development Commands — All commands using your package manager (pnpm run dev, pnpm run build, pnpm run test, pnpm run typecheck, pnpm next lint).
  • Testing Guidelines — Always run tests after making edits. Create matching tests for new files and features.
  • Linting — Run lint on changed files before committing. Use --file flag for individual files.
  • Code Style Notes — TypeScript strict mode, 'use client' directive for client components, import grouping conventions, PascalCase for components, camelCase for variables.
  • Important Concepts — How auth works, where AI features live, how state is managed.
  • Best Practices — Never commit secrets, follow routing conventions, use contexts for shared state, keep components modular.
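
Put together, an abridged AGENTS.md might look like the following (paths and commands are illustrative, matching the pnpm setup above):

```markdown
# AGENTS.md

## Repository Structure
- `app/` - routes and pages
- `components/` - shared React components
- `context/` - React contexts for shared state
- `lib/` - utilities and API clients

## Development Commands
- `pnpm run dev`, `pnpm run build`
- `pnpm run test` - run after every edit
- `pnpm run typecheck` and `pnpm next lint`

## Rules
- Never commit secrets.
- Create a matching test for every new file or feature.
- Lint changed files before committing (use the --file flag).
- TypeScript strict mode; add 'use client' to client components.
```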

Linter, Tests, and CI/CD Pipelines

It's critical to have a good linter and testing setup to catch regressions and ensure best practices are followed. You can ask the agents to set this up for you; here are some example prompts (sketches of a resulting test file and CI workflow follow the list):

  • "Install and set up a recommended testing framework for our project, then write extensive test coverage of the codebase"
  • "Write tests for the following areas, extending our existing jest framework:"
  • "Identify important parts of our codebase that don't have test coverage, then write coverage for them"
  • "Setup a Github action that runs the linter and test on every pull request"
  • "Set the linter to a reasonable level then fix all linter errors and warnings"

Make sure the tests and CI/CD pipeline run before you merge anything in; this keeps regressions to a minimum.
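
For the GitHub Action prompt, the result might resemble this minimal workflow (a sketch assuming pnpm with a packageManager field in package.json; adjust commands to your stack):

```yaml
# .github/workflows/ci.yml - run lint and tests on every pull request
name: CI
on: pull_request

jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # pnpm/action-setup reads the pnpm version from package.json's packageManager field
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - run: pnpm run lint
      - run: pnpm run test
```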

You can take this further with agentic UI testing, which I'll cover in a follow-up post.


PR Review Bots

Recently, I set up Cursor BugBot and Qodo Merge to run on every PR. I've found this extremely helpful; they have caught several bugs and edge cases.

Recommended tools:

  • BugBot — Cursor's automated bug detection
  • Qodo Merge — AI-powered code review
  • CodeRabbit — Automated PR analysis

General Coding Agent Prompts

Some useful prompts for working with coding agents:

  • "Identify and remove as much duplicated code as you can in our repo"
  • "Find a bug in part of the code that seems critical and fix it"
  • "Make a plan to implement this feature by reviewing our existing codebase:"