Blog · April 6, 2026 · Tags: ai, agents, code-audit, skills.sh, static-analysis, architecture

I built a codebase audit skill to clean up AI 'vibe coding' leftovers

How I built skill-codebase-audit — an agent skill for the skills.sh ecosystem that forces AI to read, map, and review your architecture before it makes a mess.

If you’ve been letting AI coding agents loose in your repositories lately, you already know the problem: "vibe coding."

Recently, while trying to untangle some manual Supabase JWT verification logic for CVant, I let an agent just vibe its way through the codebase. Big mistake. It jumped straight into implementation mode without a mental model of the project, hallucinated changes across multiple layers, and left a massive architectural mess. I ran into the exact same issue while building out the tactical grid for Sketchline—agents just don't do a sanity check first.

So, I built skill-codebase-audit to force them into "Staff Engineer Review" mode.

What it is

It's a standard Agent Skill built for Vercel's open skills.sh ecosystem (which means it drops right into Claude Code, Cursor, Gemini CLI, etc.). Instead of generating code, this skill enforces a strict, read-only discovery and audit phase.

You can grab it via the skills CLI:

npx skills add KangweiLIAO/skill-codebase-audit --skill codebase-audit

Once installed, whenever you tell your agent to "review this repo" or "find problems in my project," it triggers the codebase-audit tool instead of blindly grepping your source files and guessing at your architecture.

How it kills the "vibe"

The skill operates under a strict golden rule: understand first, question second, change later. The SKILL.md constraints force the agent through a methodical pipeline:

  1. Discovery (Read Light First): It stops the agent from dumping 50 files into context and burning your tokens. It uses find and ls -R to map the tree, reads the top-level configs (package.json, go.mod, etc.), and infers the architecture structure first.
  2. Dynamic Scoping: Based on the file count, it categorizes the project size. If it's a massive codebase (>80 source files), it restricts itself to core business logic and entry points. It then prompts you to pick from 8 audit categories (Tech Stack Consistency, Security Hygiene, Dependency Health, etc.).
  3. Chunked Analysis: It analyzes the codebase one domain at a time. It actively hunts for mixed design patterns, leaked business logic (like putting network interceptors in UI components), missing test coverage, and insecure defaults.
  4. Clean Code Pass: It flags the top 3 absolute worst files in the repo and proposes DRY/SRP refactors—but explicitly restricts the agent from auto-fixing them so it doesn't wreck your working tree.
  5. Architecture Recommendation: Finally, it evaluates if your current architecture actually makes sense for the scale of the project and outlines an incremental migration path to fix it.
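The dynamic-scoping step can be sketched roughly like this. This is a minimal Python sketch, not the skill's actual code: the >80-file threshold comes from the description above, but the extension list, the "medium" cutoff, and the scope strings are my assumptions.

```python
from pathlib import Path

SOURCE_EXTS = {".py", ".ts", ".tsx", ".go", ".js"}  # assumed list of source extensions

def count_source_files(root: str) -> int:
    """Discovery: walk the tree without reading any file contents."""
    return sum(1 for p in Path(root).rglob("*") if p.suffix in SOURCE_EXTS)

def scope_for(count: int) -> dict:
    """Dynamic scoping: bucket the project by size and narrow the audit scope."""
    if count > 80:  # threshold stated in the skill's description
        return {"size": "large", "scope": "core business logic and entry points only"}
    if count > 20:  # assumed cutoff for a "medium" project
        return {"size": "medium", "scope": "all source files, grouped by domain"}
    return {"size": "small", "scope": "full read of every source file"}
```

The point of gating on a raw file count before any reads is that it's nearly free: one tree walk decides how much of your token budget the rest of the audit is allowed to spend.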

The Output

It doesn't just dump a wall of Markdown in your terminal. The skill enforces structured output, generating a findings.json that buckets issues by severity (confirmed, likely, uncertain), then uses a bundled Python script to spit out an audit-report.html directly in your project root.
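To give a feel for consuming that structured output, here's a hedged sketch of reading a findings.json and grouping it by severity. The exact schema is my assumption; only the three severity buckets (confirmed, likely, uncertain) come from the skill's description, and the sample file paths and issues are invented.

```python
import json
from collections import defaultdict

# Hypothetical findings.json payload; the real schema may differ.
raw = """
[
  {"severity": "confirmed", "file": "src/api/client.ts",
   "issue": "network interceptor defined inside a UI component"},
  {"severity": "likely", "file": "src/auth/jwt.ts",
   "issue": "manual JWT verification instead of a library helper"},
  {"severity": "uncertain", "file": "src/utils/index.ts",
   "issue": "possible dead code behind a feature flag"}
]
"""

def group_by_severity(findings: list[dict]) -> dict[str, list[str]]:
    """Bucket flagged files under their severity label."""
    buckets: dict[str, list[str]] = defaultdict(list)
    for finding in findings:
        buckets[finding["severity"]].append(finding["file"])
    return dict(buckets)

grouped = group_by_severity(json.loads(raw))
```

Machine-readable findings like this are what let you pipe the audit into CI gates or follow-up prompts instead of eyeballing a report.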

By running this first, you get a highly accurate, cached mental model of the repository for the AI to use in subsequent prompts. No immediate token burn on useless code generation, just a solid map of the tech debt.

If you’re wrangling legacy code or trying to clean up the hangover from your last vibe coding session, run npx skills add KangweiLIAO/skill-codebase-audit. If it flags something absolutely unhinged in your architecture, I’m genuinely curious—let me know what edge cases it catches.