Blog · April 7, 2026
Tags: ai, agents, code-audit, skills.sh, static-analysis, architecture

I built a codebase audit skill to clean up AI 'vibe coding' leftovers

How I built vibe-audit — an agent skill that runs after your implementation sessions to map the tech debt and make your codebase readable again.

If you’ve been letting AI coding agents loose in your repositories lately, you already know the hangover of "vibe coding."

Recently, I spent an afternoon vibing out the shade-matching logic for a headless makeup storefront. Modern agents are great—they use Plan Mode, they scaffold well, and they iterate fast. But after a few hours of rapid-fire implementation, I was left with an app that worked perfectly and a codebase I could barely read. State management was fragmented, async/await was mixed with callbacks, and the architecture had turned into a black box.

Agents build fast, but they don't naturally optimize for human visibility after the fact. So, I built vibe-audit to run after the dust settles, forcing the agent into "Staff Engineer Review" mode to map the mess it just made.

What it is

It's a universal static audit skill for Claude Code, Cursor, and the skills.sh ecosystem. Instead of generating more code, this skill acts as a post-implementation diagnostic tool.

You can grab it via the skills CLI:

npx skills add KangweiLIAO/skill-vibe-audit

(Or just drop the vibe-audit folder directly into your .claude/skills/ directory if you're using Claude Code).

How it brings visibility back

The skill operates under a methodical three-phase process designed to expose the technical debt hidden in your working tree:

  1. Recon & Gating: It maps the file tree and reads top-level configs without dumping all your source files into context. If your post-vibe codebase has grown to more than 80 source files, the skill actively halts and forces you to pick a specific domain or chunk to audit. This keeps the review highly focused and prevents token exhaustion.
  2. The Dual-Lens Approach: It analyzes the code through two distinct lenses:
    • Lens A (Clean Code): Evaluates naming clarity, function design (like SRP and argument count), and cohesive class design based on Robert C. Martin's principles.
    • Lens B (Vibe-Coding Pitfalls): This is the magic. It actively hunts for AI-specific tech debt. It catches "Context Window Amnesia" (when an agent generates duplicate logic with slightly different names across separate sessions) and "Deep Architectural Incoherence" (like mixing multiple design patterns in one module).
  3. Targeted Synthesis: Finally, it evaluates the structure as-found and outlines an incremental migration path to clean it up.
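To make the Phase 1 gating concrete, here's a minimal sketch of the idea in Python. The 80-file threshold comes from the description above; the extension list, the `gate_audit` name, and the return shape are all my own illustrative assumptions, not the skill's actual implementation.

```python
from pathlib import Path

# Illustrative set of "source file" extensions; the real skill's
# definition of a source file is not documented in this post.
SOURCE_EXTS = {".py", ".ts", ".tsx", ".js", ".jsx", ".go", ".rs"}
MAX_FILES = 80  # the gating threshold the skill uses


def gate_audit(root: str) -> dict:
    """Map the tree and either proceed or halt, asking for a narrower scope."""
    files = [
        p for p in Path(root).rglob("*")
        if p.suffix in SOURCE_EXTS and "node_modules" not in p.parts
    ]
    if len(files) > MAX_FILES:
        return {
            "status": "halt",
            "reason": f"{len(files)} source files exceeds {MAX_FILES}; "
                      "pick a specific domain or chunk to audit",
        }
    return {"status": "proceed", "files": [str(p) for p in files]}
```

The point of the gate is that the agent never dumps an oversized tree into context: it reads the file list cheaply first and only commits tokens once the scope is small enough to audit coherently.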

The Output (For Humans and Agents)

The goal of vibe-audit is to create an artifact that makes the codebase highly visible to both you and your AI agent before you start refactoring.

First, it generates a structured findings.json that categorizes issues by severity. Running the audit gives your agent an accurate, cached mental model of the repository's flaws to draw on in subsequent prompts.
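The post only says findings are categorized by severity and tagged by lens, so the exact schema is not documented here. A plausible shape, sketched as a Python dict purely for illustration, might look like this (every field name and value below is a guess):

```python
import json

# Hypothetical findings.json layout; the real skill's schema may differ.
findings = {
    "summary": {"files_audited": 12, "health_score": 74},
    "findings": [
        {
            "severity": "high",
            "lens": "vibe-coding",            # or "clean-code"
            "category": "context-window-amnesia",
            "file": "src/checkout/pricing.ts",
            "detail": "Duplicate discount logic also defined in src/cart/price.ts",
        },
    ],
}
print(json.dumps(findings, indent=2))
```

A machine-readable artifact like this is what lets a follow-up refactoring session start from the audit instead of re-reading the whole tree.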

Then, it renders a beautiful, self-contained audit-report.html right in your project root. This report gives you:

  • A 0–100 health score with a circular gauge.
  • A visual coverage grid showing exactly which paths were audited versus skipped.
  • Categorized findings with severity badges and impact scores, specifically tagged by their (Clean Code) or (Vibe-Coding) origins.
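The post doesn't say how the 0–100 health score is computed. One severity-weighted scheme, sketched under my own assumptions (the weights, the normalization by audited surface, and the function name are all invented for illustration), could be:

```python
# Hypothetical scoring: penalize findings by severity, normalize by the
# number of files audited so large repos aren't punished for size alone.
WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}


def health_score(findings: list[dict], files_audited: int) -> int:
    penalty = sum(WEIGHTS.get(f["severity"], 1) for f in findings)
    score = 100 - round(100 * penalty / max(1, files_audited * WEIGHTS["critical"]))
    return max(0, min(100, score))  # clamp to the 0-100 gauge range


print(health_score([{"severity": "high"}, {"severity": "low"}], files_audited=12))
# -> 95
```

Whatever the real formula is, clamping to a fixed 0–100 range is what makes the circular gauge comparable across audit runs.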

There's no token burn on useless code generation, just a solid, readable map of your tech debt. If you're trying to understand the hangover from your last vibe coding session, run npx skills add KangweiLIAO/skill-vibe-audit.

If it flags something absolutely unhinged in your architecture, I’m genuinely curious—let me know what edge cases it catches.