write-a-prd Skill Review: PRD Automation via Reverse Codebase Exploration
bestskills rank team
2026-04-15

A deep audit of the write-a-prd skill from the openclaw/hermes agent ecosystem. We examine how it translates business requirements into structured GitHub Issues by exploring the codebase and designing modules, and evaluate its real-world value.


Skill Quality Report: write-a-prd

Evaluation Time: 2026-04-15
Evaluation Mode: Item-by-item review

Overall Score

Dimension             Score    Status
Standards (20%)       10/20    WARN
Effectiveness (40%)   30/40    PASS
Safety (30%)          27/30    PASS
Conciseness (10%)     8/10     WARN
Total                 75/100   Good

Level guide:

  • 90-100: Excellent - ready to use
  • 70-89: Good - small but meaningful room to improve
  • 50-69: Fair - needs important revisions
  • <50: Not qualified - requires substantial rewrite

Skill Strengths

  1. [Effectiveness] It forces requirements clarification before writing the PRD - Evidence: "Ask the user for a long, detailed description of the problem they want to solve and any potential ideas for solutions."
  2. [Effectiveness] It validates assumptions against the actual repository - Evidence: "Explore the repo to verify their assertions and understand the current state of the codebase."
  3. [Effectiveness] It encourages dependency-first design reasoning - Evidence: "Walk down each branch of the design tree, resolving dependencies between decisions one-by-one."
  4. [Safety] It reduces stale implementation details in planning docs - Evidence: "Do NOT include specific file paths or code snippets. They may end up being outdated very quickly."

Skill Improvement Areas

  1. [Standards] Metadata is not in complete YAML frontmatter form - Evidence: only name and description are provided; Impact: versioning, ownership, and machine indexing are weaker across repositories.
  2. [Effectiveness] Workflow determinism is weakened by optional step skipping - Evidence: "You may skip steps if you don't consider them necessary."; Impact: output quality may vary significantly between runs.
  3. [Effectiveness] Missing explicit "Don't use when" boundaries and verification criteria - Evidence: no section defines inapplicable scenarios or completion checks; Impact: agents can over-apply the skill and stop without confirming submission success.
  4. [Conciseness] The process is concise but the main body lacks structured checklists - Evidence: narrative instructions are present, but there is no numbered gating checklist; Impact: execution consistency drops in long sessions.

Insights

  1. Asking for module boundaries before implementation decisions keeps PRD discussions technical and testable. - Application: feature planning skills that often drift into abstract product talk.
  2. Explicitly banning file-path and code-level details is a practical way to keep planning artifacts stable over time. - Application: architecture and roadmap writing skills.
  3. Defining “deep module” in plain language improves team alignment on what deserves isolated tests. - Application: skills that bridge product planning with engineering design.

Issue List

[Medium] Standards - Incomplete governance metadata

  • Location: metadata block at the top of SKILL.md
  • Description: the skill lacks standard governance fields such as version, author, license, and structured tags.
  • Suggestion: convert metadata to complete YAML frontmatter and add machine-readable tags plus related skills.
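A complete frontmatter block might look like the sketch below. Every field value other than the skill name is an illustrative assumption, not metadata taken from the actual SKILL.md:

```yaml
---
name: write-a-prd
description: Translate business requirements into a structured GitHub Issue
  by interviewing the user and exploring the target codebase.
version: 0.1.0                # assumed; use whatever scheme the team prefers
author: openclaw/hermes team  # assumed owner
license: MIT                  # assumed
tags: [prd, planning, github-issues, codebase-exploration]
related-skills: []            # placeholder for cross-references
---
```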

[Medium] Effectiveness - Optional skipping weakens reliability

  • Location: workflow policy sentence near the beginning
  • Description: allowing arbitrary step skipping can bypass repository checks or user confirmation loops.
  • Suggestion: define required core steps and allow skipping only for clearly marked optional branches.

[Low] Effectiveness - Missing completion validation

  • Location: final output instruction
  • Description: the skill says to submit as a GitHub issue but does not specify how to verify success.
  • Suggestion: add a minimal validation step, such as confirming issue URL and key sections are present.
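As a sketch of what that validation step could check, the helper below verifies the returned issue URL shape and the presence of key headings in the issue body. The function name and the required section names are hypothetical; the skill does not define them:

```python
import re

# Illustrative section names; a real skill would list its own required headings.
REQUIRED_SECTIONS = ["Problem", "Proposed Modules", "Dependencies"]

def validate_issue(issue_url: str, issue_body: str) -> list[str]:
    """Return a list of validation problems; an empty list means the submission looks complete."""
    problems = []
    # A well-formed GitHub issue URL ends in /issues/<number>.
    if not re.match(r"https://github\.com/[^/]+/[^/]+/issues/\d+$", issue_url):
        problems.append(f"unexpected issue URL: {issue_url!r}")
    for section in REQUIRED_SECTIONS:
        # A section counts as present if it appears as a markdown heading in the body.
        if not re.search(rf"^#+\s*{re.escape(section)}", issue_body, re.MULTILINE):
            problems.append(f"missing section: {section}")
    return problems
```

A passing check then becomes the skill's explicit stopping condition instead of an implicit "issue was submitted".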

[Low] Safety - No sensitive-information boundary

  • Location: interview and repository exploration instructions
  • Description: the process asks for detailed user context but does not include a reminder to avoid exposing secrets.
  • Suggestion: add a short safety note about masking tokens, credentials, and private data.
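One lightweight way to implement such a reminder is a redaction pass over interview notes and repository excerpts before they enter the PRD. The patterns below are illustrative examples of common credential shapes, not an exhaustive list:

```python
import re

# Hypothetical patterns; a real skill would tune these to its ecosystem.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),  # generic key=value leaks
]

def redact(text: str) -> str:
    """Replace anything matching a secret pattern with a placeholder before it reaches the PRD."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```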

Prioritized Recommendations

  1. [Must] Add complete YAML frontmatter governance fields and structured metadata.
  2. [Should] Make core workflow steps mandatory and define optional branches explicitly.
  3. [Should] Add completion validation criteria for GitHub issue submission.
  4. [Could] Add a lightweight safety reminder for handling sensitive project details.

Related Resources

Recommended Reading