
#!/usr/bin/env python3
"""
title: "Generate Doc Quality Report"
component_type: script
version: "1.0.0"
audience: contributor
status: stable
summary: "CODITECT Documentation Quality Report Generator"
keywords: ['analysis', 'doc', 'generate', 'generation', 'quality']
tokens: ~500
created: 2025-12-22
updated: 2025-12-22
script_name: "generate-doc-quality-report.py"
language: python
executable: true
usage: "python3 scripts/generate-doc-quality-report.py [options]"
python_version: "3.10+"
dependencies: []
modifies_files: false
network_access: false
requires_auth: false

CODITECT Documentation Quality Report Generator

STATUS: STUB - Not yet implemented
VERSION: 0.1.0 (placeholder)
AUTHOR: CODITECT Core Team

DESCRIPTION: Generates comprehensive documentation quality reports by analyzing markdown files for completeness, consistency, link validity, and adherence to documentation standards.

PURPOSE:
- Score documentation quality on multiple dimensions
- Identify missing or incomplete documentation
- Validate internal and external links
- Check for stale/outdated content
- Enforce documentation standards and templates
- Generate actionable improvement recommendations

EXPECTED INPUTS:
--paths           : Directories to scan for documentation
--config          : Documentation standards config file
--output          : Output report file path
--format          : Report format (json, html, markdown)
--threshold       : Minimum quality score to pass (0-100)
--check-links     : Validate all links (slower)
--check-freshness : Flag docs older than N days

EXPECTED OUTPUTS:
- doc-quality-report.json/html/md with:
  {
    "overall_score": 85,
    "summary": { "total_files": N, "passing": N, "warnings": N, "errors": N },
    "by_category": { "completeness": 90, "consistency": 85, "links": 95, "freshness": 70 },
    "files": [{ "path": "docs/README.md", "score": 92, "issues": [...] }],
    "recommendations": [...]
  }
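The report schema above can be typed as a sketch (field names mirror
the JSON keys; none of these types exist in the stub yet):

```python
from typing import List, TypedDict


class Summary(TypedDict):
    total_files: int
    passing: int
    warnings: int
    errors: int


class FileResult(TypedDict):
    path: str
    score: int
    issues: List[str]


class QualityReport(TypedDict):
    overall_score: int
    summary: Summary
    by_category: dict
    files: List[FileResult]
    recommendations: List[str]
```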

DEPENDENCIES:
- markdown-it-py or mistune - for markdown parsing
- requests - for link validation
- pyyaml - for frontmatter parsing
- Jinja2 - for HTML report generation

IMPLEMENTATION REQUIREMENTS:
1. Markdown AST parsing for structure analysis
2. Frontmatter extraction and validation
3. Internal link resolution and validation
4. External link checking with caching
5. Template compliance checking
6. Readability scoring (Flesch-Kincaid, etc.)
7. Code block language detection
8. Image alt-text validation
9. TOC generation and validation
10. Cross-reference consistency

QUALITY DIMENSIONS:
- Completeness: Required sections present
- Consistency: Formatting, naming conventions
- Links: Internal/external link validity
- Freshness: Last modified vs content age
- Readability: Language complexity
- Accessibility: Alt text, heading hierarchy

USAGE EXAMPLES:

# Basic quality report
python scripts/generate-doc-quality-report.py --paths docs/

# With link validation
python scripts/generate-doc-quality-report.py \
    --paths docs/ \
    --check-links \
    --output reports/doc-quality.html

# CI mode with threshold
python scripts/generate-doc-quality-report.py \
    --paths docs/ \
    --threshold 80 \
    --format json

RELATED COMMANDS:
- /lint-docs : Documentation linting command
- /doc-generate : Documentation generation

SEE ALSO:
- commands/lint-docs.md
- docs/DOCUMENTATION-STANDARDS.md
"""

import argparse
import json
import sys
from datetime import datetime
from pathlib import Path  # unused in the stub; reserved for the full implementation

def main():
    parser = argparse.ArgumentParser(
        description='Documentation Quality Report Generator',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog='''
Examples:
  %(prog)s --paths docs/
  %(prog)s --paths docs/ --check-links --format html
  %(prog)s --threshold 80 --format json

Status: STUB - Implementation required
'''
    )

    parser.add_argument('--paths', nargs='*', default=['docs/'],
                        help='Directories to scan (default: docs/)')
    parser.add_argument('--config', default='.doc-standards.yml',
                        help='Standards configuration file')
    parser.add_argument('--output', default='doc-quality-report',
                        help='Output file base name')
    parser.add_argument('--format', default='json',
                        choices=['json', 'html', 'markdown'],
                        help='Report format (default: json)')
    parser.add_argument('--threshold', type=int, default=0,
                        help='Minimum score to pass (0-100, default: 0)')
    parser.add_argument('--check-links', action='store_true',
                        help='Validate all links (slower)')
    parser.add_argument('--check-freshness', type=int, default=0,
                        help='Flag docs older than N days')
    parser.add_argument('--verbose', '-v', action='store_true',
                        help='Verbose output')

    args = parser.parse_args()

    print("=" * 70)
    print("CODITECT DOC-QUALITY-REPORT - STUB IMPLEMENTATION")
    print("=" * 70)
    print("\nThis script is a placeholder stub.")
    print("Full implementation is required.\n")
    print("Configuration:")
    print(f"  Paths: {args.paths}")
    print(f"  Config: {args.config}")
    print(f"  Output: {args.output}.{args.format}")
    print(f"  Threshold: {args.threshold}")
    print(f"  Check Links: {args.check_links}")
    print(f"  Check Freshness: {args.check_freshness} days")
    print()

    # Create stub output. Note: the stub always writes JSON, even when
    # --format is html or markdown; rendering those formats is part of
    # the unimplemented work.
    output_file = f"{args.output}.{args.format}"

    stub_report = {
        "status": "stub",
        "message": "Documentation quality report not yet implemented",
        "timestamp": datetime.now().isoformat(),
        "overall_score": 0,
        "summary": {
            "total_files": 0,
            "passing": 0,
            "warnings": 0,
            "errors": 0
        },
        "by_category": {
            "completeness": 0,
            "consistency": 0,
            "links": 0,
            "freshness": 0,
            "readability": 0
        },
        "files": [],
        "recommendations": [
            "Implement this script to enable documentation quality tracking"
        ]
    }

    with open(output_file, 'w') as f:
        json.dump(stub_report, f, indent=2)

    print(f"Stub report written to: {output_file}")
    print("\nTo implement this script, see the docstring requirements above.")

    return 0

if __name__ == '__main__':
    sys.exit(main())