The intelligence at the heart of `gh.gg` is powered by a suite of AI analysis modules, primarily utilizing **Google Gemini AI**. These modules, located in the `src/lib/ai/` directory, are responsible for processing raw GitHub data and transforming it into actionable insights, visualizations, and comprehensive documentation. They directly implement the AI Processing stage described in the Architecture Overview.
Each module is designed for a specific analytical task, taking structured inputs and generating structured outputs along with detailed markdown reports and token usage statistics.
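This shared "structured output plus markdown report plus token usage" pattern can be sketched as a generic wrapper type. Note that `TokenUsage`, `AnalysisResult`, and `wrapResult` are illustrative names for this sketch, not the actual exports of `src/lib/ai/`:

```typescript
// Hedged sketch of the common result shape shared by the AI modules.
// These type and function names are assumptions for illustration only.
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

interface AnalysisResult<T> {
  data: T;          // module-specific structured output (e.g. a commit score breakdown)
  markdown: string; // human-readable report rendered from the structured data
  usage: TokenUsage; // token accounting for the Gemini call(s)
}

// Example: wrapping a minimal commit-analysis-style payload.
function wrapResult<T>(data: T, markdown: string, usage: TokenUsage): AnalysisResult<T> {
  return { data, markdown, usage };
}

const result = wrapResult(
  { overallScore: 87 },
  "## Commit Analysis\nScore: 87/100",
  { inputTokens: 1200, outputTokens: 300 },
);
```

Keeping the structured data separate from the rendered markdown lets callers consume whichever form they need, while `usage` supports cost accounting across modules.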
**`ai-slop.ts`**
- **Inputs:** `files` (with path and content) and the `repoName`.
- **Outputs:** `AISlopData` (structured metrics, overall score, AI-generated percentage, detected patterns) and `usage` (token count).

**`battle-analysis.ts`**
- **Purpose:** Compares `DeveloperProfile` objects across various criteria (e.g., code quality, innovation, architecture) and calculates ELO rating changes.
- **Inputs:** `challengerProfile`, `opponentProfile`, their respective usernames, and an array of criteria for the battle.
- **Outputs:** `BattleAnalysisResult` (winner, reason, score breakdown, highlights, recommendations) and `usage`.

**`commit-analysis.ts`**
- **Inputs:** `commitMessage`, `commitSha`, author details, `changedFiles` (including patch content), `repoName`, and `branch`.
- **Outputs:** `CommitAnalysisResult` (overall score, slop ranking, breakdown for message/code quality/AI likelihood/best practices, and actionable fixes), a formatted markdown report, and `usage`.

**`developer-profile.ts`**
- **Purpose:** Analyzes a developer's repositories (drawing on `scorecard.ts` module output) to provide scored insights and improvement suggestions. It also handles finding and storing developer emails from commit history.
- **Inputs:** `username`, `RepoSummary` for their repositories, optional `repoFiles` for deeper code analysis, and `userId` for linking generated scorecards.
- **Outputs:** `DeveloperProfile` (summary, skill assessment, tech stack, development style, top repositories, suggestions) and `usage`.

**`diagram.ts`**
- **Inputs:** `files` (with path and content), `repoName`, `diagramType`, `options` (e.g., layout), and optional `previousResult`/`lastError` for retry attempts.
- **Outputs:** `diagramCode` (Mermaid syntax) and `usage`.

**`documentation.ts`**
- **Inputs:** `repoName`, `repoDescription`, `primaryLanguage`, a list of files with content, `packageJson`, and README content. Supports `useChunking` and `tokensPerChunk` for scaling.
- **Outputs:** `DocumentationResult` (structured JSON for all wiki sections), a full markdown string, and `usage`.

**`issue-analysis.ts`**
- **Inputs:** `issueTitle`, `issueBody`, `issueNumber`, author details, labels, `repoName`, and optional `repoDescription`.
- **Outputs:** `IssueAnalysisResult` (overall score, slop ranking, breakdown for clarity/actionability/completeness/AI likelihood/duplicate risk, suggested labels, priority, and improvements), a formatted markdown report, and `usage`.

**`pr-analysis.ts`**
- **Inputs:** `prTitle`, `prDescription`, `changedFiles` (including patch content), `repoName`, `baseBranch`, and `headBranch`.
- **Outputs:** `PRAnalysisResult` (overall score, summary, breakdown for code quality/security/performance/maintainability, and key recommendations), a formatted markdown comment, and `usage`.

**`scorecard.ts`**
- **Inputs:** `files` (with path and content) and the `repoName`.
- **Outputs:** `ScorecardData` (structured metrics, overall score), a formatted markdown report focusing on business impact, and `usage`.

**`wiki-generator-streaming.ts`**
- **Purpose:** A streaming counterpart to the `wiki-generator.ts` module, designed to handle large repositories by processing files in chunks and providing real-time progress updates. Like `documentation.ts`, it plans and generates comprehensive wiki documentation, but adds a layer of robust, scalable processing for massive codebases.
- **Inputs:** The same inputs as `wiki-generator.ts`, plus an `onProgress` callback for streaming updates.
- **Outputs:** A stream of `ProgressUpdate` objects, culminating in a `WikiGeneratorResult` (generated pages, detailed usage including cache tokens).
- **Notes:** Uses context caching (`genAI.caches`) to store the entire codebase context for efficient multi-stage generation, performs chunk-by-chunk analysis followed by a final stitching phase, and incorporates retry logic with exponential backoff to manage API rate limits.

These AI modules form the "intelligence layer" of `gh.gg`, leveraging Google Gemini's natural language understanding and generation capabilities to deliver deep insights and automation for GitHub repositories.
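The retry logic with exponential backoff described for `wiki-generator-streaming.ts` can be sketched as a small generic helper. The function name, delay constants, and attempt count below are illustrative assumptions, not the module's actual implementation:

```typescript
// Hedged sketch of retry-with-exponential-backoff for rate-limited API calls.
// The helper name and constants are assumptions, not gh.gg's actual code.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Double the wait on each failed attempt: 1s, 2s, 4s, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Wrapping each Gemini call in a helper like this lets transient rate-limit errors resolve themselves without failing the whole multi-stage generation.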
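The chunk-by-chunk processing with an `onProgress` callback can also be illustrated with a minimal sketch; the `ProgressUpdate` shape, phase names, and chunking policy here are assumptions rather than the module's real types:

```typescript
// Hedged sketch of chunked processing with streamed progress updates,
// in the spirit of wiki-generator-streaming.ts. The ProgressUpdate shape
// and chunk size are illustrative assumptions.
interface ProgressUpdate {
  phase: "analyzing" | "stitching";
  completedChunks: number;
  totalChunks: number;
}

function chunkFiles<T>(files: T[], chunkSize: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < files.length; i += chunkSize) {
    chunks.push(files.slice(i, i + chunkSize));
  }
  return chunks;
}

async function processInChunks<T>(
  files: T[],
  chunkSize: number,
  analyze: (chunk: T[]) => Promise<void>,
  onProgress: (update: ProgressUpdate) => void,
): Promise<void> {
  const chunks = chunkFiles(files, chunkSize);
  // Analyze each chunk, reporting progress after every one.
  for (let i = 0; i < chunks.length; i++) {
    await analyze(chunks[i]);
    onProgress({ phase: "analyzing", completedChunks: i + 1, totalChunks: chunks.length });
  }
  // Final stitching phase combines the per-chunk results.
  onProgress({ phase: "stitching", completedChunks: chunks.length, totalChunks: chunks.length });
}
```

Streaming progress through a callback keeps the caller (e.g. a UI showing generation status) informed without waiting for the entire codebase to finish processing.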