# LLM Integration
Kairos uses rig-core to communicate with Claude AI for three core functions: skill extraction, JD analysis, and resume tailoring.
## Architecture

```mermaid
graph TB
    subgraph "kairos-core (traits)"
        T0[SkillExtractorService]
        T1[JdAnalysisService]
        T2[ResumeTailorService]
    end
    subgraph "kairos-llm (implementations)"
        SE[ClaudeSkillExtractor]
        A[ClaudeAnalyzer]
        T[ClaudeTailor]
        P0[Skill Extraction Prompts]
        P1[JD Analysis Prompts]
        P2[Resume Tailoring Prompts]
        RC[rig-core Client]
    end
    SE --> T0
    A --> T1
    T --> T2
    SE --> P0
    A --> P1
    T --> P2
    SE --> RC
    A --> RC
    T --> RC
    RC --> API[Claude API]
```
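Concretely, the trait layer might look like the following sketch. The trait names come from the diagram, but every method signature below is a hypothetical illustration, not the actual kairos-core API:

```rust
// Hypothetical sketch of the kairos-core trait layer. Trait names match the
// architecture diagram; all signatures are illustrative assumptions.

pub trait SkillExtractorService {
    /// Extract skill names from raw resume text.
    fn extract_skills(&self, resume_text: &str) -> Vec<String>;
}

pub trait JdAnalysisService {
    /// Analyze a job description against a skill profile; returns a
    /// match score in 0.0..=1.0.
    fn analyze(&self, jd_text: &str, profile: &[String]) -> f64;
}

pub trait ResumeTailorService {
    /// Rewrite tailorable sections to emphasize JD-relevant content.
    fn tailor(&self, sections: &[String], jd_text: &str) -> Vec<String>;
}

// A stub implementation shows the value of the trait split: kairos-core
// code can be exercised in tests without touching the Claude API.
pub struct StubExtractor;

impl SkillExtractorService for StubExtractor {
    fn extract_skills(&self, resume_text: &str) -> Vec<String> {
        resume_text
            .split_whitespace()
            .filter(|w| *w == "Rust" || *w == "PostgreSQL")
            .map(str::to_string)
            .collect()
    }
}
```

The `kairos-llm` crate then provides the Claude-backed implementations of these traits, keeping prompt details and the rig-core client out of the core crate.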
## Skill Extraction
Runs once during resume import to build the user’s Skill Profile.
### Prompt Strategy
Temperature: 0.0 (deterministic extraction)
The prompt instructs Claude to:
- Analyze each work experience and project section from the resume
- Extract every technical skill mentioned (languages, frameworks, databases, tools, concepts)
- Estimate proficiency (1-10) based on context clues: duration, depth of usage, role seniority
- Estimate years of experience per skill
- Cite evidence — which project/role demonstrates each skill
Output is parsed as a `Vec<SkillEntry>` via rig-core’s structured output. Users can review and adjust proficiency ratings after generation.
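The parsed structure might look like the sketch below. The field names are assumptions derived from what the prompt asks for; the real `SkillEntry` definition lives in kairos-core:

```rust
// Hypothetical shape of a parsed skill entry, mirroring the data the
// extraction prompt requests. Field names are illustrative assumptions.
#[derive(Debug, Clone)]
pub struct SkillEntry {
    pub name: String,          // e.g. "PostgreSQL"
    pub proficiency: u8,       // 1-10, estimated from context clues
    pub years: f32,            // estimated years of experience
    pub evidence: Vec<String>, // roles/projects demonstrating the skill
}

impl SkillEntry {
    /// Clamp a model-provided rating into the valid 1-10 range; users
    /// review and adjust these values after generation anyway.
    pub fn with_clamped_proficiency(mut self) -> Self {
        self.proficiency = self.proficiency.clamp(1, 10);
        self
    }
}
```

Defensive clamping like this is cheap insurance: even at temperature 0.0, a structured-output field can occasionally fall outside the requested range.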
## JD Analysis

### Prompt Strategy
Temperature: 0.0 (deterministic extraction)
The prompt instructs Claude to:
- Extract structured data from the JD text
- Identify required vs preferred skills with estimated proficiency level needed
- Extract ATS keywords
- Compare against the user’s Skill Profile (proficiency-aware, not just binary)
- Compute a match score (0.0 - 1.0) with per-skill breakdown and reasoning
Output is parsed as JSON matching the `JdAnalysis` struct via rig-core’s structured output.
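A proficiency-aware comparison (as opposed to binary skill matching) could be computed along these lines. The weighting scheme here is an illustrative assumption, not the actual kairos scoring logic:

```rust
use std::collections::HashMap;

/// A skill requirement extracted from the JD: needed level (1-10) and
/// whether it is a hard requirement or merely preferred. Illustrative type.
pub struct Requirement {
    pub skill: String,
    pub level: u8,
    pub required: bool,
}

/// Proficiency-aware match score in 0.0..=1.0. Required skills carry more
/// weight than preferred ones, and partial credit is given when the user
/// knows a skill but below the level the JD asks for.
pub fn match_score(profile: &HashMap<String, u8>, reqs: &[Requirement]) -> f64 {
    let mut earned = 0.0;
    let mut total = 0.0;
    for r in reqs {
        let weight = if r.required { 2.0 } else { 1.0 };
        total += weight;
        if let Some(&have) = profile.get(&r.skill) {
            // Full credit at or above the required level, partial below it.
            earned += weight * (have.min(r.level) as f64 / r.level as f64);
        }
    }
    if total == 0.0 { 1.0 } else { earned / total }
}
```

For example, a user at Rust 8/10 and SQL 4/10 facing required Rust 8 and SQL 8 plus preferred Docker 5 earns full credit for Rust, half credit for SQL, and nothing for Docker, landing at 0.6 rather than the 0.67 a binary skill count would report.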
## Resume Tailoring

### Anti-Hallucination Rules (enforced in system prompt)
- Only rephrase, reorder, or emphasize existing content
- Never invent skills, experiences, achievements, or metrics
- Never change company names, dates, job titles, education details
- Only modify sections marked as tailorable
- Preserve the original format (LaTeX or Markdown)
Temperature: 0.3 (controlled creativity)
### Validation
After LLM response, a validation step checks:
- No new company names or job titles appeared
- No new degree or institution names
- Dates haven’t changed
- Section types match what was sent
If validation fails, Kairos retries with a stricter prompt or flags the result to the user.
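One way to implement the "no fabricated content" checks is as a set comparison over immutable facts. This sketch assumes a hypothetical `Facts` type whose fields have already been parsed out of each resume version; the real validator works on parsed sections:

```rust
use std::collections::HashSet;

/// Immutable facts pulled from one resume version (hypothetical helper type).
#[derive(Default)]
pub struct Facts {
    pub companies: HashSet<String>,
    pub titles: HashSet<String>,
    pub dates: HashSet<String>,
}

/// The tailored output passes only if every fact it mentions already
/// appeared in the original: tailoring may rephrase or reorder, never add.
pub fn validate(original: &Facts, tailored: &Facts) -> Result<(), String> {
    for (label, before, after) in [
        ("company", &original.companies, &tailored.companies),
        ("job title", &original.titles, &tailored.titles),
        ("date", &original.dates, &tailored.dates),
    ] {
        if let Some(new) = after.difference(before).next() {
            return Err(format!("fabricated {label}: {new}"));
        }
    }
    Ok(())
}
```

Framing the check as "tailored facts must be a subset of original facts" keeps it cheap and deterministic, so it can run on every LLM response before anything is shown to the user.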
### Output Flow

```mermaid
flowchart TD
    Input["Base Resume Sections\n+ JD Analysis"] --> LLM[Claude via rig-core]
    LLM --> Raw[Raw LLM Response]
    Raw --> Parse[Parse modified sections]
    Parse --> Validate[Validate: no fabricated content]
    Validate --> |Pass| Diff[Generate diff]
    Validate --> |Fail| Retry[Retry with stricter prompt]
    Diff --> User[Show to user for approval]
```
## Cost

| Operation | Tokens (approx.) | Cost (Sonnet) | Frequency |
|---|---|---|---|
| Skill Extraction | 1,000-2,000 | ~$0.01-0.02 | Once per resume import |
| JD Analysis | 500-1,500 | ~$0.003-0.01 | Per job |
| Resume Tailoring | 1,000-3,000 | ~$0.01-0.03 | Per job |
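The per-operation figures follow directly from per-token pricing. A back-of-the-envelope helper, assuming Sonnet's published rates of $3 per million input tokens and $15 per million output tokens (check current pricing before relying on this):

```rust
/// Rough cost estimate in USD, assuming Claude Sonnet pricing of
/// $3 / 1M input tokens and $15 / 1M output tokens. Pricing is an
/// assumption and subject to change.
pub fn estimate_cost_usd(input_tokens: u64, output_tokens: u64) -> f64 {
    input_tokens as f64 * 3.0 / 1_000_000.0
        + output_tokens as f64 * 15.0 / 1_000_000.0
}
```

Under those rates, a skill extraction with roughly 1,000 input and 500 output tokens lands near $0.01, consistent with the table above.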
## Model Configuration

Default: `claude-sonnet-4`

```toml
[api]
claude_model = "claude-sonnet-4"
```

rig-core’s multi-provider support allows switching to OpenAI or Gemini in the future.