Guide
Core Features

Built for the way interviews actually work

Eight modules. One scoring standard. Each module targets signals real interviewers evaluate — from correctness and structure to communication and trade-offs.

  • 8 feature modules
  • 6 AI agents
  • Multi-turn session depth
  • 100% explainable scores

Click any feature to explore it. All modules share a unified scoring engine.

HOW IT WORKS

From session start to actionable insight

Six stages that mirror a real interview loop — setup, question selection, response capture, evaluation, follow-ups, and session summary.

KEY DIFFERENTIATOR

Stratax evaluates at the session level, not just per-question. It tracks reasoning patterns, communication signals, and improvement over time — so you can see both single-answer quality and session-level consistency.
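As a toy illustration of session-level evaluation, per-question scores can be rolled up into an overall average plus a consistency signal; the field names and the spread threshold below are assumptions for demonstration, not the actual scoring engine.

```python
from statistics import mean, pstdev

def session_summary(scores: list[float]) -> dict:
    """Summarize a session from per-question scores (0-100).

    Illustrative sketch only: the real engine also weighs reasoning
    and communication signals, which are not modeled here.
    """
    avg = mean(scores)
    spread = pstdev(scores)  # low spread = consistent session
    return {
        "session_score": round(avg, 1),
        "consistency": round(spread, 1),
        "stable": spread < 10.0,  # assumed threshold for illustration
    }

summary = session_summary([92, 78, 85, 70])
```

A session with one great answer and one weak answer can share an average with a steadily solid session; the spread value is what separates them.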

Advisory Note — AI feedback is guidance, not a hiring decision. Use the scores to find improvement areas, validate progress over time, and complement with real interviews and peer review.

ARCHITECTURE

Backend Technical Design

A modular FastAPI backend with provider-agnostic LLM orchestration, practice agents, hybrid retrieval, privacy-safe telemetry, and independently toggleable subsystems.

SYSTEM LAYERS

TECH STACK

Technology Stack

A pragmatic stack optimized for low-latency UX, type safety, and modular AI-powered evaluation.

  • 16 libraries & tools
  • Async I/O model
  • Dual-LLM AI engine
  • Hybrid search strategy

Multi-Agent Architecture

Practice Mode combines specialized agents and services for local STT, speech analytics, adaptive interviewing, deterministic scoring, proctoring state, and final coaching summaries.

  • Parallel signal extraction: content + delivery metrics are computed side-by-side
  • Unified rubric: outputs are merged into consistent scoring dimensions
  • Actionable coaching: feedback is turned into next-step improvements and drills
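The parallel signal extraction described above can be sketched with `asyncio`; the agent internals are stubbed and the dimension names are illustrative assumptions, not the real agents' outputs.

```python
import asyncio

async def content_agent(transcript: str) -> dict:
    # Stub: in the real system an LLM scores correctness and depth.
    return {"correctness": 8.5, "depth": 7.0}

async def delivery_agent(transcript: str) -> dict:
    # Stub: in the real system signal processing scores pace and fillers.
    return {"pace": 9.0, "fillers": 6.5}

async def evaluate(transcript: str) -> dict:
    # Content and delivery signals are computed side-by-side,
    # then merged into one unified rubric.
    content, delivery = await asyncio.gather(
        content_agent(transcript), delivery_agent(transcript)
    )
    return {**content, **delivery}

rubric = asyncio.run(evaluate("sample answer"))
```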
Agent Responsibility Technology
InterviewerAgent Orchestrates question flow, follow-ups, and micro-feedback Rules + LLM
SpeechAnalyticsAgent Delivery signals: pace, fillers, pauses, and consistency Signal processing + analytics
AdaptiveInterviewerAgent Evaluates answer quality, depth, and trade-offs Gemini / Groq
ConversationalAgent Maintains context and probes assumptions conversationally LLM + session memory
EvaluationAgent Generates structured summaries and coaching recommendations LLM
LocalSTTService Server-side transcription for practice audio submissions faster-whisper
Proctoring Engine Tracks heartbeat staleness, risk level, violations, and termination state Backend session state + rules
MODULE · AI COPILOT

AI Copilot

Context-aware conversational assistant with session persistence, SSE streaming, dual-provider routing, and identity protection.

  • Turn context: persistent memory
  • 2 LLM providers: Groq + Gemini
  • SSE streaming: server-sent events
  • 0 impersonation: identity guard enforced
REQUEST LIFECYCLE

Request → Groq (primary, Gemini fallback ready) → identity guard check (no impersonation) → SSE stream to the client.
ALL CAPABILITIES
  • Session persistence
  • Multi-turn context
  • Resume injection
  • Groq + Gemini dual LLM
  • SSE streaming
  • Identity guard
  • Provider hot-swap
  • Post-processing pipeline
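The provider routing and SSE framing can be sketched in a few lines; the function names, the simulated outage, and the error handling are assumptions for illustration, not the actual Stratax implementation.

```python
def sse_frame(chunk: str) -> str:
    # Server-sent events deliver each chunk as a "data:" frame
    # terminated by a blank line.
    return f"data: {chunk}\n\n"

def route(prompt: str, providers: list) -> str:
    # Try providers in order: Groq primary, Gemini fallback.
    for call in providers:
        try:
            return call(prompt)
        except Exception:
            continue  # hot-swap to the next provider
    raise RuntimeError("all providers failed")

def groq(prompt): raise TimeoutError("groq unavailable")  # simulated outage
def gemini(prompt): return "fallback answer"

answer = route("design a rate limiter", [groq, gemini])
stream = sse_frame(answer)
```

Because each frame is self-delimiting, the client can render tokens as they arrive and the server can swap providers mid-session without changing the wire format.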
MODULE · PRACTICE MODE

Practice Mode

Session-based practice for voice and coding rounds with deterministic scoring, local STT, backend-authoritative proctoring, progress continuity, and targeted next-session recommendations.

Voice + coding · Camera + screen consent · Heartbeat + risk state · Next-session plan
  • 🤖 6 AI agents: parallel evaluation
  • 🎯 3 round types: Behavioral · Technical · Mixed
  • 🎙️ WPM speech metric: words per minute tracked
  • Local STT engine: faster-whisper on the server
LIVE EVALUATION CRITERIA
  • Correctness (92): technical accuracy of your answer
  • Delivery (78): speech pace, clarity, and confidence
  • Clarity (85): structure and logical flow
  • Structure (70): how well you organize your response
6-AGENT PIPELINE
  A1 📝 Local STT Service: transcribes uploaded practice audio on the server
  A2 📊 Speech Analytics Agent: tracks WPM, pauses, and filler words
  A3 Adaptive Interviewer: evaluates correctness, coverage, and trade-offs
  A4 💬 Evaluation Agent: builds final coaching summaries and action steps
  A5 🧠 Proctoring Engine: tracks heartbeat, risk level, and termination thresholds
  A6 ⚙️ Practice Orchestrator: coordinates rounds, scoring, progress, and resume context
MODULE · MOCK INTERVIEW

Mock Interview

Multi-question session simulator with progressive hints and 5-criteria LLM evaluation — each answer scored across a full rubric.

  • 🎤 4 interview types: Behavioral · Technical · System · Mixed
  • 💡 3 hint levels: progressive scaffolding
  • 📊 5 eval criteria: per-question rubric
  • 📄 HTML export: downloadable session report
5-CRITERIA EVALUATION
  • Correctness (0–10): factual and technical accuracy of your answer
  • Completeness (0–10)
  • Clarity (0–10)
  • Confidence (0–10)
  • Technical Depth (0–10)
PROGRESSIVE HINT SYSTEM
  L1 Nudge: a directional hint toward the right concept.
  L2 Guided: a partial breakdown of the expected answer structure.
  L3 Full Scaffold: a detailed walkthrough, shown only after two failed attempts.
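The hint-escalation rule can be sketched as a small gate; the level names and the two-attempt threshold come from the description above, while the function itself is an assumed illustration.

```python
HINTS = {1: "Nudge", 2: "Guided", 3: "Full Scaffold"}

def next_hint_level(hints_used: int, failed_attempts: int) -> int:
    """Return the next hint level (1-3), withholding the full
    scaffold until two attempts have failed."""
    level = min(hints_used + 1, 3)
    if level == 3 and failed_attempts < 2:
        return 2  # cap at Guided until the second failed attempt
    return level
```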
SESSION FLOW
  1. Configure session
  2. Answer questions
  3. Request hints (optional)
  4. Receive per-question scores
  5. Download HTML report
MODULE · INTERVIEW INTELLIGENCE

Interview Intelligence

Hybrid retrieval + grounded question generation for company- and role-specific preparation.

  • 🔍 Hybrid retrieval: BM25 + vector fusion
  • 🏢 FAANG+ company prep: 50+ companies indexed
  • 🌐 Web + vector grounding: live web + stored vectors

RETRIEVAL PIPELINE
  1. Keyword retrieval: BM25 sparse
  2. Vector retrieval: dense embeddings
  3. Query expansion: LLM rewrite
  4. Rerank: Cohere (optional)
  5. Top results: final questions
COMPANY INDEX (FAANG+)
Google · Meta · Amazon · Apple · Netflix · Microsoft · Stripe · Airbnb · Uber · OpenAI · Anthropic · Databricks
How It Works
  • BM25 keyword retrieval on the company question corpus
  • Dense vector search via sentence-transformers
  • LLM query expansion for better recall
  • Cohere reranking for precision (optional)
  • Final questions grounded in company-specific patterns
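One common way to fuse the BM25 and vector rankings above is reciprocal rank fusion (RRF); the constant k=60 is the usual default, and whether Stratax uses RRF specifically is an assumption here, the question IDs are invented for the demo.

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Each result earns 1/(k + rank) from every ranking it appears in;
    # items ranked well by both retrievers float to the top.
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["q_rate_limiter", "q_url_shortener", "q_news_feed"]
vector = ["q_news_feed", "q_rate_limiter", "q_cache_design"]
fused = rrf([bm25, vector])
```

RRF needs only rank positions, not comparable scores, which is why it works well for merging a sparse and a dense retriever.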
MODULE · CODE EVALUATION

Code Evaluation

Hybrid static analysis + LLM critique pipeline for code submissions, executed server-side in an isolated Judge0 sandbox.

solution.py · Judge0 Sandbox · Two Sum (Medium)

# Two Sum — submitted by user
def two_sum(nums, target):
    seen = {}
    for i, n in enumerate(nums):
        diff = target - n
        if diff in seen:
            return [seen[diff], i]
        seen[n] = i

✓ O(n) time · ✓ O(n) space · ✓ All edge cases passed · PASSED · LLM Score: 9.2 / 10
  • ⚙️ Sandbox: server-side execution (Judge0)
  • 💻 Languages: Python, JS, Go, Java… (10+)
  • 📈 Complexity check: time & space analysis (O(n))
  • LLM score (demo): 9.2 out of 10 per submission
EVALUATION PIPELINE
  1. Submit: code + question sent
  2. Execute: Judge0 sandbox runs it
  3. Static analysis: complexity + style check
  4. LLM critique: deep code review
  5. Score: 0–10 + feedback
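The Execute step above hands code to Judge0, whose submissions API takes a JSON body with `source_code`, `language_id`, and `stdin`. A minimal payload builder might look like this; `language_id` 71 maps to Python in Judge0 CE, but check your instance's `/languages` endpoint rather than trusting this value.

```python
import json

def build_submission(source_code: str, stdin: str = "",
                     language_id: int = 71) -> str:
    # Serialize the fields Judge0's POST /submissions endpoint expects.
    payload = {
        "source_code": source_code,
        "language_id": language_id,  # 71 = Python in Judge0 CE (verify)
        "stdin": stdin,
    }
    return json.dumps(payload)

body = build_submission("print(sum(map(int, input().split())))",
                        stdin="1 2 3")
```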
SUPPORTED LANGUAGES
Python · JavaScript · TypeScript · Go · Java · C++ · Rust · Ruby · Kotlin · Swift
MODULE · RESUME INTELLIGENCE

Resume & Document Intelligence

LLM-based resume parsing with claim extraction, ATS scoring, skills gap analysis, and tailored interview probing — all in-memory, nothing stored.

  • 📄 PDF · DOCX · TXT: all major formats supported
  • 🔬 4 analysis depths: surface to deep claim extraction
  • 🔒 0 files stored: processed in memory, never persisted
  • ATS scoring engine: recruiter-standard rubric
ANALYSIS OUTPUT (DEMO)
  • ATS Score (88): how well the resume passes automated screening
  • Skills Match (74): alignment with target role requirements
  • Claim Strength (62): how verifiable and specific your claims are
  • Experience Fit (91): relevance of experience to the target role
CLAIM-BASED PROBING
  • “Led a team of 8 engineers to deliver…”: Strong
  • “Improved API latency by 40%”: Strong
  • “Worked with machine learning models”: Weak
ANALYSIS PIPELINE
  01 Parse document: PDF/DOCX/TXT extracted and cleaned
  02 Claim extraction: LLM identifies all verifiable claims
  03 ATS scoring: rubric-based recruiter simulation
  04 Gap analysis: skills vs. target role compared
  05 Interview probing: questions targeting each claim
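The claim-strength signal in the pipeline above is produced by an LLM; as a rough illustration of the idea, claims with concrete numbers tend to rate "Strong". The regex heuristic below is an assumed stand-in for demonstration, not the real classifier.

```python
import re

def claim_strength(claim: str) -> str:
    # A verifiable metric (team size, percentage, latency figure)
    # is the simplest proxy for a specific, probeable claim.
    has_metric = bool(re.search(r"\d+%?", claim))
    return "Strong" if has_metric else "Weak"

labels = [claim_strength(c) for c in (
    "Led a team of 8 engineers to deliver the platform",
    "Improved API latency by 40%",
    "Worked with machine learning models",
)]
```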

API Reference

Key REST endpoints powering auth, session Q&A, practice orchestration, retrieval, and code execution.

Authentication

POST /auth/register · Register a new user account
POST /auth/login · Authenticate and issue JWT credentials
GET /auth/me · Resolve the authenticated user profile

Chat / Q&A

POST /api/session · Create a new chat session
POST /api/question · Submit a question and receive the assistant response
GET /api/session/{id}/chat · Load stored chat history for a session
GET /api/sessions · List the user's chat sessions
Example Request
{ "question": "Explain distributed systems", "session_id": "abc-123-xyz" }
Example Response
{ "answer": "A distributed system is...", "session_id": "abc-123-xyz" }

Mock Interview

POST /api/mock-interview/sessions/start · Start a mock interview session
POST /api/mock-interview/sessions/submit-answer · Submit an answer for the active mock question
POST /api/mock-interview/sessions/{id}/hint · Request the next progressive hint
Example Request
{ "type": "System Design", "difficulty": "Hard", "num_questions": 3 }
Example Response
{ "session_id": "mock-789", "status": "in_progress", "first_question": { "text": "Design a global rate limiter..." } }
GET /api/mock-interview/sessions/{id}/summary · Get the session summary

Practice Mode

POST /api/practice/interview/start-round · Start a round-based practice session
POST /api/practice/interview/quick-start · Start practice from conversational onboarding
POST /api/practice/session/{id}/start · Enforce the camera and screen-share gate for Live Practice
POST /api/practice/interview/submit-answer · Submit a voice answer
POST /api/practice/session/{id}/proctoring/heartbeat · Update backend-authoritative proctoring state
GET /api/practice/progress/summary · Fetch attempts, scores, and progress rollups

Interview Intelligence

GET /api/intelligence/search · Search questions with hybrid retrieval
POST /api/intelligence/search/enhanced · Run enhanced search with reranking and expansion
GET /api/intelligence/companies · List supported company filters

Code Execution

POST /api/code/execute · Execute code in the backend sandbox with optional trace output

Interactive API Docs: Full interactive documentation is available via Swagger UI at /docs and ReDoc at /redoc when running the server.

Deployment

Deploy Stratax AI as a FastAPI service with PostgreSQL, Qdrant, Redis, and optional Kroki or Sentry integrations.

Environment Setup

Deployment Status: Production deployments are expected to use PostgreSQL, external Qdrant, and Redis-backed coordination. Local SQLite and file-backed storage remain development-friendly options only.

Environment Variables

Variable | Required | Description
GROQ_API_KEY | One of Groq/Gemini | Groq API key for Llama/Mixtral
GEMINI_API_KEY | One of Groq/Gemini | Google Gemini API key
DATABASE_URL | Recommended | SQL database for users, usage, telemetry, and rate limits
QDRANT_URL | Optional | Shared Qdrant endpoint for semantic retrieval and multi-worker scaling
REDIS_URL | Optional | Redis for distributed caching, rate limiting, and cross-worker coordination
JWT_SECRET_KEY | Production | Persistent JWT signing key
COOKIE_SECRET | Production | Cookie encryption secret
STRATAX_SECRETS_ENCRYPTION_KEY | Production | Encrypts user-stored provider keys with Fernet
REQUIRE_USER_API_KEY | Optional | Enforces strict BYOK behavior for Interview Intelligence and other key-consuming flows
SENTRY_DSN | Optional | Enables centralized error tracking and release tagging
Production Notes: Use PostgreSQL in production, keep Qdrant and Redis external for multi-worker deployments, set a persistent JWT_SECRET_KEY and COOKIE_SECRET, and enable STRATAX_SECRETS_ENCRYPTION_KEY before storing provider keys.
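A startup check mirroring the table above can fail fast on misconfiguration; the function and its messages are an assumed sketch, not part of the Stratax codebase.

```python
def validate_env(env: dict, production: bool = True) -> list[str]:
    """Return a list of configuration problems (empty = OK)."""
    problems = []
    # At least one LLM provider key is required.
    if not (env.get("GROQ_API_KEY") or env.get("GEMINI_API_KEY")):
        problems.append("set GROQ_API_KEY or GEMINI_API_KEY")
    # Production requires persistent signing/encryption secrets.
    if production:
        for key in ("JWT_SECRET_KEY", "COOKIE_SECRET"):
            if not env.get(key):
                problems.append(f"set a persistent {key}")
    return problems

issues = validate_env({"GROQ_API_KEY": "gsk_demo"})
```

In a FastAPI app this would typically run once at startup, with `env` populated from `os.environ`, so a missing secret aborts the deploy instead of surfacing as a runtime error.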

Security & Privacy

Built with privacy-safe design principles from the ground up.

  • Privacy-safe telemetry: Stable identifiers are hashed and raw text is avoided by default
  • No raw audio for learning: Audio files are session-only and excluded from learning loops
  • Resume privacy: Raw uploaded files are deleted after extraction
  • Encrypted provider keys: User-stored keys are encrypted with Fernet
  • JWT + OAuth: Stateless JWT authentication with Google OAuth integration
  • Guest continuity: Stable guest identifiers preserve practice progress across sessions
  • Backend-authoritative proctoring: Heartbeats, violation counters, and termination reasons are tracked server-side
  • Identity guard + post-processing: Responses are sanitized and checked before they reach the UI
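The "stable identifiers are hashed" principle above can be sketched as a salted one-way digest; the salt value and the truncation length are illustrative assumptions.

```python
import hashlib

def telemetry_id(user_id: str, salt: str = "stratax-telemetry") -> str:
    # Telemetry keeps a salted SHA-256 digest instead of the raw
    # user id: stable across sessions, but not reversible.
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return digest[:16]  # truncated for compact storage (assumed length)

anon = telemetry_id("user_42")
```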