Automate Candidate Pre-Screening with a Custom Voice AI
The best voice AI for pre-screening is a custom system that pairs a high-quality speech-to-text model with an LLM scorer. It asks role-specific questions and grades answers against a predefined rubric, rather than following the rigid scripts of generic SaaS tools.
The system's scope depends on the number of screening questions and the specific Applicant Tracking System (ATS) integration. A simple 5-question screen for a single role that pushes a score to Greenhouse is a standard build. A multi-role system with conditional logic and custom analytics requires more discovery.
We built a voice pre-screening agent for a 12-person recruiting firm processing 400 applicants/month for warehouse roles. We deployed their system in 3 weeks. It cut their pre-screening time from 15 minutes per candidate to a 90-second automated process.
What Problem Does This Solve?
Recruiting teams often start with SaaS interview platforms. These tools are easy to set up but rely on rigid scripts and basic keyword matching for scoring. If your rubric requires judging the context of an answer, a keyword-based system fails. A candidate who says "I'm not certified, but I have operated a forklift" might get incorrectly passed through a simple filter.
A regional logistics company trying to hire 50 warehouse staff from 400 applicants faces this problem. Using a SaaS tool at $5 per interview costs them $2,000 for the initial screen. Worse, the recruiter still has to listen to hours of audio because the keyword search for "forklift certified" misses candidates who say "I operated a stand-up reach truck." The tool creates more review work, defeating the purpose.
Building on raw telecom APIs like Twilio is the next logical step, but those APIs only provide the phone-call plumbing. You are still responsible for building the state machine for conversation flow, integrating a separate transcription service, and then piping that text to an LLM for scoring. This quickly becomes a full-scale software project that internal teams lack the time or specific expertise to build.
How Does It Work?
We start with your existing interview script and scoring rubric. We define the 5-7 critical qualifying questions and map them to a data model for your ATS, typically Greenhouse or Lever. This ensures the data we generate fits directly into your existing workflow. We use the Claude API for its sophisticated instruction-following and ability to return structured JSON.
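As an illustration, the question-to-rubric mapping might be modeled like this in Python. The field names, weights, and the Greenhouse custom-field key are hypothetical, a sketch of the data model rather than any ATS's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class RubricCriterion:
    description: str   # e.g. "Names specific equipment operated"
    weight: float      # relative weight within the question's 1-10 score

@dataclass
class ScreeningQuestion:
    question_id: str
    text: str
    ats_field: str                 # custom field key in Greenhouse/Lever
    criteria: list[RubricCriterion] = field(default_factory=list)

# Example screen for a warehouse role; content is illustrative.
FORKLIFT_SCREEN = [
    ScreeningQuestion(
        question_id="q1",
        text="Do you have experience operating a forklift or reach truck?",
        ats_field="custom_forklift_score",
        criteria=[
            RubricCriterion("Names specific equipment operated", 0.6),
            RubricCriterion("States certification status", 0.4),
        ],
    ),
]
```

Because each question carries its own ATS field key, the scoring output can be written back to the candidate profile without a separate mapping layer.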
We build a conversational state machine in Python that manages the interview flow. When a candidate enters the pre-screening stage in your ATS, a webhook triggers an AWS Lambda function. This function initiates a call, plays the questions, and sends the candidate's audio to a real-time transcription service. The transcribed text is immediately sent to a Claude API endpoint with a prompt engineered to score the response against your rubric.
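A stripped-down sketch of that Lambda entry point, with the telephony, transcription, and Claude calls replaced by a trivial stub so the control flow is runnable end to end. The event shape, question list, and helper names are illustrative assumptions, not the production implementation:

```python
import json

QUESTIONS = [
    "Are you certified to operate a forklift?",
    "Can you work weekend shifts?",
]

def score_answer(question: str, transcript: str) -> dict:
    # Placeholder for the Claude API scoring call; returns a fixed
    # score so the handler below can run without network access.
    return {"question": question, "score": 5, "summary": transcript[:80]}

def lambda_handler(event, context=None):
    """Entry point invoked by the ATS webhook (payload shape assumed)."""
    candidate = json.loads(event["body"])
    results = []
    for q in QUESTIONS:
        # In production: play `q` over the live call, stream the audio
        # to the transcription service, then score the returned text.
        transcript = candidate.get("answers", {}).get(q, "")
        results.append(score_answer(q, transcript))
    return {"statusCode": 200, "body": json.dumps(results)}
```

The real handler also persists each turn to Supabase before returning, so a dropped call leaves a partial record rather than nothing.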
The Claude API returns a structured JSON object containing a score from 1-10 for each question, a confidence rating, and a two-sentence summary of the interview. The entire analysis, from the end of the call to the structured output, completes in under 8 seconds. A FastAPI service then writes this data to custom fields in the candidate's ATS profile. Your recruiters see a complete, scored summary without leaving their primary tool.
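The scorecard payload we prompt Claude to return is validated before it touches the ATS. The keys below are our own prompt convention, not an API default; the sample response is illustrative:

```python
import json

# Example of the structured payload the scoring prompt asks for.
RAW_RESPONSE = """{
  "scores": [{"question_id": "q1", "score": 8, "confidence": 0.9}],
  "summary": "Candidate has reach-truck experience. Not yet certified."
}"""

def parse_scorecard(raw: str) -> dict:
    """Parse and sanity-check the model's JSON before writing to the ATS."""
    data = json.loads(raw)
    for item in data["scores"]:
        if not 1 <= item["score"] <= 10:
            raise ValueError(f"score out of range: {item}")
    return data

card = parse_scorecard(RAW_RESPONSE)
```

Rejecting malformed or out-of-range output here means a bad model response is logged and retried instead of corrupting a candidate record.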
The entire system runs on serverless infrastructure. We use Supabase to store call logs and manage rubrics, which you can edit without a code change. All application logs are written as structured JSON using structlog, allowing for precise monitoring. If the API error rate exceeds 2% in an hour, we are automatically alerted. Total hosting costs for processing over 500 candidates a month are typically under $50.
What Are the Key Benefits?
Get Candidate Scores in 90 Seconds
The entire process, from call to a scored summary in your ATS, takes less than two minutes. Recruiters review qualified candidates, not raw applications.
Pay for a Build, Not Per Interview
A one-time fixed-price build and low monthly hosting. Avoids per-interview SaaS fees that penalize you for high applicant volume.
You Own the Interviewing Logic
The full Python source code is delivered to your GitHub. You can change questions, adjust scoring, or add new roles without vendor permission.
Monitored Performance, Not Black Box AI
We use Supabase and structured logging to track every call and score. You see exactly why a candidate passed or failed, with alerts for API errors.
Writes Directly to Your ATS
Natively integrates with Greenhouse, Lever, or other recruiting platforms via their APIs. No new dashboard for your team to learn.
What Does the Process Look Like?
Week 1: Scoping and Access
You provide your current interview script, scoring rubric, and read/write API credentials for your Applicant Tracking System. We finalize the question flow and data model.
Week 2: Core Agent Build
We build the Python application that handles call logic, transcription, and scoring via the Claude API. You receive a link to a test environment to try the agent.
Week 3: Integration and Deployment
We connect the voice agent to your ATS and deploy the system on AWS Lambda. You receive the full source code in your GitHub repository.
Weeks 4-6: Monitoring and Handoff
We monitor the first 100 live interviews, fine-tuning prompts as needed. At the end of the period, you receive a runbook for maintenance and a final handoff.
Frequently Asked Questions
- How is a project like this scoped and priced?
- Pricing is fixed based on the number of questions, the complexity of the scoring rubric, and the specific ATS integration. A typical system with 5-7 questions that integrates with Greenhouse takes 3 weeks. After the one-time build fee, hosting costs on AWS Lambda are usually under $50 per month. Book a discovery call at cal.com/syntora/discover for a detailed quote.
- What happens if a candidate hangs up or the call drops?
- The system is designed to be fault-tolerant. If a call drops, it is logged as 'incomplete' in Supabase and a note is added to the ATS. The recruiter is notified to follow up manually. We use retry logic for API calls to Claude and the ATS, so transient network issues do not cause failures. You never lose a candidate record.
- How is this different from using a platform like MyInterview?
- SaaS platforms like MyInterview offer a pre-built interface but use generic scoring models and charge per interview. Syntora builds a custom scoring agent based on your specific rubric. You own the code, the logic is transparent, and you pay a one-time build fee, making it far more cost-effective for screening hundreds of entry-level candidates per month.
- How accurate is the transcription and scoring?
- We use best-in-class speech-to-text APIs that have word error rates below 10% for clear audio. The scoring accuracy depends on the rubric's quality. We use the Claude API, which excels at nuanced understanding over simple keyword matching. For a well-defined rubric, we typically see over 95% agreement with human recruiter scores during testing.
- Can we change the interview questions ourselves later?
- Yes. The questions and scoring logic are stored in a Supabase table, not hard-coded. The handoff includes a runbook that explains how a non-technical user can edit the question text. Changing the fundamental scoring logic requires a few lines of Python, which is also documented. You have full control over the system.
- What is the experience like for the job candidate?
- The candidate receives a call from a local number. A clear, natural-sounding AI voice introduces itself and explains the pre-screening process. It asks one question at a time and waits for them to finish speaking before moving on. The entire call for 5-7 questions typically takes less than 5 minutes, which candidates prefer over scheduling a 15-minute call with a recruiter.
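The retry behavior mentioned in the fault-tolerance answer above can be sketched as a generic exponential-backoff wrapper; the attempt count and delays are illustrative defaults, not the production values:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.5):
    """Call `fn`, retrying transient failures with exponential backoff.
    Re-raises the last exception once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Wrapping both the Claude and ATS calls this way lets a brief network blip resolve itself instead of dropping a candidate record.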
Related Solutions
Ready to Automate Your Small Business Operations?
Book a call to discuss how we can implement AI automation for your small business.
Book a Call