Automate Candidate Pre-Screening with a Custom Voice AI
The best voice AI for candidate pre-screening is a custom system built on a high-quality speech-to-text model and a large language model such as the Claude API. Unlike generic SaaS tools, it asks role-specific questions and scores answers against a predefined rubric. Syntora applies its expertise in custom AI development to engineer bespoke systems that integrate seamlessly with existing ATS platforms and HR tech stacks, delivering structured candidate insights directly inside the tools recruiters already use.
The system's scope depends on the number of screening questions and the specific Applicant Tracking System (ATS) integration. A simple 5-question screen for a single role that pushes a score to Greenhouse is a standard build. A multi-role system with conditional logic and custom analytics requires more detailed discovery and architectural planning.
The Problem
What Problem Does This Solve?
Recruiting teams often start with SaaS interview platforms. These tools are easy to set up but rely on rigid scripts and basic keyword matching for scoring. If your rubric requires judging the context of an answer, a keyword-based system fails. A candidate who says "I'm not certified, but I have operated a forklift" might be incorrectly passed by a simple filter that matches on "certified."
A regional logistics company trying to hire 50 warehouse staff from 400 applicants faces this problem. Using a SaaS tool at $5 per interview costs them $2,000 for the initial screen. Worse, the recruiter still has to listen to hours of audio because the keyword search for "forklift certified" misses candidates who say "I operated a stand-up reach truck." The tool creates more review work, defeating the purpose.
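To make that failure mode concrete, here is a minimal sketch of a substring-based keyword screen. The candidate answers are invented; the point is that a context-blind filter both false-positives on negated phrasing and false-negatives on paraphrases.

```python
def contains(phrase: str, answer: str) -> bool:
    """Context-blind substring check -- the core of a keyword screen."""
    return phrase.lower() in answer.lower()

# False positive: the answer explicitly says "not certified",
# but a filter matching on "certified" still passes it.
negated = "I'm not certified, but I have operated a forklift."
assert contains("certified", negated)

# False negative: directly relevant experience, phrased differently,
# never matches the target keyword.
paraphrase = "I operated a stand-up reach truck on the night shift."
assert not contains("forklift certified", paraphrase)
```

An LLM scoring against a rubric handles both cases because it evaluates the meaning of the answer, not the presence of a string.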
Building on raw telecom APIs like Twilio is the next logical step, but those APIs provide only the phone-call plumbing. You are still responsible for building the state machine for conversation flow, integrating a separate transcription service, and piping that text to an LLM for scoring. This quickly becomes a full-scale software project that most internal teams lack the time or specialized expertise to build.
Our Approach
How Would Syntora Approach This?
Syntora's engagement for voice AI pre-screening begins with a detailed discovery phase to understand your existing interview scripts and scoring rubrics. We would work with your team to define five to seven critical qualifying questions, mapping them to a data model compatible with your Applicant Tracking System (ATS), such as Greenhouse or Lever, so the generated data integrates directly into your workflow.

Client input would involve providing current screening materials, access to ATS documentation and relevant API endpoints, and subject-matter expertise on desired candidate profiles. Deliverables would include a fully deployed, custom voice AI pre-screening system, comprehensive technical documentation, and basic operational training. Typical build timelines, including discovery, development, and integration, range from 6 to 12 weeks, depending on the depth of ATS integration and the complexity of conditional logic.
The core system would be a Python conversational state machine that manages the interview flow, leveraging the Claude API for its strong instruction-following and ability to return structured JSON. We've applied similar Claude-driven document processing pipelines in adjacent domains, such as financial document analysis, and the same robust pattern applies to candidate responses. When a candidate reaches the pre-screening stage in your ATS, an event or webhook would trigger an AWS Lambda function. This function would initiate the call, play the pre-defined questions, and stream the candidate's audio to a high-quality real-time transcription service. The transcribed text would then be sent to the Claude API with a custom-engineered prompt that scores the response against your specific rubric.
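The interview flow described above can be sketched as a minimal state machine. Everything here is illustrative: the class and method names are assumptions, not the production design, and in the real build the answer text would arrive from the streaming transcription service, with a separate scorer sending each answer to the Claude API.

```python
from dataclasses import dataclass, field


@dataclass
class ScreeningInterview:
    """Minimal scripted-interview state machine (illustrative names).

    In production, record_answer would receive text from a streaming
    transcription service, and a separate scorer would send each answer
    to the Claude API with a rubric prompt.
    """
    questions: list
    answers: list = field(default_factory=list)

    @property
    def done(self) -> bool:
        # The interview ends when every scripted question has an answer.
        return len(self.answers) >= len(self.questions)

    def next_question(self):
        return None if self.done else self.questions[len(self.answers)]

    def record_answer(self, transcript: str) -> None:
        if self.done:
            raise RuntimeError("interview already complete")
        self.answers.append(transcript)
```

The Lambda handler would drive this loop: play `next_question()` over the call, wait for the transcription, call `record_answer()`, and repeat until `done`.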
The Claude API would be configured to return a structured JSON object including a score (e.g., 1-10) for each question, a confidence rating, and a concise summary of the interview. A FastAPI service would then validate this output and securely write the parsed, scored data to custom fields on the candidate's ATS profile, so your recruiters see a complete, scored summary directly within their primary tools.
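A sketch of the validation step that would sit between the model's JSON output and the ATS write. The field names and ranges here (a 1-10 score, a 0-1 confidence) follow the example shape above but are assumptions, not a fixed schema; the real fields would be defined in discovery.

```python
import json

# Example schema only -- the real rubric fields would be defined in discovery.
REQUIRED_KEYS = {"question_id", "score", "confidence", "summary"}


def parse_score_payload(raw: str) -> list:
    """Validate model-generated JSON before writing ATS custom fields.

    LLM output is untrusted input: reject anything missing keys or
    carrying out-of-range values instead of pushing it downstream.
    """
    items = json.loads(raw)
    for item in items:
        missing = REQUIRED_KEYS - item.keys()
        if missing:
            raise ValueError(f"missing keys: {sorted(missing)}")
        if not 1 <= item["score"] <= 10:
            raise ValueError(f"score out of range: {item['score']}")
        if not 0.0 <= item["confidence"] <= 1.0:
            raise ValueError(f"confidence out of range: {item['confidence']}")
    return items
```

Rejecting malformed payloads here, before the ATS write, is what keeps a hallucinated score from silently contaminating a candidate's record.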
The proposed architecture would utilize serverless infrastructure for scalability and cost-efficiency. Supabase would serve as the data store for call logs and rubric management, allowing non-technical adjustments to screening criteria. For operational visibility, all application logs would be written as structured JSON using structlog, enabling precise monitoring and alerting when error rates or latency exceed defined thresholds. This serverless design also keeps the operational cost profile lean, making it a sustainable solution for high-volume recruitment.
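For illustration, this is what one-JSON-object-per-line logging looks like using only the standard library. The production build described above would use structlog, which provides this format (plus processor pipelines) out of the box; the stdlib version is shown so the sketch runs anywhere, and the field names are example assumptions.

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line.

    structlog provides this out of the box; stdlib is used here so
    the sketch runs without third-party dependencies.
    """

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "event": record.getMessage(),
            # Per-call context (candidate id, score, latency) attached
            # by the caller via `extra={"ctx": {...}}`.
            **getattr(record, "ctx", {}),
        }
        return json.dumps(payload)
```

Usage: attach the formatter to a handler, then log events like `logger.info("call_scored", extra={"ctx": {"candidate_id": "c_123", "score": 8}})`. Because every line is machine-parseable JSON, an alert rule can match on fields like `score` or `level` rather than grepping free text.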
Why It Matters
Key Benefits
Get Candidate Scores in 90 Seconds
The entire process, from dialing the call to a scored summary in your ATS, takes about 90 seconds. Recruiters review qualified candidates, not raw applications.
Pay for a Build, Not Per Interview
A one-time fixed-price build and low monthly hosting. Avoids per-interview SaaS fees that penalize you for high applicant volume.
You Own the Interviewing Logic
The full Python source code is delivered to your GitHub. You can change questions, adjust scoring, or add new roles without vendor permission.
Monitored Performance, Not Black Box AI
We use Supabase and structured logging to track every call and score. You see exactly why a candidate passed or failed, with alerts for API errors.
Writes Directly to Your ATS
Natively integrates with Greenhouse, Lever, or other recruiting platforms via their APIs. No new dashboard for your team to learn.
How We Deliver
The Process
Week 1: Scoping and Access
You provide your current interview script, scoring rubric, and read/write API credentials for your Applicant Tracking System. We finalize the question flow and data model.
Week 2: Core Agent Build
We build the Python application that handles call logic, transcription, and scoring via the Claude API. You receive a link to a test environment to try the agent.
Week 3: Integration and Deployment
We connect the voice agent to your ATS and deploy the system on AWS Lambda. You receive the full source code in your GitHub repository.
Weeks 4-6: Monitoring and Handoff
We monitor the first 100 live interviews, fine-tuning prompts as needed. At the end of the period, you receive a runbook for maintenance and a final handoff.
Keep Exploring
Related Solutions
The Syntora Advantage
Not all AI partners are built the same.
Other Agencies
Assessment phase is often skipped or abbreviated
Syntora
We assess your business before we build anything
Other Agencies
Typically built on shared, third-party platforms
Syntora
Fully private systems. Your data never leaves your environment
Other Agencies
May require new software purchases or migrations
Syntora
Zero disruption to your existing tools and workflows
Other Agencies
Training and ongoing support are usually extra
Syntora
Full training included. Your team hits the ground running from day one
Other Agencies
Code and data often stay on the vendor's platform
Syntora
You own everything we build. The systems, the data, all of it. No lock-in
Get Started
Ready to Automate Your Technology Operations?
Book a call to discuss how we can implement AI automation for your technology business.
FAQ
