Build a Custom Voice AI for Pre-Screening Candidates
Effective voice AI for pre-screening entry-level job candidates means a custom system built on a large language model: one that asks every candidate the same questions, then scores each response for relevance and communication skill, automating the first round of evaluation.
Syntora engineers these systems by defining detailed scoring rubrics, applying them consistently with a large language model, and integrating the results into your existing Applicant Tracking System to streamline candidate evaluation.
The complexity of such a system depends on the number of screening questions, the specific Applicant Tracking System (ATS) integration requirements, and the depth of conditional logic needed. For instance, a basic three-question screener writing to a standard ATS like Greenhouse would be a more contained engineering project compared to a seven-question screener with complex conditional logic and custom data mapping.
The Problem
What Problem Does This Solve?
Recruiting teams often try off-the-shelf video interview platforms like Spark Hire or MyInterview. These tools are expensive, charging per user or per job posting, and their scoring is generic. The AI gives a vague "strong communicator" score but cannot tell you if a candidate mentioned specific technical skills or followed a STAR-method response format, forcing you to watch the video anyway.
A more technical team might try to connect an audio transcription service like AssemblyAI to an LLM API using a low-code platform. This fails at the logic step. The workflow needs to transcribe audio, send the text to an API with a complex rubric, parse the JSON response, handle API errors with retries, and write structured data back to an ATS. This requires multiple conditional paths and error-handling branches that are brittle and expensive to run on a per-task basis.
For example, a logistics firm screening 150 dispatchers needs to check for experience with specific routing software. A generic tool can't do this. A low-code workflow attempting this becomes a tangled mess of duplicate steps, timing out on large audio files and offering no way to debug a bad score from the LLM.
Our Approach
How Would Syntora Approach This?
Syntora would approach building a voice AI pre-screening system as a focused engineering engagement, starting with a discovery phase to define your specific requirements. The first step involves collaborating to define a detailed scoring rubric based on your essential screening questions. We would translate your ideal candidate answer criteria into a structured prompt for the Claude API, specifying the exact skills, keywords, and response structures for evaluation. This rubric would be designed as a configurable JSON file, allowing your team to update scoring parameters without developer intervention.
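To make the "configurable JSON file" idea concrete, here is a minimal sketch of what such a rubric might look like, with a validation check a build like this would typically include. The question IDs, criterion names, and weights are purely illustrative, not a fixed schema.

```python
import json

# Hypothetical rubric shape; stored on disk as JSON so a recruiting team
# can edit criteria and weights without touching the service code.
RUBRIC = {
    "questions": [
        {
            "id": "q1",
            "prompt": "Describe a time you resolved a scheduling conflict.",
            "criteria": [
                {"name": "uses_star_format", "weight": 0.4,
                 "description": "Answer follows Situation-Task-Action-Result."},
                {"name": "mentions_routing_software", "weight": 0.6,
                 "description": "Names a specific dispatch or routing tool."},
            ],
        }
    ]
}

def validate_rubric(rubric: dict) -> None:
    """Reject rubrics whose criterion weights do not sum to 1 per question."""
    for question in rubric["questions"]:
        total = sum(c["weight"] for c in question["criteria"])
        if abs(total - 1.0) > 1e-9:
            raise ValueError(
                f"Weights for {question['id']} sum to {total}, expected 1.0"
            )

if __name__ == "__main__":
    validate_rubric(RUBRIC)
    # Round-trip through JSON to confirm the file format loads as-is.
    assert json.loads(json.dumps(RUBRIC)) == RUBRIC
```

Validating on load means a typo in a weight fails loudly at deploy time rather than silently skewing candidate scores.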
The core of the system would be a Python service, likely using FastAPI, exposing an endpoint that receives candidate audio files. The service would send the audio to AWS Transcribe, which typically returns transcripts with a word error rate under 10%. The transcript and the scoring rubric would then be passed to the Claude API, which can return a structured JSON object containing a score and a short rationale for each criterion, usually within a few seconds. We have built similar document-processing pipelines on the Claude API for financial documents, and the same pattern applies to evaluating voice transcripts.
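The transcript-to-score step above reduces to two small functions: assembling the prompt from the rubric, and parsing the model's JSON reply. The sketch below shows that logic in isolation; the actual API call (via the Anthropic SDK) and the rubric shape are assumptions, and the function names are illustrative.

```python
import json

def build_scoring_prompt(transcript: str, rubric: dict) -> str:
    """Assemble the scoring instruction sent to the LLM from a rubric dict."""
    criteria_lines = "\n".join(
        f"- {c['name']}: {c['description']} (weight {c['weight']})"
        for q in rubric["questions"] for c in q["criteria"]
    )
    return (
        "Score the candidate transcript against each criterion from 0 to 5.\n"
        f"Criteria:\n{criteria_lines}\n"
        "Respond with JSON only: "
        '{"scores": {"<criterion>": <0-5>}, "rationale": "<one sentence>"}\n'
        f"Transcript:\n{transcript}"
    )

def parse_score_response(raw: str) -> dict:
    """Parse the model's JSON reply; raise so the caller can flag manual review."""
    result = json.loads(raw)
    if "scores" not in result or "rationale" not in result:
        raise ValueError("response missing required fields")
    return result
```

Keeping prompt assembly and response parsing as pure functions makes the scoring step unit-testable without any network calls, which is how a bad score can be debugged against a stored transcript.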
The entire system would be deployed on AWS Lambda, an environment well-suited for cost-effective, event-driven workloads. Candidates could record their answers through a simple web interface or via a Twilio-provisioned number. The engineered system would write the score, rationale, and full transcript to a custom note in your ATS. Syntora would develop custom integrations for specific ATS platforms such as Greenhouse and Lever, ensuring data flows correctly into your existing workflows.
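The event-driven flow on Lambda can be sketched as a single handler: transcribe, score, write to the ATS, and fall back to a manual-review flag on any failure. Everything here is an assumption for illustration, including the event shape and the injected step functions (the real versions would call AWS Transcribe, the Claude API, and the ATS REST API).

```python
# Sketch of a Lambda-style handler. Dependencies are injected so the
# control flow stays testable without AWS credentials; names and the
# event shape ({"audio_key", "candidate_id"}) are hypothetical.
def make_handler(transcribe, score, write_to_ats):
    def handler(event, context=None):
        audio_key = event["audio_key"]
        candidate_id = event["candidate_id"]
        try:
            transcript = transcribe(audio_key)
            result = score(transcript)
        except Exception:
            # On any pipeline failure, flag the candidate for manual
            # review instead of silently dropping them.
            write_to_ats(candidate_id, {"status": "manual_review"})
            raise
        note = {
            "status": "scored",
            "scores": result["scores"],
            "rationale": result["rationale"],
            "transcript": transcript,
        }
        write_to_ats(candidate_id, note)
        return note
    return handler
```

The failure branch is what backs the "flagged for manual review" behavior described later: an error anywhere in the pipeline still leaves a visible record in the ATS.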
Typical AWS and Claude API costs for processing around 500 candidates per month are projected to be under $50. The system would use `structlog` for structured, queryable logs and CloudWatch alarms for operational visibility, for example a Slack notification whenever the API error rate exceeds 1% over a five-minute window. A typical timeline for this level of complexity is 4-6 weeks end to end, including integration, testing, and post-launch monitoring. Your team would need to provide access to your ATS API documentation and internal experts for rubric definition. Deliverables include the deployed system, source code, and comprehensive documentation.
Why It Matters
Key Benefits
Get Candidate Scores in 90 Seconds
The system processes a recording, transcribes it, scores it against your rubric, and updates your ATS in less time than it takes to load a video.
Own the System, No Per-Seat Fees
This is a one-time build. You pay only for a flat monthly maintenance plan and pennies per candidate in cloud usage, not a recurring SaaS subscription.
Full Source Code in Your GitHub
You receive the complete Python source code and deployment scripts. The system is yours to modify or extend if you bring engineering in-house later.
Alerts When a Score Fails
We configure monitoring in AWS CloudWatch. If the transcription or scoring API fails, the system automatically flags the candidate for manual review.
Writes Directly Into Your ATS
Scores, transcripts, and scoring rationales appear as native notes in your existing ATS. We build integrations for platforms like Greenhouse and Lever.
How We Deliver
The Process
Scoping and Rubric Design (Week 1)
You provide your top screening questions and ideal answer profiles. We deliver a detailed scoring rubric as a JSON file for your approval.
Core System Build (Week 2)
We build the audio processing pipeline using FastAPI and the Claude API. You receive a link to a staging environment to test with sample recordings.
ATS Integration and Deployment (Week 3)
We connect the scoring service to your Applicant Tracking System. You receive a production-ready system and a secure credentials handoff document.
Monitoring and Handoff (Weeks 4-6)
We monitor system performance and scoring accuracy for two weeks post-launch. You receive a final runbook with API documentation and rubric update instructions.
Keep Exploring
Related Solutions
The Syntora Advantage
Not all AI partners are built the same.
Other Agencies
Assessment phase is often skipped or abbreviated
Syntora
We assess your business before we build anything
Other Agencies
Typically built on shared, third-party platforms
Syntora
Fully private systems. Your data never leaves your environment
Other Agencies
May require new software purchases or migrations
Syntora
Zero disruption to your existing tools and workflows
Other Agencies
Training and ongoing support are usually extra
Syntora
Full training included. Your team hits the ground running from day one
Other Agencies
Code and data often stay on the vendor's platform
Syntora
You own everything we build. The systems, the data, all of it. No lock-in
Get Started
Ready to Automate Your Candidate Pre-Screening?
Book a call to discuss how we can implement AI-powered pre-screening for your hiring workflow.
FAQ
