AI Automation/Technology

Custom Voice AI for Screening Entry-Level Applicants

Off-the-shelf platforms like MyInterview or HireVue work well for standard, high-volume roles. Custom-built systems are better for roles requiring nuanced technical or cultural evaluation.

By Parker Gawne, Founder at Syntora | Updated Mar 5, 2026

Syntora offers bespoke voice AI solutions for screening job applicants, focusing on custom systems for nuanced technical or cultural evaluations. Our engineering engagements build robust architectures using technologies like Claude API and FastAPI to create intelligent, scalable screening pipelines tailored to your unique criteria.

A pre-built platform gets you started in hours but uses generic scoring models that assess tone and confidence. A custom system is tuned to your specific job descriptions and success criteria, scoring candidates on the substance of their answers, not just their delivery.

Syntora designs and engineers bespoke voice AI screening systems tailored to your specific hiring needs. We focus on understanding your unique job requirements and evaluation criteria to build a solution that accurately assesses candidate fit. An engagement with Syntora typically involves an initial discovery phase to define scope, followed by system design, development, and integration. Engagements typically run 6 to 10 weeks end to end, with the core system usually live within the first 3 weeks and the remaining time spent on monitoring and tuning, depending on the number of roles and depth of integration required.

The Problem

What Problem Does This Solve?

Recruiting teams often start with automated interview platforms like Spark Hire or Willo. These tools are great for replacing one-way video screens but fail at accurately assessing technical roles. Their scoring models are black boxes, often grading for generic traits like 'clarity', which have little correlation with a developer's ability to write Python code.

A specialized recruiting firm tried using one of these platforms to screen for 10 different engineering roles. The platform could not differentiate a good answer about AWS services from a bad one. Recruiters had to listen to every 15-minute recording to manually score technical competence, which completely defeated the purpose of automation. They were paying $120 per recruiter per month for a glorified audio file host.

Furthermore, these platforms offer limited integrations. They might push a candidate's overall score to a major ATS like Greenhouse, but they cannot sync detailed, per-question feedback to a custom field or connect to an industry-specific recruiting system. This forces a workflow of manual copy-pasting that introduces errors and wastes time.

Our Approach

How Would Syntora Approach This?

Syntora's approach would begin with a discovery phase to meticulously map your existing screening questions and scoring rubric. This translates into a set of precise prompts for the Claude API, engineered to reflect your specific evaluation criteria for various roles.
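As a simplified illustration of that rubric-to-prompt mapping, the sketch below renders a per-role rubric into a scoring prompt. The role name, criterion names, and prompt wording are placeholders for demonstration, not a real client rubric.

```python
# Illustrative sketch: turning a per-role scoring rubric into an LLM prompt.
# The role key and criteria below are assumptions, not an actual rubric.

RUBRIC = {
    "junior-python-developer": [
        ("technical_correctness", "Are the stated technical facts accurate?"),
        ("problem_solving", "Does the candidate reason step by step?"),
        ("cultural_fit", "Does the answer reflect collaboration and curiosity?"),
    ],
}

def build_scoring_prompt(role: str, transcript: str) -> str:
    """Render the rubric for one role into a scoring prompt."""
    criteria = "\n".join(
        f"- {name}: {question} (score 1-10)" for name, question in RUBRIC[role]
    )
    return (
        f"Score this screening answer for the role '{role}' against the rubric "
        f"below. Return JSON with one integer score per criterion, a summary, "
        f"and supporting quotes.\n\nRubric:\n{criteria}\n\nTranscript:\n{transcript}"
    )
```

Keeping the rubric as data rather than hard-coding it into prompt text is what lets new roles be added without touching the pipeline code.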

The technical architecture would involve a Python and FastAPI endpoint, designed to accept audio files submitted from your careers page or application form. This endpoint would be deployed as a serverless function on AWS Lambda, allowing for scalable handling of unpredictable applicant volume without incurring continuous server costs.

Upon receiving an audio file, the Lambda function would first route it to a transcription API, such as Deepgram, chosen for its robust performance and high accuracy with the technical jargon common in answers for specialized roles like junior developer positions. In other pipelines we have built, such as Claude-based financial document processing, comparable transcription services consistently achieve high word accuracy, and the same pattern holds for technical screening. A typical 3-minute audio response can usually be transcribed in under 4 seconds.
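A sketch of that transcription step is below, assuming Deepgram's pre-recorded audio REST endpoint (`/v1/listen`). The response parsing follows Deepgram's documented shape but should be verified against the current API version; the model parameter is an assumption.

```python
# Sketch of the transcription step against Deepgram's REST API.
# Endpoint, parameters, and response shape should be re-checked
# against the current Deepgram documentation before use.
import requests

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"

def parse_transcription(payload: dict) -> tuple[str, float]:
    """Extract (transcript, confidence) from a Deepgram-style response."""
    alt = payload["results"]["channels"][0]["alternatives"][0]
    return alt["transcript"], alt["confidence"]

def transcribe(audio_bytes: bytes, api_key: str) -> tuple[str, float]:
    """Send one recorded answer for transcription."""
    resp = requests.post(
        DEEPGRAM_URL,
        params={"model": "nova-2", "punctuate": "true"},
        headers={"Authorization": f"Token {api_key}",
                 "Content-Type": "audio/wav"},
        data=audio_bytes,
        timeout=30,
    )
    resp.raise_for_status()
    return parse_transcription(resp.json())
```

Returning the confidence score alongside the transcript is deliberate: it feeds the quality monitoring described later.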

The generated transcript is then sent to the Claude 3 Sonnet API. The custom-designed prompt would instruct the model to score the candidate's answers against the defined rubric, providing ratings from 1 to 10 across criteria such as technical correctness, problem-solving approach, and cultural fit. The prompt instructs Claude to return a structured JSON object, including scores, a summary, and direct quotes supporting the evaluation, typically within about 9 seconds.
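The scoring call could look like the sketch below, using the Anthropic Python SDK. The model string, system prompt, and expected `scores` JSON shape are assumptions for illustration; the real prompt would encode the rubric agreed in discovery. A small validator guards against malformed model output before anything is written downstream.

```python
# Sketch of the LLM scoring step. Model name, prompts, and the expected
# JSON shape ({"scores": {...}, "summary": ..., "quotes": [...]}) are
# illustrative assumptions, not the production configuration.
import json

def validate_scores(result: dict, criteria: list[str]) -> bool:
    """Check that every rubric criterion got an integer score from 1 to 10."""
    scores = result.get("scores", {})
    return all(
        isinstance(scores.get(c), int) and 1 <= scores[c] <= 10 for c in criteria
    )

def score_transcript(transcript: str, rubric_prompt: str) -> dict:
    """Ask Claude to grade one transcript against the rubric."""
    import anthropic  # imported here so the validator stays dependency-free

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        system="You are a structured interview scorer. Reply with JSON only.",
        messages=[{"role": "user",
                   "content": f"{rubric_prompt}\n\nTranscript:\n{transcript}"}],
    )
    return json.loads(message.content[0].text)
```

Any response that fails validation would be flagged for human review rather than silently written to the ATS.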

Finally, this structured JSON output would be stored in a Supabase database for historical analysis and reporting. Concurrently, an asynchronous function would make an API call to your applicant tracking system (ATS) to populate custom fields with the per-question scores and summary. The client would need to provide access to their ATS API and define the specific fields for integration.

The delivered system would provide the core logic for voice-based applicant screening, with clear APIs for integration into existing career portals and ATS systems. An engagement would include architectural design, API development, cloud deployment, and comprehensive documentation, enabling your team to maintain and extend the system.

Typical end-to-end processing for a 3-minute audio file, from submission to ATS update, can be expected to complete within 15 seconds, with hosting costs for the serverless components often under $50 per month for hundreds of applicants, depending on actual usage.
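The persistence and ATS-sync step could be sketched as below. The table name, Supabase PostgREST path, and ATS endpoint are placeholders; the real connector depends on the client's ATS API.

```python
# Sketch of the storage + ATS-sync step. The "screening_results" table,
# the Supabase REST path, and the ATS candidate endpoint are placeholders.
import requests

def build_records(result: dict, candidate_id: str) -> tuple[dict, dict]:
    """Split the model output into a database row and an ATS payload."""
    row = {"candidate_id": candidate_id,
           "summary": result["summary"],
           **{f"score_{k}": v for k, v in result["scores"].items()}}
    ats_payload = {"custom_fields": dict(result["scores"])}
    return row, ats_payload

def store_and_sync(result: dict, candidate_id: str, *,
                   supabase_url: str, supabase_key: str,
                   ats_url: str, ats_token: str) -> None:
    row, ats_payload = build_records(result, candidate_id)
    # 1. Persist the row via Supabase's PostgREST interface for reporting.
    requests.post(
        f"{supabase_url}/rest/v1/screening_results",
        json=row,
        headers={"apikey": supabase_key,
                 "Authorization": f"Bearer {supabase_key}"},
        timeout=15,
    ).raise_for_status()
    # 2. Push the per-question scores into the ATS custom fields.
    requests.patch(
        f"{ats_url}/candidates/{candidate_id}",
        json=ats_payload,
        headers={"Authorization": f"Bearer {ats_token}"},
        timeout=15,
    ).raise_for_status()
```

Keeping `build_records` separate from the network calls makes the field mapping easy to unit-test and to adjust when a client renames an ATS custom field.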

Why It Matters

Key Benefits

01

Go Live in 3 Weeks, Not 3 Quarters

From kickoff to a production-ready system integrated with your ATS in 15 business days. Begin screening candidates automatically next month.

02

Pay Per Candidate, Not Per Recruiter Seat

A one-time build fee and low per-use API costs. Your expenses scale with applicant volume, not how many recruiters are on your team.

03

You Own the Code and the Scoring Model

The complete Python source code and all prompts are delivered to your company's GitHub repository. You have full control and no vendor lock-in.

04

Alerts When Transcription Quality Drops

We build monitoring that sends a Slack message if transcription confidence falls below 90%, flagging the recording for immediate human review.

05

Connects to Any ATS with an API

We write custom API connectors for your specific system, whether it's a major platform like Lever or an in-house recruiting database.
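The transcription-quality alert described in benefit 04 could be sketched as below, using a Slack incoming webhook. The webhook URL and message format are placeholders; the 90% threshold matches the one stated above.

```python
# Sketch of the quality gate: alert Slack when mean transcription
# confidence drops below the threshold. The webhook URL is a placeholder.
import requests

CONFIDENCE_THRESHOLD = 0.90

def check_transcription_quality(confidence: float, recording_url: str,
                                slack_webhook: str) -> bool:
    """Return True if the recording needs human review (and alert Slack)."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return False
    requests.post(
        slack_webhook,
        json={"text": (f"Transcription confidence {confidence:.0%} is below "
                       f"{CONFIDENCE_THRESHOLD:.0%}; please review "
                       f"{recording_url}")},
        timeout=10,
    )
    return True
```

Low-confidence recordings are routed to a recruiter rather than scored, so accents or poor audio never silently penalize a candidate.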

How We Deliver

The Process

01

Week 1: Rubric and Architecture Design

You provide 3 target job descriptions and current screening questions. We deliver a finalized scoring rubric and a complete system architecture diagram.

02

Week 2: Core Pipeline Construction

We build the audio processing pipeline with FastAPI on AWS Lambda and integrate the transcription and analysis APIs. You receive a staging URL for testing.

03

Week 3: ATS Integration and Deployment

We write the custom connector to your Applicant Tracking System and deploy the full system to production. You receive API keys and documentation.

04

Weeks 4-8: Monitoring and Handoff

We monitor the first 200 live screenings, tune prompts for accuracy, and document maintenance procedures. You receive a final runbook.

The Syntora Advantage

Not all AI partners are built the same.

AI Audit First
Other Agencies: Assessment phase is often skipped or abbreviated
Syntora: We assess your business before we build anything

Private AI
Other Agencies: Typically built on shared, third-party platforms
Syntora: Fully private systems. Your data never leaves your environment

Your Tools
Other Agencies: May require new software purchases or migrations
Syntora: Zero disruption to your existing tools and workflows

Team Training
Other Agencies: Training and ongoing support are usually extra
Syntora: Full training included. Your team hits the ground running from day one

Ownership
Other Agencies: Code and data often stay on the vendor's platform
Syntora: You own everything we build. The systems, the data, all of it. No lock-in

Get Started

Ready to Automate Your Technology Operations?

Book a call to discuss how we can implement AI automation for your technology business.

FAQ

Everything You're Thinking. Answered.

01

How much does a custom voice screening system cost?

02

What happens if an applicant has a heavy accent or bad audio?

03

How is this different from a platform like MyInterview?

04

How is candidate data privacy handled?

05

Can we change the screening questions ourselves after the build?

06

How do you ensure the AI scoring is fair and unbiased?