Build a Custom Voice AI for Automated Reference Checks

The best voice AI reference checking solution is a custom system using a high-quality transcription model and a large language model like Claude. It calls references, asks structured questions, analyzes responses for key traits, and generates a summary report.

By Parker Gawne, Founder at Syntora | Updated Mar 5, 2026

Syntora develops custom voice AI solutions for automated reference checking, integrating with a client's existing ATS. Our engineering engagements focus on designing a tailored system that standardizes the reference process using advanced AI models for feedback analysis.

Organizations often spend significant time on manual reference calls, leading to inconsistencies and delays in the hiring process. A custom voice AI system can automate this, allowing recruiting teams to focus on candidate engagement rather than administrative tasks.

The complexity of such a system would depend on factors like your existing Applicant Tracking System (ATS) integration requirements and the depth of analysis needed. For example, a typical build involves connecting to a modern ATS like Lever or Greenhouse, executing a set of 8-10 targeted questions, and scoring candidates on core competencies. More advanced systems could include support for multiple languages or specialized sentiment analysis models. Syntora focuses on delivering tailored engineering engagements, designing a system that meets your specific operational needs and integrates with your current workflows.

The Problem

What Problem Does This Solve?

Recruiting teams often start with email-based reference tools like Checkster. These are essentially automated surveys. The problem is that references provide brief, low-effort written answers, and response rates are often below 50%. There is no way to ask a follow-up question or gauge the tone and conviction behind an answer, which is where the real signal is.

A more significant issue is the manual process itself. A recruiter at a small firm trying to fill 15 roles a quarter must conduct over 100 individual reference calls. Each 20-minute call is preceded by 10 minutes of email tag for scheduling. This consumes more than 50 hours per quarter, time that could be spent sourcing new candidates. The process is impossible to scale without hiring more recruiters.

Some firms try to solve this with virtual assistants, but that introduces new problems of consistency and quality control. One VA may be a great interviewer while another just reads the script. This introduces significant bias and noise into the hiring process. The data collected is unstructured and difficult to compare across candidates, making the entire effort subjective.

Our Approach

How Would Syntora Approach This?

Syntora would begin an engagement with a discovery phase, auditing your existing ATS and specific hiring workflows. This would define the integration points and the requirements for question sets, which would be stored in a Supabase database so that questions can be tailored per job role.
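
As a minimal sketch of the per-role question model described above, the rows below are held in memory for illustration; in the real build they would live in a Supabase table, and the field and role names here are assumptions, not the production schema.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    competency: str          # e.g. "communication", "reliability"
    follow_up_allowed: bool = True

@dataclass
class QuestionSet:
    role: str
    questions: list[Question] = field(default_factory=list)

# In production these rows would be fetched from Supabase keyed by role.
QUESTION_SETS = {
    "default": QuestionSet("default", [
        Question("In what capacity did you work with the candidate?", "context", False),
        Question("How would you describe their reliability on deadlines?", "reliability"),
    ]),
    "account-executive": QuestionSet("account-executive", [
        Question("How did the candidate handle objections from prospects?", "sales-skill"),
        Question("Describe a deal they closed under pressure.", "resilience"),
    ]),
}

def questions_for_role(role: str) -> QuestionSet:
    """Return the tailored set for a role, falling back to the default set."""
    return QUESTION_SETS.get(role, QUESTION_SETS["default"])
```

The fallback to a default set means a new job role can be piloted before anyone writes role-specific questions for it.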

The core of the system would be a Python application built with FastAPI. It would use Twilio to place outbound calls to references. For real-time speech-to-text transcription, Syntora would integrate Deepgram, chosen for its accuracy on telephone audio. The conversational flow would be managed by the Claude 3 Sonnet API, which processes the live transcript and dynamically formulates follow-up questions to maintain a natural dialogue.
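
To make the conversational loop concrete, here is a sketch of how each turn's request to the model could be assembled. The payload shape follows the Anthropic Messages API; the system prompt wording and the turn-management convention are this sketch's assumptions, not a fixed design.

```python
SYSTEM_PROMPT = (
    "You are conducting a reference check by phone. Ask the scripted question, "
    "then at most one short follow-up if the answer is vague. Stay neutral."
)

def build_turn_request(scripted_question: str, transcript: list[dict]) -> dict:
    """Assemble the request body for the next conversational turn.

    `transcript` is the running list of {"role", "content"} messages from the
    call so far; the model decides whether a follow-up is warranted before
    moving to the next scripted question.
    """
    return {
        "model": "claude-3-sonnet-20240229",
        "max_tokens": 300,
        "system": SYSTEM_PROMPT,
        "messages": transcript + [{
            "role": "user",
            "content": (
                "Next scripted question: " + scripted_question +
                "\nDecide whether a follow-up to the previous answer is "
                "needed first; reply with exactly what the voice agent "
                "should say."
            ),
        }],
    }
```

Keeping the request builder as a pure function makes the dialogue logic unit-testable without placing a call or hitting the API.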

Upon call completion, the full transcript would be routed to the Claude 3 Opus API for detailed analysis. This model would extract key information, score the reference's feedback against predefined competencies from the job description, and generate a concise summary. The analysis would produce a structured JSON object and a PDF summary report. Syntora has experience building document processing pipelines with the Claude API for sensitive financial documents, and the same pattern applies to synthesizing feedback from reference calls.
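
Because model output is written back into the ATS, it should be validated before use. The sketch below checks the structured JSON against an assumed schema (the `scores` and `summary` field names and the 1-5 scale are illustrative choices, not a fixed contract):

```python
import json

# Competencies would be derived from the job description for each role.
COMPETENCIES = ["communication", "reliability", "leadership"]

def parse_analysis(raw: str) -> dict:
    """Validate the model's JSON output before it reaches the ATS or the PDF.

    Raises ValueError on any missing or out-of-range score so a bad response
    is retried rather than silently stored.
    """
    data = json.loads(raw)
    scores = data.get("scores", {})
    for comp in COMPETENCIES:
        value = scores.get(comp)
        if not isinstance(value, (int, float)) or not 1 <= value <= 5:
            raise ValueError(f"missing or out-of-range score for {comp!r}")
    if not data.get("summary"):
        raise ValueError("summary is empty")
    return {"scores": scores, "summary": data["summary"].strip()}
```

Failing loudly here is deliberate: a rejected response can be re-requested from the model, whereas a malformed score that lands in a candidate's profile is much harder to catch later.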

The entire service would be designed for efficient deployment, often using serverless architectures like AWS Lambda. This approach ensures the system scales with demand and optimizes operational costs. The final PDF report and structured data would be pushed back into the candidate's profile within your ATS via its API, and notifications could be configured for recruiting teams, such as via Slack, to alert them when results are available. A typical build could be delivered within 8-12 weeks, depending on the depth of ATS integration and the analysis requirements. The client would provide access to their ATS APIs, define question sets, and participate in iterative feedback sessions.
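
A minimal sketch of the serverless entry point described above, showing how a call-completion webhook could be routed; the event field names (`status`, `transcript`) are assumptions for illustration, and the analysis and ATS push are stubbed out as comments:

```python
import json

def lambda_handler(event, context=None):
    """AWS Lambda entry point for call-completion webhooks."""
    body = json.loads(event.get("body", "{}"))
    if body.get("status") != "completed":
        # Dropped or unanswered calls are re-queued for a retry, not analyzed.
        return {"statusCode": 202, "body": json.dumps({"action": "requeued"})}
    transcript = body.get("transcript", "")
    if not transcript:
        return {"statusCode": 422, "body": json.dumps({"error": "empty transcript"})}
    # In production: send the transcript to Claude 3 Opus for analysis,
    # render the PDF report, push both to the ATS, and notify Slack.
    return {"statusCode": 200, "body": json.dumps({"action": "analysis_queued"})}
```

Handling the dropped-call and empty-transcript branches at the entry point keeps retries cheap: the expensive analysis step only ever runs on a complete transcript.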

Why It Matters

Key Benefits

01

From 30-Minute Calls to 3-Minute Reports

The system completes three reference checks in the time it takes a recruiter to schedule one call. Final reports are in your ATS in under 180 seconds.

02

Pay Once, Not Per-Reference

A one-time fixed-price build with minimal monthly API fees. No per-seat license or per-reference charge that penalizes you for growing your pipeline.

03

You Own the System and the Code

You receive the complete Python source code in your company's GitHub repository. There is no vendor lock-in. Extend or modify it with any developer.

04

Consistent Questions, Unbiased Analysis

Every reference gets the exact same questions in the same neutral tone. The Claude API applies the same scoring rubric to every transcript, keeping recruiter bias out of the summary.

05

Works Inside Your Existing ATS

We integrate directly with platforms like Lever, Greenhouse, or Ashby. Recruiters trigger checks and see reports without leaving their primary tool.

How We Deliver

The Process

01

Scoping & ATS Integration (Week 1)

You provide read-only access to your ATS and the job descriptions for 2-3 key roles. We map the data flow and build the API connection.

02

Voice Agent Build (Week 2)

We build the core Python application using Twilio and the Claude API. You receive a dedicated test number to try the voice agent yourself.

03

Reporting & Deployment (Week 3)

We build the PDF report generation and push data back to your ATS. The system is deployed to AWS Lambda and we run 10 live test cases.

04

Handoff & Monitoring (Week 4+)

We monitor system performance for 30 days post-launch. You receive the full source code, deployment scripts, and a runbook for managing the system.

The Syntora Advantage

Not all AI partners are built the same.

AI Audit First

Other Agencies

Assessment phase is often skipped or abbreviated

Syntora

We assess your business before we build anything

Private AI

Other Agencies

Typically built on shared, third-party platforms

Syntora

Fully private systems. Your data never leaves your environment

Your Tools

Other Agencies

May require new software purchases or migrations

Syntora

Zero disruption to your existing tools and workflows

Team Training

Other Agencies

Training and ongoing support are usually extra

Syntora

Full training included. Your team hits the ground running from day one

Ownership

Other Agencies

Code and data often stay on the vendor's platform

Syntora

You own everything we build. The systems, the data, all of it. No lock-in

Get Started

Ready to Automate Your Technology Operations?

Book a call to discuss how we can implement AI automation for your technology business.

FAQ

Everything You're Thinking. Answered.

01

What factors determine the cost and timeline for this build?

02

What happens if a reference hangs up or the call drops?

03

How is this better than an off-the-shelf tool like Checkster?

04

What about consent and legal compliance for recording calls?

05

Can we customize the questions for different roles?

06

Can we change the AI's voice or accent?