Build a Custom Voice AI for Automated Reference Checks
A custom voice AI reference checking system built around a large language model automates the outbound phone calls, transcribes responses, and flags risks against your specific criteria. Syntora designs and builds these systems using LLM APIs and cloud services, delivered as an engineering engagement rather than a subscription product.
A custom build is suitable for businesses that need consistent, unbiased reference data without per-seat SaaS fees. Such a system would be built on your infrastructure, integrate directly with your Applicant Tracking System (ATS), and utilize your specific reference questions. Syntora offers expertise in designing and building custom solutions for teams who recognize reference checks as a critical data source, not merely a checkbox task. Our relevant experience includes building document processing pipelines using Claude API for financial documents, and the same architectural patterns apply to automated voice interactions for recruitment.
The scope of a custom Voice AI reference checking system depends on the complexity of your reference scripts, the number of integrations required with your existing systems, and the desired depth of reporting. Syntora approaches these projects by first understanding your current process and then designing a tailored architecture.
The Problem
What Problem Does This Solve?
Recruiting teams often start with manual phone calls for reference checks. This process is slow and inconsistent. One recruiter might ask probing follow-up questions while another sticks to the script, making it impossible to compare candidates objectively. A single senior candidate requiring five reference calls can consume half a day of a recruiter's time.
Trying to scale this, teams turn to survey tools like Typeform or built-in ATS features. These just send an email with a link to a form. Response rates are often below 30% because these tools put the work on the reference. A reference who would happily talk for 10 minutes will ignore a form that looks like a 20-minute task. There is also no way to ask dynamic follow-up questions based on an initial answer.
Dedicated reference checking platforms like Checkster exist, but they charge per-candidate or per-seat, which is expensive for a small business. A 5-person recruiting team can face a bill of over $500/month. These platforms also offer limited customization of the analysis logic, flagging generic risks instead of the specific competencies you screen for.
Our Approach
How Would Syntora Approach This?
Syntora's approach to building a voice AI reference checking system begins with a discovery phase to codify your existing reference questions and decision criteria into a structured script. We would use Anthropic's Claude 3 Sonnet API to manage the conversational flow, enabling dynamic follow-up questions that delve deeper into a reference's initial response. This conversational logic would be deployed as a Python service using FastAPI.
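The core of this conversational logic is the decision, after each answer, between a probing follow-up and the next scripted question. A minimal sketch of how that turn could be prepared for the Claude API is below; the class, function, and prompt wording are illustrative assumptions, not a shipped interface:

```python
# Sketch of one conversational turn in the reference-check script.
# CallState, build_claude_messages, and the prompt text are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CallState:
    """Tracks progress through the reference script for one call."""
    script: list[str]                                      # ordered base questions
    transcript: list[dict] = field(default_factory=list)   # {"role", "text"} turns
    question_index: int = 0                                # current script position

def build_claude_messages(state: CallState, latest_answer: str) -> list[dict]:
    """Assemble the message list Claude needs to choose the next utterance:
    a short follow-up if the answer is vague or risky, else the next
    scripted question verbatim."""
    history = "\n".join(f'{t["role"]}: {t["text"]}' for t in state.transcript)
    next_scripted = (
        state.script[state.question_index + 1]
        if state.question_index + 1 < len(state.script)
        else "WRAP_UP"
    )
    prompt = (
        "You are conducting a reference check call.\n"
        f"Conversation so far:\n{history}\n"
        f"Latest answer: {latest_answer}\n"
        f"Next scripted question: {next_scripted}\n"
        "If the answer is vague or raises a risk, ask one short follow-up. "
        "Otherwise ask the next scripted question verbatim."
    )
    return [{"role": "user", "content": prompt}]
```

In production, the returned list would be passed to the Anthropic Messages API (`client.messages.create(model=..., messages=...)`) inside the FastAPI service, with the model's reply becoming the next question spoken on the call.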
When a recruiter triggers a check from your ATS, a webhook would fire an AWS Lambda function, which would use the Twilio API to place the outbound call. Audio would stream in real time to Amazon Transcribe for speech-to-text conversion, the transcript would be sent to the Claude API to generate the next question, and the response would be converted back to audio with Amazon Polly and played to the reference. This question-response loop typically completes within a second or two.
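The webhook-to-call handoff described above could be sketched as a small Lambda handler. This is a hedged outline, not the delivered code: the environment variable names and the websocket URL for the audio stream are assumptions, and only the TwiML helper is exercised here:

```python
# Hypothetical Lambda entry point for placing the reference call.
# Env var names (TWILIO_SID, TWILIO_TOKEN, TWILIO_NUMBER, STREAM_WS_URL)
# are placeholders for this sketch.
import os

def build_stream_twiml(ws_url: str) -> str:
    """TwiML that connects the call's audio to our transcription websocket
    via Twilio Media Streams."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<Response><Connect>"
        f'<Stream url="{ws_url}"/>'
        "</Connect></Response>"
    )

def lambda_handler(event: dict, context) -> dict:
    """Fired by the ATS webhook; places the outbound call via Twilio."""
    from twilio.rest import Client  # imported lazily to keep cold starts small
    client = Client(os.environ["TWILIO_SID"], os.environ["TWILIO_TOKEN"])
    call = client.calls.create(
        to=event["reference_phone"],
        from_=os.environ["TWILIO_NUMBER"],
        twiml=build_stream_twiml(os.environ["STREAM_WS_URL"]),
    )
    return {"call_sid": call.sid}
```

The `<Connect><Stream>` TwiML verb forks the live call audio to a websocket, where a consumer would feed it to Amazon Transcribe's streaming API for the real-time loop described above.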
All call data, including full audio recordings, transcripts, and a final structured summary, would be written to a Supabase database. The summary, which can include flags for hesitation, negative sentiment, or conflicting information, would be posted back to a custom field in your ATS via its API. Recruiters would receive a complete report shortly after the final call ends. Typical cloud services cost for hundreds of calls per month would be modest, often under $50.
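The structured summary posted back to the ATS could take a shape like the one below. The risk phrases, table name, and field names are illustrative assumptions; the real flagging logic would come out of the discovery phase:

```python
# Illustrative shape of the final record written to Supabase and posted
# to the ATS custom field. RISK_PHRASES is a placeholder rule set.
RISK_PHRASES = {"i'd rather not say", "no comment", "it's complicated"}

def build_summary(call_sid: str, turns: list[dict]) -> dict:
    """Reduce a call transcript to a structured summary with risk flags.
    Each turn is a dict with "role" ("agent" or "reference") and "text"."""
    answers = [t["text"].lower() for t in turns if t["role"] == "reference"]
    flags = sorted({p for p in RISK_PHRASES for a in answers if p in a})
    return {
        "call_sid": call_sid,
        "turn_count": len(turns),
        "risk_flags": flags,
        "needs_review": bool(flags),
    }
```

The resulting record would then be inserted with the Supabase client (e.g. `supabase.table("reference_checks").insert(record).execute()`, table name assumed) and the same payload posted to the ATS custom field.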
As part of the engagement, Syntora would build a simple dashboard showing call completion rates, average call duration, and common keywords from transcripts. The system would include structured logging with structlog, configured to send alerts to Slack via webhooks if a call fails due to a technical issue or if transcript quality scores are low, indicating a bad connection. This allows for immediate review.
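The failure-alerting rule described above can be reduced to a simple threshold check plus a Slack webhook post. A minimal sketch follows; the deployed service would use structlog for structured JSON logs, while stdlib logging keeps this snippet dependency-free, and the 0.80 quality cutoff is an assumed value:

```python
# Sketch of the call-failure / transcript-quality alerting path.
# QUALITY_THRESHOLD is an assumed cutoff, not a tuned value.
import json
import logging
import urllib.request

log = logging.getLogger("reference_checks")

QUALITY_THRESHOLD = 0.80  # below this, assume a bad connection

def should_alert(call_failed: bool, transcript_confidence: float) -> bool:
    """Alert on technical failures or suspiciously low transcript quality."""
    return call_failed or transcript_confidence < QUALITY_THRESHOLD

def post_slack_alert(webhook_url: str, text: str) -> None:
    """Send the alert text to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Keeping the decision (`should_alert`) separate from the delivery (`post_slack_alert`) makes the rule easy to unit-test without touching the network.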
A typical engagement to design and build a system of this complexity runs about six weeks from kickoff to final handoff. Clients would need to provide access to their ATS documentation and existing reference questions, and participate in regular feedback sessions. Deliverables would include the deployed system on client infrastructure, source code, detailed architectural documentation, and a user guide.
Why It Matters
Key Benefits
From Kickoff to First Automated Call in 3 Weeks
Your custom system is live and integrated with your ATS in 15 business days, not a full quarter. Start getting structured reference data immediately.
A Fixed-Price Build, Not a Recurring Subscription
You pay a one-time project fee. After launch, you only pay for cloud usage, which is often less than $50/month, with no per-seat or per-check fees.
You Own the Source Code and the Data
We deliver the complete Python codebase to your GitHub repository. Your data resides in your own database, not on a third-party vendor's platform.
Real-Time Alerts for Failed Calls
A health check runs every 5 minutes. If API connections fail or calls do not complete, you get an immediate Slack notification with the error details.
Connects Natively With Your ATS
We use direct API integrations to pull candidate data from and push reports to Greenhouse, Lever, or any ATS with an accessible API. No manual data entry is needed.
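As one concrete example of an ATS push, a report link could be written to a Greenhouse candidate custom field via the Harvest API. This is a hedged sketch: the custom-field key is whatever your Greenhouse admin configured, and the payload shape should be verified against Greenhouse's Harvest API documentation before use:

```python
# Sketch of pushing a reference report URL into a Greenhouse custom field.
# The field key and endpoint shape are assumptions to verify against the
# Harvest API docs; "reference_check_report" is a placeholder key.
import base64
import json
import urllib.request

HARVEST_BASE = "https://harvest.greenhouse.io/v1"

def build_custom_field_patch(field_key: str, report_url: str) -> dict:
    """Payload updating one candidate custom field with the report link."""
    return {"custom_fields": [{"name_key": field_key, "value": report_url}]}

def push_report(api_key: str, candidate_id: int, payload: dict) -> urllib.request.Request:
    """Prepare the PATCH request for the candidate record (Basic auth,
    API key as username with an empty password, per Harvest convention)."""
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return urllib.request.Request(
        f"{HARVEST_BASE}/candidates/{candidate_id}",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
```

Lever and other ATSs expose comparable endpoints; the integration layer would wrap each in a small adapter so the rest of the system stays ATS-agnostic.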
How We Deliver
The Process
Discovery and Scripting (Week 1)
You provide your reference questions, ATS API credentials, and examples of good and bad reference checks. We deliver a detailed technical plan and a final conversational script.
Core System Build (Week 2)
We build the FastAPI service, Lambda functions, and Supabase schema. You receive a private link to a staging version to test internal calls and review transcript accuracy.
Integration and Deployment (Week 3)
We connect the system to your ATS and deploy it to your AWS account. You receive credentials, and we process the first batch of 10-20 real candidate references together.
Monitoring and Handoff (Weeks 4-6)
We monitor system performance and data quality for two weeks post-launch. You receive a final runbook detailing the architecture, monitoring dashboards, and maintenance steps.
Keep Exploring
Related Solutions
The Syntora Advantage
Not all AI partners are built the same.
Other Agencies
Assessment phase is often skipped or abbreviated
Syntora
We assess your business before we build anything
Other Agencies
Typically built on shared, third-party platforms
Syntora
Fully private systems. Your data never leaves your environment
Other Agencies
May require new software purchases or migrations
Syntora
Zero disruption to your existing tools and workflows
Other Agencies
Training and ongoing support are usually extra
Syntora
Full training included. Your team hits the ground running from day one
Other Agencies
Code and data often stay on the vendor's platform
Syntora
You own everything we build. The systems, the data, all of it. No lock-in
Get Started
Ready to Automate Your Reference Checks?
Book a call to discuss how we can implement AI automation for your recruiting workflow.
FAQ
