
Stop Screening Resumes. Start Interviewing Candidates.

AI plays a significant role in high-volume recruiting by automating repetitive tasks like resume screening and candidate-job matching. Used well, it screens and ranks candidates for human review rather than replacing recruiters.

By Parker Gawne, Founder at Syntora | Updated Mar 5, 2026

Syntora offers engineering engagements to build custom AI screening tools for high-volume recruiting. These systems leverage technologies like the Claude API for semantic candidate-job matching and integrate directly with existing Applicant Tracking Systems to surface top candidates for human review.

Building an effective AI screening system depends on the quality of your historical placement data and the complexity of your roles. A firm specializing in software engineers with 24 months of data in a single ATS provides a clearer starting point. A firm that places candidates across sales, marketing, and operations typically requires distinct models for each function, increasing complexity. Syntora designs and builds custom AI screening tools tailored to your specific hiring needs through focused engineering engagements.

What Problem Does This Solve?

Most recruiting teams rely on their Applicant Tracking System's (ATS) built-in keyword filtering. A recruiter using Greenhouse can filter for the keyword 'Python', but this simple search misses a great candidate whose resume says 'built REST APIs with FastAPI', which implies Python proficiency. This blunt approach creates a high rate of false negatives, burying qualified applicants.
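The false-negative problem is easy to reproduce. This minimal sketch (hypothetical resumes, not client data) shows a literal keyword filter missing a candidate whose resume implies, but never states, the skill being searched for:

```python
# Two hypothetical resumes: candidate_b clearly works in Python
# (FastAPI is a Python framework) but never writes the word "Python".
resumes = {
    "candidate_a": "5 years writing Python services with Django and Celery.",
    "candidate_b": "Built REST APIs with FastAPI; strong asyncio background.",
}

def keyword_filter(resumes: dict[str, str], keyword: str) -> list[str]:
    """Return candidates whose resume contains the literal keyword."""
    return [name for name, text in resumes.items()
            if keyword.lower() in text.lower()]

matches = keyword_filter(resumes, "Python")
print(matches)  # candidate_b is a false negative
```

A semantic matcher is needed precisely because the relationship between 'FastAPI' and 'Python' lives in domain knowledge, not in the string itself.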

Third-party screening tools that plug into your ATS often use generic, pre-trained models. They might rank a 'Senior Product Manager' for a B2B SaaS company using the same logic as one for a consumer hardware company. They cannot grasp the specific nuances of your clients' needs, like a requirement for experience with subscription billing systems, because they were never trained on your firm's successful placements.

A 25-person tech recruiting agency we worked with faced this exact issue. They used their ATS filter for a 'Senior Backend Engineer' role, setting keywords for 'Java', 'Spring Boot', and 'AWS'. Out of 300 applicants, the filter surfaced 40. After a full day of manual review, only 5 were actually qualified. Ten perfect-fit candidates who listed 'Kotlin' or 'GCP' instead were missed entirely, delaying the search by a week.

How Would Syntora Approach This?

Syntora approaches AI candidate screening as a custom engineering engagement. The initial step would involve a discovery phase to audit your existing ATS data and define the scope of the screening system. We would connect to your ATS API (e.g., Greenhouse, Lever, Ashby) to analyze 18-24 months of historical applicant data, including resumes, job descriptions, and placement outcomes. Resumes would be parsed from their original PDF or DOCX format into structured JSON using Python libraries like pypdf, extracting features such as skills, titles, and years of experience.
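The parsing step described above can be sketched as follows. The field names and regex heuristics are illustrative assumptions, not Syntora's production extraction logic:

```python
# Sketch: pypdf pulls raw text from a resume PDF, and a simple
# extractor turns it into structured JSON. Skill list, field names,
# and heuristics are illustrative only.
import json
import re

KNOWN_SKILLS = {"python", "java", "kotlin", "aws", "gcp", "spring boot"}

def pdf_to_text(path: str) -> str:
    """Concatenate text from every page of a resume PDF."""
    from pypdf import PdfReader  # requires: pip install pypdf
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def extract_features(text: str) -> dict:
    """Pull skills and a years-of-experience estimate out of raw text."""
    lowered = text.lower()
    skills = sorted(s for s in KNOWN_SKILLS if s in lowered)
    years = re.search(r"(\d+)\+?\s*years", lowered)
    return {
        "skills": skills,
        "years_experience": int(years.group(1)) if years else None,
    }

sample = "Senior Backend Engineer, 7 years experience with Kotlin and GCP."
print(json.dumps(extract_features(sample)))
```

In practice the extractor would be far richer (titles, employers, dates), but the shape of the pipeline — raw document in, structured JSON out — stays the same.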

For candidate-job matching, we would design a system using the Claude API. Rather than relying on keyword overlap, the model is prompted to evaluate each resume against the job description and return a structured match score, allowing for a nuanced semantic comparison. Syntora has built document processing pipelines using the Claude API for complex financial documents, and the same pattern applies to recruiting documents to understand the relationship between job requirements and candidate qualifications beyond simple keyword matching.
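One way to sketch the matching step with the Anthropic SDK is below. The prompt wording, JSON score schema, and model name are illustrative assumptions, not the production system:

```python
# Sketch of prompt-based semantic matching via the Claude API.
# Requires: pip install anthropic, plus ANTHROPIC_API_KEY in the env.
import json

def build_match_prompt(job_description: str, resume_text: str) -> str:
    """Ask for a structured 0-100 fit score with a one-line rationale."""
    return (
        "Score how well this candidate matches the job on a 0-100 scale.\n"
        "Treat equivalent technologies (e.g. Kotlin for Java, GCP for AWS) "
        "as relevant experience.\n"
        'Reply with JSON only: {"score": <int>, "rationale": <string>}\n\n'
        f"JOB DESCRIPTION:\n{job_description}\n\nRESUME:\n{resume_text}"
    )

def score_candidate(job_description: str, resume_text: str) -> dict:
    import anthropic
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # model choice is an assumption
        max_tokens=256,
        messages=[{"role": "user",
                   "content": build_match_prompt(job_description, resume_text)}],
    )
    return json.loads(message.content[0].text)

prompt = build_match_prompt("Senior Backend Engineer: Java, AWS",
                            "Built microservices in Kotlin on GCP")
print(prompt.splitlines()[0])
```

Keeping the prompt in a plain function is deliberate: as noted in the FAQ, mis-scored candidate patterns are fixed by adjusting these instructions, so they should live in version-controlled code rather than be scattered through the service.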

The core scoring logic would be implemented as a FastAPI application, designed for deployment on a serverless architecture like AWS Lambda, so capacity scales with applicant volume and you pay only for compute actually used. When new candidates apply through your ATS, a webhook would trigger the Lambda function to generate a match score. This score would then be written to a custom field within the candidate's profile in your ATS, integrating directly into your recruiters' existing workflows.
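The core of that webhook flow can be shown framework-free for clarity; in the deployed system it would sit inside a FastAPI route behind a Lambda handler. The payload fields here are assumptions modeled on a generic ATS webhook, not a specific Greenhouse or Lever schema:

```python
# Sketch of the webhook handler's logic: new application in,
# custom-field update out. Field names are illustrative.

def handle_new_application(payload: dict, score_fn) -> dict:
    """Turn an ATS 'new application' webhook into a custom-field update."""
    candidate_id = payload["candidate_id"]
    result = score_fn(payload["job_description"], payload["resume_text"])
    # This dict would be PATCHed back onto the candidate's ATS profile.
    return {
        "candidate_id": candidate_id,
        "custom_fields": {"ai_match_score": result["score"]},
    }

update = handle_new_application(
    {"candidate_id": "cand_123",
     "job_description": "Senior Backend Engineer",
     "resume_text": "Kotlin, GCP, 7 years"},
    score_fn=lambda jd, resume: {"score": 87, "rationale": "strong overlap"},
)
print(update)
```

Injecting `score_fn` as a parameter keeps the handler testable without live API calls, which matters when the scoring model itself is being iterated on.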

To support transparency and enable continuous model improvement, we would propose a review interface built with technologies such as Vercel and Supabase. This interface would allow recruiters to provide feedback on scores, which would be logged and used to refine the underlying models over time. This human-in-the-loop process is crucial for adapting the system to evolving hiring criteria and ensuring accuracy.
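The override-logging side of that feedback loop might look like the sketch below. The table name `score_overrides` and its columns are illustrative assumptions:

```python
# Sketch of logging a recruiter override for later retraining.
# Requires: pip install supabase (for the write step).
from datetime import datetime, timezone

def build_override_record(candidate_id: str, model_score: int,
                          recruiter_score: int, reason: str) -> dict:
    """Capture the disagreement between model and recruiter."""
    return {
        "candidate_id": candidate_id,
        "model_score": model_score,
        "recruiter_score": recruiter_score,
        "reason": reason,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

def log_override(record: dict, url: str, key: str) -> None:
    from supabase import create_client  # needs project URL + service key
    client = create_client(url, key)
    client.table("score_overrides").insert(record).execute()

record = build_override_record("cand_123", 42, 85, "missed Kotlin-as-Java")
print(record["reason"])
```

Each record is a labeled example of model error, which is exactly the data needed for the periodic retraining described above.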

A typical engagement for this complexity would span 8-12 weeks for initial build-out. Clients would need to provide access to their ATS and collaborate during the discovery and feedback phases. Deliverables would include the deployed AI screening system integrated with your ATS, documentation, and knowledge transfer for your internal teams.

What Are the Key Benefits?

  • Score Every Applicant in Under a Second

    The system processes and ranks a new resume in less than 500 milliseconds. Your recruiters see the best candidates the moment they apply, not hours later.

  • Pay For The Build, Not Per Recruiter

    A one-time project cost with minimal monthly AWS hosting fees. No recurring per-seat SaaS subscription that penalizes you for growing your team.

  • You Own the Model, Code, and Data

    You receive the full Python source code in your private GitHub repository. There is no vendor lock-in and no black box algorithms.

  • Self-Correcting With Recruiter Feedback

    The system logs every recruiter override in a Supabase database, providing a clear dashboard of model performance and data for periodic retraining.

  • Works Inside Your Existing ATS

    We integrate with Greenhouse, Lever, or Ashby APIs. Recruiters see match scores in a custom field without ever leaving their primary tool.

What Does the Process Look Like?

  1. Week 1: ATS Integration & Data Audit

    You provide read-only API keys for your ATS. We connect and pull historical data, delivering a data quality report and a proposed feature list.

  2. Weeks 2-3: Model & API Development

    We build the core matching model and FastAPI service. You receive a link to a staging environment to test sample resumes against job descriptions.

  3. Week 4: Deployment & Live Testing

    We deploy the system on AWS Lambda and configure the ATS webhook. We deliver a runbook with architecture diagrams and API documentation.

  4. Weeks 5-8: Monitoring & Handoff

    We monitor live performance for 30 days, addressing any issues. You receive the final source code and ownership of the AWS and Supabase accounts.

Frequently Asked Questions

How much does a custom recruiting AI system cost?
The cost depends on the number of job families to model and the cleanliness of your ATS data. A project for a single job family with good data in Greenhouse takes about 4 weeks. A project spanning multiple departments adds complexity and extends the timeline. We provide a fixed-price quote after the initial discovery call and data audit.
What happens if the AI model scores a great candidate incorrectly?
The system is designed for human oversight. Recruiters can override any score, and these overrides are logged for review. If a specific type of candidate is consistently mis-scored, we adjust the model's instructions in the Claude API prompt. This feedback loop is the primary way the system improves over time.
How is this different from an AI sourcing tool like SeekOut or hireEZ?
Sourcing tools find external candidates from platforms like LinkedIn. They are for outbound recruiting. Our system focuses on inbound recruiting by ranking candidates who have already applied to your jobs. It ensures you do not miss the best talent already in your pipeline.
How do you handle sensitive candidate data and PII?
We process data within your own AWS account, which you control. Candidate resumes are passed to the Claude API for scoring and are not stored by Syntora. We follow data minimization principles, only accessing the ATS fields necessary for the model. The entire system architecture is documented for your compliance review.
How do you prevent the AI from introducing bias into our hiring?
We explicitly instruct the model via prompting to ignore demographic information. The model scores based on skills and experience relative to the job description. We also provide a dashboard showing score distributions across roles, allowing you to audit for statistical disparities and ensure fairness in screening.
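The disparity check behind that dashboard can be reduced to a simple statistic: compare mean scores across groups and flag large gaps for human review. The threshold and sample data below are illustrative assumptions:

```python
# Minimal sketch of a fairness-audit check: flag score distributions
# whose means diverge beyond a review threshold.
from statistics import mean

def audit_score_gap(scores_by_group: dict[str, list[int]],
                    max_gap: float = 10.0) -> dict:
    """Return per-group means and whether the spread warrants review."""
    means = {g: mean(s) for g, s in scores_by_group.items()}
    gap = max(means.values()) - min(means.values())
    return {"means": means, "gap": gap, "flagged": gap > max_gap}

report = audit_score_gap({
    "role_a_applicants": [70, 75, 80],
    "role_b_applicants": [50, 55, 60],
})
print(report["flagged"])  # True: a 20-point mean gap warrants review
```

A flagged gap is a prompt for investigation, not proof of bias; a real audit would also look at score variance and pass-through rates, but the mechanism is the same.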
What happens if we switch from Greenhouse to Lever in a year?
Because you own the code, you are not locked in. The core FastAPI scoring service is independent of the ATS. Migrating involves writing a new connector script to map Lever's API fields instead of Greenhouse's. This is typically a 3-5 day project, not a complete rebuild, as the model logic remains the same.

Ready to Automate Your Professional Services Operations?

Book a call to discuss how we can implement AI automation for your professional services business.

Book a Call