Build a Custom AI System to Screen and Rank Candidates
The best AI recruiting tools for SMBs are custom systems that rank candidates against your specific job criteria. They integrate directly with your Applicant Tracking System (ATS) to score every new applicant automatically. How complex such a system is to build depends on your existing data sources and infrastructure. For organizations using a modern ATS like Greenhouse with structured application data, a direct integration and scoring system can be developed quickly. If applicant tracking relies on less structured sources like spreadsheets or email inboxes, the initial phase involves significant data extraction, normalization, and mapping to prepare the information for processing. Syntora specializes in designing and building these custom systems, tailoring the approach to your specific operational context.
Syntora designs custom AI recruiting tools that rank candidates against your specific job criteria and integrate with your existing ATS. This lets SMBs streamline applicant screening with transparent, collaboratively defined scoring logic instead of a generic, off-the-shelf product.
What Problem Does This Solve?
Most small recruiting firms rely on the built-in filtering tools within their ATS. These are simple keyword matchers, not AI. A search for "product manager" will miss a great candidate whose resume says "led product development" because it cannot understand semantics. This forces recruiters to spend hours manually reading through hundreds of resumes for every open role, defeating the purpose of the software.
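To make the limitation concrete, here is a minimal illustration (with a hypothetical resume snippet) of the literal substring matching a built-in ATS filter typically performs:

```python
# Minimal illustration of why literal keyword filters miss qualified candidates.
# The resume text and queries are hypothetical examples.

def keyword_match(resume_text: str, query: str) -> bool:
    """Naive case-insensitive substring match, as a built-in ATS filter might do."""
    return query.lower() in resume_text.lower()

resume = "Led product development for a B2B SaaS platform; managed roadmap and launches."

print(keyword_match(resume, "product manager"))  # False: the phrase never appears verbatim
print(keyword_match(resume, "product"))          # True: but this matches far too broadly
```

The filter either misses the candidate entirely or returns so many matches that a human must read them all anyway.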
Third-party screening tools like Ideal or HireVue offer more advanced analysis but come with two major drawbacks for SMBs. First, they charge per-seat, per-month fees that become expensive for a 10-person team. Second, their models are black boxes trained on generic, global data. They cannot learn that for your specific client, experience at one of their three main competitors is the single most important hiring signal.
Consider a 15-person firm using their ATS's built-in "AI matching" for a "Senior Python Developer" role. The system flags 150 resumes containing the word "Python", but the real requirements are experience with FastAPI and payment gateways. A human must still read all 150 resumes to find the 3 qualified candidates. This so-called automation saved zero time and created 4 hours of low-value work.
How Would Syntora Approach This?
Syntora's approach to building a candidate screening system begins with an initial discovery phase to understand your specific hiring criteria and ATS setup. We would start by integrating with your ATS API (e.g., Greenhouse, Lever, Ashby) to understand your data schema and past application data. This historical data, including resumes and hiring outcomes, would be used to inform the development of your custom ranking model.
For document parsing, the system would use the Claude API to process each resume. We've built document processing pipelines with the Claude API for financial documents, and the same pattern applies here: the model extracts key features like skills, experience, and employers into a structured JSON object, which is then stored in a Supabase Postgres database.
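As a rough sketch of this extraction step, the pipeline asks the model for a fixed JSON schema and validates the reply before storing it. The field list, prompt wording, and model name below are illustrative assumptions; the real schema would be defined during discovery.

```python
import json

# Hypothetical field list; the actual schema is defined per client.
RESUME_FIELDS = ["name", "skills", "years_experience", "employers"]

def build_extraction_prompt(resume_text: str) -> str:
    """Prompt asking the model to reply with a single JSON object of known keys."""
    return (
        "Extract the following fields from the resume below and respond with "
        f"only a JSON object containing exactly these keys: {RESUME_FIELDS}.\n\n"
        f"Resume:\n{resume_text}"
    )

def parse_model_output(raw: str) -> dict:
    """Validate and normalize the model's JSON reply before writing to Postgres."""
    data = json.loads(raw)
    missing = [k for k in RESUME_FIELDS if k not in data]
    if missing:
        raise ValueError(f"model output missing fields: {missing}")
    # Normalize list fields so downstream scoring can rely on consistent casing.
    data["skills"] = [s.strip().lower() for s in data.get("skills", [])]
    return data

# With the Anthropic SDK, the call would look roughly like (model name illustrative):
#   client = anthropic.Anthropic()
#   msg = client.messages.create(model="claude-sonnet-...", max_tokens=1024,
#                                messages=[{"role": "user",
#                                           "content": build_extraction_prompt(text)}])
#   parsed = parse_model_output(msg.content[0].text)
```

Validating before storage matters: a malformed model reply should fail loudly at ingestion rather than silently producing an unscoreable candidate row.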
The core of the system would be a custom ranking model developed in Python. This is not a generic neural network but a series of weighted rules and heuristics collaboratively defined with your team. This logic would be deployed as an AWS Lambda function, which typically offers cost-effective hosting. The model would first apply essential knock-out rules (e.g., work authorization) and then score remaining candidates based on the job's unique requirements.
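A minimal sketch of this rules-based model, assuming hypothetical rule names, weights, and candidate fields (the real ones would be defined collaboratively with your team):

```python
# Knock-out rules: any failure short-circuits scoring entirely.
KNOCKOUT_RULES = [
    ("work_authorization", lambda c: c.get("work_authorized", False)),
]

# (signal extractor, weight) pairs for one hypothetical role.
# Each extractor returns a value in [0, 1]; booleans count as 0 or 1.
SCORING_RULES = [
    (lambda c: "fastapi" in c.get("skills", []), 40),
    (lambda c: "payment gateways" in c.get("skills", []), 35),
    (lambda c: min(c.get("years_experience", 0), 10) / 10, 25),
]

def score_candidate(candidate: dict) -> dict:
    for name, passes in KNOCKOUT_RULES:
        if not passes(candidate):
            return {"score": 0, "knocked_out_by": name}
    score = sum(weight * float(rule(candidate)) for rule, weight in SCORING_RULES)
    return {"score": round(score, 1), "knocked_out_by": None}
```

Because every rule and weight is plain Python, recruiters can inspect exactly why a candidate scored the way they did, which is the key difference from a black-box model.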
The screening logic would be exposed via a FastAPI endpoint. Upon a candidate's application, a webhook from your ATS would trigger the Lambda function. The API call would return a comprehensive output, including a candidate score, confidence level, and a concise summary of strengths and weaknesses. This information would be written back to custom fields within your ATS, making the screening output visible directly within the recruiter's existing workflow.
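The shape of that write-back payload might look something like the sketch below. The custom field names (`ai_score`, `ai_confidence`, `ai_summary`) are hypothetical; actual field IDs depend on the ATS.

```python
def build_ats_writeback(score: float, confidence: str,
                        strengths: list, weaknesses: list) -> dict:
    """Assemble the custom-field payload written back to the ATS after scoring."""
    summary = (
        f"Strengths: {', '.join(strengths) or 'none noted'}. "
        f"Weaknesses: {', '.join(weaknesses) or 'none noted'}."
    )
    return {
        "custom_fields": {
            "ai_score": round(score, 1),
            "ai_confidence": confidence,  # e.g. "high" when most fields were extractable
            "ai_summary": summary,
        }
    }

# Wrapped in FastAPI, the webhook handler would look roughly like
# (parse_and_score and ats_client are hypothetical names):
#   @app.post("/webhook/application")
#   def on_application(payload: dict):
#       result = parse_and_score(payload)
#       ats_client.update_candidate(payload["candidate_id"],
#                                   build_ats_writeback(**result))
```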
For ongoing refinement, the system would incorporate a feedback mechanism. Recruiters could flag instances where the model's assessment diverges from their judgment. This feedback data would be logged in Supabase and used during periodic tuning sessions to adjust scoring weights, allowing the system to learn and improve based on real-world hiring outcomes.

A typical build timeline for a system of this complexity is 6-10 weeks, depending on the initial data readiness and the depth of custom logic required. Deliverables would include the deployed and integrated scoring system, source code, and documentation for ongoing maintenance. The client would primarily need to provide access to their ATS, define their specific hiring criteria, and participate in feedback loops.
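One simple way the tuning step could work is to nudge a rule's weight up or down based on how often recruiter flags agreed with it. The update rule and learning rate below are illustrative assumptions, not a fixed methodology:

```python
def tune_weight(weight: float, flags: list, lr: float = 0.05) -> float:
    """Adjust one rule's weight from recruiter feedback.

    flags: +1 for each flag where the rule agreed with the recruiter's
    judgment, -1 for each disagreement. (Hypothetical encoding.)
    """
    if not flags:
        return weight  # no feedback, no change
    agreement = sum(flags) / len(flags)  # average in [-1, 1]
    # Scale the weight proportionally to agreement; never let it go negative.
    return max(0.0, weight * (1 + lr * agreement))
```

Because updates are small and bounded, a handful of contrarian flags cannot swing the model; sustained disagreement over many candidates is what moves a weight.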
What Are the Key Benefits?
Get Candidate Scores in 900 Milliseconds
From application submission to a ranked score in your ATS in less than a second. Your recruiters see ranked candidates instantly, not at the end of the day.
Pay for the Build, Not Per Recruiter
A one-time project fee and minimal monthly hosting costs. Avoid the $200/seat/month fees of SaaS screening tools that punish you for growing your team.
You Own the Screening Logic
We deliver the complete Python codebase in your private GitHub repository. You are never locked into a proprietary system and can modify the logic as your needs change.
It Learns From Your Recruiters
A feedback loop lets recruiters flag good or bad matches. The model retrains on this data every 30 days, continuously aligning with your team's real-world expertise.
Connects Directly to Your ATS
We build direct API integrations with Greenhouse, Lever, or Ashby. There are no new dashboards or platforms for your team to learn. It works inside the tools you already use.
What Does the Process Look Like?
Week 1: ATS Connection and Data Mapping
You provide read-only API access to your ATS. We audit your application data and provide a data map showing how we will parse resumes into structured fields for your approval.
Week 2: Scoring Model and API Build
We build the core Python scoring application and deploy the FastAPI endpoint. You receive a technical spec outlining the scoring criteria for your review and feedback.
Week 3: Integration and Live Testing
We configure the ATS webhooks to call our API and write back scores. You get a testing sandbox to see scores appear on 10-20 sample candidates before going live.
Weeks 4-8: Monitoring and Handoff
We monitor the system in production for 30 days to ensure accuracy and stability. You receive a runbook with full documentation and monitoring instructions.
Frequently Asked Questions
- How much does a custom screening system cost?
- Pricing depends on the number of job profiles and the quality of your ATS data. For a firm with clean data in a single ATS, the core build typically takes 3-4 weeks, followed by the 30-day monitoring period. The key variables are the number of distinct roles to model and whether resume data needs to be parsed from unstructured PDFs rather than existing application fields. We define the exact scope during a free discovery call.
- What happens if the AI mis-screens a great candidate?
- The system is designed with human review gates. Recruiters see the AI-generated score and summary but make the final decision. If they disagree, a simple 'flag' button logs the discrepancy. This feedback is crucial; it is used to retrain the model, making it smarter over time. The goal is to assist, not replace, your recruiters' judgment.
- How is this different from using a sourcing tool like SeekOut?
- Sourcing tools like SeekOut help you find passive candidates. They do not screen or rank the inbound applicants you already have. Syntora builds the engine that takes candidates from all sources, including your career page and job boards, and ranks them for your recruiters. It solves the 'too many applicants' problem, not the 'not enough applicants' problem.
- What happens if the scoring system goes down?
- The system is deployed on AWS Lambda, which is highly available. In the rare event of an outage, the ATS webhook will fail, and no score will be written. New candidates will simply appear in your ATS as they do today, without a score. We set up CloudWatch alerts that notify us of any failures, and we typically restore service within an hour.
- How do you prevent the AI from introducing bias?
- We explicitly exclude demographic data like names and inferred ethnicity from the model features. Scoring is based on skills, experience, and qualifications mapped to the job description. The human-in-the-loop design, where recruiters can override and flag results, acts as a continuous check against biased correlations the model might otherwise learn.
- What kind of support is available after the project is finished?
- The initial build includes a 30-day monitoring period to fix any bugs. After that, you receive a runbook to manage the system. For teams that want ongoing support, we offer a monthly retainer that covers monitoring, alert response, and model retraining. This is optional; you own the code and can support it internally with any Python developer.
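The feature-exclusion approach described in the bias question above can be sketched as a whitelist filter applied before any data reaches the scoring model. The field names are illustrative:

```python
# Only job-relevant fields ever reach the scoring model; everything else,
# including names, photos, and any inferred demographics, is dropped upstream.
ALLOWED_FEATURES = {
    "skills", "years_experience", "employers", "certifications", "work_authorized",
}

def strip_to_allowed(candidate: dict) -> dict:
    """Drop every field outside the whitelist before scoring."""
    return {k: v for k, v in candidate.items() if k in ALLOWED_FEATURES}
```

An explicit whitelist (rather than a blocklist) means a newly added ATS field is excluded by default until someone deliberately decides it is job-relevant.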
Ready to Automate Your Professional Services Operations?
Book a call to discuss how we can implement AI automation for your professional services business.
Book a Call