Syntora

Build Custom AI Risk Models for Your Insurance Brokerage

A small insurance brokerage can use custom algorithms to analyze unstructured data from applications, emails, and prior claims. This identifies subtle risk patterns that standard underwriting questionnaires miss, improving pricing and loss ratios.

By Parker Gawne, Founder at Syntora | Updated Mar 5, 2026

Syntora develops custom AI algorithms for small insurance brokerages to assess client risk by analyzing unstructured data from applications, emails, and claims. This approach helps identify subtle risk patterns, leading to improved pricing and loss ratios. Syntora's engineering engagements focus on building robust, tailored systems that integrate with your existing workflows. Syntora has not previously deployed this system in the insurance vertical; what follows describes how such an engagement would be approached.

The scope for such an engagement is typically defined by your data sources. A firm with five years of clean policy data in Applied Epic represents a more straightforward data integration. An agency needing to pull data from Vertafore, email archives, and scanned PDFs requires more complex data extraction and processing. Syntora would work with your team to audit existing data sources and define the optimal approach for data ingestion and preparation.

What Problem Does This Solve?

Most brokerages rely on the reporting features within their Agency Management System (AMS). A tool like Applied Epic or Vertafore can show you policies by class code or premium, but it cannot read the text in an underwriter's notes. It is blind to risk signals buried in free-text fields, email attachments, or supplemental application forms.

A 12-person agency specializing in contractor liability learned this the hard way. They used HawkSoft as their AMS. A new client application looked standard, but an attached email mentioned doing work on multi-story residential projects. The AMS could not parse the email, the broker missed the detail, and the policy was written at a standard rate. A multi-million dollar scaffolding claim three months later caused their carrier to non-renew the entire book of business.

Carrier quoting portals are no better. They provide a black-box score without explanation, forcing you to act as a data entry clerk for their opaque algorithms. Without a way to analyze your own data for your specific client niche, you cannot proactively manage risk or justify pricing decisions to your clients.

How Would Syntora Approach This?

Syntora would approach the development of a custom risk assessment system by first conducting a thorough data discovery phase with your team. This phase would identify existing data sources, including your AMS (Applied Epic, Vertafore, or HawkSoft) and any unstructured documents such as emails, applications, or loss-run reports. Data integration would involve secure API connections or established data export procedures to pull 3-5 years of relevant policy, client, and historical claims data.

For unstructured documents, the Claude API would be used to parse text, extracting key entities and phrases into a structured format suitable for analysis. Syntora has built similar document processing pipelines with the Claude API for financial documents, and this pattern applies directly to insurance documents. The combined structured and unstructured data would then be cleaned and prepared using Python and pandas. Our data scientists would engineer a set of 40-60 potential risk features, transforming raw data into numerical representations. A gradient boosting model, such as XGBoost, would then be trained on these client profiles to predict the likelihood of future claims.
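The document extraction step could be sketched as two pure functions: one building the prompt that would be sent to the Claude API, and one parsing the model's JSON reply into a flat feature row. This is a minimal sketch; the field names (`work_types`, `max_building_stories`, `prior_claims_mentioned`) are illustrative assumptions, not a fixed schema, and the actual API call is omitted.

```python
import json

# Hypothetical extraction schema; the real field set would come out of
# the data discovery phase with the brokerage.
EXTRACTION_PROMPT = """Extract the following fields from the insurance \
document below and return only JSON: work_types (list of strings), \
max_building_stories (int or null), prior_claims_mentioned (bool).

Document:
{document}"""

def build_prompt(document_text: str) -> str:
    """Prompt text that would be sent to the Claude API."""
    return EXTRACTION_PROMPT.format(document=document_text)

def parse_extraction(response_text: str) -> dict:
    """Parse the model's JSON reply into a flat row for feature engineering."""
    data = json.loads(response_text)
    return {
        "work_types": data.get("work_types", []),
        "max_building_stories": data.get("max_building_stories"),
        "prior_claims_mentioned": bool(data.get("prior_claims_mentioned", False)),
    }
```

The parsed rows would then be joined to structured AMS exports in pandas before feature engineering.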

The architecture for the deployed system would typically involve a FastAPI service. This service would be deployed on cloud infrastructure like AWS Lambda, ensuring high availability and cost efficiency. When a new application enters your AMS, a webhook would trigger the FastAPI service. The system would then process the application data, calculate a risk score, and identify contributing factors. This score could then be written back to a custom field within your AMS, integrating with existing workflows.

To maintain oversight and improve model performance over time, every prediction would be logged to a database, for example, Supabase, along with a confidence score. The system could be configured to flag policies exceeding a defined risk threshold, or scored with low model confidence, for manual review by a senior underwriter. This human-in-the-loop design provides essential checks and balances for AI recommendations. Typical build timelines for a system of this complexity, from discovery to a production-ready deployment, range from four to eight weeks for an agency with clean data in a single AMS, and extend when multiple systems or messy data are involved. The client would be responsible for providing access to data sources, internal subject matter expertise, and resources for integration with their AMS. Deliverables would include the trained model, the deployed API service, detailed documentation, and knowledge transfer to your technical team.
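The human-in-the-loop routing could be a simple rule pair: high risk or low confidence goes to an underwriter, everything else flows through. A minimal sketch, where the thresholds and the log-row fields are illustrative assumptions:

```python
import datetime

def review_decision(risk_score: float, confidence: float,
                    risk_threshold: float = 0.7,
                    confidence_floor: float = 0.6) -> str:
    """Route a prediction: high risk or low confidence goes to a human."""
    if risk_score >= risk_threshold or confidence < confidence_floor:
        return "manual_review"
    return "auto_score"

def prediction_log_row(policy_id: str, risk_score: float,
                       confidence: float) -> dict:
    """Row to persist for every prediction (e.g. via the Supabase client)."""
    return {
        "policy_id": policy_id,
        "risk_score": risk_score,
        "confidence": confidence,
        "decision": review_decision(risk_score, confidence),
        "scored_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```

Logging the decision alongside the score is what makes later back-testing and threshold tuning possible.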

What Are the Key Benefits?

  • Price Risk in Seconds, Not Hours

    The risk model scores a new application in under 600ms, giving your team instant feedback instead of waiting on manual underwriting review.

  • Fixed Build Cost, Not Per-User Fees

    A one-time project cost with minimal monthly AWS hosting fees. You are not penalized with a growing SaaS bill as your brokerage adds staff.

  • You Own the Model and the Code

    You receive the complete Python codebase in your private GitHub repository and the trained model files. There is no vendor lock-in.

  • Alerts When Your Market Changes

    The system monitors for data drift. If new types of claims start appearing in your book, you get an alert to retrain the model on fresh data.

  • Native Scores Inside Your Existing AMS

    The risk score appears as a custom field directly in Applied Epic, Vertafore, or HawkSoft. No need to switch screens or learn a new tool.
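The drift monitoring mentioned above could rest on a population stability index (PSI) comparing the live score distribution against the training baseline. A minimal stdlib sketch; the 0.25 alert threshold is a common rule of thumb, used here as an illustrative assumption:

```python
import math

def population_stability_index(expected: list, actual: list) -> float:
    """PSI between two bucketed score distributions (each sums to ~1.0)."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)  # guard against empty buckets
        psi += (a - e) * math.log(a / e)
    return psi

def drift_alert(expected: list, actual: list, threshold: float = 0.25) -> bool:
    """True when scores have shifted enough to warrant retraining."""
    return population_stability_index(expected, actual) >= threshold
```

Run weekly against the prediction log, this is what would trigger the "retrain on fresh data" alert.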

What Does the Process Look Like?

  1. Data Audit & Scoping (Week 1)

    You provide read-only API access or a data export from your AMS. We audit data quality and deliver a report defining the specific risk factors to be modeled.

  2. Model Build & Validation (Weeks 2-3)

    We build and train the risk model using Python and XGBoost. You receive a validation report showing the model's predictive accuracy on your own historical data.

  3. AMS Integration & Deployment (Week 4)

    We deploy the FastAPI service on AWS Lambda and connect it to your AMS. You receive documentation on how your team can start seeing live risk scores.

  4. Monitoring & Handoff (Weeks 5-8)

    We monitor the model's live performance for 30 days, tuning as needed. You receive the complete source code and a system runbook for long-term maintenance.
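The validation report delivered in step 2 could be built on straightforward back-test metrics: of the policies that later filed a claim, how many did the model flag (claim recall), and how many safe policies were flagged unnecessarily (false-flag rate). A sketch, with the 0.7 threshold as an illustrative assumption:

```python
def backtest_metrics(predictions: list, actual_claims: list,
                     threshold: float = 0.7) -> dict:
    """Compare model scores against known historical claim outcomes."""
    flagged = [p >= threshold for p in predictions]
    tp = sum(f and a for f, a in zip(flagged, actual_claims))
    fn = sum((not f) and a for f, a in zip(flagged, actual_claims))
    fp = sum(f and (not a) for f, a in zip(flagged, actual_claims))
    tn = sum((not f) and (not a) for f, a in zip(flagged, actual_claims))
    return {
        "claim_recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_flag_rate": fp / (fp + tn) if fp + tn else 0.0,
    }
```

These are the two numbers behind targets like "catch over 80% of future claimants while flagging fewer than 15% of safe policies."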

Frequently Asked Questions

How much does a custom risk algorithm cost to build?
The price depends on the number of data sources and the quality of your historical data. An agency with clean data in a single AMS like Applied Epic is a 4-week build; integrating multiple systems or messy data extends both the timeline and the cost. We provide a fixed-price quote after the initial one-hour data audit.
What happens if the risk assessment API goes down?
The system is designed for resilience. If the AWS Lambda function fails, the webhook from your AMS will fail gracefully and no data is lost. You receive an immediate alert via email or Slack, and service is typically restored within an hour. The maintenance plan covers all such incidents.
How is this different from the analytics modules in Vertafore or Applied?
AMS analytics modules are great for analyzing structured data you have already entered, but they cannot read unstructured text in emails or PDFs. Our system uses the Claude API to extract data from these documents, finding risks your AMS reports would miss entirely. It predicts future risk; it does not just report past events.
Is our client's policy and claims data kept secure?
Yes. The entire system is built and deployed in your own private AWS account, which you own and control. Client data is never sent to Syntora servers or any other third party, apart from the Claude API for text processing. This ensures you maintain full data privacy and control over sensitive information.
How accurate is the final model?
We measure accuracy by back-testing on your historical claims data. A typical model will correctly identify over 80% of policies that later file a major claim, while flagging fewer than 15% of safe policies for manual review. The goal is not perfect prediction, but to give your underwriters a powerful signal to focus their attention.
Do I need a technical person on my team to maintain this?
No. The system runs automatically, logs its own performance, and sends alerts if a high-risk client needs review or if model accuracy degrades. We provide a simple runbook covering basic maintenance tasks. Ongoing support plans are also available after the initial 8-week handoff period.

Ready to Automate Your Financial Services Operations?

Book a call to discuss how we can implement AI automation for your financial services business.

Book a Call