Automated Claims Processing Tools: Fraud Detection Architecture for 2026

Marcin Nowak
29 October 2025
Last update: 28 April 2026

Section 1: Why fraud detection timing matters more than algorithm sophistication

A claims leader I worked with last year told me a story I’ve thought about ever since. Her team caught a $2.3 million fraud ring - multi-claim, cross-state, sophisticated documentation. The headline for the industry was “AI fraud detection at work.” The reality was different. The ring was caught by a rule someone wrote in 2019, running as a scoring layer at FNOL. The AI hadn’t flagged it. The scoring layer had.

The lesson I keep seeing carriers miss: fraud detection timing matters more than algorithm sophistication. You can have the best fraud model in the industry, but if it runs in week two of case handling rather than at FNOL, you've already lost most of the value.

This article is the Core deep-dive on fraud detection as a pain point - specifically, how automated claims processing tools enable fraud scoring at FNOL rather than in week two, what integration patterns work with NICB ForeWarn and ISO ClaimSearch, and how SIU teams codify their expert knowledge into rules that actually run on every claim. If you haven’t read the pillar yet, start with AI Claims Processing: The Complete 2026 Guide for US Carriers - this article assumes that context.

Section 2: The $45B P&C fraud problem

Insurance fraud in the US is a $308.6 billion annual problem across all insurance lines, according to the Coalition Against Insurance Fraud’s 2022 study - the first comprehensive update to the 1995 $80 billion estimate in 27 years [1]. Property and casualty fraud alone accounts for approximately $45 billion annually, with roughly 10% of P&C claims containing some fraud element [1][2].

Those numbers are the headline. The more useful numbers - for anyone building an actual fraud program - are the detection rates.

Deloitte’s analysis of P&C fraud detection, drawing on Coalition Against Insurance Fraud data and industry interviews, found:

  • Soft fraud (inflating legitimate claims, e.g., overstating repair costs or injury severity): detection rate 20-40%. Soft fraud accounts for approximately 60% of fraud incidents but is harder to prove [3].
  • Hard fraud (premeditated false claims, e.g., staged accidents, arson, fake theft): detection rate 40-80%. Accounts for the remaining ~40% of fraud incidents [3].

The range in each category is mostly a function of the detection infrastructure the carrier has in place. Top-quartile operations with mature SIU programs, integrated external databases, and AI scoring at FNOL sit at the high end of each range. Carriers running 2015-era fraud programs sit at the low end.

Deloitte projects that P&C insurers implementing AI-driven multimodal fraud detection across the claims lifecycle could save $80-160 billion in fraudulent claims by 2032, with potential savings of 20-40% depending on implementation sophistication [3]. That’s the scale of operational opportunity sitting on the table.

The executive priority signal

Deloitte’s June 2024 survey of 200 US insurance executives (100 L&A, 100 P&C) found that 35% chose fraud detection as one of their top five areas for developing or implementing generative AI applications in the coming year [4]. That’s the highest-priority AI use case in insurance as measured by executive intent. And it’s happening for two reasons that claims leaders should understand clearly:

  1. The economics are obvious. Reducing fraud leakage goes straight to combined ratio, with near-immediate measurable impact.
  2. The threat environment is changing fast. AI-generated fraud - deepfake damage photos, AI-written claim narratives, synthetic identity attacks - is escalating. Carriers without modern fraud detection infrastructure are becoming soft targets.

The next section explains why the specific shift to FNOL-timed fraud scoring is what actually moves these numbers.

Section 3: Why fraud scoring at FNOL is structurally different

In traditional claims workflows, fraud flags typically surface 7-14 days into case handling. An adjuster reviews the claim, spots inconsistencies, refers to SIU, SIU investigates. By the time the fraud determination is made, three things have happened that complicate the outcome:

  1. Initial reserves are set. Adverse development is now baked into the quarter.
  2. Adjuster and sometimes SIU time is spent. LAE is incurred.
  3. Some disbursements may have gone out. Interim payments, settlement advances, or inspection fees may already be flowing.

Reversing direction at that point is expensive, legally complicated, and operationally disruptive. The industry's 20-40% catch rate for soft fraud partly reflects this timing problem - not the difficulty of detection in principle.

What changes at FNOL

Running fraud scoring at FNOL, in parallel with triage and data extraction, changes the economics completely:

  • Before reserves are set. The fraud score informs initial reserve decisions.
  • Before adjuster time is committed. High-risk claims route to SIU from day one; low-risk claims proceed without fraud friction slowing them down.
  • Before disbursements start. No recovery process needed.
  • Before the customer relationship is established. A denied claim at FNOL is a policy conversation; a denied claim after two weeks of adjuster contact is a dispute.

The Decerto Claims AI platform demonstrates this pattern in production: an incoming claim with photos and a handwritten form is scored against fraud signals - image authenticity, policy history, claim pattern, exclusion patterns - before an adjuster opens the file [5]. The adjuster sees the fraud signals on the first-look screen, not after six days of case handling.

The mathematical consequence

If your catch rate on soft fraud is 25% today, moving fraud scoring to FNOL with a well-designed three-layer architecture can realistically lift that to 35-40% within 12 months of deployment. At an industry P&C fraud footprint of $45 billion, that 10-15 percentage point improvement per carrier compounds across the industry into the $80-160 billion cumulative savings projection Deloitte cites [3].

For your operation specifically, the math is carrier-specific. A $500M premium carrier with 10% fraud exposure has roughly $50M in fraud attempts annually; at a 25% catch rate, about $37.5M of that leaks out as paid claims. Lifting the catch rate to 40% recovers roughly $7.5M annually. That's the business case for FNOL fraud scoring, and it's why automated claims processing tools have moved from "nice to have" to "standard infrastructure" over the past 18 months.
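
The carrier-specific math above can be sketched in a few lines. The carrier profile is the hypothetical one from this section, not a benchmark - substitute your own book's figures:

```python
# Illustrative fraud-leakage math for a hypothetical $500M premium carrier.
# All inputs come from the example above; adjust for your own book.

premium = 500_000_000          # annual gross written premium
fraud_exposure_rate = 0.10     # ~10% of claims dollars carry a fraud element
catch_rate_today = 0.25        # current soft-fraud catch rate
catch_rate_target = 0.40       # post-FNOL-scoring target

fraud_exposure = premium * fraud_exposure_rate            # ~$50M at risk annually
leakage_today = fraud_exposure * (1 - catch_rate_today)   # ~$37.5M paid out

# Each percentage point of catch rate recovers 1% of the exposure.
recovered = fraud_exposure * (catch_rate_target - catch_rate_today)

print(f"Exposure: ${fraud_exposure/1e6:.1f}M, "
      f"leakage today: ${leakage_today/1e6:.1f}M, "
      f"recovered at target: ${recovered/1e6:.1f}M")
```

The point of writing it out is that every input is observable in your own data: premium, estimated fraud incidence, and current catch rate are enough to size the program budget.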

Section 4: The three-layer FNOL fraud scoring architecture

Strong FNOL fraud scoring is not one model. It’s three parallel checks, all completing in under 30 seconds, each addressing different fraud vectors. In my experience designing these with carriers, getting all three layers working together is what separates effective programs from checkbox compliance.

Layer 1: External database lookups

Every incoming claim should, within seconds of FNOL, be checked against external fraud databases. The core ones for US P&C:

  • NICB ForeWarn and IDEA. National Insurance Crime Bureau’s databases of known fraud patterns, flagged entities, vehicle histories, and claim anomalies. Standard API integration for NICB member carriers.
  • ISO ClaimSearch. Cross-carrier claim database with 1.5+ billion records. Identifies claim duplications across carriers, related claims patterns, and known-fraud flags. Essential for catching ring fraud that crosses carrier boundaries.
  • Industry-specific databases. Varying by line of business - e.g., CARFAX / AutoCheck for auto, state fraud bureaus for workers’ comp, medical review databases for bodily injury.
  • Watchlists. OFAC sanctions, PEP lists, internal carrier blacklists. Often overlooked but critical for compliance and specific fraud types.

What to build: a parallel query engine that fires all applicable database checks simultaneously at FNOL, completing in under 10 seconds. Results are attached to the claim record as structured data, not free text.

Integration pattern I recommend: abstraction layer between your claims system and the external databases. If NICB updates their API (and they do), you change the abstraction layer, not every integration point. The abstraction also handles rate limiting, retry logic, and graceful degradation when databases are down - all of which matter at CAT-scale volume.
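
A minimal sketch of that parallel query engine, using asyncio with a per-source timeout so one slow or unavailable database degrades gracefully instead of blocking FNOL. The source names and the `query_source` stub are illustrative, not real NICB or ISO API clients - those would sit behind the abstraction layer described above:

```python
import asyncio

# Hypothetical external source check; a real integration would live behind
# the abstraction layer (auth, rate limiting, retries) described above.
async def query_source(name: str, claim_id: str) -> dict:
    await asyncio.sleep(0.01)  # stand-in for an HTTP call
    return {"source": name, "claim_id": claim_id, "hits": []}

async def fnol_database_checks(claim_id: str, timeout_s: float = 10.0) -> list:
    """Fire all applicable checks simultaneously; degrade gracefully on failure."""
    sources = ["nicb_forewarn", "iso_claimsearch", "ofac_watchlist"]
    tasks = [asyncio.wait_for(query_source(s, claim_id), timeout_s) for s in sources]
    results = await asyncio.gather(*tasks, return_exceptions=True)

    structured = []
    for source, result in zip(sources, results):
        if isinstance(result, Exception):
            # Record the failure as structured data so the adjuster (and a
            # later retry job) knows this check is pending, not clean.
            structured.append({"source": source, "status": "unavailable"})
        else:
            structured.append({**result, "status": "ok"})
    return structured

results = asyncio.run(fnol_database_checks("CLM-2026-001"))
```

Note that failures are attached to the claim record as structured data rather than silently dropped - "check pending" and "check clean" are different states, and conflating them is a common integration bug.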

Layer 2: Internal pattern matching

Layer 1 catches fraud that’s already been fraud somewhere. Layer 2 catches fraud that matches patterns your own historical data has seen.

This is where machine learning models earn their place in the architecture. Trained on your historical claims data (with fraud outcomes labeled), these models identify claims that pattern-match against known fraud types even when the specific claimant or details are new. Common patterns the models learn:

  • Timing patterns. Claims filed immediately after policy inception, or immediately before renewal, or at specific time-of-month patterns that correlate with your historical fraud.
  • Geographic clustering. Claims concentrated in specific zip codes or near specific repair shops, medical providers, or legal firms at rates inconsistent with population density.
  • Documentation patterns. Specific document formats, handwriting styles, or photo metadata that match known fraud rings.
  • Claim composition patterns. Specific combinations of damage types, severities, or witness patterns that your historical data correlates with fraud.

What to build: a scoring model trained on at least 18 months of your historical claims data, with fraud outcomes labeled, retrained quarterly against new data. Output is a score (0-100) with contributing factors clearly enumerated - not a black box.
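
As an illustration of what "score with contributing factors clearly enumerated" means in practice, here is a deliberately simplified linear scoring pass. The feature names and weights are hypothetical; a production Layer 2 model would learn them from labeled historical claims, but its output should have this same enumerable shape:

```python
# Illustrative shape of an explainable Layer 2 score: each feature's
# contribution is enumerated, not hidden. Weights are hypothetical stand-ins
# for what a trained model would learn from labeled historical claims.

WEIGHTS = {
    "days_since_policy_inception_lt_30": 18.0,
    "repair_shop_in_flagged_cluster": 25.0,
    "photo_metadata_inconsistent": 22.0,
}

def score_claim(features: dict) -> dict:
    contributions = {f: w for f, w in WEIGHTS.items() if features.get(f)}
    raw = sum(contributions.values())
    return {
        "score": min(100.0, raw),               # clamp to the 0-100 scale
        "contributing_factors": contributions,  # what a DOI auditor will ask for
    }

result = score_claim({
    "days_since_policy_inception_lt_30": True,
    "photo_metadata_inconsistent": True,
})
# result carries a score of 40.0 plus the two factors that produced it
```

The `contributing_factors` map is the part that survives a regulatory audit; the score alone does not.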

Critical design consideration: the model needs to be explainable. The NAIC Model Bulletin on AI, adopted December 4, 2023 and in force in 24 US jurisdictions as of August 2025, requires that AI decisions affecting consumers be explainable to regulators [6][7]. A black-box fraud score whose only justification is "the model recommended denial" won't survive a DOI audit. A scored output with specific contributing factors will.

Layer 3: Rules-based scoring from SIU expertise

Layer 3 is often the most valuable layer and the most underinvested. Your SIU team knows things about fraud that aren’t in any public database and aren’t cleanly captured in your historical claims data. That tacit expertise - “when we see X and Y and Z together, it’s almost always fraud” - needs to become scoring rules that run on every claim, not intuition that fires only when adjusters flag something.

This is the rules engine problem. In my experience, carriers with mature fraud programs treat the rules engine as a first-class component of the fraud architecture, not an afterthought. Common characteristics:

  • Business users author rules. Your SIU analysts can write and test rules without filing IT tickets. This is where rule volatility comes in - fraud patterns evolve, and rules need to evolve with them. A six-week IT release cycle for rule changes is a bottleneck that lets fraudsters adapt faster than your defenses.
  • Rules are versioned. Every rule change is traceable. When a rule change affects a claim decision, the specific rule version is part of the audit trail.
  • Rules have weights and composition. Individual rules fire as signals. The scoring engine composes them with learned weights into a final score. This is more useful than either “one hit and it’s fraud” or “needs five hits to count.”
  • Rules are tested against historical data. When an SIU analyst writes a new rule, the system shows how it would have scored the last 12 months of claims. Calibration happens before deployment, not through production false positives.

For the broader pattern of externalizing volatile business logic to rules engines with enterprise-grade safeguards - which applies across claims, underwriting, and pricing - see my thinking on time-to-market principles for insurance IT.

How the three layers compose

The output of the three layers is a unified fraud score with specific contributing signals from each layer. When the adjuster opens the claim, they see:

  • Overall fraud score (0-100 or equivalent)
  • Layer 1 signals: specific database hits and what they matched
  • Layer 2 signals: which pattern matches from internal data fired, and their weights
  • Layer 3 signals: which SIU rules fired, and the business-user-authored explanations

This is what a decision surface looks like for fraud specifically. The adjuster isn’t investigating from scratch - they’re confirming or overriding what the system has already found.
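
A sketch of how that unified record might be assembled. The layer weights, field names, and record shape are assumptions for illustration - the essential property is that every layer's signals survive composition and reach the adjuster's screen:

```python
from dataclasses import dataclass, field

@dataclass
class LayerResult:
    layer: str                 # "external_db", "internal_ml", or "siu_rules"
    score: float               # 0-100 within the layer
    signals: list = field(default_factory=list)

def compose_fraud_score(layers: list, weights: dict) -> dict:
    """Weighted composition of the three layers into one adjuster-facing record."""
    total_weight = sum(weights[l.layer] for l in layers)
    overall = sum(l.score * weights[l.layer] for l in layers) / total_weight
    return {
        "overall_score": round(overall, 1),
        "signals_by_layer": {l.layer: l.signals for l in layers},
    }

decision = compose_fraud_score(
    [
        LayerResult("external_db", 80.0, ["ISO ClaimSearch: related claim match"]),
        LayerResult("internal_ml", 40.0, ["timing: claim 12 days after inception"]),
        LayerResult("siu_rules", 60.0, ["rule new_policy_water_damage v3 fired"]),
    ],
    weights={"external_db": 0.4, "internal_ml": 0.3, "siu_rules": 0.3},
)
# decision holds one overall score plus per-layer signals for the first-look screen
```

Whether the composition is a simple weighted average (as here) or a learned meta-model is a calibration decision; what matters architecturally is that the per-layer signals are never discarded.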

Section 5: Integration patterns - NICB, ISO ClaimSearch, and internal systems

Integration is where 40-60% of fraud program deployment time actually gets spent. The AI models and rules engines are mature. The integration with existing carrier infrastructure is where projects slow down.

NICB integration

NICB membership is table stakes for US P&C fraud programs. Key integration patterns:

  • ForeWarn API. Real-time queries at FNOL for entity checks, vehicle history, claim flags. Typical response time under 2 seconds.
  • IDEA (ISO Insurance Database Exchange). Historical pattern data, referenced alongside ForeWarn for comprehensive checks.
  • Batch reporting. Submit your closed claims back to NICB for database enrichment. This is mandatory for membership and valuable for the industry broadly.

Practical note: NICB API rate limits matter at CAT-scale volume. Your integration needs to handle surge traffic gracefully - queue non-urgent claims, prioritize high-risk pre-flagged ones, have fallback logic for brief API unavailability.
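
One way to sketch that surge handling: a priority queue drained within a per-window rate budget, so pre-flagged high-risk claims are queried first and the rest wait for the next window. The priority bands and budget are illustrative assumptions:

```python
import heapq

# Sketch of CAT-surge handling for rate-limited external lookups: drain a
# priority queue within a per-window budget, high-risk claims first.
# Priority bands and the budget figure are illustrative.

PRIORITY = {"high_risk": 0, "standard": 1, "low_risk": 2}

def drain_lookup_queue(claims: list, budget: int) -> list:
    """claims: (claim_id, risk_band) pairs. Returns claim_ids queried this window."""
    heap = [(PRIORITY[band], claim_id) for claim_id, band in claims]
    heapq.heapify(heap)
    queried = []
    while heap and len(queried) < budget:
        _, claim_id = heapq.heappop(heap)
        queried.append(claim_id)
    return queried  # anything left in the heap waits for the next window

sent = drain_lookup_queue(
    [("CLM-1", "low_risk"), ("CLM-2", "high_risk"), ("CLM-3", "standard")],
    budget=2,
)
# the high-risk claim jumps the queue; the low-risk claim defers
```

In production the deferred claims would be re-enqueued rather than dropped, and the budget would track the actual API rate limit window.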

ISO ClaimSearch integration

ISO ClaimSearch sits alongside NICB with complementary coverage. Key patterns:

  • Real-time search at FNOL. Every new claim searched against 1.5+ billion historical records for duplicates and related claims.
  • Pre-filing screening. For high-value or high-risk lines, search before even formal FNOL acceptance.
  • Link analysis. Advanced integrations surface ring patterns across multiple claimants, addresses, or incidents that look unrelated on individual examination.

Internal system integration

Layer 2 (internal pattern matching) and Layer 3 (SIU rules) require deep integration with your claims system and data warehouse:

  • Claims core integration. Guidewire ClaimCenter, Duck Creek Claims, or custom legacy - the integration patterns match those I covered for AI claims processing in the FNOL Revolution Core article.
  • Data warehouse / lake integration. Historical claims data, fraud outcomes, external data enrichment. For ML model training, you need clean historical data going back 18-36 months.
  • Policy administration integration. Policy history, effective dates, coverage types, prior-claim patterns all factor into fraud scoring.

Legacy system considerations

If your claims core is a heavily customized legacy system without modern APIs, the pattern I recommend is the same one I recommend for FNOL AI generally: build an intermediate data layer that normalizes legacy data into a modern shape, and let the fraud scoring work against the normalized layer rather than directly against the legacy core. This is the same approach I’ve recommended for legacy modernization broadly - it’s the one that actually ships on time and lets you replace components without rebuilding the whole stack.

Section 6: SIU rules engines - codifying expert knowledge into scoring rules

I want to spend a section specifically on Layer 3 because it’s the part carriers most commonly underinvest in.

Your SIU team knows things. Years of investigating fraud have given them pattern recognition that doesn’t show up in your historical claims data cleanly. “When I see a water damage claim within 60 days of policy inception on a commercial property that also has a recent mortgage refinance, I investigate.” That’s expert knowledge. The question is whether it’s running on every claim you process, or only the ones your adjusters flag intuitively.

The four characteristics of a strong SIU rules engine

  1. Business-user authoring. SIU analysts can author rules directly, without IT tickets. The interface is a rules editor, not a development environment. Rules look like: “IF claim_type = ‘water_damage’ AND policy_age_days < 60 AND property_type = ‘commercial’ AND [additional condition] THEN fraud_signal = ‘new_policy_water_damage’ WITH weight = 15”
  2. Safe deployment. New rules deploy to a scoring shadow mode first, where they fire on incoming claims and record their output but don’t affect adjuster screens. SIU reviews the shadow-mode output for 2-4 weeks before promoting rules to production. This catches calibration problems before they create adjuster or customer-facing issues.
  3. Historical backtesting. When an analyst writes a new rule, the system shows how it would have scored the last 12-18 months of claims. Did it catch known fraud? Did it generate false positives? What was the fraud-to-FP ratio? Calibration happens before deployment, not through production incidents.
  4. Version control and audit trail. Every rule change is logged with author, timestamp, and business justification. When a rule affects a claim decision, the specific rule version is part of the audit trail. This matters both operationally and for NAIC Model Bulletin compliance.
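
The four characteristics above can be sketched together: a rule as versioned, weighted data (mirroring the water-damage example) plus a backtest pass over historical claims. Field names and the rule condition are hypothetical illustrations of the pattern, not a real rules-engine API:

```python
from dataclasses import dataclass

# Sketch of an SIU rule as data (versioned, weighted, shadow-deployable)
# plus a backtest over closed claims. Field names are hypothetical.

@dataclass(frozen=True)
class Rule:
    name: str
    version: int
    weight: float
    shadow: bool  # shadow-mode rules record output but don't affect screens

    def fires(self, claim: dict) -> bool:
        return (claim["claim_type"] == "water_damage"
                and claim["policy_age_days"] < 60
                and claim["property_type"] == "commercial")

def backtest(rule: Rule, history: list) -> dict:
    """How would this rule have scored the last N months of closed claims?"""
    fired = [c for c in history if rule.fires(c)]
    true_hits = sum(1 for c in fired if c["confirmed_fraud"])
    return {
        "fire_rate": len(fired) / len(history),
        "substantiation_rate": true_hits / len(fired) if fired else 0.0,
    }

rule = Rule("new_policy_water_damage", version=1, weight=15.0, shadow=True)
report = backtest(rule, [
    {"claim_type": "water_damage", "policy_age_days": 30,
     "property_type": "commercial", "confirmed_fraud": True},
    {"claim_type": "auto_collision", "policy_age_days": 400,
     "property_type": "n/a", "confirmed_fraud": False},
])
# report shows the fire rate and fraud-to-FP ratio before the rule ships
```

Treating the rule as immutable data (frozen, versioned) is what makes the audit-trail requirement cheap: the rule version that affected a claim decision is just a record, not a code archaeology exercise.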

Why this matters for fraud program effectiveness

In my experience, the carriers with the highest catch rates aren’t the ones with the fanciest ML models. They’re the ones where SIU analysts can codify a newly-spotted fraud pattern as a production rule within 48-72 hours of identifying it. That agility - the ability to adapt faster than fraudsters - compounds into the catch rate advantage over time.

The carriers who still have 6-12 week IT release cycles for fraud rules are functionally running 2019-era fraud programs regardless of what ML models they’ve deployed. The rules engine is the speed layer.

A practical case from my work

In one carrier project, we implemented a proprietary rules engine called Embargo, built on advanced matching logic. Its specific purpose was detecting individuals and entities listed on blacklists - sanctions lists, watchlists, criminal databases. The engine compared customer data against international sources like the Dow Jones Watchlist in real time, detecting non-obvious matches such as name spelling variations or transliterations within milliseconds. This wasn’t a hypothetical capability - it ran on every claim and policy in production. The measurable outcome was catching entities that traditional exact-match searches missed, which had been the carrier’s primary sanctions compliance exposure.

The point is not that this specific engine is the right answer for every carrier. The point is that rules engines worth the name operate at this level of technical sophistication and business-user usability simultaneously.

Section 7: AI-era fraud vectors - the new attack surfaces

Fraud detection in 2026 is fighting a different battle than in 2022. Three new vectors have emerged that automated claims processing tools need to address specifically.

Deepfake damage photos

Generative AI makes fabricating damage photos trivially easy. A photo of actual damage can be altered to expand the damage area, age can be removed, or entirely synthetic damage images can be generated that look convincing to human adjusters and legacy OCR systems.

Deloitte’s claims practice documented this shift in 2025: “As you get more virtual, it increases the probability that you are experiencing some of this [fraud]. And our ability to detect it is going up. AI is really going to be critical because it can find almost the pixel-level variation in a photograph, or detect that the entire photograph itself is generated by artificial intelligence” [3].

What to build: image authenticity checks at FNOL as part of Layer 1 or Layer 2 of the scoring architecture. Modern vision models can detect generative AI artifacts, metadata inconsistencies, and reuse patterns (same photo submitted to multiple claims across carriers). This capability didn’t exist reliably 18 months ago; it does now.
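
As a deliberately simplified sketch of the reuse-pattern check: flag a photo whose bytes have already appeared on another claim. Production systems layer perceptual hashing and generative-artifact detection on top of this - exact-hash matching is only the cheapest first pass, and the function below is an illustration, not a vendor API:

```python
import hashlib

# Simplified reuse check: exact-duplicate photo detection via content hash.
# Real systems add perceptual hashing (catches re-encodes/crops) and
# generative-AI artifact detection; this is just the first, cheapest layer.

seen_hashes = {}  # photo content hash -> claim_id that first submitted it

def check_photo_reuse(claim_id, photo_bytes):
    digest = hashlib.sha256(photo_bytes).hexdigest()
    if digest in seen_hashes and seen_hashes[digest] != claim_id:
        return f"photo reused from claim {seen_hashes[digest]}"
    seen_hashes.setdefault(digest, claim_id)
    return None

first = check_photo_reuse("CLM-1", b"...jpeg bytes...")  # first sighting, no flag
flag = check_photo_reuse("CLM-2", b"...jpeg bytes...")   # same bytes, new claim
```

Even this trivial check catches the laziest cross-claim photo recycling; the cross-carrier version of the same idea is what ISO ClaimSearch provides at industry scale.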

AI-generated claim narratives

Fraudsters are increasingly using LLMs to generate polished, consistent claim narratives that don’t exhibit the inconsistencies human-written fraudulent claims used to show. Traditional NLP-based fraud indicators (grammar errors, inconsistent detail levels, implausible timelines) are less useful against AI-generated text.

What to build: narrative coherence checks that compare claim descriptions against physical evidence, policy history, and external data. AI-generated text can be internally consistent but externally implausible - the fraud indicator moves from “inconsistency” to “implausibility against external ground truth.”

Synthetic identity and ring orchestration

Large-scale fraud rings increasingly use AI to orchestrate claims across multiple synthetic or stolen identities, making detection through traditional identity-based matching less effective.

What to build: link analysis capabilities in Layer 2 that look for non-obvious connections - shared payment methods, adjacent addresses, similar timing patterns - across claims that don’t share obvious identity markers. ISO ClaimSearch provides some of this natively, but carrier-specific link analysis catches patterns specific to your book.
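
A pure-Python sketch of that link analysis: union-find clustering of claims that share any attribute value, even when the identities on the claims differ. The attribute names are illustrative; a production implementation would run over a graph store with fuzzy matching rather than exact values:

```python
from collections import defaultdict

# Sketch of Layer 2 link analysis: cluster claims that share any attribute
# value (payee account, address, phone) even when identities differ.
# Union-find over (attribute, value) pairs; attribute names are illustrative.

def cluster_claims(claims: dict) -> list:
    parent = {cid: cid for cid in claims}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    owner = {}  # (attribute, value) -> first claim seen with that value
    for cid, attrs in claims.items():
        for key in attrs.items():
            if key in owner:
                union(cid, owner[key])
            else:
                owner[key] = cid

    groups = defaultdict(set)
    for cid in claims:
        groups[find(cid)].add(cid)
    return [g for g in groups.values() if len(g) > 1]  # only linked clusters

rings = cluster_claims({
    "CLM-1": {"payee_account": "A-991", "address": "12 Oak St"},
    "CLM-2": {"payee_account": "A-991", "address": "44 Elm Rd"},
    "CLM-3": {"payee_account": "B-123", "address": "44 Elm Rd"},
    "CLM-4": {"payee_account": "C-555", "address": "9 Pine Ave"},
})
# CLM-1/2/3 cluster through a shared payee and a shared address; CLM-4 stands alone
```

Note the transitivity: CLM-1 and CLM-3 share nothing directly, but both link to CLM-2 - exactly the pattern that per-claim review misses and ring orchestration exploits.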

The compounding effect

These three vectors compound. An AI-generated narrative paired with deepfake photos submitted by a synthetic identity is a fraud attempt that would have passed most 2022-era fraud programs. 2026-era automated claims processing tools specifically address this compound attack, but only if all three detection capabilities are built in from the start.

Section 8: Automated claims processing tools - vendor capabilities to evaluate

When evaluating automated claims processing tools with a fraud detection focus, here are the specific capabilities I’d require.

Core capabilities checklist:

  1. FNOL-timed fraud scoring - not post-review, not nightly batch. Real-time parallel execution of all scoring layers completing in under 30 seconds.
  2. All three scoring layers integrated - not just one. External database lookups AND internal pattern matching AND SIU rules, with composition logic that weights them together.
  3. NICB and ISO ClaimSearch integrations ready - out-of-the-box, not custom. Evaluating a vendor without these is starting from 6-12 months behind.
  4. Image authenticity detection - including AI-generated image detection, not just basic metadata checks. This is table stakes in 2026, not a premium feature.
  5. SIU business-user rules authoring - with backtesting, shadow mode deployment, and version control.
  6. Explainable output with contributing signals - for NAIC Model Bulletin compliance. Black-box scores whose only justification is "the model flagged it" are not acceptable.
  7. Audit trail built into the architecture - every score decision captured with contributing factors, ready for DOI audit response without manual reconstruction.
  8. Integration with your claims core - Guidewire, Duck Creek, or custom. Vendors with production references on your specific core materially shorten your timeline.

For vendor comparison across Hyperscience, Shift Technology, Tractable, and platform partners, see Section 6 of the AI Claims Processing pillar.

Fraud detection vendor comparison - at a glance

The table below is how I’d frame the fraud-specific vendor options for a claims leader comparing choices in 2026. It’s not exhaustive - the fraud detection market has dozens of vendors - but these are the ones I see come up most often in US P&C carrier evaluations.

| Vendor | Model type | Specialty | Best fit | Key limitation |
|---|---|---|---|---|
| Shift Technology | Buy (SaaS) | AI-driven fraud detection across the claims lifecycle; 3x hit rate vs rules-based | P&C carriers with high-volume fraud exposure | Fraud-specific; still need separate FNOL, triage, coverage tools |
| FRISS | Buy (SaaS) | Real-time fraud detection with native Guidewire ClaimCenter integration | Carriers on Guidewire core seeking fast deployment | Deepest integration is with Guidewire; less mature outside that ecosystem |
| SAS Fraud Framework | Buy (platform) | Combined supervised + unsupervised ML, network analysis, SIU case tooling | Large global insurers needing extensive audit and reporting | Heavy deployment footprint; high cost; long implementation |
| BAE Systems NetReveal | Buy (SaaS) | Entity resolution and network visualization for ring detection | Carriers fighting organized fraud rings with shared addresses/providers | Specialized for network fraud; less effective on individual claim fraud |
| Verisk ClaimSearch + Analytics | Buy (data + analytics) | Contributory industry database with 1.5B+ records; overlap and billing pattern detection | Every US P&C carrier at some level - baseline industry utility | External data layer only; still need internal scoring architecture |
| LexisNexis Risk Solutions | Buy (data + analytics) | Identity, device, and regulatory data for scoring and external intelligence | Carriers needing strong identity verification and KYC signals | External data layer; complements rather than replaces internal models |
| Decerto Claims AI | Partner (custom platform) | End-to-end three-layer fraud architecture (external DB + ML + SIU rules) with Embargo rules engine | US P&C carriers 200-2,000 FTE seeking integrated fraud + claims processing | Partnership model - longer initial scoping than pure SaaS |

A few things worth noting about the table above. First, Verisk ClaimSearch is effectively table stakes for US P&C - almost every carrier uses it, and the question is how deeply it’s integrated into scoring workflows, not whether to use it. Second, Shift Technology and FRISS are both strong specialty vendors - FRISS if you’re on Guidewire and want the tightest native integration, Shift if you want fraud ML strength independent of your core system. Third, SAS and BAE Systems are platform-scale deployments that make sense for the largest carriers but can feel like overkill for mid-market. Mixing and matching is common - I’ve seen carriers run Verisk ClaimSearch + Shift Technology + internal SIU rules engine in combination, which covers all three layers of the architecture from Section 4 without picking a single mega-vendor.

What Decerto Claims AI brings specifically to fraud

Decerto’s Claims AI platform handles all three scoring layers, with demonstrated integration with external databases, vision-based image authenticity checks, and a rules engine (Embargo) designed for business-user authoring. Real product demonstrations show fraud signal detection as part of the FNOL workflow - not a separate module [5]. For P&C carriers in the 200-2,000 FTE range, this is the partnership model that typically produces better five-year fraud program economics than either pure SaaS or in-house build.

Section 9: Measuring fraud program ROI

Finally, the metrics. Fraud programs deserve their own executive dashboard because their metrics are different from general claims operations.

Primary metrics

Fraud catch rate (%). Percentage of fraud attempts detected before payout, measured against estimated total fraud attempts. Industry context from Deloitte: soft fraud 20-40%, hard fraud 40-80% [3]. Target: move up 10-15 percentage points within 12 months of FNOL scoring deployment.

False positive rate (%). Percentage of flagged claims that turn out to be legitimate. Critical to track because too-high false positives damage customer experience and waste adjuster time. Target: under 5% for high-severity flags, under 15% for low-severity flags.

Time to fraud determination (days). From FNOL to fraud determination (or cleared). Industry traditional: 14-21 days. Target with FNOL scoring: under 5 days for clear signals, under 10 days for complex cases needing SIU investigation.

Fraud leakage recovered ($). Annual estimate of fraud payments prevented by the detection program. The hard dollars that justify the program budget.
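
The primary metrics above reduce to simple ratios over confusion-matrix counts. A sketch, with illustrative monthly figures (not benchmarks), assuming you can estimate missed fraud from post-closure sampling or industry incidence rates:

```python
# Primary fraud-program metrics from confusion-matrix counts.
# Inputs are illustrative monthly figures; missed_fraud is typically
# estimated via post-closure audit sampling, since it isn't observed directly.

def fraud_metrics(flagged_fraud, flagged_legit, missed_fraud, prevented_dollars):
    total_fraud_attempts = flagged_fraud + missed_fraud
    total_flags = flagged_fraud + flagged_legit
    return {
        "catch_rate": flagged_fraud / total_fraud_attempts,
        "false_positive_rate": flagged_legit / total_flags,
        "leakage_recovered_usd": prevented_dollars,
    }

m = fraud_metrics(flagged_fraud=70, flagged_legit=10,
                  missed_fraud=130, prevented_dollars=1_200_000.0)
# a 35% catch rate with a 12.5% false positive rate on these inputs
```

The denominator choice is the part worth getting right: catch rate is measured against estimated total fraud attempts, false positive rate against total flags. Mixing the two denominators produces dashboards that look better than the program actually is.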

Secondary metrics

SIU referral rate. Percentage of total claims referred to SIU. Should stabilize; spikes indicate model drift or calibration problems.

SIU substantiation rate. Percentage of referred claims substantiated as fraud. Should be rising as detection quality improves.

Adjuster-generated referrals vs. system-generated. System-generated should rise over time as adjusters trust system output.

Rule effectiveness. Per-rule metrics on fire rate, substantiation rate, and dollar impact. Rules not pulling their weight should be reviewed or retired.

Benchmark references

For the broader ROI metrics dashboard in claims operations, see Section 8 of the AI Claims Processing pillar. For context on how fraud metrics relate to overall claims handling KPIs, see 5 Principles of Effective Claims Handling.

Section 10: FAQ and related reading

Frequently asked questions

How do automated claims processing tools differ from traditional fraud rules engines?

Traditional fraud rules engines typically run one layer: business-user-authored rules firing on structured claim data. Automated claims processing tools in 2026 integrate three layers simultaneously: external database lookups, ML pattern matching on internal data, and rules-based scoring from SIU expertise. The integration and timing (at FNOL, not post-review) are what make the detection rate improvement possible.

Do we need both NICB and ISO ClaimSearch?

For most US P&C carriers, yes. They have complementary coverage - NICB focuses on known fraud patterns, flagged entities, and vehicle histories; ISO ClaimSearch provides cross-carrier claim matching at scale. Running only one leaves gaps that sophisticated fraud rings exploit. The cost of both subscriptions is typically trivial relative to the fraud prevention value.

Can small carriers afford this architecture?

The economics are different at different scales. Carriers below 200 FTE may be better served by targeted SaaS fraud solutions (Shift Technology is the leading option) rather than building the full three-layer architecture in-house or via platform partner. Mid-to-large carriers typically get better five-year economics from an integrated platform approach.

How long does deployment take?

For a full three-layer architecture from scratch: typically 9-14 months. External database integrations (Layer 1) can go live within 3-4 months. ML pattern matching (Layer 2) requires 18+ months of clean historical data and typically 4-6 months to production. SIU rules engine (Layer 3) with backtesting and business-user tooling is 4-6 months. Parallel execution of these workstreams is how you hit the aggressive end of the timeline.

What’s the biggest deployment risk?

Inadequate SIU team involvement. The rules engine is the component most dependent on expert knowledge, and deployments where SIU is treated as “stakeholders to be informed” rather than “primary users to be co-designed with” produce rules engines that don’t reflect how fraud actually operates in the carrier’s book. In my experience, SIU involvement from month 1 of the project is the single biggest predictor of fraud program success.

How does this relate to NAIC Model Bulletin compliance?

The three-layer architecture, properly designed, aligns naturally with NAIC Model Bulletin requirements. Every score has contributing factors documented. Every rule change is versioned and audited. Every external database query is logged. The decision surface shows adjusters the reasoning behind each score. This is compliance built-in rather than bolted-on, and it’s one of the advantages of designing fraud programs for 2026 rather than retrofitting 2019 programs.

Talk to Decerto about your fraud program architecture

If your current fraud program catches fraud in week two rather than at FNOL, or if your SIU team waits 6-12 weeks for IT releases to deploy new rules, you’re running a 2019-era fraud program in a 2026 threat environment. The gap shows up in your fraud leakage numbers and your J.D. Power claims satisfaction scores, even if it’s not labeled as such.

I’ve helped carriers design the three-layer fraud architecture this article describes, and sequence the deployment so value shows up in months rather than years. Book a 45-minute technical session and we’ll walk through the Claims AI platform against your carrier’s specific fraud exposure and existing infrastructure (NDA signed before the call).

Book 45-min Technical Session with Matthew

Sources and citations

[1] Coalition Against Insurance Fraud. “The Impact of Insurance Fraud on the U.S. Economy.” 2022. Reports $308.6 billion total US insurance fraud, with P&C component approximately $45 billion annually and ~10% of P&C claims containing fraud elements. First comprehensive update to the 1995 $80 billion estimate in 27 years.

[2] Insurance Information Institute (III). “Facts + Statistics: Fraud.” Notes that fraud comprises approximately 10% of P&C insurance losses and loss adjustment expenses annually.

[3] Deloitte / Insurance Journal. “As Insurance Execs Eye AI for Fraud Detection, Deloitte Predicts Billions in Savings.” June 2025. Reports Deloitte projections that P&C insurers could save $80-160 billion in fraudulent claims by 2032 through AI-driven multimodal fraud detection. Details soft fraud detection rates of 20-40% and hard fraud detection rates of 40-80%. Includes commentary from Kedar Kamalapurkar, Deloitte Consulting insurance claims leader.

[4] Deloitte. “Are insurers truly ready to scale gen AI?” Based on June 2024 survey of 200 US insurance executives (100 L&A, 100 P&C). Reports 35% of executives chose fraud detection as one of their top five areas for developing or implementing generative AI applications.

[5] Decerto. “Claims AI Product Demonstrations.” YouTube channel: Decerto (@DecertoSoftware). Fraud signal detection as part of FNOL workflow demonstrated in Use Case 01 (Restaurant Fire) published April 9, 2026. https://youtu.be/x1_dEupkNpE

[6] National Association of Insurance Commissioners (NAIC). “Model Bulletin on the Use of Artificial Intelligence Systems by Insurers.” Adopted December 4, 2023.

[7] S&P Global Market Intelligence. “NAIC membership divided on developing AI model law, disclosure standard.” October 2025. Reports 24 NAIC jurisdictions had adopted the Model Bulletin as of August 2025.

