Your AI Estimate Is Too High. Or Is It?

 

What a 300-estimate study with SmartWrite revealed about why shops are leaving money on the table — and why they don’t know it.

By Collision Hub |  Based on a 2025 multi-state field study of independent collision repair shops

It started the same way in almost every shop that tested the new AI estimating tool, SmartWrite.

The estimate is too high. Way too high. The AI has loaded up line items the estimator would never write. Operations that will get pushed back. Procedures that insurance will never approve. At this rate, every car is going to be a total loss.

So, the estimator starts cutting. Removes what feels excessive. Trims what seems like overreach. Gets the number to something that feels reasonable — something an adjuster might actually agree to pay.

This scene played out 300 times across a multi-state study Collision Hub conducted in 2025 using SmartWrite, an AI-assisted estimating tool that generates comprehensive initial estimates based on the vehicle, the damage, and OEM procedures. And here is what made those 300 estimates remarkable: when we compared the final paid claim amounts to each shop’s historical average severity, the shops that had been trimming and complaining and removing things they thought were overpriced had still been paid more than $1,500 per estimate above their pre-SmartWrite baseline.

They were cutting the estimate down — and still getting paid more, shop after shop.

Estimators complained that the AI was overcharging. Their paid severity still went up more than $1,500 per estimate.

The Study: What We Did and Why

Collision Hub recruited independent collision repair shops across multiple states, representing a mix of shops with direct repair program (DRP) relationships and shops working primarily as independents. We wanted a cross-section that reflected the real market — not just DRP shops managing to insurer guidelines, and not just boutique independents operating entirely outside carrier influence.

Each shop ran estimates through SmartWrite for a defined study period. SmartWrite generates what the industry would call a subtractive estimate: it starts with a comprehensive scope of everything the vehicle potentially requires based on the damage, the VIN, and OEM repair procedures. The estimator’s job is to review that scope and remove anything that isn’t supported by the actual damage or the repair plan.
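In code terms, a subtractive workflow is a filter over a comprehensive starting scope. The sketch below is purely illustrative; the item fields and the damage-support check are hypothetical stand-ins, not SmartWrite’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class LineItem:
    description: str
    amount: float
    supported_by_damage: bool  # does the actual damage call for this operation?

def subtractive_review(full_scope):
    """Start from the comprehensive scope and remove only items the
    actual damage does not support -- never items an estimator merely
    predicts an adjuster will dispute."""
    return [item for item in full_scope if item.supported_by_damage]

# Hypothetical scope: the scan survives review, an unrelated blend does not.
scope = [
    LineItem("Pre-repair diagnostic scan", 129.95, True),
    LineItem("Blend unrelated quarter panel", 240.00, False),
]
kept = subtractive_review(scope)
print([item.description for item in kept])  # ['Pre-repair diagnostic scan']
```

The point of the shape is the default: everything starts in scope, and removal requires a damage-based reason, not a prediction about the payer.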

We collected the estimates, sat with estimators in interviews and focus groups to understand their experience of the tool, and then — critically — compared actual insurer payments received against each shop’s own historical average severity per estimate. We used paid amounts, not estimate face values, because we wanted to know what actually happened to the shop’s bottom line, not what the AI wrote on paper.
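The headline metric reduces to simple arithmetic: each shop’s average paid amount during the study minus that shop’s historical average severity per estimate. A minimal sketch with invented figures (the numbers are illustrative only, not data from the study):

```python
def severity_delta(paid_amounts, historical_avg_severity):
    """Average paid severity during the study period minus the shop's
    own pre-study baseline (positive = paid more per estimate)."""
    study_avg = sum(paid_amounts) / len(paid_amounts)
    return study_avg - historical_avg_severity

# Illustrative figures only -- not numbers from the Collision Hub study.
paid = [4200.0, 5150.0, 3890.0, 4760.0]
delta = severity_delta(paid, historical_avg_severity=2950.0)
print(f"severity change per estimate: ${delta:,.2f}")  # $1,550.00
```

A positive delta means the shop was paid more per estimate than its pre-SmartWrite baseline, which is the comparison the study ran per shop.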

Three hundred estimates. Multi-state. Mixed DRP and non-DRP. Real payments, not projections.

What Estimators Said: The Complaints Were Loud and Consistent

We didn’t have to prompt much in the focus groups. The complaints came fast, and they echoed the first-day reaction: the estimates were too high, the line items excessive, the scope something no adjuster would approve.

These weren’t fringe reactions from one or two skeptical estimators. This was the dominant response across shops, across markets, across both DRP and non-DRP operations. Estimators looked at the SmartWrite output and felt, with genuine professional confidence, that the AI was writing estimates that were disconnected from reality.

So they edited. They removed line items they judged unlikely to be paid. They trimmed operations that felt like they would generate pushback. They got the estimates to a number that matched their sense of what was appropriate — a number shaped by years of experience negotiating with insurance adjusters, working within DRP guidelines, and learning through trial and error what carriers would and wouldn’t approve.

The experience of the tool, for most of these estimators, was frustrating. They felt like they were fighting it rather than using it.

“I’ve been doing this for fifteen years. I know what insurance will and won’t approve.”

What the Numbers Said: Something Completely Different

When we pulled the actual paid claim data and compared it to each shop’s historical baseline, the picture looked nothing like what estimators had described.

Average severity per estimate was up more than $1,500 across the study group. Not on the estimate — on the payment. Insurers were paying more. Shops were receiving more. And this was happening even though estimators had spent their time cutting the AI’s output down to what they thought was a realistic number.

The AI wasn’t overcharging. The shops were undercharging — and had been for years.

Here is why that matters. The items estimators removed as “unrealistic” were items SmartWrite had included because OEM procedures required them. Diagnostic operations. Corrosion protection steps. Seam sealing. Calibration procedures. ADAS-related operations. These are not padding — they are documented requirements for a complete, safe repair. Estimators had been leaving them off not because the vehicles didn’t need them, but because they had learned, over years of carrier interaction, that writing those items generated friction.

The result was that most shops in the study had been systematically underwriting their own work. Not because they didn’t know how to do the repairs. Not because they lacked the technical knowledge. But because they had internalized a version of their own scope that was shaped more by what they expected insurers to argue about than by what the vehicles actually required.

The $1,500 improvement happened even after estimators actively cut the AI scope down. The real gap is almost certainly larger.

And here is the part that should give every shop owner pause: the $1,500 improvement is a floor, not a ceiling. That number reflects what happened after estimators reduced the AI scope. If they had retained everything SmartWrite included and evaluated it against OEM documentation rather than against anticipated insurer resistance, the improvement would have been larger. We were measuring what survived the cut — not what the full, unedited scope would have returned.

DRP Shops vs. Independent Shops: Same Result

One of the more striking findings of the study was that DRP-affiliated shops and non-DRP independent shops showed the same suppression behavior. We expected DRP shops to be more conservative — they operate under carrier guidelines that directly constrain estimate scope, so the instinct to cut items down is structurally reinforced by the program itself.

But non-DRP shops, with no formal carrier relationship constraining them, showed the same pattern. They looked at the comprehensive SmartWrite output and reached for the same edit that their DRP counterparts did: trim the line items that feel like they’ll generate pushback.

What this tells us is that carrier influence on how shops think about estimates has become something deeper than a program requirement. It has become professional instinct. Estimators who have never been on a DRP, who have no formal obligation to any carrier, are still internalizing what they believe carriers will accept as the standard against which they measure a complete estimate. The training the industry has given them — through adjuster negotiations, through supplement fights, through years of hearing “we don’t pay for that” — has become their intuition about what a fair estimate looks like.

That intuition is costing them money.

Why This Happens: The Psychology Behind the Cuts

What the Collision Hub study documented is a form of learned suppression. It is not a character flaw. It is not a lack of skill. It is a predictable cognitive response to years of operating in an environment where writing a complete estimate generated friction and writing a conservative estimate reduced conflict.

When an estimator removes a legitimate line item because they expect an insurer to dispute it, they are not exercising professional judgment about whether the repair requires that operation. They are making a prediction about insurer behavior based on past experience — and preemptively making the concession before the negotiation even starts.

Over time, this pattern becomes automatic. The estimator stops consciously deciding to remove the item. They simply don’t write it, because their trained sense of what a “normal” estimate looks like has been calibrated by an environment in which the carrier’s comfort level, not the vehicle’s repair requirements, defined normal.

This is why the SmartWrite output felt excessive to these estimators. It wasn’t excessive. It was complete. They had simply lost the reference point for what complete looks like, because their reference point had been shaped by years of systematic underpayment.

What This Means for Your Shop

The SmartWrite study carries three practical implications for shop owners and operators.

1. Your historical averages are not a reliable baseline.

If your shop’s average severity has been consistent year over year, that consistency may reflect a stable pattern of underbilling rather than accurate pricing. The $1,500 improvement in the Collision Hub study was measured against shops’ own historical baselines — which means those baselines were already reflecting years of suppressed scope. If your numbers look steady, the question is whether they are steady because you are writing complete estimates, or steady because you have learned to write the same incomplete estimate consistently.

2. The friction you are avoiding is costing you more than the friction would.

Estimators trim items because they expect pushback. But even when the pushback comes, it usually results in the shop being paid for at least some of what it wrote. The items that generate the most friction — diagnostic operations, calibration requirements, corrosion protection procedures — are also the items that add the most to the final paid amount. Avoiding the conversation means giving up the payment. The Collision Hub data suggests that the conversations are worth having far more often than current shop behavior implies.

3. A comprehensive AI scope is a negotiating floor, not a wish list.

The right way to use a tool like SmartWrite is not to treat its output as a fantasy number that needs to be brought down to earth. It is to treat the output as the documented, OEM-supported scope of the repair — and to use that documentation when the conversation with the insurer happens. When the line item is in the estimate because the OEM procedure requires it, the shop has the procedural backing to defend it. When the line item was never written because the estimator pre-empted the argument, the shop has nothing to defend.

The AI is not overcharging. It is documenting what the repair requires. The question is whether your shop is prepared to stand behind that documentation.

The AI is not overcharging. It is documenting what the repair requires.

What to Do Differently

Changing this pattern requires two things: a different tool workflow and a different mindset about what a complete estimate represents.

On the workflow side, resist the instinct to edit the AI output based on what you think insurance will pay. Instead, review each line item against the OEM procedure for that vehicle and that repair. If the procedure requires it, keep it. Remove items that the actual damage does not support — not items that you predict an adjuster will question. Let the documentation be your guide, not your anticipation of the negotiation.
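The decision rule in the paragraph above, keep what the OEM procedure requires and remove only what the damage does not support, can be expressed as a small filter. Field names here are hypothetical, not drawn from any real estimating system:

```python
def review_line_item(item):
    """Decide keep/remove from documentation and damage,
    never from a prediction of adjuster behavior."""
    if item["oem_procedure_requires"]:
        return "keep"    # documented requirement: defendable line item
    if not item["supported_by_damage"]:
        return "remove"  # the actual damage does not call for it
    return "keep"        # damage supports it, so write it

# Hypothetical line items for illustration.
items = [
    {"name": "ADAS calibration", "oem_procedure_requires": True, "supported_by_damage": True},
    {"name": "Unrelated trim removal", "oem_procedure_requires": False, "supported_by_damage": False},
]
decisions = {item["name"]: review_line_item(item) for item in items}
print(decisions)  # {'ADAS calibration': 'keep', 'Unrelated trim removal': 'remove'}
```

Notice what never appears in the function: any estimate of what an adjuster will say. That is the whole discipline the workflow change asks for.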

On the mindset side, talk to your team about where their sense of a “normal” estimate came from. Help them understand that their calibration was set by an industry environment in which systematic underpayment was common, and that the AI’s comprehensive output is not overreach — it is the repair the vehicle actually needs. Estimators who understand this are better positioned to retain legitimate line items and have confident conversations when adjusters push back.

The Collision Hub study showed that even without this mindset shift — even when estimators were actively cutting scope they believed was excessive — the subtractive AI workflow still produced a $1,500 improvement in paid severity. With the mindset shift, with training that helps estimators evaluate scope against OEM requirements rather than insurer expectations, the improvement will be larger.

The Bottom Line

Three hundred estimates. Multi-state. Real payments against real historical baselines. The result was unambiguous: shops that used SmartWrite’s comprehensive subtractive estimating output as their starting point got paid more — even when they spent the whole time complaining about how high it was.

The AI was not overcharging. The shops had been undercharging. The difference between those two things is more than $1,500 per estimate, and it has been accumulating on every job, every month, for years.

The question is not whether your shop can afford to use an AI estimating tool. The question is whether you can afford to keep writing estimates the same way you always have.

About This Study

This article is based on a 2025 field study conducted by Collision Hub examining 300 estimates completed using SmartWrite, an AI-assisted estimating tool used alongside CCC ONE, across a multi-state sample of independent collision repair shops with a mix of DRP-affiliated and non-DRP operations. Severity outcomes were measured using actual insurer payments received, compared against each participating shop’s historical average severity per estimate. Qualitative data was gathered through estimator interviews and focus groups conducted during the study period.

 

Collision Hub  |  collisionhub.com

For information about Collision Hub research programs and training resources, visit collisionhub.com.