Field Notes · 4 min read

We Rolled Out a New System to 200 Locations. It Failed.

Ernest Barkhudarian, Founder

Lessons from scaling a 200-location delivery network — and everything that went wrong

The order management system had to go. Everyone agreed. It was a patchwork of spreadsheets, chat groups, and a database the business had outgrown three times over. Across 200+ delivery locations, people were working around it more than working with it.

So we built the replacement. Genuinely better: cleaner interface, real-time order tracking, automated courier routing, a dashboard for every location manager. Everything the old system should have been.

We built it, documented it, created training materials, scheduled onboarding sessions. Rolled it out to all 200+ locations simultaneously. Launch day felt like a success — everyone logged in, everyone completed the training checklist.

Three months later, I pulled the usage data and my stomach dropped.

The Numbers That Hurt

  • 60% of locations were still running their old process in parallel — the spreadsheets, the chat groups, all of it
  • 25% of locations had stopped using the new system entirely and reverted to paper
  • 15% of locations were using the new system properly — and they were almost all locations where the manager had been personally involved in the feedback process during development

The training was completed. The system was deployed. And the majority of the network was ignoring it.

Why It Failed

Looking back, the failure was textbook. I've since seen the same pattern in every multi-location technology rollout that stumbles:

We built for HQ's needs, not location needs. The dashboard and reporting were great for headquarters. But the day-to-day workflow for a location manager — taking orders, assigning couriers, managing the floor during a rush — was actually slower in the new system than in their familiar spreadsheet. We optimized for visibility and forgot about usability at the point of execution.

Training checked a box but didn't change behavior. Everyone completed the training. Watched the videos, clicked through the modules, passed the quiz. But training completion and actual behavior change are completely different things. When the first busy day hit and the new system felt slower than the old one, people went back to what they knew.

We launched everywhere at once. Two hundred locations on the same day. No pilot phase. No champion locations. No feedback loop between early adopters and the development team. By the time we realized the UX problems at location level, the damage was done — the network had already decided "this system doesn't work."

Nobody at the location level had ownership. The system was something HQ "gave" to locations. Not something locations asked for, contributed to, or had a stake in. When you don't own it, you don't fight for it.


What Actually Works

After that failure, we rebuilt the rollout approach. The second time around, it worked. Here's what changed:

Start with 5 locations, not 200. Pick five locations with engaged managers. Deploy there first. Watch how they actually use the system during a real work day. You'll learn more in one week of real usage at five locations than in six months of planning at HQ.

Fix the location-level workflow first. Before scaling, make sure the system is actually faster and easier for the person using it 50 times a day. If a location manager has to click six times where they used to click twice, they will abandon the system. Every time. It doesn't matter how much better the HQ dashboard looks.

Create champion locations, not training modules. When those first five locations are genuinely happy with the system, they become your proof. Other location managers trust peer feedback more than HQ directives. "Location 47 says it saves them two hours a day" is more persuasive than any training video.

Roll out in waves, not all at once. After the pilot, expand to 20 locations. Then 50. Then the rest. Each wave gives you a feedback cycle to fix issues before they affect the entire network. The locations that come later get a better product because the earlier waves surfaced the problems.

Measure adoption, not training completion. "100% of locations completed training" means nothing. "85% of locations are using the new system as their primary workflow after 30 days" means everything. Track actual usage, not checkbox completion.
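To make that concrete, here is a minimal sketch of what an adoption metric can look like in practice. It assumes a hypothetical 30-day order log where each order is tagged with the location that placed it and the channel it came through; the field names, sample data, and the 80% threshold are illustrative assumptions, not a description of any particular system:

from collections import defaultdict

# Hypothetical 30-day order log: (location_id, channel), where channel is
# "new_system" or "legacy" (spreadsheets, chat groups, paper). Sample data only.
events = [
    ("loc-001", "new_system"), ("loc-001", "new_system"), ("loc-001", "legacy"),
    ("loc-002", "legacy"),     ("loc-002", "legacy"),
    ("loc-003", "new_system"),
]

def adoption_rate(events, threshold=0.8):
    """Share of locations whose primary workflow is the new system.

    A location counts as adopted only if at least `threshold` of its
    orders in the window went through the new system, not merely if
    someone logged in once or finished a training module.
    """
    counts = defaultdict(lambda: [0, 0])  # location -> [new_system orders, total orders]
    for location, channel in events:
        counts[location][1] += 1
        if channel == "new_system":
            counts[location][0] += 1
    adopted = sum(1 for new, total in counts.values() if new / total >= threshold)
    return adopted / len(counts)

print(f"Adoption after 30 days: {adoption_rate(events):.0%}")  # 33% on the sample data

The point of the sketch is the denominator: you divide by every location in the network, and a location only counts as adopted if the new system actually carries its order volume. Training completion never enters the calculation.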

Why This Matters for Franchise Networks

Franchise networks face this exact problem every time HQ mandates a change — new SOPs, new technology, new reporting requirements, new brand standards.

The dynamic is always the same: HQ decides, HQ builds, HQ deploys, locations resist. Not because franchisees are difficult, but because the change was designed from the top without understanding how work actually happens at the bottom.

The franchise networks that successfully deploy new systems are the ones that:

  • Treat franchisees as partners in the rollout, not recipients of it
  • Start small, prove the value at the location level, and then scale
  • Measure whether the change actually made the franchisee's day easier
  • Accept that the first version will be wrong and plan for iteration

The cost of a failed rollout isn't just the development budget. It's the trust deficit it creates. Every failed deployment makes the next one harder — because now the network's default response to any HQ initiative is "here we go again."

Get the first rollout right, and you earn the credibility to make the next one faster. That credibility compounds over time, and it's one of the most undervalued assets a franchisor can build.

Author

Ernest Barkhudarian, Founder

17 years building tech for multi-location businesses — from flower delivery networks to e-commerce operations. Writes about what he learned scaling operations across hundreds of locations, and why he built Franchise.Family.

Related Articles

Field Notes · 4 min read

A Broken Address Field Was Costing Us $2M Across 200 Locations

A single form field bug in a 200-location delivery network was silently killing conversions at every location. Nobody noticed — each location thought it was just a slow day. Here's how we found it, and why your franchise network probably has the same class of problem.

Field Notes · 4 min read

Your Network Will Face a 10x Day. Here's How to Survive It

One holiday turned every location in our 200-site delivery network into a warzone. Same platform, same training — wildly different results. Here's what separated the locations that crushed it from the ones that collapsed, and why franchise networks need to prepare differently.

Field Notes · 4 min read

One Employee, One Promo Code, $700K Gone

An internal 100% discount code meant 'for testing only' leaked to a partner network. 112 orders at $0 in one hour. No hack, no breach — just a missing access control. Here's what happened, and why franchise networks are even more exposed to this kind of loss.