There's a question that comes up in every growing SaaS company: how do you release a new feature to 5% of your merchants without deploying new code? And how do you kill that feature in 30 seconds if something goes wrong?
If your answer involves a code change, a PR review, a CI pipeline, and a deploy — you're too slow. By the time you've rolled back through that process, merchants have already been impacted.
This is the problem that led me to build the frontend gating logic and shared state models that connect backend configuration classes to merchant-facing UI. The result was a system where deploys and releases became completely independent events. And it changed how our entire product team thought about shipping.
Deploys are not releases
This is the single most important mental model shift for any SaaS engineering team. A deploy puts code into production. A release makes functionality available to users. These should be two separate actions, controlled by two separate mechanisms.
When deploys and releases are coupled, you get a few predictable problems. Releases are batched into sprint cycles because deploys are heavyweight. Rollbacks require redeployments, which are slow and risky. Product managers can't control who sees what without engineering involvement. And every release carries the full risk of a production deploy, even if the actual change is a single UI toggle.
Decoupling them means you can deploy code whenever it's ready — daily, even multiple times a day — with new features sitting behind configuration gates. Then the decision of who sees what, and when, becomes a product decision rather than an engineering event.
How the architecture works
The system I built has three layers, and the key design principle is that each layer has a single, clear responsibility.
The first layer is the backend configuration model. These are typed configuration classes that define the feature surface: what the feature is, what merchant segments it applies to, what the default state is, and what the rollout rules are. This isn't a loose key-value store. It's structured, validated configuration that lives alongside the service code and is version-controlled like any other code artifact.
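A minimal sketch of what such a typed configuration class might look like. All names here (FeatureConfig, RolloutRule, the segment values) are illustrative stand-ins, not the real system's API:

```typescript
// Illustrative sketch of a typed feature configuration class.
// Names and segment values are assumptions, not the real system.

type Segment = "all" | "plus" | "beta_cohort";

interface RolloutRule {
  segment: Segment;
  percentage: number; // 0-100: share of the segment that sees the feature
}

class FeatureConfig {
  constructor(
    public readonly key: string,
    public readonly defaultEnabled: boolean,
    public readonly rollout: RolloutRule[],
  ) {
    // Validation runs at construction time, so a malformed config
    // fails in CI rather than in front of a merchant.
    for (const rule of rollout) {
      if (rule.percentage < 0 || rule.percentage > 100) {
        throw new Error(`${key}: rollout percentage must be 0-100`);
      }
    }
  }
}

// Version-controlled alongside the service code, like any other artifact.
const newDashboard = new FeatureConfig("new_dashboard", false, [
  { segment: "beta_cohort", percentage: 100 },
  { segment: "all", percentage: 5 },
]);
```

Because the config is a class rather than a bare key-value entry, the rollout rules travel with the feature definition and get validated the moment they're constructed.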
The second layer is the state synchronization layer. The frontend needs to know, at render time, which features are active for the current merchant context. This means the backend config state needs to be projected into frontend-consumable shared state — efficiently, without adding latency to page loads, and without requiring the frontend to understand the backend's internal configuration model. We built a thin translation layer that resolves the backend config into a flat, typed feature map that the frontend can consume through a shared state model.
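The resolution step can be sketched roughly as follows, under assumed types: backend rollout rules come in, a flat boolean map keyed by feature name comes out. The hash function, type names, and segment scheme are illustrative, not the production implementation:

```typescript
// Hypothetical translation layer: projects backend config state into
// the flat, typed feature map the frontend consumes.

interface MerchantContext {
  merchantId: string;
  segment: string;
}

interface FeatureRollout {
  key: string;
  defaultEnabled: boolean;
  segments: Record<string, number>; // segment -> rollout percentage
}

// Deterministic 0-99 bucket from merchant id + feature key, so a
// merchant's assignment is stable across page loads and sessions.
function bucket(merchantId: string, featureKey: string): number {
  let h = 0;
  for (const ch of `${merchantId}:${featureKey}`) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % 100;
}

function resolveFeatures(
  configs: FeatureRollout[],
  ctx: MerchantContext,
): Record<string, boolean> {
  const resolved: Record<string, boolean> = {};
  for (const cfg of configs) {
    // Prefer a segment-specific rule, fall back to "all", then default.
    const pct = cfg.segments[ctx.segment] ?? cfg.segments["all"];
    resolved[cfg.key] =
      pct !== undefined
        ? bucket(ctx.merchantId, cfg.key) < pct
        : cfg.defaultEnabled;
  }
  return resolved;
}
```

The important property is that the frontend never sees rollout percentages or segment rules, only the resolved booleans.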
The third layer is the frontend gating logic. This is where the UI decides what to render based on the resolved feature state. The design goal here was to make feature gating feel invisible to product engineers. You shouldn't need to understand the configuration system to gate a component. You check a feature key, you get a boolean, you render or don't. The complexity of segment targeting, gradual rollout percentages, and override rules is entirely abstracted away.
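From a product engineer's point of view, the gate can look roughly like this, assuming the resolved map has been hydrated into shared state at page load. The names are illustrative:

```typescript
// Sketch of the gate as product engineers see it. In the real flow the
// map comes from the state sync layer; it's inlined here for brevity.
const featureState: Record<string, boolean> = {
  new_dashboard: true,
};

function isEnabled(key: string): boolean {
  // Fail closed: a missing or unresolved key renders the old experience.
  return featureState[key] === true;
}

// Component code checks a key and gets a boolean; segment targeting,
// rollout percentages, and overrides were all resolved upstream.
function renderDashboard(): string {
  return isEnabled("new_dashboard")
    ? "<NewDashboard />"
    : "<LegacyDashboard />";
}
```

One check, one boolean, no knowledge of the configuration system required.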
Why structured config beats ad-hoc flags
A lot of teams start with feature flags as a simple boolean toggle — SHOW_NEW_DASHBOARD: true/false — and that works fine for a while. But it breaks down as soon as you need segmented rollouts, dependent features, or any kind of lifecycle management.
We made the decision early to use structured, typed configuration classes rather than a flat flag store. This was more upfront work, but it paid for itself almost immediately.
Typed configs give you compile-time safety. If a feature config is malformed, you catch it before it reaches production, not when a merchant sees a broken UI. They also give you expressiveness — a config class can encode rollout rules, segment targeting, mutual exclusivity with other features, and expiration dates. Try doing that cleanly with a boolean.
And critically, typed configs are self-documenting. When a new engineer looks at a feature's configuration class, they can understand the rollout strategy, the target audience, and the current state without asking anyone. A flat flag store tells you nothing about intent.
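As a rough illustration of the lifecycle rules a boolean flag can't express, here is one way expiration and mutual exclusivity might be encoded and checked. Field and function names are assumptions for illustration:

```typescript
// Hypothetical lifecycle fields attached to a feature config.
interface FeatureLifecycle {
  key: string;
  owner: string; // who to ask when the flag looks stale
  expiresAt: Date; // configs are born with a removal date
  mutuallyExclusiveWith: string[];
}

function validateLifecycle(
  lc: FeatureLifecycle,
  activeKeys: Set<string>,
  now: Date = new Date(),
): void {
  if (lc.expiresAt.getTime() < now.getTime()) {
    throw new Error(`${lc.key} expired; remove it or extend its config`);
  }
  for (const other of lc.mutuallyExclusiveWith) {
    if (activeKeys.has(other)) {
      throw new Error(`${lc.key} cannot be active alongside ${other}`);
    }
  }
}
```

Each of these rules is checkable before the config ever reaches production, which is exactly the point of keeping it typed.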
The product impact: giving PMs a kill switch
The most immediate impact of this architecture wasn't technical — it was organizational. Product managers gained the ability to control feature rollout scope without filing engineering tickets.
Want to enable a new merchant workflow for a specific segment? Change the config. Want to expand from 10% to 50%? Change the config. Something goes wrong and you need to kill a feature across all merchants in 30 seconds? Change the config.
This might sound like a small thing, but it fundamentally changed the relationship between product and engineering. Releases stopped being engineering events. Product could experiment faster, with less risk, and with more granular control than they'd ever had. Engineering could focus on building features rather than managing rollout logistics.
The kill-switch capability alone justified the entire investment. Before this system, rolling back a merchant-facing feature meant reverting code, redeploying, and hoping nothing else in that deploy was affected. After, it was a config change that propagated in seconds. The risk profile of every release dropped dramatically.
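Mechanically, the kill switch amounts to something like this: zero out every rollout rule for one feature and let the next state sync hide it everywhere. The data shape here is assumed, not the real config store:

```typescript
// Sketch of a kill switch over an assumed in-memory config store.
interface LiveConfig {
  segments: Record<string, number>; // segment -> rollout percentage
}

function killFeature(store: Map<string, LiveConfig>, key: string): boolean {
  const cfg = store.get(key);
  if (!cfg) return false;
  for (const segment of Object.keys(cfg.segments)) {
    cfg.segments[segment] = 0; // no code change, no deploy, no rollback
  }
  return true;
}
```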
Lessons from building config-driven UI
A few things I'd pass along to anyone building a similar system.
Don't let your config system become a dumping ground. Every flag or config should have an owner, a purpose, and an expiration date. Stale flags are tech debt that silently accumulates. We built a cleanup process where configs that hadn't been modified in 90 days got flagged for review.
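The 90-day sweep described above can be sketched as a simple filter over config records. The record shape and function name are illustrative:

```typescript
// Hypothetical staleness sweep: anything untouched past the cutoff
// goes to its owner for review.
interface ConfigRecord {
  key: string;
  owner: string;
  lastModified: Date;
}

const STALE_AFTER_DAYS = 90;

function flagStaleConfigs(
  records: ConfigRecord[],
  now: Date = new Date(),
): ConfigRecord[] {
  const cutoff = now.getTime() - STALE_AFTER_DAYS * 24 * 60 * 60 * 1000;
  return records.filter((r) => r.lastModified.getTime() < cutoff);
}
```

Run on a schedule, the output becomes a review queue routed to each config's owner.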
Invest in observability from day one. You need to know, at any moment, which features are active for which merchant segments, and you need to be able to correlate feature state with error rates and performance metrics. Without this, you're flying blind when something goes wrong.
Test the gating logic, not just the features. It's easy to write tests for the feature behind the gate and forget to test the gating behavior itself. What happens when a config is missing? What happens when the state sync layer returns stale data? What happens when two conflicting configs are both active? These are the edge cases that bite you in production.
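Tests for the edge cases above might look like this, under the assumption that the gate is designed to fail closed. The helper is a stand-in for whatever your gating entry point is:

```typescript
// Stand-in gate for testing; treats anything but `true` as off.
function isEnabled(
  state: Record<string, unknown> | null,
  key: string,
): boolean {
  if (state === null) return false; // sync layer returned nothing
  return state[key] === true; // non-boolean values resolve to off
}

// Missing config: the gate fails closed instead of throwing.
if (isEnabled({}, "new_dashboard") !== false) throw new Error("missing key leaked");
// Sync layer returned nothing: still fails closed.
if (isEnabled(null, "new_dashboard") !== false) throw new Error("null state leaked");
// Stale or malformed value: treated as off, not truthy.
if (isEnabled({ new_dashboard: "yes" }, "new_dashboard") !== false) {
  throw new Error("malformed value leaked");
}
```

The common thread: every failure mode of the gating path should degrade to the old experience, never to a broken one.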
And finally: the hardest part isn't the initial build — it's the governance. Once product teams realize they can control rollouts through config, they will create a lot of configs. Having clear naming conventions, ownership rules, and lifecycle management from the start saves you from a tangled mess of flags that nobody understands six months later.
Config-driven UI as a platform capability
What started as a solution to a specific rollout problem became a core platform capability. Today, every new merchant-facing feature at our org ships behind a configuration gate by default. It's not a process we have to enforce — it's just how things are built, because the tooling makes it the path of least resistance.
That's the mark of good platform engineering. Not mandates, but defaults. Make the right thing the easy thing, and teams will do the right thing without being asked.
Config-driven architecture is one of those topics where the details really matter. If you're building something similar or thinking about decoupling deploys from releases, I'd enjoy the conversation — connect with me on LinkedIn.