A monolith that has served its purpose well is easy to underestimate. It shipped the product, scaled the user base to its current level, and gave the engineering team a single, understandable system to reason about. Then, gradually, the signs appear. Deployments stretch to forty-five minutes. A bug in the invoicing module breaks the login screen. The fastest engineers spend the majority of each sprint managing the complexity of a system that was designed for a different era.
This is the moment most engineering leaders start thinking about microservices. It is also the moment most microservices migrations go wrong.
The outcome Nalashaa sees most often in assessments of stalled migrations is not a technology failure. It is a sequencing failure: teams that began extracting services before they had enforced clean domain boundaries inside the monolith and carried every coupling problem into the new architecture. The result is a distributed system that is harder to operate than the monolith it replaced.
This post covers how to avoid that outcome: when migration is the right call, which patterns work, and the phased roadmap that delivers genuine scalability gains without breaking the system that currently pays the bills.
First: Should You Actually Migrate?
Most vendor content about microservices skips this question entirely. It is the most important question to answer before any architecture discussion.
Microservices add real operational overhead: distributed-systems complexity, independent deployment per service, a separate data store per service, service discovery, inter-service communication failures, and distributed tracing requirements. Absorbing that overhead, independent CI/CD pipelines, per-service monitoring, cross-service debugging, requires a team large enough that the architecture does not consume more capacity than it frees up.
There are three situations where staying with a well-structured monolith is the right call.
Small Teams
If your engineering team is fewer than ten people, the operational overhead of microservices will consume more engineering capacity than the architecture frees up. Independent CI/CD pipelines per service, separate monitoring per service, cross-service debugging: each of these assumes teams that can own services end to end, and that organizational structure requires the team size to match.
Rapidly Changing Requirements
Microservices require stable domain boundaries. If your product's core concepts are still evolving, for instance if the definition of what a "patient" or an "order" or an "account" means in your system is still shifting, drawing service boundaries now locks you into premature domain definitions that will be expensive to undo later. Wait until the domain model has stabilized.
No Real Scaling Pressure
If your monolith is not actually causing a scaling problem, do not create a distributed systems problem to solve a theoretical one. The right question is not whether microservices are popular. It is whether your specific bottleneck requires it.
For most teams at this crossroads, the right first move is a modular monolith: enforce strict internal domain boundaries, clean up cross-module dependencies, and treat that boundary-enforcement work as the architectural foundation from which service extraction can happen incrementally.
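The boundary-enforcement work in a modular monolith can be made mechanical. The sketch below is illustrative, not from the article: each module declares which other modules it may depend on, and a check flags undeclared cross-module imports so the build can fail on them. (In Python, tools such as import-linter do this properly; the module names here are hypothetical.)

```python
# Toy boundary check for a modular monolith. Module names and the
# allowed-dependency map are illustrative assumptions.
ALLOWED = {
    "billing":    {"shared"},            # billing may only touch shared code
    "scheduling": {"shared", "billing"}, # scheduling may call billing's interface
}

def check(module: str, imports: set[str]) -> list[str]:
    """Return the imports that violate the declared module boundaries."""
    return sorted(imports - ALLOWED.get(module, set()) - {module})

# An undeclared dependency from billing to scheduling is a violation:
violations = check("billing", {"shared", "scheduling"})
```

Running a check like this in CI turns "strict internal domain boundaries" from a code-review aspiration into an enforced invariant, which is exactly what later service extraction depends on.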
The Four Migration Patterns with Proven Track Records
For teams where migration is the right call, two factors determine whether it succeeds: the pattern chosen for extraction, and the discipline applied to domain boundaries.
The Strangler Fig Pattern
Named after the fig tree that gradually envelops and replaces its host, the Strangler Fig pattern routes new functionality through a new service while the monolith continues handling existing features. Traffic is gradually redirected as features migrate, the monolith shrinks as the service layer grows, and eventually the monolith can be retired.
This is the most widely recommended pattern for large monoliths where a full rewrite is too risky. It allows the team to deliver new features and migrate existing ones in parallel, with each migration independently testable and reversible. The scale at which this approach can operate is illustrated by Amazon's own internal migration: a program spanning 800 services and thousands of microservices, in which the most critical services recorded a 40% reduction in latency while handling twice the previous transaction volume after replatforming.
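The mechanics of the Strangler Fig reduce to a routing decision at the edge. A minimal sketch, with hypothetical route prefixes: migrated paths go to the new service layer, everything else stays with the monolith, and the route table grows as features move.

```python
# Minimal Strangler Fig routing facade. The route prefixes and target
# names are illustrative assumptions, not a specific system's config.
MIGRATED_ROUTES = {"/billing", "/scheduling"}  # features already extracted

def route(path: str) -> str:
    """Send migrated paths to the new service layer; everything else
    stays with the monolith until its feature is extracted."""
    prefix = "/" + path.strip("/").split("/")[0]
    return "new-service-layer" if prefix in MIGRATED_ROUTES else "monolith"
```

In practice this facade is usually an API gateway or reverse proxy rather than application code, but the shape is the same: one place where traffic is redirected, so each migration step is independently reversible by editing the route table.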
Domain-Driven Design for Service Boundaries
Domain-Driven Design (DDD) provides the framework for deciding where one service ends and another begins. The key concept is a bounded context: a domain of the business with its own consistent model, its own language, and its own data ownership.
The most common migration mistake is cutting service boundaries along technical lines: "the database service," "the notification service," "the API service." This produces a distributed monolith, services that are physically separate but tightly coupled through shared databases and synchronous API dependencies. Services should always be cut along business domain lines: "the billing service," "the patient service," "the scheduling service." Each service owns its data, its business logic, and its interface.
Branch by Abstraction
This pattern introduces an abstraction layer inside the monolith: an interface that the rest of the codebase uses to communicate with a specific module. The new service is built behind that interface. When ready, a feature flag switches the implementation from the monolith module to the new service. The monolith code can be safely removed after the new service has proven stable in production. For instance, a logistics platform used this pattern to extract its routing engine: the interface stayed unchanged while the implementation switched from a monolith module to a containerized service behind a feature flag.
Branch by Abstraction is particularly useful when the monolith and the new service need to run in parallel without maintaining two full implementations of the same business logic.
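A sketch of the mechanics, with hypothetical names loosely modeled on the logistics example above: the rest of the monolith depends only on the interface, and the feature flag is the single switch point between implementations.

```python
# Branch by Abstraction sketch. RoutingEngine and both implementations
# are illustrative; the service client is stubbed to stay self-contained.
from abc import ABC, abstractmethod

class RoutingEngine(ABC):
    @abstractmethod
    def plan(self, origin: str, destination: str) -> list[str]: ...

class MonolithRoutingModule(RoutingEngine):
    def plan(self, origin, destination):
        return [origin, destination]          # legacy in-process logic

class RoutingServiceClient(RoutingEngine):
    def plan(self, origin, destination):
        # In production this would call the extracted service over the
        # network; stubbed here so the sketch runs standalone.
        return [origin, "hub", destination]

def get_routing_engine(use_new_service: bool) -> RoutingEngine:
    # The feature flag is the only switch point; callers never change.
    return RoutingServiceClient() if use_new_service else MonolithRoutingModule()
```

Because callers hold only a `RoutingEngine`, flipping the flag back is a one-line rollback, which is what makes the parallel-run period safe.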
Event-Driven Decoupling
Synchronous API calls between services create coupling at the network layer. If Service A must call Service B synchronously to complete an operation, A's availability is tied to B's. Event-driven architecture replaces many of these dependencies with an event bus: Service A publishes an event; Service B consumes it at its own pace.
Kafka and RabbitMQ are the most widely used event streaming platforms for this pattern. Event-driven approaches are particularly valuable for workflows involving notifications, analytics, audit trails, and any operation where the caller does not need an immediate response.
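The decoupling idea can be shown without a broker. The in-memory bus below is a teaching sketch only, with hypothetical topic and handler names; in production the bus would be Kafka or RabbitMQ, and consumers would read asynchronously at their own pace rather than in-process.

```python
# Minimal in-memory event bus illustrating publish/subscribe decoupling.
# All names are illustrative; a real system would use a durable broker.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The publisher neither knows nor waits on its consumers.
        for handler in self.subscribers[topic]:
            handler(event)

audit_log = []
bus = EventBus()
bus.subscribe("order.created", lambda e: audit_log.append(e["id"]))
bus.publish("order.created", {"id": "ord-1", "total": 42})
```

Note what the order service does not know: who consumes `order.created`, or whether the audit consumer is even running. That is the availability decoupling the pattern buys.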
90%
of microservices teams still batch-deploy like monoliths, taking on all the architectural complexity without realizing the deployment independence that was the point of the migration. Independent deployment requires CI/CD pipelines, automated tests, and team culture aligned per service.
Source: DEV Community
A Realistic Five-Phase Migration Roadmap
The following phases apply to a team that has already determined that migration is the right call. Each phase has a clear deliverable and a measurable exit criterion.
| Phase | Duration | Deliverable |
|---|---|---|
| 1. Modularize | 4–8 weeks | Strict module boundaries enforced inside the monolith. All cross-domain communication through defined interfaces. |
| 2. Extract High-Load Services | 8–12 weeks | Two or three highest-load or most frequently changed modules extracted as standalone services with independent deployment pipelines. |
| 3. Decouple Data | 6–10 weeks | Each extracted service owns its data store. Event-driven patterns synchronize state across services. No shared databases. |
| 4. Containerize and Orchestrate | 4–8 weeks | All services containerized with Docker, deployed on Kubernetes, with a service mesh for traffic management and observability. |
| 5. Complete and Retire | 12–24 weeks | Remaining modules extracted. Monolith retired. Full observability implemented: distributed tracing, centralized logging, service-level alerting. |
WARNING: Phase 1 is the one most teams skip. Enforcing boundaries before extracting services is not groundwork. It is migration.
FROM NALASHAA'S ARCHITECTURE ASSESSMENTS
In Nalashaa's assessments of stalled migrations, the most consistent failure pattern is not technical. It is sequencing. Teams begin extracting services before enforcing clean domain boundaries inside the monolith, carrying their coupling problems directly into the new architecture. The extracted service still calls back to the monolith database. The event schema replicates the monolith data model. The result is a distributed system that is harder to debug than the monolith it was meant to replace, and a team that has taken on all the operational overhead of microservices without any of the isolation benefits. The teams that avoid this outcome almost universally share one characteristic: they treat Phase 1 as non-negotiable, even when it feels like delay.
The Migration Mistakes That Add Months to Your Timeline
Service Boundaries Defined by Technical Layer, Not Business Domain
Cutting a "data service," an "auth service," and an "API service" out of a monolith produces a distributed monolith: services deployed independently but remaining tightly coupled through shared state and synchronous dependencies. The result is all the operational overhead of microservices with none of the isolation benefits. Boundaries should always follow DDD bounded contexts, not technical layers.
Migrating Too Many Services in Parallel
Running many extractions at once creates compound integration risk that makes it impossible to validate that anything is working correctly. Most teams find that one or two concurrent extractions are the maximum they can validate effectively. Migrate sequentially, validate fully, then proceed.
Ignoring Distributed Systems Complexity
Microservices introduce challenges that do not exist in a monolith: network latency, partial failures, eventual consistency, service discovery, and distributed transaction management. Teams that underestimate this typically encounter it first in production. These require explicit engineering decisions before services are deployed, not after.
Not Investing in Observability Before You Need It
In a monolith, you can trace a request through the codebase with a debugger. In a distributed system, a single user request may traverse five services. Without distributed tracing and centralized logging, debugging production issues is significantly harder than in the monolith you migrated away from. Build observability before you have incidents to diagnose, not during them.
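The core mechanism behind distributed tracing can be sketched in a few lines: a correlation ID minted at the system edge and propagated on every hop, so log lines from five services can be joined into one request timeline. The header name and helper functions below are illustrative assumptions, not any specific tracing library's API.

```python
# Correlation-ID propagation sketch. Header name and helpers are
# hypothetical; real systems use a standard like W3C Trace Context.
import uuid

TRACE_HEADER = "X-Correlation-ID"

def inbound(headers: dict) -> dict:
    """Reuse the caller's ID if present; mint one only at the edge."""
    headers = dict(headers)
    headers.setdefault(TRACE_HEADER, str(uuid.uuid4()))
    return headers

def log(headers: dict, message: str) -> str:
    # Every log line carries the ID so centralized logging can correlate.
    return f"[{headers[TRACE_HEADER]}] {message}"

edge = inbound({})          # request enters service A: ID is minted
downstream = inbound(edge)  # service A calls service B: same ID flows on
```

The point of building this before incidents happen is that the ID must be present on the request that fails; it cannot be added retroactively to a trace you did not capture.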
SCENARIO
A fintech ISV preparing to extract its payment processing module had accumulated twelve years of cross-module database joins: direct table relationships between the billing schema and the user management schema. The team moved straight to extraction without first decoupling those schemas, assuming the data migration could happen in parallel. Three months in, every payment event required a synchronous call back to the user management module to resolve account attributes that had never been migrated into the billing service's own data store. The service was deployed independently but functionally dependent on the monolith for every transaction. The team paused, enforced strict data ownership by migrating the relevant user attributes into the billing service's schema, and re-extracted cleanly. The original approach, continued to completion, was estimated to cost $2.3 million in delayed delivery velocity and accumulated rework (Full Scale, 2025). The intervention cost four weeks; the alternative would have cost significantly more.
The Long View
The migration from monolith to microservices is a multi-year program, not a single project. Teams that approach it as a phased architectural evolution consistently deliver better outcomes than those that attempt it all at once.
The architecture matters. The execution discipline matters more. Enforce boundaries before extracting services. Define domain boundaries before writing service code. Build observability before you have incidents to diagnose. Use the pattern that fits your risk tolerance: the Strangler Fig for large existing systems, Branch by Abstraction for bounded modules, event-driven patterns for async workflows.
The tooling is mature. Docker, Kubernetes, Kafka, and service meshes are proven at scale. What determines whether a migration delivers its promised benefits is the quality of decisions made in the first three phases, before a single service reaches production.
Planning a monolith-to-microservices migration? Or trying to decide whether it is the right move? Nalashaa's architecture teams have guided ISVs and enterprises through this transition across healthcare, logistics, and manufacturing — without disrupting live operations.
Frequently Asked Questions
How do you migrate from monolith to microservices?
The most reliable approach uses the Strangler Fig pattern, extracting services incrementally rather than rewriting everything at once. Start by enforcing strict domain boundaries inside the monolith, then extract the highest-load or most frequently changed modules first, give each service its own data store, and build CI/CD pipelines per service. The critical sequencing constraint is that boundary enforcement must come before extraction.
When should you break apart a monolith?
When specific modules need to scale independently, when the engineering team is large enough (10 or more developers) to own services without excessive coordination overhead, when domain boundaries are stable enough to define service contracts, and when deployment velocity is genuinely constrained by the monolithic deployment model. If none of these conditions apply, a well-structured modular monolith is likely the better choice.
What is the Strangler Fig pattern?
The Strangler Fig pattern gradually migrates functionality from a monolith to a new service architecture by routing new features through the new services while the monolith handles existing ones. Traffic is incrementally shifted to the new services until the monolith can be retired. It is the lowest-risk approach for large legacy systems because each migration step is independently testable and reversible.
What is a distributed monolith?
A distributed monolith is what results when services are physically separated but remain tightly coupled through shared databases or synchronous API dependencies. It delivers all the operational overhead of microservices with none of the isolation benefits. It is the most common failure mode in migrations that skip domain boundary enforcement in the early phases.
How long does a monolith-to-microservices migration take?
A realistic full migration for a medium-sized monolith takes 18 to 24 months across all five phases. Some teams reach a meaningfully improved partially migrated state in 6 to 9 months by completing Phases 1 through 3. The timeline is primarily determined by team size, the quality of existing domain boundaries, and how strictly the team enforces the modular-first principle before extracting services.
What happens if we skip the modularization phase?
The coupling problems inside the monolith transfer directly into the new service architecture. Services that depend on shared databases or undeclared cross-service calls will behave like a distributed monolith: harder to debug than the original system and more expensive to unwind. Nalashaa's assessments consistently identify skipped Phase 1 work as the root cause of stalled migrations.