The decision to migrate from a monolith to microservices is one of the most consequential architectural choices a team can make. It will consume engineering months, reshape how teams work, and introduce an entirely new class of operational problems.

It can also be exactly the right call.

This guide is for engineering leaders and architects who need to make that decision with open eyes—and, if they proceed, execute it without burning down their engineering org.

Monolith vs Microservices: What You’re Actually Trading

Before talking about migration, be honest about what monoliths do well. A well-structured monolith has:

  • Simple local method calls instead of network hops
  • ACID transactions across the entire domain
  • One deployment unit to monitor, version, and roll back
  • Straightforward debugging with a single call stack

Microservices give you:

  • Independent deployment of individual services
  • Technology heterogeneity where it genuinely matters
  • Fine-grained scaling of specific bottlenecks
  • Organizational scalability — teams can work without stepping on each other

The problem is that the second list reads like a marketing brochure. The first list describes real operational advantages that many teams give up without replacing them.

When to Use Microservices: The Real Signals

Stop listening to conference talks and look at your actual pain points. Migration makes sense when you’re experiencing these specific problems:

Deployment Bottlenecks

If releasing a change to the payment module requires coordinating with the search team, the notifications team, and the reporting team — and it’s been that way for months — you have a genuine organizational scaling problem. Independent deployment is worth its cost when deployment coordination is your bottleneck.

Signal: "We can only deploy once a week because too many teams need to sign off"
Solution candidate: Extract high-churn domains into independently deployable services

Wildly Different Scaling Profiles

If your image processing pipeline needs 40 instances during a batch job while your user auth service runs fine on 2, sharing a JVM is wasteful. But “we have two different modules” is not a scaling problem — it’s a theoretical one.

Signal: "We're scaling the entire monolith to handle load on 10% of the codebase"
Solution candidate: Extract compute-heavy workloads into separate services

Team Ownership Has Broken Down

When every PR touches shared code that no one owns, when you’ve had three production incidents because a “minor” change in one module broke an unrelated feature — your domain boundaries have collapsed. Microservices can enforce the boundaries that code reviews couldn’t.

When NOT to Migrate

This is the section most guides skip.

Do not migrate if:

  • You have fewer than 20 engineers. The operational overhead will swamp you.
  • Your monolith is the problem, but your team structure isn’t. Microservices won’t fix organizational dysfunction.
  • You don’t have a mature CI/CD pipeline. Deploying 15 services manually is worse than deploying one monolith manually.
  • Your data model is deeply relational. Splitting a highly normalized database is painful and often produces worse systems.

A useful heuristic: if you can’t articulate which specific team will own which specific service, you’re not ready.

Microservices Migration Strategy: The Strangler Fig Pattern

The strangler fig is named after a tropical vine that grows around a tree until it replaces it. Applied to software: you never rewrite the monolith wholesale. Instead, you extract one capability at a time, routing traffic to the new service while the monolith handles everything else. The monolith shrinks as the services grow.

This is the only migration strategy I recommend. Big-bang rewrites fail at a rate that should embarrass the industry, yet those failures are rarely talked about.

How It Works in Practice

Step 1: Add a routing layer in front of the monolith

Before extracting anything, introduce a proxy (later, your API gateway) that all client traffic passes through. This is infrastructure investment that pays dividends throughout the migration.

# spring-cloud-gateway route configuration
spring:
  cloud:
    gateway:
      routes:
        # Initially, everything goes to the monolith
        - id: monolith-catchall
          uri: http://monolith:8080
          predicates:
            - Path=/**

Step 2: Identify your extraction candidates

Not all modules are equal. Extract in this order:

  1. High-churn code that changes frequently and independently
  2. Code with distinct scaling needs
  3. Code with clean, limited interfaces to the rest of the monolith

Avoid extracting deeply entangled modules first. Start with the outer leaves of your dependency graph, not the core.
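To make the ordering concrete, here's a small sketch — plain Java, with hypothetical module names — that ranks modules by fan-in, i.e. how many other modules depend on them. Low fan-in modules are the outer leaves; high fan-in modules are the core you extract last.

```java
import java.util.*;

// Sketch: rank extraction candidates by fan-in (how many modules depend on them).
// Low fan-in modules are the "outer leaves" — safest to extract first.
public class ExtractionCandidates {

    // dependencies: module -> the modules it depends on (hypothetical data)
    public static List<String> rankByFanIn(Map<String, Set<String>> dependencies) {
        Map<String, Integer> fanIn = new HashMap<>();
        dependencies.keySet().forEach(m -> fanIn.put(m, 0));
        for (Set<String> deps : dependencies.values()) {
            for (String dep : deps) {
                fanIn.merge(dep, 1, Integer::sum); // count one inbound edge
            }
        }
        return fanIn.entrySet().stream()
            .sorted(Map.Entry.comparingByValue()) // lowest fan-in first
            .map(Map.Entry::getKey)
            .toList();
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of(
            "notifications", Set.of("users"),
            "reporting", Set.of("users", "orders"),
            "orders", Set.of("users"),
            "users", Set.of()
        );
        // "users" has the most inbound edges — extract it last.
        System.out.println(rankByFanIn(deps));
    }
}
```

Real codebases call for real data — commit churn from `git log` and package dependencies from your build tool — but the shape of the decision is the same.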

Step 3: Extract the service

Create a new Spring Boot application for the extracted domain. During the transition period, the new service calls back into the monolith for data it doesn’t yet own, or reads from a shared database (temporarily acceptable — see the database section below).

// New UserNotificationService - extracted from monolith
@SpringBootApplication
public class UserNotificationServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(UserNotificationServiceApplication.class, args);
    }
}

@RestController
@RequestMapping("/api/notifications")
public class NotificationController {

    private final NotificationService notificationService;
    // During transition: calls back to monolith for user data
    private final MonolithUserClient monolithUserClient;

    public NotificationController(NotificationService notificationService,
                                  MonolithUserClient monolithUserClient) {
        this.notificationService = notificationService;
        this.monolithUserClient = monolithUserClient;
    }

    @PostMapping("/send")
    public ResponseEntity<Void> sendNotification(@RequestBody NotificationRequest request) {
        UserDto user = monolithUserClient.getUser(request.getUserId());
        notificationService.send(user, request.getMessage());
        return ResponseEntity.ok().build();
    }
}
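The MonolithUserClient above is whatever thin HTTP client your stack prefers. Here's a minimal sketch using the JDK's built-in HttpClient — the /internal/users endpoint is an assumption, and a Spring declarative HTTP interface or Feign client would be the more idiomatic equivalent:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Transition-period client: the extracted service calls back into the monolith
// for user data it doesn't yet own. The endpoint path is hypothetical.
public class MonolithUserClient {

    private final HttpClient http = HttpClient.newBuilder()
        .connectTimeout(Duration.ofSeconds(2)) // fail fast: the monolith is a runtime dependency now
        .build();
    private final String baseUrl;

    public MonolithUserClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    static URI userUri(String baseUrl, long userId) {
        return URI.create(baseUrl + "/internal/users/" + userId); // hypothetical endpoint
    }

    public String getUserJson(long userId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(userUri(baseUrl, userId))
            .timeout(Duration.ofSeconds(2))
            .GET()
            .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Monolith returned " + response.statusCode());
        }
        return response.body(); // deserialize into UserDto with your JSON mapper
    }
}
```

Whatever client you use, set aggressive timeouts: every callback into the monolith is a synchronous coupling you'll want to surface, not hide.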

Step 4: Route traffic to the new service

Update your gateway to send the relevant requests to the extracted service:

spring:
  cloud:
    gateway:
      routes:
        # New: notifications go to the extracted service
        - id: notification-service
          uri: http://notification-service:8081
          predicates:
            - Path=/api/notifications/**

        # Monolith still handles everything else
        - id: monolith-catchall
          uri: http://monolith:8080
          predicates:
            - Path=/**

Step 5: Repeat

Identify the next candidate. Extract. Route. The monolith shrinks; the service mesh grows.

Database Decomposition

This is where most migrations struggle. Teams extract application logic but leave the monolith database shared — and call it done. It isn’t. Shared databases create invisible coupling: schema changes in one service break others, database-level transactions hide business logic that should live in services, and no service can evolve its storage independently.

The path to database-per-service:

Phase 1: Identify bounded contexts in the schema

Map your tables to the domains in your application. You’re looking for clusters of tables that are only joined within a single domain — those are your extraction candidates. Tables that appear in joins across many domains are the problem cases you’ll tackle last.
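This mapping exercise can be partially automated. A sketch, assuming you've exported your foreign keys and assigned each table to a domain (both hypothetical here): any key that crosses a domain boundary marks a join you'll eventually replace with an API call or event.

```java
import java.util.*;

// Sketch: flag foreign keys that cross domain boundaries.
// Tables whose keys stay inside one domain are clean extraction candidates;
// cross-domain keys are the joins you'll tackle last.
public class SchemaBoundaryCheck {

    public record ForeignKey(String fromTable, String toTable) {}

    public static List<ForeignKey> crossDomainKeys(
            Map<String, String> tableToDomain, List<ForeignKey> keys) {
        return keys.stream()
            .filter(fk -> !tableToDomain.get(fk.fromTable())
                .equals(tableToDomain.get(fk.toTable())))
            .toList();
    }

    public static void main(String[] args) {
        // Hypothetical mapping of tables to bounded contexts
        Map<String, String> domains = Map.of(
            "users", "identity",
            "notifications", "messaging",
            "notification_templates", "messaging"
        );
        List<ForeignKey> keys = List.of(
            new ForeignKey("notifications", "users"),                 // crosses domains
            new ForeignKey("notifications", "notification_templates") // stays inside
        );
        // The notifications -> users key must become an API call or event.
        System.out.println(crossDomainKeys(domains, keys));
    }
}
```

On PostgreSQL the foreign-key list comes straight out of the information_schema catalog; the domain assignment is the part that needs human judgment.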

Phase 2: Separate the schema while still sharing the database

Before moving data to a new server, separate it logically. Use database schemas (or your engine's equivalent namespacing), and update each service to connect only to its own schema:

# notification-service application.yml
spring:
  datasource:
    url: jdbc:postgresql://shared-db:5432/app_db
    username: notification_service_user  # User has access ONLY to notification schema
  jpa:
    properties:
      hibernate:
        default_schema: notifications

At the application level, create strict boundaries. No service code reaches into another service’s schema — ever. If you need data from another domain, call its API.
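One way to make that boundary physical, assuming PostgreSQL and the schema and role names from the configuration above: lock the service's database user down at the grant level, so a cross-schema query fails loudly instead of silently coupling services.

```shell
# Restrict notification_service_user to the notifications schema (PostgreSQL).
# Role and schema names match the application.yml above; adjust to your setup.
psql app_db <<'SQL'
REVOKE ALL ON SCHEMA public FROM notification_service_user;
GRANT USAGE ON SCHEMA notifications TO notification_service_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA notifications
  TO notification_service_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA notifications
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO notification_service_user;
SQL
```

Database-level enforcement catches the violations that code review misses.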

Phase 3: Migrate to separate database instances

Once the schema boundaries are clean, moving to separate database instances is a deployment concern, not an application code concern. Use a database migration tool and switch connection strings.

# notification-service after full extraction
spring:
  datasource:
    url: jdbc:postgresql://notification-db:5432/notifications
    username: notification_service

Handling Cross-Service Queries

The hardest problem: you used to do SELECT u.name, n.message FROM users u JOIN notifications n ON u.id = n.user_id. Now users and notifications live in separate databases.

Options:

  • API composition: The client calls both services and joins in memory. Fine for simple cases.
  • Event-driven materialized views: The notification service subscribes to user-changed events and maintains a local projection of the user data it needs. This is the pattern that scales.
  • CQRS with read models: Maintain denormalized read models purpose-built for specific queries.

// Event-driven approach: notification service maintains its own user projection
@Component
public class UserProjectionUpdater {

    private final LocalUserRepository localUserRepository;

    public UserProjectionUpdater(LocalUserRepository localUserRepository) {
        this.localUserRepository = localUserRepository;
    }

    @KafkaListener(topics = "user-events")
    public void onUserEvent(UserEvent event) {
        switch (event.getType()) {
            case CREATED, UPDATED -> localUserRepository.upsert(
                new LocalUser(event.getUserId(), event.getName(), event.getEmail())
            );
            case DELETED -> localUserRepository.delete(event.getUserId());
        }
    }
}
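For comparison, the API-composition option is nothing more than an in-memory join. A self-contained sketch with hypothetical DTOs — the two input lists stand in for the responses from the user and notification services:

```java
import java.util.*;
import java.util.stream.Collectors;

// Sketch of API composition: fetch from two services, join in memory.
public class NotificationView {

    public record User(long id, String name) {}
    public record Notification(long userId, String message) {}
    public record Row(String userName, String message) {}

    // In production, these lists come from the two services' APIs.
    public static List<Row> compose(List<User> users, List<Notification> notifications) {
        Map<Long, String> nameById = users.stream()
            .collect(Collectors.toMap(User::id, User::name));
        return notifications.stream()
            .map(n -> new Row(nameById.getOrDefault(n.userId(), "unknown"), n.message()))
            .toList();
    }
}
```

The weakness shows as soon as you need pagination, sorting, or filtering across the joined result — at that point, move to a materialized view or a CQRS read model.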

API Gateway with Spring Cloud Gateway

Your API gateway is the front door to the service mesh. It handles the concerns that used to be scattered across your monolith’s filter chains: authentication, routing, rate limiting, and observability.

Spring Cloud Gateway is the natural choice in a Spring ecosystem. Here’s a production-ready configuration:

@Configuration
public class GatewayConfig {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
            // Notification service
            .route("notifications", r -> r
                .path("/api/notifications/**")
                .filters(f -> f
                    .requestRateLimiter(c -> c
                        .setRateLimiter(redisRateLimiter())
                        .setKeyResolver(userKeyResolver()))
                    .circuitBreaker(c -> c
                        .setName("notificationCB")
                        .setFallbackUri("forward:/fallback/notifications"))
                )
                .uri("lb://notification-service"))

            // Order service
            .route("orders", r -> r
                .path("/api/orders/**")
                .filters(f -> f
                    .retry(c -> c.setRetries(2).setStatuses(HttpStatus.BAD_GATEWAY)))
                .uri("lb://order-service"))
            .build();
    }

    // Referenced above — these beans must exist in the application context
    @Bean
    public RedisRateLimiter redisRateLimiter() {
        return new RedisRateLimiter(10, 20); // replenishRate, burstCapacity
    }

    @Bean
    public KeyResolver userKeyResolver() {
        return exchange -> Mono.justOrEmpty(
            exchange.getRequest().getHeaders().getFirst("X-User-Id"));
    }
}

Authentication lives at the gateway level — services trust that requests reaching them have already been authenticated:

@Component
public class JwtAuthenticationFilter implements GlobalFilter, Ordered {

    private final JwtValidator jwtValidator;

    public JwtAuthenticationFilter(JwtValidator jwtValidator) {
        this.jwtValidator = jwtValidator;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String token = extractToken(exchange.getRequest());

        if (token == null) {
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }

        return jwtValidator.validate(token)
            .flatMap(claims -> {
                // Propagate user identity to downstream services via headers
                ServerHttpRequest mutatedRequest = exchange.getRequest().mutate()
                    .header("X-User-Id", claims.getSubject())
                    .header("X-User-Roles", String.join(",", claims.getRoles()))
                    .build();
                return chain.filter(exchange.mutate().request(mutatedRequest).build());
            })
            .onErrorResume(e -> {
                exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
                return exchange.getResponse().setComplete();
            });
    }

    private String extractToken(ServerHttpRequest request) {
        String authHeader = request.getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
        return (authHeader != null && authHeader.startsWith("Bearer "))
            ? authHeader.substring(7)
            : null;
    }

    @Override
    public int getOrder() { return -1; }
}

Common Mistakes That Derail Migrations

1. Nano-services

The most common failure mode. Teams over-decompose, creating a UserPreferenceService, UserAddressService, UserNotificationPreferenceService… and then a simple user profile page requires 12 network calls to render. The latency compounds, error handling becomes a maze, and you’ve traded one complex system for a distributed one that’s harder to understand.

Rule of thumb: A service should be owned by a single team and be deployable without coordinating with other teams. If it’s smaller than that, it’s probably too small.

2. Skipping the observability foundation

Once you have 10 services, debugging a production issue without distributed tracing is miserable. Set up your observability stack before you need it, not after.

Spring Boot with Micrometer and OpenTelemetry gets you there:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-spring-boot-starter</artifactId>
</dependency>

# application.yml
management:
  tracing:
    sampling:
      probability: 1.0  # 100% in dev, ~0.1 in high-traffic prod
  otlp:
    tracing:
      endpoint: http://jaeger:4318/v1/traces

3. Keeping the shared database

Already covered above, but worth repeating: splitting the application while sharing the database is not microservices — it’s a distributed monolith, with the downsides of both approaches and the benefits of neither.

4. Ignoring Conway’s Law

Your architecture will mirror your team structure. If three teams share ownership of a service, that service will have three competing design philosophies and unclear accountability. Before you draw service boundaries, draw team boundaries. They should match.

5. Migrating everything at once

The strangler fig pattern exists for a reason. Teams that attempt to rewrite the entire monolith as microservices simultaneously routinely spend 18 months in a never-ending migration with no working software to show for it. Extract one service at a time. Ship it. Learn from it. Repeat.

A Migration Roadmap

A realistic migration timeline for a mid-sized system:

Months 1–2: Infrastructure foundation

  • Set up CI/CD pipelines capable of deploying multiple services independently
  • Deploy Spring Cloud Gateway in front of the monolith
  • Establish distributed tracing, centralized logging, and service health dashboards
  • Extract no functionality yet — get the platform right first

Months 3–4: First extraction

  • Identify one module with clean boundaries and a clear owner
  • Extract it using the strangler fig approach
  • Treat this as a learning exercise, not just a delivery

Months 5–12: Systematic extraction

  • Apply lessons from the first extraction to subsequent services
  • Each extraction should be faster than the last
  • Decommission monolith modules as their services stabilize

After 12 months: Evaluate

  • Is the migration delivering the benefits you expected?
  • Are teams actually deploying independently?
  • Is operational overhead manageable?

Honest answers here may lead you to slow down, speed up, or in some cases, conclude that selective migration (some services extracted, monolith handles the rest) is the right long-term architecture.

What Success Actually Looks Like

After a successful migration, teams should be able to:

  • Deploy their service on their own schedule, without a release train
  • Scale their service independently, without touching adjacent teams
  • Make schema changes to their database without breaking anything else
  • Rewrite their service in a different language if it genuinely makes sense

If you achieve those four things, you’ve migrated successfully. If you’ve split into 30 services but still need to coordinate deploys and still share a database, you have a distributed monolith — and the conversation about how you got there is worth having before going further.

The goal was never microservices. The goal was organizational and technical agility. Microservices are one path to that goal, for teams and systems of sufficient scale. Used correctly, with a clear strategy, they’re worth the investment.

For further reading on the patterns in a live Spring Boot microservices system, see Spring Boot Microservices: Architecture Patterns That Actually Work.

Frequently Asked Questions

When should you migrate from monolith to microservices?

Migrate when deployment cycles are blocked by team coordination, when different parts of the system have drastically different scaling requirements, or when a single domain change requires touching too much of the codebase. Do not migrate just because microservices are popular—a well-structured monolith outperforms a poorly designed microservices system on nearly every operational metric.

What is the strangler fig pattern in microservices migration?

The strangler fig pattern involves incrementally replacing a monolith by extracting functionality piece by piece. New requests are routed to extracted microservices while old functionality stays in the monolith until fully replaced. The monolith gradually shrinks as services grow around it, rather than a big-bang rewrite that carries enormous risk.

How do you handle database migration when splitting a monolith?

Never share a database between microservices as a permanent state. Extract services gradually: first separate application logic, then run with a shared database temporarily, then split the schema. Use the Database-per-Service pattern as the target state. For cross-service data needs, use API calls, events, or materialized views rather than direct database joins.

How does Spring Cloud Gateway work as an API gateway?

Spring Cloud Gateway sits in front of all your microservices and handles cross-cutting concerns: routing, authentication, rate limiting, and request transformation. Configure routes declaratively in YAML or programmatically via the Java DSL. It integrates with Spring Security for JWT validation, Resilience4j for circuit breaking, and Micrometer for observability.

What are the most common mistakes when migrating to microservices?

The biggest mistakes: decomposing too granularly too fast (nano-services with massive network overhead), not establishing shared observability before splitting, keeping a shared database after splitting the application, and ignoring the organizational changes required. Conway’s Law is real—your architecture will reflect your team structure, for better or worse.