Bad logging is invisible until 2am when your system is on fire and you’re grepping through walls of text that don’t tell you anything useful. Good logging is a first-class citizen of your application—not an afterthought bolted on when something breaks.

Spring Boot gives you a solid logging foundation out of the box. Logback is pre-configured, SLF4J is wired in, and you get sensible defaults for development. The problems start when teams carry those development defaults into production, log at the wrong level everywhere, and have no consistent way to correlate logs across requests. This guide covers how to do it right.

How Spring Boot Logging Works

Spring Boot auto-configures Logback as the default logging implementation. You interact with it through SLF4J—a logging façade that decouples your code from the specific logging library underneath. That’s the Logger interface you use in your code, not Logback directly.

The dependency chain looks like this:

Your code → SLF4J API → Logback (via slf4j-api + logback-classic)

When you pull in spring-boot-starter, you get spring-boot-starter-logging transitively, which includes Logback, SLF4J, and the bridge libraries that reroute other logging frameworks (Log4j, JUL, JCL) through SLF4J. This matters because many libraries you depend on use different logging APIs internally. Spring Boot ensures they all funnel through Logback.

In your code, you declare a logger like this:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Service
public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public Order processOrder(String orderId) {
        log.info("Processing order {}", orderId);
        // ...
    }
}

If you’re using Lombok, @Slf4j generates the logger field for you:

@Slf4j
@Service
public class OrderService {
    public Order processOrder(String orderId) {
        log.info("Processing order {}", orderId);
        // ...
    }
}

Always use {} placeholders instead of string concatenation. log.info("Order: " + orderId) builds the string whether or not INFO is enabled. log.info("Order: {}", orderId) skips the string construction entirely if the log level is above INFO. For DEBUG statements in hot paths, this matters.
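When an argument is itself expensive to compute, the placeholder alone doesn't help, because the argument is evaluated before the call. A common pattern is to guard the whole statement with isDebugEnabled(); this sketch uses a hypothetical buildDebugSummary() helper standing in for the expensive work:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CartService {
    private static final Logger log = LoggerFactory.getLogger(CartService.class);

    void checkout(String cartId) {
        // The {} placeholder avoids string concatenation, but buildDebugSummary()
        // would still execute on every call, so guard the whole statement
        if (log.isDebugEnabled()) {
            log.debug("Cart state before checkout: {}", buildDebugSummary(cartId));
        }
    }

    // Hypothetical helper standing in for an expensive computation
    private String buildDebugSummary(String cartId) {
        return "summary-for-" + cartId;
    }
}
```

If you're on SLF4J 2.x, the fluent API achieves the same laziness without the explicit guard: log.atDebug().addArgument(() -> buildDebugSummary(cartId)).log("Cart state before checkout: {}").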

Log Levels: What They Actually Mean

Spring Boot log levels map to SLF4J’s five levels. Misuse is common: teams log everything at INFO, or sprinkle in DEBUG statements and forget to dial them back before deployment.

  • ERROR: The application cannot proceed. Something failed that requires immediate attention.
  • WARN: Something unexpected happened, but the application recovered. A retry succeeded. A deprecated API was called. Worth monitoring over time.
  • INFO: Normal application events that someone operating the system needs to see: startup, configuration, significant business events (order placed, payment processed).
  • DEBUG: Information useful for diagnosing problems during development. Method entry/exit, intermediate calculations, conditional branches taken.
  • TRACE: Extremely verbose—wire-level data, every iteration of a loop, full request/response bodies.

The practical rule for production: INFO for business events, WARN for recoverable failures, ERROR for things that page someone. Never leave DEBUG or TRACE enabled globally in production; a single busy endpoint can produce gigabytes of log data per hour.

Setting levels in application.properties:

# Root level (applies to everything not explicitly configured)
logging.level.root=WARN

# Your application packages
logging.level.com.yourcompany=INFO

# Third-party libraries you're debugging
logging.level.org.springframework.security=DEBUG

# Hibernate SQL (use sparingly in production)
logging.level.org.hibernate.SQL=DEBUG
logging.level.org.hibernate.type.descriptor.sql.BasicBinder=TRACE

Or in YAML:

logging:
  level:
    root: WARN
    com.yourcompany: INFO
    org.springframework.web: INFO
    org.hibernate.SQL: WARN

Setting root to WARN and your own package to INFO is a good production baseline. It suppresses the verbose framework output while keeping your application’s own log statements visible.

Logback Configuration

Spring Boot’s default Logback configuration is adequate for development but you’ll need to customize it for anything serious. The configuration file is logback-spring.xml (not logback.xml—the -spring suffix lets you use Spring’s <springProfile> and <springProperty> tags).

Place it in src/main/resources/:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <!-- Import Spring Boot's default Logback configuration as a base -->
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="APP_NAME" source="spring.application.name" defaultValue="app"/>

    <!-- Console appender for development -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Rolling file appender -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/${APP_NAME}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>logs/${APP_NAME}-%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <maxFileSize>100MB</maxFileSize>
            <maxHistory>30</maxHistory>
            <totalSizeCap>3GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Development profile: console only, verbose -->
    <springProfile name="dev,local">
        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
        </root>
        <logger name="com.yourcompany" level="DEBUG"/>
    </springProfile>

    <!-- Production profile: file output, less verbose -->
    <springProfile name="prod,staging">
        <root level="WARN">
            <appender-ref ref="FILE"/>
        </root>
        <logger name="com.yourcompany" level="INFO"/>
    </springProfile>

</configuration>

Key points about this configuration:

  • SizeAndTimeBasedRollingPolicy rotates by both time and size—important for high-traffic services that can fill a disk in hours
  • totalSizeCap limits total log storage even as individual files roll over
  • springProfile lets you define different logging behavior per environment without multiple config files
  • springProperty pulls values from your Spring configuration, so APP_NAME matches your actual service name

Structured JSON Logging for Production

Human-readable log formats like %d{...} [%thread] %-5level %logger{36} look good in a terminal. They’re painful to query in ELK, Splunk, Datadog, or any log aggregation platform. These tools parse structured data—when your logs are plain text, you’re paying the platform to run expensive regex parsing on every line.

The solution is JSON logging in production. Each log entry becomes a JSON object with typed fields that your log aggregator can index natively.

Add the Logstash encoder dependency:

Maven:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>

Gradle:

implementation 'net.logstash.logback:logstash-logback-encoder:7.4'

Then update your production appender in logback-spring.xml:

<appender name="JSON_CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <!-- Include standard fields -->
        <includeCallerData>false</includeCallerData>
        <!-- Add custom fields to every log entry -->
        <customFields>{"service":"${APP_NAME}","environment":"${SPRING_PROFILES_ACTIVE}"}</customFields>
        <!-- Include MDC fields automatically -->
        <includeMdcKeyName>traceId</includeMdcKeyName>
        <includeMdcKeyName>requestId</includeMdcKeyName>
        <includeMdcKeyName>userId</includeMdcKeyName>
    </encoder>
</appender>

<springProfile name="prod,staging">
    <root level="WARN">
        <appender-ref ref="JSON_CONSOLE"/>
    </root>
    <logger name="com.yourcompany" level="INFO"/>
</springProfile>

A log entry now looks like this:

{
  "@timestamp": "2026-03-04T14:32:01.451Z",
  "level": "INFO",
  "logger_name": "com.yourcompany.OrderService",
  "message": "Processing order ORD-9821",
  "service": "order-service",
  "environment": "prod",
  "traceId": "4bf92f3577b34da6",
  "requestId": "req-a4b8c2d1",
  "userId": "usr-7329"
}

Now you can filter by traceId in Kibana and see every log line from that request across all services. You can alert on level:ERROR without parsing text. You can aggregate by logger_name to find which class is generating the most noise.
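With the Logstash encoder on the classpath, you can also attach one-off structured fields at the call site rather than through MDC, using the encoder's StructuredArguments helper. A sketch (field names here are illustrative):

```java
import static net.logstash.logback.argument.StructuredArguments.kv;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ShipmentService {
    private static final Logger log = LoggerFactory.getLogger(ShipmentService.class);

    void dispatch(String orderId, int itemCount) {
        // kv() renders "orderId=ORD-9821" in the plain-text message and, with the
        // LogstashEncoder, also emits "orderId" as a top-level JSON field
        log.info("Dispatching shipment {} with {}",
                kv("orderId", orderId), kv("itemCount", itemCount));
    }
}
```

StructuredArguments.value() works the same way but renders only the value (not key=value) in the text message, which reads more naturally in console output.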

MDC for Request Tracing

MDC (Mapped Diagnostic Context) is a thread-local map that Logback includes in every log statement automatically. It’s how you attach a correlation ID, user ID, or request ID to every log line within a request without passing those values through every method call.

A filter that populates MDC for every incoming HTTP request:

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class RequestLoggingFilter extends OncePerRequestFilter {

    private static final Logger log = LoggerFactory.getLogger(RequestLoggingFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain filterChain)
            throws ServletException, IOException {

        String traceId = extractOrGenerateTraceId(request);
        String requestId = UUID.randomUUID().toString().replace("-", "").substring(0, 16);

        MDC.put("traceId", traceId);
        MDC.put("requestId", requestId);
        MDC.put("method", request.getMethod());
        MDC.put("path", request.getRequestURI());

        long startTime = System.currentTimeMillis();

        try {
            filterChain.doFilter(request, response);
        } finally {
            long duration = System.currentTimeMillis() - startTime;
            MDC.put("statusCode", String.valueOf(response.getStatus()));
            MDC.put("durationMs", String.valueOf(duration));

            log.info("Request completed");

            // Always clear MDC — threads are reused from the pool
            MDC.clear();
        }
    }

    private String extractOrGenerateTraceId(HttpServletRequest request) {
        // Accept trace IDs from upstream services (e.g., from an API gateway)
        String incoming = request.getHeader("X-Trace-Id");
        if (incoming != null && !incoming.isBlank()) {
            return incoming;
        }
        return UUID.randomUUID().toString().replace("-", "").substring(0, 16);
    }
}

MDC.clear() in the finally block is not optional. Servlet containers reuse threads from a pool. If you don’t clear MDC, the next request on that thread inherits the previous request’s context—trace IDs bleed between requests, and you end up correlating completely unrelated operations.

Now, in any service class called during that request, every log statement automatically includes traceId and requestId:

@Service
public class PaymentService {
    private static final Logger log = LoggerFactory.getLogger(PaymentService.class);

    public PaymentResult charge(PaymentRequest request) {
        log.info("Initiating payment for amount {}", request.getAmount());
        // traceId and requestId from MDC are included automatically
        // ...
    }
}
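For code that runs outside a request filter (scheduled jobs, message listeners), SLF4J's MDC.putCloseable gives you the same put-then-clear discipline via try-with-resources. A sketch, with jobId as an example key:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

class ReconciliationJob {
    private static final Logger log = LoggerFactory.getLogger(ReconciliationJob.class);

    void runNightly(String jobId) {
        // The jobId key is removed when the try block exits, even on exceptions
        try (MDC.MDCCloseable ignored = MDC.putCloseable("jobId", jobId)) {
            log.info("Reconciliation started");
            // ... job work; every log line in this scope carries jobId ...
        }
    }
}
```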

For virtual threads (Spring Boot 3.2+ with spring.threads.virtual.enabled=true), MDC still works as expected: MDC is backed by a ThreadLocal, and each virtual thread carries its own thread-local values, so a request handled on its own virtual thread keeps its own context. The caveat is the same as with platform threads: MDC context doesn't follow work you hand off to another thread, so copy it explicitly when you do.

Async Logging

Synchronous logging blocks your application threads. For high-throughput services, this matters. The fix is an AsyncAppender wrapper:

<appender name="ASYNC_FILE" class="ch.qos.logback.classic.AsyncAppender">
    <!-- Queue capacity — increase if you see dropped messages -->
    <queueSize>512</queueSize>
    <!-- Drop messages when queue is 80% full (only affects TRACE/DEBUG/INFO) -->
    <discardingThreshold>20</discardingThreshold>
    <!-- Block instead of drop when queue is full (safer, but adds latency) -->
    <neverBlock>false</neverBlock>
    <!-- Include caller data (method, line number) — expensive, disable in production -->
    <includeCallerData>false</includeCallerData>
    <appender-ref ref="FILE"/>
</appender>

discardingThreshold of 20 means Logback starts dropping TRACE, DEBUG, and INFO messages when the queue is 80% full. WARN and ERROR are never dropped. This is a reasonable tradeoff—you lose verbose logging under load but keep the important signals.
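One caveat with async appenders: events still sitting in the queue when the JVM exits can be lost. Logback can register a shutdown hook that stops the logging context and flushes pending events; a minimal addition to logback-spring.xml (the bare element uses Logback's default hook implementation):

```xml
<configuration>
    <!-- Flush async queues on JVM shutdown before the context is torn down -->
    <shutdownHook/>

    <!-- appenders and profiles as before -->
</configuration>
```

Spring Boot's logging.register-shutdown-hook property addresses the same concern at the framework level.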

Development vs Production Configuration

The two environments have different needs:

  • Format: human-readable (colored, with timestamps) in development; JSON (machine-parseable) in production
  • Output: console in development; console or file in production, depending on infrastructure
  • Root level: INFO in development, WARN in production
  • App package level: DEBUG in development, INFO in production
  • Caller data: optional in development, disabled in production (performance)
  • Log rotation: not needed in development, required in production

In containerized production environments (Kubernetes, ECS, Cloud Run), log to stdout/stderr in JSON format and let the container runtime collect logs. Don’t write to files inside containers—they don’t persist across restarts, and you’d need sidecar containers or volume mounts to ship them. Let your log aggregation platform handle storage.

In non-containerized production environments (bare metal, VMs), write to rotating files and use a log shipper (Filebeat, Fluentd) to forward to your aggregation platform.

Common Mistakes

Logging exceptions without the stack trace. This loses the most valuable information you have.

// Wrong — loses the stack trace
log.error("Payment failed: " + e.getMessage());

// Correct — includes the full stack trace
log.error("Payment failed for order {}", orderId, e);

The SLF4J convention: if the last argument to a log method is a Throwable, it is treated as the exception to log, and it doesn't need its own {} placeholder. Always include it.

Leaving debug logging enabled globally in production. A single Spring Boot application with DEBUG on the root logger can generate hundreds of thousands of log lines per minute. Even if your storage can handle it, the I/O will impact throughput.

Logging sensitive data. Passwords, API keys, full card numbers, session tokens—these should never appear in logs. Audit your log statements at code review time. Consider a custom Logback filter that scrubs known patterns before writing.
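One way to enforce this mechanically is a scrubbing step that masks known patterns before the message is written. The core is plain regex replacement, sketched below with illustrative patterns for card numbers and bearer tokens; in Logback you could wire it in as a custom ClassicConverter or a rewriting filter. Treat the patterns as starting points, not a complete inventory of your sensitive data.

```java
import java.util.regex.Pattern;

class SensitiveDataMasker {
    // Illustrative patterns only; real card and token formats need more care
    private static final Pattern CARD_NUMBER = Pattern.compile("\\b\\d{13,16}\\b");
    private static final Pattern BEARER_TOKEN = Pattern.compile("Bearer\\s+[A-Za-z0-9._-]+");

    // Replace matches with a fixed mask before the message reaches the appender
    static String mask(String message) {
        String out = CARD_NUMBER.matcher(message).replaceAll("****");
        return BEARER_TOKEN.matcher(out).replaceAll("Bearer ****");
    }
}
```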

Not logging enough at INFO in your own code. Framework noise convinced teams to suppress everything. Your own business events—order created, payment charged, user authenticated, job completed—should be at INFO. These are the events someone needs to see without running a debugger.

Using synchronous appenders for high-volume services. If you’re handling thousands of requests per second, synchronous disk I/O in your logging path adds measurable latency. Wrap file appenders in AsyncAppender.

Forgetting to clear MDC in async operations. If you submit tasks to an ExecutorService or use @Async, the thread pool threads won’t have the request’s MDC context. Pass relevant values explicitly and set MDC at the start of the task.

// Capture MDC before submitting to thread pool
Map<String, String> contextMap = MDC.getCopyOfContextMap();

executor.submit(() -> {
    if (contextMap != null) {
        MDC.setContextMap(contextMap);
    }
    try {
        processAsync(orderId);
    } finally {
        MDC.clear();
    }
});
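If you're using Spring's @Async with a ThreadPoolTaskExecutor, this capture-and-restore boilerplate can be centralized in a TaskDecorator so every submitted task inherits the caller's MDC automatically. A sketch, assuming you define your own executor bean (names are arbitrary):

```java
import java.util.Map;
import org.slf4j.MDC;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
class AsyncConfig {

    @Bean
    ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // Decorate every task: capture MDC on the submitting thread,
        // restore it on the worker thread, clear it when the task finishes
        executor.setTaskDecorator(runnable -> {
            Map<String, String> context = MDC.getCopyOfContextMap();
            return () -> {
                if (context != null) {
                    MDC.setContextMap(context);
                }
                try {
                    runnable.run();
                } finally {
                    MDC.clear();
                }
            };
        });
        return executor;
    }
}
```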

Actuator and Runtime Log Level Changes

Spring Boot Actuator exposes an endpoint for changing log levels at runtime without restarting the application. This is invaluable when diagnosing production issues—you can temporarily enable DEBUG for a specific package, capture what you need, then dial it back.

Enable the endpoint in your configuration:

management:
  endpoints:
    web:
      exposure:
        include: loggers
  endpoint:
    loggers:
      enabled: true

Query current levels:

curl http://localhost:8080/actuator/loggers/com.yourcompany.OrderService
# {"configuredLevel":"INFO","effectiveLevel":"INFO"}

Change a level at runtime:

curl -X POST http://localhost:8080/actuator/loggers/com.yourcompany.OrderService \
  -H 'Content-Type: application/json' \
  -d '{"configuredLevel":"DEBUG"}'

Reset to default:

curl -X POST http://localhost:8080/actuator/loggers/com.yourcompany.OrderService \
  -H 'Content-Type: application/json' \
  -d '{"configuredLevel":null}'

Secure this endpoint. In production, it should be behind authentication or network controls—anyone who can POST to it can enable verbose logging that impacts performance.
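With Spring Security on the classpath, a sketch of locking the actuator down might look like this (the OPS role and HTTP Basic are placeholder choices; adapt them to your auth setup):

```java
import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
class ActuatorSecurityConfig {

    @Bean
    SecurityFilterChain actuatorChain(HttpSecurity http) throws Exception {
        // Scope this filter chain to actuator endpoints only
        http.securityMatcher(EndpointRequest.toAnyEndpoint())
            .authorizeHttpRequests(auth -> auth
                // Health stays open for load balancer checks
                .requestMatchers(EndpointRequest.to("health")).permitAll()
                // Everything else, including /actuator/loggers, requires the OPS role
                .anyRequest().hasRole("OPS"))
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}
```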

Putting It Together

A production-ready logging setup for a Spring Boot microservice:

  1. Use SLF4J (LoggerFactory.getLogger) everywhere, never import Logback classes directly
  2. logback-spring.xml with <springProfile> blocks for dev (console, human-readable) and prod (JSON, WARN root, INFO for your packages)
  3. A OncePerRequestFilter that sets traceId and requestId in MDC and clears it in finally
  4. Logstash encoder for JSON output in production
  5. AsyncAppender wrapping your production appender if you’re handling significant throughput
  6. Actuator loggers endpoint enabled, secured, available for runtime debugging
  7. Log review as part of code review—catch missing exception parameters, sensitive data, wrong levels

Logging done right is a force multiplier on everything else. When something breaks in production, the difference between “we found and fixed it in 10 minutes” and “we spent 4 hours guessing” often comes down to what was logged and whether you can query it efficiently.