Every application has at least one endpoint that’s too slow and one database query that runs far more often than it should. For a lot of teams, the answer is “add Redis caching.” That’s the right call—but the way you wire it up determines whether it makes things fast and predictable or introduces subtle consistency bugs that only show up under load.
Spring Boot’s cache abstraction makes the wiring straightforward. The tricky parts are serialization, TTL strategy, and understanding what happens when your cache and your database disagree. This guide covers all of it with real configuration, not toy examples.
What the Spring Cache Abstraction Actually Does
Spring’s cache abstraction sits in front of your method calls. You annotate a method with @Cacheable, and Spring intercepts the call, checks the configured cache for an existing result, and either returns the cached value or calls through to the real method and stores the result.
The abstraction is provider-agnostic. The same annotations work with Redis, Caffeine, Ehcache, or even a simple ConcurrentHashMap. Swapping providers is a configuration change, not a code change. That said, each provider has different capabilities—TTL configuration, eviction policies, clustering—so the abstraction has limits.
For most production applications, Redis is the right choice. It’s fast (sub-millisecond reads from the same data center), it survives application restarts, and it scales horizontally. Caffeine is better for in-process caching of data that changes rarely and can tolerate inconsistency across instances.
Dependency Setup
You need two things: the Spring Boot Cache starter and the Redis client.
Maven:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Gradle:
implementation 'org.springframework.boot:spring-boot-starter-cache'
implementation 'org.springframework.boot:spring-boot-starter-data-redis'
Spring Boot auto-configures Lettuce as the Redis client. You don’t need to add it separately—it’s pulled in transitively by the Redis starter. Jedis is the alternative if you have a reason to prefer it, but Lettuce handles connection pooling and reactive support better and is the right default.
Enabling Caching
Add @EnableCaching to a configuration class. Typically this goes on your main application class or a dedicated cache configuration class.
@SpringBootApplication
@EnableCaching
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Without this annotation, the @Cacheable annotations on your methods are silently ignored. There’s no error; your code just hits the database every time. I’ve wasted an embarrassing amount of time debugging “why isn’t my cache working” before remembering this step.
Redis Configuration
The minimum configuration is the Redis connection:
spring:
  data:
    redis:
      host: localhost
      port: 6379
For production, you’ll want more:
spring:
  data:
    redis:
      host: ${REDIS_HOST:localhost}
      port: ${REDIS_PORT:6379}
      password: ${REDIS_PASSWORD:}
      timeout: 2000ms
      lettuce:
        pool:
          max-active: 8
          max-idle: 8
          min-idle: 0
          max-wait: -1ms
  cache:
    type: redis
    redis:
      time-to-live: 3600000 # 1 hour in milliseconds
      cache-null-values: false
      key-prefix: "myapp:"
      use-key-prefix: true
The time-to-live sets a default TTL across all caches. cache-null-values: false prevents caching null results—whether you want this depends on your use case. If your service frequently looks up entities by ID that don’t exist, caching nulls prevents the DB hit but can hide real data availability issues.
Serialization: Use Jackson, Not Java Serialization
By default, Spring Data Redis uses Java serialization for cache values. Don’t use it. Java serialization is brittle (class changes break deserialization), produces large byte arrays, and isn’t human-readable in Redis. You can’t inspect cache contents with redis-cli when everything is serialized Java objects.
Configure Jackson serialization explicitly:
@Configuration
public class CacheConfiguration {

    @Bean
    public RedisCacheConfiguration redisCacheConfiguration(ObjectMapper objectMapper) {
        ObjectMapper cacheObjectMapper = objectMapper.copy()
                .activateDefaultTyping(
                        objectMapper.getPolymorphicTypeValidator(),
                        ObjectMapper.DefaultTyping.NON_FINAL,
                        JsonTypeInfo.As.PROPERTY
                );

        return RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofHours(1))
                .disableCachingNullValues()
                .serializeKeysWith(
                        RedisSerializationContext.SerializationPair.fromSerializer(
                                new StringRedisSerializer()
                        )
                )
                .serializeValuesWith(
                        RedisSerializationContext.SerializationPair.fromSerializer(
                                new GenericJackson2JsonRedisSerializer(cacheObjectMapper)
                        )
                );
    }
}
The activateDefaultTyping call adds type information to the JSON so Jackson knows what class to deserialize back to. Without it, you’ll get a LinkedHashMap instead of your domain object. The copy() is important—you don’t want to mutate the application’s shared ObjectMapper.
If you want different TTLs per cache, configure a RedisCacheManagerBuilderCustomizer:
@Bean
public RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer() {
    return builder -> builder
            .withCacheConfiguration("products",
                    RedisCacheConfiguration.defaultCacheConfig()
                            .entryTtl(Duration.ofMinutes(30)))
            .withCacheConfiguration("users",
                    RedisCacheConfiguration.defaultCacheConfig()
                            .entryTtl(Duration.ofMinutes(5)));
}
@Cacheable: The Basics and the Gotchas
@Cacheable caches the return value of a method. The first call executes the method; subsequent calls with the same cache key return the cached value.
@Service
public class ProductService {

    @Cacheable("products")
    public Product findById(Long id) {
        return productRepository.findById(id)
                .orElseThrow(() -> new ProductNotFoundException(id));
    }
}
The default cache key is derived from the method parameters. For a single Long id, the key is just the ID value. For multiple parameters, Spring builds a composite key from all of them.
You can define explicit keys using SpEL:
@Cacheable(value = "products", key = "#id")
public Product findById(Long id) { ... }

@Cacheable(value = "products", key = "#category + ':' + #page")
public List<Product> findByCategory(String category, int page) { ... }

// Cache only when the ID is a valid (positive) value
@Cacheable(value = "products", key = "#id", condition = "#id > 0")
public Product findById(Long id) { ... }

// Don't cache the result if the product is out of stock
@Cacheable(value = "products", key = "#id", unless = "#result.stockCount == 0")
public Product findById(Long id) { ... }
The condition is evaluated before the method call; unless is evaluated after, against the result. Use condition to avoid caching altogether for certain inputs; use unless when the decision depends on what the method returned.
One important constraint: @Cacheable only works when called through a Spring proxy. If ProductService.findById() calls another method in the same class that’s also @Cacheable, the second annotation is skipped entirely. This is the Spring AOP self-invocation problem—the proxy isn’t involved in internal calls. The fix is to inject the service into itself or split the class.
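The bypass is easy to demonstrate without Spring at all. The sketch below uses a JDK dynamic proxy as a stand-in for Spring's caching interceptor; the Lookup interface and the product strings are invented for the illustration. External calls go through the proxy and hit the cache, while the internal call goes through `this` and never does:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

interface Lookup {
    String findById(long id);            // "cached" by the proxy below
    String findViaInternalCall(long id); // delegates to findById internally
}

public class SelfInvocationDemo {

    static class LookupImpl implements Lookup {
        final AtomicInteger dbHits = new AtomicInteger();

        public String findById(long id) {
            dbHits.incrementAndGet();    // simulated database hit
            return "product-" + id;
        }

        public String findViaInternalCall(long id) {
            return findById(id);         // self-invocation: goes through 'this', not the proxy
        }
    }

    // Returns [hits after two external calls, hits after two internal-call invocations]
    static int[] run() {
        LookupImpl target = new LookupImpl();
        Map<Long, String> cache = new ConcurrentHashMap<>();

        // Caching proxy: intercepts only findById, like Spring's @Cacheable interceptor
        Lookup proxy = (Lookup) Proxy.newProxyInstance(
                Lookup.class.getClassLoader(), new Class<?>[]{Lookup.class},
                (Object p, Method m, Object[] a) -> {
                    if (m.getName().equals("findById")) {
                        return cache.computeIfAbsent((Long) a[0], id -> target.findById(id));
                    }
                    return m.invoke(target, a);
                });

        proxy.findById(1L);
        proxy.findById(1L);              // second call served from the cache
        int afterExternal = target.dbHits.get();

        proxy.findViaInternalCall(2L);
        proxy.findViaInternalCall(2L);   // the inner findById never sees the proxy or the cache
        int afterInternal = target.dbHits.get();

        return new int[]{afterExternal, afterInternal};
    }

    public static void main(String[] args) {
        int[] hits = run();
        System.out.println(hits[0] + " then " + hits[1]); // prints "1 then 3"
    }
}
```

The two internal-call invocations each reach the "database" because the proxy, and therefore the cache check, is skipped, which is exactly what happens when one @Cacheable method calls another in the same Spring bean.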
@CacheEvict: Keeping the Cache Consistent
When data changes, you need to remove the stale cached values. @CacheEvict handles this:
@CacheEvict(value = "products", key = "#product.id")
public Product updateProduct(Product product) {
    return productRepository.save(product);
}

// Remove all entries from the cache
@CacheEvict(value = "products", allEntries = true)
public void clearProductCache() { }

// Evict before the method executes (ensures cache is clear even if method throws)
@CacheEvict(value = "products", key = "#id", beforeInvocation = true)
public void deleteProduct(Long id) {
    productRepository.deleteById(id);
}
The beforeInvocation = true option is worth understanding. By default, eviction happens after the method returns successfully. If the method throws an exception, the stale entry stays in the cache. With beforeInvocation = true, the entry is removed regardless. Use this when you’d rather serve a cache miss than stale data after a failed operation.
allEntries = true clears the entire named cache. Use it sparingly—it’s a blunt instrument and causes a thundering herd when all cache misses hit your database at the same time.
@CachePut: Update Without Evict
@CachePut always executes the method and stores the result in the cache. Unlike @Cacheable, it doesn’t skip the method call. Use it when you want to update the cache with fresh data after a write:
@CachePut(value = "products", key = "#result.id")
public Product createProduct(CreateProductRequest request) {
    return productRepository.save(new Product(request));
}

@CachePut(value = "products", key = "#product.id")
public Product updateProduct(Product product) {
    return productRepository.save(product);
}
This pattern—@CachePut on write, @Cacheable on read—gives you a write-through cache. It’s more consistent than evict-on-write because the cache always has the latest value instead of serving a miss on the next read.
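A toy, framework-free model makes the trade-off concrete. Plain maps stand in for Redis and the database, and none of these names are Spring APIs:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative comparison of evict-on-write vs put-on-write
public class WriteStrategyDemo {
    static final Map<Long, String> cache = new ConcurrentHashMap<>(); // stands in for Redis
    static final Map<Long, String> db = new ConcurrentHashMap<>();    // stands in for the database
    static final AtomicInteger dbReads = new AtomicInteger();

    static String read(long id) {                         // @Cacheable analogue
        return cache.computeIfAbsent(id, key -> {
            dbReads.incrementAndGet();                    // counted DB hit
            return db.get(key);
        });
    }

    static void updateWithEvict(long id, String value) {  // @CacheEvict analogue
        db.put(id, value);
        cache.remove(id);         // the next read is a miss and pays a DB hit
    }

    static void updateWithPut(long id, String value) {    // @CachePut analogue
        db.put(id, value);
        cache.put(id, value);     // the next read is served straight from cache
    }

    public static void main(String[] args) {
        db.put(1L, "v1");
        read(1L);                 // cold miss: 1 DB read
        updateWithEvict(1L, "v2");
        read(1L);                 // miss again after evict: 2 DB reads
        updateWithPut(1L, "v3");
        read(1L);                 // hit after put: still 2 DB reads
        System.out.println(dbReads.get()); // prints 2
    }
}
```

The evict variant pays one extra database read per update cycle; the put variant keeps the cache warm, at the cost of writing values that may never be read again.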
Testing Cached Methods
Testing cached behavior requires a bit more setup than regular unit tests. The simplest approach is @SpringBootTest with a real Redis instance started via Testcontainers:
@SpringBootTest
@Testcontainers
class ProductServiceCachingTest {

    @Container
    static GenericContainer<?> redis = new GenericContainer<>("redis:7-alpine")
            .withExposedPorts(6379);

    @DynamicPropertySource
    static void redisProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.data.redis.host", redis::getHost);
        registry.add("spring.data.redis.port", () -> redis.getMappedPort(6379));
    }

    @Autowired
    private ProductService productService;

    @Autowired
    private CacheManager cacheManager;

    @MockitoBean
    private ProductRepository productRepository;

    @BeforeEach
    void clearCache() {
        cacheManager.getCacheNames()
                .forEach(name -> cacheManager.getCache(name).clear());
    }

    @Test
    void findById_shouldCacheResult() {
        Product product = new Product(1L, "Widget");
        when(productRepository.findById(1L)).thenReturn(Optional.of(product));

        productService.findById(1L);
        productService.findById(1L);
        productService.findById(1L);

        // Repository should only be called once — other two hits came from cache
        verify(productRepository, times(1)).findById(1L);
    }
}
The key test pattern: verify that the repository is only called once across multiple service calls for the same ID. If caching is working, subsequent calls return the cached value without touching the repository.
Clear the cache in @BeforeEach to prevent test state from leaking between tests. This is easy to forget and causes intermittent failures when tests run in different orders.
For unit tests without Redis, you can swap the cache manager for a simple ConcurrentMapCacheManager in a test configuration. This tests the caching behavior without the Redis infrastructure, which runs faster but won’t catch serialization issues.
Production Considerations
TTL strategy matters more than people realize. Setting all caches to 1 hour is a starting point, not a strategy. Slow-changing reference data (product categories, config values) can safely live for hours. User-specific data and inventory levels need shorter TTLs or explicit eviction. Data that must be current needs eviction on write, not time-based expiry.
Cache stampede under load. When a popular cache entry expires and many requests arrive simultaneously, they all hit the database at once. The fix is probabilistic early expiration: start refreshing the cache slightly before the TTL expires, so only one request does the DB hit. Redisson provides this out of the box; rolling it yourself is error-prone.
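Spring itself ships a partial, per-instance mitigation: @Cacheable(sync = true) makes concurrent callers for the same key wait while one thread computes the value (it does not coordinate across instances, which is where early refresh comes in). The single-flight idea behind it is the same guarantee ConcurrentHashMap.computeIfAbsent gives you, sketched here with invented names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Single-flight loading: ten concurrent misses for the same key, one DB load.
// This is the per-key locking idea behind @Cacheable(sync = true).
public class SingleFlightDemo {
    static final Map<Long, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger dbLoads = new AtomicInteger();

    static String findById(long id) {
        // computeIfAbsent is atomic per key: concurrent misses block while the
        // first caller runs the loader, so the "database" is hit exactly once
        return cache.computeIfAbsent(id, key -> {
            dbLoads.incrementAndGet();   // simulated database load
            return "product-" + key;
        });
    }

    static int run() throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> findById(42L));
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return dbLoads.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 1
    }
}
```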
Redis cluster configuration. In production you likely have a Redis cluster or Redis Sentinel. The Lettuce client handles both, but the configuration differs significantly from single-node:
spring:
  data:
    redis:
      cluster:
        nodes:
          - redis-node-1:7000
          - redis-node-2:7001
          - redis-node-3:7002
        max-redirects: 3
Test your failover behavior in staging before you discover it in production. When a Redis node fails, Lettuce reconnects automatically, but there’s a window where cache operations throw exceptions. Make sure your services degrade gracefully (cache miss, not 500) when Redis is unavailable.
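In Spring, one way to get that graceful degradation is a custom CacheErrorHandler (registered through CachingConfigurer) that logs cache get/put failures instead of rethrowing them. The behavior you're after is the fail-open pattern below, a framework-free sketch in which the CacheBackend interface is invented for the illustration:

```java
import java.util.Optional;
import java.util.function.Function;

// Sketch of fail-open caching: a Redis outage degrades to a cache miss, not a 500.
public class FailOpenCache<K, V> {

    // Invented for this illustration; stands in for a Redis client
    interface CacheBackend<K, V> {
        Optional<V> get(K key);   // may throw when Redis is unreachable
        void put(K key, V value); // may throw when Redis is unreachable
    }

    private final CacheBackend<K, V> backend;

    FailOpenCache(CacheBackend<K, V> backend) {
        this.backend = backend;
    }

    V getOrLoad(K key, Function<K, V> loader) {
        try {
            Optional<V> hit = backend.get(key);
            if (hit.isPresent()) {
                return hit.get();
            }
        } catch (RuntimeException e) {
            // Cache read failed: log it, then fall through to the real loader
        }
        V value = loader.apply(key);
        try {
            backend.put(key, value);
        } catch (RuntimeException e) {
            // Cache write failed: the response is still correct, just uncached
        }
        return value;
    }

    public static void main(String[] args) {
        // A backend that always throws, simulating an unreachable Redis
        CacheBackend<Long, String> dead = new CacheBackend<Long, String>() {
            public Optional<String> get(Long key) { throw new RuntimeException("connection refused"); }
            public void put(Long key, String value) { throw new RuntimeException("connection refused"); }
        };
        FailOpenCache<Long, String> cache = new FailOpenCache<>(dead);
        System.out.println(cache.getOrLoad(1L, id -> "product-" + id)); // prints product-1
    }
}
```

Every request still succeeds; the cost of a Redis outage is latency (every call becomes a loader call), not errors.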
Key prefix collisions. If multiple applications share a Redis instance, use distinct key prefixes per application. The key-prefix setting in the Spring configuration handles this, but verify that prefixes are actually applied by inspecting keys with redis-cli --scan --pattern 'myapp:*'. I’ve seen environments where two services accidentally shared cache keys and silently served each other’s data.
Monitor cache hit rates. A cache hit rate below 80% usually means your TTLs are too short, your keys aren’t granular enough, or you’re caching things that don’t repeat often enough to benefit. Spring Boot Actuator exposes cache metrics at /actuator/caches and via Micrometer. Track hit rate over time—a sudden drop often signals a bug in your eviction logic.
What Not to Cache
Not everything benefits from caching. Avoid caching:
- Results that include the current time or are unique per request
- Data that must be strongly consistent (financial transactions, inventory reservations)
- Very large objects that will exceed your Redis memory budget
- Results of queries that are already fast (sub-10ms DB queries with proper indexes)
Caching adds complexity. It’s worth it when the benefit—latency reduction, DB load reduction—justifies that complexity. Profile first, cache second.
The Spring cache abstraction does a good job of keeping caching concerns out of your business logic. The annotations are clean, the configuration is straightforward, and the Redis integration is production-ready. The work is in choosing the right TTLs, handling serialization correctly, and understanding the consistency tradeoffs. Get those right and your cache is an asset; get them wrong and it’s a source of hard-to-reproduce bugs.
Related Articles
- Spring Boot REST API Best Practices for Production — Caching is one piece of a production-ready REST API—here’s the full picture.
- Spring Boot Testcontainers Integration Testing — Use Testcontainers to spin up a real Redis instance for integration tests.
- Spring Boot Microservices Architecture Patterns — How distributed caching with Redis fits into microservices architecture.
- Java Virtual Threads with Spring Boot — Combine Redis caching with virtual threads for maximum throughput in high-concurrency scenarios.