Caching in APIs: Basics and Implementation in Spring


    30/08/2025

    Introduction

    While building an API, performance is one of the key factors we need to consider. Users expect fast response times, and even a few hundred milliseconds of delay can degrade the user experience and ultimately drive users away. One of the most effective techniques for improving API performance is caching: storing copies of data in a temporary, faster storage location so that subsequent requests can be served more quickly.

    When an API receives a request for data, it first checks if the data is available in the cache. If it is (a "cache hit"), the API returns the data from the cache without having to perform expensive operations like database queries or calls to other services. If the data is not in the cache (a "cache miss"), the API retrieves the data from the source, returns it to the client, and stores a copy in the cache for future requests.
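    The check-then-load flow described above is often called the cache-aside pattern. It can be sketched in plain Java; the class name and product values here are illustrative and not part of the application built later in this post:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal cache-aside sketch: check the cache first, fall back to the
// source on a miss, and store the loaded value for future requests.
public class CacheAsideDemo {

    private static final Map<Long, String> cache = new ConcurrentHashMap<>();

    // Simulates the expensive source lookup (e.g. a database query).
    static String loadFromSource(Long id) {
        return "product-" + id;
    }

    static String getProduct(Long id) {
        // computeIfAbsent runs the loader only on a cache miss;
        // on a hit, the cached value is returned directly.
        return cache.computeIfAbsent(id, CacheAsideDemo::loadFromSource);
    }

    public static void main(String[] args) {
        System.out.println(getProduct(1L)); // miss: loads from source
        System.out.println(getProduct(1L)); // hit: served from cache
    }
}
```

    Spring's caching annotations, shown below, implement essentially this pattern for us behind a method call.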

    In this blog post, we will explore two common caching strategies for APIs and how to implement them in a Spring Boot application:

    • In-Memory Caching: Storing cache data directly in the application's memory.
    • Distributed Caching: Using an external, shared cache like Redis that can be accessed by multiple instances of an application.

    Use Case

    Let's consider a simple API that retrieves product details. Without caching, every request to this API would result in a database query to fetch the product information. This can be slow and inefficient, especially if the product data doesn't change often.

    Here's a simple Spring Boot controller and service for this use case:

    @RestController
    @AllArgsConstructor
    public class ProductController {

        private final ProductService productService;

        @GetMapping("/products/{id}")
        public Product getProduct(@PathVariable Long id) {
            return productService.getProductById(id);
        }
    }

    @Service
    @AllArgsConstructor
    @Slf4j
    public class ProductService {

        private final ProductRepository productRepository;

        public Product getProductById(Long id) {
            log.info("Fetching product with id {} from database", id);
            // Simulate a slow database query
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return productRepository.findById(id)
                    .orElseThrow(() -> new RuntimeException("Product not found"));
        }
    }

    Every time we call the /products/{id} endpoint, we'll see a 2-second delay and a log message indicating that the data is being fetched from the database.

    In-Memory Caching

    In-memory caching is the simplest way to get started with caching. The cache data is stored in the application's heap memory. This is very fast, but it has some limitations:

    • The cache size is limited by the application's memory.
    • The cache is not shared between different instances of the application. If you have multiple instances of your service running behind a load balancer, each instance will have its own separate cache.

    Let's see how to implement in-memory caching in our Spring Boot application.

    1. Enable Caching

    First, we need to enable caching in our application. We can do this by adding the @EnableCaching annotation to our main application class:

    @SpringBootApplication
    @EnableCaching
    public class CachingApplication {

        public static void main(String[] args) {
            SpringApplication.run(CachingApplication.class, args);
        }
    }

    2. Add the @Cacheable Annotation

    Now, we can use the @Cacheable annotation on our getProductById method. This annotation tells Spring to cache the result of this method.

    @Service
    @AllArgsConstructor
    @Slf4j
    public class ProductService {

        private final ProductRepository productRepository;

        @Cacheable(value = "products", key = "#id")
        public Product getProductById(Long id) {
            log.info("Fetching product with id {} from database", id);
            // Simulate a slow database query
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return productRepository.findById(id)
                    .orElseThrow(() -> new RuntimeException("Product not found"));
        }
    }

    The value attribute specifies the name of the cache ("products"), and the key attribute is a Spring Expression Language (SpEL) expression that defines the key for the cache entry. In this case, we're using the id of the product as the key.
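    Beyond value and key, @Cacheable accepts two more SpEL-based attributes for skipping the cache selectively: condition (evaluated before the method runs) and unless (evaluated afterwards against the return value, available as #result). A hypothetical variation of getProductById, assuming for illustration that it returns null instead of throwing when a product is missing:

```java
// Hypothetical sketch inside ProductService: cache only positive ids,
// and never cache a null result.
@Cacheable(value = "products", key = "#id",
           condition = "#id > 0",          // checked before the method executes
           unless = "#result == null")     // checked against the returned value
public Product getProductById(Long id) {
    return productRepository.findById(id).orElse(null);
}
```

    This keeps missing or invalid lookups from filling the cache with useless entries.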

    Now, if we call the /products/{id} endpoint for the first time, it will take 2 seconds, and we'll see the log message. If we call it again with the same id, the response will be almost instantaneous, and we won't see the log message because the result is being served from the cache.

    In-Memory Caching Flow

    Here is a diagram illustrating the flow of a request with in-memory caching:

    [Diagram: request flow with in-memory caching]

    Distributed Caching with Redis

    In-memory caching is great for simple use cases, but for scalable, production applications, a distributed cache is often a better choice. A distributed cache is an external service that is shared by all instances of your application.

    Redis is a popular choice for a distributed cache. It's an in-memory data store that can be used as a database, cache, and message broker.

    Let's see how to configure our Spring Boot application to use Redis for caching.

    1. Add Dependencies

    First, we need to add the Spring Data Redis dependency to our pom.xml:

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>

    2. Configure Redis

    Next, we need to configure the connection to our Redis server in application.properties:

    spring.redis.host=localhost
    spring.redis.port=6379

    (Since Spring Boot 3, these properties are named spring.data.redis.host and spring.data.redis.port.)

    Spring Boot will automatically configure a RedisCacheManager for us if it detects the Spring Data Redis dependency and the Redis connection properties.

    That's it! With these changes, our @Cacheable annotation will now use Redis as the cache store instead of the default in-memory store.
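    The auto-configured defaults can be customized: out of the box, entries never expire and values are JDK-serialized, which requires cached types to implement Serializable. A sketch of a configuration that Spring Boot's auto-configured RedisCacheManager will pick up; the ten-minute TTL and JSON serialization are illustrative choices, not requirements:

```java
import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;

@Configuration
public class CacheConfig {

    // When a RedisCacheConfiguration bean is present, Spring Boot uses it
    // as the default configuration for the auto-configured cache manager.
    @Bean
    public RedisCacheConfiguration cacheConfiguration() {
        return RedisCacheConfiguration.defaultCacheConfig()
                // Expire cache entries after 10 minutes.
                .entryTtl(Duration.ofMinutes(10))
                // Store values as JSON instead of JDK-serialized bytes.
                .serializeValuesWith(
                        RedisSerializationContext.SerializationPair
                                .fromSerializer(new GenericJackson2JsonRedisSerializer()));
    }
}
```

    JSON values are also easier to inspect with redis-cli while debugging than the default binary serialization.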

    Distributed Caching Flow

    Here is a diagram illustrating the flow of a request with a distributed cache like Redis:

    [Diagram: request flow with a distributed Redis cache]

    With this setup, if you have multiple instances of your application running, they will all share the same Redis cache. If one instance fetches a product and caches it, the other instances can benefit from that cache entry.

    Other Caching Annotations

    Spring provides other useful caching annotations:

    • @CachePut: This annotation always executes the method and stores its result in the cache. It's useful for refreshing an existing cache entry when the underlying data changes.
    • @CacheEvict: This annotation removes an entry from the cache. It's useful when data has been deleted, or updated in a way that invalidates the cached copy.

    Here's an example of how you might use these annotations:

    @Service
    @AllArgsConstructor
    @Slf4j
    public class ProductService {

        private final ProductRepository productRepository;

        @Cacheable(value = "products", key = "#id")
        public Product getProductById(Long id) {
            // ...
        }

        @CachePut(value = "products", key = "#product.id")
        public Product updateProduct(Product product) {
            log.info("Updating product with id {}", product.getId());
            return productRepository.save(product);
        }

        @CacheEvict(value = "products", key = "#id")
        public void deleteProduct(Long id) {
            log.info("Deleting product with id {}", id);
            productRepository.deleteById(id);
        }
    }
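    @CacheEvict can also clear an entire cache in one operation via its allEntries attribute. A hypothetical addition to ProductService, useful after a bulk import or any change that invalidates many entries at once:

```java
// Hypothetical method: evict every entry in the "products" cache,
// rather than a single key.
@CacheEvict(value = "products", allEntries = true)
public void evictAllProducts() {
    log.info("Evicting all entries from the products cache");
}
```

    Subsequent reads will be cache misses and repopulate the cache from the database.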

    Conclusion

    Caching is a powerful technique for improving the performance and scalability of your APIs. In this blog post, we've seen how to implement two common caching strategies in a Spring Boot application:

    • In-memory caching is simple to set up and provides a significant performance boost for single-instance applications.
    • Distributed caching with Redis is a more robust solution for scalable, multi-instance applications, providing a shared cache that all instances can use.

    By using Spring's caching abstractions and annotations, you can easily add caching to your application with minimal code changes.

    To stay updated with the latest in Java and Spring, follow us on YouTube, LinkedIn, and Medium. You can find the code used in this blog here.
