Go Redis Caching: Boost REST API Performance

What is Redis Caching in Go?
Redis caching in Go stores frequently accessed data in Redis — an in-memory data store — so your server can return results in under a millisecond without hitting the database. The standard approach is the Cache Aside pattern: check Redis first, fall back to the database on a miss, then store the result in Redis for future requests. Using the go-redis client library, you can implement this in a Go handler in under 20 lines of code.
Every time a user visits your "Top Tasks" page, your Go server queries the database. As your user base grows from ten to ten thousand, these repeated database hits will slow down your API and eventually crash your database.
The solution is caching. By storing frequently accessed data in a fast, in-memory store like Redis, you can serve requests in milliseconds instead of seconds.
What is Redis?
Redis is an open-source, in-memory key-value store. Because it keeps data in RAM rather than on disk, reads and writes typically complete in well under a millisecond, which is why it is the standard choice for a caching layer.
Connecting Go to Redis
The most popular Go client for Redis is go-redis. It's robust, well-maintained, and easy to use.
The Cache Aside Pattern
The "Cache Aside" pattern is the industry standard for web applications.
- When a request comes in, check Redis first.
- If the data is in Redis (a Cache Hit), return it immediately.
- If not (a Cache Miss), query the database, store the result in Redis, and then return it to the user.
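The three steps above can be sketched in plain Go. To keep the example self-contained and runnable, maps stand in for both Redis and the database; with go-redis, the cache calls would become rdb.Get(ctx, key).Result() and rdb.Set(ctx, key, data, ttl), and the Store type and task data here are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

var errCacheMiss = errors.New("cache miss")

// Store sketches the Cache Aside flow with an in-memory map standing in
// for Redis and a second map standing in for the real database.
type Store struct {
	cache map[string]string // stand-in for Redis
	db    map[string]string // stand-in for the database
	hits  int
}

func (s *Store) cacheGet(key string) (string, error) {
	v, ok := s.cache[key]
	if !ok {
		return "", errCacheMiss // go-redis signals this with redis.Nil
	}
	return v, nil
}

// GetTask implements Cache Aside: check the cache, fall back to the
// database on a miss, then populate the cache for future requests.
func (s *Store) GetTask(id string) (string, error) {
	if v, err := s.cacheGet("tasks:" + id); err == nil {
		s.hits++ // cache hit: served straight from memory
		return v, nil
	}
	v, ok := s.db[id] // cache miss: query the database
	if !ok {
		return "", errors.New("not found")
	}
	s.cache["tasks:"+id] = v // store for next time (rdb.Set with a TTL)
	return v, nil
}

func main() {
	s := &Store{cache: map[string]string{}, db: map[string]string{"1": "Write report"}}
	s.GetTask("1") // miss: goes to the database and fills the cache
	s.GetTask("1") // hit: served from the cache
	fmt.Println("hits:", s.hits)
}
```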
Caching Strategies
| Concept | Example | Description |
|---|---|---|
| TTL (Expiry) | `10 * time.Minute` | How long data should stay in the cache. Short TTLs (like `1m`) are great for dynamic data; long TTLs (`1h`) for static data. |
| Invalidation | `h.rdb.Del(ctx, "task_1")` | The most difficult part of caching. You must delete the cached version as soon as the database record is updated. |
| Serialization | `json.Marshal` | Redis stores strings or bytes. You must serialize your Go structs into JSON before saving them to the cache. |
| Task / Feature | Direct DB Access | Redis Caching |
|---|---|---|
| Latency | Slow (Disk I/O involved) | Fast (RAM access) |
| Durability | High (Safe after power loss) | Low (RAM is volatile; can be lost on reboot) |
| Cost | Low per GB (Storage is cheap) | High per GB (Memory is expensive) |
Cache Invalidation: The Hard Part
Cache invalidation — knowing when to remove stale data from the cache — is widely considered one of the hardest problems in computer science. There are two primary strategies:
1. TTL-Based Expiry (Passive Invalidation)
Set a Time-To-Live on every key. When the TTL expires, Redis automatically removes the key. The next request after expiry will be a cache miss, triggering a fresh database query.
This is the simplest approach and is appropriate when slightly stale data is acceptable — for example, a leaderboard that updates every 60 seconds is fine with a 60-second TTL.
2. Event-Driven Invalidation (Active Invalidation)
When a record is updated or deleted, explicitly remove or update its cache key:
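A sketch of the write path, using a map stand-in for the cache so it stays runnable; with go-redis, the delete would be rdb.Del(ctx, "tasks:"+id), and the task data here is illustrative:

```go
package main

import "fmt"

var (
	db    = map[string]string{"1": "Write report"}
	cache = map[string]string{"tasks:1": "Write report"}
)

// UpdateTask writes to the database, then actively invalidates the
// cached copy so the next read repopulates it with fresh data.
// With go-redis the delete is rdb.Del(ctx, "tasks:"+id).
func UpdateTask(id, title string) {
	db[id] = title
	delete(cache, "tasks:"+id)
}

func main() {
	UpdateTask("1", "Write final report")

	_, ok := cache["tasks:1"]
	fmt.Println("cached after update:", ok) // false: next read is a miss
	fmt.Println("db:", db["1"])
}
```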
Active invalidation keeps the cache fresh in real time but adds complexity. You must ensure every write path invalidates the correct keys — missing one means users see stale data.
Structuring Redis Keys Effectively
As your application grows, key naming becomes critical for maintainability and avoiding collisions. Use a consistent namespace convention:
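One common convention is app:entity:id, joined by colons. A small helper keeps keys consistent across the codebase (the myapp namespace and helper names are illustrative):

```go
package main

import "fmt"

const namespace = "myapp"

// taskKey builds a namespaced cache key, e.g. "myapp:tasks:42".
// Colon-separated segments are the de facto Redis convention.
func taskKey(id int) string {
	return fmt.Sprintf("%s:tasks:%d", namespace, id)
}

// taskListKey caches a per-user task list, e.g. "myapp:users:7:tasks".
func taskListKey(userID int) string {
	return fmt.Sprintf("%s:users:%d:tasks", namespace, userID)
}

func main() {
	fmt.Println(taskKey(42))
	fmt.Println(taskListKey(7))
}
```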
This naming scheme makes it easy to:
- Flush all task-related cache keys in one go using `SCAN` + `DEL` with a pattern.
- Understand the cache contents when debugging, without guesswork.
- Avoid two different parts of the application accidentally overwriting each other's keys.
Setting Up Redis for Local Development
If you have Docker installed, spinning up a local Redis instance takes one command:
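A typical invocation (the container name redis-dev is arbitrary):

```shell
# Start Redis 7 in the background, exposing the default port 6379
docker run --name redis-dev -p 6379:6379 -d redis:7
```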
You can then connect your Go application to localhost:6379. To inspect the cache contents interactively, use the Redis CLI:
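Assuming a container named redis-dev as above, you can open a CLI session and poke at the keys directly (the key names shown are illustrative):

```shell
# Open an interactive redis-cli inside the running container
docker exec -it redis-dev redis-cli

# Inside the CLI, inspect cached keys (SCAN is safer than KEYS in production):
# 127.0.0.1:6379> SCAN 0 MATCH myapp:* COUNT 100
# 127.0.0.1:6379> GET myapp:tasks:1
# 127.0.0.1:6379> TTL myapp:tasks:1
```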
For production deployments, managed Redis services like AWS ElastiCache, Redis Cloud, and Upstash eliminate the operational overhead of running Redis yourself.
Measuring Cache Effectiveness
A cache that rarely gets hits is wasting memory and adding complexity for no benefit. Monitor these two key metrics:
- Cache Hit Rate = (cache hits / total requests) × 100. A healthy API cache should achieve 70–90% hit rate on eligible endpoints.
- Cache Miss Latency — the time spent on the database query when the cache misses. Ensure this is acceptable even without the cache.
You can track hits and misses by incrementing Redis counters in your handler:
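A self-contained sketch using in-process counters; with go-redis you would instead call rdb.Incr(ctx, "myapp:stats:hits") and rdb.Incr(ctx, "myapp:stats:misses") so the numbers aggregate across all instances (the metric key names are illustrative):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

var hits, misses atomic.Int64

// recordLookup increments the right counter after each cache lookup.
// With Redis-backed counters: rdb.Incr(ctx, "myapp:stats:hits") etc.
func recordLookup(hit bool) {
	if hit {
		hits.Add(1)
	} else {
		misses.Add(1)
	}
}

// hitRate returns the cache hit rate as a percentage.
func hitRate() float64 {
	total := hits.Load() + misses.Load()
	if total == 0 {
		return 0
	}
	return 100 * float64(hits.Load()) / float64(total)
}

func main() {
	for i := 0; i < 70; i++ {
		recordLookup(true)
	}
	for i := 0; i < 30; i++ {
		recordLookup(false)
	}
	fmt.Printf("hit rate: %.0f%%\n", hitRate()) // 70%
}
```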
Internal Links & Further Reading
For the broader context of this caching module, see the full Go REST API project guide and the Go deployment with Docker and CI/CD tutorial — the next step after optimising performance is shipping reliably. For security considerations when building Go backends, see Go security best practices.
Next Steps
Caching is a massive step toward building production-grade systems. But how do you actually get your code into the hands of real users? In our final phase, we will explore Deployment, learning how to package your Go application using Docker and ship it using modern CI/CD pipelines.
Common Redis Caching Mistakes in Go
1. Not setting a TTL on every key
Keys without an expiry live forever. A cache that never evicts stale data becomes a memory leak and serves incorrect data. Always call `SET key value EX seconds` or use `client.Set(ctx, key, value, ttl)` with a non-zero TTL.
2. Cache stampede on cold start
When a cached value expires, many concurrent requests may all miss the cache simultaneously and hammer the database. Use a single-flight pattern (golang.org/x/sync/singleflight) to ensure only one request fetches the value while others wait.
3. Caching errors
If a database call fails, do not cache the error response. Only cache successful results. Caching a 500 error or an empty result means all users see the failure until the TTL expires.
4. Not handling Redis unavailability
Redis is an external dependency: it can go down. Always have a fallback path: if the cache is unavailable, fall through to the database. Never make your application entirely dependent on Redis availability.
5. Using string keys without namespacing
`SET user 123` in a shared Redis instance collides with any other service using the same key. Always namespace keys: `myapp:users:123`. The Redis documentation on key naming covers conventions.
Frequently Asked Questions
Which Go Redis client should I use?
go-redis is the most feature-complete client, with support for Redis Cluster, Sentinel, and pipelining. rueidis is a newer, faster alternative with client-side caching support. Both are production-grade choices.
What is the Cache Aside pattern?
Cache Aside (also called Lazy Loading) is the most common caching pattern: read from the cache first; on a miss, read from the database, write the result to the cache, then return it. The application code manages the cache explicitly, rather than relying on a transparent caching layer.
When should I use Redis vs an in-memory cache like sync.Map?
Use sync.Map or a local cache (e.g. github.com/patrickmn/go-cache) for single-instance applications where the cache is local to the process. Use Redis when you have multiple application instances that need to share cached data, or when you need cache persistence across restarts.
