Debugging Go Redis

Connection pool size

To improve performance, go-redis automatically manages a pool of network connections (sockets). By default, the pool size is 10 connections per available CPU as reported by runtime.GOMAXPROCS. In most cases that is more than enough, and tweaking it rarely helps. If needed, you can change the pool size like this:

rdb := redis.NewClient(&redis.Options{
    // PoolSize overrides the default of 10 * runtime.GOMAXPROCS(0).
    PoolSize: 1000,
})

#redis: connection pool timeout

You can get that error when there are no free connections in the pool for the Options.PoolTimeout duration. If you are using redis.PubSub or redis.Conn, make sure to properly release PubSub/Conn resources by calling the Close method when they are no longer needed.
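For example, a minimal sketch of releasing a PubSub subscription once you are done with it (the channel name is illustrative):

pubsub := rdb.Subscribe(ctx, "mychannel")
// Close releases the connection held by the subscription.
defer pubsub.Close()

for msg := range pubsub.Channel() {
    fmt.Println(msg.Channel, msg.Payload)
}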

You can also get that error when Redis processes commands too slowly and all connections in the pool stay blocked for more than the PoolTimeout duration.

#Timeouts

Even if you are using context.Context deadlines, do NOT disable DialTimeout, ReadTimeout, and WriteTimeout, because go-redis executes some background checks without using a context and instead relies on connection timeouts.

If you are using cloud providers like AWS or Google Cloud, don't use timeouts smaller than 1 second. Such small timeouts work well most of the time, but fail miserably when the cloud is slower than usual. See Go Context timeouts can be harmful for details.
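A sketch of explicitly configured timeouts (the 3-second values are illustrative, not a recommendation):

rdb := redis.NewClient(&redis.Options{
    // Keep these enabled even when you pass context deadlines.
    DialTimeout:  3 * time.Second,
    ReadTimeout:  3 * time.Second,
    WriteTimeout: 3 * time.Second,
})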

#Large number of open connections

Under high load, some commands will time out and go-redis will close such connections, because they may still receive some data later and can't be reused. Closed connections are first put into the TIME_WAIT state and remain there for double the maximum segment lifetime, which is usually 1 minute:

cat /proc/sys/net/ipv4/tcp_fin_timeout
60

To cope with that, you can increase read/write timeouts or upgrade your servers to handle more traffic. You can also increase the maximum number of open connections, but that will not make your servers or network faster.

Also see Coping with the TCP TIME-WAIT state for some advice.

#Pipelines

Because go-redis spends most of its time writing to, reading from, and waiting on connections, you can improve performance by sending multiple commands at once using pipelines.
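For example, a minimal pipeline sketch that sends two commands in a single round-trip (the keys are illustrative):

var incr *redis.IntCmd
_, err := rdb.Pipelined(ctx, func(pipe redis.Pipeliner) error {
    pipe.Set(ctx, "key1", "value1", 0)
    incr = pipe.Incr(ctx, "counter")
    return nil
})
if err != nil {
    panic(err)
}
fmt.Println(incr.Val())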

#Cache

If your app logic does not allow using pipelines, consider adding a local in-process cache for the most popular operations, for example, using TinyLFU.

#Hardware

Make sure that your servers have low network latency and fast CPUs with large caches. If you have multiple CPU cores, consider running multiple Redis instances on a single server.

See Factors impacting Redis performance for more details.

#Sharding

If nothing helps, you can split data across multiple Redis instances so that each instance contains a subset of the keys. This way the load is spread across multiple servers and you can increase performance by adding more servers.

Ring is a good option if you are using Redis for caching. Otherwise, you can try Redis Cluster.
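For example, a minimal Ring sketch that shards keys across two servers (the shard names and addresses are placeholders):

rdb := redis.NewRing(&redis.RingOptions{
    Addrs: map[string]string{
        "shard1": "localhost:6379",
        "shard2": "localhost:6380",
    },
})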

#Monitoring

See Monitoring Redis Performance using OpenTelemetry.

Monitoring Go Redis Performance and Errors

OpenTelemetry Tracing

go-redis relies on OpenTelemetry to monitor database performance and errors using distributed tracing and metrics.

OpenTelemetry is a vendor-neutral API for distributed traces and metrics. It specifies how to collect and send telemetry data to backend platforms. This means that you can instrument your application once and then add or change vendors (backends) as required.

#OpenTelemetry instrumentation

go-redis comes with an OpenTelemetry instrumentation called redisotel that is distributed as a separate module:

go get github.com/go-redis/redis/extra/redisotel/v8

To instrument the Redis client, add the hook provided by redisotel:

import (
    "github.com/go-redis/redis/v8"
    "github.com/go-redis/redis/extra/redisotel/v8"
)

rdb := redis.NewClient(&redis.Options{...})

rdb.AddHook(redisotel.NewTracingHook())

For Redis Cluster and Ring you need to instrument each node separately:

rdb := redis.NewClusterClient(&redis.ClusterOptions{
    // ...

    NewClient: func(opt *redis.Options) *redis.Client {
        node := redis.NewClient(opt)
        node.AddHook(redisotel.NewTracingHook())
        return node
    },
})

rdb.AddHook(redisotel.NewTracingHook())

To make tracing work, you must pass the active trace context to go-redis commands, for example:

ctx := req.Context()
val, err := rdb.Get(ctx, "key").Result()

#Trace example

As expected, redisotel creates spans for processed Redis commands and records any errors as they occur. Here is how the collected information is displayed in the Uptrace tracing tool:

Redis trace

You can find a runnable example at GitHub.

#Monitoring Redis Server performance

To monitor Redis Server performance, see Monitoring Redis Performance using OpenTelemetry.

Golang Redis Cache

Redis config

To start using Redis as cache storage, use the following Redis config:

# Required
##########

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
maxmemory 100mb

# Optional
##########

# Evict any key using approximated LFU when maxmemory is reached.
maxmemory-policy allkeys-lfu

# Enable active memory defragmentation.
activedefrag yes

# Don't save data on the disk because we can afford to lose cached data.
save ""

#go-redis/cache

The go-redis/cache library implements a cache using Redis as a key/value storage. It uses MessagePack to marshal values.

You can install go-redis/cache with:

go get github.com/go-redis/cache/v8

go-redis/cache accepts an interface to communicate with Redis and thus supports all types of Redis clients that go-redis provides.

rdb := redis.NewClient(&redis.Options{
    Addr: "localhost:6379",
})

mycache := cache.New(&cache.Options{
    Redis: rdb,
})

type Object struct {
    Str string
    Num int
}

obj := new(Object)
err := mycache.Once(&cache.Item{
    Key:   "mykey",
    Value: obj, // destination
    Do: func(*cache.Item) (interface{}, error) {
        return &Object{
            Str: "mystring",
            Num: 42,
        }, nil
    },
})
if err != nil {
    panic(err)
}

You can also use local in-process storage to cache a small subset of popular keys. go-redis/cache comes with TinyLFU, but you can use any other cache algorithm that implements the interface.

mycache := cache.New(&cache.Options{
    Redis:      rdb,
    // Cache 10k keys for 1 minute.
    LocalCache: cache.NewTinyLFU(10000, time.Minute),
})

#Cache monitoring

If you are interested in monitoring the cache hit rate, see the guide for Monitoring using OpenTelemetry Metrics.

Go Redis Lua Scripting

Getting started

This article shows how to get started with Lua scripting in Redis using go-redis.

#redis.Script

go-redis supports Lua scripting with redis.Script. For example, the following script implements the INCRBY command in Lua using the GET and SET commands:

var incrBy = redis.NewScript(`
local key = KEYS[1]
local change = ARGV[1]

local value = redis.call("GET", key)
if not value then
  value = 0
end

value = value + change
redis.call("SET", key, value)

return value
`)

You can then run the script like this:

keys := []string{"my_counter"}
values := []interface{}{+1}
num, err := incrBy.Run(ctx, rdb, keys, values...).Int()

Internally, go-redis uses EVALSHA to execute the script and falls back to EVAL if the script does not exist.
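If you want to avoid the extra round-trip caused by that first fallback, you can optionally preload the script, for example, when the app starts. A small sketch using Script.Load:

// Preload the script so the first Run can use EVALSHA right away.
if err := incrBy.Load(ctx, rdb).Err(); err != nil {
    panic(err)
}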

You can find the example above at GitHub. For a more realistic example, check redis_rate, which implements a leaky bucket rate-limiter.

#Lua and Go types

Underneath, Lua's number type is a float64 that stores both ints and floats. Because Lua does not distinguish between ints and floats, Redis always converts Lua numbers into ints, discarding the decimal part: for example, 3.14 becomes 3. If you want to return a float value, return it as a string and parse the string in Go using the Float64 helper.
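For example, a small sketch of returning a float as a string and parsing it on the Go side (the script is illustrative):

var pi = redis.NewScript(`return tostring(3.14)`)

// Float64 parses the string reply into a float64.
val, err := pi.Run(ctx, rdb, nil).Float64()
fmt.Println(val, err)
// Output: 3.14 <nil>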

Lua return                  Go interface{}
--------------------------  ----------------------------------
number (float64)            int64 (decimal part is discarded)
string                      string
false                       redis.Nil error
true                        int64(1)
{ok = "status"}             string("status")
{err = "error message"}     errors.New("error message")
{"foo", "bar"}              []interface{}{"foo", "bar"}
{foo = "bar", bar = "baz"}  []interface{}{} (not supported)

#Debugging Lua scripts

The easiest way to debug your Lua scripts is the redis.log function, which writes a message to the Redis log file or redis-server output:

redis.log(redis.LOG_NOTICE, "key", key, "change", change)

If you prefer a debugger, check the Redis Lua scripts debugger.

#Passing multiple values

You can use a for loop in Lua to iterate over the passed values, for example, to sum the numbers:

local key = KEYS[1]

local sum = redis.call("GET", key)
if not sum then
  sum = 0
end

local num_arg = #ARGV
for i = 1, num_arg do
  sum = sum + ARGV[i]
end

redis.call("SET", key, sum)

return sum

The result:

sum, err := sum.Run(ctx, rdb, []string{"my_sum"}, 1, 2, 3).Int()
fmt.Println(sum, err)
// Output: 6 <nil>

#Loop continue

Lua does not support continue statements in loops, but you can emulate it with a nested repeat loop and a break statement:

local num_arg = #ARGV

for i = 1, num_arg do
repeat

  if true then
    do break end -- continue
  end

until true
end

#Error handling

By default, the redis.call function raises a Lua error and stops script execution. If you want to handle errors, use the redis.pcall function, which returns a Lua table with an err field:

local result = redis.pcall("rename", "foo", "bar")
if type(result) == 'table' and result.err then
  redis.log(redis.LOG_NOTICE, "rename failed", result.err)
end

To return a custom error, use a Lua table:

return {err = "error message goes here"}
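On the Go side, such a table surfaces as a regular error returned by the command. A minimal sketch, assuming a hypothetical script named failing that returns the table above:

var failing = redis.NewScript(`return {err = "error message goes here"}`)

err := failing.Run(ctx, rdb, nil).Err()
fmt.Println(err)
// Output: error message goes here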