Microservices in Go: Caching using memcached
Jan 30, 2021

Understanding when and how to use caching is important when building microservices. In this post I will discuss general concepts about caching, cover some concrete details about memcached, and introduce the de-facto package for using memcached in Go: github.com/bradfitz/gomemcache.

Why is caching necessary?

Caching is not a new idea, but in microservices specifically it is a concept we have to understand clearly in order to use it properly: to avoid overusing it, and to know how to react to the problems excessive caching can cause.

When used properly, caching allows us to return results to our users faster. Those results could be database records, rendered pages or any other expensive computation.

On the other hand, if caching is used incorrectly it can lead to extra latency (in the case of distributed datastores like memcached), running out of memory (in the case of local in-process caching), stale results, or even internal errors that make our services fail.

That’s why, before considering caching something, we need clear answers to the following questions:

  1. Can we speed up the results in some other way?
  2. Do we know exactly how to invalidate the results?
  3. Are we using distributed caching or in-process caching? Are the pros/cons clear?

Let’s expand those questions a bit more.

Can we speed up the results in some other way?

This depends on what exactly we are trying to cache. For example, in the case of computations, perhaps we could pre-compute those values in advance, save them in a persistent store and then query them as needed.

If we are talking about a complex algorithm, say a call that requires sorting results, maybe we could change the algorithm itself instead.

In more concrete cases, like when building an HTTP service that returns assets, using a CDN (Content Delivery Network) may make more sense.

Do we know exactly how to invalidate the results?

When caching, the last thing we want is to return stale results; that’s why knowing when to invalidate them is important.

The usual route when determining this is time-based expiration. Say we are caching values that are recalculated daily at 10AM; with that time as a reference, we can set the expiration to the time left before the next calculation happens.

In more complex architectures this can be done on demand using events, where the producers of those changes emit events that are used to invalidate the currently cached values.
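A minimal sketch of event-driven invalidation might look like the following; the `nameUpdated` event and the `fakeCache` are hypothetical stand-ins, but the real `*memcache.Client` from gomemcache satisfies the `deleter` interface via its `Delete` method:

```go
package main

import "fmt"

// deleter is the subset of the memcached client we need here.
type deleter interface {
	Delete(key string) error
}

// nameUpdated is a hypothetical event emitted when a record changes.
type nameUpdated struct {
	NConst string
}

// invalidate drops the cached entry so the next read falls through
// to the persistent store and re-populates the cache.
func invalidate(c deleter, ev nameUpdated) error {
	return c.Delete(ev.NConst)
}

// fakeCache records deleted keys, for demonstration only.
type fakeCache struct{ deleted []string }

func (f *fakeCache) Delete(key string) error {
	f.deleted = append(f.deleted, key)
	return nil
}

func main() {
	f := &fakeCache{}
	_ = invalidate(f, nameUpdated{NConst: "nm0000001"})
	fmt.Println(f.deleted[0]) // nm0000001
}
```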

In the end, what matters is to always have a way to invalidate those results.

Are we using distributed caching or in-process caching? Are the pros/cons clear?

Distributed caching is a good solution when a microservice has multiple instances, because all of them can refer to the same results. However, it adds another network call to our service that could slow things down; measuring those calls, as well as knowing which keys are being used, helps us determine what to change if hot keys are present.

In memcached specifically, hot keys can really hinder our microservice. This happens when using a cluster of memcached servers and some keys are so popular that they are always routed to the same instance. This increases network traffic and slows down the whole process; ways to fix this problem include replicating the cached data or using in-process caching.
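One common way to replicate a hot key, sketched below under the assumption of a hash-based cluster, is to store n copies under suffixed keys so they hash to different instances; reads pick a suffix at random, while writes must update every copy:

```go
package main

import (
	"fmt"
	"math/rand"
)

// replicatedKey spreads a hot key over n copies by appending a
// random suffix. Each copy may hash to a different memcached
// instance, so reads no longer all hit the same server.
func replicatedKey(key string, n int) string {
	return fmt.Sprintf("%s-%d", key, rand.Intn(n))
}

// allReplicaKeys returns every copy of the key, which a writer
// must update (or invalidate) to keep the replicas consistent.
func allReplicaKeys(key string, n int) []string {
	keys := make([]string, n)
	for i := range keys {
		keys[i] = fmt.Sprintf("%s-%d", key, i)
	}
	return keys
}

func main() {
	// A read picks one of the 4 replicas at random.
	fmt.Println(replicatedKey("nm0000001", 4))
	// A write touches all of them.
	fmt.Println(allReplicaKeys("nm0000001", 4))
}
```

The trade-off is extra write traffic and the risk of briefly inconsistent replicas, which is usually acceptable for read-heavy hot keys.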

In-process caching is another way to handle caching. However, because the cached values live in each instance's memory, we have to know how much memory we have and therefore how much data we can store. With this solution we have no way to invalidate results across the board without interacting with the instances directly, but we know for sure the extra network call will not happen.

Caching using memcached

The code for the examples below is available on GitHub.

According to the official website (emphasis mine):

Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.

The memcached API is simple yet powerful; there are two methods we will use most of the time: Get and Set. The important thing to keep in mind before using them is to convert the data to []byte. For example, assume we have the struct type Name:

// server.go
type Name struct {
	NConst    string `json:"nconst"`
	Name      string `json:"name"`
	BirthYear string `json:"birthYear"`
	DeathYear string `json:"deathYear"`
}

Using NConst as the key for the cached record, we set the value by first encoding it with encoding/gob:

// memcached.go
func (c *Client) SetName(n Name) error {
	var b bytes.Buffer

	if err := gob.NewEncoder(&b).Encode(n); err != nil {
		return err
	}

	return c.client.Set(&memcache.Item{
		Key:        n.NConst,
		Value:      b.Bytes(),
		// Expiration takes either relative seconds (up to 30 days)
		// or an absolute Unix timestamp, as used here.
		Expiration: int32(time.Now().Add(25 * time.Second).Unix()),
	})
}

Then a similar process happens when getting it: the value is decoded if it exists:

// memcached.go
func (c *Client) GetName(nconst string) (Name, error) {
	item, err := c.client.Get(nconst)
	if err != nil {
		return Name{}, err
	}

	b := bytes.NewReader(item.Value)

	var res Name

	if err := gob.NewDecoder(b).Decode(&res); err != nil {
		return Name{}, err
	}

	return res, nil
}

In the code example we have a hypothetical HTTP server that returns those values from a persistent database; the actual usage looks like this:

// server.go
router.HandleFunc("/names/{id}", func(w http.ResponseWriter, r *http.Request) {
	id := mux.Vars(r)["id"]

	val, err := mc.GetName(id)
	if err == nil {
		renderJSON(w, &val, http.StatusOK)
		return
	}

	name, err := db.FindByNConst(id)
	if err != nil {
		renderJSON(w, &Error{Message: err.Error()}, http.StatusInternalServerError)
		return
	}

	_ = mc.SetName(name) // XXX: consider error

	renderJSON(w, &name, http.StatusOK)
})

The workflow is always the same:

  1. Get the value from memcached; if it exists, return it.
  2. If it does not exist, query the original data store and store the result in memcached.

Final thoughts

Caching is a great way to improve the user experience of our services because it allows us to return results to our customers faster. With memcached specifically, we need to measure usage to determine when to scale out, or perhaps add extra caching mechanisms, to keep the experience ideal.

Cache all the things, but make sure you know why!
