Software Architecture in Go: Extensibility
Feb 07, 2025

Disclaimer: This post includes Amazon affiliate links. Clicking on them earns me a commission and does not affect the final price.

Welcome to another post in the series covering Quality Attributes / Non-Functional Requirements. This time, let’s talk about Extensibility.



What is Extensibility?

According to Neal Ford, Mark Richards, Pramod Sadalage and Zhamak Dehghani in their book Software Architecture: The Hard Parts, Extensibility is:

The ability to add additional functionality as the service context grows.

In other words, Extensibility means adding or updating features without significant changes in the existing codebase.

A typical example of Extensibility is a payment service that supports multiple payment methods. The service already handles credit cards, gift cards, and PayPal transactions, and because of a well-implemented codebase, it can be updated to support other payment methods, such as Apple Pay or Samsung Pay, without too much effort.

Extensibility - Payment Method Example
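
To make that concrete, here is a minimal sketch, not taken from any real payment codebase, of how such a service could be modeled in Go: every provider implements a small PaymentMethod interface, so adding Apple Pay or Samsung Pay later only means writing one more implementation. All names below are illustrative.

package payment

import (
	"context"
	"fmt"
)

// PaymentMethod is the abstraction every payment provider implements.
type PaymentMethod interface {
	Charge(ctx context.Context, amountCents int64) error
}

// CreditCard is one concrete implementation; PayPal, gift cards, Apple Pay,
// and others would each be another type satisfying the same interface.
type CreditCard struct{}

func (CreditCard) Charge(ctx context.Context, amountCents int64) error {
	// Call the card processor here.
	return nil
}

// Service depends only on the interface, so new payment methods plug in
// without changing this type.
type Service struct {
	methods map[string]PaymentMethod
}

func (s *Service) Pay(ctx context.Context, method string, amountCents int64) error {
	m, ok := s.methods[method]
	if !ok {
		return fmt.Errorf("unsupported payment method: %s", method)
	}

	return m.Charge(ctx, amountCents)
}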

Hands-on Extensibility

Extensibility is a non-functional requirement that takes some professional experience to get right, because you need to envision changes that may occur in the future while building software that balances patterns, best practices, and deadlines.

The critical bit about Extensibility is knowing how and when to abstract types out, how much Instability the implementing packages have, and the different techniques available to achieve Extensibility.

In Go, two foundational techniques, Interfaces and Dependency Injection, allow us to build extensible architectures; we also take advantage of the Go toolchain and other tooling to build our final artifacts. Let’s put all these concepts into practice and update our Microservice Example to take advantage of the Extensibility non-functional requirement.
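
Before we do, here is a tiny, purely illustrative sketch of how those two techniques combine: a service declares the interface it needs and receives a concrete implementation through its constructor, so the backend can be swapped without touching the service itself. These names are made up, not taken from the example repository.

package todo

import "context"

// Indexer abstracts the search backend; the service never names a concrete type.
type Indexer interface {
	Index(ctx context.Context, id string, document []byte) error
}

// TaskService receives its dependency through the constructor: dependency injection.
type TaskService struct {
	indexer Indexer
}

// NewTaskService lets the caller decide which concrete Indexer to inject,
// for example an Elasticsearch-backed one in production and a fake in tests.
func NewTaskService(indexer Indexer) *TaskService {
	return &TaskService{indexer: indexer}
}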

Step 1: Refactor “Search” to become multi-broker enabled

The code used in this section is available on GitHub.

The Search feature works as follows: once a record is created or updated, an event is published via a message broker. The event is then consumed by an asynchronous process in charge of indexing the values in Elasticsearch.

The current implementation uses Kafka.

Extensibility - Search feature

To refactor Search, we start by defining a new interface type, MessageBrokerPublisher, in main.go:

type MessageBrokerPublisher interface {
	Publisher() service.TaskMessageBrokerPublisher
	Close() error
}

This type defines the methods we need to interact with different brokers. In this case, it is used in a new kafka.go file to initialize the concrete Kafka implementation:

func NewMessageBrokerPublisher(conf *envvar.Configuration) (MessageBrokerPublisher, error) {
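
The concrete type behind that constructor could look something like the sketch below; the field type assumes the Confluent Kafka Go client, and the exact names in the repository may differ. What matters is that it satisfies MessageBrokerPublisher:

// KafkaMessageBroker wires the Kafka producer to the publisher abstraction.
// This is a sketch; the actual field types in the repository may differ.
type KafkaMessageBroker struct {
	producer  *kafka.Producer // assumes github.com/confluentinc/confluent-kafka-go/kafka
	publisher service.TaskMessageBrokerPublisher
}

// Publisher returns the publisher used by the Task service.
func (m *KafkaMessageBroker) Publisher() service.TaskMessageBrokerPublisher {
	return m.publisher
}

// Close releases the underlying Kafka resources.
func (m *KafkaMessageBroker) Close() error {
	m.producer.Close()

	return nil
}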

This change paves the way for future brokers. Next, we break the original compose.yml file into two, as sketched right after this list:

  • compose.common.yml: defines the services that are always used and could be extended via the extends directive, and
  • compose.kafka.yml: defines the Kafka-only services.
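
For reference, here is a minimal sketch of what compose.common.yml could look like; the service names match the extends targets used later in this post, while the build details are illustrative rather than copied from the repository:

# compose.common.yml (sketch): shared definitions meant to be extended, not run directly.
services:
  rest-server-common:
    build:
      context: .
      dockerfile: rest-server.Dockerfile           # Dockerfile shown later in this post
  elasticsearch-indexer-common:
    build:
      context: .
      dockerfile: elasticsearch-indexer.Dockerfile # assumed name, not shown in this post

compose.kafka.yml then extends those services, in the same way the RabbitMQ and Redis files shown in the next steps do.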

With those changes in place, let’s assume our leadership asks us to support different ways to produce and consume events, such as using RabbitMQ or Redis.

How do we update our service so it works nicely with any of those three options, Kafka, RabbitMQ, or Redis, and generates artifacts that can be deployed as needed?

In Go, we’ll take advantage of Interface types, Dependency Injection, and build constraints; and because we are using Docker Compose to build the images, we can write our compose files so they share common functionality and separate the different workflows via Docker build arguments and the docker compose extends directive.

But, what are build constraints? According to the official documentation:

A build constraint, also known as a build tag, is a condition under which a file should be included in the package. Build constraints are given by a line comment that begins

//go:build

Step 2: Enable “Search” to use RabbitMQ

The code used in this section is available on GitHub.

To enable RabbitMQ we need to make the following changes:

  • Use build constraints for both the Kafka implementation and the new RabbitMQ version, and
  • Create a new docker compose file to use RabbitMQ, and reuse the extends directive like before.

For the first step, we update the kafka.go file so it is not built when the rabbitmq tag is present:

//go:build !rabbitmq

package main

import (

We also need to update the rest-server.Dockerfile to support a Docker ARG that is then used as the build tag:

FROM golang:1.23.4-bookworm AS builder

# Explicitly NOT setting a default value
ARG TAG

WORKDIR /build/

COPY . .

ENV CGO_ENABLED=1 \
    GOOS=linux

RUN go mod download && \
    go build -a -installsuffix cgo -ldflags "-extldflags -static" -tags=$TAG \
      github.com/MarioCarrion/todo-api/cmd/rest-server
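
Because TAG deliberately has no default value, the broker selection is made explicitly at image-build time; the value can be supplied directly, for example with something like docker build --build-arg TAG=rabbitmq -f rest-server.Dockerfile ., or, as in this post, through the build.args section of each compose file.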

In the new rabbitmq.go file we set the rabbitmq tag so it builds only for RabbitMQ:

//go:build rabbitmq

package main

import (

And like we did before, we implement a corresponding new type, in this case RabbitMQMessageBroker, together with the NewMessageBrokerPublisher function, in the rabbitmq.go file:

type RabbitMQMessageBroker struct {
	producer  *cmdinternal.RabbitMQ
	publisher service.TaskMessageBrokerPublisher
}

// NewMessageBrokerPublisher initializes a new RabbitMQ Broker.
func NewMessageBrokerPublisher(conf *envvar.Configuration) (MessageBrokerPublisher, error) { //nolint: ireturn
	client, err := cmdinternal.NewRabbitMQ(conf)
	if err != nil {
		return nil, internal.WrapErrorf(err, internal.ErrorCodeUnknown, "internal.NewRabbitMQ")
	}

	return &RabbitMQMessageBroker{
		producer:  client,
		publisher: rabbitmq.NewTask(client.Channel),
	}, nil
}
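
To satisfy the MessageBrokerPublisher interface defined earlier, the type also needs the Publisher and Close methods; a sketch could look like this (the exact cleanup logic in the repository may differ):

// Publisher returns the RabbitMQ-backed publisher used by the Task service.
func (m *RabbitMQMessageBroker) Publisher() service.TaskMessageBrokerPublisher {
	return m.publisher
}

// Close releases the underlying RabbitMQ resources; this assumes the internal
// client exposes a Close method.
func (m *RabbitMQMessageBroker) Close() error {
	return m.producer.Close()
}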

For the new compose.rabbitmq.yml we do the same as we did for the Kafka change, but use concrete names to avoid conflicting with the Kafka versions:

services:
  rest-server-rabbitmq:
    extends:
      file: compose.common.yml
      service: rest-server-common
    build:
      args:
        TAG: rabbitmq
  # ...
  elasticsearch-indexer-rabbitmq:
    extends:
      file: compose.common.yml
      service: elasticsearch-indexer-common

Step 3: Enable “Search” to use Redis

The code used in this section is available on GitHub.

This “Step 3” is the last one, and its goal is to enable us to run all services locally quickly. Of course, I do not recommend Redis as a broker for a production service because it’s not the right tool for that job; however, it is a good way to test new changes and develop quickly.

Extensibility - Kafka, RabbitMQ and Redis

This implementation is similar to the one we did for RabbitMQ in “Step 2”; however, it defaults to using Redis instead. To do so, we:

  • Set an explicit kafka build constraint for the Kafka implementation,
  • Create the new Redis implementation and make it the default by excluding both the rabbitmq and kafka tags, and
  • Implement a new compose.redis.yaml file.

To set the kafka constraint, we update kafka.go with this build tag:

//go:build kafka

package main

import (

Next, the new redis.go implementation excludes the other two:

//go:build !kafka && !rabbitmq

package main

import (
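
Its contents mirror the RabbitMQ version from “Step 2”; the sketch below follows the same pattern, with the helper and type names being assumptions rather than the repository’s exact code:

// RedisMessageBroker mirrors the Kafka and RabbitMQ implementations.
// The field type and helper names below are assumptions following the pattern above.
type RedisMessageBroker struct {
	producer  *cmdinternal.Redis
	publisher service.TaskMessageBrokerPublisher
}

// NewMessageBrokerPublisher initializes a new Redis Broker.
func NewMessageBrokerPublisher(conf *envvar.Configuration) (MessageBrokerPublisher, error) { //nolint: ireturn
	client, err := cmdinternal.NewRedis(conf) // hypothetical helper, like cmdinternal.NewRabbitMQ
	if err != nil {
		return nil, internal.WrapErrorf(err, internal.ErrorCodeUnknown, "internal.NewRedis")
	}

	return &RedisMessageBroker{
		producer:  client,
		publisher: redis.NewTask(client), // assumes a Redis-backed TaskMessageBrokerPublisher
	}, nil
}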

And finally, the new compose.redis.yaml defines the new services using concrete names, like we did before:

services:
  rest-server-redis:
    extends:
      file: compose.common.yml
      service: rest-server-common
    build:
      args:
        TAG: redis
  # ...
  elasticsearch-indexer-redis:
    extends:
      file: compose.common.yml
      service: elasticsearch-indexer-common
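
With the three compose files in place, each flavor can be built and run on its own, for example with something along the lines of docker compose -f compose.redis.yaml up --build, and the resulting binary only contains the broker code selected by its build tag.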

Conclusion

Extensibility is not an easy non-functional requirement to implement from the get-go. It takes professional experience and design time to consider what may or may not happen in the future. However, learning what the language, the toolchain, and the external tooling provide allows us to implement a codebase that can be extended without too many changes.


If you’re looking to sink your teeth into more Software Architecture and Testability, I recommend the following content:

