Microservices in Go: Accessing PostgreSQL (Part 2) - ORM/ORM-like
Feb 10, 2021

This post is part 2 in a series:

The full code example mentioned in this post is available on GitHub; please make sure to read the README for specifics.

Similar to the previous post, the examples below use a database table with the following structure:

-- db/migrations/20210126023417_create_names_table.up.sql
CREATE TABLE names (
	nconst              varchar(255),
	primary_name        varchar(255),
	birth_year          varchar(4),
	death_year          varchar(4) DEFAULT '',
	primary_professions varchar[],
	known_for_titles    varchar[]
);

Using ORM/ORM-like packages for accessing PostgreSQL databases

Object Relational Mapping tools are often chosen because of familiarity with similar tools in other programming languages, a lack of SQL knowledge, a preference for not writing SQL statements by hand, or simply not having enough time because the actual software needs to be built; all of those are fair reasons, and ORMs are certainly a way to speed up initial development.

Understanding the pros and cons of any ORM is important because, depending on how long our microservice stays in service and on the complexity of the queries it executes, we could face performance issues in the long term.

I will be covering two of the most popular ORM packages out there; neither of them is PostgreSQL-specific, and both support other databases:


go-gorm/gorm is one of the most popular ORMs in Go, mature and with good documentation; to date it supports sqlite, mysql, postgres and sqlserver.

gorm works by having you implement struct types, or models, meant to represent database tables. It suggests a few conventions that should allow you to quickly implement the minimum needed for interacting with the database; nothing is really enforced, and there are options to override those conventions.

For example, if the struct type defines an ID field, that field is assumed to be the primary key; however, there are options to explicitly tell gorm which field to use and even which column name to map it to. This is better demonstrated with the code below:

// postgresql_gorm.go
type gormNames struct {
	NConst    string `gorm:"primaryKey;column:nconst"`
	Name      string `gorm:"column:primary_name"`
	BirthYear string
	DeathYear string
}

func (gormNames) TableName() string {
	return "names"
}
Above we declared our struct type gormNames; it represents the database table names via the explicitly implemented TableName method, and we are telling gorm that some fields (NConst and Name) map to different column names, so those field names shouldn't be used to infer the final column names.
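For fields without tags, gorm's default naming strategy snake_cases the field name to derive the column name, which is why BirthYear and DeathYear need no tags (they map to birth_year and death_year) while NConst does (the convention would infer n_const, not nconst). A simplified sketch of that convention, using a hypothetical helper rather than gorm itself (gorm's real strategy also special-cases initialisms such as ID):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// snakeCase is a hypothetical helper illustrating gorm's default
// field-to-column convention: each upper-case letter starts a new
// underscore-separated, lower-cased segment.
func snakeCase(field string) string {
	var b strings.Builder
	for i, r := range field {
		if unicode.IsUpper(r) {
			if i > 0 {
				b.WriteByte('_')
			}
			b.WriteRune(unicode.ToLower(r))
			continue
		}
		b.WriteRune(r)
	}
	return b.String()
}

func main() {
	fmt.Println(snakeCase("BirthYear")) // birth_year
	fmt.Println(snakeCase("NConst"))    // n_const -- hence the explicit column tag
}
```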

Using it is as easy as:

var result gormNames

if tx := p.db.Where("nconst = ?", nconst).First(&result); tx.Error != nil {
	return Name{}, tx.Error
}
return Name{
	NConst:    result.NConst,
	Name:      result.Name,
	BirthYear: result.BirthYear,
	DeathYear: result.DeathYear,
}, nil


volatiletech/sqlboiler is another ORM available for Go; it's database-first instead of code-first like gorm, which in practice means that sqlboiler generates type-safe Go code to act as an ORM. To date it supports postgres, mysql, sqlserver, sqlite and cockroachdb.

The way sqlboiler works is better explained with the following code:

// postgresql_boilerl.go

//go:generate sqlboiler --wipe --no-tests psql

func (p *PostgreSQLboiler) FindByNConst(nconst string) (Name, error) {
	result, err := models.FindName(context.Background(), p.db, nconst)
	if err != nil {
		return Name{}, err
	}

	return Name{
		NConst:    result.Nconst,
		Name:      result.PrimaryName.String,
		BirthYear: result.BirthYear.String,
		DeathYear: result.DeathYear.String,
	}, nil
}

sqlboiler uses a configuration file to generate models specific to the tables available in the database; you can see this in action in the call to models.FindName, where the models package contains all the code sqlboiler generates for us.
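That configuration file is typically named sqlboiler.toml and tells the generator how to reach the database; a minimal sketch for the psql driver could look like the following (all connection values below are placeholders):

```toml
# sqlboiler.toml -- connection values are placeholders
[psql]
  dbname  = "dbname"
  host    = "localhost"
  port    = 5432
  user    = "user"
  pass    = "password"
  sslmode = "disable"
```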

As you can see, compared to gorm there's no need to manually define the struct types meant to represent database tables, because sqlboiler generates all that code; however, it enforces a few rules to make everything work.


ORMs are a way to abstract database access. The decision to use them depends on different things: they clearly have their pros (like speeding up development) but also their cons (like having to learn a concrete package to talk to the database). What is important when choosing to use an ORM (or not) is to know why, and more importantly to record that decision somewhere, because it may need to be revisited in the future.

Talk to you later.

Keep it up. Don’t give up.
