
3/5/2019 Advanced Redux Patterns: Normalisation – Brains & Beards

Advanced Redux Patterns: Normalisation

Wojciech Ogrodowczyk Follow
Feb 1 · 7 min read

The term normalisation comes from the database world. It refers to
transforming the schema of a database to remove redundant
information — that is, the same data stored in more than one place.

Why is it important? There are many possible reasons, but the one I
consider most important is offering a single source of truth: there's
exactly one place in the database that contains the true value of
something.

Single source of truth

For example let’s say we have a site with a list of articles and every
article has an author. A normalised state would mean that each article
would contain a reference to its author and all the author’s data would
be in a separate place, in some author store / DB table. In case of a
denormalised (not normalised) state / store / DB we could have all the
author data bundled alongside the article. Sometimes it might make
sense, especially for performance reasons (that could be a form of
caching). However, what if we have two articles, both with the same
author (we know it’s the same, because the ID is the same), but in our
denormalised store we have di erent names (one says “Joe” and the
other says “Bob”) listed under di erent articles. This could potentially
happen, when there’s been an update to the author’s name that hasn’t
been propagated well across our whole store.
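To make the failure mode concrete, here is a sketch of such a denormalised store. The article titles and author data are illustrative, not from the article:

```javascript
// Hypothetical denormalised store: the author's data is duplicated
// inside every article, so the copies can drift apart.
const denormalised = {
  articles: [
    { id: 1, title: "First article", author: { id: 7, name: "Joe" } },
    { id: 2, title: "Second article", author: { id: 7, name: "Bob" } } // stale copy
  ]
};

// Same author ID, two different names: no single place in the store
// tells us which one is the truth.
const [first, second] = denormalised.articles;
const conflict =
  first.author.id === second.author.id &&
  first.author.name !== second.author.name;
```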

That’s why the single source of truth is important. If we had it, we’d
know where to look to gure out which name is the correct one and
update all the redundant (cached) copies.

Another reason to go for a normalised state in Redux is performance. If
you have deeply nested structures, it's difficult to traverse them. It's
much easier to find your data if you have it all filed by ID in a
dictionary / hash / map structure.

Imagine the performance of finding an article by its ID in a big array of
articles (you'd have to go through the whole collection, checking IDs
along the way, until you find it) compared to finding it in a dictionary
structure, where you can just fetch it directly by the ID.
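The two access patterns can be sketched like this (the data here is illustrative):

```javascript
// The same articles stored two ways.
const articlesArray = [
  { id: 1, title: "Dagon" },
  { id: 2, title: "Azathoth" },
  { id: 3, title: "At the Mountains of Madness" }
];
const articlesById = {
  1: { title: "Dagon" },
  2: { title: "Azathoth" },
  3: { title: "At the Mountains of Madness" }
};

// Array: scan the whole collection until the ID matches (O(n)).
const fromArray = articlesArray.find(article => article.id === 3);

// Dictionary: fetch directly by key (O(1)).
const fromDictionary = articlesById[3];
```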

Normalisation is good, they said.

Creating a normalised Redux store

Often, the structure of the stores in our applications reflects the format
of the data that we receive from the API. However, it doesn't have to be
this way.

Usually, the format of the data your app receives from a REST API
reflects how the backend treats that data, which will probably be
different from how it should be structured for you. For example,
imagine a blogging platform where we use a mobile app to write new
articles and browse what we've already written.

Backend

The backend side needs to handle thousands of different users, so when
one of them signs in using the mobile app and fetches their articles,
here's what it does: it looks up this particular user, finds the collection
of all their articles, and sends it back to the app. Probably as an array.
Maybe something like this:

```javascript
{
  articles: [
    {
      id: 1,
      title: "Dagon",
      tags: [{ id: 1, name: "old ones" }, { id: 2, name: "short story" }]
    },
    {
      id: 2,
      title: "Azathoth",
      tags: [{ id: 1, name: "old ones" }, { id: 3, name: "novel" }]
    },
    {
      id: 3,
      title: "At the Mountains of Madness",
      tags: [{ id: 3, name: "novel" }, { id: 4, name: "insanity" }]
    }
  ]
}
```

In a lot of apps, those articles would just be stored in the same form as
they were received — as an array. This makes them easy to store (you can
just save the JSON object you received), but (a bit more) difficult to
fetch afterwards.

Instead of doing that, we should think about what the best schema for
us would be. We'd probably do a lot of accessing by article ID, so it
would be nice to store articles in a dictionary that allows direct
access. Also, we might notice there's some data redundancy in the tags:
the backend returned a bunch of strings that get repeated a lot. We
could do better by assigning them IDs, storing them in a separate
table / dictionary, and just referencing them in articles. For example,
our store could look like this:

```javascript
{
  articles: {
    1: { title: "Dagon", tags: [1, 2] },
    2: { title: "Azathoth", tags: [1, 3] },
    3: { title: "At the Mountains of Madness", tags: [3, 4] }
  },
  tags: {
    1: "old ones",
    2: "short story",
    3: "novel",
    4: "insanity"
  }
}
```

This store is optimised for our needs and for how we want to access the
data inside the mobile app that we're running. But I can hear you already:

It's such a bore to do this data massaging manually… I don't want to code
it by hand!

Automating normalisation
Of course, we should automate and speed up this process (and avoid
bugs that might pop up in our custom data massaging code). We could
use normalizr for that. Here's how that could look.

Normalising data using a schema

First, we’ll need to de ne the schema that we want to use. In our case
we have two objects (entities): tag and article . We’d have to
describe the schema of the object that we originally receive. For the
example above, our schema would be the following: 4/9
3/5/2019 Advanced Redux Patterns: Normalisation – Brains & Beards

```javascript
import { normalize, schema } from "normalizr";

const tag = new schema.Entity("tags", {});
const article = new schema.Entity("articles", {
  tags: [tag]
});

const normalizedData = normalize(originalData, { articles: [article] });
```

The above code defines what a tag (a {} object means that it doesn't
have any nested objects inside) and an article (something that
contains an array of tags ) look like. Then we normalize the
data by passing in the original object we got from the API and a schema
that describes it ( { articles: [article] } tells us we're dealing with
an array of articles).

Here’s how the normalised state would look like:

```javascript
{
  entities: {
    tags: {
      "1": { id: 1, name: "old ones" },
      "2": { id: 2, name: "short story" },
      "3": { id: 3, name: "novel" },
      "4": { id: 4, name: "insanity" }
    },
    articles: {
      "1": { id: 1, title: "Dagon", tags: [1, 2] },
      "2": { id: 2, title: "Azathoth", tags: [1, 3] },
      "3": { id: 3, title: "At the Mountains of Madness", tags: [3, 4] }
    }
  },
  result: { articles: [1, 2, 3] }
}
```

At first glance, it looks a bit weird with the two main keys: entities
and result , so let's explain what those are:

• Entities are all the objects that are referenced in our data. We
keep them sorted first by their type ( tags and articles ) and
then by their IDs. This lets us look them up easily. We can think
of it as a dictionary of all the objects in our world.

• Result is the simplified version of what we passed into the
normalising function. In our case we passed a list of articles with
all of their nested objects, so we get back… a list of articles, just
simplified to use the dictionary / entities references. That's why it
looks so simple: { "articles": [1, 2, 3] } . It's up to us to use
this result in a reasonable way in our app.
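To turn that result back into render-ready data, we can map the IDs through the entities dictionaries. A minimal sketch, assuming the normalised shape shown above (abridged to one article):

```javascript
// The normalised output from above, abridged.
const normalised = {
  entities: {
    tags: {
      "1": { id: 1, name: "old ones" },
      "2": { id: 2, name: "short story" }
    },
    articles: {
      "1": { id: 1, title: "Dagon", tags: [1, 2] }
    }
  },
  result: { articles: [1] }
};

// Resolve each article ID, then resolve its tag IDs to tag names.
const renderReady = normalised.result.articles.map(id => {
  const article = normalised.entities.articles[id];
  return {
    ...article,
    tags: article.tags.map(tagId => normalised.entities.tags[tagId].name)
  };
});
```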

Normalising state for Redux

So, the question arises — how do we use this normalised state to keep
track of things in our app? We should probably separate two things:

Keeping track of what things are

One thing that we absolutely need to store in our Redux state is the
dictionary of all the objects in our world. That's the entities part
described above. Every article or tag that we want to reference in our
app needs to be stored there and be easily accessible by ID.
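With that dictionary in place, lookups become one-line selectors. A sketch with illustrative names (the selector names and state shape are assumptions, not from the article):

```javascript
// Hypothetical selectors over an entities-style state shape.
const selectArticleById = (state, id) => state.entities.articles[id];
const selectTagById = (state, id) => state.entities.tags[id];

const state = {
  entities: {
    articles: { "2": { id: 2, title: "Azathoth", tags: [1, 3] } },
    tags: {
      "1": { id: 1, name: "old ones" },
      "3": { id: 3, name: "novel" }
    }
  }
};

// Direct access by ID, no scanning required.
const azathoth = selectArticleById(state, 2);
```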

Keeping track of everything else

Of course, it’s an arbitrary case-by-case choice whether a particular
piece of data should be stored as an entity, or as something else. In a lot
of apps we store data that is only used to power the user experience.
For example, we might need to store an ID of the article that has been
edited last, or an error message that we received from a recent API call,

Sometimes we can refactor it to use a local component state for such

things (and it’d be better to do so), but not always. Sometimes we have
a good reason to store it in the globally accessible state. In such case it
might not make much sense to try to force it into a normalised form.

For example, that’s how our entire Redux state could look like for our
article publishing app:

1 {
2 tags: {
3 "1": { id: 1, name: "old ones" },
4 "2": { id: 2, name: "short story" },
5 "3": { id: 3, name: "novel" },
6 "4": { id: 4, name: "insanity" }
7 },
8 articles: {
9 "1": { id: 1, title: "Dagon", tags: [1, 2] },
10 "2": { id: 2, title: "Azathoth", tags: [1, 3] },

We can see here the entities part, where we describe all the tags and
articles that we're dealing with, and apart from that a couple of
fields where we keep track of some values that help us with the
application's UI.

For example, we have an errorMessage variable that we could use to
provide a floating error message that follows the user around through
different screens and only disappears after they acknowledge its
existence. And a lastArticleWorkedOn field that helps the app re-open
the last article that the user was working on when the app restarts.
Updating normalised state

At first glance, this might seem like a huge hassle — do I really have
to convert all this data that comes from the backend API calls?
Fortunately, the normalizr schema that we set up earlier comes to the
rescue.

Whenever we get new articles fetched from the API, we can use our
schema, convert them to our entities, and trigger a Redux action to
update the entities:

```javascript
// We assume that we already received some `apiResponse`
// and defined our schema as `articleSchema`
const normalizedArticles = normalize(apiResponse, { articles: [articleSchema] });
```

Then, in the Redux reducer, we can handle the state update as simply
as this:

```javascript
export function reducer(state = {}, action) {
  switch (action.type) {
    // The action type name here is illustrative.
    case "UPDATE_ARTICLES":
      return {
        ...state,
        ...action.payload.articles
      };
    default:
      return state;
  }
}
```
Using a normalised state in your Redux-powered mobile application
doesn't need to involve a lot of manual (and error-prone) data
massaging and can be automated. There's no need to let the backend
data format influence the way we store the application state on the
frontend side; we can choose formats and schemas that are better
optimised for read performance and store size.

I hope you enjoyed this introduction to the topic of using a normalised

state in a Redux application and you will give it a shot next time you’re
deciding on your Redux store schema.

We’re working on presenting more of advanced Redux usage patters,

so watch this space to see them published soon. Of course, if you
enjoyed reading this article, make sure you share it with your fellow
developers! 8/9