It's all about relations

In FP we talk a lot about algebras, functions, and types. But types are sets, and functions are also sets. Algebras are types with operations that fulfill some conditions, which again means sets. So how low should we go if we want to start from the beginning? I would say we should start with finitary relations.


So, let us begin with the question: what is a relation? We usually assume that it is a bond between 2 things, e.g. brother-sister, father-son, mother-daughter. If we take a look at the father-son relation, we can say that it carries information about some kind of direction or difference between both sides of the relation (if A is a son of B, B is not a son of A). In a more general sense a relation might connect more than 2 things, e.g. if Alice is the youngest sibling of Bob, and they both also have the oldest sibling Charlie, we could say that Bob is in a middle-child relation with his siblings.

So, our definition of a relation must allow the sides of a relation not to be interchangeable, and must allow potentially any number of sides. There is already a construction that allows that: the $n$-tuple.

But can we just take a Cartesian product and call it our relation? Not necessarily. In the parenthood relation $(father, son)$ is a valid element (the $father$ is a parent of the $son$), and it belongs to the set $People \times People$, but the Cartesian product also contains $(son, father)$ (which here would mean that the $son$ is a parent of the $father$). Therefore, we cannot demand our relation to be a whole Cartesian product, but a subset will do. As a matter of fact, that is how mathematicians define it:

A (finitary) relation is a subset of a Cartesian product.
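To make the definition concrete, here is a minimal sketch in Scala (all names and data are made up for illustration): a binary relation is literally a set of pairs, i.e. a subset of a Cartesian product.

```scala
// A binary relation on People, as a subset of People × People.
val people = Set("Alice", "Bob", "Charlie")

// the full Cartesian product: every possible pair
val cartesian: Set[(String, String)] =
  for { a <- people; b <- people } yield (a, b)

// the relation itself is just a subset of that product
val isParentOf: Set[(String, String)] =
  Set(("Alice", "Bob"), ("Bob", "Charlie"))

// membership check: is Alice a parent of Bob? is Bob a parent of Alice?
isParentOf(("Alice", "Bob")) // true
isParentOf(("Bob", "Alice")) // false -- the sides are not interchangeable
```

Note that a `Set[(A, B)]` in Scala captures exactly the mathematical definition: order within pairs matters, and only some pairs belong to the relation.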

Now, that we know how we define them, let us learn some properties a relation can have and how we could name them.

n-ary relations

The first thing we can notice is the number of sets that were used to create the Cartesian product we cherry-picked our relation from (its arity). Later on, we’ll see a nice correspondence between relations and functions (and their arities).

Unary relation

Unary always implies a single input. Here it is a subset of a Cartesian product of one set… which is basically a normal subset. There is no benefit to calling it a unary relation if we can call it just a subset, but it’s worth knowing just in case.

Binary relation

The most commonly used one: a relation between 2 things. It’s worth mentioning that there is a special notation for saying that something belongs to our relation set.

$\mathbb{R} \subseteq \mathbb{A} \times \mathbb{B}$

$(a, b) \in \mathbb{R} \iff a \ \mathbb{R} \ b$

Ternary relation

A relation between 3 things.

$\mathbb{R} \subseteq \mathbb{A} \times \mathbb{B} \times \mathbb{C}$

$(a, b, c) \in \mathbb{R}$

An example we used earlier was the middle-child relation. Another could be joins (as in: an edge x joins points y and z).

I have hardly ever seen anyone use a distinct name for a relation (or a function) with more than 3 arguments. For anything bigger expect a name like 4-ary relation, 5-ary relation, etc.

Relation’s properties

The next thing we can examine when we take a closer look at our relation is what properties it has. Here I describe some of the more commonly used ones.

Reflexive relation

Let’s say we have a group of narcissists ($\mathbb{N}$) and we are doing research on them: who likes whom. Alice might like Bob, Bob might like Charlie, etc. But since they are all narcissists, each of them also likes him/herself. We could describe it mathematically as:

$\forall_{n \in \mathbb{N}} \ n \ likes \ n$

A relation where we can always say that any x is in relation with itself is called a reflexive relation.

Just to be sure: in a reflexive relation each element is always in relation with itself, but it can also be in relation with something else. Just think of a has-the-same-color relation between cars to imagine it.
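The has-the-same-color example can be sketched in Scala; on a finite set we can even check reflexivity mechanically (the cars and colors below are made up):

```scala
// A few made-up cars and their colors.
val cars  = Set("mini", "beetle", "tank")
val color = Map("mini" -> "red", "beetle" -> "red", "tank" -> "green")

// the has-the-same-color relation, as a set of pairs
val sameColor: Set[(String, String)] =
  for { a <- cars; b <- cars; if color(a) == color(b) } yield (a, b)

// reflexivity check: every element must be related to itself
def isReflexive[A](domain: Set[A], r: Set[(A, A)]): Boolean =
  domain.forall(x => r((x, x)))

isReflexive(cars, sameColor)   // true -- each car has its own color
sameColor(("mini", "beetle"))  // true -- but elements can relate to others too
```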

Symmetric relation

We want our relation definition to allow both sides to occupy unequal positions, e.g. the father is a parent of the son. But sometimes we want a particular relation to have the ability to switch sides and still hold. E.g. if Alice is a sibling of Bob, Bob is a sibling of Alice.

$\forall_{a, b} \ a \ \mathbb{R} \ b \iff b \ \mathbb{R} \ a$

If we tried to draw it as a table, where we put the elements as labels for rows and columns (in the same order for both), and marked with an x the cells showing which pairs are in the relation, we would see that the x-es create a symmetrical figure.

That is why a relation where $a \ \mathbb{R} \ b$ implies $b \ \mathbb{R} \ a$ for each $a$, $b$ is called a symmetric relation.

Antisymmetric relation

Sometimes we have a relation which carries a sense of is the same as or something more. For instance, we could compare sound frequencies and say that one frequency is the same as or higher than the other. Such a relation would not be symmetric. It would be reflexive though. Additionally, if we found 2 elements that are interchangeable, we could conclude that they are one and the same element.

$\forall_{a,b} \ a \ \mathbb{R} \ b \land b \ \mathbb{R} \ a \implies a = b$

Such a relation we call an antisymmetric relation.

Transitive relation

Sometimes a relation can be kind of inherited. For example, if we say that the father is an ancestor of the son, and the grandfather is an ancestor of the father, we know that the grandfather is also an ancestor of the son.

$\forall_{a, b, c} \ a \ \mathbb{R} \ b \land b \ \mathbb{R} \ c \implies a \ \mathbb{R} \ c$

We call such a relation a transitive relation. Be careful: we can only combine relations we know about; we cannot split one relation into two by adding an intermediate point without some additional assumption that allows us to do it.
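The three properties above can be written down as reusable predicates and checked mechanically on any finite relation; a sketch (the ancestor data is made up):

```scala
// Property checks for finite binary relations represented as sets of pairs.
def isSymmetric[A](r: Set[(A, A)]): Boolean =
  r.forall { case (a, b) => r((b, a)) }

def isAntisymmetric[A](r: Set[(A, A)]): Boolean =
  r.forall { case (a, b) => !r((b, a)) || a == b }

def isTransitive[A](r: Set[(A, A)]): Boolean =
  r.forall { case (a, b) =>
    r.forall { case (b2, c) => b != b2 || r((a, c)) }
  }

// the ancestor example: note the "inherited" (grandfather, son) pair
val ancestor = Set(
  ("grandfather", "father"),
  ("father", "son"),
  ("grandfather", "son")
)

isTransitive(ancestor)    // true
isSymmetric(ancestor)     // false -- the son is not an ancestor of the father
isAntisymmetric(ancestor) // true
```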

Now, that we know some terminology, we can take a look at some more common relations.


Preorder

A preorder (not to be confused with pre-order) is a binary relation that is both reflexive and transitive.

Quite a lot of relations can be considered preorders: is-the-same-as ($=$), is-greater-or-equal ($\geq$), is-lesser-or-equal ($\leq$), is-divisible-by ($\mid$), is-superset ($\supseteq$), is-subset ($\subseteq$)…

As a matter of fact, just by adding a symmetry or antisymmetry condition we could turn it into one of the next two relations.


Equivalence relation

An equivalence relation is a binary relation that is reflexive, transitive and symmetric - in other words, a symmetric preorder.

The most commonly known equivalence relation is equality.


$4 = 4 \iff (4,4) \in =$

($4$ is equal to $4$ means that $(4,4)$ belongs to the $=$ relation).


$2+2 = 4 \land 4 = 4+0 \implies 2+2 = 4+0$


$(2+2, 4) \in = \iff 2+2 = 4 \iff 4 = 2+2 \iff (4, 2+2) \in =$

Intuitively, all equivalence relations take a look at objects and check some of their properties to decide if they are equal. This is what we need to keep in mind when implementing contracts like .equals in Java or == in C++ (or to understand why == is broken in JavaScript).
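A sketch of that intuition in Scala (the `Money` type and its notion of equivalence are invented for illustration): we decide equivalence by comparing extracted properties, and reflexivity, symmetry and transitivity then hold by construction, because they hold for the compared properties.

```scala
// Equivalence by comparing properties: same currency, same amount.
// BigDecimal("4.0") and BigDecimal("4.00") differ in scale but not in value,
// so we compare with `compare` rather than raw equality of representations.
final case class Money(amount: BigDecimal, currency: String)

def equivalent(a: Money, b: Money): Boolean =
  a.currency == b.currency && a.amount.compare(b.amount) == 0

val a = Money(BigDecimal("4.0"), "EUR")
val b = Money(BigDecimal("4.00"), "EUR")

equivalent(a, a)                      // reflexive
equivalent(a, b) == equivalent(b, a)  // symmetric
```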

Equivalence class and quotient set

If we have some set $S$ with an equivalence relation $\sim$, this relation partitions the whole set into smaller disjoint sets of equivalent elements. Each of these sets is called an equivalence class, and the set made of such classes is called the quotient set. We denote it with $S/\sim$.

Additive group

For instance: let us take the set of integers. Now, let us take some positive natural number, e.g. $7$. Then, let us multiply all integers by that number:

$n\mathbb{Z} = \{ na : a \in \mathbb{Z} \}$

$7\mathbb{Z} = \{ 7a : a \in \mathbb{Z} \}$

We created the set of all numbers divisible by $n=7$. What would happen if we took all possible integers $b$ and calculated the sets $7\mathbb{Z}+b$?

$n\mathbb{Z} + m = \{ na + m : a \in \mathbb{Z} \}$

We would end up with:

$\ldots$

$7\mathbb{Z} - 1 = \{ 7a-1 : a \in \mathbb{Z} \} = \{ 7a+6 : a \in \mathbb{Z} \} = 7\mathbb{Z} + 6$

$7\mathbb{Z} + 0 = \{ 7a+0 : a \in \mathbb{Z} \}$

$7\mathbb{Z} + 1 = \{ 7a+1 : a \in \mathbb{Z} \}$

$7\mathbb{Z} + 2 = \{ 7a+2 : a \in \mathbb{Z} \}$

$\ldots$

$7\mathbb{Z} + 5 = \{ 7a+5 : a \in \mathbb{Z} \}$

$7\mathbb{Z} + 6 = \{ 7a+6 : a \in \mathbb{Z} \}$

$7\mathbb{Z} + 7 = \{ 7a+7 : a \in \mathbb{Z} \} = \{ 7a+0 : a \in \mathbb{Z} \} = 7\mathbb{Z} + 0$

$\ldots$

Sets are equal if they have the same elements. If we add (or subtract) exactly 7 to each element of one of the sets above, we have no choice: we end up with one of the existing sets: $7\mathbb{Z}$, $7\mathbb{Z}+1$, $7\mathbb{Z}+2$, $7\mathbb{Z}+3$, $7\mathbb{Z}+4$, $7\mathbb{Z}+5$, $7\mathbb{Z}+6$ - every other set we create will be identical to one of these 7!

Anyone who has ever done division with a remainder can notice that we basically created sets of numbers with the same remainder in division by 7.

$a =_{n} b \iff rem(a,n) = rem(b, n)$

Calculating such a remainder is often called the modulo operation, and the corresponding equivalence relation is called equality modulo $n$:

$a = b \ (mod \ n) \iff a \ mod \ n = b \ mod \ n$

Can we do something interesting about these classes? Well, we could e.g. define ++ operator for them:

$e_i + e_j := \{ a+b : a \in e_i \land b \in e_j \}$

Basically: take every element from one set and add it to every element from the other set. Interestingly, the way remainders work guarantees that each number in the set we create this way will have the same remainder.

$e_2 + e_3 = \{ a + b : a \in e_2 \land b \in e_3 \} = \{ a+b : rem(a+b, 7)=5 \} = e_5$

$e_3 + e_3 = \{ a + b : a \in e_3 \land b \in e_3 \} = \{ a+b : rem(a+b, 7)=6 \} = e_6$

$e_4 + e_3 = \{ a + b : a \in e_4 \land b \in e_3 \} = \{ a+b : rem(a+b, 7)=0 \} = e_0$

We cannot overflow, the result of our set addition becomes addition modulo $n$, and $e_0$ becomes the neutral element of the addition. Actually, that is how addition modulo $n$ is defined:

$i + j = k \ (mod \ n) \iff (n\mathbb{Z} + i) + (n\mathbb{Z} + j) = (n\mathbb{Z} + k)$

This way we defined an additive group $(\mathbb{Z}_n, +_n)$ (more about groups soon):

$\mathbb{Z}_n = \mathbb{Z} / n\mathbb{Z}$
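A quick sketch in Scala: since each equivalence class $n\mathbb{Z} + i$ is fully determined by the remainder $i$, we can represent classes by plain `Int`s and addition of classes becomes addition modulo $n$.

```scala
// Each equivalence class nZ + i is represented by its remainder i.
val n = 7

def addMod(i: Int, j: Int): Int = (i + j) % n

addMod(2, 3) // 5 -- matches e2 + e3 = e5
addMod(3, 3) // 6 -- matches e3 + e3 = e6
addMod(4, 3) // 0 -- "wraps around" to e0
addMod(5, 0) // 5 -- 0 is the neutral element
```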

Multiplicative group

Multiplication of our equivalence classes would be defined as:

$e_i \times e_j := \{ a \times b : a \in e_i \land b \in e_j \}$

Basically: take every element from one set and multiply it by every element from the other set.

$e_2 \times e_3 = \{ a \times b : a \in e_2 \land b \in e_3 \} = \{ a \times b : rem(a \times b, 7)=6 \} = e_6$

$e_3 \times e_3 = \{ a \times b : a \in e_3 \land b \in e_3 \} = \{ a \times b : rem(a \times b, 7)=2 \} = e_2$

$e_4 \times e_3 = \{ a \times b : a \in e_4 \land b \in e_3 \} = \{ a \times b : rem(a \times b, 7)=5 \} = e_5$

Our set multiplication becomes multiplication modulo $n$, with $e_1$ as the neutral element.

$i \times j = k \ (mod \ n) \iff (n\mathbb{Z} + i) \times (n\mathbb{Z} + j) = (n\mathbb{Z} + k)$

If we limit our set to only the numbers coprime with $n$, we’ll get the multiplicative group modulo $n$: $(\mathbb{Z}_n^*, \times_n)$.
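The same remainder trick works for multiplication; a sketch that also demonstrates why the coprimality restriction matters - inside the restricted set every element has a multiplicative inverse:

```scala
// The multiplicative group modulo 7: remainders coprime with the modulus.
val modulus = 7

def coprime(a: Int): Boolean = BigInt(a).gcd(BigInt(modulus)) == 1

val units: Set[Int] = (1 until modulus).filter(coprime).toSet // Z*_7

def mulMod(i: Int, j: Int): Int = (i * j) % modulus

mulMod(2, 3) // 6 -- matches e2 × e3 = e6
mulMod(3, 3) // 2 -- matches e3 × e3 = e2

// every element of the group has an inverse within the group:
units.forall(a => units.exists(b => mulMod(a, b) == 1)) // true
```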

Partial order

A partial order (on a partially ordered set, poset) is a binary relation that is reflexive, transitive and antisymmetric - or an antisymmetric preorder if you prefer.

We usually use $\leq$ as the partial order relation symbol. Numbers are partially ordered. But with (real) numbers we can always say that one number is lesser than the other. On the other hand, with sets and the is-subset relation we might say that one set is lesser than the other in some situations:

$\{a\} \subseteq \{a,b\}$

$\{b\} \subseteq \{a,b\}$

but some sets are completely unrelated!

$\{a\} \nsubseteq \{b\} \land \{b\} \nsubseteq \{a\}$

This is where the partial part came from.

Partial orders describe hierarchies. Because sets form a hierarchy (via subsets) and types are basically sets, type hierarchies in typed programming languages are also partial orders (is-subtype). Another example of such a hierarchy would be natural numbers and the divides ($\mid$) relation:

$2 \mid 4 \land 4 \mid 8 \implies 2 \mid 8$

Minimal, maximal, least and greatest elements

Interestingly, if we draw a graph of this relation for natural numbers greater than 1, we could easily spot prime numbers: they would be at the bottom of the graph.

Elements that have no elements lesser than them are called minimal elements. Similarly, elements that have nothing greater than them are called maximal elements.

Sometimes we have an element that is lesser than any other element. We would call it the least element of the partial order. Element greater than any other would be the greatest element.

If a set has the least element it is automatically (the only) minimal element of the set. The greatest element is always (the only) maximal element of the set.

On the other hand, a set with one minimal/maximal element doesn’t necessarily have the least/greatest element. Take the set of all even numbers plus the odd numbers from 1 to 9. Now let’s define the relation is-lesser-than-and-both-are-even-or-both-are-odd. All even elements would be related to each other, but unrelated to the odd elements. Odd elements would be related to each other but unrelated to the even elements. The even part would have no minimal nor maximal element. But the odd part would: 1 and 9, which are thus a minimal and a maximal element of the whole set. However, because neither is lesser/greater than every other considered number (we consider all even numbers), the set has no least nor greatest element.
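Minimal elements are easy to compute on a finite poset. A sketch using the divisibility order mentioned above: restricted to numbers from 2 to 20, the minimal elements (nothing strictly below them) are exactly the primes - and since there are several of them, there is no least element.

```scala
// Numbers 2..20 ordered by the "divides" relation.
val elems = (2 to 20).toSet

def divides(a: Int, b: Int): Boolean = b % a == 0

// minimal = no *other* element of the set sits below it
def minimalElements(s: Set[Int]): Set[Int] =
  s.filter(x => !s.exists(y => y != x && divides(y, x)))

minimalElements(elems) // the primes up to 20
```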

Infimum and supremum

If for some subset $S$ of our partially ordered set $P$ we can find the greatest $p \in P$ such that $\forall_{s \in S} \ p \leq s$, we say that $p$ is the infimum of $S$.

E.g. if our poset is the integers and we take a look at the subset $S = \mathbb{Z} \cap [0; 10]$, then every $s \in S$ satisfies $0 \leq s$. $0$ is the greatest number with this property ($-1$ and $-2$ are also lesser than or equal to everything in $S$, but they are smaller than $0$), which makes $0$ the infimum of the set $S$.

If for a subset $S$ of our partially ordered set $P$ we can find the least $p \in P$ such that $\forall_{s \in S} \ p \geq s$, we say that $p$ is the supremum of $S$.

E.g. for the integers and $S = \mathbb{Z} \cap [0; 10]$ such $p$ would be $10$.

If we think about it, the Least Upper Bound which the Scala compiler uses for type inference is basically the supremum of the types to which a value belongs.
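We can see the compiler do this with a tiny hierarchy (types invented for illustration): the inferred element type of a mixed list is the least upper bound of the element types, which conforms to their common supertype.

```scala
sealed trait Animal
final case class Dog() extends Animal
final case class Cat() extends Animal

// the compiler infers the element type as the LUB of Dog and Cat...
val pets = List(Dog(), Cat())

// ...which conforms to Animal, so this compiles without casts:
val animals: List[Animal] = pets
```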

Linear order

Such issues as the one described above disappear if we add another condition. A partial order where each 2 elements $a$, $b$ are related by either $a \leq b$ or $b \leq a$ is called a linear order (or total order). The names become obvious once we understand that this property lets us take the elements and align them in a line: a lesser one before a greater one. The number line is a great example showing why natural numbers, integers, rational and real numbers are linearly ordered.

Some numbers are not linearly ordered (e.g. complex numbers), but every subset of the real numbers is. The question is it a linear order? can often be replaced by can I index the elements with real numbers?


Well-order

A linear order in which each non-empty subset contains the least element is called a well-order (and such a set is called well-ordered).

Well-ordered sets have one nice property: we can apply proof by induction to them. E.g. if we wanted to prove that each (natural) power of 2 corresponds to some power set’s cardinality, we could do the following:

  • for $n=2^0$ we take a singleton containing the empty set, $s_0 = \{\emptyset\}$. It is the power set of $\emptyset$. Obviously, its cardinality is

    $|\{\emptyset\}| = 1 = 2^0$

    so it works so far,

  • for $n=2^{i+1}$ we take the set $s_i$ and make a copy of it in which we put into each contained set an element that is not yet there, e.g. $i+1$, and then we combine both sets of sets. So,

    • for $i+1=1$ we take $s_0$ and put $1$ into $\emptyset$, and then combine the resulting $\{\{1\}\}$ with $\{\emptyset\}$, obtaining $s_1=\{\emptyset, \{1\}\}$,
    • for $i+1=2$ we take $s_1$ and put $2$ into $\emptyset$ and $\{1\}$, and then combine $\{\{2\},\{1,2\}\}$ with $s_1=\{\emptyset, \{1\}\}$, obtaining $s_2=\{\emptyset, \{1\},\{2\},\{1,2\}\}$,
    • etc.

    For each $s_{i+1}$ the cardinality is by definition $|s_{i+1}| = 2 \cdot |s_i| = 2 \cdot 2^i = 2^{i+1}$.


Since we were able to build a power set for $2^0$, and then we showed how to build a power set of cardinality $2^{i+1}$ based on the power set of cardinality $2^i$, we provided a way to build a power set for each power of 2, which was what we wanted.
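The inductive construction above translates directly into code - a sketch that builds the power set of $\{1, \ldots, k\}$ exactly the way the proof does: copy the previous step and insert the new element into each copied set.

```scala
// Power set of {1, ..., k}, built by the same induction as the proof.
def powerSet(k: Int): Set[Set[Int]] =
  if (k == 0) Set(Set.empty[Int])   // base case: 2^0 = 1 set
  else {
    val prev = powerSet(k - 1)
    prev ++ prev.map(_ + k)         // copy + insert k: cardinality doubles
  }

powerSet(1)      // Set(Set(), Set(1)) -- the s1 from the proof
powerSet(3).size // 8 == 2^3
```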

This and many other proofs based on induction basically require that we can start from some point, and then have a way of getting closer and closer to the point where we can stop and where we know that we reached our goal.

In type-level programming we can think of such well-orders e.g. when we think about the hierarchy of types and type class derivation. Notice that each derivation - both on a product as well as on a coproduct - is basically an inductive proof where we are using the property that HList and Coproduct are well-ordered!


Lattice

Though the name looks similar to lettuce, a lattice is actually a partial order which has a supremum and an infimum for every two elements. In other words: for every two elements, there will be something greater than or equal to both of them and something lesser than or equal to both of them.

One of the best use cases for lattices I know of are access management systems.

Imagine we have a certain file system. If we grant someone access to a directory (be it read, write or execute), it propagates down to all files, subdirectories, sub-subdirectories etc. You can specify that script A has read access to /company/accounting and /company/invoices/2018. The requirement for script B is that it has read access to /company/invoices and /company/accounting/integrations-payments. These 2 sets of accesses have an infimum - read access to /company/invoices/2018 and /company/accounting/integrations-payments. They also have a supremum - read access to /company/accounting and /company/invoices. With that, we can see the subset of files that can be used by both scripts, and the set of files that could be accessed by either of the scripts.

If instead of scripts these were access policies in a company (or AWS, because they also use LBAC), we could easily model which combination of policies gives access to which resources.

Such a system is named lattice-based access control (LBAC). It’s the best and most powerful way of modeling access management in hierarchical resource structures. However, as you might feel, it does come with some complexity (you need to be able to efficiently decide whether or not something is a part of a lattice and derive relations from the way you store your data). That is why in most cases you’ll meet role-based access control. With RBAC your entitlements are usually a 1-1 mapping of the roles you have - e.g. OAuth2 entitlements. Roles are easier to implement and test, but they don’t deal well with structures where some entitlements should be somehow inherited.
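The file-system example above can be sketched with grant sets and prefix matching (a simplification of real LBAC, with directory propagation reduced to path prefixes): the meet answers "what can both access?", the join "what can either access?".

```scala
// Grants for the two scripts from the example above.
val scriptA = Set("/company/accounting", "/company/invoices/2018")
val scriptB = Set("/company/invoices", "/company/accounting/integrations-payments")

// access propagates down the tree: a grant covers itself and everything below
def covers(grants: Set[String], path: String): Boolean =
  grants.exists(g => path == g || path.startsWith(g + "/"))

// meet (infimum): accessible to both; join (supremum): accessible to either
def meet(a: Set[String], b: Set[String]): String => Boolean =
  p => covers(a, p) && covers(b, p)

def join(a: Set[String], b: Set[String]): String => Boolean =
  p => covers(a, p) || covers(b, p)

meet(scriptA, scriptB)("/company/invoices/2018/05") // true -- both reach it
meet(scriptA, scriptB)("/company/invoices/2017")    // false -- only B does
join(scriptA, scriptB)("/company/invoices/2017")    // true
```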


Function

A function is basically a relation between arguments and a returned value. The only requirement is that the same arguments should always be related to the same value.

Functions also have a nice correspondence to relations. A unary function is a binary relation. A binary function is a ternary relation. A ternary function is a 4-ary relation… A nullary function would be a unary relation. A function’s arity is basically smaller by 1 than the relation’s arity.

Of course, we are assuming that the result is always a value, not a tuple itself. What if we drop that assumption? Well, IMHO we can then explain currying as a function which is a relation between one tuple and another tuple - if we say that nested tuples are the same as flattened ones, then:

| function | cartesian | tuple | application |
| --- | --- | --- | --- |
| $f: A \times B \rightarrow C$ | $(A \times B) \times C$ | $((a, b), c)$ | $(a,b) \rightarrow c$ |
| $f: A \rightarrow B \times C$ | $A \times (B \times C)$ | $(a, (b, c))$ | $a \rightarrow \{(b_1, c_1), (b_2, c_2), ...\}$ |
| $f: A \rightarrow B \rightarrow C$ | $A \times B \times C$ | $(a, b, c)$ | $a \rightarrow b \rightarrow c$ |

all of the above are interchangeable.

This should show us why maps (or dictionaries) in many programming languages are actually functions. While expressed in a different way, they represent the same mathematical idea.
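Scala makes this literal: `Map[K, V]` extends `Function1[K, V]`, so a map can be applied and passed around like any other function (the Roman-digit data is made up):

```scala
// A Map is a function from keys to values.
val romanDigit = Map(1 -> "I", 5 -> "V", 10 -> "X")

romanDigit(5)               // "V" -- applied like a function
List(1, 10).map(romanDigit) // List("I", "X") -- passed where a function is expected
```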

But we can say a bit more about a function’s properties.

Injective function

All functions have one value for each tuple of arguments. But sometimes we want to make sure that for each different argument(s) the function returns a different result. This way we’ll have a function that has both unique arguments and unique returned values - and so we are able to revert it and obtain the arguments from the resulting value!

We call such a function a one-to-one function or an injective function. Sometimes it is a required property (basically for each 1-1 mapping). Sometimes we explicitly don’t want a function to be injective - e.g. hash functions in cryptography (there a 1-1 property would be a vulnerability).

We denote injective functions with $1-1$ above the arrow:

$f: A \xrightarrow{1-1} B$

or using an arrow with another head at its tail:

$f: A \rightarrowtail B$

A composition of injections is also an injection.

Surjective function

Sometimes we want to explicitly state that a function makes use of all values of the set that is declared as its codomain. Because, you know, it is not obvious:

$f: \mathbb{N} \rightarrow \mathbb{N}$

$f(x) = x \ mod \ 2$

This function declares the whole set of natural numbers as its codomain. However, the set of actually returned values will only be made of $0$ and $1$. On the other hand, the signature:

$f: \mathbb{N} \rightarrow \{0, 1\}$

would be precise. A function whose set of returned values exactly matches the declared codomain is called an onto function or a surjective function. We can denote it with a double-headed arrow:

$f: A \twoheadrightarrow B$

though I have also seen:

$f: A \xrightarrow{onto} B$

A composition of surjections is also a surjection.
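On finite sets both properties can be checked directly; a sketch using the $x \bmod 2$ example from above (restricted to the naturals from 0 to 10 so the domain is finite):

```scala
// Injectivity: distinct arguments give distinct results,
// i.e. the image has as many elements as the domain.
def isInjective[A, B](domain: Set[A], f: A => B): Boolean =
  domain.map(f).size == domain.size

// Surjectivity: the image is exactly the declared codomain.
def isSurjective[A, B](domain: Set[A], codomain: Set[B], f: A => B): Boolean =
  domain.map(f) == codomain

val naturals = (0 to 10).toSet

isSurjective(naturals, Set(0, 1), (x: Int) => x % 2) // true  -- onto {0, 1}
isInjective(naturals, (x: Int) => x % 2)             // false -- many collisions
isInjective(naturals, (x: Int) => x * 2)             // true
```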

Bijective function

Interesting things happen if a function is both onto and 1-1. A bijective function (or one-to-one correspondence) is always invertible. You could take the domain of $f$ to calculate its image, then create $f^{-1}$ and use it to recreate $f$’s domain from its image.

To denote a bijection we can use an arrow with 2 heads at the front and 1 at the tail:

$f: A \rightarrowtail\!\!\!\!\!\rightarrow B$


$f: A \xrightarrow[onto]{1-1} B$

A composition of bijections is also a bijection.

If our $A$ and $B$ are somehow similar, and the transformation preserves that similarity (it is a homomorphism), then our bijection is called an isomorphism.

When we say that 2 things are isomorphic, we can understand that they are one and the same thing, just in 2 different representations.

Partial and total functions

So far we have always assumed that a function returns a value for each element of its domain. Without that property it would be hard to reason about function composition. However, at some point people found that definition restricting.

That is why a function which has a defined value for each argument of its domain is now called a total function.

On the other hand, a function which defines values only for a strict subset $A'$ of some $A$ is called a partial function.

$f: A \nrightarrow B$

$f: A' \rightarrow B$

Partial functions are useful for computer scientists working with e.g. functions in computability theory, where the exact domain is not always known. In programming languages they are quite often a result of messy design, or they are used to perform filtering and transformation in one step.

For instance Scala’s PartialFunction is used in collect to do filtering and mapping at once:

List(1,2,3,4,5,6,7,8,9,10) collect {
  case i if i % 2 == 0 => i.toString
}
// List("2", "4", "6", "8", "10")

or for handling only certain kinds of errors:

Future(performAction()) recover {
  case e: SpecialKindOfException => // ...
}

In general, though, partial functions cause more trouble than they are worth.
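One standard way to tame them in Scala is `PartialFunction.lift`, which turns a partial function into a total one returning `Option` - making the "undefined" part of the domain explicit in the type:

```scala
// A partial function defined only for even numbers.
val halveEven: PartialFunction[Int, Int] = {
  case i if i % 2 == 0 => i / 2
}

halveEven.isDefinedAt(3) // false -- calling halveEven(3) would throw

// lift makes it total: the missing values become None
val total: Int => Option[Int] = halveEven.lift
total(4) // Some(2)
total(3) // None
```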


In this post we talked a bit about relations: what they are and what their most common use cases are. Knowing what relations, equivalences, partial orders and functions are, we can move on and talk a bit about another foundation of modern mathematics - algebras. At the same time, we can see that, all in all, the whole of mathematics (and indirectly computing) is built on top of sets in various combinations.