Answering r/haskell: How to unit test code that uses polymorphic interfaces?

This short post is an answer to the following question asked on r/haskell. The original question is about how to test code that lives in a Monad class with polymorphic functions.

I highly encourage you to read the original thread: its answers are full of technical gems which we are not going to explore here. Instead, our goal will be to provide a simpler solution to the problem, just by playing with and reworking some abstractions.


The original problem

The OP came with the following need: modeling some notion of secure token management. The real implementation of the secure token management makes use of complex encryption and decryption algorithms, which she would like to abstract away from her unit tests.


Monad type class

To abstract away the details of the encryption and decryption, and make it possible to test her code without having to deal with their real implementation, the OP introduced the following MTL-like type-class:

  • encryptToken maps a polymorphic token to a string
  • decryptToken maps back a string to a polymorphic token
  • And a Token is something that can be serialized to and from JSON
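A minimal sketch of what such a MonadToken class may look like (the exact names are assumptions; the real code would constrain tokens with aeson's ToJSON/FromJSON, for which we let Show/Read stand in here to keep the snippet dependency-free):

```haskell
-- Sketch of the OP's MonadToken class. Show/Read stand in for
-- aeson's ToJSON/FromJSON serialization constraints.
class Monad m => MonadToken m where
  encryptToken :: (Show t, Read t) => t -> m String
  decryptToken :: (Show t, Read t) => String -> m (Maybe t)
```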

Let us see an example of use of this Monad. The following code encrypts a token, decrypts it immediately after, and returns the result of the operation (agreed, it is not really useful, except maybe for some property-based testing needs):
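A sketch of such a round-trip helper (the MonadToken class is repeated so the snippet stands alone; Show/Read stand in for the JSON constraints):

```haskell
class Monad m => MonadToken m where
  encryptToken :: (Show t, Read t) => t -> m String
  decryptToken :: (Show t, Read t) => String -> m (Maybe t)

-- Encrypt a token, decrypt it right away, and check that we got
-- the original token back.
roundTripToken :: (MonadToken m, Show t, Read t, Eq t) => t -> m Bool
roundTripToken token = do
  encrypted <- encryptToken token
  decrypted <- decryptToken encrypted
  pure (decrypted == Just token)
```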

The whole approach described by the OP follows a typical Hexagonal-like Architecture, which is interesting, for it decouples the code from the implementation of the services it relies upon (the encryption and decryption of tokens), allowing us to test it more easily.


Testing polymorphic interfaces

The problem with the approach above, pointed out by the OP, is that testing code that makes use of the MonadToken type class gets pretty complex due to the polymorphic functions.

A typical fake implementation would use a Map to associate tokens with their corresponding encrypted string (and vice-versa for the decryption). The problem is that building a map with polymorphic keys in Haskell is not an easy task.

There are ways to do this, and you can check some of the great answers available in r/haskell. They make use of advanced and pretty interesting features of Haskell (such as Data.Typeable or Data.Constraint). We will instead explore a simpler solution.


Another take at the problem

Let us try a simpler solution that does not require using any advanced Haskell features, but instead relies on rethinking the design just a little bit.


Looking at the types

Let us start from our use case. We are really interested in testing some code that makes use of the encryption and decryption of tokens, such as the round-trip code shown earlier, which encrypts a token and immediately decrypts it.

This code makes use of the encryptToken and decryptToken polymorphic functions, whose types are given below (using :type in the REPL):
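As a sketch, a GHCi session would report types along these lines (with the Show/Read stand-ins for the JSON constraints):

```
λ> :type encryptToken
encryptToken :: (MonadToken m, Show t, Read t) => t -> m String
λ> :type decryptToken
decryptToken :: (MonadToken m, Show t, Read t) => String -> m (Maybe t)
```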

Now, the fact that our code makes use of these functions does not imply in any way that they must be part of the MonadToken interface. Instead, these functions could be built on top of lower-level functions available in the interface.

This is the option we will be exploring below.


Another Monad interface

We can slightly transform our Monad interface by realizing that the only things we know about the token given to the encryptToken and decryptToken polymorphic functions are that:

  • Our token must be serializable to JSON
  • Our token must be de-serializable from JSON

So there is no loss of generality in transforming our MonadToken interface into the following MonadCypher interface, which deals with concrete JSON values instead of polymorphic tokens:
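One possible shape for this interface (method names are assumptions; where the post says "concrete JSONs", think aeson's Value type, for which a plain String stands in here to keep the sketch dependency-free):

```haskell
-- MonadCypher only sees serialized values, never polymorphic tokens.
class Monad m => MonadCypher m where
  encrypt :: String -> m String          -- serialized token -> cypher text
  decrypt :: String -> m (Maybe String)  -- cypher text -> serialized token
```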

To keep our original code working, we can then build our encryptToken and decryptToken polymorphic functions on top of this MonadCypher type class:
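A sketch of these helpers, keeping the original signatures (the MonadCypher class is repeated so the snippet stands alone; Show/Read stand in for ToJSON/FromJSON):

```haskell
import Text.Read (readMaybe)

class Monad m => MonadCypher m where
  encrypt :: String -> m String
  decrypt :: String -> m (Maybe String)

-- The polymorphic helpers are now plain functions: serialize, then
-- delegate the actual cypher work to the interface.
encryptToken :: (MonadCypher m, Show t, Read t) => t -> m String
encryptToken = encrypt . show

decryptToken :: (MonadCypher m, Show t, Read t) => String -> m (Maybe t)
decryptToken s = fmap (>>= readMaybe) (decrypt s)
```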

In fact, we can simplify these two functions further, generalizing them by realizing that encryptToken only needs the ToJSON constraint, and decryptToken only needs the FromJSON constraint. I linked the implementation here.

Now, let us see how this design helps with testing.


Testing with a Reader Monad

The key aspect in our new design is that our MonadCypher does not rely on polymorphic tokens anymore, but instead relies on concrete JSON values. It makes testing much easier.

To define a fake interface, we can start by defining a Cyphers data type that contains the necessary maps to associate JSON values with their encrypted counterparts, and vice versa:
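A possible shape for this environment (field names are assumptions; plain Strings stand in for serialized JSON values):

```haskell
import qualified Data.Map as Map

-- One map per direction: the fake just looks answers up.
data Cyphers = Cyphers
  { encryptions :: Map.Map String String  -- serialized token -> cypher text
  , decryptions :: Map.Map String String  -- cypher text -> serialized token
  }
```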

From there, you know the drill. We simply wrap this data type inside a FakeCypher type and implement the necessary Monad type class instances:
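A hand-rolled Reader over Cyphers is enough for the sketch (Cyphers and MonadCypher are repeated so the snippet stands alone; real code could equally wrap Control.Monad.Reader from mtl):

```haskell
import qualified Data.Map as Map

data Cyphers = Cyphers
  { encryptions :: Map.Map String String
  , decryptions :: Map.Map String String
  }

class Monad m => MonadCypher m where
  encrypt :: String -> m String
  decrypt :: String -> m (Maybe String)

-- A Reader in disguise: computations read the Cyphers environment.
newtype FakeCypher a = FakeCypher { runFakeCypher :: Cyphers -> a }

instance Functor FakeCypher where
  fmap f (FakeCypher g) = FakeCypher (f . g)

instance Applicative FakeCypher where
  pure = FakeCypher . const
  FakeCypher f <*> FakeCypher g = FakeCypher (\c -> f c (g c))

instance Monad FakeCypher where
  FakeCypher g >>= k = FakeCypher (\c -> runFakeCypher (k (g c)) c)

-- The fake does no cryptography at all: it just looks up the maps.
instance MonadCypher FakeCypher where
  encrypt s = FakeCypher (Map.findWithDefault fallback s . encryptions)
    where fallback = error ("no fake encryption for " ++ s)
  decrypt s = FakeCypher (Map.lookup s . decryptions)
```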

Here we are. We managed to make our code testable, and we also avoided relying on advanced features such as Data.Typeable or Data.Constraint to do so.


Additional benefits & Going further

In addition to being more easily testable, our MonadCypher interface has other benefits compared to the previous MonadToken interface.

A first advantage we get is that we are able to test more things: faking the encryption no longer involves faking the serialization to JSON. A second advantage is that this design imposes fewer constraints on the implementation: encrypting no longer requires the FromJSON constraint.

We could go a bit further and try to get to the essence of encryption and decryption, by relaxing some more constraints and being more generic. For instance, we could define our MonadCypher so that it is not expressed in terms of JSON at all (using Data.ByteString, say).
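Such a JSON-free interface might look like this (a sketch only; method names are assumptions):

```haskell
import Data.ByteString (ByteString)

-- The cypher now works on raw bytes: any serialization format
-- (JSON or otherwise) is decided entirely outside the interface.
class Monad m => MonadCypher m where
  encrypt :: ByteString -> m ByteString
  decrypt :: ByteString -> m (Maybe ByteString)
```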



There is a great deal of freedom in the way we can define type classes or interfaces. Following the precepts of programming against abstractions helps us make our code more testable, but some abstractions are easier to test against than others.

Depending on the host language, and as shown by the OP of the initial r/haskell post, the choice of the means of abstraction (the notion being abstracted remaining the same) leads to different levels of sophistication needed to implement or fake it.

Ultimately though, we can often drastically reduce the complexity of a solution by reworking it even just slightly, typically to avoid falling into the dark corners of the language we use.
