Building on the STL random header

In today’s post, we will discuss the design and implementation of a small header-only library whose aim is to complete the STL random header with combinators to generate arbitrary random data.

Throughout this post, we will first describe the motivations behind this library and its high-level goals. These goals will imply design choices that we will discuss as well. We will then go deep into the implementation of the main features, before concluding and discussing potential improvements.

The second goal of this post is to show how thinking in terms of concepts such as Functors and Applicative can help us find powerful design constructs to base ourselves upon when building an API.



The STL random header provides us with random number engines and random distributions. The random engines produce randomness, and the distributions consume this randomness to create random data: instances of booleans, integers and floating point numbers, all following some probabilistic properties.

The decoupled design of the STL random header offers a strong basis to build upon, for real applications requiring us to generate instances of more complex data types.

Indeed, in general, we need to generate arbitrary random data, from the simplest tuples to more complex data types. For instance:

  • Random element inside a collection of possible values
  • Random tuples (like a 3D coordinate)
  • Random strings, or other STL containers
  • Random custom data types (like RGB colours)
  • Random sum types, such as optional or variant

The goal of the tiny_rand library we will build today is to satisfy these needs. We will build upon the STL and complete it with the ability to randomly generate any kind of data we might want.


High level goals

Before discussing the design of the tiny_rand library, it is useful to state and describe its high-level goals. And these are:

  • To build on top of the STL, making sure the integration is easy
  • To be as light as possible in terms of dependencies (header-only)
  • To be complete, making sure we can generate any kind of data we want
  • To be concise and small, with a small set of functions covering our needs

These high level goals will influence our high level design choices. This is the subject of our next section.


High level design choices

This section describes the design choices behind the implementation of the tiny_rand library. These choices follow from our high-level goals, although other variations would have been possible.


Mersenne Twister

In his talk rand() Considered Harmful, Stephan T. Lavavej encourages us to use std::mt19937 as our default pseudo-random generator. To keep our API simple, we will select this random engine specifically and build on top of it.

We could choose to enlarge the API to support the other pseudo random generators. This can always be done later: we will keep it simple for now.


Essence of generators

With std::mt19937 as our source of randomness, our goal is to build more distribution-like generators. These generators will extend the concept of a distribution and allow us to generate instances of arbitrarily complex types.

Looking at std::uniform_int_distribution, we see that it defines an operator() that takes a random engine as parameter. Simplifying things a little, we can see it as a function from a source of randomness (like std::mt19937) to a value of type int.

We can see there the essence of a random generator in the design of the STL: a callable object from a source of randomness to a type T the generator will generate instances of. We will follow this design of generators as modelled by functions.


Composition, no inheritance

OOP often drives us toward the use of objects, and quite easily toward complex design patterns. Several design patterns might seduce us here, like the Template Method.

Here is an example of using the Template Method to handle the generation of random vectors. It delegates to generate_element the responsibility of generating the values inside the vector.

    *-------------------*
    |   IGenerator[T]   |        (Interface for generators)
    *-------------------*
    | + operator(): T   |
    *-------------------*

    *----------------------------*
    |     VectorGenerator[T]     |   (    TEMPLATE METHOD    )
    *----------------------------*   (Abstract implementation)
    | + operator(): vector[T]    |   (and generate_element as)
    | - generate_element(): T    |   (the customisation point)
    *----------------------------*

But this design has an important flaw: it is not composable. We cannot easily handle a vector of vectors of lists of integers. Each layer would bring new generate_element sub-functions, and it becomes a real mess.

Because of this, we will stay away from the Template Method and rely instead on direct injection (*). The vector generator will take as parameter a generator to create its values.

Our design will rely on defining higher order generators, combinators, to combine other generators, and return new generators. This approach automatically composes and will allow us to generate any data we want, with only a few combinators.

(*) In general, I would recommend staying away from the Template Method design pattern. Because it relies on inheritance, it couples things much more than composing Strategies does. Overall, I think this is more often an anti-pattern than a pattern to follow.


Header-only, no dependencies

In order to make the library as light as possible, tiny_rand will be built as a header-only library. To avoid participating in the dependency-hell syndrome, it will not be allowed to depend on any third-party library, including Boost.



The STL offers generators for booleans, integers and doubles, but it does not offer ways to generate characters. As we will see, satisfying this need will be a bit more tricky than just adding one distribution.


Character generator

It seems the STL forgot about char generation. We will start by correcting this grave injustice with our char_gen generator:
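The original snippet is not reproduced here, so below is a minimal sketch of what char_gen might look like (the name follows the post; the exact tiny_rand signature may differ). A generator is simply a callable from a std::mt19937 to a value:

```cpp
#include <limits>
#include <random>

// Sketch of char_gen: a callable from a Mersenne Twister to a char,
// drawn uniformly over the whole char range.
inline auto char_gen()
{
  return [](std::mt19937& bit_gen) -> char {
    // uniform_int_distribution is not specified for char, so draw an int
    std::uniform_int_distribution<int> distribution{
      std::numeric_limits<char>::min(), std::numeric_limits<char>::max()};
    return static_cast<char>(distribution(bit_gen));
  };
}
```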

This generator will generate characters that cannot be properly printed on the screen. It points to the need to generate strings with a restricted set of characters.


Reduced character set

We could reduce the range of allowed characters by specifying a range, and deal with it the same way the STL distributions do for integers. But characters are not integers: there are far fewer of them, and they have different semantics.

We will instead introduce a more general way to pick an element in a finite (and small) set of values, the choice generator:
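A minimal sketch of this choice combinator might look as follows (again, the real tiny_rand signature may differ — this version takes a std::vector of candidate values):

```cpp
#include <random>
#include <vector>

// Sketch of choice_gen: picks one element, uniformly, out of a small
// finite set of values.
template <typename Value>
auto choice_gen(std::vector<Value> const& values)
{
  return [values](std::mt19937& bit_gen) -> Value {
    std::uniform_int_distribution<std::size_t> distribution{0, values.size() - 1};
    return values[distribution(bit_gen)];
  };
}
```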

From this choice_gen generator, we can build generators for alphanumeric characters, or any other kind of reduced character set we want. We use it below to generate letters:
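A self-contained sketch of such a letter generator (letter_gen is a plausible name, not necessarily the one used in tiny_rand; the choice logic is inlined here so the snippet stands alone):

```cpp
#include <cctype>
#include <random>
#include <string>

// Uniformly pick a character out of the 52 ASCII letters.
inline auto letter_gen()
{
  static const std::string letters =
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
  return [](std::mt19937& bit_gen) -> char {
    std::uniform_int_distribution<std::size_t> distribution{0, letters.size() - 1};
    return letters[distribution(bit_gen)];
  };
}
```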

The choice_gen generator is however much more general: you can also use it to generate values inside an enumeration, which is a useful primitive to have as well.


Playful character generation

We can be more imaginative than just generating letters. We could for instance write a generator that creates valid identifiers for C++ functions or classes:
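The original snippet is not shown here, so below is a hypothetical, self-contained sketch of what such a cpp_identifier_gen could look like (the character sets and the max_size parameter are illustrative assumptions):

```cpp
#include <cctype>
#include <random>
#include <string>

// Hypothetical sketch: a leading lowercase letter or underscore, followed
// by up to max_size letters, digits or underscores.
inline auto cpp_identifier_gen(int max_size)
{
  static const std::string head = "abcdefghijklmnopqrstuvwxyz_";
  static const std::string tail = "abcdefghijklmnopqrstuvwxyz0123456789_";
  return [max_size](std::mt19937& bit_gen) -> std::string {
    std::uniform_int_distribution<int> size_dist{0, max_size};
    std::uniform_int_distribution<std::size_t> head_dist{0, head.size() - 1};
    std::uniform_int_distribution<std::size_t> tail_dist{0, tail.size() - 1};
    std::string out{head[head_dist(bit_gen)]};
    int const size = size_dist(bit_gen);
    for (int i = 0; i < size; ++i)
      out.push_back(tail[tail_dist(bit_gen)]);
    return out;
  };
}
```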

This code makes use of string_gen, which allows us to generate strings with a maximum length and is parameterised by a character generator. We will see how to implement it later in this post.

This cpp_identifier_gen will produce perfectly valid C++ identifiers. Feel free to use it, if ever you lack inspiration in finding good-hard-to-decipher function names…


Functors and Applicatives

This is not going to be a lecture on Category Theory. We will however use the Math concepts of Functor and Applicative to get a first shot at a good design.

One thing that Haskell practitioners quickly discover is that Functors and Applicative concepts are very general.

  • They occur quite often and quite naturally.
  • They compose well with other pieces of software.
  • They almost always lead to good design.

In this section, we will apply these concepts very practically. We will see how they connect to our random generators, and how they will help our design.


I see a functor in your randomness

Functions are functors. A function from R (for random) to A can be transformed into a function from R to B, if we have a function from A to B. We only need to compose these functions together. This is the definition of a Functor.

Since we design our random generators as callable from std::mt19937 to a value of type T (a kind of function), we can apply this useful mathematical construct here. We call it transform_gen and implement it as function composition:
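A sketch of transform_gen, following the description above (the exact tiny_rand signature may differ): it is nothing more than function composition over the randomness source.

```cpp
#include <random>

// transform_gen: compose a transformation f with an existing generator,
// yielding a generator of the transformed values.
template <typename Transform, typename Generator>
auto transform_gen(Transform f, Generator gen)
{
  return [f, gen](std::mt19937& bit_gen) {
    return f(gen(bit_gen));
  };
}
```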

This powerful construct allows us to create new random generators by taking existing generators and composing them with a transformation function. Using it, we can for instance generate random square numbers between 0 and 100:
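For instance, mapping x -> x * x over a uniform generator of integers in [0, 10] yields random squares between 0 and 100 (transform_gen is restated locally so the snippet stands alone):

```cpp
#include <cmath>
#include <random>

template <typename Transform, typename Generator>
auto transform_gen(Transform f, Generator gen)
{
  return [f, gen](std::mt19937& bit_gen) { return f(gen(bit_gen)); };
}

// Random perfect squares between 0 and 100.
inline auto square_gen()
{
  auto base_gen = [](std::mt19937& bit_gen) {
    std::uniform_int_distribution<int> distribution{0, 10};
    return distribution(bit_gen);
  };
  return transform_gen([](int n) { return n * n; }, base_gen);
}
```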

Please note we are not limited to such scalar-to-scalar mappings. We could for instance generate a range [0, N) from a random generator of N in the same way.


The road to Applicative

When we discover a Functor in our code, a great habit is to immediately look for the next more powerful pattern: Applicative Functors.

One way to see Applicative is as a generalisation of Functors on multiple arguments. Our Functor was able to transform a single generator into one generator. Our Applicative Functor will be able to combine several generators into one generator.

It might look obscure if you never tried Haskell (you should, it is glorious), but it is very easy to implement. We call this function apply_gen:

  • The N generators are combined into one single generator
  • The resulting generator triggers the N generators and feeds the finalizer
  • The finalizer is a function that takes the outputs of the N generators
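A sketch of apply_gen matching the three points above (exact tiny_rand signature may differ):

```cpp
#include <random>

// apply_gen: run each generator on the same randomness source and feed
// all the produced values to the finalizer.
template <typename Finalizer, typename... Generators>
auto apply_gen(Finalizer finalizer, Generators... gens)
{
  return [finalizer, gens...](std::mt19937& bit_gen) {
    // Note: the evaluation order of the pack expansion is unspecified,
    // which is acceptable for independent random draws.
    return finalizer(gens(bit_gen)...);
  };
}
```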


Applicative, the key to custom types

Because of its ability to compose N generators and feed them to an arbitrary function, our Applicative Functor allows us to create generators for custom data structures, by using a constructor or factory function as finalizer.

We can demonstrate this with a simple example. The following piece of code shows how to build a generator for rgb_color, in just 3 lines of code:
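A sketch of what this could look like (the post’s actual snippet uses a to_object helper as finalizer; here a plain lambda stands in, and apply_gen is restated locally so the snippet stands alone):

```cpp
#include <random>

struct rgb_color { int red; int green; int blue; };

template <typename Finalizer, typename... Generators>
auto apply_gen(Finalizer finalizer, Generators... gens)
{
  return [finalizer, gens...](std::mt19937& bit_gen) {
    return finalizer(gens(bit_gen)...);
  };
}

// Three channel generators combined into a generator of rgb_color.
inline auto rgb_color_gen()
{
  auto channel_gen = [](std::mt19937& bit_gen) {
    std::uniform_int_distribution<int> distribution{0, 255};
    return distribution(bit_gen);
  };
  auto make_color = [](int r, int g, int b) { return rgb_color{r, g, b}; };
  return apply_gen(make_color, channel_gen, channel_gen, channel_gen);
}
```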

We used to_object as a helper function (also provided in tiny_rand) to create a function that calls the constructor of a struct, with a variadic number of arguments provided as parameters:
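A plausible sketch of this to_object helper (the actual tiny_rand implementation may differ in details):

```cpp
#include <utility>

// to_object: returns a callable that forwards its arguments to
// brace-initialization of Object, usable as an apply_gen finalizer.
template <typename Object>
auto to_object()
{
  return [](auto&&... args) {
    return Object{std::forward<decltype(args)>(args)...};
  };
}
```

With this helper, the rgb_color generator could be expressed as `apply_gen(to_object<rgb_color>(), channel_gen, channel_gen, channel_gen)`.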

We can give it a try to convince ourselves that it works:


Tuples: special case of Applicative

One specific use case of Applicative Functors is the generation of tuples of arbitrary types. We could do it by using std::make_tuple as the finalizer of apply_gen.

Because these specific generators are likely to be used frequently, we can write a specialized implementation to help our clients:
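A sketch of such a tuple_gen specialization: apply_gen with std::make_tuple baked in as finalizer (exact tiny_rand signature may differ).

```cpp
#include <random>
#include <tuple>

// tuple_gen: combine N generators into a generator of N-tuples.
template <typename... Generators>
auto tuple_gen(Generators... gens)
{
  return [gens...](std::mt19937& bit_gen) {
    return std::make_tuple(gens(bit_gen)...);
  };
}
```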

From this, we can create a generator of 3D coordinates, each coordinate component between -10 and 10, in just a couple lines of code:
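A self-contained sketch of that coordinate generator (tuple_gen restated locally; each component uniform in [-10, 10]):

```cpp
#include <random>
#include <tuple>

template <typename... Generators>
auto tuple_gen(Generators... gens)
{
  return [gens...](std::mt19937& bit_gen) {
    return std::make_tuple(gens(bit_gen)...);
  };
}

// A generator of 3D coordinates with components in [-10, 10].
inline auto coordinate_gen()
{
  auto side_gen = [](std::mt19937& bit_gen) {
    std::uniform_real_distribution<double> distribution{-10.0, 10.0};
    return distribution(bit_gen);
  };
  return tuple_gen(side_gen, side_gen, side_gen);
}
```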


Containers random generators

In the last section, we implemented the first building blocks of our library. The two functions we implemented already lead us pretty far. The power of Functors and Applicative allows us to create custom data structures from other generators.

In the previous examples, we started to notice the need for random generators of containers. For instance, we used string_gen to generate random valid C++ identifiers. In this section, we will implement these generators.


Pulling the Strings

We start with the random generation of std::string. To remain flexible regarding the set of characters that can appear in the string, our string_gen combinator takes as input a random generator of characters.

We want the size of the string to be random as well. To do so, we define two overloads that strike a balance between flexibility and ease of use.

The first overload gives us full flexibility: we take a SizeGenerator as input to generate the length of the generated string. This allows us to choose any distribution from the STL and not necessarily stick to the uniform one.

But in the most common case, the user will only want to specify a maximum length for the generated string. So we offer an overload for this case as well.
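A sketch of the two overloads described above (exact tiny_rand signatures may differ):

```cpp
#include <random>
#include <string>

// Full flexibility: the string length comes from a user-provided generator.
template <typename CharGenerator, typename SizeGenerator>
auto string_gen(CharGenerator char_g, SizeGenerator size_g)
{
  return [char_g, size_g](std::mt19937& bit_gen) {
    std::size_t const size = size_g(bit_gen);
    std::string out;
    out.reserve(size);
    for (std::size_t i = 0; i < size; ++i)
      out.push_back(char_g(bit_gen));
    return out;
  };
}

// Ease of use: only a maximum length, with a uniform size underneath.
template <typename CharGenerator>
auto string_gen(int max_size, CharGenerator char_g)
{
  auto size_g = [max_size](std::mt19937& bit_gen) -> std::size_t {
    std::uniform_int_distribution<int> distribution{0, max_size};
    return static_cast<std::size_t>(distribution(bit_gen));
  };
  return string_gen(char_g, size_g);
}
```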

This function makes use of the repeat_n_gen helper function, which allows us to fill a container with repeated calls to a random generator:
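A sketch of what repeat_n_gen could look like (the real helper may be shaped differently, e.g. returning a generator rather than a container):

```cpp
#include <algorithm>
#include <iterator>
#include <random>
#include <vector>

// repeat_n_gen: fill a container with n draws of a random generator.
template <typename Container, typename Generator>
Container repeat_n_gen(std::size_t n, Generator gen, std::mt19937& bit_gen)
{
  Container out;
  std::generate_n(std::back_inserter(out), n,
                  [&] { return gen(bit_gen); });
  return out;
}
```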


Vector, List, Deque…

All sequential containers can be implemented almost the same way we did for the string random generator. Here is the implementation for the std::vector:
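A self-contained sketch of such a vector_gen (exact tiny_rand signature may differ): same shape as string_gen, with a reserve on the vector before filling it.

```cpp
#include <random>
#include <type_traits>
#include <vector>

// vector_gen: a random vector of at most max_size values, each produced
// by the injected value generator.
template <typename ValueGenerator>
auto vector_gen(int max_size, ValueGenerator value_gen)
{
  return [max_size, value_gen](std::mt19937& bit_gen) {
    using value_type = std::decay_t<decltype(value_gen(bit_gen))>;
    std::uniform_int_distribution<int> size_distribution{0, max_size};
    int const size = size_distribution(bit_gen);
    std::vector<value_type> out;
    out.reserve(static_cast<std::size_t>(size)); // not available on std::list
    for (int i = 0; i < size; ++i)
      out.push_back(value_gen(bit_gen));
    return out;
  };
}
```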

The main difference between the containers is the use of reserve, which is not available for all containers.


Associative containers

Associative containers such as map or set are a bit more tricky to design. In particular, the size of the container is a bit problematic.

Should we ask for a size and try to insert more keys until we reach that size? This might be problematic if the required size is bigger than the number of possible inhabitants for the keys. Doing so might create an infinite loop.

For this reason, these generators will not ask for a given length but for a given number of rolls of the random generator for keys.

Here is the implementation for the std::unordered_map:
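A sketch matching that description (exact tiny_rand signature may differ): nb_rolls draws of the key and value generators, where duplicated keys simply collapse, so the final size may be smaller than nb_rolls.

```cpp
#include <random>
#include <type_traits>
#include <unordered_map>
#include <utility>

template <typename KeyGenerator, typename ValueGenerator>
auto unordered_map_gen(int nb_rolls, KeyGenerator key_gen, ValueGenerator value_gen)
{
  return [nb_rolls, key_gen, value_gen](std::mt19937& bit_gen) {
    using key_type = std::decay_t<decltype(key_gen(bit_gen))>;
    using mapped_type = std::decay_t<decltype(value_gen(bit_gen))>;
    std::unordered_map<key_type, mapped_type> out;
    out.reserve(static_cast<std::size_t>(nb_rolls)); // not available on std::map
    for (int i = 0; i < nb_rolls; ++i) {
      auto key = key_gen(bit_gen);     // fix the evaluation order of the
      auto value = value_gen(bit_gen); // two draws explicitly
      out.emplace(std::move(key), std::move(value));
    }
    return out;
  };
}
```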

Note that reserve cannot be used for std::map. This is again the main difference between the associative containers.


Enjoying container generators

Using these combinators, we can create a generator of unordered map from strings (like player names) to 3D coordinates (the location of these players on a game map) in only 4 lines of code:
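A sketch of such a generator, with all the needed combinators inlined so that the snippet stands alone (tiny_rand factors them out, which is what makes the 4-line version possible; names and bounds here are illustrative):

```cpp
#include <random>
#include <string>
#include <tuple>
#include <unordered_map>
#include <utility>

// A generator of maps from player names to 3D positions.
inline auto player_position_map_gen()
{
  auto letter_gen = [](std::mt19937& bit_gen) -> char {
    std::uniform_int_distribution<int> distribution{'a', 'z'};
    return static_cast<char>(distribution(bit_gen));
  };
  auto name_gen = [letter_gen](std::mt19937& bit_gen) {
    std::uniform_int_distribution<int> size_distribution{1, 10};
    std::string out;
    int const size = size_distribution(bit_gen);
    for (int i = 0; i < size; ++i) out.push_back(letter_gen(bit_gen));
    return out;
  };
  auto coordinate_gen = [](std::mt19937& bit_gen) {
    std::uniform_real_distribution<double> distribution{-10.0, 10.0};
    double const x = distribution(bit_gen);
    double const y = distribution(bit_gen);
    double const z = distribution(bit_gen);
    return std::make_tuple(x, y, z);
  };
  return [name_gen, coordinate_gen](std::mt19937& bit_gen) {
    std::unordered_map<std::string, std::tuple<double, double, double>> out;
    for (int i = 0; i < 20; ++i) { // 20 rolls of the key generator
      auto name = name_gen(bit_gen);
      out.emplace(std::move(name), coordinate_gen(bit_gen));
    }
    return out;
  };
}
```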

We can generate a sample map for fun, and iterate over it to convince ourselves that it works fine.


Sum types, our last combinator

The previous sections described a set of random generator combinators that allow us to generate custom data types and containers. There is however still one missing piece: the support for sum types.


Sum types

Sum types are very useful constructs in software development and arise in C++ through different forms.

The most commonly known form is through polymorphism (*): if B and C inherit from A, then ideally, we should be able to combine random generators of B and C to create a random generator of A.

Variants are the second form of sum types known to C++ programmers. For instance, we can define A as a boost::variant of B and C. Again, we should ideally be able to combine random generators of B and C to create a random generator of A.

(*) This is not exactly true: polymorphism is not the same thing as a sum type. But it can be used (and was commonly used before boost::variant) to implement one, in combination with the Visitor design pattern.



We want to support different forms of sum types. We also set ourselves the goal of avoiding dependencies on Boost. So we have to find a way to generate a variant without having to depend on it.

The solution is to abstract these concerns away under the notion of a finalizer function, as we did for the Applicative. This function will be called on the production of one of the generators passed as input to our one_of_gen random generator combinator.

The implementation makes use of type erasure through the use of std::function (there are probably ways around this) to store all the generators inside a vector. It then randomly selects one to generate a value and gives it to the finalizer:
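A sketch matching that description (exact tiny_rand code may differ): each input generator is composed with the finalizer and type-erased into a std::function, so heterogeneous generators can live in one vector; the combined generator then picks one of them uniformly.

```cpp
#include <functional>
#include <random>
#include <type_traits>
#include <utility>
#include <vector>

template <typename Finalizer, typename... Generators>
auto one_of_gen(Finalizer finalizer, Generators... gens)
{
  // All branches must agree on the finalized result type
  using result_type =
    std::common_type_t<decltype(finalizer(gens(std::declval<std::mt19937&>())))...>;
  std::vector<std::function<result_type(std::mt19937&)>> erased{
    [finalizer, gens](std::mt19937& bit_gen) {
      return finalizer(gens(bit_gen));
    }...};
  return [erased](std::mt19937& bit_gen) {
    std::uniform_int_distribution<std::size_t> distribution{0, erased.size() - 1};
    return erased[distribution(bit_gen)](bit_gen);
  };
}
```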


Demo time!

Using the finalizer appropriately, we now have support for the random generation of sum types by combining generators for each of their parts.

But the pattern is more general: the finalizer can be used to create an std::optional, a boost::variant, or anything else. For instance, weird_gen is a weird way to implement a generator of integer values between -10 and 10:

  • The integer generator will produce values from -10 to 0
  • The string generator will produce strings with maximum length 10
  • The finalizer maps each of the string to their length
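The bullet points above can be sketched as a self-contained example (the branch selection of one_of_gen is inlined here as a coin flip; details are illustrative):

```cpp
#include <random>
#include <string>
#include <type_traits>

// weird_gen: one branch yields an int in [-10, 0], the other a string of
// length at most 10; the finalizer maps a string to its length, so the
// whole generator covers [-10, 10].
inline auto weird_gen()
{
  auto int_gen = [](std::mt19937& bit_gen) {
    std::uniform_int_distribution<int> distribution{-10, 0};
    return distribution(bit_gen);
  };
  auto string_gen = [](std::mt19937& bit_gen) {
    std::uniform_int_distribution<int> size_distribution{0, 10};
    return std::string(static_cast<std::size_t>(size_distribution(bit_gen)), 'x');
  };
  auto finalizer = [](auto const& value) -> int {
    if constexpr (std::is_same_v<std::decay_t<decltype(value)>, std::string>)
      return static_cast<int>(value.size());
    else
      return value;
  };
  return [int_gen, string_gen, finalizer](std::mt19937& bit_gen) {
    std::uniform_int_distribution<int> coin{0, 1}; // pick a branch, as one_of_gen does
    return coin(bit_gen) == 0 ? finalizer(int_gen(bit_gen))
                              : finalizer(string_gen(bit_gen));
  };
}
```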

For sure, nobody sane would implement such a thing, but it should give you a taste of what is possible with one_of_gen. For instance, we could generate random events and send them to an event handler.


Enjoying our hard work

We are done. The resulting library is available as tiny_rand on my GitHub. The samples directory shows some more examples of use cases.

We will conclude this post with one such more elaborate example. Let us imagine we have a game object that contains an integer for the current round number, and a map from player names to their respective 3D positions:

Here is how we can generate a random instance of Game, using the combinators we presented throughout this post:
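A compact, self-contained sketch of what this could look like (tiny_rand would use the generic combinators instead of the hand-rolled lambdas below; the Game layout follows the description above, bounds are illustrative):

```cpp
#include <random>
#include <string>
#include <tuple>
#include <unordered_map>

using coordinate = std::tuple<double, double, double>;

struct Game
{
  int current_round;
  std::unordered_map<std::string, coordinate> player_positions;
};

inline auto game_gen()
{
  auto round_gen = [](std::mt19937& bit_gen) {
    std::uniform_int_distribution<int> distribution{0, 100};
    return distribution(bit_gen);
  };
  auto name_gen = [](std::mt19937& bit_gen) {
    std::uniform_int_distribution<int> size_distribution{1, 10};
    std::uniform_int_distribution<int> letter_distribution{'a', 'z'};
    std::string out;
    int const size = size_distribution(bit_gen);
    for (int i = 0; i < size; ++i)
      out.push_back(static_cast<char>(letter_distribution(bit_gen)));
    return out;
  };
  auto position_gen = [](std::mt19937& bit_gen) {
    std::uniform_real_distribution<double> distribution{-10.0, 10.0};
    double const x = distribution(bit_gen);
    double const y = distribution(bit_gen);
    double const z = distribution(bit_gen);
    return coordinate{x, y, z};
  };
  return [round_gen, name_gen, position_gen](std::mt19937& bit_gen) {
    Game game;
    game.current_round = round_gen(bit_gen);
    for (int i = 0; i < 20; ++i) // 20 rolls of the key generator
      game.player_positions.emplace(name_gen(bit_gen), position_gen(bit_gen));
    return game;
  };
}
```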


Conclusion, and what’s next

Throughout this post, we built a small random generation library to complete the STL random header with additional features to generate any kind of data. This library is available on my GitHub. Any feedback (suggestions or criticisms) is welcome.


Small core, Big reach

We built the API based on a small set of core ideas, which proved pretty powerful and sufficient to go all the way:

  • A random generator is a function from std::mt19937 to a value
  • Choosing to combine generators by direct injection to compose them
  • Embracing the power of Functor, Applicative and Sum types

It demonstrates that surprisingly basic constructs can take us a long way, and that Haskell has some pretty useful design techniques ready for us to use.


Further improvements

There are some improvements that could be made to the library and its implementation, on which I plan to work, among which:

  • Complete the choice_gen with weighted_choice_gen to add weights
  • Improve random char generation: it only works for small character sets
  • The use of std::function, which could be replaced by a lower-overhead construct
  • An overall work on performance and in particular the number of copies
  • The generators created cannot easily be inspected (at compile time)


Further further improvements

After reading the great post An Introduction to Reflection in C++, I think it would be worthwhile to find a way to automatically create generators through introspection.

Especially for enumerations, where we end up saying things twice:
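The duplication looks like this (a hypothetical illustration: the members of the enum must be listed a second time to build a choice-based generator for them, which introspection could derive automatically):

```cpp
#include <random>
#include <vector>

enum class Dice { One, Two, Three, Four, Five, Six };

inline auto dice_gen()
{
  // This list repeats the enum definition above: "saying things twice".
  static const std::vector<Dice> values{
    Dice::One, Dice::Two, Dice::Three, Dice::Four, Dice::Five, Dice::Six};
  return [](std::mt19937& bit_gen) {
    std::uniform_int_distribution<std::size_t> distribution{0, values.size() - 1};
    return values[distribution(bit_gen)];
  };
}
```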

In some cases, creating the generators by introspection would not be desirable: as shown through the examples of this post, we often need additional parameters to tune the random generation.

But it would probably cover quite a lot of use cases nevertheless and could be implemented as a different library.


Closing words

Any feedback regarding the features, the design choices or the implementation is welcome. Feel free to reach me on Twitter.

Lost in permutation test complexity

In our lost in permutation complexity post, we talked about the std::is_permutation algorithm and its algorithmic complexity issue. We went over several use cases that seem like perfect matches for std::is_permutation.

But because of its quadratic complexity, we made the argument that std::is_permutation is almost impractical: its cost is not worth the trade-off compared to manually coding an alternative.

In our still lost in permutation complexity post, we discussed an alternative hash-based implementation, which decreased the complexity drastically. We also discussed proposed changes to std::hash in the STL that would improve the C++ developer’s experience.

To conclude this series, I would like to answer an interesting comment that was added in the associated Reddit post. Answering this comment on Reddit directly would make for a big wall of text, hence this post, which aims at providing a comprehensive answer.


Situating the context

Our first post started by describing a necessary and sufficient property to test that an unstable sort was doing its job correctly.

This property is based on the fact that an unstable sort is a permutation of a collection such that the resulting collection would answer true to std::is_sorted. We named this property check_sort_property and translated it into code:
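The original snippet is not reproduced here, so below is a sketch of check_sort_property following that description (the actual post’s version may be shaped differently, e.g. working on ranges or on vectors of pairs):

```cpp
#include <algorithm>
#include <vector>

// check_sort_property: the output must be a permutation of the input,
// and it must be sorted according to the supplied comparator.
template <typename T, typename Less>
bool check_sort_property(std::vector<T> const& input,
                         std::vector<T> const& output, Less less)
{
  return output.size() == input.size()
      && std::is_permutation(output.begin(), output.end(), input.begin())
      && std::is_sorted(output.begin(), output.end(), less);
}
```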

We used this property inside a Property Based Test. Such tests usually consist in four distinct phases:

  1. Generating random inputs (random vectors of pairs of ints in our case)
  2. Calling our unstable sort routine on each of these vectors
  3. Checking that the property holds on the output of the unstable sort
  4. Shrinking the failing random input to find a simpler counterexample

To summarize, the goal of the check_sort_property property is to describe succinctly and precisely a condition that makes such a test pass or fail (knowing that the test is performed on random inputs).


The Reddit comment

The check_sort_property property got some attention, including a comment saying that the usage of std::is_permutation was not justified here, and that there were other ways to unit test the unstable sort:

[…] the given problem is unit testing an unstable sorting algorithm. Compare the output with your expected result, and check that corresponding items in output and expected are equal (or neither less than the other).
Using is_permutation to compare two sorted ranges is misguided.

The comment (as I understand it) argues that it would be much easier to test the unstable sort by comparing its output against an expected result.

We will now go through some rationales that justify the usage of properties (such as check_sort_property) to verify the correctness of an algorithm, instead of comparing the result of a function call for equality with an expected output. I believe these arguments apply to both property-based tests and example-based tests.


Missing In Action: Expected

There are cases in which we cannot test an algorithm against a fixed expected result. This happens when the expected result is not easy to craft.


Random inputs

In most cases, it is very difficult to check our output against expected results when the inputs are random. Generating random inputs is precisely what we do in the context of property based testing.

As mentioned in our first post, it is however still possible to get the exact expected result in such cases, for instance when we have a reference algorithm to test against. If we were implementing a stable sort, we could use std::stable_sort as that reference.

In our case however, there is no algorithm that would match exactly what we want to test. We went over this argument in the section “FINDING A GOOD PROPERTY TO CHECK” of our first post.


Big inputs

A second use case in which the “expected” result is not easily accessible is when dealing with big inputs. For instance, we could try to test our unstable sort on a vector of thousands of elements or more.

A hand-crafted expected result (matching a hand-crafted input) would be very hard to create and later maintain. Understanding a failed test in such a case is hard if it is not abstracted by good properties (predicates).

It becomes so tedious that most developers will just copy-paste the result of their algorithm and set it as “expected” output. This somehow defeats the purpose of testing in the first place.

But it gets even worse: “unit tests” on big inputs often transform into “regression tests”. These tests do not ensure correctness as much as they ensure that nothing changes.

This is rarely a good idea: software changes. And when it does, these tests will most likely turn red in a non-meaningful way. The failed tests will most likely be ignored, or be fixed by copy-pasting the new result into the “expected” output.


Beware of over-testing

Now, even if we can hand-craft an appropriate expected output, there is another reason why checking the output of an algorithm against an expected result is not always the best thing to do.

We should avoid testing too much of a function; in particular, we should avoid expanding the tests past the contract of the function. We will explore this claim in the context of our unstable sort.


Freedom to improve

The purpose of implementing an unstable sort instead of a stable sort is to get a degree of freedom on the output of the algorithm. This degree of freedom can be leveraged to implement a faster algorithm.

This freedom extends along the time axis too: it should be fine to get different results from our unstable sort across releases. If we discover a faster implementation in the future, we want to be in a position to adopt it. This might change the relative ordering of equivalent elements: this is fine.


Freedom implication on tests

If instead of relying on a property to check the correctness of our unstable sort, our unit tests matched against the whole expected output, improvements to our implementation would become more difficult.

Improving the algorithm could make such tests fail. This failure could be because we broke the algorithm. But it could also be because the test relied on implementation details of the algorithm (the specific instability).

The developer would have to check the result to determine whether the failure represents a real issue or whether the test should be adapted. This manual work can be avoided by making sure our unit tests test the interface, not the implementation details (*).

For that reason, testing the output of an algorithm using a property can have a big positive impact on a software’s flexibility and on developer productivity. Tests that only test the contract of a function will only fail if the function gets broken.

(*): This argument for testing the contract and not the implementation is in contradiction with some brainless applications of Test-driven development. The inflexibility of locking everything in place by testing implementation details will likely hurt productivity.



I hope this post answers the valid concerns that were raised in the Reddit post and clarifies why we went for the implementation of a property making use of std::is_permutation and std::is_sorted to verify the correctness of our unstable sort.

It first had to do with the use of property based testing, and the difficulty to come up with exact expected outputs in such a situation. But it also has to do with the notion of testing the contract of a function and not its implementation, to make future changes easier (within the boundaries of the contract).

You can contact or follow me on Twitter.