Weighted choice implementation

In the previous post, we developed the tiny_rand library as an extension of the STL random header. The goal was to build a minimal set of functions that allows us to randomly generate any kind of data: containers, classes, tuples, variants, and any combination of those.

The tiny_rand library is built around concepts coming from Functional Programming. In particular, it leverages Functors and Applicatives to get a lot of composability for free. But it is far from perfect. We ended the previous post with a bunch of ideas for improvements.

In today’s post, we will tackle one of these improvements. Our goal will be to implement one of the missing features: the ability to randomly select among a bunch of values, adding weights to each of the values.

 

Motivation


Before developing the feature, we will first describe what it is useful for. We will therefore start with some interesting use cases.

 

Generating strings

Let us imagine that we wish to generate random strings. But instead of having equal probabilities to generate each character, we want to weight each character, such that the generated strings match the distribution of characters in the English language.

We could use an API that looks as follows:

  • We provide weighted characters to weighted_choice_gen
  • A weighted character associates a character C to a double W(C)

The resulting generator would generate a character C with the probability P(C), equal to the weight of the character W(C) divided by the sum of the weights of all characters.

 

Generating strings, revisited

Our weighted_choice_gen is very useful. It is however pretty low level, and might not be the simplest way to generate strings where each character has an associated weight.

For instance, let us imagine that we wish to generate strings, such that there are on average X times more consonants than vowels (*). Let us assume that we already have two generators available: one for vowels and one for consonants.

In such a use case, we could certainly do it using weighted_choice_gen, but it would be very awkward, very quickly:

  • We would have to compute the weights of individual letters
  • We would lose the reuse of the vowel and consonant generators

It would be much better to separate concerns and restore composability, by associating weights to generators instead:

(*) Note: this is very different from generating each vowel with a weight of X, and each consonant with a weight of 1. Can you see why?

 

The emerging need

Both these use cases show the need to randomly select items (values or generators) with associated weights, to introduce bias toward some choices.

  • weighted_choice_gen will implement the weighted choice of values
  • weighted_one_of_gen will implement the weighted combination of generators

In some sense, weighted_choice_gen is the generalization of choice_gen, and weighted_one_of_gen is the generalization of one_of_gen.

 

Other use cases

Both these generators are pretty useful in practice. For instance, we could use them to stress test a trading platform with test inputs that model the kind of traffic the platform usually receives.

If the traffic is made of 25% IRS and 75% Spots, we can combine (using weighted_one_of_gen) a generator for IRS and a generator for Spots.

 

Overall design


We will follow the same recipe we followed for the design and implementation of the previous features. We will create combinators that take other generators and produce new generators (*).

We recognized two very similar problems to solve:

  • The weighted random selection of values
  • The weighted random selection of generators

Because these are very similar in nature, we will build an algorithm that works for both values and generators (generators are a special case of values) and build nice-looking combinators around it.

Because we want to be able to test our combinators, we will also work on separating the non-deterministic part from the deterministic part in our algorithms.

(*) It is generally a good idea to stick with a very few amount of constructs, patterns or idioms inside a given library. Switching too much (even for reasons that might seem good locally) might lead to an explosion of different ways to do things, making the library as a whole less cohesive. In one word, being consistent in our design makes API users happier.

 

Core implementation


In this section, we will discuss the design and implementation of the deterministic core algorithms that will power our combinators.

 

Types

We could use std::pair to associate a value to its weight. To be a bit more declarative though, we will make a dedicated type to model this association:
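A minimal sketch of such a type (the actual tiny_rand definition may use different names or add helpers):

```cpp
#include <cassert>

// A minimal sketch of the value-to-weight association; the real tiny_rand
// type may differ.
template <typename T>
struct weighted_value {
  T value;        // the value to pick
  double weight;  // its relative weight
};
```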

Note: It is not always clear whether using strong typing leads to real improvements. There is always a trade-off involved between safety versus specificity, which will be discussed in some future posts.

 

Algorithm discussion

The choice_gen combinator for random non-weighted selection had constant time complexity. The implementation just had to randomly pick an index and look inside a vector to get back the chosen value.

When we consider weights, the algorithm is less straightforward, and we cannot ensure a constant time complexity. For this reason, and although choice_gen could theoretically be implemented in terms of weighted_choice_gen, we keep them separated.

Our new algorithm will proceed as follows:

  • We will build weight intervals: each interval will be associated with a value
  • The width of an interval is the weight of the associated value
  • We randomly pick a double between zero and the sum of all the weights
  • We binary search this number in the intervals to select the value

The construction of the intervals is depicted below:

INPUT:
[{ val_1, 1.2 }, { val_2, 0.7 }, { val_3, 1.5 }]

INTERVALS:
[0, 1.2)   => val_1
[1.2, 1.9) => val_2
[1.9, 3.4) => val_3

This algorithm requires linear complexity to build the intervals. But this only has to happen once, at the creation of the generator. The search itself has a logarithmic complexity and occurs at each roll of the generator.

 

Algorithm implementation

The STL offers a pretty useful algorithm that can help us do the binary search job: std::lower_bound. Its documentation on cppreference.com says that it “Returns an iterator pointing to the first element [..] greater or equal to the [..] value.”

This means that we can implement the interval logic we described earlier by just storing the end of each interval. Finding the appropriate interval amounts to finding the first interval whose end is greater than or equal to the searched value.

Based on this insight, we can implement the setup phase that builds our intervals as follows. We just associate the end of each interval with the interval’s value:
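A sketch of this deterministic setup phase, assuming a simple (value, weight) pair representation (the real tiny_rand code may differ):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Turn (value, weight) pairs into (interval_end, value) pairs.
// The ends are increasing by construction, which makes them binary-searchable.
template <typename T>
std::vector<std::pair<double, T>>
make_intervals(std::vector<std::pair<T, double>> const& weighted_values) {
  std::vector<std::pair<double, T>> intervals;
  double end = 0.0;
  for (auto const& [value, weight] : weighted_values) {
    end += weight;
    intervals.emplace_back(end, value);
  }
  return intervals;
}
```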

Searching the intervals can be implemented as a simple std::lower_bound on the resulting collection, with a custom comparison:
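A matching sketch of the search, with the custom comparison handed to std::lower_bound (again, the actual tiny_rand code may differ):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Find the value of the first interval whose end is >= the rolled number.
template <typename T>
T const& search_intervals(std::vector<std::pair<double, T>> const& intervals,
                          double rolled) {
  auto it = std::lower_bound(
      intervals.begin(), intervals.end(), rolled,
      [](std::pair<double, T> const& interval, double value) {
        return interval.first < value;
      });
  return it->second;
}
```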

Both these algorithms can be easily unit tested. We took care to avoid any kind of random generator side-effect in these core building blocks.

Note: splitting algorithms into parts is a good way to make the parts testable. A lot of the time, there is no need for dependency injection, fakes or mocks to make a piece of code testable. This is not to say these techniques have no uses, just to point out simpler alternatives.

 

Wrapping up as an API


In the previous section, we built the algorithmic building blocks needed to randomly choose a value, with weights associated to each of the values. In this section, we will assemble these blocks into the combinators of our API.

 

Weighted Value Choice

To randomly pick a value, with weights associated to each value, we just assemble our core building blocks.

At the creation of the generator, we build our interval search vector and pass it to the generator returned from the weighted_choice_gen combinator. This generator will first generate a random double value, then find its associated interval. It then returns the value associated to the interval.
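Putting the pieces together, here is a self-contained sketch of what weighted_choice_gen could look like (the actual tiny_rand signature is an assumption, and the interval-building and search steps are inlined):

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <utility>
#include <vector>

// Sketch: build the intervals once at creation, then each roll draws a double
// in [0, total_weight) and binary-searches its interval.
template <typename T>
auto weighted_choice_gen(std::vector<std::pair<T, double>> const& weighted_values) {
  std::vector<std::pair<double, T>> intervals;
  double total = 0.0;
  for (auto const& [value, weight] : weighted_values) {
    total += weight;
    intervals.emplace_back(total, value);
  }
  return [intervals = std::move(intervals), total](std::mt19937& bit_gen) -> T {
    std::uniform_real_distribution<double> dist(0.0, total);
    auto it = std::lower_bound(
        intervals.begin(), intervals.end(), dist(bit_gen),
        [](auto const& interval, double rolled) { return interval.first < rolled; });
    return it->second;
  };
}
```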

 

Weighted Generator Choice

To randomly pick a generator, we will use the combinator we just implemented to pick a random value, weighted_choice_gen, and add some glue around it.

We first transform our parameter pack of heterogeneous generators into a vector of homogeneous weighted generators that can be fed to weighted_choice_gen. We use type erasure and std::function to do this.

The output of weighted_choice_gen will be a generator that produces a generator when we roll it. We capture it inside the lambda returned by our new weighted_one_of_gen combinator. The implementation mirrors this quite naturally:
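A self-contained sketch of this glue (the signatures are assumptions, and the weight-proportional pick is done via std::discrete_distribution here to keep the example short):

```cpp
#include <cassert>
#include <functional>
#include <random>
#include <utility>
#include <vector>

// Sketch of weighted_one_of_gen: erase the heterogeneous generator types
// behind std::function, pick one generator by weight, then roll it.
template <typename T, typename... Gens>
std::function<T(std::mt19937&)>
weighted_one_of_gen(std::pair<double, Gens>... weighted_gens) {
  using ErasedGen = std::function<T(std::mt19937&)>;
  std::vector<std::pair<double, ErasedGen>> gens{
      {weighted_gens.first, ErasedGen(weighted_gens.second)}...};
  std::vector<double> weights;
  for (auto const& g : gens) weights.push_back(g.first);
  return [gens = std::move(gens), weights](std::mt19937& bit_gen) {
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    return gens[pick(bit_gen)].second(bit_gen);
  };
}
```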

 

Conclusion, and what’s next


In this post, we added two combinators to our tiny_rand library. These combinators allow us to randomly select items (values or generators), with weights associated to each of these items to introduce bias.

We looked at our problem and recognized the similarities between these two features. We saw that they could be implemented on top of one single algorithm. The resulting generator is quite efficient: each roll has logarithmic time complexity (logarithmic in the number of items to pick from).

In future posts, we will look at improving the tiny_rand library with combinators to build dependent generators (generators whose behavior depends on the result of a previous generator). This will lead us to the concept of Monads, which we will discuss in the context of C++.


You can contact or follow me on Twitter.

Building on the STL random header

In today’s post, we will discuss the design and implementation of a small header-only library, whose aim is to complete the STL random header with combinators to generate arbitrary random data.

Throughout this post, we will first describe the motivations behind this library, and its high-level goals. These goals will imply design choices that we will discuss as well. We will then go deep into the implementation of the main features, before concluding and discussing potential improvements.

The second goal of this post is to show how thinking in terms of concepts such as Functors and Applicative can help us find powerful design constructs to base ourselves upon when building an API.

 

Motivation


The STL random header provides us with random number engines and random distributions. The random engines produce randomness, and the distributions consume this randomness to create random data: instances of booleans, integers and floating point numbers, all following some probabilistic properties.

The decoupled design of the STL random header offers a strong basis to build upon, for real applications requiring us to generate instances of more complex data types.

Indeed, we generally need to generate arbitrary random data, from the simplest tuples to more complex data types. For instance:

  • Random element inside a collection of possible values
  • Random tuples (like a 3D coordinate)
  • Random strings, or other STL containers
  • Random custom data types (like RGB colours)
  • Random sum types, such as optional or variant

The goal of the tiny_rand library we will build today is to satisfy these needs. We will build upon the STL and complete it with the ability to randomly generate any kind of data we might want.

 

High level goals


Before discussing the design of the tiny_rand library, it is useful to state and describe its high-level goals. And these are:

  • To build on top of the STL, making sure the integration is easy
  • To be as light as possible in terms of dependencies (headers only)
  • To be complete, making sure we can generate any kind of data we want
  • To be concise and small, with a small set of functions covering our needs

These high level goals will influence our high level design choices. This is the subject of our next section.

 

High level design choices


This section describes the design choices we will follow to implement the tiny_rand library. The main lines behind these choices are connected to our high-level goals, although other variations would be possible.

 

Mersenne Twister

In his talk rand() Considered Harmful, Stephan T. Lavavej encourages us to use std::mt19937 as our default pseudo-random generator. To keep our API simple, we will select this random engine specifically, and build on top of it.

We could choose to enlarge the API to support the other pseudo-random generators. This can always be done later: we will keep it simple for now.

 

Essence of generators

With std::mt19937 as our source of randomness, our goal is to build more distribution-like generators. These generators will extend the concept of distribution and allow us to generate instances of arbitrarily complex types.

Looking at std::uniform_int_distribution, we see that it defines an operator() that takes a random engine as parameter. Simplifying things a little, we can see it as a function from a source of randomness (like std::mt19937) to a value of type int.

We can see there the essence of a random generator in the design of the STL: a callable object from a source of randomness to the type T the generator produces instances of. We will follow this design of generators as modelled by functions.

 

Composition, no inheritance

OOP often drives us toward the use of objects, and quite easily toward complex design patterns. Several design patterns might seduce us here, like the Template Method.

Here is an example of usage of the Template Method to handle the generation of random vectors. It delegates to generate_element the responsibility of generating the values inside the vector.

    *-------------------*
    |   IGenerator[T]   |
    *-------------------*        (Interface for generators)
    | + operator(): T   |
    *-------------------*
              ^
              |
              |
*----------------------------*
|     VectorGenerator[T]     |   (    TEMPLATE METHOD    )
*----------------------------*   (Abstract implementation)
| + operator(): vector[T]    |   (and generate_element as)
| - generate_element(): T    |   (the customisation point)
*----------------------------*

But this design has an important flaw: it is not composable. We cannot easily handle a vector of vectors of lists of integers. Each layer would bring new generate_element sub-functions, and it quickly becomes a real mess.

Because of this, we will stay away from the Template Method and rely instead on direct injection (*). The vector generator will take as parameter a generator to create its values.

Our design will rely on defining higher order generators, combinators, to combine other generators, and return new generators. This approach automatically composes and will allow us to generate any data we want, with only a few combinators.

(*) In general, I would recommend staying away from the Template Method design pattern. Because it relies on inheritance, it couples things much more than composing Strategies does. Overall, I think this is more often an anti-pattern than a pattern to follow.

 

Headers only, no dependencies

In order to make the library as light as possible, tiny_rand will be built as a header-only library. To avoid participating in the dependency hell syndrome, it will not be allowed to depend on any third-party library, including Boost.

 

Primitives


The STL offers generators for booleans, integers and doubles, but it does not offer ways to generate characters. As we will see, satisfying this need will be a bit more tricky than just adding one distribution.

 

Character generator

It seems the STL forgot about char generation. We will start by correcting this grave injustice with our char_gen generator:
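A possible shape for char_gen (the actual tiny_rand code may differ): since std::uniform_int_distribution is not specified for char itself, we draw an int spanning the char range and cast it back:

```cpp
#include <cassert>
#include <limits>
#include <random>

// Sketch: generate any char value, printable or not, by drawing an int in
// [char min, char max] and narrowing it back to char.
inline auto char_gen() {
  return [](std::mt19937& bit_gen) -> char {
    std::uniform_int_distribution<int> dist(std::numeric_limits<char>::min(),
                                            std::numeric_limits<char>::max());
    return static_cast<char>(dist(bit_gen));
  };
}
```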

This generator will generate characters that cannot be properly printed on the screen. It points to the need to generate strings with a restricted set of characters.

 

Reduced character set

We could reduce the range of allowed characters by specifying a range, and deal with it the same way the STL distributions do it for integers. But characters are not integers: there are much fewer of them, and they have different semantics.

We will instead introduce a more general way to pick an element in a finite (and small) set of values, the choice generator:

From this choice_gen generator, we can build generators for alpha-numeric characters, or any other kind of reduced character set we want. We use it below to generate letters:
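A sketch of both pieces (the signatures are assumptions): choice_gen uniformly picks one value out of a small, non-empty set, and the letter generator is built on top of it:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <string>
#include <vector>

// Sketch of choice_gen: uniformly pick one element of `values`.
// Precondition: values must be non-empty.
template <typename T>
auto choice_gen(std::vector<T> values) {
  return [values = std::move(values)](std::mt19937& bit_gen) -> T {
    std::uniform_int_distribution<std::size_t> dist(0, values.size() - 1);
    return values[dist(bit_gen)];
  };
}

// A lowercase letter generator built on top of choice_gen.
inline auto letter_gen() {
  std::string letters = "abcdefghijklmnopqrstuvwxyz";
  return choice_gen(std::vector<char>(letters.begin(), letters.end()));
}
```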

This generator is however much more general: you can also use it to generate values inside an enumeration, which is a useful primitive to handle as well.

 

Playful character generation

We can be more imaginative than just generating letters. We could for instance write a generator that creates valid identifiers for C++ functions or classes:

This code makes use of string_gen, which allows us to generate strings with a maximum length, and is parameterised by a character generator. We will see how to implement it later in this post.

This cpp_identifier_gen will produce perfectly valid C++ identifiers. Feel free to use it, if ever you lack inspiration in finding good-hard-to-decipher function names…

 

Functors and Applicatives


This is not going to be a lecture on Category Theory. We will however use the Math concepts of Functor and Applicative to get a first shot at a good design.

One thing that Haskell practitioners quickly discover is that Functors and Applicative concepts are very general.

  • They occur quite often and quite naturally.
  • They compose well with other pieces of software.
  • They almost always lead to good design.

In this section, we will apply these concepts very practically. We will see how they connect to our random generators, and how they will help our design.

 

I see a functor in your randomness

Functions are functors. A function from R (for random) to A can be transformed into a function from R to B, if we have a function from A to B. We only need to compose these functions together. This is the definition of a Functor.

Since we design our random generators as callable from std::mt19937 to a value of type T (a kind of function), we can apply this useful mathematical construct here. We call it transform_gen and implement it as function composition:

This powerful construct allows us to create new random generators by taking existing generators and composing them with a transformation function. Using it, we can for instance generate random square numbers between 0 and 100:
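Here is a self-contained sketch of transform_gen as plain function composition, together with the square-number example (the argument order and names are assumptions):

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <utility>

// Sketch of transform_gen, the Functor "map" for generators: compose a
// generator of A with a function from A to B to get a generator of B.
template <typename Fn, typename Gen>
auto transform_gen(Fn fn, Gen gen) {
  return [fn = std::move(fn), gen = std::move(gen)](std::mt19937& bit_gen) {
    return fn(gen(bit_gen));
  };
}

// Example: squares of integers drawn in [0, 10], hence values in [0, 100].
inline auto square_gen() {
  auto int_gen = [](std::mt19937& g) {
    return std::uniform_int_distribution<int>(0, 10)(g);
  };
  return transform_gen([](int n) { return n * n; }, int_gen);
}
```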

Please note we are not limited to such scalar-to-scalar mappings. We could for instance generate a range [0, N) from a random generator N the same way.

 

The road to Applicative

When we discover a Functor in our code, a great habit is to immediately look for the next more powerful pattern: Applicative Functors.

One way to see an Applicative is as a generalisation of Functors to multiple arguments. Our Functor was able to transform a single generator into one generator. Our Applicative Functor will be able to combine several generators into one generator.

It might look obscure if you never tried Haskell (you should, it is glorious), but it is very easy to implement. We call this function apply_gen:

  • The N generators are combined into one single generator
  • The resulting generator triggers the N generators and feeds the finalizer
  • The finalizer is a function that takes the outputs of the N generators
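A sketch of apply_gen following these three points (the signature is an assumption; note that the evaluation order of the rolls inside the call is unspecified here, which a real implementation may want to pin down):

```cpp
#include <cassert>
#include <random>

// Sketch of apply_gen: roll each of the N generators, then feed all results
// to the finalizer. Beware: the order in which the rolls happen inside the
// call expression is unspecified.
template <typename Finalizer, typename... Gens>
auto apply_gen(Finalizer finalizer, Gens... gens) {
  return [finalizer, gens...](std::mt19937& bit_gen) {
    return finalizer(gens(bit_gen)...);
  };
}
```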

 

Applicative, the key to custom types

Because of its ability to compose N generators and feed them to an arbitrary function, our Applicative Functor allows us to create generators for custom data structures, by using a constructor or factory function as finalizer.

We can demonstrate this with a simple example. The following piece of code shows how to build a generator for rgb_color, in just 3 lines of code:

We used to_object as a helper function (also provided in tiny_rand) to create a function that calls the constructor of a struct, with a variadic number of arguments provided as parameters:
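A self-contained sketch of both pieces, with the generator of the three channels inlined (to_object's exact tiny_rand signature is an assumption):

```cpp
#include <cassert>
#include <random>
#include <utility>

struct rgb_color {
  int red, green, blue;
};

// Sketch of to_object: returns a callable that forwards its arguments to the
// construction of Object.
template <typename Object>
auto to_object() {
  return [](auto&&... args) {
    return Object{std::forward<decltype(args)>(args)...};
  };
}

// Sketch of the rgb_color generator, with the apply_gen-like plumbing inlined
// to stay self-contained.
inline auto rgb_color_gen() {
  auto channel = [](std::mt19937& g) {
    return std::uniform_int_distribution<int>(0, 255)(g);
  };
  return [channel](std::mt19937& g) {
    return to_object<rgb_color>()(channel(g), channel(g), channel(g));
  };
}
```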

We can give it a try to convince ourselves that it works:

 

Tuples: special case of Applicative

One specific use case of Applicative Functors is the generation of tuples of arbitrary types. We could do it by using std::make_tuple as the finalizer of apply_gen.

Because these specific generators are likely to be used frequently, we can provide specialized implementations to help our clients:

From this, we can create a generator of 3D coordinates, with each coordinate component between -10 and 10, in just a couple of lines of code:
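A sketch of such a specialized tuple combinator and of the 3D coordinate generator (names and signatures are assumptions):

```cpp
#include <cassert>
#include <random>
#include <tuple>

// Sketch of a tuple combinator: the Applicative specialised with
// std::make_tuple as finalizer.
template <typename... Gens>
auto tuple_gen(Gens... gens) {
  return [gens...](std::mt19937& bit_gen) {
    return std::make_tuple(gens(bit_gen)...);
  };
}

// A 3D coordinate generator, each component in [-10, 10].
inline auto coord_3d_gen() {
  auto axis = [](std::mt19937& g) {
    return std::uniform_real_distribution<double>(-10.0, 10.0)(g);
  };
  return tuple_gen(axis, axis, axis);
}
```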

 

Containers random generators


In the last section, we implemented the first building blocks of our library. The two functions we implemented already lead us pretty far. The power of Functors and Applicative allows us to create custom data structures from other generators.

In the previous examples, we started to notice the need for random generators of containers. For instance, we used string_gen to generate random valid C++ identifiers. In this section, we will implement these generators.

 

Pulling the Strings

We start with the random generation of std::string. To remain flexible regarding the set of characters that can appear in the string, our string_gen combinator takes as input a random generator of characters.

We want the size of the string to be random as well. To do so, we define two overloads that strike a balance between flexibility and ease of use.

The first overload gives us full flexibility: we take a SizeGenerator as input to generate the length of the generated string. This allows us to choose any distribution from the STL and not necessarily stick to the uniform one.

But in the most common case, the user will only want to specify a maximum length for the generated string. So we offer an overload for this case as well.

This function makes use of the repeat_n_gen helper function, which allows us to fill a container with repeated calls to a random generator:
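Sketches of repeat_n_gen and of the maximum-length overload of string_gen (the signatures are assumptions):

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <string>

// Sketch of repeat_n_gen: fill a container with n rolls of value_gen.
template <typename Container, typename ValueGen>
Container repeat_n_gen(std::size_t n, ValueGen& value_gen, std::mt19937& bit_gen) {
  Container out;
  out.reserve(n);
  for (std::size_t i = 0; i < n; ++i) out.push_back(value_gen(bit_gen));
  return out;
}

// Sketch of the maximum-length overload of string_gen: roll a uniform size in
// [0, max_size], then fill the string with the character generator.
template <typename CharGen>
auto string_gen(std::size_t max_size, CharGen char_gen) {
  return [=](std::mt19937& bit_gen) mutable {
    std::uniform_int_distribution<std::size_t> size_dist(0, max_size);
    return repeat_n_gen<std::string>(size_dist(bit_gen), char_gen, bit_gen);
  };
}
```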

 

Vector, List, Deque…

All sequential containers can be implemented almost the same way we did for the string random generator. Here is the implementation for the std::vector:
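A sketch following the same pattern (assumed signature), including the reserve call that only some containers support:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Sketch of vector_gen: roll a uniform size in [0, max_size], then fill the
// vector with rolls of the value generator.
template <typename T, typename ValueGen>
auto vector_gen(std::size_t max_size, ValueGen value_gen) {
  return [=](std::mt19937& bit_gen) mutable {
    std::uniform_int_distribution<std::size_t> size_dist(0, max_size);
    std::size_t n = size_dist(bit_gen);
    std::vector<T> out;
    out.reserve(n);  // reserve is the part not available on all containers
    for (std::size_t i = 0; i < n; ++i) out.push_back(value_gen(bit_gen));
    return out;
  };
}
```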

The main difference between the containers is the use of reserve, which is not available for all of them.

 

Associative containers

Associative containers such as map or set are a bit more tricky to design. In particular, the size of the container is a bit problematic.

Should we ask for a size and try to insert more keys until we reach that size? This might be problematic if the required size is bigger than the number of possible inhabitants for the keys. Doing so might create an infinite loop.

For this reason, these generators will not ask for a given length but for a given number of rolls of the random generator for keys.

Here is the implementation for the std::unordered_map:
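A sketch of this generator (assumed signature): we roll the key generator nb_rolls times, and duplicate keys collapse, so the final size may be smaller than nb_rolls:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <unordered_map>
#include <utility>

// Sketch of unordered_map_gen: nb_rolls of the key generator, one value roll
// per key; colliding keys are simply dropped by emplace.
template <typename K, typename V, typename KeyGen, typename ValueGen>
auto unordered_map_gen(std::size_t nb_rolls, KeyGen key_gen, ValueGen value_gen) {
  return [=](std::mt19937& bit_gen) mutable {
    std::unordered_map<K, V> out;
    out.reserve(nb_rolls);  // not available on std::map
    for (std::size_t i = 0; i < nb_rolls; ++i) {
      auto key = key_gen(bit_gen);  // roll the key before the value
      out.emplace(std::move(key), value_gen(bit_gen));
    }
    return out;
  };
}
```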

Note that reserve cannot be used for std::map. This is again the main difference between the associative containers.

 

Enjoying container generators

Using these combinators, we can create a generator of unordered maps from strings (like player names) to 3D coordinates (the locations of these players on a game map) in only 4 lines of code:

We can generate a sample map for fun, and iterate over it to convince ourselves that it works fine. Here is a possible output:

 

Sum types, our last combinator


The previous sections described a set of random generator combinators that allow us to generate custom data types and containers. There is however still one missing piece: the support for sum types.

 

Sum types

Sum types are very useful constructs in software development and arise in C++ through different forms.

The most commonly known form is through polymorphism (*): if B and C inherit from A, then ideally, we should be able to combine random generators of B and C to create a random generator of A.

Variants are the second form of sum types known to C++ programmers. For instance, we can define A as a boost::variant of B and C. Again, we should ideally be able to combine random generators of B and C to create a random generator of A.

(*) This is not exactly true. Polymorphism is not the same thing as a sum type. But it can be used (and was commonly used before boost::variant) to implement one, in combination with the Visitor design pattern.

 

Implementation

We want to support different forms of sum types. We also set ourselves the goal of avoiding dependencies on Boost. So we have to find a way to generate a variant without having to depend on it.

The solution is to abstract these concerns away under the notion of a finalizer function, as we did for the Applicative. This function will be called on the production of one of the generators passed as input to our one_of_gen random generator combinator.

The implementation makes use of type erasure through the use of std::function (there are probably ways around this) to store all the generators inside a vector. It then randomly selects one to generate a value and gives it to the finalizer:
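A self-contained sketch of this approach (the signature is an assumption): one way to make the heterogeneous generators share a single erased type is to compose each of them with the finalizer up front, so that all of them become std::function objects producing the output type:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// Compose the finalizer with one generator and erase the result.
template <typename Out, typename Finalizer, typename Gen>
std::function<Out(std::mt19937&)> erase_gen(Finalizer finalizer, Gen gen) {
  return [finalizer, gen](std::mt19937& r) { return finalizer(gen(r)); };
}

// Sketch of one_of_gen: store the erased generators in a vector, then pick
// one uniformly at each roll.
template <typename Out, typename Finalizer, typename... Gens>
auto one_of_gen(Finalizer finalizer, Gens... gens) {
  std::vector<std::function<Out(std::mt19937&)>> erased{
      erase_gen<Out>(finalizer, gens)...};
  return [erased = std::move(erased)](std::mt19937& bit_gen) {
    std::uniform_int_distribution<std::size_t> pick(0, erased.size() - 1);
    return erased[pick(bit_gen)](bit_gen);
  };
}
```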

 

Demo time!

Using the finalizer appropriately, we now have support for the random generation of sum types by combining generators of each of their parts.

But the pattern is more general: the finalizer can be used to create a std::optional, a boost::variant, or anything else. For instance, weird_gen is a weird way to implement a generator of integer values between -10 and 10:

  • The integer generator will produce values from -10 to 0
  • The string generator will produce strings with maximum length 10
  • The finalizer maps each string to its length
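A self-contained sketch of weird_gen following these bullets (the combinator plumbing is inlined rather than reused):

```cpp
#include <cassert>
#include <random>
#include <string>

// Sketch of weird_gen: roll a coin to select either the int generator or the
// string generator, and map a generated string to its length, yielding values
// in [-10, 10] either way.
inline int weird_gen(std::mt19937& bit_gen) {
  std::uniform_int_distribution<int> coin(0, 1);
  if (coin(bit_gen) == 0) {
    // Integer branch: values from -10 to 0.
    return std::uniform_int_distribution<int>(-10, 0)(bit_gen);
  }
  // String branch: a string of length at most 10...
  std::uniform_int_distribution<int> size_dist(0, 10);
  std::string s(size_dist(bit_gen), 'x');
  // ...mapped to its length by the finalizer: values from 0 to 10.
  return static_cast<int>(s.size());
}
```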

For sure, nobody sane would implement such a thing, but it should give you a taste of what is possible with one_of_gen. For instance, we could generate random events and send them to an event handler.

 

Enjoying our hard work


We are done. The resulting library is available as tiny_rand on my GitHub. The samples directory shows some more examples of use cases.

We will conclude this post with one more elaborate example. Let us imagine we have a Game object, which contains an integer for the current round number, and a map from player names to their respective 3D positions:

Here is how we can generate a random instance of Game, using the combinators we presented throughout this post:
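A compact, self-contained sketch of such a generator (the Game layout follows the text; the combinators are inlined here rather than reused, and the bounds are illustrative):

```cpp
#include <cassert>
#include <random>
#include <string>
#include <tuple>
#include <unordered_map>

struct Game {
  int current_round;
  std::unordered_map<std::string, std::tuple<double, double, double>> players;
};

// Sketch of a Game generator: a random round number, and up to 5 players with
// random names and random 3D positions.
inline Game game_gen(std::mt19937& g) {
  auto name = [&g] {
    std::uniform_int_distribution<int> len(1, 8);
    std::uniform_int_distribution<int> letter('a', 'z');
    std::string s;
    for (int i = 0, n = len(g); i < n; ++i) s.push_back(static_cast<char>(letter(g)));
    return s;
  };
  auto axis = [&g] {
    return std::uniform_real_distribution<double>(-10.0, 10.0)(g);
  };
  Game game;
  game.current_round = std::uniform_int_distribution<int>(1, 100)(g);
  for (int i = 0; i < 5; ++i)
    game.players.emplace(name(), std::make_tuple(axis(), axis(), axis()));
  return game;
}
```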

 

Conclusion, and what’s next


Throughout this post, we built a small random generator library to complete the STL random header with additional features to generate any kind of data. This library is available on my GitHub. Any feedback (suggestions or criticisms) is welcome.

 

Small core, Big reach

We built the API based on a small set of core ideas, which proved pretty powerful and sufficient to go all the way:

  • A random generator is a function from std::mt19937 to a value
  • Choosing to combine generators by direct injection to compose them
  • Embracing the power of Functor, Applicative and Sum types

It demonstrates that surprisingly basic constructs can take us a long way, and that Haskell has some pretty useful design techniques ready for us to use.

 

Further improvements

There are some improvements that could be made to the library and its implementation, on which I plan to work, among which:

  • Complete the choice_gen with weighted_choice_gen to add weights
  • Improve random char generation: it only works for small character sets
  • The use of std::function, which could be replaced by a lower-overhead construct
  • An overall work on performance and in particular the number of copies
  • The generators created cannot easily be inspected (at compile time)

 

Further further improvements

After having read the great post An Introduction to Reflection in C++, I think it would be worthwhile to find a way to automatically create generators through introspection.

Especially for enumerations, where we end up saying things twice:
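A sketch of the duplication in question (names are hypothetical): the enumerators are listed once in the enum, and then again in the values handed to the choice generator:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

enum class Color { Red, Green, Blue };

// We have to repeat every enumerator in the list of candidate values;
// introspection could derive this list from the enum itself.
inline Color color_gen(std::mt19937& bit_gen) {
  std::vector<Color> values = {Color::Red, Color::Green, Color::Blue};
  std::uniform_int_distribution<std::size_t> pick(0, values.size() - 1);
  return values[pick(bit_gen)];
}
```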

In some cases, creating the generators by introspection would not be desirable: as shown through the examples of this post, we often need additional parameters to tune the random generation.

But it would probably cover quite a lot of use cases nevertheless and could be implemented as a different library.

 

Closing words

Any feedback regarding the features, the design choices or the implementation is welcome. Feel free to reach me on Twitter.