LISP Meta-Programming for C++ Developers: Compile time computations

In the previous posts, we started a series dedicated to familiarising C++ developers with the world of LISP meta-programming, based on the Clojure programming language. The goal is to offer a different perspective on meta-programming: how we can approach it, and the kind of use cases it can address.

We started the series with an introduction to Clojure, homoiconicity and how meta-programming works in LISP dialects. We introduced the basics of macros through simple examples and started to discuss compile time computations.

 
In the previous post, we talked about constexpr and how it makes compile time computation in C++ more accessible, testable and maintainable.

In today’s post, we will continue exploring compile time computations, and study some of the differences between what we can do with macros and constexpr. We will discuss the advantages of both constructs, and some of the constraints of C++ constexpr.

Note: this post builds on the previous posts of the series. Unless you already know Clojure and its macros, you need to read the previous posts to be able to follow this one.

 

C++ Constexpr composability


We will start our study by showing one thing that works really well with C++ constexpr functions: the ability to decompose (and recompose) compile time computations, something that does not work as well with Clojure’s macros.

 

Back to compile time additions

In our previous post, we introduced how to perform compile time computations through the simplest possible example: compile time addition.

We went through and dissected how to do it in both Clojure and C++. You can find below the corresponding Clojure macro and C++ constexpr function:
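In essence, the two look like this:

;; Clojure: a macro that sums the AST fragments it receives
(defmacro add-m [a b]
  (+ a b))

// C++: a constexpr function, usable at compile time
constexpr int add(int a, int b)
{
    return a + b;
}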

If you do not feel comfortable with these constructs, pause there and go back to the previous post before continuing, as it is a prerequisite for the rest of this post.

 

The problem with the add-m macro

Remember that a macro in LISP works by manipulating the AST fragments it receives as input, and produces an AST fragment as output.

Upon finding the macro call (add-m 1 2), the Clojure compiler will expand the macro. To do so, it calls the add-m macro with the AST fragments wrapped by the macro as arguments.

In our example, the arguments are the two AST fragments 1 and 2. Since these AST fragments happen to be integers, they can be summed together. The net result is 3 as compile time output:
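(macroexpand '(add-m 1 2))
;; => 3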

This is quite subtle and relies pretty heavily on dynamic typing to work. But when we introduce the intermediary variables x and y and call add-m on them, the Clojure compiler gets lost, and we get a compilation error:
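Something like this:

(def x 1)
(def y 2)

(add-m x y)
;; => CompilerException java.lang.ClassCastException:
;;    clojure.lang.Symbol cannot be cast to java.lang.Number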

Even though x and y are symbols that refer to integer values (1 and 2 respectively), the AST fragments given to add-m are the symbols themselves and not their value. And since we cannot sum symbols, it fails to compile. We will see later in this post how to solve this.

 

C++ constexpr composes just fine

There is no such problem with constexpr. As long as the variables we sum are constant expressions, we can introduce temporary variables and it works fine:
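constexpr int x = 1;
constexpr int y = 2;
constexpr int z = add(x, y);   // fine: x and y are constant expressions

static_assert(z == 3, "computed at compile time");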

Being able to decompose an expression is key to building maintainable software: decomposition allows decoupling and separation of concerns. C++ constexpr functions offer this characteristic out of the box. As C++ developers, we should be grateful for this.

And since constexpr functions are usable both as standard run-time functions and as meta-functions, C++ ends up offering a quite satisfying meta-programming experience when it comes to simple compile time computations.

 

Functions and macros

Functions are much more composable than macros. This is one important lesson that Clojure programmers learn quite early when experimenting with meta-programming.

Since composition is such a great weapon against software complexity, functions should be preferred over macros whenever both can do the job. Macros should be reserved for tasks that cannot otherwise be performed with functions. Whenever we can, we should extract the heavy lifting of macros inside functions.

The reason constexpr functions are more convenient to decompose than macros is precisely that constexpr functions are just functions.

 

Extending the language


We just learned that compile time computations using macros happen to be less composable than those using C++ constexpr functions. Can we fix this?

One of the good things about macros is that we can use them to extend the language (almost) as we wish. Let us try to build a constexpr equivalent for Clojure.

 

Hacking our way around

Before trying to patch the language by adding new constructs to it, let us see how we can fix our issue differently. One way to fix a problem with a macro is to introduce more macros (seems legit).

The trick we will use is to create a macro that will return a code fragment that contains another macro inside it. The Clojure compiler will keep expanding macros as long as new macros are returned.

For instance, to compute z, the compile time sum of x and y, we create a macro named z that constructs a code fragment where add-m is called on the values of x and y:
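;; A sketch of such a macro:
(defmacro z []
  (let [x 1
        y 2]
    `(add-m ~x ~y)))   ;; returns the code fragment (add-m 1 2)

(z)  ;; => 3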

How does it work? Remember that the backquote acts as an escape mechanism to return an unevaluated AST, while the tilde lets us do “interpolations” inside the returned code fragment.

So the macro z will return the code fragment (add-m 1 2), in which x and y have been replaced by their values thanks to the tilde. Since add-m is also a macro, Clojure will expand it, yielding 3.

Note: If you are unclear about how quoting (using backquote) and unquoting (using tilde) work, you can refer to our previous article, in which we explained this in more detail and with simpler examples.

 

Extending the language

We just saw how to decompose a compile time computation using local variables inside a macro. If we want to make x and y visible as global constants instead (to reuse them elsewhere, for instance), we will have to use something else.

Because macros allow extending the compiler, they effectively grant the developer the power to add new features to the language. This is quite common and part of the philosophy of LISPs. We will use this to solve our problem.

Let us create a constexpr equivalent for Clojure. To do so, we will use another interesting feature of Clojure, the eval function. This function takes an AST fragment as input, and evaluates it (*):
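(eval '(+ 1 2))
;; => 3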

Based on eval, we define below our constexpr macro. This macro creates an AST that corresponds to the call of the function f with its arguments args, and then immediately evaluates the constructed fragment:
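;; A minimal sketch:
(defmacro constexpr [fct & args]
  (eval `(~fct ~@args)))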

  • The args parameter captures all arguments after fct (like variadic templates: …args)
  • The tilde-arobase (~@) unpacks the variadic arguments into the enclosing expression
  • The eval function then evaluates the constructed fragment

Let us first illustrate how we can use it before explaining how it works in more detail. Here is how we could define add-m using the constexpr macro. Now, we can sum two global constants:
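;; A sketch, assuming add is a plain function defined as in the previous post:
(def x 1)
(def y 2)

(defmacro add-m [a b]
  `(constexpr add ~a ~b))

(add-m x y)  ;; => 3, computed at compile time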

To understand how it works, we will go step by step. Calling (add-m x y) first returns the AST fragment (constexpr add x y), which contains a second macro. The Clojure compiler sees another macro, and will thus expand this AST fragment as well.

Expanding (constexpr add x y) will create an AST fragment (add x y) before immediately evaluating it. The evaluation will resolve the symbols add, x and y (yielding the definition of add, and the values 1 and 2 respectively), before proceeding with the function call. And so we get 3.

The key difference with our previous macro is the use of eval. Because eval resolves the symbols x and y, we end up summing integers and not symbols. The different steps of macro-expansion are depicted below:

(add-m x y)
=> (constexpr add x y)     ;; Macro-expansion
=> (eval (add x y))        ;; Macro-expansion
=> (definition-of-add 1 2) ;; Symbol resolution
=> 3                       ;; Application of add

(*) Some languages such as Python or Ruby do offer eval functions as well. The key difference is that the Clojure variant takes an AST as input, while the usual eval function takes a string. This difference is an important one: dealing with data structures is more powerful than doing string manipulations.

 

Constexpr combines two solutions into one

We just saw two different ways for Clojure to deal with the decomposition of a compile time computation into pieces. One technique applies for decomposition into local computations. The other applies for decomposition into global computations.

This means we have two different solutions in Clojure, while C++ provides a single way to deal with the same two problems. Clearly, constexpr has some good advantages here over macros in terms of usability.

This shows how fantastic the addition of constexpr was for C++. It provides a consistent syntax for functions and meta-functions, it makes meta-functions testable, and it is quite intuitive compared to the alternatives. But it is also limited, as we will see.

 

Macros strike back


The previous sections show some of the very good sides of C++ constexpr. We will now explore some examples of compile time computations where constexpr functions are not as flexible as macros.

 

Not so hidden motivations

These examples are not meant to diminish the merits of constexpr or mock C++ meta-programming. They are meant to discuss some shortcomings of the current state of constexpr, not its potential.

In fact, these limitations are related to the constraints currently imposed on constexpr by the standard, some of which could probably be relaxed. Which constraints could be relaxed is the subject of the next section.

 

Compile time average

Compile time computations are rarely as simple as our previous examples. Summing two integers at compile time does not add much value: optimizing compilers will usually do it anyway.

Let us try something slightly more complex in this section: computing the average of a collection of values, at compile time. As a reminder, we explained how to implement average in Clojure in our very first post:
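;; A minimal version:
(defn average [coll]
  (/ (reduce + coll) (count coll)))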

Where reduce is the Clojure equivalent of the STL std::accumulate. What it does is in essence the same as the following C++ code:
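// Something along these lines:
#include <numeric>
#include <vector>

double average(std::vector<double> const& values)
{
    double const sum = std::accumulate(values.begin(), values.end(), 0.0);
    return sum / values.size();
}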

 

Clojure solution

Because we can call any function inside a macro, implementing a compile time average, with the macro average-m, is only a matter of calling our already implemented average function:
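;; A sketch: the macro receives the vector literal as an AST fragment
(defmacro average-m [values]
  (average values))

(average-m [1 2 4 5])  ;; => 3, computed at compile time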

It does not look like much. But there is something here that a C++ constexpr function would not be able to do: we just used a standard Clojure vector (which requires dynamic allocation) inside a compile time computation.

 

Constexpr restrictions

If you read the specification of constexpr, you will find that it is subject to restrictions that come from the specification of constant expressions in C++.

For instance, both inputs and outputs have to be literal types. As a result, the following constexpr function will refuse to compile, GCC complaining that std::vector is not a literal type:
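// A sketch (under C++17 rules):
#include <vector>

constexpr double average(std::vector<double> const& values)  // error: std::vector is not a literal type
{
    double sum = 0.0;
    for (double v : values)
        sum += v;
    return sum / values.size();
}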

In fact, GCC will only complain if average is used inside a constexpr context:
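double ok = average({1.0, 2.0, 4.0, 5.0});            // run-time call: accepted
constexpr double ko = average({1.0, 2.0, 4.0, 5.0});  // constexpr context: compilation error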

Item 18 of the constant expression specification also states that we cannot use new or delete to compute a constant expression. As a result, compiling the following function fails whether or not the function is used in a constant expression context:
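// A hypothetical example:
constexpr int* make_ints(int n)
{
    return new int[n];  // error: new is not allowed in a constexpr function
}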

 

C++ solutions

The aforementioned restrictions mean that, to compute our average at compile time, our constexpr function can neither take a vector as input nor instantiate one inside its implementation. So we have to resort to alternative solutions.

One solution is to use variadic templates and the fold expression of C++17:
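// A sketch using a C++17 fold expression:
template <typename... Ts>
constexpr double average(Ts... values)
{
    return (values + ...) / static_cast<double>(sizeof...(values));
}

static_assert(average(1.0, 2.0, 4.0, 5.0) == 3.0, "computed at compile time");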

Another solution is to use std::array, a valid literal type:
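// A sketch using std::array:
#include <array>
#include <cstddef>

template <typename T, std::size_t N>
constexpr double average(std::array<T, N> const& values)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < N; ++i)   // loops are allowed in constexpr since C++14
        sum += values[i];
    return sum / N;
}

static_assert(average(std::array<double, 4>{1.0, 2.0, 4.0, 5.0}) == 3.0, "");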

Quite unfortunately, we cannot make use of the std::accumulate algorithm to make our solution less imperative. Most algorithms are not constexpr yet, and so the following implementation does not compile:
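#include <array>
#include <cstddef>
#include <numeric>

template <typename T, std::size_t N>
constexpr double average(std::array<T, N> const& values)
{
    // error in a constant expression: std::accumulate is not constexpr (as of C++17)
    return std::accumulate(values.begin(), values.end(), 0.0) / N;
}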

 

Impact of these restrictions

Because of the heavy restrictions on C++ constexpr functions, a lot of tools available at runtime are not accessible at compile time. Standard containers are out of reach. Algorithms are not yet available. Any code that is not constexpr cannot be reused.

This contrasts with Clojure, in which there is no difference between the runtime and compile time world in terms of code we can write. The same data structures are available, the same algorithms, with the same behavior and almost the same performance.

We discussed in our previous post how important it is for accessibility to have a similar syntax and the same set of tools for both functions and meta-functions. Because of these restrictions, constexpr functions do not quite reach this ideal.

 

Arguments for relaxing constexpr constraints


One rationale behind the restrictions on constexpr functions is that the forbidden constructs (such as dynamic allocation) do not make sense for constant expressions. Let us discuss this rationale and see whether some of these constraints could be relaxed.

 

Relaxing the implementation

Let us assume for now that both inputs and outputs of a constexpr function do have to be literal types. Why could we not relax the constraints on the implementation of the function?

In Haskell, pure functions are allowed to mutate data internally: as long as no side effect can be observed from the outside, a function can be considered pure. The same goes for noexcept functions, which may call functions that can throw.

Would the same line of reasoning apply to constexpr functions? A constexpr function could be allowed to instantiate a vector or a map, as long as it stays an implementation detail and does not leak through the interface.

 

Performance considerations

Let us consider a function frequency_map that takes as input an array of integers, and computes the occurrence count of each element.

It cannot return a std::map associating each value with its occurrence count, because of the literal type restriction. It cannot return a std::array of pairs containing the associations either, as the size of this array would be hard to know a priori.

So we make it return an array of integers of the same size as the input array. Each value of the output array is the occurrence count of the corresponding element in the input collection (the element at the same index).

For instance, for the input [1, 2, 1, 1, 3, 3, 1], we get:

Input:  [1, 2, 1, 1, 3, 3, 1]
Output: [4, 1, 4, 4, 2, 2, 4]

The constexpr restrictions already constrained our interface pretty heavily. They also constrain the implementation of the function. Even with our selected prototype, we could imagine using a std::map to make the algorithm efficient:
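// A sketch of what we would like to write:
#include <array>
#include <cstddef>
#include <map>

template <typename T, std::size_t N>
constexpr std::array<int, N> frequency_map(std::array<T, N> const& input)
{
    std::map<T, int> counts;   // rejected: std::map cannot be used in a constexpr function
    for (T const& value : input)
        ++counts[value];

    std::array<int, N> output{};
    for (std::size_t i = 0; i < N; ++i)
        output[i] = counts.at(input[i]);
    return output;
}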

But it will not compile, since we cannot use a std::map in the implementation either. We are therefore left with two choices:

  • Use a less efficient algorithm (potentially N squared complexity)
  • Develop a constexpr compatible version of the STL associative containers

Neither of these alternatives looks very appealing. The first might slow down compilation due to the algorithm’s inefficiency. The second forces developers to learn another container API, and violates DRY by doing so.

 

Performance traps

The talk Constant fun (CppCon 2016) shows the difficulty of developing our own efficient constexpr-compatible data structure or algorithm. It also shows the kind of performance traps waiting for us when we decide to do so.

The speaker shows, at the 42nd minute, that his compile-time merge sort algorithm is quite slow. He does not show the full implementation, but the complexity of the algorithm might be a valid explanation for this slowness.

The implementation indeed makes use of cons and tail to manipulate arrays. Both likely lack the efficiency of their functional programming counterparts, which work on lists with O(1) complexity: creating a new array has O(N) complexity.

So both cons and tail will likely trigger copies of the whole array, making the merge operation pretty expensive. The same performance defect may occur with repeated push_back-like calls on std::tuple: without amortization techniques or structural sharing, creating a tuple by pushing back N times into it has quadratic complexity.

 

Going further: relaxing inputs and outputs

What about relaxing the constraints of literal types on inputs and outputs? Would it be feasible? And would it be worth it? We list below some arguments in favour of relaxing the literal type constraint.

Consider below how simple it is in Clojure to compute at compile time the frequency map we just struggled with in C++. In Clojure, it is simply a matter of calling a function that does the job at run-time (freq-map here):
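;; A sketch, assuming freq-map builds on the standard frequencies function:
(defn freq-map [coll]
  (let [counts (frequencies coll)]
    (mapv counts coll)))

(defmacro freq-map-m [coll]
  (freq-map coll))

(freq-map-m [1 2 1 1 3 3 1])
;; => [4 1 4 4 2 2 4]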

Consider the advantages of having the same level of performance at compile time as at run-time. The naive implementation above takes 15 milliseconds for an input of 10000 elements at compile time. Optimized versions go below 5 milliseconds.

This is two to three times slower than the run-time version, but it is still pretty decent performance for a compile time computation. But most importantly, this ensures that compile time computations will not be the bottleneck of the compilation.

Consider also that being able to produce standard containers at compile time has a positive impact on run-time performance: the runtime code does not need to transform custom compile-time-compatible data structures back into standard containers.

 

It is not that bad

C++ does not need to support non-literal types for inputs and outputs of constexpr functions. There are alternatives to compute things such as frequency maps. Optimizing compilers might be able to do it automatically if everything is made const properly.

We could also push the computation to “init time”, forcing the computation to happen just once at the initialisation of the program, and storing the result in some static variable (either global or local depending on the need).

Another alternative, if such a computation is not affordable at “init time” (and is not optimised away), is to generate the result using an external program. This causes some problems of its own, like informing interested parties that the source of truth lives outside of the source code of the program.

 

Conclusion and what’s next


Through this post, we went deeper into the world of compile time computations in both C++ and Clojure, and compared constexpr with macros.

Through this exploration, we discovered the really good sides of C++ constexpr functions and the pleasant meta-programming experience they offer. We also saw some of their limitations when faced with more complex compile time computations.

Those limitations are mainly due to the constraints that constant expressions have in C++. Some of these restrictions could maybe be relaxed for constexpr functions. We discussed one potential improvement, and the benefits it could bring, both in terms of maintenance and compile times.

This closes the chapter on compile time computations. In the next post, we will continue our meta-programming journey by diving into AST manipulations. We will see how we can leverage macros to automate operational concerns, and ways to emulate this in C++.


Follow me on Twitter at @quduval.

LISP Meta-Programming for C++ Developers: First Macros

In the previous post, we started a series dedicated to familiarising C++ developers with the world of LISP meta-programming, based on the Clojure programming language. The goal is to offer a different perspective on meta-programming: how we can approach it, and the kind of use cases it can address.

We started the series with an introduction to Clojure, homoiconicity and how meta-programming works in LISP dialects. We ended by introducing the concept of macros as functions from AST fragments to AST fragments, pluggable into the Clojure compiler.

In today’s post, we will continue our exploration of macros. We will illustrate how they work through simple examples, to make sure everyone stays afloat before we dive deeper in future posts. We will end with a small discussion on C++ constexpr.

Note: this post builds on the previous post of the series. Unless you already know Clojure, you need to read this first post to be able to follow this one.

 

First AST manipulations


In this first section, we will implement our first very basic macros. The goal is to start building an intuition on how they work, and what they can do.

 

Swallowing an AST

As we saw in our previous post, a macro is a function that receives fragments of an AST as arguments. It returns another AST that will replace the original fragments it was given. A macro also differs from a simple function in that it is called at compile time.

Our very first macro will just swallow the AST fragment that is given to it, and replace it with nil, the null pointer of Clojure. We will call this macro swallow, for the AST it will take as argument will never get out.

You can find below its implementation. The syntax of macros is strictly identical to the syntax of functions, except for the use of defmacro instead of defn. Other than this, it is a perfectly normal function:
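(defmacro swallow
  "Swallows the piece of code it receives, and replaces it with nil"
  [code]
  nil)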

  • It has a name, just as functions do: swallow in our case
  • We can attach an optional comment to it, a string that follows the name
  • It takes arguments, a single one named code in our case
  • It returns a result, which will just be nil here

 
In fact, macros are almost like functions. The difference lies in the evaluation of the macro. Instead of receiving evaluated arguments (at run-time), the macro receives fragments of AST as arguments (at compile time).

To better explain how this works, let us consider the following piece of code, which defines a function test-swallow, in which we call swallow on (+ a b):
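(defn test-swallow [a b]
  (swallow (+ a b)))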

The macro will be called at compile time. It will receive (+ a b) as its code argument: a list made of the symbols +, a and b. We insist on the fact that the macro does not receive the values associated with these symbols (they do not have any at compile time anyway) but the symbols themselves.

The macro will then perform its work: in our case, it will ignore its arguments and return nil. So the whole code (swallow (+ a b)) will be replaced by nil at compile time. As a consequence, the function test-swallow will behave exactly as if its body was just nil.

In fact, after compilation, it is nil, just as if we had written this:
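(defn test-swallow [a b]
  nil)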

Hence calling test-swallow will always return nil:
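(test-swallow 1 2)
;; => nil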

 

Swallowing side effects

We just saw that the swallow macro takes the fragment of AST it is given and replaces it with a new AST consisting of only nil. The resulting code does not contain any trace of the code that was passed to the swallow macro.

This is completely different from what would happen with a function that ignores its argument and systematically returns nil. In the case of a function, the argument would still be evaluated, and the result of this evaluation would be discarded.

To better illustrate this key difference, we can give swallow a piece of code that performs a side effect, for instance printing “Hello world”. The code is simply discarded by the macro, so the resulting code does not perform any side effect:
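(swallow (println "Hello world"))
;; => nil, and nothing is printed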

If instead we had defined a function such as no-swallow, which systematically returns nil as well, the outcome would have been very different. The argument would have been evaluated, triggering the side effect, as shown below:
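(defn no-swallow
  "A function that ignores its argument and returns nil"
  [code]
  nil)

(no-swallow (println "Hello world"))
;; Hello world    <- the argument is evaluated, triggering the side effect
;; => nil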

 

Printing during the compilation

To better illustrate the fact that an AST is being provided as argument to a macro, and that this AST consists of symbols, we will write a macro that:

  • Prints its argument as it receives it (*)
  • Returns its argument as output, unchanged

We will name this macro print-macro:
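(defmacro print-macro
  "Prints the code it receives at compile time, and returns it unchanged"
  [code]
  (println code)
  code)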

Let us write a test driver for this macro. We will create a function named add-two that adds its two arguments a and b. But instead of summing our arguments directly, we surround the addition with the print-macro:
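(defn add-two [a b]
  (print-macro (+ a b)))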

The macro will return the code fragment (+ a b) unchanged, so the function will do the sum as we expect. But at compile time, we see the result of the println:
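(+ a b)   ;; printed on the console during macro expansion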

(*) Yes, we can perform side effects inside macros, and thus inside the compiler. We will see interesting uses for this in future posts.

 

Compile time computations


Our previous examples of macros were not very useful. In this section, we will show how to shift computations from run-time to compile time using macros.

 

Adding at compile time

Let us start simple, and add numbers at compile time. Please note that this is not very useful in itself (the JVM will likely do these optimizations by itself), but it is a good starting point.

You can find below the code of the function add that adds two numbers, and the corresponding add-m macro that does the exact same thing, but at compile time, when called on two numbers.
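;; A sketch of both:
(defn add [a b]
  (+ a b))

(defmacro add-m [a b]
  (+ a b))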

The code is almost identical. The difference is that if you call (add-m 1 2), the compiler will execute the macro with the fragments of AST 1 and 2, the macro will sum these numbers, and you should end up with the value 3.

 

Trust, but verify

Do not take my word for it: we need proof that the computation indeed happens at compile time. To get it, we will use a very powerful tool available in the Clojurian’s toolbelt: macro expansion.

In all the code that follows, we will use a function that allows us to see the effect of macros on a piece of code: walk/macroexpand-all (*). It effectively allows us to see what code will be compiled after having been transformed by our macros.

We will not go through all the details behind the macro expansion process, as it is quite a large topic. We will only use it to check that our add-m macro does indeed return a constant at compile time:
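(walk/macroexpand-all '(add-m 1 2))
;; => 3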

(*) The slash corresponds to the separation between the namespace of the function and its name. Here “walk” is a namespace alias for “clojure.walk”.

 

Factorizing

An interesting feature of macros is that they can call any function. As we saw in our previous example with println, this includes functions with side effects. In our specific case, our add-m macro can call our add function:
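(defmacro add-m [a b]
  (add a b))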

We already noted the need for ways to debug and test meta-functions: macro expansion allows us to see the effect of calling a macro on a piece of code. Being able to call any function is another very important feature: it allows Clojure developers to move most of the heavy lifting of macros inside standard functions, which can be tested much more easily.

In the context of compile time computations, macro expansion allows us to verify that we return a constant, while factorizing into functions allows us to check that the constant is valid (through unit tests, for instance). We will come back to testing when discussing constexpr later in this post.

 

Inlining


Until now, the macros we wrote returned only constants. In this section, we will see how we can return more complex code fragments, and use this to inline functions.

 

Inlining addition

The ability to return any code fragment from a macro gives us the power to inline the content of a function. Instead of writing a function, we write a macro that returns what would be the body of the function.

The following macro returns a code fragment that represents the sum of the two parameters a and b (themselves code fragments) given to the macro:
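(defmacro add-inline [a b]
  `(+ ~a ~b))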

We will dissect this expression in a second. For now, let us first verify that it behaves as we expect. To do so, we use macroexpand on (add-inline 1 2):
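(macroexpand '(add-inline 1 2))
;; => (clojure.core/+ 1 2)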

  • It returns a list containing the operator +, followed by 1 and 2
  • This list matches the syntax of calling the operator + on 1 and 2

So the call to the macro was effectively replaced at compile time by the body of the macro. This is usually what we call inlining.

 

Understanding the syntax

We will now dissect this macro, and in particular the meaning of the weird characters it contains: the backquote and the tilde.

The backquote escapes a piece of code. It tells Clojure to not evaluate the piece of code that follows, and to return it as a fragment of AST instead. We call this quoting an expression. So in our case, it asks Clojure to return the list starting with the symbol +.

The tilde asks Clojure to replace part of an escaped expression (a quoted expression) with the value of the variable whose name follows the tilde. So in our case, it asks Clojure to replace the symbols a and b by their values: the arguments of the macro themselves.

 

Like string interpolation, but on AST

One way to see it is as string interpolation, but on AST fragments. The backquote allows us to create a kind of string. The tilde allows us to replace part of the string by the value of a variable that has the same name. Here is an example of string interpolation taken from Wikipedia (using Python):
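# A sketch in the spirit of the Wikipedia example:
apples = 4
print(f"I have {apples} apples")   # => I have 4 apples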

Pushing the analogy further, the add-inline macro returns the AST (+ a b) where a and b are replaced by their values: the fragments of AST provided as arguments of the macro when it is called.

This process works for whatever value the arguments of the macro might have. You can find below some examples with more complex arguments than simple integers:

Note: Please make sure you understand these examples before moving on, as we will use these mechanisms quite extensively in the future.

 

The need for tooling

One thing that clearly appears in the previous examples is that the process of macro-expansion is rather mechanical. The call to (add-inline (+ 1 2) x) will be expanded into (clojure.core/+ (+ 1 2) x). Whether or not the variable x exists does not matter: the macro will be expanded all the same.

If the compiler then figures out that the variable x does not exist, the code will not compile. But the compilation error will likely be obscure: the code that does not compile does not necessarily look like the original source code. For all we know, it may be completely different.

In addition to this, macros might also create invalid pieces of code. The macro might be buggy, and the resulting code might not compile. This is why macroexpand-all is such an important tool. We have the exact same problem in C++, when template meta-programming starts to trigger weird error messages that do not look like the source code we have written.

Clojure macros, C++ templates, and meta-programming in general are tricky. Because it happens at compile time, and because it affects our source code, meta-programming is especially hard to prove correct or to debug. If a language wants to embrace meta-programming to perform complex tasks at compile time, it needs powerful tools to debug and test meta-functions.

This is where I think C++ needs something more. Constexpr functions do help (see next section), and concepts will definitely help too, but it is still not enough.

 

Beloved C++ Constexpr


The section about compile time computation might have made you think about C++ constexpr functions. This section will briefly talk about constexpr and its importance. A more detailed discussion on constexpr, featuring more involved examples, will follow in the next post.

 

The world before constexpr

Before C++11, we could already write C++ meta-functions computing values at compile time. For instance, to continue with the simple example we introduced in Clojure, we could compute a compile time addition as follows:
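// A sketch using template meta-programming:
template <int a, int b>
struct add_m
{
    static const int value = a + b;   // computed at compile time
};

int const three = add_m<1, 2>::value;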

We will not go into the details of the different ways to write this meta-function. If you want to know more, we recommend having a look at this recent article, which goes into more detail.

 

The world after constexpr

The addition of constexpr in C++ did not, strictly speaking, increase the expressive power of meta-programming in C++. But it sure made it more convenient, as judged by the ease with which we can define compile time computations:
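constexpr int add(int a, int b)
{
    return a + b;
}

static_assert(add(1, 2) == 3, "computed at compile time");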

One important note is that a constexpr function can be used to perform both run-time and compile time computations, depending on the call site:
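// A sketch, reusing the add function above:
int main(int argc, char**)
{
    constexpr int a = add(1, 2);   // compile time: the initializer must be a constant expression
    int const b = add(argc, 2);    // run-time: argc is not a constant expression
    return a + b;
}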

 

LISP-like accessibility

As we saw in our previous example, there is not much of a syntax difference between standard functions and macros in Clojure. This characteristic helps tremendously in popularizing the use of meta-programming for Clojurians.

With constexpr, a part of C++ meta-programming has access to the same syntax as the one used for standard functions. In this restricted part of C++ meta-programming, there is no need to learn a second language inside the language.

This is a huge boost to the adoption of meta-programming in C++, not by an increase in expressive power, but by the removal of some entry barriers, which effectively increases accessibility.

 

Facilitated maintenance and testing

The constexpr keyword allows us to reuse some of our run-time functions as compile time meta-functions. This is a bit different from, but close to, the ability of Clojure to reuse and call any function inside meta-functions.

As for Clojure, this helps with testing our constexpr meta-functions. In C++, these functions are even easier to test than in Clojure, since they cannot have any side effects. The flip side is that constexpr functions have (as of this writing) a more limited reach than macros (see next post).

This also helps maintenance by removing the burden of defining the same logic twice, once as a function and once as a meta-function, for the cases where we need both. It is always nice not to repeat oneself.

 

Conclusion and what’s next


In today’s post, we wrote our first Clojure macros. We discovered how they work, showed how they can be used to inline code, and played with them to shift simple computations to compile time.

We also talked about the importance of having good debugging and testing tools when doing meta-programming. In this context, we saw how constexpr helped C++ meta-programming adoption, by allowing a similar syntax for runtime and compile time computations and by facilitating tests on meta-functions.

In the next post, we will go deeper into the world of meta-functions to perform more complex compile time computations. We will discuss macros and constexpr, compare them, and describe some of the advantages of both approaches.


You can follow me on Twitter.