In my last post I talked about how templates could be extended to manipulate and transform bits of the C++ syntax tree. Parsed C++ syntax objects, like block statements and parameter lists, are just the kind of thing you need when generating code.

But there are other ways to approach the problem. You can generate code as text, spitting out source code files to be compiled. Or you can generate code with the preprocessor.

With the preprocessor you don’t deal with syntax trees, of course, since the preprocessor doesn’t know how to parse C++. It only knows how to tokenize the text stream, and so it deals in tokens. Tokens are syntactic primitives, less abstract than statements or expressions, but you can do a lot with just tokens. Sometimes you want to work at that level anyway, like when you want to generate a piece of code with unmatched curly brackets, maybe to open a namespace.

Of course the preprocessor is weak: it has no recursion, looping, or data structures. But you can imagine a more sophisticated language, embedded in C++ source the way the preprocessor is, that inserts generated C++ token streams. You’d want a language with recursion, data structures, and data types that can be manipulated and transformed.

The Erlang language gives us a hint of what this might look like. (We could also look to Prolog as a model, although it may be overkill, at least for what I have in mind.)

Erlang appears well suited to code generation. It’s declarative and functional, like the C++ template language. Functional means a function’s return value depends only on its arguments: pass in the same arguments and you always get the same result back. In other words, my_struct<43,char*> will always evaluate to the same type as long as the arguments 43 and char* don’t change.

Erlang’s pattern matching and native list structure are good for simple parsing and list processing, which are also useful for code generation. Erlang-style pattern matching and guards are particularly useful if you’re working without strong typing.

Here’s what an Erlang-like preprocessing language might look like embedded in C++. Here we define a simple function, chop_off_head/1, which returns a list sans the first element. (The /1 part of the name means it takes one argument.)

// Warning - this is all fantasy/speculative C++
// It will NOT compile.
// How would we embed a language like Erlang in
// C++ source code?
// 1. Surround the meta code in a special wrapper,
// like you do with asm { ... }
erlang {
    -module( cpp ).
    -export( [chop_off_head_a/1] ).
    chop_off_head_a( [] ) -> [];
    chop_off_head_a( [A | Rest] ) -> Rest.
}
// 2. Make it clear that we're defining parts of a module
// in the special wrapper, and we're not in "shell" mode.
erlang:module( cpp ) {
    -export( [chop_off_head_b/1] ).
    chop_off_head_b( [] ) -> [];
    chop_off_head_b( [A | Rest] ) -> Rest.
}
// 3. Keep the Erlang code in a separate file. This way
// we can also load/test the Erlang code in a native
// environment. Allow a "-module(cpp)." line in the
// included file.
erlang:module( cpp ) {
    #include "cpp.erl"
}

We have to distinguish module definitions from commands. Modules only define Erlang functions and do not generate any C++ code until the functions are called.

How do we feed C++ tokens to the Erlang functions, and how are the return tokens injected back into the compiled program? I’m thinking the code might look something like this.

// Warning - this is all fantasy/speculative C++
// It will NOT compile.
// What would a call to an embedded Erlang function
// look like?
template< typename ... TYPEs >
struct
mystruct_a
{
    // We could set an erlang variable to build a list
    // and then pass that to the function. The last function
    // returns a list of cpp tokens to be taken up by the
    // compiler.
    erlang {
        X =
            -cpp_tokens_start
                TYPEs ...
            -cpp_tokens_end.
        cpp:generate_member_vars_a( X ).
    }
};

template< typename ... TYPEs >
struct
mystruct_b
{
    // Or we could unpack TYPEs into a comma-separated list.
    erlang:cpp:generate_member_vars_b( [ TYPEs ... ] ).
};

In these examples I assume TYPEs… expands to a comma-separated list of types, which will need some standard representation in the embedded Erlang, maybe as atoms or Erlang tuples.

In the mystruct_a example, X is set to a list of all the tokens of the expanded TYPEs…, including the comma separators. Since a type can consist of many tokens (think int const * const &), the Erlang function will need the comma tokens. Unless we get the compiler to parse the tokens for us and set X to an Erlang list of type objects.

I use -cpp_tokens_start and -cpp_tokens_end (which are NOT standard Erlang) to escape out of Erlang and tell the compiler to tokenize everything between them as C++, bundle all those tokens up into a list, and assign them to X. If we used delimiters like { and } (curly brackets) we’d have a harder time specifying an unmatched } in our escaped C++ token stream.

Maybe the C++ parser could compose the C++ tokens between -cpp_tokens_start and -cpp_tokens_end into more abstract objects, like type objects.

The last function in the wrapper, cpp:generate_member_vars_a(X), presumably returns a list of C++ tokens that the compiler takes and treats as C++ code.

In mystruct_b I was thinking [TYPEs…] would expand into something like [int,float,char] which is an Erlang list. In this case the commas would not be tokenized since they are part of the Erlang syntax.

As you can see, expressing ideas in Erlang is a lot easier than expressing them as C++ template meta-functions, which is the reason I started thinking about using it for code generation. Erlang is a clean and easy-to-understand language that could work well at the token level or higher. But you can also see how the Erlang and C++ syntax clash, particularly when feeding C++ tokens to Erlang functions, although my proposal above could probably be improved.

I firmly believe that code-generating languages and development environments will become much more important in the future, and the C++ language is clearly moving in that direction. One tool that could help is a revamped C++ preprocessor, maybe based on a language like Erlang or Prolog. Or maybe it could even look like JavaScript, which is imperative and would be good at treating the code as a character stream. Or maybe C++ templates really are the best solution, especially if they could be extended to work with smaller fragments of the parse tree.

In my last few posts about shared_ptr<T>s I’ve been using a struct called private_deleter. When you first attach a target object to a shared_ptr<T> you can also specify a deleter, which is a functor with an operator() that takes a single argument, a pointer to the target object, and deletes it.

You use the deleter, of course, to control target-object deletion, which is symmetric, since you also control target creation. Without the deleter you could only create target objects with operator new, to match the operator delete assumed by shared_ptr.

Since the deleter becomes part of the intermediate object (aka the “control block”), you can also use it as a way to add variables and functions without intruding on the target class. For example, you can use it to store a list of notifier functors to be triggered when the target is deleted. Or you can use it as a chunk of memory in which you construct the target object. But in this post we’re just going to use the deleter to delete.

The Boost smart pointers provide a shared pointer just for arrays, called shared_array<T>. It is similar to shared_ptr<T> except it uses operator delete[] instead of operator delete to delete the target. But shared_array<T> is not part of TR1, and it is not necessary since shared_ptr<T> supports custom deleters.

The following shows how to use a deleter so shared_ptr<T> correctly deletes arrays.

Since shared_array<T> is no longer necessary to support array deletion, I suspect it will never become part of the standard library. operator [] is not enough justification for its existence. It will languish in the Boost library, a barely supported dead-end experiment, and will never be integrated with shared_ptr and weak_ptr.

As a final note, another way to work with arrays and shared_ptrs is to use an array wrapper like the TR1 array class. Since you can allocate these objects with operator new you can rely on the default shared_ptr deleter.

We’ll find it much harder to specialize make_shared<T>(..) and allocate_shared<T,A>( A const &, ..) to use these class-supplied factories that take parameters, because you cannot partially specialize a function template the way you can a class template. Forgetting that, you might try:
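Something along these lines, with factory_type standing in for the class with the factory and the signature guessed for illustration:

```cpp
// Warning - illustrative guess at the attempt; it will NOT compile.
// Fixing the first template parameter while leaving the allocator
// generic is a partial specialization of a function template.
template< typename ALLOC_T >
shared_ptr< factory_type >
allocate_shared< factory_type, ALLOC_T >( ALLOC_T const & alloc_inst );
// error: function templates cannot be partially specialized
```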

And the compiler will choke and complain. You could fully specialize the function template, since only partial specialization is illegal, but that means you have to know the type of the allocator beforehand, and allocators tend to come in many types. Another approach would be to define a separate overloaded (not specialized) function template called allocate_new, but T would have to be the first template parameter, and the first thing you want to do is fix that as factory_type. So that doesn’t improve things in this case.

We could make this work if the first template parameter were used in the function’s argument list. If instead of allocate_shared< factory_type >( alloc_inst ) the idiom were allocate_shared_2( factory_type::tag( ), alloc_inst ), then we could define another function template parameterized over ALLOC_T and we would not have to specialize. But in this case that’s not a very attractive option.

Another, probably better, option is to define allocate_shared<T,A>(..) as a simple call to something like shared_ptr_maker<T>::allocate(..). Then we could specialize the class template shared_ptr_maker<T> instead of the function.

And finally, I’m not sure, but these examples may be an abuse of make_shared and allocate_shared. These were originally proposed as a way to hide use of the new operator since shared_ptr also hides delete. But looking at Peter Dimov’s code suggests the purpose of these functions may now be to implement an allocation strategy, where the intermediate object (aka the “control block”) and the target object are allocated as one chunk of memory instead of as two. If that’s so, it’s unlikely you’d provide specializations or overloads for these functions.