Plugin Architecture

This article will guide you through the design of a simple yet powerful plugin
architecture. It requires some experience with C++ and dynamic libraries (.dll,
.so), as well as an understanding of fundamental OOP concepts such as interfaces
and factories. But before we start, let’s first see what advantages we can gain
from plugins and why we should use them:

Increased clarity and uniformity of code – Because plugins encapsulate
3rd-party libraries, as well as code written by other team members, behind
clearly defined interfaces, you get a very consistent interface to just about
everything. Your code also won’t be littered with conversion routines
(like ErrorCodeToException()) or library-specific customizations.

Improved modularization of projects – Your code is cleanly separated into
distinct modules, keeping the working set of files in a project low. This
decoupling process creates components which can be reused more easily since
they’re not entangled with project-specific peculiarities.

Shorter compile times – The compiler isn’t forced to parse the headers of
external libraries just to interpret the declarations of classes which internally
use these libraries, because the implementation is hidden in private source
files. This can drastically reduce compile times (did you know windows.h pulls
in about 500 KB of code?).

Replacing or adding components – If you release patches to the end user,
it’s often sufficient to update single plugins instead of replacing each and every
binary of the installation. A new renderer or some new types of units for an add-on
to your game (including mods made by end-users) could easily be added by just
providing a set of plugins to your game or engine.

Usage of GPL code in closed source projects – As you probably know, you are
required to publish your source code if you use GPLed code in it. If you, however,
encapsulate this GPL component in a plugin, you’re only required to release the
plugin’s source (note: this requires you to run the plugin as a separate process,
see the GPL FAQ).

As a side note: personally, I don’t use plugins because they’re cool, nor
because I regularly have to send patches to my end users, and not even to force
myself to write modular code. I’m using them because they simply seem to be the
best way of organizing large projects. Dependencies are greatly reduced and you
can easily work on the replacement of specific systems instead of stalling your
entire project or team until the code base has been fully reworked.

Introduction

Now let me explain what a plugin system is and how it works. In a normal
application, if you need code to perform a specific task, your options are to
either write it yourself or look for an existing library which suits your needs.
But what if your needs change? You either need to rewrite your code or use a
different library, two choices both of which may lead to a rewrite of many
other parts of your codebase that depend on this code or external library.

Now we get to know a third option: in a plugin system, any component of your
project which you do not wish to nail down to a specific implementation (like a
renderer which could be based on OpenGL or on Direct3D) will be extracted from
your main code base and placed in a dynamic library in a special way.

This special way involves the creation of interfaces in the main code base to decouple it
from the dynamic library. The library (plugin) will then provide the actual
implementations of the interfaces defined by the main code base. What sets plugins apart
from just normal dynamic libraries is how they are loaded: The application doesn’t
directly link to these libraries, but, for example, searches some directory and loads
all plugins it finds there. The plugins then somehow connect themselves to the
application in a well defined way common to all plugins.

A common mistake

Most C++ programmers, when confronted with the task to design a plugin system, start by
integrating a function like this one into each dynamic library that is to act as a plugin:

PluginBase *createInstance(const char *name);

Then they decide on some classes whose implementations should be provided through plugins
and voila… The engine queries one loaded plugin after another with the desired object’s
name until one of the plugins returns it. A classical chain of responsibility for the
design pattern people.

A few more clever programmers will instead come up with a design that lets each
plugin register itself with the engine, possibly replacing an engine-internal
default implementation with a custom one.

Though these architectures may work for you, I would personally classify both
approaches as major design errors, provoking conflicts and crashes. Why?

A major problem of the first design is the fact that a dynamic_cast<>
or reinterpret_cast<> is required to make use of the object created
by the plugin’s factory method. The artificial derivation of plugin classes from
a common base class (here: PluginBase) often serves to provide a false sense
of safety. Actually, it is pointless: the plugin could silently, in response to
a request for an instance of an InputDevice, deliver an
OutputDevice.
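A minimal sketch of that hazard (all class names here are made up for illustration): the factory signature cannot prevent a plugin from returning the wrong type, so the engine must cast and hope.

```cpp
#include <cassert>

// Hypothetical classes mirroring the naive design described above.
class PluginBase { public: virtual ~PluginBase() {} };
class InputDevice  : public PluginBase { public: virtual int poll() { return 0; } };
class OutputDevice : public PluginBase { public: virtual void print() {} };

// A buggy plugin factory: asked for an "InputDevice", it returns an
// OutputDevice. Nothing in the signature prevents this.
PluginBase *createInstance(const char * /*name*/) {
    return new OutputDevice();
}

// The engine has no choice but to cast and hope for the best.
// (Note that the wrongly created object is even leaked here.)
InputDevice *acquireInputDevice() {
    return dynamic_cast<InputDevice *>(createInstance("InputDevice"));
}
```

With dynamic_cast the mismatch at least shows up as a null pointer at runtime; with reinterpret_cast it would go entirely unnoticed until the object is used.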

With this architecture, it also becomes a surprisingly complex task to support
multiple implementations of the same plugin interface. If plugins registered
themselves under different names (eg. Direct3DRenderer and OpenGLRenderer),
the engine wouldn’t know which implementations are available for selection by
the end user. And if this list is then hard-coded into the application, the main
purpose of the plugin architecture is entirely defeated.

If such a plugin system is implemented within a framework or library (like a game
engine), the chief architect will almost certainly try to also expose the
functionality to the application, so that it can “benefit” from it as well. Not
only does this carry over all the problems of such a plugin system into the
application, it also forces any plugin writer to obtain the engine’s headers in
addition to the application’s own. That already makes three potential candidates
for version conflicts.

The plugin system I’m going to discuss in this article avoids all these
problems and is 100% type-safe, which gets the compiler back on your side
again. It’s always a good thing to have the compiler help you instead of battle
you, don’t you think? 😉

Individual Factories

The interface, through which an engine performs its graphics output for example, is quite
clearly defined by the engine, and not by the plugin. If you think about it, this is the
case for any interface: The engine defines an interface through which it instructs the
plugins what to do and the plugins will implement it.

Now what we’re going to do is let the plugins register their implementations of
our engine’s interfaces with the engine. Of course, it would be foolish if a
plugin directly created instances of its implementation classes and registered
those with the engine. We would end up with all possible implementations
existing at the same time, hogging memory and CPU. The solution lies in factory
classes: classes whose sole purpose is to create instances of other classes
when asked to.

Well, if the engine defines the interface through which it will communicate to plugins,
it can just as well define the interface for these factory classes:
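A plausible sketch of such a factory interface (the class names are assumptions, and std::unique_ptr stands in for the std::auto_ptr an implementation of that era would have used):

```cpp
#include <memory>

// The engine-defined interface the plugins implement:
class Renderer {
public:
    virtual ~Renderer() {}
    virtual const char *name() const = 0;
};

// The engine also defines the factory interface for creating renderers:
class RendererFactory {
public:
    virtual ~RendererFactory() {}
    virtual std::unique_ptr<Renderer> createRenderer() = 0;
};

// A template eliminates the boilerplate of writing one factory class per
// implementation by hand:
template<typename Implementation>
class GenericRendererFactory : public RendererFactory {
public:
    std::unique_ptr<Renderer> createRenderer() override {
        return std::unique_ptr<Renderer>(new Implementation());
    }
};

// A trivial concrete implementation, as a plugin might provide it:
class NullRenderer : public Renderer {
public:
    const char *name() const override { return "null"; }
};
```

A plugin would then register, say, a GenericRendererFactory&lt;NullRenderer&gt; with the engine; no cast is needed because createRenderer() already returns the interface type.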

If you compare this to the example in the previous chapter, you’ll notice that the
unsafe cast is gone. It isn’t that much work and, using the template approach for our
factories, there isn’t even any redundant code involved to create standard factories,
which you will be using most of the time.

Option 1: PluginManager

The next question you might ask is how the plugins will register their factories
in our engine, and how the engine can actually make use of the registered
plugins. You’ve got free choice here. One possible solution which integrates
nicely with existing code is to write some kind of plugin manager. This would
give us good control over which components plugins are allowed to extend.

When the engine needs a renderer, it could look in the PluginManager for
renderers that have been registered by plugins. Then it would ask the
PluginManager to create the desired renderer. The PluginManager
in turn would then use the factory class to create the renderer without even knowing
the implementation details.

A plugin would then consist of a dynamic library that exports a function which can be
called by the PluginManager to make the plugin register itself:

void registerPlugin(PluginManager &PM);

The PluginManager can simply try to load all .dll/.so files in a specific
directory, checking whether they export a function named registerPlugin().
Alternatively, it could use an XML list where the technically aware user can
specify which plugins to load.

You can design the PluginManager in a way that it just stores the
implementation that was registered last for each class. You could as well create
a fancy PluginManager which keeps a list of possible implementations and
their descriptions, versions and more for each plugin, then let the user choose
whether to use the OpenGLRenderer or the Direct3DRenderer (or
any other renderer that becomes available when a new renderer plugin is
installed…)
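One possible shape for such a PluginManager, as a simplified sketch: the class names and the choice of a name-to-factory map are assumptions, and the dynamic-library loading itself (LoadLibrary()/GetProcAddress() on Windows, dlopen()/dlsym() elsewhere) is left out.

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

class Renderer { public: virtual ~Renderer() {} };

class RendererFactory {
public:
    virtual ~RendererFactory() {}
    virtual std::unique_ptr<Renderer> createRenderer() = 0;
};

// Plugins register factories under a name; the engine can enumerate the
// names and ask for an instance of whichever implementation the user picked.
class PluginManager {
public:
    void registerRenderer(const std::string &name,
                          std::unique_ptr<RendererFactory> factory) {
        renderers[name] = std::move(factory);
    }

    std::vector<std::string> availableRenderers() const {
        std::vector<std::string> names;
        for (const auto &entry : renderers) { names.push_back(entry.first); }
        return names;
    }

    std::unique_ptr<Renderer> createRenderer(const std::string &name) {
        auto it = renderers.find(name);
        if (it == renderers.end()) { return nullptr; }
        return it->second->createRenderer();
    }

private:
    std::map<std::string, std::unique_ptr<RendererFactory>> renderers;
};

// What a plugin's exported registerPlugin() might do:
class OpenGLRenderer : public Renderer {};
class OpenGLRendererFactory : public RendererFactory {
public:
    std::unique_ptr<Renderer> createRenderer() override {
        return std::unique_ptr<Renderer>(new OpenGLRenderer());
    }
};

void registerPlugin(PluginManager &pm) {
    pm.registerRenderer("OpenGLRenderer",
        std::unique_ptr<RendererFactory>(new OpenGLRendererFactory()));
}
```

The PluginManager never sees the implementation classes themselves, only the factory interface, so it can create renderers without knowing any implementation details.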

Option 2: Fully Integrated

An alternative to this PluginManager would be to design your entire code
base from the ground up to support plugins. The best way of doing this, in my
humble opinion, would be to break the engine down into multiple subsystems and
form a system core which manages those subsystems. This could look like this:

class Kernel {
    StorageServer &getStorageServer() const;
    GraphicsServer &getGraphicsServer() const;
};

class StorageServer {
    // Used by plugins to register new archive readers
    void addArchiveReader(std::auto_ptr<ArchiveReader> AL);

    // Queries all archive readers registered by plugins
    // until one is found which can open the archive (CHOR pattern)
    std::auto_ptr<Archive> openArchive(const std::string &sFilename);
};

class GraphicsServer {
    // Used by plugins to add GraphicsDrivers
    void addGraphicsDriver(std::auto_ptr<GraphicsDriver> AF);

    // Get number of available graphics drivers
    size_t getDriverCount() const;

    // Retrieve a graphics driver
    GraphicsDriver &getDriver(size_t Index);
};

Here you see two examples of subsystems (whose names are suffixed with
Server, just because it sounds so nice). The first one internally manages
a list of available archive readers. Each time the user wants to open an
archive, the readers are queried one by one until an implementation is found
that can open it (or none is found, in which case an error could be raised).

The other subsystem keeps a list of GraphicsDrivers that serve as
factories for Renderers in our example. Again, there might be a
Direct3DGraphicsDriver and an OpenGLGraphicsDriver in its
list, which will create a Direct3DRenderer or an OpenGLRenderer,
respectively. Just as before, the engine can use this list to let the user
choose between the available drivers. New drivers can be added by simply
installing a new plugin.
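Under the same assumptions as before (hypothetical names, std::unique_ptr in place of the era's std::auto_ptr), a minimal version of this subsystem and the driver-selection flow could look like this:

```cpp
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

class Renderer { public: virtual ~Renderer() {} };

// A driver acts as a named factory for renderers; each plugin adds one.
class GraphicsDriver {
public:
    virtual ~GraphicsDriver() {}
    virtual std::string getName() const = 0;
    virtual std::unique_ptr<Renderer> createRenderer() = 0;
};

// Minimal GraphicsServer keeping the list of registered drivers.
class GraphicsServer {
public:
    void addGraphicsDriver(std::unique_ptr<GraphicsDriver> driver) {
        drivers.push_back(std::move(driver));
    }
    size_t getDriverCount() const { return drivers.size(); }
    GraphicsDriver &getDriver(size_t index) { return *drivers.at(index); }
private:
    std::vector<std::unique_ptr<GraphicsDriver>> drivers;
};

// The engine enumerates the drivers so the user can pick one by name:
std::unique_ptr<Renderer> createRendererByName(GraphicsServer &server,
                                               const std::string &wanted) {
    for (size_t i = 0; i < server.getDriverCount(); ++i) {
        if (server.getDriver(i).getName() == wanted) {
            return server.getDriver(i).createRenderer();
        }
    }
    throw std::runtime_error("no such graphics driver: " + wanted);
}

// A trivial driver a plugin could register:
class NullGraphicsDriver : public GraphicsDriver {
public:
    std::string getName() const override { return "NullDriver"; }
    std::unique_ptr<Renderer> createRenderer() override {
        return std::unique_ptr<Renderer>(new Renderer());
    }
};
```

Note that no renderer exists until the engine explicitly asks a driver for one; registering a driver is cheap.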

Versioning

Note that neither of the previous options requires you to place your
implementations in plugins. If your engine supplies a default implementation of
an ArchiveReader for its own custom pack file format, you can just as well put
it into the engine itself, registering it automatically when the
StorageServer starts up. Plugins can still be added to support loading .zip,
.rar and other formats.

Now, a single problem introduced with plugins remains: if you’re not careful, it
can happen that mismatching (eg. outdated) plugin versions are loaded into your
engine. A few changes to subsystem classes or to the PluginManager are
sufficient to modify the memory layout of a class and make the plugins crash
horribly wherever they try to register themselves. An annoying issue that is
not easily spotted in a debugger.

Well, luckily, it isn’t hard to recognize outdated or wrong plugin versions. The most
reliable way happens to be a preprocessor constant which you put in your core system.
Any plugin then obtains a function which returns this constant to the engine:
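A minimal sketch of this scheme (the macro and function names are my assumptions):

```cpp
// In an engine header that every plugin includes (value bumped on every
// interface-breaking change):
#define MYENGINE_VERSION 3

// Compiled into each plugin: the value of MYENGINE_VERSION at the plugin's
// compile time is baked into the plugin binary. If the engine is updated
// later, an old plugin binary still reports the old number.
extern "C" int getExpectedEngineVersion() {
    return MYENGINE_VERSION;
}

// Engine side, after resolving this exported symbol from a loaded plugin:
bool isPluginCompatible(int reportedVersion) {
    return reportedVersion == MYENGINE_VERSION;
}
```

The extern "C" linkage keeps the exported symbol name free of C++ name mangling, so the engine can look it up by plain name.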

What happens now is that this constant is compiled into the plugin, thus, when the constant
is changed in the engine, any plugin that is not recompiled will still report the previous
value in its getExpectedEngineVersion() method and your engine can reject it.
To make the plugin work again, you have to recompile it. And thanks to our
type-safe approach, the compiler will then point out any incompatibilities of
the plugin for you, like new interface methods the plugin doesn’t implement yet.

The biggest risk is, of course, that you forget to update the version constant.
But then, you’ve got an automated version management tool, don’t you?

Well, that’s it. A type-safe, flexible and easy-to-use plugin architecture which can be
added to existing code bases just as well as it can be incorporated into new projects.
Have fun!

In the sources though, I don’t understand why have so many layers:
GraphicsServer
->GraphicsDriver
—>Renderer

Why not:
GraphicsServer
->Renderer

What does the extra layer provide?

Also, in case you are interested, I added CMakeLists.txt files to the project so that it can be compiled on any platform (with different definitions for the export macros and replacing the Windows APIs with Mac/Linux compatible dlopen, dlsym, dlclose). I can send you the modified files.

The “Renderer” in my example owns the rendering window and sets up a Direct3D device or OpenGL context in it. Thus, if loading the plugin created the Renderer right away, you’d have several game windows pop up the moment the plugins are loaded.

The Server, Driver, Renderer hierarchy is intended to be used like this:
1. Game starts, GraphicsServer, StorageServer and other servers are created.
2. Game loads plugins. Plugins add their drivers to the servers
3. Game picks the drivers it wants (eg. by finding the drivers the user selected last time via the .cfg file)
4. Game creates one renderer, which opens a window and initializes the rendering API with it.
5. Game uses the renderer to draw stuff

It would certainly be possible to combine the Driver and Renderer into the same class, but then you’d have a class with methods like CreateTexture(), CreateRenderTarget(), etc. where these methods fail unless the class is in a specific state (render window opened).

For the same reason, you won’t find any additional Init()/Shutdown() methods in my classes — I try to design my stuff so that if you’ve got an instance of a class, it’s in a valid state. If eg. window creation fails, the Driver won’t return a Renderer in the first place but throw an exception.

Also, all the methods would have to be provided with some token that identifies which render window you want to create the resource for. That results in a more C-like API again.

Having rendering API initialization / window creation and all the stuff in the same interface would also give the drawing code the ability to open new game windows at will. That’s another indication that merging both is against the single responsibility principle.
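The valid-state rule described above can be sketched like this (the names are hypothetical and window creation is simulated by a flag):

```cpp
#include <memory>
#include <stdexcept>

class Renderer {
public:
    // The constructor acquires the window/context; if that fails, the
    // constructor throws and no half-initialized Renderer ever exists.
    explicit Renderer(bool windowAvailable) {
        if (!windowAvailable) {
            throw std::runtime_error("could not create render window");
        }
    }
};

// The driver either hands back a fully usable Renderer or throws; there is
// no Init()/Shutdown() state the caller has to query before using it.
class GraphicsDriver {
public:
    std::unique_ptr<Renderer> createRenderer(bool windowAvailable) {
        return std::unique_ptr<Renderer>(new Renderer(windowAvailable));
    }
};
```

Any code holding a Renderer can therefore assume the rendering API is already set up, with no methods that fail depending on the object's state.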

1. Would it be possible for you to include the Linux code into the example?

2. I am not sure if I am using the example project correctly. I performed the following steps:
2.1 Compiled myengine with MYENGINE_EXPORTS as defined symbol and created a myengine.dll
2.2 Compiled the application with MYENGINE_EXPORTS as defined symbol and created an executable. To perform this step I needed to specify myengine.dll to the linker.
2.3 Compiled the opengl_plugin with MYENGINE_EXPORTS and OPENGL_PLUGIN_EXPORTS as defined symbols and created opengl_plugin.dll. To perform this step I needed to specify myengine.dll to the linker.
2.4 Copy all three files into one directory and execute. –> works

As you can see in all three steps the defined symbols led to __declspec(dllexport). Is that right? If yes, why do I need __declspec(dllimport)?

3. If I have done step 2 correct, can I have my plugins in a different directory from the application and the myengine.dll?

I have now used the Linux port Felipe offered and refactored it a bit:
– Introduced a reusable SharedLibrary class to abstract the Win/Linux specific stuff
– Added Linux building via Boost.Build instead of CMake (because… CMake, well, I’ve grown to really dislike it)
– Engine project now is a shared library again (this was the intention of the original code — makes no sense to statically link the entire engine into each plugin).

@Konstantin: The XYZ_EXPORTS constants are intended to be set only when compiling the associated project. So when compiling MyEngine, only MYENGINE_EXPORTS is set, nothing else. When compiling ZipPlugin, only ZIPPLUGIN_EXPORTS is set and so on.

It probably worked anyway because I defined nearly everything in the headers. Your MyApplication project and all the plugins probably ended up containing and exporting all of the engine classes themselves instead of using the classes contained in MyEngine.dll/.so 🙂

With the updated code I just added, this is no longer necessary. The constants are set through a #define that is contained in all .cpp files. I also added GCC versions of the macros!

It should be possible to put the plugins into a different directory. While they all depend on MyEngine.dll/.so (which would then be one directory level up), that library is guaranteed to already be loaded into memory when the plugins are loaded.

I just modified the Linux (Gentoo, x64) example a bit and it could load and use the plugins from a subdirectory just fine!

Yes, each directory below the root is one DLL or EXE in case of MyApplication.

You seem to be mixing GCC and GetProcAddress(), so I take it you’re compiling on Windows with MinGW? Until now, I only tested with GCC on Linux and MSVC on Windows.

GetProcAddress() does indeed return a FARPROC, but Visual C++ allows the conversion to void * with /W4 (highest warning level). I applied your suggestion and it compiles cleanly with both MinGW 4.7.0 and Visual C++ 2012.

If you have MinGW installed already, you can do a very simple build using Boost.Build. Download Boost, extract, run bootstrap.bat which doesn’t require any input and will compile Boost.Build in a few seconds. Then just set BOOST_ROOT, navigate to the plugin example and run b2.exe to build:

dllimport is never “necessary”, but it is good practice. If I remember correctly, it provides a hint to the compiler to reserve the right instruction space for the function call, which removes a level of indirection when calling the function.

Allowing plugins to unload could be done by simply adding an exported unregisterPlugin() function. The code already provides a SharedLibrary::Unload() method and the Plugin class will use it if you write a matching Kernel::unloadPlugin() method that removes the plugin from the loadedPlugins list.

I decided not to provide this mechanism in my reference implementation because unloading is pretty risky stuff and usually not needed.

By risky I mean that, at the time the plugin is unloaded, absolutely everything from the plugin has to be removed – if just a single helper object or node remains attached to the engine somehow, or if just one thread started by the plugin hasn’t terminated yet, the whole application will crash and burn 🙂

Thank you for the article, it is really inspiring. I have one question about how memory is handled between the engine and the plugins. As far as I understood the code, the plugins allocate some memory (for example for a new GraphicsDriver) that is registered in the kernel through one of the servers. However, deallocation of this memory is done by the allocator used by the kernel. This might have implications if the kernel and the plugins do not use the same allocator. Did you make any assumption on this like having control over how the kernel and the plugins are compiled?
This could be an issue if you let 3rd parties freely (i.e. without any control on how they build the plugins) provide plugins for your kernel.

@Sergio: That is right, the plugin system as it is presented here is mostly suited for plugins in the sense of original vendor shipped plugins (eg. to enforce component boundaries during development).

A modern implementation (C++11) would use std::shared_ptr<> or std::unique_ptr<>, alleviating the memory issue (std::shared_ptr stores a custom deleter which is set during creation of the smart pointer, thus pointing to the creating runtime’s heap; std::unique_ptr can do the same if it is declared with a custom deleter type).
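The shared_ptr half of this can be sketched as follows; the flag is only there to let us observe that the deleter bound at creation time is the one that actually runs:

```cpp
#include <memory>

// Simulates a type allocated inside a plugin.
struct GraphicsDriver {
    explicit GraphicsDriver(bool *flag) : deletedFlag(flag) {}
    bool *deletedFlag;
};

// What the plugin side would do: allocate the object AND bind the deleter
// in the same place, so the matching 'delete' is compiled into the plugin
// and runs through the plugin's runtime, no matter where the last
// reference is dropped (eg. inside the engine).
std::shared_ptr<GraphicsDriver> makeDriver(bool *deletedFlag) {
    return std::shared_ptr<GraphicsDriver>(
        new GraphicsDriver(deletedFlag),
        [](GraphicsDriver *p) {   // deleter stored inside the shared_ptr
            *p->deletedFlag = true;
            delete p;
        });
}
```

Because the deleter is type-erased and travels with the control block, the engine never has to know which heap or runtime the object came from.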

For modding, C++ itself would be the biggest obstacle, I imagine (different name mangling, different standard library implementations, different vtable / object layout rules).

There are several solutions I could think of:

Write the whole engine in plain C

Completely avoid standard library types in public interfaces and provide custom std::string, std::vector<> and std::shared_ptr<> replacements, reducing the requirements for modders to just a compiler with a compatible object layout (eg. like COM or CORBA)

If the engine is being written in C++ and, for modding, a C transition layer has to be created anyway – why not put it into a plugin that loads mods? This allows control over what can be modded, allows compatibility crutches in future versions to remain in the modding plugin, and modding support can be added and removed as needed

Require every modder to use the same compiler + standard library per platform