What's this all about?

This page contains the many ramblings I've spread across varying internet sites over the last 5 years or so (with corrections and extensions). I've been building Reflection APIs since 2003 when I saw a need for one on the ill-fated Advent Shadow game for PSP, and would really love to condense everything I've learned into a coherent set of documents that others can learn from. Unfortunately the subject is huge when applied to games and I'm just too busy with work, training, life and other stuff to organise it all.

Some Source

I've implemented the "simple as pie" method documented below and you can find it here:


This contains everything up to the binary serialisation with versioning but leaves out a few things such as endian processing (necessary for network communication) and the object database (pointer serialisation). The work required to get this functional enough to ship a game is minimal (I used a similar solution when working on the Splinter Cell Conviction engine).

The PDB method for generating reflection information is quite literally broken and doesn't compile after a mammoth refactoring session that was never finished. Luckily, thanks to the magic of source control, I've zipped up a version of it all before the craziness took over:


You'll have to excuse the name of the hosting "game"! It was initially meant to be a conversion of PDB files to an SQLite DB but quickly diverged (hence the app name).


A reflection API is a very basic, powerful and important tool that every game studio should have at their disposal. This is what I'd consider a reasonably full-featured C++ Reflection implementation capable of:

In the past I've also reflected events and introduced the notion of "type extension" to remotely extend existing types in a similar spirit to a few AOP techniques. They were pretty complex problems and, to be honest, the type extension was over-engineered. These are some of the options open to you if you have a Reflection API:

There are too many ways to do reflection in C++:

  1. Using macros inline with your source.
  2. Using templates inline with your source.
  3. Doing either of the above non-intrusively.
  4. Using an IDL/DDL to generate cpp/h files.
  5. Merging the lot into a scripting solution and generating cpp/h files from there.
  6. Performing a post/pre-process of your cpp code.
  7. Extending 6 to a link-time post-process to catch function/method addresses.
  8. Mix 6 with some cpp generation to catch function/method addresses.

Some of them are just plain nasty and I'm continually surprised that Boost hasn't got its own wildly over-engineered version of 3, as it already has all the code necessary in Boost.Python. Each will always be limited in some way and even though some don't have an on-disk database (it's all generated at runtime), this won't be a problem for your tools as you can either link to the needed modules or just send your type database over the network.

In the past I've done this two ways in production code and another, more transparent/powerful way at home...

The Uber Solution

I can't remember any of the source code so this will be an approximation. It all starts with an IDL file, for example:

import "SomeFile";

enum Blah
{
    A, B, C, D = 0x54 * 7
};

struct POD
{
    string yup;
};

interface iStuff : iBase
{
    int data_member [transient];
    property int GeneratesGetSetPair;
    property char GenerateGet [readonly];
    method void Hello(Blah x) [call_on_postload];
    event Borked(int x, string blah);
} [attribute_for_interface("value")];

This would generate:

There were extensions that allowed you to drag in types from 3rd party libraries and specify their included headers (you can't really get away without this), and you could also use a C++ API to do your own registration. You would then inherit from the interface with a specific implementation and implement the methods. There were many reasons for this design which I could go on about, but it had shortcomings: it was a bit confusing, and the overuse of virtual methods should have been avoided. A more concrete DDL approach would have solved this (à la UnrealScript, but without the script bit, the horrible interdependencies, the change-one-file-rebuilds-all behaviour and yadda yadda - owww, it still hurts).

This powered:

In short, I believe the solution was over-engineered in a lot of ways (it was incredibly template-heavy and I also reflected the reflection API, *shudder*) but it worked pretty well and showed me that when done right, a DDL approach could be simple and good enough for any studio.

The Simple-as-pie Solution

The next one was very simple. It only reflected data types, enumerations, data members and a pre-defined set of fixed attributes in place of a generalised attribute system. It was purely code-based and non-intrusive, with all registration performed outside a type's implementation. Again, it was template based (no macros anywhere) and looked something like this:

Property props[] =
{
    Property("Blah", &MyClass::Blah).Flags(PF_NETWORKTRANSIENT).Load(func),
    Property("Yerp", &MyClass::Yerp)
};

When you start dealing with get/set properties and function/method reflection, your template-fu requires an order-of-magnitude increase in complexity if you want to do it cross-platform (you can use your platform's ABI to remove most of it if you're feeling heroic - Scott Bilas covers some interesting ideas on his homepage). In this case it was more than manageable.

This powered:

I really, really like this solution. It took 2 days to implement, is only a few hundred lines of code and was very stable, requiring minimal updates to its implementation as more code started using it. And it was very fast - it's absolutely perfect for small teams on a fixed budget that are prepared to just shut up about coding miracle cures and get on with their work.

PDB Method

The idea here is that all you need to do is specify what types and functions you want reflected, and the information required to do that is automatically deduced from the PDB file generated after compilation. It can be very powerful but I've not put any of this through production code; there may be big problems. Everything you need is reflected automatically and with minimal C++ code intrusion: data types, members, enums, functions, parameters, templates, etc. Some quick points to note about this approach:

When I started this system I was convinced it was simple. However, the PDB application is getting big and contains many nuances that rival the initial implementation of my old IDL compiler.

What would be nice is some help from the compiler authors at Microsoft and Sony. They have full access to this information and could output it in such a way that we could rely upon it to generate this data. This kind of support would yield quite measurable productivity gains, especially when applied to the next topic...

Dynamic C++ Code Reloading

This is not just edit-and-continue, but the ability to edit large sections of your C++ code base without having to shut down the game and reload the compiled executable. The repetitive cycle of compiling, linking, loading the executable, loading the level and reproducing the steps that put you in a position to test your feature can account for many hours of lost time, frustrated developers and gameplay that's worse than it should have been. Games and their engines have been putting in "hot-syncing" features for their assets or scripts for many years now and no production cycle is complete without them. Some games switch to scripting languages purely for the fast iteration and script reloading functionality - why can't we do this with the rest of our C++ code?

Assuming you have a good Reflection API, you can:

When your game detects a DLL change, do the following:

The serialisation can be done to RAM on PC, letting the OS take care of the paging. On consoles you can't really do that (with the exception of some debug kits), so you may have to implement a slower path. Also, from what I can gather from my limited exposure to SPUs, offsets beat pointers and reloadable SPU code is achievable without the above.

If you have a sensible include strategy that keeps build times low then you're looking at less than a second turnaround on any compile change. This gives you an incredible amount of flexibility to change most aspects of your DLL code at runtime. You need to be careful with the DLL: make sure you load a dynamically-named copy of it, rather than the output from the compiler, as the compiler can't write to the DLL while it's in use. Debugging is achieved simply by attaching/detaching to your process whenever necessary.

It can be demoralising, however, if you have implemented this with limited scope for one module only. As soon as you switch to working with other modules, you start to realise what you're missing.