OK, so I got extremely sick of the limitations of the old extended bit system and decided to try to get Runter's infinite bit system working. I've got the code in and it seems to be working, but now I'm stumped as to exactly how the definitions should work. The old system defined a letter to a number, and then defined a bit to that letter. So what I'm wondering is: could I just take those letters and define them as 1-500, so I don't have to completely change how the system works? Or do you define each new affect/act/imm etc. to its own number, something like "#define ACT_AGGRESSIVE 2"? (I believe 1 is what's used to mean "no bit vector".)

Also, with this system am I going to have to go through the entire codebase and change every instance of IS_SET/REMOVE_BIT/SET_BIT to the new style, so "if (is_set(&ch->act, 2))" instead of "if (IS_SET(ch->act, ACT_AGGRESSIVE))"? I'm just trying to figure out how much time I'll have to put into this, since there are quite a few instances of these calls throughout the code, and I want to be sure it's necessary before I put the time in. Thanks in advance for any help.
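One way to avoid touching every call site, sketched below under the assumption that the new system exposes functions like is_set/set_bit/remove_bit taking a pointer and a bit index (as in the post): keep the old IS_SET/SET_BIT/REMOVE_BIT macro names and have them forward to the new functions. The Bits type and function bodies here are toy stand-ins, not Runter's actual code.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy stand-in for the infinite-bit structure; the real system's
// type and function names may differ.
struct Bits { std::vector<uint32_t> words; };

static bool is_set(const Bits *b, int bit) {
    std::size_t w = bit / 32;
    return w < b->words.size() && ((b->words[w] >> (bit % 32)) & 1u);
}
static void set_bit(Bits *b, int bit) {
    std::size_t w = bit / 32;
    if (w >= b->words.size()) b->words.resize(w + 1, 0);
    b->words[w] |= 1u << (bit % 32);
}
static void remove_bit(Bits *b, int bit) {
    std::size_t w = bit / 32;
    if (w < b->words.size()) b->words[w] &= ~(1u << (bit % 32));
}

// Compatibility shims: old call sites like IS_SET(ch->act, ACT_AGGRESSIVE)
// keep compiling unchanged, because the macro takes the address itself.
#define IS_SET(flags, bit)     is_set(&(flags), (bit))
#define SET_BIT(flags, bit)    set_bit(&(flags), (bit))
#define REMOVE_BIT(flags, bit) remove_bit(&(flags), (bit))

#define ACT_AGGRESSIVE 2   /* an index now, not a power of two */
```

With shims like these, only the definitions of the flags (indexes instead of powers of two) and places that assign whole bitvectors at once need hand editing.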

It's not entirely equivalent – an enumeration will give you a little more type safety if you're using a decent compiler.

It can be dangerous if you don't know what you're doing, because you really don't want to insert entries into the middle of the enum when the numbers are significant, e.g. when they're stored as-is in file formats. An explicit list of numbers makes that mistake harder, because you'd have to go update a bunch of values by hand, which is annoying enough to stop most people from doing it.
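To illustrate the danger (the flag names here are just typical MUD examples, not anyone's actual table): inserting one entry mid-enum silently shifts every constant after it, so raw values saved under the old layout reload as the wrong flags.

```cpp
#include <cassert>

// Version 1 of the code; values are saved to player files as raw numbers.
enum ActV1 { ACT_SENTINEL_V1, ACT_AGGRESSIVE_V1, ACT_SCAVENGER_V1 };

// Version 2: someone inserts ACT_STAY_AREA in the middle of the enum.
// Every constant after it silently shifts up by one, so a file that
// saved ACT_AGGRESSIVE as 1 now reloads as ACT_STAY_AREA.
enum ActV2 { ACT_SENTINEL_V2, ACT_STAY_AREA_V2,
             ACT_AGGRESSIVE_V2, ACT_SCAVENGER_V2 };
```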

A full, proper conversion is, on many MUDs, going to mean handling things like

code->act = 1 | 2 | 3;

You'll need to set those independently. (I think there may even be some tables that use that format.)
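Setting them independently might look like the sketch below. The Bits type, set_bit function, and flag values are stand-ins for illustration, since the actual infinite-bit API isn't shown in the thread.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal stand-ins; the real infinite-bit type and functions may differ.
struct Bits { std::vector<uint32_t> words; };
static void set_bit(Bits *b, int bit) {
    std::size_t w = bit / 32;
    if (w >= b->words.size()) b->words.resize(w + 1, 0);
    b->words[w] |= 1u << (bit % 32);
}
static bool is_set(const Bits *b, int bit) {
    std::size_t w = bit / 32;
    return w < b->words.size() && ((b->words[w] >> (bit % 32)) & 1u);
}

// Indexes now, not powers of two.
enum { ACT_SENTINEL = 1, ACT_AGGRESSIVE = 2, ACT_SCAVENGER = 3 };

struct Mob { Bits act; };

// Old style ORed everything into one assignment:
//   mob->act = ACT_SENTINEL | ACT_AGGRESSIVE | ACT_SCAVENGER;
// With index-based bits, each flag gets its own call:
static void flag_mob(Mob *mob) {
    set_bit(&mob->act, ACT_SENTINEL);
    set_bit(&mob->act, ACT_AGGRESSIVE);
    set_bit(&mob->act, ACT_SCAVENGER);
}
```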

Also, the enum system *could* be safe, depending on how you save your information. It would be fine if you save and load based on the names in the table instead of the actual run-time values. If you're saving the raw run-time values (probably the most common way), then you need to guarantee the enum's order never changes. You can still use enums with the same amount of safety, though: you just have to specify the value of every element explicitly.

enum { ACT_FOO = (1 << 0), ACT_BAR = (1 << 1), ACT_PFFT = (1 << 2) };
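Saving by name instead of by value, as mentioned above, could be done with a lookup table along these lines (the table contents and function name are hypothetical):

```cpp
#include <cassert>
#include <cstring>

// Hypothetical flag table: the *name* is what goes into the save file,
// so renumbering the flags later can't corrupt old files.
struct FlagEntry { const char *name; int bit; };

static const FlagEntry act_table[] = {
    { "aggressive", 2 },
    { "sentinel",   3 },
    { "scavenger",  4 },
};

// Look up a saved name and recover whatever bit it maps to today.
static int flag_lookup(const char *name) {
    for (const FlagEntry &e : act_table)
        if (std::strcmp(e.name, name) == 0)
            return e.bit;
    return -1;  // unknown flag in the file
}
```

On load, each saved name is resolved through the current table, so inserting or reordering entries only changes the in-memory numbering, never the meaning of old save files.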

Or, you can join me in blasphemy and use functions with string arguments that get loaded at boot time from files. But then you really can't use bitwise operators. Not that that's such a horrible thing anyway.
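A minimal sketch of that string-based approach, assuming the flag names come from a file read at boot (the types and function names here are invented for illustration):

```cpp
#include <cassert>
#include <set>
#include <string>

// Flags are plain strings (loaded from a file at boot in the real thing),
// and each character simply holds the set of flags currently on it.
// No bitwise operators anywhere.
struct Character { std::set<std::string> act; };

static void set_flag(Character &ch, const std::string &flag) {
    ch.act.insert(flag);
}
static void remove_flag(Character &ch, const std::string &flag) {
    ch.act.erase(flag);
}
static bool has_flag(const Character &ch, const std::string &flag) {
    return ch.act.count(flag) != 0;
}
```

The obvious trade-off is a string compare per check instead of one AND instruction, which on a modern machine is rarely the bottleneck a MUD cares about.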

If it's an infinite bit system, using enums makes it finite. Sorry, I haven't looked at the infinite bit system code. I use infinite bits in some TeensyMud code that flags quests as solved using the BigNum class, so the number of quests one might implement is unbounded. Although the number of bits in a BigNum is conceptually infinite, in practice you're limited by available memory.

Edit: Also, when writing C/C++ code I now avoid #defines in favor of const integral constants, for the type checking. Not religiously, but I try. ;-)

The BigNum class is another option I had forgotten about. Haven't used that in ages, but it has C/C++ bindings as well.

Personally, I don't see the value in keeping bitwise operations. The minuscule CPU savings is hardly worth the added complexity of maintaining flag values as unique bits. If you really want to be able to check multiple bits in one call, passing the list of bits to check as an array, or even a string, isn't THAT big a deal.

std::bitset is a good library, but what people should realize about it is that it uses memory proportional to your largest element: roughly N/8 bytes for a bitset of N bits, whether or not most of them are ever set.

My bitmask code uses interval skipping, so it only allocates 4 bytes per 32-bit interval that actually contains a set bit. That means if nothing but bit 1029102 is set, you still use only 4 bytes.
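The actual bitmask code being described isn't shown here, but the interval-skipping idea can be sketched with a map from word index to 32-bit chunk: only words that contain a set bit are stored, so bit 1029102 alone costs one entry (plus the map's bookkeeping) instead of ~128 KB of zeroed words.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>

// Sparse bitset sketch: store only the 32-bit words that are non-zero.
struct SparseBits {
    std::map<uint32_t, uint32_t> words;  // word index -> 32-bit chunk

    void set(uint32_t bit) { words[bit / 32] |= 1u << (bit % 32); }

    bool test(uint32_t bit) const {
        auto it = words.find(bit / 32);
        return it != words.end() && ((it->second >> (bit % 32)) & 1u);
    }

    void clear(uint32_t bit) {
        auto it = words.find(bit / 32);
        if (it == words.end()) return;
        it->second &= ~(1u << (bit % 32));
        if (it->second == 0) words.erase(it);  // drop the empty interval
    }

    std::size_t stored_words() const { return words.size(); }
};
```

Each lookup costs a map search instead of one shift-and-mask, which is the efficiency trade-off debated in the following posts.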

Bitset also requires you to know, when it is created, how many bits you intend to use. dynamic_bitset is better in that it lets you resize it at run time.

But it still doesn't implement interval skipping, so depending on what you actually want your system to do, it could use far more memory overall. (And some implementations may not need interval skipping.)

It has its purposes, but choose wisely before you decide on which system to use.

Using more memory also means that you are considerably more efficient when it comes to looking up whether a bit is set. Arguably, if your bitset is so sparse that skipping gives you gains large enough to offset the lookup cost, you're using the wrong data structure.

Runter said:

Bitset also requires you to know, when it is created, how many bits you intend to use

In practice, this is rarely a problem, particularly for the purpose being served here.