I've heard people draw a distinction between arguments and parameters, where arguments are the values passed at a call site and parameters are the variables that receive them in the called function, but I wouldn't trust that distinction to come across. Both of those terms are pretty commonly used the other way around.

If I had to pick terms to draw that distinction without being able to clarify what I mean, I would use something built around actual and formal, respectively. You might call them "actual arguments" and "formal parameters", for example, to bring the two terms together. A lot of PL literature uses "actual-in" and "formal-in", respectively. (You can also have actual- and formal-out(s) corresponding to return value(s).) When talking to other people at work, I'm likely to just call them "actuals" and "formals," as nouns.

Now, that's not exactly what you're asking about -- "my" actuals are just a component of your "call signature." (For example, you might define a call signature as being the collection of actuals at a particular call site.) I can't think of an existing term for that either. "Call signature" seems pretty reasonable.

The mathematician would say that this function always returns c, as the two xs cancel out. The naive computer scientist worries that the actual implementation on a floating-point system might screw things up, and argues that we should do the division and multiplication separately: for x = 0.0 we would first do the division, which results in NaN or inf or -inf, depending on the system, but since we then multiply by 0.0, the result should be 0.0 instead. The knowledgeable computer scientist knows that this is a special case specified in some arcane standard, and that multiplying NaN/inf/-inf by 0.0 yields NaN, not 0.0. The lazy programmer (e.g. me) just wrote the function as it was given, and now spends their days hunting for the "infinite force" bug that "should not happen".
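A quick illustration of the IEEE behaviour in Python (note that Python raises ZeroDivisionError for 0.0 / 0.0 rather than producing NaN directly, but the special values themselves behave exactly as the standard specifies):

```python
import math

# Multiplying any IEEE 754 special value by 0.0 yields NaN -- it does
# NOT collapse back to 0.0 as one might hope.
print(math.inf * 0.0)    # nan
print(-math.inf * 0.0)   # nan
print(math.nan * 0.0)    # nan

# So (x / x) * c is NOT c when x == 0.0: the intermediate inf or NaN
# poisons the result instead of cancelling.
```

This is why doing the division first and hoping the multiplication by zero cleans things up doesn't work: once a NaN or infinity appears, it propagates.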

Well, to be fair, c is not just a constant but a complex function in and of itself, just independent of x, and the division and the multiplication happen two files, thirty lines, and half a dozen function calls apart, which is why we never figured out we could cancel the xs. Also, the xs are not actually exactly the same; they just happen to take on values very close to 0 whenever the other is also close to 0. No, I don't know why. And it is frustrating.

If the two instances of x are different values, then changing the order of the operations doesn't necessarily help. You can still get large rounding errors, or even NaN/other stuff if they are too close to zero.

A different number format?

Finding the mathematical cause of the cancellation and using parameters that don't have this issue?

mfb wrote:If the two instances of x are different values, then changing the order of the operations doesn't necessarily help. You can still get large rounding errors, or even NaN/other stuff if they are too close to zero.

Yes, but if you can gather the two different values of x and "cancel" them up front, before the final calculation, you might avoid those rounding errors. This of course depends on where each of the xs is calculated, and whether they can be gathered in time.

Jose


On closer inspection of the code (which I wrote myself, so I really have no excuse there), I realise that I misread the function. Not only do the two xs have different values, they describe different things: velocity and position, respectively. The problem of dividing by zero has always been there, but just never surfaced because the calculations, initialised with random values, were never exactly zero.

I later changed the initialisation to an ordered structure, and now the differences in position and velocity are exactly zero at the first time step, which resulted in the infinite force. So the cancellation of a zero and an infinite value would only have happened in very specific circumstances, which weren't obvious during usual runs.

As a fix I now check the position before calling the function. If it's zero, I just use zero as the value instead of calling the function, which is not entirely correct, as the function really does have a singularity there, and it's not removable. Furthermore, a simple check against exactly zero still produces the infinite-force bug, so the solution now checks against x < 0.00000001 && x > -0.00000001. For the ordered-structure initialisation this means the particles need a tiny disturbance in their position; otherwise the forces will all be zero until disturbances from the side propagate into the center.
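A minimal sketch of that guard in Python (the actual code isn't shown here; `force` is a hypothetical stand-in for the real function with the singularity, and the tolerance is the one from the post):

```python
EPS = 1e-8  # the |x| < 0.00000001 tolerance described above

def force(x):
    # hypothetical stand-in for the real function, which has a
    # (non-removable) singularity at x == 0
    return 1.0 / x

def safe_force(x):
    # Guard against the near-zero case BEFORE calling the singular
    # function. An exact `x == 0.0` check was not enough in practice,
    # hence the tolerance band.
    if -EPS < x < EPS:
        return 0.0
    return force(x)
```

For example, `safe_force(0.0)` and `safe_force(1e-12)` both return 0.0 instead of blowing up, while values outside the band go through the real function.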

I will have to revisit the math behind the function anyway. A non-removable singularity in the second order derivative indicates a non-differentiable first order derivative, which kinda conflicts with how we derived the SOD in the first place.

Conformal field theory coding fleeting thought: I ch*rping hate poorly documented APIs without minimal code samples. Even third-party code samples for the API in question (cairo-gl) are rare, and it looks like it's going to be quite a puzzle to cobble together something that works and is actually portable.

I'm looking for input into a core data-structure I'm using to build a roguelike; it's working great so far, but I guess I just want those with more experience to tell me if there's anything I need to watch out for (or tell me if it's a pretty stable approach, I guess?).

It's my take on an entity-component system -- using Python 3.5. Spoilered for length:

Spoiler:

WHAT IS AN ENTITY COMPONENT SYSTEM

Entity-component systems treat every 'game object' as the same type of object; each has different components associated with values. 'monster' would be an entity; 'health' would be a component. 'monster.health' might be 6. Meanwhile, 'sword' is an entity with no 'health' component -- but it has a 'sharpness' component.

Every 'tick', processes associated with these components iterate over every entity that has these components. So, 'gravity' is a process that iterates over every entity with a 'position' component. If an entity gains 'position', gravity affects them; if they lose it, gravity stops affecting them.

The benefits of this are significant; since processes only have specific 'concerns' -- things they specifically deal with -- you can write them to operate independently of one another (and with no assumptions about the entities they act upon beyond the components they're associated with). So, you can remove, edit, or add 'gravity' as a process without worrying about breaking... well, anything, really.

This main dictionary is the System object; the strings are Components. The internal dictionaries these strings map to are Component Dictionaries, the integers in those dictionaries are EIDs, and the (typically) arbitrary values these integers map to are Component Values.

So, let's say I want to know the player's health. Presuming the player's eid (entity identifier; the integer that defines it) is 0, I could do it like this:
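The spoilered code isn't reproduced here, but under the dict-of-dicts layout just described, the lookup would presumably be something like this (names illustrative):

```python
# System maps component names -> Component Dictionaries ({eid: value})
system = {
    "health": {0: 6, 1: 3},   # the player (eid 0) has 6 health
    "sharpness": {2: 10},     # the sword (eid 2) has no health at all
}

player_eid = 0
player_health = system["health"][player_eid]
print(player_health)  # 6
```

An entity "has" a component exactly when its EID appears as a key in that component's dictionary.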

Entities are just a flimsy wrapper class for an EID and the System object that spawned them. When you try to access an entity's attributes -- via __getattr__, __setattr__, or __delattr__ -- what actually happens is that you access the System instance the entity belongs to, plugging in the attribute name and the EID to retrieve (or modify -- or delete!) the associated value.
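A sketch of what that wrapper might look like, assuming the dict-of-dicts System described above (all names illustrative):

```python
class Entity:
    """Flimsy wrapper around an EID and the system that spawned it.
    Attribute access is forwarded to the system's component dicts."""
    __slots__ = ("eid", "system")

    def __init__(self, eid, system):
        # bypass our own __setattr__ while setting up the wrapper itself
        object.__setattr__(self, "eid", eid)
        object.__setattr__(self, "system", system)

    def __getattr__(self, name):
        # only called when normal lookup fails, i.e. for component names
        return self.system[name][self.eid]

    def __setattr__(self, name, value):
        self.system.setdefault(name, {})[self.eid] = value

    def __delattr__(self, name):
        del self.system[name][self.eid]

world = {"health": {0: 6}}
player = Entity(0, world)
print(player.health)   # 6
player.health = 5      # writes through to world["health"][0]
```

Dropping the `Entity` object costs nothing; all state lives in the system, so you can re-wrap the same EID at any time.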

There are three reasons I see for doing it this way:

It makes iterating over all entities associated with a process much easier. Every entity EID associated with 'health' can be found in System["health"]... so when a process iterates over all entities with health, it really just accesses System["health"]'s keys, wrapping each in an Entity object and processing them.

It means that all game objects are just integers. Sure, if you want to do anything interesting with them, you need to wrap them in Entity objects... but once you're done, you drop the Entity object and go back to having it just be an integer.

The save/load thing I mentioned above was something I didn't realize until I started working on instituting saving/loading.

By subclassing Python's dict, I can create custom Component Dictionaries that handle component values in special ways. For example, an entity's 'position' component should always be a 3-D tuple, right? So I can make a custom dictionary that wraps all incoming values in a namedtuple (Point3). Now, assigning entity.position = 5 will raise a TypeError (as it darn well should!).
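A minimal sketch of that idea, assuming a `Point3` namedtuple (names illustrative):

```python
from collections import namedtuple

Point3 = namedtuple("Point3", ["x", "y", "z"])

class PositionDict(dict):
    """Component Dictionary that coerces every incoming value to a
    Point3; anything that can't unpack into three coordinates raises."""
    def __setitem__(self, eid, value):
        super().__setitem__(eid, Point3(*value))

positions = PositionDict()
positions[0] = (1, 2, 3)       # silently wrapped into Point3(x=1, y=2, z=3)
print(positions[0].x)          # 1

try:
    positions[1] = 5           # not unpackable into 3 fields -> TypeError
except TypeError:
    print("rejected non-tuple position")
```

Since `Point3(*5)` fails before `dict.__setitem__` is reached, bad values never land in the component dictionary at all.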

One of the weaknesses of an entity-component system is that you need a messaging system so processes can communicate with one another. But with this structure, it's actually really easy to institute callbacks whenever you modify an entity's component. For example, you can write 'world.bind_new("sprite", load_sprite)' -- and now, whenever you define a 'sprite' on an entity for the first time (entity.sprite = "monster.jpg"), load_sprite will be called on the entity and the value assigned to the component (load_sprite(entity, "monster.jpg")). By offering world.bind_change (fires whenever the value changes) and world.bind_del (fires whenever the value is deleted), I think I've got pretty much every message I'll ever need covered.
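A rough sketch of how such "first assignment" callbacks could be wired up (`bind_new` and `set_component` are illustrative names, not the actual API from the post):

```python
class World(dict):
    """Dict of component dicts, plus hooks fired the first time a
    component is defined on an entity."""

    def __init__(self):
        super().__init__()
        self._new_hooks = {}

    def bind_new(self, component, callback):
        self._new_hooks[component] = callback

    def set_component(self, eid, component, value):
        comp_dict = self.setdefault(component, {})
        is_new = eid not in comp_dict      # first assignment for this eid?
        comp_dict[eid] = value
        if is_new and component in self._new_hooks:
            self._new_hooks[component](eid, value)

loaded = []
world = World()
world.bind_new("sprite", lambda eid, value: loaded.append((eid, value)))
world.set_component(0, "sprite", "monster.jpg")   # fires the hook
world.set_component(0, "sprite", "monster2.jpg")  # already defined: no hook
print(loaded)  # [(0, 'monster.jpg')]
```

`bind_change` and `bind_del` would follow the same pattern, keyed on "value replaced" and "value deleted" instead of "value first assigned".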

SOME OF MY QUESTIONS AND CONCERNS

Is storing entities as integers really going to make a big difference? I have no idea. Maybe I shouldn't list that as a positive, because I have zero experience regarding speed (and it's a late-game concern, anyhow).

New EIDs are generated by an internal counter in System; any time we need a new EID, we just add one to the counter and use this as our new EID. This has... some interesting / weird consequences. Like, if an entity has no components, then it's not stored anywhere; it just 'vanishes'. So, if you create a bunch of entities -- then either delete their components, or never even bother assigning them any -- new entities will still have high EIDs, despite no other objects existing. In other words, you can have a session where there's two objects -- one is the player (EID: 0), and one is a monster (EID: 26573546). I'm not sure if that's a problem? Maybe it is? o.O

The load method accounts for the above, by the way -- when we reload a pickled file, we restart the EID counter. So if you saved the above game and reloaded it, the player EID would be 0... and the monster EID would be 1. Again, I'm not sure if this is necessary? Otherwise, eventually, the EID could end up becoming... ridiculously high?

Related to the above: There's no way to tell how many game objects exist at a given moment. I mean, this wouldn't be hard to implement (just iterate through all the Component Dicts, counting however many unique EIDs you encounter); I haven't bothered because I don't see a need? It brings to mind questions regarding how this system treats object-existence and object-permanence, though -- like, a game object only exists if it has a component associated with it. Otherwise, it... doesn't exist anywhere, really?

In order to maintain the idea of storing EIDs -- instead of entities -- I'm thinking of instituting special containers (like EntitySet) which accept entities, store them as eids, and reproduce them as entities whenever prompted. They'd be available from instances of the System object (as they'd have to contain a reference to System). Is this worth it? Is there a lot of value to be had from guaranteeing that references to Entities are never preserved -- only EIDs/integers?

--if you actually read all of that, I appreciate it! But if not, that's fine too; I think I started babbling a little in there.

If you're reassigning the EIDs during the load of a game, how would references between entities be preserved? I imagine if a player has, for example, the entity with EID 23414 as their target, but then saves and loads the game, the entity with EID 23414 might point to a different entity than before or not even exist at all. I guess you solve that by sorting the entities during load by their EID and then iterating over the EIDs updating all references to that entity to their new, lower EID. Since you're iterating over a sorted list, you're only ever overwriting higher EIDs with lower ones (or the same).

I'm not that versed in python, but I guess you could also switch from the EID as the entity "root" to a struct (or whatever python has) that contains the EID and a list/dict of the components. That way, you can slap on routines for creating/deleting entities that will guarantee that the EID is unique and will otherwise keep the range of EIDs tight, by keeping track of holes created by deleted entities.

Then again, if the EIDs get into the range of int.MaxVal during the lifetime of a game, that might point to a mechanism that uses the entity system for something it wasn't meant to do, e.g. projectiles or particles.

A signed 32 bit integer has a maximum value of about 2.14 *billion*. So you could generate a thousand entities a second, for over three weeks, and not have a problem. Integer overflow is usually a problem caused by multiplication, not addition. It's also not a *huge* problem in Python, as Python will automatically widen the type to a big integer class when overflow would occur (which has costs, but not correctness problems).
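A quick sanity check of that arithmetic:

```python
# How long until a 32-bit signed counter overflows at 1000 new EIDs/sec?
INT32_MAX = 2**31 - 1        # 2_147_483_647, about 2.14 billion
seconds = INT32_MAX / 1000   # a thousand entities per second
days = seconds / 86400
print(round(days, 1))        # ~24.9 days of continuous entity creation
```

And in Python specifically, ints are arbitrary-precision anyway, so the counter would just keep growing rather than wrap.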

raudorn wrote:If you're reassigning the EIDs during the load of a game, how would references between entities be preserved? I imagine if a player has, for example, the entity with EID 23414 as their target, but then saves and loads the game, the entity with EID 23414 might point to a different entity than before or not even exist at all. I guess you solve that by sorting the entities during load by their EID and then iterating over the EIDs updating all references to that entity to their new, lower EID. Since you're iterating over a sorted list, you're only ever overwriting higher EIDs with lower ones (or the same).

Yeah; I actually use another mapping -- I think I called it 'eid_translation' or something -- where each of the eids I'm loading is mapped to the output of a counter that ticks up whenever we hit an eid that hasn't been mapped yet. So the first time I hit 23414, eid_translation maps it to 6; now, every other time I hit 23414 while loading, it'll be treated as 6.
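A sketch of that translation mapping (the real code isn't shown; names are illustrative):

```python
from itertools import count

def compact_eids(old_eids):
    """Map each old EID to a fresh, dense EID the first time it's seen;
    every later occurrence of the same old EID reuses that mapping."""
    translation = {}
    counter = count()          # ticks up only when we hit an unseen eid
    for eid in old_eids:
        if eid not in translation:
            translation[eid] = next(counter)
    return translation

# EIDs as they might appear while walking the component dicts of a save:
mapping = compact_eids([23414, 7, 23414, 900001, 7])
print(mapping)  # {23414: 0, 7: 1, 900001: 2}
```

Because references between entities are also stored as EIDs, running every stored EID through the same mapping keeps cross-entity references consistent after the renumbering.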

raudorn wrote:I'm not that versed in python, but I guess you could also switch from the EID as the entity "root" to a struct (or whatever python has) that contains the EID and a list/dict of the components. That way, you can slap on routines for creating/deleting entities that will guarantee that the EID is unique and will otherwise keep the range of EIDs tight, by keeping track of holes created by deleted entities.

I actually think in my last build (that I finished a couple of hours ago), I went with something like what you're describing? -- I ditched the eid_counter and I'm using an eid_tracker, now; it's a list in which every value corresponds either to its own index or None. So, eid_tracker[5] (the fifth element in the list) is either 5... or None. When I query it for a new eid, it hands me the index of the first None value in itself... and if it doesn't find one, it extends the list by one more None and hands me the new None's index (the Nones are basically 'freed' eids). Whenever an eid is removed or 'freed', it checks to see if its last value is a None... and if it is, it starts chopping off its tail until it ends in an integer/eid instead of None.
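A sketch of that free-list idea as described (names illustrative; the actual build isn't shown here):

```python
class EidTracker:
    """List where slot i holds either i (eid in use) or None (freed).
    New EIDs reuse freed slots before extending the list."""

    def __init__(self):
        self._slots = []

    def new_eid(self):
        for i, slot in enumerate(self._slots):
            if slot is None:            # reuse a freed eid
                self._slots[i] = i
                return i
        self._slots.append(len(self._slots))   # no holes: extend
        return self._slots[-1]

    def free(self, eid):
        self._slots[eid] = None
        # chop trailing Nones so the list always ends on a live eid
        while self._slots and self._slots[-1] is None:
            self._slots.pop()

tracker = EidTracker()
a, b, c = tracker.new_eid(), tracker.new_eid(), tracker.new_eid()  # 0, 1, 2
tracker.free(1)
print(tracker.new_eid())  # 1 -- the freed slot is reused
```

One caveat of the linear scan for `None`: it's O(n) per allocation; a separate stack of freed EIDs would give O(1) reuse if that ever matters.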

Xenomortis wrote:A signed 32 bit integer has a maximum value of about 2.14 *billion*. So you could generate a thousand entities a second, for over three weeks, and not have a problem. Integer overflow is usually a problem caused by multiplication, not addition.

--oh. Huh. Fair enough! It might not even be a significant enough problem to address, then -- my concern was that the counter would eventually get out of hand, since every time I save after clearing any entities, the counter would be higher... but it sounds like I'd have to be saving and loading the game for months (like, literally) before I'd risk hitting a problem.

I'm using it to create containers that contain elements which can be removed from those containers with a reference to the element, but not the container itself.

I use weakref to avoid circular references (so the elements don't keep the container alive), and I change the element's class back to its prior class to dispose of the unbind method (because once unbound, I want unbound elements to behave exactly like they would if they had never been bound in the first place).

Basically the only advantage of the automatic-subclassing stuff is that it lets it work on objects that have __slots__, or builtin classes, where you can't just create an unbind property... but I don't think that upside outweighs the downside of weird, hard-to-understand code that has the potential to clash with other weird "too clever" code that is trying to key off an object's type, or is doing something fancy with metaclasses or something.

But beyond all of that, I think it would be much cleaner to make the call "unbind(obj)" rather than "obj.unbind()"... it makes more sense, anyway, since really the unbind function is part of your binder module, not part of obj.

Also, what happens if you try to put the same object in two collections?

[edit] Actually, looking closer at your code, I think I might be misreading it... I was assuming that "UnboundObj" was just a stand-in for whatever object you happened to want to shove into your collection, but is it actually a specific class you are in control of? Like, you'd only expect to use your Binder with this one specific class, which is part of the same package? Then... I still think that messing with __class__ is a bad idea, but the interface is probably still reasonable...

phlip wrote:[edit] Actually, looking closer at your code, I think I might be misreading it... I was assuming that "UnboundObj" was just a stand-in for whatever object you happened to want to shove into your collection, but is it actually a specific class you are in control of? Like, you'd only expect to use your Binder with this one specific class, which is part of the same package? Then... I still think that messing with __class__ is a bad idea, but the interface is probably still reasonable...

Oh -- yep! Sorry for not making that clear; UnboundObj is specific here, not generic -- you define its behavior side-by-side with Binder, in the same module. You wouldn't have an object with multiple bindings, cuz each Binder produces a unique object with only one binding (hence why you don't pass an object to the add method; in the actual code, you just pass the values the Binder uses to construct an instance of it, which it then hands back to you. Binder is in complete control of BoundObjs).

(The purpose of this is so I can have objects that, when inserted into another object, change that other object's behavior... And then I can later "unbind" these modifiers without actually storing a reference to the object they're modifying)

That being said, yeah, I see what you mean -- storing unbind in an attribute -- rather than a class variable -- gets me the same result without mucking around with __class__.

In that case, why all the messing around with subclasses and whatnot? Why do you need separate UnboundObj and BoundObj? Can't you just have SomeObj which has an unbind method, and if you call it while it isn't bound, it does nothing or raises an exception or whatever? What's the benefit in making it so that an unbound obj doesn't have an unbind method at all?

I originally did it this way because I thought by defining the BoundObj class inside of the Binder instance, I could have an unbind method that referred to the Binder instance without actually preserving a reference to it. When I tested it, though, I found out it did keep a reference to Binder, so I ended up switching over to weakref to fix that problem.

But now that I'm using weakref, there's really no reason to define BoundObj inside of Binder; just have Binder instantiate UnboundObj and shove an unbinding function in its 'unbind' attribute.
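A sketch of that weakref-plus-attribute approach (all names illustrative, since the actual code isn't shown here):

```python
import weakref

class BoundObj:
    def __init__(self, value):
        self.value = value

class Binder:
    """Container whose elements can remove themselves without holding a
    strong reference back to the container."""

    def __init__(self):
        self._elements = set()

    def make_element(self, value):
        element = BoundObj(value)
        self._elements.add(element)
        binder_ref = weakref.ref(self)   # weak: element won't keep us alive
        def unbind():
            binder = binder_ref()
            if binder is not None:       # container may already be gone
                binder._elements.discard(element)
        element.unbind = unbind          # plain attribute, no __class__ games
        return element

binder = Binder()
obj = binder.make_element(42)
obj.unbind()                   # removes obj from the binder...
print(len(binder._elements))   # 0
obj.unbind()                   # ...and calling again is harmless
```

Since `unbind` is just an instance attribute closing over a `weakref.ref`, the element never keeps the Binder alive, and there's no need to swap `__class__` at unbind time.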

Like maybe I'm misunderstanding here, because I don't know javascript, but that does seem ridiculous; why would you even let module A load module B, which then tries to go back and load module A before module A has even finished loading? It seems like at that point the code should just say 'Nope, sorry! You are drunk, go home'.

Cyclic dependencies are a great language feature (in object-oriented programming at least), since they mean you don't have to merge your dependencies into one module or mangle them to be explicitly acyclic. Okay, usually it makes more sense to just extract a relational model and put that in one module, but that might be considered mangling. Not sure if I'm rambling at this point, but it's probably best to limit what dependencies do to "exposing identifiers" (and their types, if it's a statically typed language), rather than "just execute the code up to the import statement".

ahammel, does node simply complain about an identifier that was declared after the import statement? Or did you try to execute code while loading the dependencies? IMO the latter is usually a bad thing; rather, load passive modules and, after they're done loading, invoke some function.

Flumble wrote:ahammel, does node simply complain about an identifier that was declared after the import statement? Or did you try to execute code while loading the dependencies? IMO the latter is usually a bad thing; rather, load passive modules and, after they're done loading, invoke some function.

Bear in mind that this is a legacy code base. I don't want a dependency cycle. In fact, I have no idea where the dependency cycle even is. If I knew where it was, I would destroy it with fire.

But anyway, there's a dependency cycle somewhere, so when the higherOrderFunction is called, the value of 'thing' is '{}' (because it hasn't finished loading yet). This, of course, means that the value of 'thing.fn' is 'undefined'. Which means that something inside "higherOrderFunction" throws an exception. And I don't even know which invocation of higherOrderFunction is to blame because—wait for it—Node does not provide useful stack traces!

And in this particular case, it's even dumber because higherOrderFunction returns a Promise. And if you throw an exception inside a Promise it doesn't even tell you the call site! All you get is something like this:

Mangling: B can get X, but to avoid accidentally getting X (or accidentally redefining X), we mangle X's identifier while we're in B. So, to get X in A, you might just type "X"; to get X in B, maybe you need to type something like "A_identifier_X"?

Exposing Identifiers: B can get X by accessing A, which has exposed X explicitly; from A, you type "X" -- from B, maybe you type "A.X"? We trust that B will only refer to A.X once A has defined X.

Relational Model: Object C defines X, then loads A (which loads B). If B refers to X before A has defined it, it gets C's definition (which might just be an error message or something)? Alternatively: Before B refers to X, it asks C if X has been defined?

That is, a.foo doesn't exist at the time that b.getfoo is defined, but it does exist at the time it's called... so does that work? When foo is added to exports in a.js, is the "a" object inside b.js also updated, or is it a separate copy of what a was exporting at the time?

In Python, the equivalent code works just fine, because in Python everything that imports the same module gets a reference to the same aliased module object, not a copy.
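A quick demonstration that Python imports hand out references to a single module object, never copies:

```python
import sys
import math

# Importing "again" anywhere in the program yields the very same object...
assert sys.modules["math"] is math

# ...so a change made through one reference is visible through all of them.
# (Mutating a stdlib module is only for demonstration; don't do this.)
math.hypothetical_attr = 123      # hypothetical attribute, for illustration
import math as math_again
print(math_again.hypothetical_attr)  # 123 -- same object, same attribute
del math.hypothetical_attr           # clean up
```

This is also why Python's circular imports "work" at all: the half-initialised module object is already registered in `sys.modules`, so the second importer gets a reference to it and sees attributes appear as the first importer finishes defining them.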

The circular-import behaviour of Python is not exactly what I'd call intuitive (I agree with TGH that trying to import a module that's already half-imported should probably just be an error), but it's at least consistent once you understand what it's doing...

The Great Hippo wrote:So this is just me trying to understand terminology, feel free to ignore (though I appreciate any help!):

Is it terminology if I just use some words that I feel fit in?

The Great Hippo wrote:Object A defines value X; object A loads object B -- B refers to X. The problem is that B might refer to X before A has defined it?

That's a reasonably vague example to work with. The problem with B getting an undefined X in ahammel's case stems from the fact that both 'objects' load each other. The compiler can't decide which statements to run first and just goes through the statements in A and B depending on where the imports are written. (phlip's link above explains it much better) If the compiler were smart enough, it could analyse the code and see that the statement that initialises X is independent of everything else in A and B, and then run that first, so B can have an X that's initialised.

With "explicitly acyclical" I meant restructuring the code such that there are no cycles in the dependencies, only "A loads B" or "B loads A". And with "mangling" I simply mean that your code may look like a pile of garbage, with lots of seemingly duplicate code, if you restructure it to not have cyclic dependencies.

With "relational model" I meant (assuming object-oriented programming) extracting the classes and their fields from all the dependencies and putting them in one dependency (similar to header files in C++, except it's only one file), thereby hopefully removing the need for cyclic dependencies (since all the relevant names are in that one file; their value/implementation shouldn't matter at load time). So E declares that X exists (it, err, "exposes the identifier"), and A and B both load E. But it turned out ahammel's problem was with executing code during load time, so extracting classes wouldn't have mattered. Tangentially,

Spoiler:

a lot of languages are fine with cycles (in dependencies or otherwise) up to some point, otherwise something like this:

would not be possible. How would the compiler know what a B is when it arrives at line 3, let alone B.anInt? A good compiler has (I think) two ways to overcome that problem: either postpone parsing copyBInt until it knows what a B is and postpone it again until it knows what anInt is, or assume B is a thing and assume it has anInt and later see if it actually exists.

That apparently injects enough cycles between the 'require' and the invocation for module loading to finish. I guess.

I'm quite certain in this latter setup the call to higherOrderFunction is delayed until someFunction is called (the first time, and probably every time, because both thing.fn and higherOrderFunction change between calls for all the runtime knows, nullifying that const), and thing.fn is filled in before someFunction is called.

phlip wrote:is that just a figure of speech, or is it actually a copy?

Must be, otherwise ahammel's thing.fn would still be undefined in the "fixed it" code, right?

(also, will <code> (i.e. inline monospace) ever be added to the bbcode tags?)

phlip wrote:That sounds very similar to the way it works in Python... except for that word "copy" I see in there... is that just a figure of speech, or is it actually a copy?

Flumble wrote:I'm quite certain in this latter setup the call to higherOrderFunction is delayed until someFunction is called (the first time, and probably every time, because both thing.fn and higherOrderFunction change between calls for all the runtime knows, nullifying that const), and thing.fn is filled in before someFunction is called.

Yeah, with fresh eyes on it, it looks less like a weird implementation of circular dependencies and more like a reasonable implementation of circular dependencies that happens to interact poorly with other javascript "features" (such as function exports, undefined field access and general lousy error handling).

Anyway: the good news is that I found a tool to statically find import cycles. The bad news is that the dependency cycle in question is 41 modules long.

might let you creep towards 3x or more (4x is the limit for audio, but that might just be my browser/sound configuration).

I find 2.25-2.5x nice for binge watching, but more technical stuff, or accents, can push me as far down as 1.25x.

That code works with the html5 player, so YouTube, Netflix, Twitch, likely more stuff as well.

I've just had the realization that I could use a bookmarklet to sidestep the need for console entry; so, I've punched a few bookmarklets up. They're white-space sensitive JavaScript (ugly), but if anyone is interested they can copy the code as if it were an address and paste it into a new bookmark.

Like the quoted text says, they work with html5 video players; I've tested them on Chrome and Firefox.

That makes more sense; I thought Firefox was breaking the bookmarklets when it was adding the %20's (where Chrome seems to interpret them without issue). I suspect the issues I was having with Firefox were reference related and I had just ruled out spaces without reconsidering them.

I've been trying to practice a certain technique in writing my exceptions for Python -- rather than raising exceptions immediately, I delay them as long as possible and raise them only after the block they're in is finished running. By way of an example, here's an __init__ for a dictionary-like object I created that associates provided strings/keywords with dictionary (or dictionary-like) objects:

Rather than stopping the code the moment it comes across an illegal object (a name, or a value that's not callable/can't be called with no arguments), I add the offenders to sets; then, if those sets are not empty, I raise appropriate exceptions. This way, you could conceivably bypass my exceptions with a try/except block that took them into account.
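A sketch of that pattern, with hypothetical validity rules standing in for the real ones (identifier-style string names, callable values):

```python
def register_all(mapping):
    """Validate everything first, collecting offenders into sets, and
    raise only after the whole input has been examined."""
    bad_names = set()
    bad_values = set()
    accepted = {}
    for name, value in mapping.items():
        if not isinstance(name, str) or not name.isidentifier():
            bad_names.add(name)          # collected, not raised immediately
        elif not callable(value):
            bad_values.add(name)
        else:
            accepted[name] = value
    if bad_names:
        raise NameError("illegal names: %r" % sorted(bad_names, key=repr))
    if bad_values:
        raise TypeError("uncallable values for: %r" % sorted(bad_values))
    return accepted

try:
    register_all({"ok": dict, "1bad": dict, "also bad": dict})
except NameError as e:
    print(e)   # one exception reporting both offending names at once
```

The trade-off, as discussed below, is that the exception is raised far from the individual item that caused it, which can make debugging harder.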

Hmm, in what scenario would you input illegal names in the first place? And in what scenario shouldn't the whole program just crash (so it doesn't matter how many valid names you accept before throwing an exception) if there's an illegal name?

Flumble wrote:Hmm, in what scenario would you input illegal names in the first place? And in what scenario shouldn't the whole program just crash (so it doesn't matter how many valid names you accept before throwing an exception) if there's an illegal name?

I think I am over-thinking this.

There is no scenario I can think of where you would knowingly input illegal names -- and if you try to assign an object that isn't a dict (or doesn't behave like a dict), it really should crash right away. The object is a customized dict of customized dicts.

Yeah, don't do this. You want the error as close to the code that caused it as possible; it helps with debugging.

The only time you should group errors like this is when you're outputting information to application users in a human-friendly way; app users don't like repetitious errors, and it's best for them to get as much information together as possible so they can fix multiple things in one pass if possible.

The Great Hippo wrote:Is this a good practice? Or am I being a little overcautious, here?

It is neither good nor bad practice; it depends on your requirements.

Your function or method is supposed to provide certain guarantees, for example for sort(): "when it returns, it outputs a sorted list, containing the same elements as the input list".

But you should also have some guarantees when things fail, for example for insert(): "If an exception is thrown, then the new element was not inserted, and the collection remains unchanged." That's an important guarantee that we usually take for granted. Failing to insert an element must not destroy or mangle the rest of the collection. Because exceptions aren't meant to signal errors -- you could signal errors with exit(5). Exceptions are meant to give your program a chance of recovering, by catching them and moving on. And for your program to do that, it needs to stay in a well-defined state.

So when you throw your own exceptions, think about the work you've already done, and the state you're leaving your objects in. Do you need to do some cleanup first? Should you just exit(5) instead of throwing?

In your case, throwing on the first bad name or insert seems good enough. You're probably going to reject the whole list. Unless you wish to process the valid names and ignore everything else. Then you add everything valid, maybe print a warning about the invalid inputs, then move on without throwing.
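A minimal sketch of that "either it all goes in, or nothing changes" guarantee, with hypothetical names (validate first, mutate only afterwards):

```python
def insert_all(collection, items):
    """Either every item is appended, or the collection is left exactly
    as it was: all validation happens before any mutation."""
    bad = [x for x in items if not isinstance(x, int)]
    if bad:
        # raise BEFORE touching the collection, so it stays well-defined
        raise TypeError("non-integer items: %r" % bad)
    collection.extend(items)

nums = [1, 2]
try:
    insert_all(nums, [3, "oops", 4])
except TypeError:
    pass
print(nums)  # [1, 2] -- unchanged despite the failure
```

Appending item-by-item and raising mid-loop would instead leave the caller with a half-modified collection, which is exactly the ill-defined state the post warns about.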

However, this only compiles if FooType is defined as Foo; if I typedef it to int instead, it errors out, because underlying_type is used on something that isn't an enum (i.e. exactly the error I was hoping to avoid by using this construct).

Strictly speaking I don't need to use this since in my real-world case the enums would all be C-style and I can just typedef int FooHashable, but I'm kind of wondering if there's a way to implement this semi-automatic type deduction anyway (it would be pretty nice to have). C++14 fixes the enum hashing problem, but that requires very recent compilers, and I'm not quite willing to force that on potential users just yet.

If you wish to call a template (like std::conditional), then all template parameters need to be valid. SFINAE does not apply here, and std::conditional cannot conditionally evaluate its parameters (even though it would be useful).

underlying_type<int>::type does not exist, so it cannot be a template parameter. underlying_type<int>, however, is a valid type (it's an empty struct without a ::type member), so use that. After evaluating the conditional, you can safely access its ::type member. Remove the red, add the green:

Oh wait, now it works for int, but not for Foo, because the whole thing gets evaluated to FooType::type, which doesn't exist. So wrap the third parameter into something which has a proper ::type member, like this:

--is totally valid, so long as your metaclass (MyClassType) accounts for the 'special' keyword in its __init__ and __new__ methods.

Once I more or less figured them out, I realized descriptors (objects with custom __get__, __set__, and __delete__ -- like the property decorator) are actually kind of amazing.

You can actually provide a custom type of internal mapping class-instances use to store class variables (like methods) via __prepare__. But it's kind of tricky and makes me feel sleazy when I do it.

The correct use of descriptors was a big one; I've been trying to create a mapping-of-mappings object to facilitate my code, and I just realized I can do it way more easily via descriptors. Something like...
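The original snippet isn't shown here, but a descriptor along those lines might look like this (a sketch; all names illustrative):

```python
class Component:
    """Data descriptor whose values live in an external mapping keyed
    by instance id, not in the instance's own __dict__.
    (Keying on id() leaks entries when instances die; a real version
    might use a weak-key mapping instead.)"""

    def __init__(self):
        self._values = {}

    def __get__(self, instance, owner):
        if instance is None:          # accessed on the class itself
            return self
        return self._values[id(instance)]

    def __set__(self, instance, value):
        self._values[id(instance)] = value

    def __delete__(self, instance):
        del self._values[id(instance)]

class MyObject:
    my_component = Component()

obj = MyObject()
obj.my_component = 6
print(obj.my_component)   # 6
print(obj.__dict__)       # {} -- the value isn't stored on the instance
```

Because `Component` defines `__set__`, it's a data descriptor and takes precedence over the instance `__dict__`, which is why the value never lands there.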

Now instances of 'MyObject' will have a 'my_component' attribute that isn't actually inside of the instance's internal __dict__. Which, maybe doesn't sound useful, but is actually precisely what I needed.