It's funny how trying to keep a system design consistent makes you constantly jump from one area of the OS to another. I initially just tried to implement interrupt handling, and now I'm cleaning up the design of an RPC-based daemon model, which will be used to implement interrupt handlers along with most other system services. Anyway, now that I've arrived at something I'm personally satisfied with, I wanted to ask everyone who's interested to check that design and tell me if anything in it sounds like a bad idea in the short or long run. This is a core part of the OS's design, and I'm really not interested in core design mistakes emerging in a few years if I can fix them now. Many thanks in advance.

I see most of that as quite pragmatic, so I don't think I can argue much further than it already has been. However:

Question is: can you, with declarative data, transform an old instance of the class into a new instance of the class without putting inappropriate data in the "identifier" class member? My conclusion was that it is impossible in my "sort of like RPC" model, but maybe declarative data can do the trick.

You may not be able to make the old code suddenly be new, but without recompiling, you can make the old code speak in the new slang. The parser can just export glue code (as thin as possible, hopefully).
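To make that concrete, here's a minimal sketch of what such parser-emitted glue might look like. All the names here are made up for illustration: assume a hypothetical `read` call that gained a `flags` parameter between v1 and v2 of the interface, and that old clients still speak the v1 prototype.

```c
#include <stddef.h>

/* Hypothetical new-version server entry point: v2 added a 'flags'
 * parameter that old v1 clients know nothing about. */
static int read_v2(int handle, char *buf, size_t len, unsigned flags)
{
    (void)handle;
    (void)flags;                 /* real v2 behavior elided for the sketch */
    for (size_t i = 0; i < len; i++)
        buf[i] = 'x';
    return (int)len;
}

/* The thin glue the parser could emit: it speaks the old v1 prototype
 * on one side and the new v2 prototype on the other, filling in a
 * sensible default for the parameter the old caller lacks. */
int read_v1(int handle, char *buf, size_t len)
{
    return read_v2(handle, buf, len, /* default flags */ 0);
}
```

So the old code never gets recompiled; it just keeps calling `read_v1`, and the glue forwards it into the new implementation.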

"Also, once versioning is done, it also means you can provide function overloading (in versions, not parameters, this time), (...) It also means that you can employ temporary fixes as you go along, which is definitely powerful.

Not sure I understand this concept, can you give more details?"
Nah, it's simple stuff. For the moment, think of a design loop: maybe, while implementing something important, you found that your own design had an infinite loop in it. To implement A, you first had to implement B, which in turn requires A. What you can do then is implement proto-A and proto-B and get it over with; the versioning mechanism can take over from there.
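A toy sketch of that bootstrapping idea, with made-up names and trivial arithmetic standing in for real services: proto-A is a stub that doesn't need B, B is then written against proto-A, and the full A can finally be built on B, superseding the proto under the versioning mechanism.

```c
/* Circular design dependency: full A needs B, and B needs A.
 * Break the loop with a minimal "proto" version of A that has no
 * dependency on B at all. */

/* proto-A: just enough of A for B to be written against. */
static int proto_a(int x)
{
    return x + 1;
}

/* B, implemented in terms of proto-A. */
static int b_v1(int x)
{
    return proto_a(x) * 2;
}

/* Full A, implemented in terms of B; once it exists, the versioning
 * mechanism can hand out this entry point instead of proto-A. */
int a_v1(int x)
{
    return b_v1(x) + 10;
}
```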

Or suppose you find yourself in a temporary crisis: something important crashed in the middle of your computing, and your debugging options are limited. You may then find yourself putting temporary fixes into your codebase that you intend to remove and rework later (something you do just to restore order as fast as possible, so that you can still get some rest). Something like the BKL (Big Kernel Lock) Linux had.

The feature of versioning can definitely be added to an RPC mechanism: at the time when prototypes are broadcast to the kernel, the client and server processes only have to broadcast a version number as well. From that point, it works like function overloading: the kernel investigates whether a compatible version of what the client is looking for is available.
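A toy sketch of what that kernel-side resolution could look like, under the assumption that each server "broadcasts" (name, version, entry point) triples when declaring its remote-callable prototypes. The compatibility policy here (highest registered version not newer than what the client asked for) is just one possible choice, and all the names are invented for the example.

```c
#include <string.h>

#define MAX_EXPORTS 16

struct rpc_export {
    const char *name;
    int         version;
    void      (*entry)(void);
};

static struct rpc_export table[MAX_EXPORTS];
static int n_exports;

/* A server process declares one remote-callable prototype. */
void rpc_broadcast(const char *name, int version, void (*entry)(void))
{
    table[n_exports].name    = name;
    table[n_exports].version = version;
    table[n_exports].entry   = entry;
    n_exports++;
}

/* Overload resolution on version: return the highest registered
 * version that is still compatible with (i.e. not newer than) the
 * version the client asked for, or NULL if nothing matches. */
void (*rpc_lookup(const char *name, int wanted))(void)
{
    void (*best)(void) = 0;
    int best_ver = -1;
    for (int i = 0; i < n_exports; i++) {
        if (strcmp(table[i].name, name) == 0 &&
            table[i].version <= wanted &&
            table[i].version > best_ver) {
            best     = table[i].entry;
            best_ver = table[i].version;
        }
    }
    return best;
}

/* Dummy entry points, standing in for real service handlers. */
static void open_v1(void) {}
static void open_v2(void) {}
```

A client asking for "open" at version 1 would get `open_v1`; asking at version 2 or higher would resolve to `open_v2`.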

If all you have is a version number, then keeping the details intact is really troublesome; having a complete spec sheet makes interoperability easier. With a version number, you can guarantee that the functionality is still provided in a later version, but you cannot guarantee that it works exactly as prescribed before. It also means that you absolutely have to keep providing said functionality in future versions of the library code -- you cannot do a turnabout and remove it. With a spec sheet, you can guarantee that the client code can still run, as long as it does not use the removed functionality.
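The spec-sheet idea can be sketched as follows, assuming (hypothetically) that the server publishes the list of calls it still provides and each client lists only the calls it actually uses. A removed call then only breaks the clients that depended on it, instead of forcing every function to live forever behind a monotonically growing version number.

```c
#include <string.h>

/* The server's published spec sheet: the calls it currently provides.
 * Suppose "write" existed in an earlier release and was removed. */
static const char *server_spec[]  = { "open", "read", "close" };
static const int   server_spec_len = 3;

/* Returns 1 if every call the client needs appears in the server's
 * spec sheet, 0 otherwise.  Clients that never used the removed call
 * keep running; only clients that needed it are rejected. */
int client_can_run(const char *needs[], int n)
{
    for (int i = 0; i < n; i++) {
        int found = 0;
        for (int j = 0; j < server_spec_len; j++)
            if (strcmp(needs[i], server_spec[j]) == 0)
                found = 1;
        if (!found)
            return 0;
    }
    return 1;
}
```

A real spec sheet would of course carry full prototypes and semantics, not just names, but the compatibility check works the same way.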