Hey everyone, I figure I'll take a stab at making a blog to cover work I've been doing the past week or two.

Unsurprisingly, I've worked a fair bit on the Entity/Component stuff. Improvements are happening all the time, but it's a gradual thing. The good news is, the next big update should improve a lot of things, in large part due to one of the other things I've been working on:

Taml and Assets in T3D

I made mention in the tasset idea and discussion thread that I'd been working on this, and it's nearing completion for the port-over. I need to put it through its paces, but then it'll be tossed up in branches and PR'd against development. If everything looks good, hopefully those can go in for 3.8! It's pretty much a direct port from T2D, so it should be used in much the same way here. The serialization is especially important to the entity/component stuff...

Back to Entity/Components

The reason they're so useful - more specifically, why TAML is so useful - is that it streamlines several aspects I've been floundering around trying to make workarounds for. Stuff like updates to the prefab system, or component templates.

See, templates in the original draft were like datablock-lites. They served a fair bit of the same purpose, being sent down ahead of any actual components on entities, used as default field references, etc. But it drastically complicated the code, created network dependency-order issues, and caused problems with namespaces and naming conventions.

With TAML, the template version of a component can exist in a non-object, serialized format. A component then just has a reference to that particular TAML file. If it needs to initialize its default fields, or reload them later for whatever reason, it'll be a snap to just crack open the TAML file, read the field(s) of interest, and set our live component's data. It doesn't require any extra SimObjects lying around, and it's lightweight.
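The "read the template, fill in the live component" flow can be sketched roughly like this. This is a simplified illustration, not the actual T3D API: the field map stands in for a parsed TAML file, and the function name is made up.

```cpp
#include <map>
#include <string>

// Illustrative sketch: a component's fields as simple key/value pairs.
// Real components use typed fields; this just shows the template flow.
using FieldMap = std::map<std::string, std::string>;

// Apply defaults from a parsed TAML template, without clobbering
// fields the live component has already set.
FieldMap applyTemplateDefaults(FieldMap live, const FieldMap& tamlTemplate)
{
    for (const auto& [name, value] : tamlTemplate)
        if (live.find(name) == live.end())
            live[name] = value;   // only fill in missing fields
    return live;
}
```

The key property is that the template never needs to exist as a live SimObject; it's just data that gets consulted on demand.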

It'll also be a big help with the next iteration of the networked code. It works well enough for now, but each component that has relevancy on the client, such as rendering components, has to be ghosted down. In small projects that's fine, but it can build up in larger ones. So I'm testing out having the owner entity call down into its components, passing its network stream, to get data added to it instead.

Each component manages its own netmasks, so you can have up to 32 masks PER COMPONENT, which will allow people to break stuff down in a really granular way to minimize how much you need to write with a given update. As long as the components stay in the same order in the entity, everything should network fine. To facilitate "ghosted" components working properly, we come back to the TAML stuff. We can send down each component's template TAML file when we add/remove a component and create a local component on the client. Then the component has its fields set per the TAML template file. Once that's done, the networking mentioned above will update changed fields as usual.
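The per-component netmask idea can be sketched like so; again, the names here are illustrative rather than the real T3D networking API. Each component carries its own 32-bit dirty mask, and the owning entity walks its components in a fixed order to gather what actually needs sending:

```cpp
#include <cstdint>
#include <vector>

// Sketch of per-component dirty masks: a 32-bit mask per component
// gives up to 32 independently-dirtied groups of state each.
struct Component
{
    uint32_t dirtyMask = 0;

    void setMaskBits(uint32_t bits) { dirtyMask |= bits; }
    uint32_t takeDirty()            { uint32_t m = dirtyMask; dirtyMask = 0; return m; }
};

// The owning entity walks its components in a fixed order. As long as
// client and server agree on component order, the indices line up and
// no per-component ghost is needed.
std::vector<uint32_t> gatherUpdateMasks(std::vector<Component>& components)
{
    std::vector<uint32_t> masks;
    for (auto& c : components)
        masks.push_back(c.takeDirty()); // zero means "skip me this update"
    return masks;
}
```

A zero mask means the component contributes nothing to that update, which is what keeps the per-update writes small.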

This should end up meaning you can have as many components on an entity as you want, and it doesn't get in the way of how many ghosted objects you can have. The only things actually ghosted in the normal networking stuffs are the entities themselves.

Assets have the potential to become the tasset stuff I was describing in the other thread.

Node and Web Graph Guis

This is something I've been wanting to get to for a while (I started on it a while back, but it drastically needed a rewrite). Fortunately, I got a chance to return to it this week. I started tests during my lunch breaks at work, and then Friday I pushed it into full prototyping mode. The results are as follows:

Node Graph:

Web Graph:

So what's the deal with these? They'll act as starting points for any visual editors. The node graph would be good for stuff like visual scripting and visual shader/material editing. The web graph would be used for stuff pertaining to state machines: animation state machine controllers, AI, weapons, etc. These are pretty much purely the functional GUI; there are no specific rules for them. That'd be what the derived editors would add.
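To give a feel for what "purely the functional GUI, no specific rules" means, here's a rough sketch of the data a generic node graph tracks. This is not the actual control's code, just an illustration; the derived editors (visual scripting, shader editing, state machines) would layer their own typing rules on top.

```cpp
#include <string>
#include <vector>

// Minimal data model for a generic node-graph GUI (illustrative names).
struct Socket     { std::string name; bool isInput; };
struct Node       { int id; std::string title; std::vector<Socket> sockets; };
struct Connection { int fromNode; int fromSocket; int toNode; int toSocket; };

struct NodeGraph
{
    std::vector<Node> nodes;
    std::vector<Connection> connections;

    // The generic graph has no domain rules of its own; the only sanity
    // check is "an output socket connects to an input socket".
    bool connect(int fromNode, int fromSocket, int toNode, int toSocket)
    {
        const Socket& out = nodes[fromNode].sockets[fromSocket];
        const Socket& in  = nodes[toNode].sockets[toSocket];
        if (out.isInput || !in.isInput)
            return false;
        connections.push_back({fromNode, fromSocket, toNode, toSocket});
        return true;
    }
};
```

A shader editor derived from this would add checks like "float output can't feed a texture input"; the base control doesn't care.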

They look a little...bland/basic because they also serve a secondary purpose. Unlike most of T3D's GUI stuffs, which is image-based, these are purely generated/rendered by the code.

This has the advantage of requiring a LOT less code to get them to render right, and it's also harder to break them when you make them arbitrary/code-based. This improves iteration time when making changes, and makes it more flexible overall. It's easy to change the color of a particular node, connection, socket, etc. without needing entirely new sets of images.

And THAT ties into another thing this is good for: a litmus test. One of the things I was curious to look into was retooling the editors to use fully programmatic GUIs rather than the aforementioned image-based system they use now. The editors themselves likely wouldn't change much (yet), but they'd be a lot easier to maintain and modify.

There'd be several other benefits to it, such as being reaaaaaaaally easy to implement color themes that affect the whole editor. Rather than the current process, which is a massive chore of replacing images and hoping stuff doesn't break, this would be as simple as changing some preferences that control the colors of the controls, and it'd be changeable during run-time. Think of how in Blender you can easily change the colors of stuff to get a different look.
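The theme idea boils down to controls asking a lookup table for colors by role instead of baking colors into images. A tiny hypothetical sketch (role names invented for illustration):

```cpp
#include <cstdint>
#include <map>
#include <string>

// Sketch of theme-driven colors: swapping themes at runtime is just
// replacing the table every control reads from. Names are illustrative.
struct Color { uint8_t r, g, b; };

struct EditorTheme
{
    std::map<std::string, Color> roles;

    Color get(const std::string& role) const
    {
        auto it = roles.find(role);
        return it != roles.end() ? it->second : Color{255, 0, 255}; // loud fallback
    }
};
```

Changing `roles["windowBackground"]` at run-time would re-skin every control that reads that role on its next redraw, with no image assets involved.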

It'd also be WAY easier to implement 'dynamics', such as dragging windows into dockable regions, being able to drag windows into tabs and tabs into windows, and popping those out into their own second canvases for multi-monitor support.

This would also make it easier to extend the editors themselves. If you've looked at the current editors, there's a fair bit of extra code in all of them to get the UIs to render with all the doodads working. A rewrite in this style would let the particular editor GUI manage JUST what it needs to without having to worry about the other parts of the editor imploding. To say nothing of how much easier it'd be to extend/modify the existing editors when they would automatically scale to fit stuff, rather than needing black-voodoo coding to try and get the gui controls to line up like they do now.

This wouldn't touch the regular game GUIs - this style of UI is inferior when it comes to end-user interfaces for games - but for the EDITORS? I feel it'd be a massive step up. Which is why working on the basics via these graph gui controls has been helpful.

Well, that's a breakdown of my little corner of the R&D world. I feel I'm missing a few things I dabbled in on the side, but if I remember, I can edit those in here.

Anywho, feel free to level some feedback and ideas on this stuff. More data is always good when trying to hash out where stuff is headed.

And, for giggles, tonight I got an ultra-rough cut of a dockable window system for the above mentioned editor redux. Again, this was mostly me just spending a bit of time dicking around and testing the waters on that. It's not really usable yet, but it serves as a promising glimpse.

Blank 'editor window'

Spawn a new child window

We go to drag the window, and it automatically makes it slightly transparent and marks the 'dockable' regions of a potential parent window.

If we drag our window over to a dockable space, we see a highlight indicating that if we let go, we'll dock into place here.

Let go, and the window we were dragging docks into place. If you click the tab and drag, it undocks!
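The dock-highlight step described above comes down to a hit test: split a candidate parent window's rect into edge zones and check which one the cursor is over. A rough sketch, with the zone sizes and names made up for illustration:

```cpp
// Illustrative dock-target test during a drag (not the real control's code).
struct Rect { int x, y, w, h; };

enum class DockZone { None, Left, Right, Top, Bottom };

DockZone hitTestDockZone(const Rect& parent, int px, int py, int margin = 40)
{
    bool inside = px >= parent.x && px < parent.x + parent.w &&
                  py >= parent.y && py < parent.y + parent.h;
    if (!inside)                            return DockZone::None;
    if (px <  parent.x + margin)            return DockZone::Left;
    if (px >= parent.x + parent.w - margin) return DockZone::Right;
    if (py <  parent.y + margin)            return DockZone::Top;
    if (py >= parent.y + parent.h - margin) return DockZone::Bottom;
    return DockZone::None;                  // center: no dock highlight
}
```

During the drag you'd run this against each candidate parent every frame, highlight the returned zone, and commit the dock on mouse-up.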

Still a long way to go before it's actually a foundation for an editor, but it's a good start. This would allow the end user to drag things around in a normal and predictable way, and dock and position stuff like the 3D view, the inspector, and the scene tree exactly where they want. You'd have it save that info out as prefs so it always loads back up in the same way, just how you had it before.

Things that would be needed for this to be a proper foundation are to have it correctly 'push' the sibling window we docked with out of the way, instead of sitting over it. I also need to implement dragging to/from the tab bar area to have it automatically convert from a full window into a tab, like the tab books.

And then from there I'd need to implement various field UIs to actually let you USE junk. But I gotta say, only an afternoon of work and it's got me itching to try and see it through to the end.

Hey, back on the subject of the EntityComponent work you're doing, have you updated that anywhere? I keep stalking your github but I'm not seeing any new work on it since Jan. 14. I'd love to see anything else you've done!

I'll do a bit more in-depth update later that covers it better when I'm not about to go out the door.

So, been doing some updates to the E/C stuff. Other than an eventual rework of how it handles networking component data(right now networked components are ghosted as regular objects, which could chew through your ghost limit pretty fast) to be more efficient, the main setup is largely what it's going to be going forward.

I pushed this stuff to a new branch on my repo, EC_Experimental.

Biggest change is the implementation of 'Scripted Game Objects'. It was an idea that was brewing for a while, and further solidified when I looked into UE4's blueprints system.

The basic idea is you would build out an entity + components + child objects, and you can convert it into an SGO.

Think of them as prefabs, but designed for actual use and further scripting of functionality on them. When you convert an entity in the world editor to an SGO (by right-clicking and selecting that option in the scene tree list), you pick a destination (such as /scripts) and a name.

This changes that entity's class to that name, and saves the entity's hierarchy as a taml file. It also creates a basic script file to accompany it. In the template, there are two player objects:

PlayerObject and ThirdPersonPlayerObject. They have slightly different component configurations, and have different scripts. In short, PlayerObject is your classic FPS setup, and ThirdPersonPlayerObject is a normal third-person player object with an orbital camera configuration.

The components they use are standard between them, the actual functional differences are handled in their associated script files. Check those out to see how that works.

This means you have 2 main approaches to doing functionality on game objects. Writing a component that enacts the specific functionality you want and adding it to an entity, or storing an entity as an SGO, and writing script specific to that SGO that happens automatically when spawned.

Speaking of spawning, when you create an SGO, it automatically adds itself into a manifest file. This part is kinda ugly and needs review, but the basic idea will be carried on going forward.

Because they're exec'd for you, you can just do SpawnSGO("mySGOName") and it'll return the spawned entity based off the stored taml.

So doing SpawnSGO("PlayerObject") creates an instance of my PlayerObject SGO, and automatically enacts the script functionality associated to it through the PlayerObject namespace. As mentioned above, that means this object will control as a FPS object.
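Conceptually, the manifest is just a name-to-TAML-file lookup that `SpawnSGO()` consults before deserializing. A hypothetical sketch of that bookkeeping (the names and paths here are invented, not the actual implementation):

```cpp
#include <map>
#include <string>

// Sketch of the SGO manifest idea: converting an entity registers its
// name against the saved TAML file, and spawning looks the path up.
struct SgoManifest
{
    std::map<std::string, std::string> tamlPathByName;

    void registerSgo(const std::string& name, const std::string& tamlPath)
    {
        tamlPathByName[name] = tamlPath;
    }

    // Stand-in for the lookup inside SpawnSGO("PlayerObject"): returns
    // the TAML file the entity hierarchy would be deserialized from,
    // or empty if the SGO was never registered.
    std::string resolve(const std::string& name) const
    {
        auto it = tamlPathByName.find(name);
        return it != tamlPathByName.end() ? it->second : std::string{};
    }
};
```

The script-side namespace binding (PlayerObject callbacks firing on the spawned entity) falls out of the class rename done at conversion time.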

I also standardized the camera components down to a single camera component with mounting/offset stuff in it. Having a second component just to have a camera mounted on a node was overkill.

I also added the CameraOrbiterComponent, which is used in the ThirdPersonPlayerObject SGO. This allows the camera to be handled in an orbital control in a simple, straightforward manner.
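The orbital control itself is mostly spherical-coordinate math: place the camera on a sphere around the target from a yaw/pitch pair and an orbit distance. A sketch of that calculation (field and function names are illustrative, not the component's actual interface):

```cpp
#include <cmath>

// Illustrative orbital-camera math for something like a
// CameraOrbiterComponent: spherical-to-Cartesian offset from a target.
struct Vec3 { float x, y, z; };

Vec3 orbitCameraPosition(const Vec3& target, float yawRad, float pitchRad, float distance)
{
    Vec3 pos;
    pos.x = target.x + distance * std::cos(pitchRad) * std::sin(yawRad);
    pos.y = target.y + distance * std::cos(pitchRad) * std::cos(yawRad);
    pos.z = target.z + distance * std::sin(pitchRad);
    return pos;
}
```

Each frame the component would recompute this from its current yaw/pitch/distance fields and aim the camera back at the target.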

I'll touch on some other stuff later, but with the SGO setup, I've begun doing as many script hooks as I can to make things very easily usable. The callbacks for collision/contact events are good examples of this.

I've also thrown the WIP uis that I've talked about above into a branch on my repo. It's the NodeGraphUIs branch.

They don't hook to anything, and need more work before they're actually usable, but they're a good head start, and it gets them up there for people that want to fiddle while I have to work on higher-priority stuff in the near-term.

Had a crazy week last week due to my sister getting married, but now that that's over, I've gotten back to work.

Immediate order of business was to finish the TAML-ecosystem port and carry over Asset and Module management from T2D. That happened last night; though I haven't yet tested it, everything looks like it lines up.

In the short term, basically nothing will change, but this enables us to start moving over to a much more logical management of assets for projects. Central to that, I want to duplicate the existing Material browser and turn it into a general-purpose Content Browser, allowing you to browse all content in the project. It would also act as the central hub for adding/removing stuffs from the project itself.

If you want a new model + textures in the game, you would open the Content Browser, select to import content, and then select the files (or folders!). It'll automatically parse the types and generate asset entries for each, along with a bunch of metadata. The content browser will utilize that for organization purposes, which should make content far easier to find and organize, while minimizing how often you would need to step outside the editors themselves and into the OS's file system.
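The "parse the types" step is essentially a file-extension to asset-type mapping. A hypothetical sketch; the asset type names below are invented for illustration, since the real asset system defines its own:

```cpp
#include <map>
#include <string>

// Illustrative import step: map file extensions to asset types so an
// asset entry can be generated per imported file.
std::string assetTypeForFile(const std::string& filename)
{
    static const std::map<std::string, std::string> byExt = {
        {".dae", "ShapeAsset"}, {".dts", "ShapeAsset"},
        {".png", "ImageAsset"}, {".jpg", "ImageAsset"},
        {".ogg", "SoundAsset"}, {".wav", "SoundAsset"},
    };
    auto dot = filename.rfind('.');
    if (dot == std::string::npos)
        return "UnknownAsset";
    auto it = byExt.find(filename.substr(dot));
    return it != byExt.end() ? it->second : "UnknownAsset";
}
```

Anything the importer can't classify would get flagged for the user rather than silently skipped.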

The module system would further ease usage going forward by allowing us to shift content packs and the like into modules that can be very easily imported and removed. Open the content browser, pick to import a module, and select the module definition file. It'll power through all the module's associated content and organize it for you. If you no longer want that module in your project, there'd be the option to remove the module in the CB as well, and it'll handle the cleanup for you.

This would DRASTICALLY simplify getting content packs and the like in a project, and further management afterwards.

So yeah, there's that. The port itself didn't take much time, but I gotta test it and begin hashing out the idea of the Content Browser and all that business. That part will definitely take the bulk of the implementation time, so I'm gonna spread it out as I go.

I'll do a more in-depth write-up soon about what all this will impact, but suffice to say it's kind of a big deal to have this ported into T3D.

I want to extend a HUGE thanks to Lukas, though; he kicked off the initial TAML implementation, and I just took it, cleaned up the integration a bit, and rolled the asset/module stuff over to work with it.