@ SqHd Not yet. It was pretty much a 'let's get this sucker working' first pass.

The good news is, because it's built on the module system and does some namespace function override trickery, you can just drop it in and it'll work with whatever AI is running, as per the video. The guard AI module implements their behavior, and the RPG Dialog module does the, well, RPG Dialog.

Obviously a more tightly implemented system would see different reactions between friendly and unfriendly AI, and ideally a more custom AI thought solution so townsfolk will go to waypoints or whatnot to simulate a day-to-day routine. Not particularly hard behaviors to implement, but a bit past the scope of the initial 'let's see if this works at all' pass.

The longer-term goal would be to hook both the AI and the RPG Dialog stuff into the neat incoming Web/Node graph interface for doing up state machines. I think the RPG Dialog would get a customized one for setting the text for each dialog 'state', idle animations to play during them, yada yada. The idea would be skipping out on any weird, complicated text interface and just being able to smash out a web graph tree for the dialog flow, making it lightning fast to make AI talk. It'd also be a good test for tools and custom editors that package in with a module, and for finding out the needs/wants to make that a super smooth system that's easy to use.
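As a rough sketch of the kind of data a node-graph dialog editor might spit out (all names here are hypothetical - this isn't the actual module's code), each dialog 'state' node really just needs its text, an idle animation, and its outgoing transitions:

```cpp
#include <map>
#include <string>

// Hypothetical dialog graph data. One node per dialog 'state', with the
// player's response choices acting as the edges between states.
struct DialogState
{
    std::string text;       // line the NPC speaks in this state
    std::string idleAnim;   // idle animation to play while talking
    std::map<std::string, std::string> responses; // player choice -> next state id
};

struct DialogGraph
{
    std::map<std::string, DialogState> states;
    std::string current = "start";

    const DialogState& node() const { return states.at(current); }

    // Follow an edge if the chosen response exists in the current state.
    bool choose(const std::string& response)
    {
        const auto& r = node().responses;
        auto it = r.find(response);
        if (it == r.end())
            return false;
        current = it->second;
        return true;
    }
};
```

The visual editor would then just be a friendly way of authoring that map, instead of hand-writing script for every conversation.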

Same for AI behavior. Being able to slam out an AI state machine in a short time with a visual interface and drop it on a guy and they just GO would drastically simplify the workflow on that end as well.

Meanwhile, took about 2 hours before bed tonight to jam on something that's been annoying me for a bit:

Drag-and-drop asset usage. In the video you see shape assets being D-n-D'd onto the main editor view to spawn an Entity with a Mesh Component, as well as onto the Shape Asset field of a Mesh Component in the inspector to assign the shape there.
Obviously pretty rough-cut, but the basics proved pretty simple to get working. The ideal would also see materials/images being droppable onto anything with a valid rendering component, and - via raycast testing to find the contact surface - being able to quickly drop a material onto an object's surface in the editor view to apply it to the mesh. Or you could drop it onto the material field in the inspector instead, for cases where it may be harder to hit the right surface, or you want explicit control. Sorta like a reverse eyedropper.
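The reverse-eyedropper flow could be sketched roughly like this - everything here (SceneHit, raycastFromScreen, MeshObject) is an invented stand-in for the engine's actual raycast and material-mapping calls, just to show the shape of the logic:

```cpp
#include <map>
#include <string>

// Hypothetical stand-ins for engine types.
struct SceneHit
{
    bool hit = false;
    int  objectId = -1;
    int  matSlot = -1;   // which material slot the hit surface uses
};

struct MeshObject
{
    std::map<int, std::string> materials; // slot -> material asset name
};

// Stub standing in for an editor-view raycast from the mouse position.
SceneHit raycastFromScreen(int /*x*/, int /*y*/)
{
    SceneHit h;
    h.hit = true;
    h.objectId = 0;
    h.matSlot = 2;
    return h;
}

// On drop: raycast to find the contact surface, then assign the material
// to that surface's slot instead of making the user hunt through the inspector.
bool dropMaterial(std::map<int, MeshObject>& scene, int x, int y,
                  const std::string& materialAsset)
{
    SceneHit h = raycastFromScreen(x, y);
    if (!h.hit)
        return false;
    scene[h.objectId].materials[h.matSlot] = materialAsset;
    return true;
}
```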
This, and my talk of a Live-Tutorial mode, really have me itching to implement a means to highlight GUI controls with an outline or whatnot from arbitrary code. Being able to outline the Shape Asset fields in the inspector, or whatever else is a valid receiving target for this sort of thing, would be a nice bit of polish and a great bit of convenience.
Also want to have 'live dragging' in the case of the main editor view, so you can drag-and-place a mesh in one smooth action, instead of HAVING to drop and then move. Minor deal, but it would remove clicks from the editing process, which is always nice.

Also thinking of doing it for component assets onto the inspector with a selected entity to quickly add it to that entity, and other obvious ones like dropping a sound asset into the main view plops down a sound emitter, particles drop particles, etc, etc.

Because behavior like 'spawn static shape' can mean a lot of things to a lot of people, I'm thinking we'll want some kind of config for dictating the spawning behavior in the video's example case. Some components are, well, obvious, like the Mesh Component and a Collision Component. Some things could be inferred from the inbound asset - like if the mesh has animations associated with it, add an Animation Component - but from there, interests may diverge project-to-project, or even by what stage of editing the map you're at.

So having some editor setting to assign a GameObject for what to spawn may be a useful means of overriding with custom behavior if need be. Something to mull on.
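A minimal sketch of what that kind of drop-spawn config might look like (hypothetical names throughout): a table mapping asset types to default components, plus a hook for the bits that can be inferred from the asset itself:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical per-project drop-spawn config: each asset type maps to the
// components the spawned Entity should get by default.
using ComponentList = std::vector<std::string>;

std::map<std::string, ComponentList> gDropSpawnConfig = {
    {"ShapeAsset",    {"MeshComponent", "CollisionComponent"}},
    {"SoundAsset",    {"SoundEmitterComponent"}},
    {"ParticleAsset", {"ParticleEmitterComponent"}},
};

// Look up the configured defaults, then tack on anything we can infer
// from the incoming asset (e.g. animations -> Animation Component).
ComponentList componentsForDrop(const std::string& assetType, bool hasAnimations)
{
    ComponentList out;
    auto it = gDropSpawnConfig.find(assetType);
    if (it != gDropSpawnConfig.end())
        out = it->second;
    if (assetType == "ShapeAsset" && hasAnimations)
        out.push_back("AnimationComponent");
    return out;
}
```

Swapping the table contents per project (or pointing an entry at a GameObject template instead of a component list) would cover the diverging-interests case.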

Besides that, I've been drafting up some initial documentation for the wiki on getting started with the new BaseGame template, some docs on how to convert existing projects, and I'm also looking at beginning to port my writings on modules/assets to the wiki to better consolidate everything. Hoping to get that up tomorrow, but it's a fair bit to write and document with images and the like, so we'll see. The planned pages cover:

Creating a brand new project with the BaseGame template and a walkthrough of the directory layout

Dropping in an existing module to see how modules can interact and automatically load to bring content into your project

Creating a new module from scratch

Everything pertaining to the modules system

Everything pertaining to the assets system

Porting your existing project to the BaseGame template and what needs to be changed to bring it up to speed (this will also see updates as we shift to PBR, as the Entity/Component system takes over, and for anything else that'll be relevant to keeping your project up to date with major changes, to help streamline the porting process)

As a fun aside - it broke at some point, so I need to fix it - I'll have a contextual documentation entry in a lot of the RMB popup menus for 'Jump to Documentation'. The idea being that if you, say, RMB-click on a MeshComponent rollout in the inspector for an entity, or on an ImageAsset or what have you, and click it, it'll jump straight to the wiki documentation detailing all the specifics for that particular thing.
That should cut out as much manual searching and digging through documentation as possible, jumping you straight to what you need to know for the thing you're wondering about at the time.

This should help immensely with the learning process, and remembering details/functionality on things you don't often utilize.

Been busy crunching on documentation, testing various things, and a dabble of other random bits.

One thing that came up today in IRC: Tim-MGT was talking about Unity having a callstack dump in the event of a crash, so you can at least see what the game was doing leading up to it, which helps debugging.

I thought that was a really smart thing to do, so we were talking about it some, and he had a few other ideas I figured I'd take a stab at as well.

First was a suggestion he made that isn't major, but is a quality-of-life thing, especially when you're just looking for warnings/errors in the console log: I added some buttons to filter out message types:

At minimum, the filter buttons update their text any time a new console message is output to the viewer, so they can at least provide a quick look at how many errors and warnings you're getting, let alone letting you filter to quickly see what's going wrong and where.
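The bookkeeping for that is trivial - something along these lines (a simplified stand-in, not the actual console GUI code): bump a per-severity counter on every message, and let each button toggle visibility for its type:

```cpp
#include <map>

// Simplified sketch of the console filter state behind the buttons.
enum class MsgType { Normal, Warning, Error };

struct ConsoleFilter
{
    std::map<MsgType, int>  counts;   // shown on the buttons, e.g. "Errors (3)"
    std::map<MsgType, bool> enabled{  // toggled by clicking a button
        {MsgType::Normal, true},
        {MsgType::Warning, true},
        {MsgType::Error, true}};

    // Called for every message the console outputs to the viewer.
    void onMessage(MsgType t) { counts[t]++; }

    // Whether the viewer should display messages of this type.
    bool shouldShow(MsgType t) const { return enabled.at(t); }
};
```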

The other was the original kicker for the discussion: callstack dumps. I knew T3D already supported spitting out the console callstack, because Torsion and other TorqueScript editors could display the callstack. It was just a matter of figuring out how to do it; it only took an hour or two to get it working fairly well.

However, the script only ever tells part of the story of a crash (and usually you have to screw up HARD for the script to be the actual cause of a crash - it's definitely not a normal thing). Which meant that to really maximize its use as a debug utility, we needed to dump the C++ callstack as well.

Well, I did some digging, and while not surprising, it is nice that each platform has its own standard functions for fetching the callstack of an application. I got the Windows side working tonight. The example console.log with the test stack dump:

As you can see, it gets a very nicely fleshed out callstack leading up to the dump call.

From here, I need to shift this into a proper centralized location in the codebase, somewhere in the Platform namespace, and then integrate it into the assert macros, as well as rig up the app to catch the unhandled exceptions that may occur and do a dump in that event as well.
Then I just need to implement the Linux and macOS variants of the callstack fetch function and bam! Delicious debugging info in the event ANY catastrophic failure occurs, which should help a TON in figuring out what went wrong, where, and why.
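For reference, the Linux variant could be little more than a wrapper over glibc's backtrace()/backtrace_symbols() (macOS ships the same calls, so it largely comes along for free; Windows goes through its own stack-walking and symbol APIs instead). A minimal sketch, not the engine's actual code:

```cpp
#include <execinfo.h>
#include <cstdio>
#include <cstdlib>

// Fetch and print the current native callstack. backtrace() fills an array
// of return addresses; backtrace_symbols() makes a best-effort attempt to
// resolve them to names (link with -rdynamic for better symbol output).
int dumpCallstack()
{
    void* frames[64];
    int count = backtrace(frames, 64);
    char** symbols = backtrace_symbols(frames, count);
    if (symbols)
    {
        for (int i = 0; i < count; ++i)
            fprintf(stderr, "  [%d] %s\n", i, symbols[i]);
        free(symbols);
    }
    return count;
}
```

Wiring something like this into the assert macros and the unhandled-exception path is then the same job on every platform; only the fetch function differs.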

The other thing mentioned in IRC that Tim-MGT found convenient was how Unity's crash dumps auto-upload to a cloud service, so if an end user has a crash, it's automatically uploaded to a place the developer can access, cutting out the middleman of the end user having to know where to go to submit the failure report.

That's more involved, but there's definitely merit to the concept, so it's something I'll be pondering on going forward in order to maximize development efficiency of not just T3D, but everyone's personal projects as well.

Mostly been jamming with Az in trying to get the last important bit of the PBR stuff sorted: Probes.

Not like, alien probes, but Reflection/Environmental Probes.

They're a common method of providing information about the surroundings to nearby objects when rendering - most notably reflections, which are critical for PBR, because in PBR, everything reflects. It may reflect an infinitesimal or zero amount depending on material settings, but the rules say all surfaces reflect, so there you go.

Reflections have been working decently for a while now, but the implementation was pretty WIP and needed solidifying. It's also proven a good opportunity to do more review of our lighting logic and some reorganization there as needed, as well as to look at having the probes pull double duty to provide Global Illumination info.

It's certainly not bad, but obviously the ambient lighting is really flat, and doesn't convey the difference in lighting between the main lobby area and the 'indoor' archways.

To see what I mean, let's have a look at CryEngine utilizing the same Sponza model, to see how they light the scene with their GI in there.

As we can see, there's definitely a lot more nuance that comes into play with the lighting, especially in the 'indoor' spots.

So, obviously, that just won't do! The current flat ambient would SUFFICE, but that's not how we do things 'round these parts!

So, because we already had the reflection probes that can provide, well, reflections, it made sense to let the little guys pull double duty and also provide ambient light color information.

There'll be 2 ways to do that for now: flat color, and spherical harmonics.
Flat color will apply a flat, selected ambient color to the area in the probe's influence. It sounds similar to the regular sun-based ambient color now, but the distinction is this isn't global - it's just the area that the probe covers. So with several probes, you can get pretty finessed with the ambient lighting. This is really good for artists who want to specifically control the ambient lighting conditions. It's more manual work, but it offers a lot of control.

The other way is Spherical Harmonics. This will take the cubemap the reflection portion of the probe is using, and convolve and encode that info as spherical harmonics data. This gives a very smooth but compact means of passing around the ambient color data that the probe can see. The cool part is, because of the voodoo that is SH, it actually contains directional info - so if one wall is lit and the other isn't, one side of the probe's ambient color will be bright and the other dark.

It should also contain color info via the cubemap capture (if you don't just use a static cubemap, of course), meaning if there's, say, a green curtain being brightly lit, the far wall will get a hint of green due to the reflected light.

I jimmied up the suuuuper basic version of it, with some hilariously incorrect math, but I mainly just wanted to affirm that things were writing to the correct buffers and shader parameters were being passed around correctly. So, after about an hour of work to add the indirect lighting contribution to probes, here's how it looks:

Direct Lighting only: we have some point lights sprinkled around, and the sun is pretty much directly overhead, shining light straight down onto the floor.

Indirect Lighting only: this is the combination of the probe's reflection data and flat ambient color (SH encoding isn't ready yet, so it's somewhat ugly still - it was a quick job to assign the colors), but you can get a sense of light bouncing off the lit floor, the lit red curtain, etc.

And the combined result. Pretty dark because, again, my math is suuuper wrong, but you can see hints of the ambient lighting actually being applied onto the walls, replicating the light bouncing off the surfaces well away from the directly lit floor.

Obviously a good deal of work to do on it yet, especially getting the SH encoding in so we can get proper directional lighting data rather than merely flat colors, but the proof of concept worked, and we'll be doing a lot to shore everything up and get some nice GI lighting in there to complement the PBR renderer. In the end, we should end up with some pretty nice lighting to go with the new pretty nice materials.