@gamecreator It looks noisy because it uses a massive amount of rays for the raycasts. People often use cone-tracing with a mipmap chain instead of scattering rays because of the cache thrashing and ridiculous amounts of memory lookups. "GPU Pro 5" has a nice chapter about this called "Hi-Z Screen-Space Cone-Traced Reflections."
Josh's post is a pretty interesting demo though.

@Josh Yeah, I know, but I meant that the OP could place instances (models) by himself for destructible vegetation.
@tournamentdan I don't remember seeing any geometry shaders for the vegetation last time I checked. I doubt that would be a good idea anyway; I think I remember Josh trying it, and he said something about it not having great performance.

You'll need to modify that script so they don't chase the player. There should be a "GoToPoint()" or "Follow()" command within that script. You'll need to change the target to a hiding spot instead.
I thought that you wanted them to move away from the player (according to your original post)?

The map size isn't the only factor here. Many linear games use streaming, and those map sizes aren't huge. I've had issues in the past with performance after filling out a 2k map with a decent amount of unique entities. Of course, the terrain itself plays a role, and I've seen a few topics complaining about memory issues with 4k maps. Unless those get solved, having maps that are 16k, 32k, etc., won't make a difference in increasing the number of open-world/larger games in Leadwerks.
It largely depends on how many unique objects you have close to each other, and that includes the terrain, since the terrain is a totally unique object (you can't "instance" tiles of the terrain). Streaming also helps with loading screens (there are things you can do while loading your game, such as playing a startup animation or letting the player play a mini-game).
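To make the streaming idea concrete, here's a minimal sketch of distance-based tile streaming. All the names (tile size, load radius, the placeholder "load") are hypothetical and not the Leadwerks API; the point is just that only the tiles near the player stay resident, so memory use is bounded regardless of total map size.

```python
TILE_SIZE = 512      # world units per tile (assumed)
LOAD_RADIUS = 1      # keep a (2*R+1) x (2*R+1) block of tiles resident

def tile_of(x, z):
    """Map a world position to integer tile coordinates."""
    return (int(x // TILE_SIZE), int(z // TILE_SIZE))

def update_streaming(player_pos, loaded):
    """Load tiles near the player, unload the rest. 'loaded' maps tile
    coordinates to tile data (a placeholder string stands in for real
    terrain/entity loading here)."""
    px, pz = tile_of(*player_pos)
    wanted = {(px + dx, pz + dz)
              for dx in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
              for dz in range(-LOAD_RADIUS, LOAD_RADIUS + 1)}
    for coord in wanted - loaded.keys():
        loaded[coord] = "terrain+entities"   # stand-in for real loading
    for coord in set(loaded) - wanted:
        del loaded[coord]                    # stand-in for real unloading
    return loaded
```

Calling `update_streaming` each frame (or every few frames) keeps a 3x3 block of tiles around the player loaded and drops everything else.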

You have two practical options IMO:
Option 1
Use one character controller per group of rats.
Option 2
Create your own pathfinding system just for rats. Obviously this requires some setup, but depending on what you want to accomplish here, it might be an option.
With option 1, you would basically add a bunch of rats to a character controller. If you look at the AI script that comes with Leadwerks, you should see that it follows the player.
Instead, you will need to set up a bunch of points that will be "hiding" spots for the rats. When the player gets close enough to the rats ("GetDistance()"), they need to use "GoToPoint()" to go to one of those hiding spots. Which hiding spot they go to depends on how you want the AI to behave (is the closest point OK, or do the rats need to get as far as possible from the player?).
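The selection logic itself is simple. Here's a sketch (plain Python, not the Leadwerks Lua API; in the actual script you'd use "GetDistance()" for the distance check and pass the result to "GoToPoint()"). This version picks the spot farthest from the player; the flee radius is an assumed tuning value.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D points (x, y, z)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def pick_hiding_spot(rat_pos, player_pos, spots, flee_radius=6.0):
    """If the player is within flee_radius of the rat, return the hiding
    spot farthest from the player; return None if the rat doesn't need
    to move. Swap max() for min() keyed on the rat's own distance if the
    nearest spot is good enough."""
    if dist(rat_pos, player_pos) > flee_radius:
        return None
    return max(spots, key=lambda s: dist(s, player_pos))
```

Run this check in the script's update function and only issue a new "GoToPoint()" when the chosen spot changes, so you're not re-pathing every frame.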

Destructible vegetation is a very complex topic. I doubt the vegetation system as is can support it because it uses transform feedback to do GPU culling, so if vegetation could essentially move in any way, then it vastly complicates how this is handled (maybe even impossible in some cases).
That being said, you could place objects and take advantage of instancing (which is automatic in Leadwerks) to make destructible vegetation. You would need to place vegetation by hand. What did you have in mind exactly?

I'm not really sure what you mean by shaders being handled very differently per vendor. Sure, drivers can do architecture-specific optimizations (like reordering instructions to space out texture lookups and improve instruction-level parallelism), but that should be about it. I'm not sure you can say NVidia handles this the best, though. The GPU performance in the initial benchmarks could be due to many things, including suboptimal image layouts and poor memory management.
Yeah, you're not going to get much speedup with multithreaded OpenGL drivers. You should be able to see this if you profile an OpenGL program, though it might depend on the vendor. However, drivers can do GLSL compilation on multiple threads and can build command buffers ahead of time, since OpenGL typically has several frames of latency, so it should be possible to cache those command buffers. How much this is done in practice (if at all), I'm not sure, and it varies per vendor.

@Crazycarpet I think you are arguing the same point as me. My point was that you need to synchronize at some point.
Your point about drivers being easier to write because of SPIR-V shouldn't be the reason. GLSL is pretty clearly defined, and you can easily write a shader that compiles on all platforms because of this. The compilation from GLSL to SPIR-V isn't that difficult, as you can see from human-readable SPIR-V code. SPIR-V was made so that shader compilation on each platform would be faster (for some games this is a big deal) and so that other languages (e.g., HLSL) could compile down to it. A common trend with Vulkan is letting the application come up with abstractions. The driver still has to convert SPIR-V to its internal GPU format.
OpenGL drivers are often multithreaded and do this command buffer generation behind the scenes. One of the problems is that they don't know what commands the application will give next, so driver writers take it upon themselves to create heuristics guessing what patterns of commands will be sent. If you look at driver releases, they will often give performance metrics for speedups in certain games. This is because they change heuristics for those games, something indie developers don't really have access to. Vulkan seeks to largely remove this disconnect, and it largely does if you are familiar with the API.
There are other things that go on in drivers as well, such as how to handle state changes and how memory is managed. Again, these are heuristic-based. For state changes, for example, certain state combinations can be cached in OpenGL drivers. Vulkan makes it the application's responsibility to cache these (with immutable pipeline objects). Maybe that cached state gets discarded in OpenGL and is needed again, so there will be a hitch; you may have expected that state combination to be needed again, but the driver doesn't know this.
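Here's a toy model of that driver-side cache, to make the hitch concrete. Everything here is a stand-in (the "compile" step, the capacity, the state names are all hypothetical): with limited capacity and heuristic eviction, a state combination the driver dropped must be recompiled the next time it's bound, which is exactly the stall Vulkan avoids by making the application create pipeline objects up front.

```python
from collections import OrderedDict

def compile_pipeline(state):
    """Stand-in for the expensive driver work (shader patching, etc.)."""
    return ("compiled", state)

class DriverStateCache:
    """Toy LRU cache of compiled state combinations, as an OpenGL driver
    might keep internally. Evicted combinations must be recompiled on the
    next bind; the application has no say in what gets evicted."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.compiles = 0   # counts how often we pay the expensive path

    def bind(self, state):
        if state not in self.cache:
            self.compiles += 1
            self.cache[state] = compile_pipeline(state)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
        else:
            self.cache.move_to_end(state)        # mark as recently used
        return self.cache[state]
```

With a capacity of 2, binding a third state combination evicts one of the first two, and re-binding the evicted one pays the compile cost again even though the application "knew" it would be needed.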
The problem is that the implementation of certain things might involve more changes than you would expect, but you don't get many opportunities in OpenGL to work around this. For example, changing blend modes could force your shaders to change behind the scenes. Yes, your shaders, because many architectures implement blending as programmable.
And don't forget about validation, because OpenGL tries to prevent undefined behavior :). Validation is expensive since you have to account for edge cases that many applications will never run into.

Leadwerks allows up to 16 (1-layer) textures to be bound to a material. You can bind your own OpenGL texture array to one of these units, giving you at least 256 textures per array (I think that's the minimum guaranteed by the specification). They all must be the same resolution, have the same number of mipmap levels, and be the same format, though.

Yes, at 4K resolution, which at that point has little to do with the API since you are so GPU-limited. Also, considering those drivers had only been out for 3-4 months and the devs were probably still learning Vulkan, I don't think that's a fair conclusion. Their engine was also structured similarly to their OpenGL engine (I know because they gave a talk about it at a Vulkan conference). Vulkan drivers are also much easier to write, so there will likely be fewer problems going forward.

Not sure you can do this for any benefit. Yes, you can have command pools per thread, but you need to synchronize the submission of command buffers or you can get undefined behavior. Building command buffers is what multithreading is intended for.
Are there any games that support only DX12? And no, DX12 only runs on Windows 10, so they are neglecting a huge chunk of their own consumer base.
No, that is not how command buffer APIs work. The point of a command buffer is to bind renderpasses, descriptor sets, pipelines, draw objects, etc. They are basically a list of commands that are "precompiled" in a way. For instance, a post-processing stack would benefit from this since it rarely changes.
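The record-once, submit-many pattern can be sketched like this. This is only an analogy (commands recorded as Python closures and replayed, with made-up pass names), not Vulkan itself, where a command buffer holds validated GPU commands; but the benefit is the same: a rarely-changing sequence like a post-processing stack is recorded once and submitted every frame without re-recording.

```python
class CommandBuffer:
    """Toy model of a command buffer: a precompiled list of commands
    that can be submitted repeatedly without re-recording."""
    def __init__(self):
        self.commands = []

    def record(self, fn, *args):
        """Append a command; nothing executes yet."""
        self.commands.append((fn, args))

    def submit(self):
        """Replay every recorded command in order."""
        return [fn(*args) for fn, args in self.commands]

def run_pass(name):
    # Hypothetical stand-in for binding a pipeline and drawing a pass.
    return "ran " + name

# Record the post-processing stack once...
post_fx = CommandBuffer()
for name in ("bloom", "tonemap", "fxaa"):
    post_fx.record(run_pass, name)

# ...then submit it every frame with no recording cost.
frame_output = post_fx.submit()
```

In real Vulkan the recording cost you skip includes driver validation and translation to GPU commands, which is why pre-recording stable passes pays off.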
Vulkan offers improvements over OpenGL in many areas. Your deferred rendering with MSAA can be improved by using input attachments, which would substantially reduce memory bandwidth; subpasses also help with this. Vulkan doesn't do validation in the driver (it's moved to optional layers), and validation was a huge performance hit in OpenGL. Vulkan has immutable state in the form of pipeline objects, again a huge performance improvement. Notice that none of these even involve multithreading, so regardless of how you design your renderer, you will be able to benefit from Vulkan's features. I'm not sure how you can say only AMD cards benefit from Vulkan when the gains are largely CPU-side (i.e., driver improvements). You can even download NVidia's demos if you don't believe me.
I'm not saying you should use Vulkan (use whatever you like), but the reasons you're giving for dismissing it have no backing.