
Hello there!
So I've followed along with the Vulkan Tutorial at https://vulkan-tutorial.com/ and I've finished it, apart from the multisampling section.
And I definitely feel like I've learned a lot. However, it puts everything in one monolithic class. And while it has a section on rendering "models", it doesn't actually do that; it's really about rendering "meshes."
If there's a good guide that picks up where this tutorial leaves off and explains how to break it up into more manageable pieces, that'd be great; that's what I'm looking for. So if you don't feel like reading further, that's the main gist of it.
But I'll sort of go over what I'm thinking about here.
So for me, a mesh is something that consists of a single vertex buffer and optionally an index buffer, with one material per mesh. A model is composed of multiple meshes. Each material would likely contain descriptor set information that forwards the samplers and other data the shader needs to the shader. Each shader would be a static instance that's set up once and is updated through UBOs (uniform buffers) and push constants (though I haven't learned about those yet).
Meshes would contain a command buffer (primary or secondary?) and a command pool, and the drawing commands would be set up for that mesh. Then I suppose I'd want to submit each command buffer in a single vkQueueSubmit() call. Or maybe I'd have a single command buffer and pool for all meshes?
Where I'm a bit hung up is the UBO stuff. All the drawing commands are set up in advance in the command buffers, but the MVP matrix, for example, could of course change every frame per mesh (per model, maybe?).
How would I go about updating UBOs per mesh object? Is that something I could map to memory with Vulkan and then update with some sort of command in the command buffer?
The last thing I notice that I'm worried about are the clear values.
VkRenderPassBeginInfo renderPassInfo = {};
// Other render pass setup here

// Clear values (one per attachment: color, then depth/stencil)
std::array<VkClearValue, 2> clearValues = {};
clearValues[0].color = { 0.0f, 0.0f, 0.0f, 1.0f };
clearValues[1].depthStencil = { 1.0f, 0 };
renderPassInfo.clearValueCount = static_cast<uint32_t>(clearValues.size());
renderPassInfo.pClearValues = clearValues.data();
The Vulkan Tutorial does it something like that. I'm thinking there's one VkRenderPass object per shader. In OpenGL you usually just set glClearColor once and you're done with it. Why would I do this for each render pass? Does that make sense? Am I missing something here?
Anyway, any help is greatly appreciated!

We are pleased to announce the release of Matali Physics 4.3. The latest version introduces significant changes in support for DirectX 12 and Vulkan. These changes bring the use of DirectX 12 and Vulkan to parity with DirectX 11 and OpenGL respectively, significantly reducing the costs associated with adopting low-level graphics APIs. From version 4.3, we recommend using DirectX 12 and Vulkan in projects developed in the Matali Physics environment.
What is Matali Physics?
Matali Physics is an advanced, multi-platform, high-performance 3D physics engine intended for games, virtual reality and physics-based simulations. Matali Physics and its add-ons form a physics environment which provides complex physical simulation and physics-based modeling of objects both real and imagined.
Main benefits of using Matali Physics:
Stable, high-performance solution supplied together with a rich set of add-ons for all major mobile and desktop platforms (both 32- and 64-bit)
Advanced samples ready to use in your own games
New features on request
Dedicated technical support
Regular updates and fixes
You can find out more information on www.mataliphysics.com

I'm creating a 2D game engine using Vulkan.
I've been looking at how to draw different textures (each GameObject can contain its own texture, which can differ from the others'). In OpenGL you call glBindTexture, and in Vulkan I've seen people say you can create a descriptor for each texture and call vkCmdBindDescriptorSets for each one. But I've read that doing this has a high cost.
The way I'm doing it is to use only one descriptor for the sampler2D binding: I keep a vector of VkDescriptorImageInfo, add one entry per texture, and assign the vector to pImageInfo.
VkWriteDescriptorSet samplerDescriptorSet = {}; // zero-initialize so unused fields are valid
samplerDescriptorSet.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
samplerDescriptorSet.pNext = NULL;
samplerDescriptorSet.dstSet = descriptorSets[i];
samplerDescriptorSet.dstBinding = 1;
samplerDescriptorSet.dstArrayElement = 0;
samplerDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
samplerDescriptorSet.descriptorCount = static_cast<uint32_t>(samplerDescriptors.size());
samplerDescriptorSet.pImageInfo = samplerDescriptors.data(); // samplerDescriptors is the vector
Using this, I can skip creating and binding a descriptor for each texture, but now I need an array of samplers in the fragment shader. I can't use sampler2DArray because each texture has different dimensions, so I decided to use an array of sampler2Ds (sampler2D textures[n]). The problem with this is that I don't want to set a maximum number of textures.
I found a way to do it dynamically using:
#extension GL_EXT_nonuniform_qualifier : enable
layout(binding = 1) uniform sampler2D texSampler[];
I've never used this before and don't know whether it's efficient or not. Anyway, there's still a problem with it: now I need to set the descriptor count when I create the descriptor set layout, and again, I don't want to hard-code a maximum:
VkDescriptorSetLayoutBinding samplerLayoutBinding = {};
samplerLayoutBinding.binding = 1;
samplerLayoutBinding.descriptorCount = 999999; <<<< HERE
samplerLayoutBinding.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
samplerLayoutBinding.pImmutableSamplers = nullptr;
samplerLayoutBinding.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
Having said that, how can I solve this? Or what is the correct way to do this efficiently?
If you need more information, just ask.
Thanks in advance!
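One possible answer, sketched under the assumption that VK_EXT_descriptor_indexing is enabled on the device (this is not a complete program): declare the binding with a large upper bound but flag it VARIABLE_DESCRIPTOR_COUNT, then supply the real count per set at allocation time, so no hard-coded 999999 lives in the layout.

```cpp
#include <vulkan/vulkan.h>

// Sketch: a "bindless"-style layout for binding 1. maxTextures is just an
// upper bound (e.g. taken from the device's descriptor limits); the actual
// number of textures is chosen when the set is allocated.
VkDescriptorSetLayout MakeVariableCountLayout(VkDevice device, uint32_t maxTextures) {
    VkDescriptorSetLayoutBinding samplerBinding = {};
    samplerBinding.binding = 1;
    samplerBinding.descriptorCount = maxTextures; // upper bound, not hard-coded usage
    samplerBinding.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    samplerBinding.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;

    VkDescriptorBindingFlagsEXT flags =
        VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT_EXT |
        VK_DESCRIPTOR_BINDING_PARTIALLY_BOUND_BIT_EXT; // unused slots stay unwritten

    VkDescriptorSetLayoutBindingFlagsCreateInfoEXT bindingFlags = {};
    bindingFlags.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_BINDING_FLAGS_CREATE_INFO_EXT;
    bindingFlags.bindingCount = 1;
    bindingFlags.pBindingFlags = &flags;

    VkDescriptorSetLayoutCreateInfo layoutInfo = {};
    layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    layoutInfo.pNext = &bindingFlags;
    layoutInfo.bindingCount = 1;
    layoutInfo.pBindings = &samplerBinding;

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &layout);
    return layout;
}

// At allocation time, chain this to VkDescriptorSetAllocateInfo with the
// actual number of textures currently loaded:
//   VkDescriptorSetVariableDescriptorCountAllocateInfoEXT countInfo = {};
//   countInfo.sType =
//       VK_STRUCTURE_TYPE_DESCRIPTOR_SET_VARIABLE_DESCRIPTOR_COUNT_ALLOCATE_INFO_EXT;
//   countInfo.descriptorSetCount = 1;
//   countInfo.pDescriptorCounts = &actualTextureCount;
```

This pairs with the GL_EXT_nonuniform_qualifier declaration already shown above; when the texture count grows past the allocated count, the set has to be re-allocated, so picking a generous upper bound is still useful.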

I was looking at some of Sascha Willems' examples, specifically `multithreading.cpp`, and was surprised to see that he is creating a secondary command buffer per object. I'm curious to know if this is a fairly standard approach?
Also, it made me wonder how expensive it is to bind the same pipeline (Phong), once per secondary command buffer; once these buffers have been executed (concatenated into a primary command buffer) you effectively have a pipeline bind per object. Given that pipelines are immutable, is it fairly cheap after the first Phong pipeline is bound, so that subsequent Phong binds don't really impact things much?
https://github.com/SaschaWillems/Vulkan/blob/master/examples/multithreading/multithreading.cpp
Thanks

I am wondering whether it would be viable to move the SH coefficient calculation to a compute shader instead of doing it on the CPU, which for our engine requires a readback of the cube map texture. I am not entirely sure how to go about this, since it will be hard to parallelize: each thread would be writing to all the coefficients. A lame implementation would be to have one thread run the entire shader, but I think that's getting into TDR territory.
Currently I am generating an irradiance map, but I am planning to switch to storing it as spherical harmonics because of the smaller footprint.
Does anyone have any ideas on how to move this to the GPU, or is it just not a viable option?

Would anyone be able to point me to a Vulkan example of object picking using the approach where you render each object in a different color to a texture in a separate render pass, then read back that texture's pixels to get the color value?
I'm very new to Vulkan, I'm starting to port my OpenGL apps over, and I use this technique a lot.

Hello,
I am a university student. This year I am going to write my bachelor's thesis about a Vulkan app that renders terrain for real places based on e.g. Google Maps data.
I played World of Warcraft for 10 years and did some research on their terrain rendering. They render the map as a grid of tiles, and each tile had 4 available textures to paint with (now 8 as of the Warlords of Draenor expansion). However, I found an issue with this implementation: gaps between tiles. Is there any technique which solves this problem? I read on Stack Overflow that people's only solution was to use a smoothing tool and fix the gaps manually. Main question: is this terrain rendering technique obsolete? Is there a newer technique that replaces rendering a large map as small tiles?
Should I try to implement terrain rendering as a grid of tiles, or should I use some modern technique (and which one counts as modern for you)? If I should implement the terrain as one large map to prevent gaps between tiles, how are textures applied to such a large map?
Thanks for any advice.

I'm writing a rendering system for our in-house game engine. My idea at first was to include only a Vulkan backend, but then Apple refused to port Vulkan to macOS, and Microsoft released their DXR raytracing for DirectX 12. There is still Radeon Rays for Vulkan, but DXR is directly integrated with the graphics API. So we were thinking of a multiple-backend rendering system with Vulkan for Windows and Linux, DirectX 12 for Windows, and Metal 2 for macOS. But this would lead to an incredible amount of code to write compared to a single API, so my questions are:
Should we stick to Vulkan and maybe use a translation layer like MoltenVK to port it to macOS?
Is it worth it to write the multiple-API renderer?
Should we write a different renderer for each platform and then ship separate executables ?
(Sorry for possibly bad English 😁)

I have been coding since the 90's, and have released a few minor but successful 3D engines over the years — Vivid3D, Trinity3D, Aurora. You've probably not heard of them, but I am quite skilled in the area.
So upon the announcement of RTX cards and their features, I was highly motivated to create a modern 3D engine, in C++ (Visual Studio 2017) using Vulkan. At the moment I have a GTX 1060, which is very fast and more than enough to build the base engine. In a few months I'll be getting an RTX 2070 to implement raytracing in the engine, etc.
The engine has only been in development for a week or so, but it already has a basic structure, using classes. It will be an easy-to-use 3D engine, with support for model imports using Assimp.
So my point is, I am looking for any other Vulkan coders who might be interested in helping develop the engine. The code is on GitHub — open, but I can make it private if we decide to commercialize the engine to make money and support further development.
I want to make a Deus Ex-like mini-game to test and promote the engine, so you can help with that too — even if you are a 3D artist and interested, because I am just a coder at the moment.
So yeah, if anyone is interested please drop me an email or reply here, and I'll add you to the GitHub project (I'll need your GitHub username).
Note: a C# wrapper is planned as well, so if you are a C# coder and would like to help with that (demos, the wrapper, etc.), that would be very cool.
My email is antonyrwells@outlook.com
Thank you.

Folks continue to tell me that Vulkan is a viable general-purpose replacement for OpenGL, and with the impending demise of OpenGL on Mac/iOS, I figured it's time to take the plunge... On the surface this looks like going back to the pain of old-school AMD/NVidia/Intel OpenGL driver hell.
What I'm trying to get a grasp of is where the major portability pitfalls are up front, and what my hardware test matrix is going to be like...
~~
The validation layers seem useful. Do they work sufficiently well that a program which validates is guaranteed to at least run on another vendor's drivers (setting aside performance differences)?
I assume I'm going to need to abstract across the various queue configurations? i.e. single queue on Intel, graphics+transfer on NVidia, graphics+compute+transfer on AMD? That seems fairly straightforward to wrap with a framegraph and bake the framegraph down to the available queues.
Memory allocation seems like it's going to be a pain. Obviously there are some big memory-scaling knobs like render target resolution, enabling/disabling post-process effects, asset resolution, etc. But at some point someone has to play Tetris with the available memory on a particular GPU, and I don't really want to shove that off into manual configuration. Any pointers for techniques to deal with this in a sane manner?
Any other major pitfalls I'm liable to run into when trying to write Vulkan code to run across multiple vendors/hardware targets?
~~
As for test matrix, I assume I'm going to need to test one each of recent AMD/NVidia/Intel, plus MoltenVK for Apple. Are the differences between subsequent architectures large enough that I need to test multiple different generations of cards from the same vendor? How bad is the driver situation in the android chipset space?

Hi
I'm searching for a good implementation of Vulkan in C#. It doesn't need to be the fastest, as it's for an editor, but I need the most complete and most active one.
I've seen a few, but with no updates in the past 5 months. I don't want to adopt a "dead" library, as I'll be using it for the next 3-4 years.
Any ideas? Thanks.

Hi everyone,
I think my question boils down to: "How do I feed shaders?"
I was wondering what good strategies there are for storing mesh transformation data (world matrices) to be used in the shader for transforming vertices, with performance being the priority.
And I'm talking about a game scenario where there are quite a lot of both moving and static entities, which aren't repeated enough to be worth instanced drawing.
So far I've only tried these naive methods:
DX11 :
- Store transforms of ALL entities in a constant buffer ( and give the entity an index to the buffer for later modification )
- Or store ONE transform in a constant buffer, and change it to the entity's transform before each drawcall.
Vulkan :
- Use push constants to send the entity's transform to the shader before each draw call, and maybe use a separate device-local uniform buffer for static entities?
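The push-constant option above could be sketched roughly like this (a sketch, not a full implementation: `Mat4`, `Entity` and the draw parameters are placeholders, and the pipeline layout is assumed to declare a matching VkPushConstantRange for the vertex stage):

```cpp
#include <vulkan/vulkan.h>

struct Mat4 { float m[16]; };
struct Entity { Mat4 world; /* ... */ };

// Record one push + one draw per entity. A 64-byte matrix fits within the
// 128-byte minimum push-constant size the spec guarantees, so this works on
// all implementations without querying limits.
void RecordDraws(VkCommandBuffer cmd, VkPipelineLayout layout,
                 const Entity* entities, uint32_t count) {
    for (uint32_t i = 0; i < count; ++i) {
        vkCmdPushConstants(cmd, layout, VK_SHADER_STAGE_VERTEX_BIT,
                           0, sizeof(Mat4), &entities[i].world);
        vkCmdDraw(cmd, 36, 1, 0, 0); // placeholder vertex count
    }
}
```

The trade-off versus a big per-entity uniform/storage buffer is that push constants live in the command buffer itself, so they're cheap per draw but force re-recording; a device-local buffer indexed per draw keeps command buffers static at the cost of one indirection in the shader.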
Same question applies to lights.
Any suggestions?

I'm writing a small 3D Vulkan game engine in C++. I'm working in a team, and the other members know almost nothing about C++. About three years ago I found this programming language called D, which seems very interesting, as it's very similar to C++. My idea was to implement core systems like rendering, math, serialization and so on in C++, and then wrap it all in a D framework that is easier to use and less complicated. Is it worth it, or should I stick only to C++? Does it have lower performance compared to a pure C++ application?

Cannot get rid of z-fighting (severity varies between no errors at all and ~40% failing).
* up-to-date validation layer has nothing to say.
* pipelines are nearly identical (differences: color attachments, descriptor sets for textures, depth write, depth compare op - LESS for prepass and EQUAL later).
* did not notice anything funny when comparing the draw commands via NSight either - except, see end of this post.
* "invariant gl_Position" for all participating vertex shaders makes no difference ('invariant' does not show up in decompile, but is present in SPIR-V).
* gl_Position calculations are identical for all (also using identical source data: push constants + vertex attribs)
However, when decompiling the SPIR-V back to GLSL via NSight, I noticed something rather strange:
The depth prepass has "gl_Position.z = 2.0 * gl_Position.z - gl_Position.w;" added to it. What is this!? "gl_Position.y = -gl_Position.y;", which is always added to everything, I can understand — Vulkan's NDC is vertically flipped by default in comparison to OpenGL. That is fine. But what is the muckery with z there for? And why is it only added selectively?
Looking at my perspective projection code (the usual matrix multiplication, just simplified):
vec4 projection(vec3 v) { return vec4(v.xy * par.proj.xy, v.z * par.proj.z + par.proj.w, -v.z); }
All it ends up doing is doubling the w-part of 'proj' in z (proj = vec4(1.0, 1.33.., -1.0, 0.2)). How does anything show up at all, given that I draw with compare op EQUAL? Decompile bug?
I am out of ideas.

I have a rather specific question. I'm trying to learn about linked multi GPU in Vulkan 1.1; the only real source I can find (other than the spec itself) is the following video:
Anyway, each node in the linked configuration gets its own internal heap pointer. You can swizzle the node mask to your liking to make one node pull from another's memory. However, the only way to perform the "swizzling" is to rebind a new VkImage / VkBuffer instance to the same VkDeviceMemory handle (but with a different node configuration). This is effectively aliasing the memory between two instances with identical properties.
I'm curious whether this configuration requires special barriers. How do image barriers work in this case? Does a layout transition on one alias automatically affect the other? I'm coming from DX12 land, where placed resources require custom aliasing barriers and each placed resource has its own independent state. It seems like Vulkan functions a bit differently.
Thanks.

bs::framework is a newly released, free and open-source C++ game development framework. It aims to provide a modern C++14 API & codebase, focus on high-end technologies comparable to commercial engine offerings and a highly optimized core capable of running demanding projects. Additionally it aims to offer a clean, simple architecture with lightweight implementations that allow the framework to be easily enhanced with new features and therefore be ready for future growth.
Some of the currently available features include a physically based renderer based on Vulkan, DirectX and OpenGL, unified shading language, systems for animation, audio, GUI, physics, scripting, heavily multi-threaded core, full API documentation + user manuals, support for Windows, Linux and macOS and more.
The next few updates are focusing on adding support for scripting languages like C#, Python and Lua, further enhancing the rendering fidelity and adding sub-systems for particle and terrain rendering.
A complete editor based on the framework is also in development, currently available in pre-alpha stage.
You can find out more information on www.bsframework.io.


Hello everyone!
For my engine, I want to be able to automatically generate pipeline layouts based on shader resources. That works perfectly well in D3D12, as shader resources are not required to specify descriptor tables, so I use a reflection system and map different shader registers to tables as I need. In Vulkan, however, it looks like descriptor sets must be specified both in the SPIR-V bytecode and when creating the pipeline layout (why is that?). So it looks like I will have to mess around with the bytecode to tweak bindings and descriptor sets. I looked at SPIRV-Cross, but it seems it can't emit SPIR-V (funny enough). I also use glslang to compile GLSL to SPIR-V, and for some reason the binding decoration is only present for the resources I explicitly defined.
Does anybody know if there is a tool to change bindings in SPIR-V bytecode?

Hi, I am having problems with all of my compute shaders in Vulkan. They are not writing to resources, even though there are no problems in the debug layer, every descriptor seems correctly bound in the graphics debugger, and the shaders definitely take time to execute. I understand that this is probably a bug in my implementation, which is a bit complex (it's trying to emulate a DX11-style rendering API), but maybe I'm missing something trivial in my logic here? Currently I am doing the following:
Set descriptors, such as VK_DESCRIPTOR_TYPE_STORAGE_BUFFER for a read-write structured buffer (which is non formatted buffer)
Bind descriptor table / validate correctness by debug layer
Dispatch on graphics/compute queue, the same one that is feeding graphics rendering commands.
Insert a memory barrier with both stage masks set to VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, srcAccessMask VK_ACCESS_SHADER_WRITE_BIT, and dstAccessMask VK_ACCESS_SHADER_READ_BIT
Also insert buffer memory barrier just for the storage buffer I wanted to write
My application behaves as if the buffers are empty, and the Nsight debugger also shows empty buffers (it seems like everything is initialized to 0). I also tried the most trivial shader, writing a value of 1 to the first element of a uint buffer. Am I missing something trivial here? What would be another way to debug this further?
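For comparison, here is a sketch of what the write-then-read hand-off for a storage buffer could look like, recorded right after the vkCmdDispatch (placeholder handles; narrow stage masks are used instead of ALL_COMMANDS to make the dependency explicit — this is a guess at a correct barrier, not necessarily where your bug is):

```cpp
#include <vulkan/vulkan.h>

// Make the compute shader's writes to `buffer` available to a later draw (or
// dispatch) that reads it on the same queue.
void BarrierAfterDispatch(VkCommandBuffer cmd, VkBuffer buffer) {
    VkBufferMemoryBarrier barrier = {};
    barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT; // what the dispatch did
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;  // what the consumer does
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.buffer = buffer;
    barrier.offset = 0;
    barrier.size = VK_WHOLE_SIZE;

    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,   // after the dispatch's writes
        VK_PIPELINE_STAGE_VERTEX_SHADER_BIT |
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,  // before the consumer's reads
        0, 0, nullptr, 1, &barrier, 0, nullptr);
}
```

Two things worth double-checking against this: that the barrier is recorded in the same command buffer after the dispatch (a barrier recorded before it orders nothing), and that the descriptor's buffer range actually covers the region the shader writes.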

Hi, running Vulkan with the latest SDK and validation layers enabled, I just got the following warning:
That is really strange, because in DX11 we can have 15 constant buffers per shader stage, and my device (an Nvidia GTX 1050) is DX11 compatible, of course. Did anyone else run into the same issue? How is it usually handled? I would prefer not to enforce a smaller number of constant buffers for the Vulkan device, and to stay as closely compliant with DX11 as possible. Any idea what could be the reason behind this limitation?