Hello!
I would like to introduce Diligent Engine, a project that I've recently been working on. Diligent Engine is a lightweight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native APIs. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. The full source code is available for download on GitHub.
Features:
True cross-platform
Exact same client code for all supported platforms and rendering backends
No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
Exact same HLSL shaders run on all platforms and all backends
Modular design
Components are clearly separated logically and physically and can be used as needed
Only take what you need for your project (don't want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule)
No 15000 lines-of-code files
Clear object-based interface
No global states
Key graphics features:
Automatic shader resource binding designed to leverage the next-generation rendering APIs
Multithreaded command buffer generation
50,000 draw calls at 300 fps with D3D12 backend
Descriptor, memory and resource state management
Modern C++ features to make the code fast and reliable
The following platforms and low-level APIs are currently supported:
Windows Desktop: Direct3D11, Direct3D12, OpenGL
Universal Windows: Direct3D11, Direct3D12
Linux: OpenGL
Android: OpenGLES
MacOS: OpenGL
iOS: OpenGLES
API Basics
Initialization
The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:
#include "RenderDeviceFactoryD3D12.h"
using namespace Diligent;
// ...
GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
// Load the dll and import GetEngineFactoryD3D12() function
LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
auto *pFactoryD3D12 = GetEngineFactoryD3D12();
EngineD3D12Attribs EngD3D12Attribs;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
EngD3D12Attribs.NumCommandsToFlushCmdList = 64;
RefCntAutoPtr<IRenderDevice> pRenderDevice;
RefCntAutoPtr<IDeviceContext> pImmediateContext;
SwapChainDesc SCDesc;
RefCntAutoPtr<ISwapChain> pSwapChain;
pFactoryD3D12->CreateDeviceAndContextsD3D12( EngD3D12Attribs, &pRenderDevice,
                                             &pImmediateContext, 0 );
pFactoryD3D12->CreateSwapChainD3D12( pRenderDevice, pImmediateContext, SCDesc,
                                     hWnd, &pSwapChain );
Creating Resources
Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, populate a BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:
BufferDesc BuffDesc;
BuffDesc.Name = "Uniform buffer";
BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
BuffDesc.Usage = USAGE_DYNAMIC;
BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer );
Similarly, to create a texture, populate a TextureDesc structure and call IRenderDevice::CreateTexture(), as in the following example:
TextureDesc TexDesc;
TexDesc.Name = "My texture 2D";
TexDesc.Type = TEXTURE_TYPE_2D;
TexDesc.Width = 1024;
TexDesc.Height = 1024;
TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
TexDesc.Usage = USAGE_DEFAULT;
TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex );
Initializing Pipeline State
Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required state (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).
Creating Shaders
To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See shader converter for details.
SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter.
To allow grouping of resources based on the frequency of expected change, Diligent Engine introduces classification of shader variables:
Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps etc.
Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.
This post describes the resource binding model in Diligent Engine.
The following is an example of shader initialization:
ShaderCreationAttribs Attrs;
Attrs.Desc.Name = "MyPixelShader";
Attrs.FilePath = "MyShaderFile.fx";
Attrs.SearchDirectories = "shaders;shaders\\inc;";
Attrs.EntryPoint = "MyPixelShader";
Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;
BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;
ShaderVariableDesc ShaderVars[] =
{
{"g_StaticTexture", SHADER_VARIABLE_TYPE_STATIC},
{"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
{"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
};
Attrs.Desc.VariableDesc = ShaderVars;
Attrs.Desc.NumVariables = _countof(ShaderVars);
Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;
StaticSamplerDesc StaticSampler;
StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
StaticSampler.TextureName = "g_MutableTexture";
Attrs.Desc.NumStaticSamplers = 1;
Attrs.Desc.StaticSamplers = &StaticSampler;
ShaderMacroHelper Macros;
Macros.AddShaderMacro("USE_SHADOWS", 1);
Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
Macros.Finalize();
Attrs.Macros = Macros;
RefCntAutoPtr<IShader> pShader;
m_pDevice->CreateShader( Attrs, &pShader );
Creating the Pipeline State Object
To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and formats of render targets, and the depth-stencil format:
PipelineStateDesc PSODesc;
// This is a graphics pipeline
PSODesc.IsComputePipeline = false;
PSODesc.GraphicsPipeline.NumRenderTargets = 1;
PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;
The structure also defines the depth-stencil, rasterizer and blend states, the input layout, and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:
// Init rasterizer state
RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
RasterizerDesc.FillMode = FILL_MODE_SOLID;
RasterizerDesc.CullMode = CULL_MODE_NONE;
RasterizerDesc.FrontCounterClockwise = True;
RasterizerDesc.ScissorEnable = True;
//RasterizerDesc.MultisampleEnable = False; // do not allow MSAA (fonts would be degraded)
RasterizerDesc.AntialiasedLineEnable = False;
When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:
m_pDev->CreatePipelineState(PSODesc, &m_pPSO);
Binding Shader Resources
Shader resource binding in Diligent Engine is based on grouping variables into three categories (static, mutable and dynamic). Static variables are expected to be set only once and may not be changed after a resource is bound; they are intended for global constants such as camera or global light attribute constant buffers, and are bound directly to the shader object:
PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV );
Mutable and dynamic variables are bound via a new object called Shader Resource Binding (SRB), which is created by the pipeline state:
m_pPSO->CreateShaderResourceBinding(&m_pSRB);
Dynamic and mutable resources are then bound through SRB object:
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);
The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as it may affect performance: static variables are generally the most efficient, followed by mutable; dynamic variables are the most expensive. This post explains shader resource binding in more detail.
Setting the Pipeline State and Invoking Draw Command
Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:
// Clear render target
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);
// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
m_pContext->SetPipelineState(m_pPSO);
Also, all shader resources must be committed to the device context:
m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);
When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that a draw command requires a graphics pipeline to be bound, and a dispatch command requires a compute pipeline. Draw() takes a DrawAttribs structure as an argument. Its members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced or indirect, etc.). For example:
DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);
Tutorials and Samples
The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage.
Tutorial 01 - Hello Triangle
This tutorial shows how to render a simple triangle using Diligent Engine API.
Tutorial 02 - Cube
This tutorial demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files, create and use vertex, index and uniform buffers.
Tutorial 03 - Texturing
This tutorial demonstrates how to apply a texture to a 3D object. It shows how to load a texture from file, create shader resource binding object and how to sample a texture in the shader.
Tutorial 04 - Instancing
This tutorial demonstrates how to use instancing to render multiple copies of one object using unique transformation matrix for every copy.
Tutorial 05 - Texture Array
This tutorial demonstrates how to combine instancing with texture arrays to use unique texture for every instance.
Tutorial 06 - Multithreading
This tutorial shows how to generate command lists in parallel from multiple threads.
Tutorial 07 - Geometry Shader
This tutorial shows how to use geometry shader to render smooth wireframe.
Tutorial 08 - Tessellation
This tutorial shows how to use hardware tessellation to implement simple adaptive terrain rendering algorithm.
Tutorial 09 - Quads
This tutorial shows how to render multiple 2D quads, frequently switching textures and blend modes.
The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface.
Atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc.
The repository includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.
Integration with Unity
Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.

Hello everyone.
I'm following the lessons on learnopengl.com and have just finished the chapter on "Deferred Shading". I confess that I am a lighting enthusiast in games. Unfortunately, I did not find anything explaining how to use as many lights as I want at runtime; I only found examples with a limited number of light sources:
for (int i = 0; i < NR_LIGHTS; ++i)
{
    vec3 lightDir = normalize(lights[i].Position - FragPos);
    vec3 diffuse = max(dot(Normal, lightDir), 0.0) * Albedo * lights[i].Color;
    lighting += diffuse;
}
Looking on Google I found some mentions of accumulating information in the framebuffer, but I did not find any code or further explanation.
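From what I could gather, the framebuffer-accumulation idea seems to be: draw one pass per light with additive blending (glBlendFunc(GL_ONE, GL_ONE)), so the hardware sums the contributions instead of a fixed-size shader loop. This is just my guess, so here is a little CPU sketch I wrote to convince myself that N blended passes give the same result as the NR_LIGHTS loop (all the light data is made up):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
Vec3 operator*(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

struct Light { Vec3 position, color; };

// The body of the shader loop: one light's diffuse contribution.
Vec3 oneLight(const Light& l, Vec3 fragPos, Vec3 normal, Vec3 albedo)
{
    Vec3 lightDir = normalize(l.position - fragPos);
    return albedo * l.color * std::fmax(dot(normal, lightDir), 0.0f);
}

// Version 1: the fixed NR_LIGHTS loop from the tutorial.
Vec3 shadeLoop(const std::vector<Light>& lights, Vec3 fragPos, Vec3 normal, Vec3 albedo)
{
    Vec3 lighting;
    for (const Light& l : lights)
        lighting = lighting + oneLight(l, fragPos, normal, albedo);
    return lighting;
}

// Version 2: one draw "pass" per light. With glBlendFunc(GL_ONE, GL_ONE)
// the GPU computes dst = dst + src, so drawing N passes accumulates the
// same sum in the framebuffer -- with no compile-time limit on N.
Vec3 shadeBlendedPasses(const std::vector<Light>& lights, Vec3 fragPos, Vec3 normal, Vec3 albedo)
{
    Vec3 framebuffer;                             // starts cleared to zero
    for (const Light& l : lights)                 // one pass per light
        framebuffer = framebuffer + oneLight(l, fragPos, normal, albedo);
    return framebuffer;
}
```

If that is right, then each pass would render the light's volume (or a full-screen quad) sampling the G-buffer, and the light count is only limited by fill rate, not by a shader constant. Please correct me if I misunderstood.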
Could someone explain to me how I could do this?
Pseudocode with the OpenGL commands would be fine.
Thanks in advance to you all.

Hello!
For those who don't know me, I have started quite a number of threads about textures in OpenGL. I was encountering bugs like a texture not appearing correctly (even though my code and shaders were fine), or access violations when uploading a texture to the GPU. Mostly I thought these might be AMD bugs, because when someone else ran my code he got a correct result. Then someone told me: "Some driver implementations are more forgiving than others, so it might be that your driver does not forgive that easily. This might be why others can see the output you were expecting." I did not believe him and moved on.
Then Mr. @Hodgman showed me the light. He explained some things about images and what channels are (I had no clue), and with some research of my own I learned how digital images work in theory and what channels are. By also reading this article about image formats I learned some more.
The question now is: if, for example, I want to upload a PNG to the GPU, can I be 100% sure that I can use 4 channels? Or, even though the image is a PNG, might it not contain all 4 channels (RGBA)? If so, I need to retrieve that information somehow, so that my code below can tell the driver how to read the data based on the channels.
I'm asking this just to know how to properly write the code below (the variables in capitals are the ones I want you to tell me how to specify):
stbi_set_flip_vertically_on_load(1);
//Try to load the image.
unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, HOW_MANY_CHANNELS_TO_USE);
//Image loaded successfully.
if (data)
{
    //Generate the texture and bind it.
    GLCall(glGenTextures(1, &m_id));
    GLCall(glActiveTexture(GL_TEXTURE0 + unit));
    GLCall(glBindTexture(GL_TEXTURE_2D, m_id));
    GLCall(glTexImage2D(GL_TEXTURE_2D, 0, WHAT_FORMAT_FOR_THE_TEXTURE, m_width, m_height, 0, WHAT_FORMAT_FOR_THE_DATA, GL_UNSIGNED_BYTE, data));
}
So, back to my question. If I'm loading a PNG, tell stbi_load to use 4 channels, and then pass WHAT_FORMAT_FOR_THE_DATA = GL_RGBA to glTexImage2D, can I be sure that the driver will read the data properly without an access violation?
I want to write code that, no matter the image file, will always read the data correctly and upload it to the GPU.
Like 100% of the OpenGL tutorials and guides out there (even one I purchased on Udemy) were not explaining all this stuff, and this is why I was experiencing all these bugs and got stuck for months!
Also, here is some documentation about stbi_load that might help:
// Limitations:
// - no 12-bit-per-channel JPEG
// - no JPEGs with arithmetic coding
// - GIF always returns *comp=4
//
// Basic usage (see HDR discussion below for HDR usage):
// int x,y,n;
// unsigned char *data = stbi_load(filename, &x, &y, &n, 0);
// // ... process data if not NULL ...
// // ... x = width, y = height, n = # 8-bit components per pixel ...
// // ... replace '0' with '1'..'4' to force that many components per pixel
// // ... but 'n' will always be the number that it would have been if you said 0
// stbi_image_free(data)
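Given all of the above, here is what I am currently thinking of doing: pass 0 to stbi_load so the file keeps its own channel count, then pick the GL format from the m_channels it reports. The enum values below are the standard ones from the GL headers, written out only so the sketch is self-contained; the alignment comment is my understanding of why 3-channel uploads can crash. Please correct me if the mapping is wrong:

```cpp
#include <cassert>

// Standard OpenGL pixel-format enum values (normally they come from
// <GL/gl.h>; written out here only so this sketch is self-contained).
constexpr unsigned GL_RED  = 0x1903;
constexpr unsigned GL_RG   = 0x8227;
constexpr unsigned GL_RGB  = 0x1907;
constexpr unsigned GL_RGBA = 0x1908;

// Map the channel count reported by stbi_load (when you pass 0 as the
// last argument) to the format for glTexImage2D. Returns 0 for counts
// stb_image can never report.
unsigned formatForChannels(int channels)
{
    switch (channels)
    {
        case 1:  return GL_RED;   // grey
        case 2:  return GL_RG;    // grey + alpha (GL just sees two channels)
        case 3:  return GL_RGB;   // careful: rows may not be 4-byte aligned,
                                  // so glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
                                  // is needed, or the driver reads past the
                                  // end of the buffer -- my access violation?
        case 4:  return GL_RGBA;
        default: return 0;
    }
}
```

Alternatively, I guess always forcing 4 channels (stbi_load(..., 4) with GL_RGBA) would sidestep the alignment issue entirely, since 4-byte pixels keep every row aligned.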

Hello!
I was trying to load some textures and I was getting an access violation in atioglxx.dll.
stb_image, which I'm using to load the PNG file into memory, was not reporting any errors.
I found a post on the internet explaining that it is a bug in the AMD driver.
I fixed the problem by changing the image file I was using. The image that was causing the issue was generated by this online GIF-to-PNG converter.
Does anyone know more about it?
Thank you.

Hello everybody! I decided to write a graphics engine, the killer of Unity and Unreal. If anyone is interested and has free time, join in. The high-level renderer is built on low-level OpenGL 4.5 and DirectX 11. Ideally, there will be PBR, TAA, SSR, SSAO, some variant of an indirect lighting algorithm, and support for multiple viewports and multiple cameras. The key feature is that it is COM-based (binary compatibility is needed). There will be no physics, ray tracing, AI, or VR. I borrowed the basic architecture from the DGLE engine. The editor will be built on Qt (https://github.com/fra-zz-mer/RenderMasterEditor); there is already a buildable editor. The main point of the engine is maximum transparency of the architecture and high-quality rendering. There will be no new language for shaders; everything will be handled with defines.

Hi, I've recently been trying to implement screen space reflections in my engine; however, the result is extremely buggy. I'm using this tutorial: http://imanolfotia.com/blog/update/2017/03/11/ScreenSpaceReflections.html
The reflections look decent when I am close to the ground (first image), however when I get further away from the ground (the surface that is reflecting stuff), the reflections become blocky and strange (second image).
I have a feeling it has something to do with the fact that the further the rays travel in view space, the more spread out the samples get, so the reflected image is less detailed, hence the blockiness. However, I'm really not sure about this, and even if this is the case, I don't know how to fix it.
It would be great if anyone had any suggestions around how to debug or sort this thing out. Thanks.
Here is the code for the ray casting
vec4 ray_cast(inout vec3 direction, inout vec3 hit_coord, out float depth_difference, out bool success)
{
    vec3 original_coord = hit_coord;
    direction *= 0.2;
    vec4 projected_coord;
    float sampled_depth;
    for (int i = 0; i < 20; ++i)
    {
        hit_coord += direction;
        projected_coord = projection_matrix * vec4(hit_coord, 1.0);
        projected_coord.xy /= projected_coord.w;
        projected_coord.xy = projected_coord.xy * 0.5 + 0.5;
        // view_positions stores the view space coordinates of the objects
        sampled_depth = textureLod(view_positions, projected_coord.xy, 2).z;
        if (sampled_depth > 1000.0) continue;
        depth_difference = hit_coord.z - sampled_depth;
        if ((direction.z - depth_difference) < 1.2)
        {
            if (depth_difference <= 0.0)
            {
                // binary search for a more detailed sample
                vec4 result = vec4(binary_search(direction, hit_coord, depth_difference), 1.0);
                success = true;
                return result;
            }
        }
    }
    success = false;
    return vec4(projected_coord.xy, sampled_depth, 0.0);
}
Here is the code just before this gets called
float ddepth;
bool hit = false;
vec3 jitt = mix(vec3(0.0), vec3(hash33(view_position)), 0.5);
vec3 ray_dir = reflect(normalize(view_position), normalize(view_normal));
ray_dir = ray_dir * max(0.2, -view_position.z);
/* ray cast */
vec4 coords = ray_cast(ray_dir, view_position, ddepth, hit);

Hello, dear friends!
It's been a month and a half since my last diary entry, and a huge amount of work has been done during this time. This was my task sheet, not counting the tasks related to in-game mechanics:
The tasks were not performed in the order listed, and the sheet does not include the small tasks that had to be solved along the way. Many of the tasks did not involve me at all; for example, Alex was gradually reworking the buildings:
Work on choosing the color scheme of the terrain:
The option that we have settled on to date is shown a little below.
The first task I had was implementing shadows from objects on the map, and the first attempts with a shadow map gave this result:
After a short struggle I managed to get this result:
Next, the task was to fix the water: pick good textures, coefficients and variables for a better look, and add specular glare on the water:
At the same time, another modeler joined our small team and made us a new unit:
The model came with a specular map, but my engine did not support that kind of material, so I had to spend time implementing specular map support. In parallel with this task, the lighting finally had to be finished.
As they say, one thing leads to another: I also had to add support for shadows affecting the specular term:
And to make an adjustable light source so everything could be checked:
As you can see, there is now a panel where you can control the position of the light source. But that was not all: I had to add an additional light source simulating reflected light, tied to the camera position, to get a more realistic specular response on the shadowed side. As you can see, the armor gleams from the shadow side:
Wow, how much free time was killed on the animation of this character and the exact import of the animation! But now everything works fine. Soon I will record a video of the gameplay.
Meanwhile, Alexei rolled out a new model of the mine:
To take such a close-up screenshot of the mine, I had to untether the camera, which made it possible to enjoy the views:
While working on city construction, a mechanism for expanding the administrative zone of a city was implemented; in the screenshot it is shown in white:
I hope you have read our previous diary on the visualization system for urban areas:
As you may have noticed in the last screenshot, the shadows are better than before. There was an error in the shadow calculation: shadows were offset behind even the smallest objects, making them appear to hang in the air. Now shadows fall much more naturally.
The map generator was slightly reworked: the hills were tweaked and made smoother, and glaciers now appear on land close to the poles:
A lot of work has been done to optimize rendering: shaders were rewritten in places to eliminate weaknesses, and the storage and rendering of visible tiles was optimized, which gave a significant and stable FPS increase on weak computers. Camera movement and rotation were made smooth, completely eliminating the previously visible jerks.
This is not a complete list of everything that was solved; I simply do not remember it all.
Plans for the near future:
- The interface: we have a big problem with it and really need help from specialists in this area.
- Implementation of army clashes.
- Implementation of urban growth; I have not completed this mechanism.
- Implementation of the first rudiments of AI: army maneuvering, decision-making, and reactions to clashes with enemy armies.
- Implementation of a mechanism for storing the AI's attitude toward its enemies; diplomacy.
- AI for cities.
Thank you for your attention!
Join our group in FB: https://www.facebook.com/groups/thegreattribes/

I'm trying to write a game in OpenGL using C++. Among third-party libraries for creating windows (widgets?) inside OpenGL, I was able to add ImGui to my project, create a window and attach some functions to it. But I could not find information on how to change the style of such a window. Specifically, I need to create the game's start menu (start game, settings, exit, etc.) and in-game windows (inventory, character menu, minimap, chat, etc.). I have heard about Qt, but given its size, my program would weigh 3-4 times more than I would like. Besides, I do not need super high-quality graphics or a large set of visualization capabilities; I would like to understand what my program consists of and have a grasp of the basic concepts of how this is implemented. Could you advise: is there a similar open-source C++ library with the ability to create and style in-game windows (or maybe ImGui has this capability after all)?

Hi,
I was studying how to make a bloom/glow effect in OpenGL, following the tutorials from learnopengl.com and ThinMatrix (YouTube), but I am still confused about how to generate the bright-color texture to be used for the blur.
Do I need to put lights in the area where I want the glow to happen, so it will be brighter than the other objects in the scene? That would mean I need to draw the scene with the lights first.
Or can the brightness be extracted from the rendered/textured color of the model through a formula or something?
I have a scene that looks like this, and I want the crystal to glow.
Can somebody enlighten me on how, or what the correct approach is?
Really appreciated!
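In case it helps the discussion, here is my current guess at the second option, based on the learnopengl chapter: a "bright pass" that keeps only pixels whose luminance is above a threshold. The luminance weights are the ones from that chapter; the 0.7 threshold is just a number I picked. This is a CPU sketch of the per-pixel logic, not working shader code:

```cpp
#include <cassert>
#include <cmath>

// Perceptual luminance of an RGB color (Rec. 709 weights, the same as the
// learnopengl chapter's dot(color, vec3(0.2126, 0.7152, 0.0722))).
float luminance(float r, float g, float b)
{
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}

struct Color { float r, g, b; };

// The bright pass: keep the color if it is bright enough, else output
// black. In the real shader this would write to a second color
// attachment, which is then blurred and added back onto the scene.
Color brightPass(Color c, float threshold)
{
    if (luminance(c.r, c.g, c.b) > threshold)
        return c;          // bright enough: goes into the blur texture
    return {0, 0, 0};      // too dim: contributes nothing to the glow
}
```

If I understand it correctly, this would mean the crystal only needs to end up brighter than the threshold (via its texture or an emissive term), rather than needing actual light sources placed around it — but I would appreciate confirmation.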

Hi,
I'm trying to produce volumetric light in OpenGL following the implementation details of "Volumetric Light Effects in Killzone: Shadow Fall" from GPU Pro 5. I am confused about the number of passes needed to create the effect.
So I have the shadow pass, which renders the scene from the light's POV; then the G-buffer pass, which renders the whole scene to textures; and finally a third pass, which performs ray marching for every pixel and computes the accumulated scattering factor according to its distance from the light in the scene (binding the shadow map from the first pass). Then what? Blend these buffers onto a full-screen quad in a final pass? Or should I do the ray marching on the result of combining the shadow map and the G-buffer?
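To check my own understanding of that third pass, I wrote this CPU sketch of what I think it computes per pixel: march from the camera toward the depth stored in the G-buffer, test each sample against the shadow map, and accumulate scattering only where the sample is lit. Here `litAt` stands in for the real shadow-map lookup, and there is no phase function or distance falloff, so treat it as a toy:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// CPU sketch of the ray-marching pass as I understand it: step along the
// view ray between the camera (depth 0) and the surface depth from the
// G-buffer; for each sample that the shadow map says is lit, accumulate a
// bit of in-scattered light. Uniform medium, no phase function.
float accumulateScattering(float surfaceDepth, int numSteps,
                           const std::function<bool(float)>& litAt)
{
    float stepSize = surfaceDepth / numSteps;
    float scattering = 0.0f;
    for (int i = 0; i < numSteps; ++i)
    {
        float depth = (i + 0.5f) * stepSize;       // sample at mid-step
        if (litAt(depth))                          // visible from the light?
            scattering += stepSize / surfaceDepth; // lit portion of the ray
    }
    return scattering; // in [0, 1]; 1 means the whole ray was lit
}
```

If this is right, the pass only needs the shadow map and the G-buffer depth as inputs and outputs one scattering value per pixel, which is then combined with the lit scene in the final full-screen pass — so I would not pre-blend the shadow map and G-buffer first. Corrections welcome.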
Thanks in advance

This is just a brief note on my participation in the challenge. The game developed for it is a 3D remake of the original Frogger concept and has been made available as open source under the BSD (3-clause) license.
It is a small casual game in which the player controls a frog. The frog has to get to the other side of a road, avoiding passing cars, and then cross a pond in which wooden planks float. The cars can crush the frog, and if it fails to use the planks when crossing the pond, it drowns. As a side note, I have always wondered why this is so, ever since the 80s. It is an amphibian, after all...
The game works on Windows, MacOS and Linux, and I used my own small3d framework to develop it, in C++. small3d is also provided as an open-source, BSD-licensed project.
This is not a masterpiece, but I think it's OK for something developed over the course of a couple of weeks. I only noticed the gamedev.net challenge when an announcement was made about the extension of the submission deadline.

Well, I have a peculiar problem. When I set my global variable screen = 0.0f and call my drawcollision_one() function from my display function, it draws only a static collision sprite. However, when I set screen = 0.0001f and call drawcollision_one() from my display function, it draws an animated collision sprite. Also, when I use screen = -0.0f and call drawcollision_one() from my collision detection function, it draws a static collision sprite. However, when I use screen = 0.0001f, nothing is drawn at all when drawcollision_one() is called from the collision detection function. What I want is to draw the animated collision sprite when the collision detection function fires. Let me know if you need further explanation. I am using freeglut 3.0.0 and SOIL in my program.

I'm totally new to game dev, and I want to say "it's not difficult", but sometimes I get stuck in tiny holes with nothing to dig my way out with.
Basically, I've been following this Java OpenGL (using JOGL) 2D series on YouTube for a while, as an attempt at my first game in Java. But clearly it's not going well.
I followed the series up to episode 18, but after that, in episode 19, the tutor implemented a KeyListener in order to move the player around.
I did the same thing he did, but when I hold down the up/down/right or left key, the player moves for a bit but stops after about 2 seconds. For it to move again, I have to release the key and hold it down again.
I personally think this is a problem with JOGL (the library I'm using), but I would like a solution to the problem, since I have already gone through the trouble of making an entire game engine.
Anyways, here's the link to the video: Java OpenGL 2D Game Tutorial - Episode 19 - Player Input
The code I used for player input is exactly the same as the tutor's!
Thanks...

Hello, dear friends!
In this short extra diary we decided to tell you about the new city expansion system that has been added to the game. As you may remember, the original layout of urban areas had a pronounced square structure:
At first this was quite enough, but since this graphic element was quite conspicuous and raised natural questions from some users, it was decided to give it a more meaningful form, especially since this was already in our immediate plans.
To do this, we had to develop a set of urban areas and their parts that would give a growing city a visually more natural and pleasant look, as well as the logic of their interrelations.
The end result is a set of models that, in theory, should cover all possible expansion options on flat terrain:
Now the starting version of the city looks like this:
As you can see, thanks to the additional extensions of residential areas, which are not actually functional and serve only as decoration, the city gets a more natural and visually pleasing silhouette.
For those interested in the logic behind how district models are used during city expansion, a number of technical screenshots with explanations are attached under the spoiler:
The initial version of the residential area:
To soften its square appearance, additional elements are added to it. These elements, as noted above, are not independent areas and serve only as graphic decoration:
The city can expand in any direction. For example, suppose a new urban area appears to the right of an existing one. The current right extension disappears, and the following type of area appears in its place:
If the area appears diagonally to the lower left, it keeps a square shape, and its additional extensions take the following form:
If the area is built above the original one, the city takes the following form:
Here an area has been built diagonally to the lower right of the starting one. The second built area is replaced by another modification, and the additional area at the bottom becomes U-shaped:
Although not all possible development options are shown here, following this logic theoretically allows you to build cities of any shape.
To sum up, here is a screenshot of a large city built in the game this way:
As an added bonus, a screenshot of the Outpost:
Thank you all for your attention and see you soon!

Hello.
I'm trying to implement OpenCL/OpenGL interop via clCreateFromGLTexture (texture sharing):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
With such a texture I expected that write_imagei and write_imageui would work, but they don't; only write_imagef works. The behaviour is the same on the Intel and NVIDIA GPUs in my laptop. Why is this, and why is this information not in any documentation or anywhere on the internet? This pitfall cost me several hours, and probably the same for many other developers.
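In case it saves someone else the hours it cost me, here is the mapping as I eventually pieced it together from the cl_khr_gl_sharing tables (this is my understanding, not gospel): the CL image channel data type is derived from the GL internal format, and GL_RGBA8 maps to CL_UNORM_INT8, a normalized format, which only write_imagef may write. The hypothetical helper below just encodes that table:

```cpp
#include <cassert>
#include <string>

// My understanding of the cl_khr_gl_sharing mapping: the CL channel data
// type is derived from the GL *internal* format, and that type decides
// which write_image* overload is legal in the kernel:
//   GL_RGBA8   -> CL_UNORM_INT8    -> write_imagef (normalized floats)
//   GL_RGBA8UI -> CL_UNSIGNED_INT8 -> write_imageui
//   GL_RGBA8I  -> CL_SIGNED_INT8   -> write_imagei
std::string writeFuncForInternalFormat(const std::string& glInternalFormat)
{
    if (glInternalFormat == "GL_RGBA8")   return "write_imagef";
    if (glInternalFormat == "GL_RGBA8UI") return "write_imageui";
    if (glInternalFormat == "GL_RGBA8I")  return "write_imagei";
    return "unknown";
}
```

So, if I have this right, to use write_imageui the GL texture would need to be created with an integer internal format such as GL_RGBA8UI in the first place.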