Hot questions for Using Lightweight Java Game Library in render

I am currently following ThinMatrix's OpenGL tutorial on rendering with VAOs and VBOs. I copy the code almost exactly (the only difference being that I make a factory class static instead of instantiating it normally). The only technical difference I can see between my version of the program and his is that I am using LWJGL 3 instead of LWJGL 2.

Upon looking around I've found this Stack Overflow post: Java OpenGL EXCEPTION_ACCESS_VIOLATION on glDrawArrays only on NVIDIA, which seems to describe the same problem, and the OP of that question actually posted a solution.

I am using an AMD graphics card, but I still gave it a try to see if it fixed my problem. It does nothing, however; the program still fails with the same error message.

Putting in some breakpoints, I have found that the problem lies in the createVAO() method; more specifically, the call to glGenVertexArrays() fails for some reason. I have tried explicitly telling GLFW to use OpenGL 3.0, but it still doesn't help.

At this point I am completely out of ideas.
Any guidance on what I should do?

Answer:

The problem is this:

RawModel model = modelLoader.loadToVAO(vertices);
initApp();

You need to flip it around:

initApp();
RawModel model = modelLoader.loadToVAO(vertices);

The problem is that when you call modelLoader.loadToVAO(vertices), it calls glGenVertexArrays() (as you've observed). At this point, however, you don't have a current context set; that only happens in initApp(), which calls glfwMakeContextCurrent().

You must have a current context set before calling any OpenGL functions.
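
For reference, a minimal sketch of the required order with LWJGL 3 and GLFW (window parameters assumed):

long window = glfwCreateWindow(800, 600, "Demo", NULL, NULL);
glfwMakeContextCurrent(window);  // make the context current on this thread
GL.createCapabilities();         // required by LWJGL 3 before any GL call
// only now is it safe to call glGenVertexArrays() and the rest of OpenGL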

I'm having some trouble rendering Master Chief in Java using LWJGL and GLSL shaders: there is some flickering, disappearing of polygons, and strange colouring. And for the life of me I can't figure out why.

What you describe and is shown in your sample images are typical symptoms of a problem that is often called "depth fighting" or "z-fighting". This is caused by precision limitations of the depth buffer.

The most common scenario where this becomes an issue is if the range covered by the depth buffer is large, and the scene contains polygons with very similar depth values.

For example, picture a polygon A that is slightly in front of polygon B in world space. Pixels from polygon A and polygon B can end up with the same depth value after all transformations are applied, and the resulting depth is rounded to the available depth buffer precision. Depending on the order of drawing, the pixel from polygon A or polygon B will be visible in this case. The typical result is that a mix of pixels from polygon A and polygon B will show up where polygon A should cover polygon B.

There are a number of ways to address this:

Reduce the depth range. In a standard perspective projection, this is controlled by the near and far plane distances, where the far/near ratio is the critical quantity. Which values cause depth fighting depends heavily on the scene and depth buffer precision. The safest bet is to keep the ratio as small as possible. In most cases, ratios up to about 100 rarely cause problems, while ratios of 1000 and higher can start causing issues.

Increase the depth buffer precision. The most common depth buffer sizes are 16-bit and 24-bit, and many GPUs support both. If things get problematic, choose at least 24-bit (see the GLFW sketch after this list). Depending on hardware and OpenGL version, higher-resolution depth buffers might be available.

Avoid rendering polygons with almost identical depth. Either remove hidden polygons that are very close in depth to visible polygons, or at least move them farther apart.

If the above is not enough, the solutions get more complex. There are scenarios where there really is geometry with a large depth range that must be visible at the same time. Methods for dealing with these (relatively rare) cases include logarithmic depth buffers and multi-pass rendering approaches.
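
Regarding the depth buffer precision item above: with GLFW (as used by LWJGL 3), the depth buffer size can be requested with a window hint before the window is created; a one-line sketch:

glfwWindowHint(GLFW_DEPTH_BITS, 24); // request at least a 24-bit depth buffer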

Note that my answer is purely about the case where the original polygons in world space have different depth. If polygons with exactly the same depth (i.e. coplanar polygons) are drawn, depth fighting will almost always be the result, and this situation needs to be avoided with other methods. Since this doesn't look like the scenario here, I intentionally did not cover it.

Note: the projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. With an orthographic projection, the coordinates in view space are linearly mapped to clip space coordinates.

Made it 100x bigger, still no sign of it!

If you just scale the triangle to (0, 0, 0), (0, 100, 0), (100, 100, 0), you still won't see it, because of the view matrix:

lookAt(0, 0, -1, 0, 0, 0, 0, 1, 0 );

With this view matrix, the x-axis is inverted in a right-handed coordinate system. The view coordinate system describes the direction and position from which the scene is looked at. The view matrix transforms from world space to view (eye) space. In view space, the X-axis points to the left, the Y-axis up, and the Z-axis out of the view (note that in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).

Your view is defined as follows:

position = (0, 0, -1)
center = (0, 0, 0)
up = (0, 1, 0)

so the y-axis and z-axis are:

y-axis = up = (0, 1, 0)
z-axis = position - center = (0, 0, -1)

the x-axis is the cross product of the y-axis and the z-axis:

x-axis = cross(y-axis, z-axis) = (-1, 0, 0)

This means that you are looking at the back of the triangle. This causes the triangle to be "flipped" out of the viewport: it is drawn off-screen, to the left, so you can't see it.
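
A hedged sketch of one fix, assuming JOML: place the camera on the +Z side instead, so the view's x-axis points to the right and the triangle's front face is toward the camera.

Matrix4f view = new Matrix4f().lookAt(
    0.0f, 0.0f, 1.0f,  // eye on the +Z side of the triangle
    0.0f, 0.0f, 0.0f,  // looking at the origin
    0.0f, 1.0f, 0.0f); // up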

I am trying to figure out why I can't get any textures to render with LWJGL 3. I've tried multiple ways to load (PNGDecoder, STB, BufferedImage) and to render textures, but the result is always a white quad.

You are missing the propagation of texturecoordinatesin in the vertex shader. Add this anywhere in the vertex shader:

texturecoordinatesout = texturecoordinatesin;

Also, you are making a mistake when you set up the vertex attribute pointer. The last parameter of glVertexAttribPointer for attribute 2 should be 32: a 4-float offset for position plus a 4-float offset for color is a total of 8 floats, or 32 bytes:
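
A hedged sketch of the full interleaved layout this implies (4 floats position, 4 floats color, 2 floats texture coordinates per vertex; attribute indices assumed):

int stride = 10 * Float.BYTES; // 40 bytes per vertex
glVertexAttribPointer(0, 4, GL_FLOAT, false, stride, 0);               // position at byte 0
glVertexAttribPointer(1, 4, GL_FLOAT, false, stride, 4 * Float.BYTES); // color at byte 16
glVertexAttribPointer(2, 2, GL_FLOAT, false, stride, 8 * Float.BYTES); // texture coords at byte 32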

I began using LWJGL after watching some tutorials on YouTube. I have a problem with my quad not rendering: whenever I run the program, I just get a black screen. I searched a lot for the answer but I couldn't find it. I don't know what I did wrong. Could you help me?

I solved it. The quad was not rendering because, in the model class, the glBufferData(GL_ARRAY_BUFFER, buffer, GL_STATIC_DRAW) call for the index data needed GL_ELEMENT_ARRAY_BUFFER instead of GL_ARRAY_BUFFER. Thanks for the help!
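
For anyone hitting the same thing, a minimal sketch of the corrected upload (the id name is assumed; buffer is the index data from the question):

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, buffer, GL_STATIC_DRAW); // index data, not vertex data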

I started watching these tutorials for creating a 2D top-down game using LWJGL, and I read that VBOs should be fast. Yet for rendering 48*48 tiles per frame I only get about 100 FPS, which is pretty slow, because I will add a lot more to the game than just some static, unchanging tiles.

What can I do to make this faster? Keep in mind that I just started learning LWJGL and OpenGL, so I probably won't know many things.

Anyways, here are some parts of my code (I removed some parts from the code that were kinda meaningless and replaced them with some descriptions):

I don't know what else to put here but the full source code and resources + shader files are available on github here.

Answer:

With your current system, what I would recommend doing is grouping your tiles based on texture. Create something like this:

Map<Texture, List<Tile>> tiles = new HashMap<>();

Then, when you go to render your map of tiles, you only need to set the texture once per group of tiles, rather than once per tile. This saves PCI-E bandwidth spent pushing textures/texture IDs to the GPU. You would achieve that like this (pseudo code):
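
// a rough sketch; Texture.bind() and renderTile(...) are assumed names
for (Map.Entry<Texture, List<Tile>> group : tiles.entrySet()) {
    group.getKey().bind();                // bind each texture once per group
    for (Tile tile : group.getValue()) {
        renderTile(tile);                 // draw every tile that uses this texture
    }
}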

Something else I see along these lines is that you are pushing the projection matrix for each tile individually. While a shader program is running, the value of a given uniform stays the same until you change it or the program ends. Set the projection matrix uniform once.

It also appears that you are recalculating this on every renderTile(...) call. Given that the value does not change, calculate it once before the render pass, then pass it into renderTile(...) as a parameter rather than passing in camera and world.

How can I render text in LWJGL/OpenGL with FreeType?
The only tutorials I can find are in C++, but I don't understand C++.
If there is no tutorial for rendering text with FreeType in Java,
how can I rewrite the C++ code in Java?

Answer:

There is a small FreeType wrapper in libGDX (a Java OpenGL-related library). The FreeTypeFontGenerator class uses FreeType to generate a bitmap font of a given size. You can then use it to render your text through libGDX facilities or standard OpenGL facilities:
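
// a minimal sketch, assuming the gdx-freetype extension is on the classpath;
// the font path is hypothetical
FreeTypeFontGenerator generator =
        new FreeTypeFontGenerator(Gdx.files.internal("fonts/myfont.ttf"));
FreeTypeFontGenerator.FreeTypeFontParameter parameter =
        new FreeTypeFontGenerator.FreeTypeFontParameter();
parameter.size = 16;                      // pixel height of the generated glyphs
BitmapFont font = generator.generateFont(parameter);
generator.dispose();                      // free the native FreeType resources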

For binding I use
glBindVertexArray()
and
glBindBuffer(GL_ARRAY_BUFFER, )

For drawing I use
glDrawArrays()

Is the way I set up my VAOs wrong, or do you need my code to solve my problem?

Answer:

When using glDrawArrays, the count parameter specifies the number of vertices, not (unlike some other functions) the number of values (floats) or bytes. OpenGL does not check whether the count exceeds the capacity of the VBO, so when you specify a larger count than there is storage in the VBO, OpenGL will silently read values from adjacent memory, which in this case contains your other VBOs.

So make sure you pass the number of vertices; the number of floats is three times as large, and passing it draws all your data instead of just the data for one chunk.
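
For illustration, with 3 position floats per vertex:

int floatCount = 9;                            // one triangle = 3 vertices * 3 floats
glDrawArrays(GL_TRIANGLES, 0, floatCount / 3); // count = 3 vertices, not 9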

Original Post

It seems that you have depth testing disabled. Depth testing discards all fragments that are farther away than existing ones, preventing farther objects from covering closer ones.

I've been trying to get into OpenGL with LWJGL and I've run into an issue that I cannot find a solution to. When trying to draw a triangle with the code below, the window opens correctly and begins flashing a shape that isn't necessarily the intended triangle (sometimes it appears briefly, but often there are rectangles in one of the quadrants of the window).

Part of my hesitation is that OpenGL, by my reading of various posts and docs online, has changed within recent memory to a less functional, more object-oriented approach (VBOs and GLSL?) with GL4. Am I correct in this understanding, and what are the preferred resources for learning this newer OpenGL with LWJGL?

vBuff.put(tri) transfers the data to the buffer, beginning at the current position (the start of the buffer in this case). The buffer position is incremented by the size of the data, so the new buffer position is at the end of the new data.

flip() sets the limit (length) of the buffer to the current position and then the position is set to zero.

Further, it is not necessary to create and fill the buffer continuously in the loop; it would be sufficient to do that once before the loop:
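
// a minimal sketch; tri is the vertex array from the question
FloatBuffer vBuff = BufferUtils.createFloatBuffer(tri.length);
vBuff.put(tri);   // position now sits at the end of the data
vBuff.flip();     // limit = old position, position = 0: ready for reading
glBufferData(GL_ARRAY_BUFFER, vBuff, GL_STATIC_DRAW); // upload once; only draw in the loop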

Background: I have been working on a game engine in LWJGL. I normally work on my desktop with an NVIDIA graphics card. When using that card, everything works properly (i.e. the scene renders, the UI renders, and everything updates). However, when I use my Surface Pro 4 and its integrated graphics (Intel 530), the scene and UI seem to render at least 2 times (to fill both front and back buffers), then the scene and UI stop updating. I can confirm that the application is still running, as my in-console FPS counter still works.

I use the Nuklear demo provided by LWJGL here. Does anyone have any ideas on why this is, or is this most likely a hardware issue?

Thanks in advance!

Answer:

To answer my own question, it seems that a driver update fixed the issue. Another issue that I just found was that the inputs seem to be several pixels off. I can confirm this with other, third-party software such as Blender. It seems there is nothing I can do, so hopefully this response will be useful for someone else experiencing similar issues.

You usually draw them in counterclockwise order for a quad (or triangle) that is facing you (this is called "winding"), and in clockwise order for one that is facing away from you. In your case, you have drawn them as if they formed a letter "Z", which is invalid.
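
For reference, a minimal sketch of counterclockwise order for a front-facing quad (immediate mode, coordinates assumed):

glBegin(GL_QUADS);
glVertex2f(-0.5f, -0.5f); // bottom-left
glVertex2f( 0.5f, -0.5f); // bottom-right
glVertex2f( 0.5f,  0.5f); // top-right
glVertex2f(-0.5f,  0.5f); // top-left
glEnd();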

I am trying to learn how to use LWJGL3 and I just got to a state where I want to render something (a test quad for now). I have a class that represents a mesh where I set up the VAO with vertex, colour and indices buffers and another object later takes the mesh instance, retrieves its VAO ID and attempts to render it.

The problem I have is that no matter what I try, nothing renders in the window. I can change the background colour through the glClearColor() method but the quad never shows up.

The problem was not in the parts of code shown, but in the main loop that I copied from a book without thoroughly thinking through what it did. I ended up with a glClear call right before the glfwSwapBuffers call, which cleared the buffer right before showing it.
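
For reference, a minimal sketch of the corrected loop order (window handle and render method assumed):

while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear first
    render();                                           // then draw the scene
    glfwSwapBuffers(window);                            // then present the frame
    glfwPollEvents();
}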

Lesson of the day: don't just copy from a book, think thoroughly about what you're doing

(Thank you to the people of the LWJGL forums for helping me discover this mistake.)

I am using LWJGL 3 and shaders with VAOs. My current implementation does not draw the VAO when using the shaders; the VAO will draw when not using shaders, but it is white. My question is: what am I missing in my shader setup that prevents me from seeing the VAO when using shaders?

As the title suggests, I am trying to render two three-dimensional objects simultaneously using LWJGL, however, only the second one is rendering. I did a bit of searching around and added glPushMatrix() and glPopMatrix() before and after each render. In addition, I am trying to render each object dynamically, rather than statically, with the use of an ArrayList.
Actual rendering code, called once every frame:

Size, Position, Rotation, and CoordinateFrame are all classes with X, Y, and Z values. MainProgram.renderer.parts is an ArrayList that holds the objects to be rendered. I suspect that the problem lies somewhere in the actual rendering (glBegin() to glEnd()), as I don't really see how the rotation could be a problem. Both objects render separately just fine; they just don't both render at the same time. They are two different sizes, one larger than the other. No matter the order they are in, however, only the second one renders.

Answer:

I solved the problem: I was clearing the buffers every time I wanted to draw something. I've moved the clear now, and it works.

When I run the program, I just get a window filled with black, and I cannot understand why a white triangle won't render to the window. I've messed around with glClearColor(), and changing its values does succeed in changing the window background color, but I can't make a white triangle appear in the window. I also tried using glBegin()...glEnd() to render in immediate mode, with none of the buffer-related code in there, and it still didn't render anything. I'm really confused about this; what am I missing?

To create a virtual world I am using the Lightweight Java Game Library (LWJGL) (Java + OpenGL). I want to load my terrains into graphics card memory on a worker thread, while on the main thread I want to take these already loaded terrains and render them. In order to do that I have to create a Vertex Array Object (VAO), create a Vertex Buffer Object (VBO), add the VBO to the VAO attribute list, and finally render everything. This works perfectly on a single-threaded setup; however, I am having problems implementing it on a multi-threaded one. I know that VBOs can be shared between OpenGL contexts, while VAOs cannot (reference1; reference2). Therefore to accomplish my goal I:

I am sure that I do not render terrains that are not yet loaded, because I load terrains while they are outside the render scope. I have read many articles, questions, and blog posts about OpenGL shared contexts and concurrency, but did not manage to find a solution. I would be very grateful for any help.

Answer:

As you already stated, VAOs are NOT shared between contexts, so it is also impossible to modify them from multiple threads.
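
A hedged sketch of the usual division of labor under these constraints (buffer contents and attribute layout assumed):

// worker thread, with the shared context current:
int vbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW); // vertexData assumed
glFinish();                        // ensure the upload is visible to the main context
// main thread, after receiving vbo (VAOs must be created per context):
int vao = glGenVertexArrays();
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
glEnableVertexAttribArray(0);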

I am working on a project in LWJGL. The OpenGL setup works on 2.0, but whenever I try to render on LWJGL 3.0, it returns "Function is not supported".

The methods that have returned this error:

glColor3f();

glVertex3f();

glColorPointer();

glVertexPointer();

glBegin();

glEnd();

Our project setup is fine, and the window shows without these methods, but whenever we use them, LWJGL spits out that error. We need help quickly, so if you know why this is happening, please tell me.

Answer:

According to this: Why a new version — these versions (I mean 2.x and 3.x) are not backward compatible, and there are some major changes to the API between them. So you cannot just swap the library.

I'm learning LWJGL and OpenGL by following this tutorial I found. I tried my best to change the code to be compatible with the newer versions and hadn't had a problem with it so far. But now my Tilerenderer won't render more than one type of tile/one texture for the VBOs, and I have been trying to fix it for 3 days now (in the beginning it didn't render anything at all) but couldn't find anything that fixes this problem.

I think that enumerating over a BufferedImage[] array is your best bet for a homemade solution. Someone made a simple example over here. Pull the images from your spritesheet to create the array, then just swap between the sprites as desired. Possibly building an AnimationManager to move between Animations could help.

If you want to start using the modern shader way, you can't use immediate-mode methods. Everything will be calculated in the shader (which you have to write yourself; no OpenGL predefined goodness). So instead of calling

glTranslatef(x, y, z);

you'll have to create your own model matrix, which gets sent to the shader, where it is applied to the positions of the model's vertices. The whole point of modern OpenGL is to minimize the interaction with it. Instead of having the CPU and GPU work hand in hand (the GPU wants to run faster; the CPU is bottlenecking), you let the CPU do some math and, when done, push it to the GPU in one go.

While your vertex shader was already expecting the matrix, it didn't get it, because it was missing in your Java code: you never passed any matrix.

I haven't seen your shader compiling code, but I'll assume it's right... (generate shader IDs, put the source into the shaders, compile, generate a program, and attach both shaders to it).

After compilation you'll have to tell OpenGL where the output color goes:

glBindFragDataLocation(program, 0, "fragColor");

When you load your models, you'll also want to switch to modern Vertex Array Objects. A VAO can reference at least 16 vertex attributes, each backed by Vertex Buffer Object data, which can contain pretty much any per-vertex data you want (position, color, texture coordinate; give them names if you so desire...).

(Yes, that's really already it.)
Jokes aside, yes, that's a bunch of code; learn it, love it. And we haven't even used any matrix with the shader yet, but that's really simple. Every time you want to push a matrix (in the form of a FloatBuffer) to the shader, you use its uniform location:
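
// a hedged sketch, assuming JOML; x, y, z and program come from the earlier
// snippets, and the uniform name "modelMatrix" is an assumption
try (MemoryStack stack = MemoryStack.stackPush()) {
    FloatBuffer fb = new Matrix4f().translate(x, y, z).get(stack.mallocFloat(16));
    int location = glGetUniformLocation(program, "modelMatrix");
    glUniformMatrix4fv(location, false, fb); // push the matrix in one go
}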

We're supposed to use LWJGL 2.9.3 to create a simple application that displays a wireframe object. I created a test class based on the first example code from http://www.glprogramming.com/red/chapter01.html. What I end up with is a program that flashes a white square for a brief moment and then disappears. I'm not sure what I'm doing wrong.

OpenGL is a state machine. This means, there are lots of states which you can set, and which will influence the final rendering result in some well-defined way. These states are never automatically reset, they just stay as you set them, until you change them. There are no such things as "frames" or "scene objects", just a stream of state-setting or drawing commands.

OpenGL's matrix stack is also just a state. The function glOrtho will multiply the current top element of the currently selected matrix stack with that ortho matrix, and will replace the top element by that result.

Assume the ortho matrix is called O. Initially, all of GL's matrices will be identity. So when you call render for the first time, you'll get M = M * O = I * O = O. However, the second time, you will get M = M * O = O * O. And so on.

You have to explicitly reset your matrix to identity at the beginning of the frame:

glLoadIdentity();
glOrtho(...);

You should be aware that the code you are using is not very good. It uses the MODELVIEW stack for projection matrices, which are meant to go on GL_PROJECTION. But before you try to learn about that stuff, be warned that all of this has been deprecated in GL for a decade; modern core versions of OpenGL do not support these functions at all.
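
For completeness, a minimal sketch of the conventional legacy split the answer alludes to (bounds are example values):

glMatrixMode(GL_PROJECTION);         // projection matrices belong on this stack
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1); // width/height assumed
glMatrixMode(GL_MODELVIEW);          // model/view transforms go here
glLoadIdentity();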

I want to create a 2D game with Java and LWJGL. It is a retro-styled RPG, so there is a really big map (about 1000x1000 tiles or bigger). I want to do it with tiles, but I don't know how to save it or how to render it.

I thought of something like a 2D array with numbers in it, where the renderer just sets the right tile at the right place.
But I think the bigger the map gets, the more it will slow down.
I hope you can help me. :)

My second idea was to make one big image and just pick a part of it (the part where the player is), but then it's hard to know where I have to do collision detection, so this is just an absurd idea.

Thank you for your suggestions!

Answer:

As one of the comments mentioned, this subject is far too large to be easily covered with a single answer. But I will give you some advice from personal experience.

As far as saving the map in a 2D array: as long as the map is fairly simple in nature, there is no problem. I have created similar-style (tiled) maps using 2D integer arrays to represent the map, together with a drawing method that renders the map to an image I can display. I use multiple layers, so I just render each layer of the map separately. Mind you, most of my maps are 100x100 or smaller.

For such large maps I would recommend using some sort of buffer. For example, render only the playable screen plus a slight offset area outside of it; e.g. if your screen is effectively 30x20 tiles, render 35x25, and just change what is rendered based on the current location (see the sketch below). One way that you could do this would be to load the map in "chunks": have your map automatically break into 50x50 chunks, and only render a chunk if you get close enough that it might be used.
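
A rough sketch of the screen-plus-offset idea (all names — map, tileSize, camX/camY, drawTile, etc. — are hypothetical):

int firstX = Math.max(0, camX / tileSize - 2);
int firstY = Math.max(0, camY / tileSize - 2);
int lastX = Math.min(mapWidth, firstX + screenTilesX + 4);
int lastY = Math.min(mapHeight, firstY + screenTilesY + 4);
for (int y = firstY; y < lastY; y++) {
    for (int x = firstX; x < lastX; x++) {
        drawTile(map[y][x], x, y); // look up the tile id and draw it in place
    }
}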

I also recommend having the drawing methods run in their own thread outside of the main game methods. This way you constantly draw the map, without having random blinking or delays.

I'm using GL11.GL_TRIANGLES because that's how I can make models' lines show up instead of faces. But when I set a color for a vertex, it just colors the surrounding lines with the color set; in some cases all of its lines just take the color of the surrounding vertices. How could I make it combine those two colors depending on the distance to each vertex and the colors?

I'm using GL11.GL_TRIANGLES because that's how I can make models' lines show up instead of faces.

Well, GL_TRIANGLES is for rendering triangles, which are faces. If you only want the models' lines, you can use one of the line drawing modes (GL_LINES, GL_LINE_LOOP, GL_LINE_STRIP etc).

However, a better way is to enable

glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);

, which makes only the outline of the triangles show up.

You can switch it off again with

glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);

How could I make it combine those two colors depending on the distance to each vertex and the colors?

I'm not sure what you mean by this; by default, values passed from the vertex shader to the fragment shader are already interpolated across the primitive, so the color a fragment receives already depends on all the vertices' colors and distances.

Edit:

In the vertex shader:

if(selected == 1){
colour = vec3(200, 200, 200);
}

I assume you want to assign an RGB value of (200, 200, 200), which is a very light gray. However, OpenGL uses floating-point color components in the range 0.0 to 1.0; values outside this range are clamped. The value of colour is interpolated across the fragments, which will receive components far higher than 1.0. These will be clamped to 1.0, so all your fragments appear white.

So, in order to solve this issue, you have to instead use something like the following, normalizing the 0–255 components into OpenGL's 0.0–1.0 range:
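
if (selected == 1) {
    colour = vec3(200.0 / 255.0); // roughly 0.78 per component, a light gray
}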

I am attempting to render a cube with different colors on certain sides as a practice exercise, but the problem is that as I rotate the cube along the y-axis, I can still see the differently colored side through the sides facing the camera. I've tried splitting up the code into separate glBegin() blocks for each side, and I've tried looking around on Google for the answer, with no luck. According to the Microsoft documentation on glColor3f, "glColor3 variants specify new red, green, and blue values explicitly and set the current alpha value to 1.0 (full intensity) implicitly.", so transparency shouldn't be the problem...

OpenGL draws shapes in the order that you tell it to, so if you draw the red face last, its fragments (i.e. pixels) overwrite the green ones that were drawn earlier. Since you (presumably) want the faces that are "in front" to actually appear in front, you have to tell OpenGL not to draw fragments that are "behind" things that have already been drawn, by enabling depth testing. A minimal sketch:
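
glEnable(GL_DEPTH_TEST);                            // compare each new fragment's depth
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // also clear the depth buffer each frame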

This tells OpenGL that each time it's about to draw a fragment, it should compare its depth against the depth of what's already been drawn, and discard the new fragment if it's farther away than the old one. It also clears the depth buffer so that initially every pixel on the screen is "infinitely" far away, so that the faces of the cube will all be in front of the background. This will prevent the red face from appearing when there's a green one closer to the camera.

I'm not exactly experienced with LWJGL, or even OpenGL for that matter, however, this code looks proper to me, as a Java developer and having studied a few bits of example source codes...

Ultimately, my question is: "How can I fix this?" I added the glEnable(GL_TEXTURE_2D); while typing this, which caused the drawing area to go from white to a blood-red color...

Answer:

In OpenGL, texture coordinates are given from [0,0] (the bottom left corner) to [1,1] (the upper right corner). When texture coordinates are out of this range and GL_TEXTURE_WRAP_[R|S|T] is set to GL_REPEAT (the default), the actual lookup positions into the texture are calculated by

lookup.xy = fract(texCoord.xy)

In the special case given here, the texture coordinates range from 0 to w, which results in w repetitions of the texture. Since the viewport's width is also set to w, each of these repetitions will only be 1px wide.

I have made a simple application in LWJGL and created a simple GUI. For now I have a frame and a panel, but there is a problem.
With an 800x600 Display, when I create a panel with Panel(x,y,w,h) = (0,0,64,64) everything works fine, but when I create it at any other position (x,y, where point 0,0 is the upper left corner), the panel is rendered displaced.
The white space is my panel, which should change color when I drag the mouse over it. It is created at (417,417,64,64), but it's rendered at something like (90,90).
I have rendered fonts to show all of its positions. The blue box I drew on this image is where the panel should be, and logically it is there, because the white space changes color when I drag the mouse inside the blue box; but the white space itself should be drawn there too.
My code looks like this:
I am adding all components to a HashMap of Panels.

Your whole program structure looks fairly unusual, and I believe this is part of what is tripping you up. For example, while I'm all for encapsulation, wrapping a single uniform in an object, like you appear to be doing with your model variable, is pushing it too far IMHO. A uniform value is really an attribute of a shader program, not an independent object.

Anyway, without going too deep into design aspects, I believe your main problem is in this code sequence (with parts omitted):

The second of these calls will overwrite the value written in the first one, without the first one ever being used. When you later render object 1 and object 2, they will both use the second value for the uniform.

As long as you use the same shader program for both objects (which is a good thing, unless they really need different shaders), you will have to set the uniform value before you draw each of the objects.

So the calls to set the uniforms should go into the draw function, where the structure will look roughly like this:
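
// a rough sketch; SceneObject, modelLocation, and the accessors are placeholder names
void draw(SceneObject obj) {
    glUniformMatrix4fv(modelLocation, false, obj.modelMatrixBuffer()); // set per object
    glBindVertexArray(obj.vao());
    glDrawElements(GL_TRIANGLES, obj.indexCount(), GL_UNSIGNED_INT, 0);
}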

So the issue I'm getting is that it should have drawn four triangles to the screen to form a pyramid; however, my output looks like this.

I have rotated the image to try and see the depth but it is a flat object.

I have tried to identify where the issue may be coming from by drawing each triangle individually, changing my render method to gl.drawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0); so that it only draws one triangle. I have tried drawing all four faces individually, and all are drawn to the screen.

Answer:

I found the issue. I was originally scaling the triangle with

model.scale(new Vector3f(0.2f, 0.2f, 0.0f));

As a result, the z-axis was being multiplied by 0. Silly mistake; hope this helps someone in the future.
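
Presumably the fix is just keeping the Z scale non-zero:

model.scale(new Vector3f(0.2f, 0.2f, 0.2f)); // keep the Z component non-zero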

I've got some code that's supposed to render text to a texture, so that I don't have to render each character every draw step and can instead use the rendered texture. However, my code does not do what it's supposed to, and the texture is left blank. After a few hours of trying different things, I cannot figure it out, and so I bring the question to you.

I'm fairly certain the problem is somewhere in the code chunk below, but if you think it's not, I'll gladly post whatever other samples of code you would like. I just really want to get this done already. The exact problem is that the created texture is blank and never seems to be rendered to. I've tried just drawing one massive quad on it, and that didn't seem to work either.

Edit: After flipping the buffer, I can get some color to be rendered to the texture, but it's all just one color (which makes me think it's only sampling one pixel), and I can't figure out how to get the actual image I want to render to show on it.

I am currently working on subdividing my icosphere, and it ends up looking crazy (see below). It works fine if I don't subdivide it at all, so I believe the error is either in my recursion loop or in the getMiddlePoint method (see below). My theory is that I am adding the vertices and indices in the wrong order. If this is the case, what order should I be adding them in? Any ideas on how to fix this?

When I draw a texture with transparency (in the file) over ShapeRenderer, the shapes stop being updated. When I set batch.setColor(1f, 1f, 1f, 0.5f) the result is almost the same: I see stuck shapes at 50% transparency and also see the same animated shapes underneath.
I've tried using Gdx.gl.glEnable(GL20.GL_BLEND), but it didn't help.

When you created the initial VBOs, you bound the VAO and then unbound it, which allowed the drawing to take place. But when you add the texture coordinates, no VAO is bound, so your VBO is attached to nothing. When render is then called and the texture coordinates are looked up, nothing is found, and so the default (0,0) coordinates, i.e. the single corner pixel, are used.
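
A hedged sketch of the fix, rebinding the VAO before creating the texture-coordinate VBO (names and attribute index assumed):

glBindVertexArray(vao);                                   // re-bind the existing VAO
int tbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, tbo);
glBufferData(GL_ARRAY_BUFFER, texCoords, GL_STATIC_DRAW); // texCoords assumed
glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);
glEnableVertexAttribArray(1);
glBindVertexArray(0);                                     // unbind when done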

I am trying to make a basic render engine using LWJGL, and I have recently run into a block: whenever I try to change the shader uniforms that objects are being drawn with twice in a frame, the program just displays the clear color. Where I think it is screwing up is line 82 of the SceneLoader class, but I have no idea. This is my first time using LWJGL, so any help with this would be greatly appreciated. Thanks!

When I execute my Java code that uses the Lightweight Java Game Library (LWJGL) in one thread, everything works fine. However, when I start a second Java thread that simply prints text constantly (see my text thread class pseudocode below), my OpenGL program becomes unresponsive, but the text is still printed. No errors are shown.

I have read that OpenGL has problems with multithreading (here and here and so on); however, I do not try to split the OpenGL work across multiple threads. I use one thread solely for OpenGL and another thread to execute non-OpenGL code. I did not find any suggestions on the internet about the cause of my problem, and changing thread priorities did not help. Any help would be appreciated.

In the picture I have rendered my terrain with a few basic models (tree + shrub), but I had been noticing some glitching occurring with the models. Knowing this, I rendered a flat plane of "water" onto my world, and it showed what I had suspected. I am still unsure of what is wrong here; any insight would be helpful!

Note:
I am using LWJGL
The plane of "water" is flat, and the area that it is in is concave, so no part of it sticks above the "water"

Answer:

FIXED: This is the result of having too small a zNear value when calculating the projection matrix. If you don't know what that is, search for how to create a 3D projection matrix and it will be explained :) Finally solved it and can get moving with my game; hope this can help others!

What I ended up doing was using VBOs to render. It didn't take too long to implement, and the result was awesome. I use one VBO per GUI element type. I truly recommend not using GL11 immediate mode even for this basic type of stuff; just stick with the "advanced methods".

Everybody, have a great day!

I'm following an online tutorial on building a game engine using LWJGL. I'm coming along great and really trying to understand the code I'm being taught; however, I now get this error and I really don't know what I'm doing wrong.

Hi guys!
I want to generate a simple flat terrain in LWJGL, but this code doesn't produce anything. I am very new to this area, so if you can explain why this code does nothing, please do! I am generating the terrain with the code shared by ThinMatrix, then I upload the vertices to the buffers and render them in the main game loop. When I compile it, it shows me a black screen. I have searched a lot of tutorials but didn't find anything that could help.

Answer:

I see several issues with your code:

Issue Number 1: You are generating a new instance of the man class on every iteration of the loop. Create a single instance of the object outside of the loop and use it in the loop.

Issue Number 2: You are using the old fixed-function OpenGL pipeline. Following ThinMatrix's tutorials, you should be using OpenGL 3 or newer: everything should go through shaders, rather than GL_PROJECTION_MATRIX and the like.

I am using OpenGL and LWJGL 3 to draw some quads onto the screen. I need to know when the mouse is over a quad. When I render the quads to the screen, I use the OpenGL coordinates, ranging from -1 to 1 for both X and Y and with (0,0) at the center of the screen. When I get the mouse position I use

glfwSetCursorPosCallback();

which gives me the coordinates ranging from 0 to the width or height of the window and with (0,0) at the top left corner (below the title bar). I then take the mouse coordinate and calculate the OpenGL coordinates.

For example if my window size is (800, 600) and my mouse was at (200, 200) I would get (-0.5, 0.33) [since (400, 300) would map to (0, 0) in OpenGL's coordinates].

So here's my problem:

OpenGL includes the title bar in its coordinates, whereas glfwSetCursorPosCallback(); does not. This means that if I render a vertex at (-0.5, 0.33) [as in my example], it renders at around (200, ~210).

As you can see, because the two coordinate systems cover different areas, it's more difficult to switch between them.

I have searched for ways to exclude the title bar from OpenGL's coordinates, to completely get rid of the title bar and to get the height of the title bar (so I can include it in my calculations and make the correct adjustments). I haven't been able to figure out how to do any of these, so I'm looking for a way to do so, or a different method that will resolve my problem.

EDIT 1: Adding Code

@Nicol Bolas informed me that this is not how OpenGL normally works so there must be something causing this in my code. I believe I've provided the parts of my code that would be responsible for my problem:

Here is my Renderer class [I am using the drawQuad() method]

Note: I am not currently using the view, model, or projection matrices in my shaders.

I was able to use glfwWindowHint(GLFW_DECORATED, GLFW_FALSE); to remove the entire border from the window, title bar included, which fixed the issue. Now however, I obviously don't have the options to close, minimize, etc., on my window, although I suppose I can program those in myself if necessary. Will update if I find out any other solutions.

Answer:

GLFW functions typically work with the client area of a window (the inside area, not including title bars, scroll bars, etc.), so glfwSetCursorPosCallback is giving you the expected values. If your OpenGL framebuffer is for some reason rendering behind the title bar (whether due to an improper setup or a platform-specific detail), you should still be able to get the title bar size using glfwGetWindowFrameSize:
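
// a minimal sketch; 'window' is the GLFW window handle
try (MemoryStack stack = MemoryStack.stackPush()) {
    IntBuffer left   = stack.mallocInt(1);
    IntBuffer top    = stack.mallocInt(1);
    IntBuffer right  = stack.mallocInt(1);
    IntBuffer bottom = stack.mallocInt(1);
    glfwGetWindowFrameSize(window, left, top, right, bottom);
    int titleBarHeight = top.get(0); // decoration height above the client area
}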

I am rendering a simple quad and have been experimenting with shaders. It worked fine, but then I followed a texturing tutorial to add textures, and I must have changed something, because now, even though all the texturing code is commented out, the quad doesn't render when I turn shaders on. I suspect it has something to do with the data binding, as I am still trying to learn it. I can't for the life of me find what's wrong!
Shaders (vertex and fragment):

Hi, I am new to LWJGL and I need help with drawing items on the screen. When I render my 2D tiles, my background's colour is changed to the one the tile is coloured with. So, for example, if I make a green tile, the background goes green too! Here is my code:

I have followed the ThinMatrix tutorial and did everything exactly how he does it. I don't think there is much wrong with the code, because I have checked it a few times and copied it from the comments, with the same result. I am pretty sure it has something to do with the model: his model worked, but my Blender cube doesn't. Mine draws only half of the triangles.

Unanswered Questions

We will update and show the full solutions if these questions are resolved.

Java fatal error when calling glDrawElements()

I encountered this problem for the first time; everything worked fine before today. To my mind, the problem is in memory management, or something similar. I compressed all my OpenGL code into a single class, ...

How to start my Game from NiftyGUI Main Menu

I am trying to start my game from a Main Menu class that implements NiftyGUI. I am using OpenGL (LWJGL) and Java. The problem is that I am never able to get the main menu to disappear and then start ...

LWJGL: Texture renders with background color

I'm working on a game project I've been planning for a long time. I set up the basic things and want to make the textures work. I use a custom written TextureLoader (with a code snippet you might know)...