I am wondering whether there are any game projects around that use AppGameKit to create a Minecraft-like environment: with building and digging, or with blocks but with other intentions.

I saw these things:

And since AppGameKit Tier 2 is C++, it could be possible to convert or adapt such code.

But maybe there is something around in AppGameKit Tier 1 BASIC as well. It would be nice to see.

Maybe we could do a community project and create a simple, beginner-level Minecraft, then evolve from that point into other ideas or gameplay mechanics. A basic "Minecraft" clone could be used for things like:

- Logical Stones

- Shaders

- an easily modifiable shooter or 3D Sokoban, a Portal clone, or something similar

Like a "Small Game Templates" second wave, or so (or a second edition, or "Small Game Templates Gone Wild", or "Not So Small Game Templates").

I'm building an MMORPG whose client uses AppGameKit... and it's a voxel world. But instead of first person, it's going to be third-person strategy/survival.
The development server sits here: http://www.dymoria.com
So, yeah, it's possible to build Minecraft-like worlds with AppGameKit.

This might just be an interesting project to work on. The idea of doing something like "Ludum Dare" or "A Game in a Week" is interesting as well. I like short projects. lol

If we could get people interested in taking small steps... and not trying to make the greatest thing ever right off the bat... it might even be possible to do community projects where we design a full, albeit simple, game like a micro Minecraft or Crossy Road or whatever. We figure out what is needed, put some thought in up front, break the work down into modules, and multiple people here tackle it: one person takes player control, one enemy control, one world control, one HUD control, and so forth. Then we just plug it all together when done. It could be very interesting to see what comes out of that. It would be highly modular out of necessity, and fairly clean code as well, since each module would need documented "exposed" entry points for other people's modules to call.

It seems strange that rendering about 5k cubes would be about the limit for 60 fps. Of course, a big part of it is the very high camera far-clip distance you set. I think maybe 150 would be good for that in this context.

There must be something going on... maybe different materials are being generated internally for each cube? Or some FX on by default?

I mean, I get that ultimately there is a real limit, of course. I just don't see why about 5k cubes, not being updated or animated but simply existing in 3D space and being rendered, would be the limit on Windows. I'll play around with the code later and try different things to see if they make any difference.

I tried to build something with cubes back in 2005, I think, and there we figured out that a cube behind another cube, one the camera couldn't even see, could still be rendered and cost performance.

So we created combined objects. E.g., a 3 x 3 x 3 block of cubes only has 6 sides (three of which are not drawn because of back-face culling, so effectively three), and we tiled and scaled the texture. This works if the combined cubes share the same texture. If we place or destroy one cube, we have to remember to reverse the process: the 3 x 3 x 3 object has to regenerate its 27 cubes and clear the one that was destroyed.

There is much optimization potential.

An easier example: take two cubes next to each other. They need two fewer walls: [_][_] becomes [_.._]. I think there is also a lot of potential in impostors and/or instances.

The way Minecraft is built, I think it's possible to have only "real" cubes on the surface. Maybe it would be more complicated if you dig deeper and have caves. I think there have to be "zones" of some kind, so you don't have to render the complete world with all its cubes every frame. But figuring out which cubes you can see, and/or which cubes are connected and can become one combined object... could also cost performance.
This optimization could be done at startup, though, or "compiled" into the level beforehand, and wouldn't have to be done every frame.
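The "compile it before the level starts" idea can be sketched like this. This is a minimal Python sketch (the thread works in AGK BASIC, but the logic ports directly); the world layout as a dictionary of coordinates is my assumption. A cube only needs a render object if at least one of its six neighbours is empty:

```python
# Precompute, once at load time, which cubes actually need an object:
# a cube is visible only if at least one of its 6 neighbours is empty.
def visible_cubes(world):
    """world: dict mapping (x, y, z) -> block id; an absent key means air."""
    neighbours = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    visible = set()
    for (x, y, z) in world:
        for dx, dy, dz in neighbours:
            if (x + dx, y + dy, z + dz) not in world:
                visible.add((x, y, z))
                break
    return visible

# A solid 3x3x3 block: only the 26 outer cubes need objects;
# the one fully enclosed centre cube does not.
solid = {(x, y, z): 1 for x in range(3) for y in range(3) for z in range(3)}
```

When a block is destroyed, only its six neighbours need to be re-checked, which matches the "look at the surroundings, not the whole world" point made later in the thread.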

Back in 2005 on notebooks this was time-consuming, but the optimization we tested with some combined cubes paid off in better frames per second.

I set the sync frame rate to 6000, hoping to see a difference. With copies I got about 300 frames, with instances about 500 frames. But I can't say for sure, because every time I "walk" around and restart the prototype, I get different frame rates, between 250 and 600, no matter whether I use "Clone" or "Instance".

I changed one part, and the interesting thing I found is that even if all blocks are invisible, the frame rate doesn't get much better: the difference is 40 fps (visible) to 65 fps (invisible).
So setting cubes that are hidden by others to invisible would only have a small impact. I think it is the sheer number of objects that AppGameKit struggles with.
I also tested whether setting the texture before instancing has a significant impact on loading time, but I couldn't verify that.

I don't think it would be good to delete blocks, because if you hit a block, we would need to reposition one somewhere in the world, like underneath the block you're hitting, so it doesn't look like there is empty space underneath.

If we hit a block, we could put the missing blocks back and delete the beaten one. We only have to look at its surroundings, not the complete world. Mostly we can only hit one block after another, so the calculation to re-create the object(s) around it can't be that complicated.

Imagine a cube made of cubes, 100 x 100 x 100: that would be 1,000,000 blocks.
But we can barely see more than the outside: 2 x (100 x 100) + 2 x (98 x 100) + 2 x (98 x 98) = 58,808, so we would save a lot of blocks on the inside. And breaking one block on the outside would mean creating only one block more, so the total number of blocks stays about the same. If we dig deeper, each dig needs maybe 4 extra blocks.

Imagine how often you could dig before reaching a million blocks. I think the number of blocks could stay nearly constant.
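The counting above checks out. A quick sketch (in Python, just to verify the arithmetic) for any n x n x n block: the hollow shell is the total minus the interior (n-2)^3:

```python
def shell_cubes(n):
    """Cubes on the surface of a solid n x n x n block: total minus interior."""
    return n**3 - max(n - 2, 0)**3

# For n = 100: 1,000,000 cubes total, but the shell is only
# 2*(100*100) + 2*(98*100) + 2*(98*98) = 58,808 cubes.
```

So a full 100^3 world needs object data for under 6% of its blocks, which is the whole point of only keeping "real" cubes on the surface.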

Maybe we could generate a 100 x 100 x 100 cube array and "melt" it down with "hot" lava into a (simulated) landscape, getting a nice terrain with only a few cubes needed (about 50k objects maybe, without further optimization; after that, some cubes could have shared sides we don't need, and so on).

Without caves, you would only need the height from a heightmap for the cube positioning. Maybe 100 x 100 cubes; if they are all at the same height, that would be it. Where the height changes, a little calculation is needed for the cubes next to it, or underneath, if the slope is very steep.

function generatemap(sizex, sizey)
    // Create the ground first
    for x = 1 to sizex * sizey
        data[x].height = 1  // make all blocks ground level first
        data[x].texture = 1 // grass
    next x
    // Add height in random areas
    summit = random(1, 7)                     // peak height of any mountain top
    summitlocation = random(1, sizex * sizey) // where on the map the mountain starts
    // Now at that location, make the summit point the full height, then surround
    // the summit with blocks, positioning each ring one level below the previous
    // one until we reach ground level
    //..
    //..
    //..
endfunction
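For completeness, here is a hedged sketch in runnable Python of the step the comments above leave open (place a summit, then step down one level per surrounding ring until ground level). The Chebyshev-distance ring shape and the 2D list layout are my assumptions, not part of the original snippet:

```python
import random

def generate_map(size_x, size_y, seed=None):
    """Heightmap generator mirroring the AGK snippet: flat grass ground,
    then one pyramid-shaped mountain descending one level per ring."""
    rng = random.Random(seed)
    height = [[1] * size_y for _ in range(size_x)]  # all blocks at ground level
    summit = rng.randint(2, 7)                      # peak height of the mountain
    sx, sy = rng.randrange(size_x), rng.randrange(size_y)
    for x in range(size_x):
        for y in range(size_y):
            dist = max(abs(x - sx), abs(y - sy))    # Chebyshev ring distance
            height[x][y] = max(summit - dist, 1)    # step down until ground level
    return height
```

Running several mountains in a loop (and keeping the maximum height per cell) would give a full landscape in the same spirit.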

We could add a bit of fog, so anything further than the fog distance can be invisible blocks, or blocks that are not generated at all; they could potentially be created as and when they come into view, with InstanceObject.

And then, as the player moves, it procedurally builds new blocks as they come into view in the distance, depending on where the player is on the generated map.

So this would mean we could potentially have a much bigger map, say 1000 x 1000 (not all generated at the same time), built just as and when it comes into view.
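The streaming idea above boils down to a set difference each frame: work out which chunks should exist around the player, create the new ones, delete the ones that fell out of range. A minimal Python sketch (chunk size and view radius are illustrative values of mine):

```python
def chunks_in_view(px, py, chunk_size=16, view_chunks=2):
    """All chunk coordinates within view_chunks of the player's chunk."""
    cx, cy = px // chunk_size, py // chunk_size
    return {(cx + dx, cy + dy)
            for dx in range(-view_chunks, view_chunks + 1)
            for dy in range(-view_chunks, view_chunks + 1)}

def update_streaming(loaded, px, py):
    """Return (to_create, to_delete) so only nearby chunks hold real objects."""
    wanted = chunks_in_view(px, py)
    return wanted - loaded, loaded - wanted
```

With the fog distance matched to the view radius, the pop-in at the edge stays hidden, which is exactly the trick described above.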

Hey, I think you want to store the blocks in a three-dimensional array so you can run algorithms over it that need to know the neighboring blocks.
You can then use flood fill, breadth-first search, and other nice algorithms.
For example, you can save many faces between neighboring cubes, so you end up rendering only the hull, but for that you need to split the cubes into their 6 faces.
After that you can split the map into 16x16 chunks and use BFS to determine which chunks to render, based on your viewing direction.
It sounds a bit like view-frustum culling, but it is not: it can stop rendering chunks that are inside the frustum but behind obstacles (i.e., another chunk blocking the view).
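A minimal sketch of that BFS chunk culling, under assumptions of mine: chunks live on a 3D grid, and each chunk can report whether it is fully opaque. We flood outward from the camera's chunk; an opaque chunk is still rendered, but the search never expands through it, so everything hidden behind it is skipped:

```python
from collections import deque

def visible_chunks(start, is_opaque, radius=4):
    """BFS from the camera's chunk. Opaque chunks are rendered but not
    expanded, so chunks behind them never enter the render set."""
    seen = {start}
    queue = deque([start])
    visible = set()
    while queue:
        c = queue.popleft()
        visible.add(c)
        if is_opaque(c):
            continue  # obstacle: draw it, but don't look behind it
        x, y, z = c
        for n in [(x+1,y,z), (x-1,y,z), (x,y+1,z),
                  (x,y-1,z), (x,y,z+1), (x,y,z-1)]:
            if n not in seen and all(abs(a - b) <= radius
                                     for a, b in zip(n, start)):
                seen.add(n)
                queue.append(n)
    return visible
```

Intersecting this set with the camera frustum gives the combined effect described above: frustum culling plus occlusion by solid chunks.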

- about 100 boxes with 44 polygons each, and I got ~50 fps -> 4,400 polygons
- but I could also put 13 soldiers into the scene, each with more than 8,000 polygons (the physics-demo soldier)
- that is more than 100,000 polygons, and I got about 56 fps

So with this in mind... I think we have to pre-bake the objects or something.

The FPS numbers are from a GTX 770. I should test on an Intel HD 4000; that would show better what I mean about performance.


After copy/pasting the last code from puzzler2018, the poor performance with 200x200 blocks (66,618 vertices processed at 27 fps) surprises me... The performance is nearly the same when moving the camera off the map, with only 2,700 vertices processed.

For example, with my WIP 3D editor, I can process 7,000,000 vertices at 50 fps on an Intel HD Graphics...

It's the number of objects that hurts the frame rate so badly, not the polygons... well, the polygons too, but you have to find a good balance.
You don't want to switch render states between all the different objects so much.
I wonder if the only way to combine objects is via memblocks?
You want as few objects as possible, so I suggest, if you go with 16x16x16 chunks, combining all faces in a chunk into one object, so AGK can still run view-frustum culling and draw-call batching on the chunks/objects.
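The per-chunk combining step can be sketched as: walk the solid cells of one chunk and emit only the faces whose neighbour is empty into a single shared list, which would then be written into one mesh memblock and become one object. A Python sketch of the face-collection part (the memblock write itself is AGK-specific and omitted):

```python
def chunk_faces(blocks):
    """blocks: set of (x, y, z) solid cells inside one chunk.
    Returns a list of (cell, face_normal) pairs, one per exposed face,
    ready to be packed into a single combined mesh."""
    normals = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    faces = []
    for cell in blocks:
        x, y, z = cell
        for nx, ny, nz in normals:
            if (x + nx, y + ny, z + nz) not in blocks:
                faces.append((cell, (nx, ny, nz)))
    return faces

# A full 2x2x2 mini-chunk: 8 separate cubes would carry 48 faces,
# but the combined hull only needs 24.
```

One combined object per chunk means one draw call and one state switch per chunk instead of hundreds, which is the balance being argued for above.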

40,000 separate objects: I can see how that would be a huge amount to process. I remember a huge thread over on the Unity forums where they wrestled with the same thing, and yes, you need to create something almost like a terrain map holding many blocks. Using individual blocks, the world would have to be very small to keep performance up.

Like @Janbo mentioned, combining many blocks, say 2x2, 3x3, 4x4 etc., into a single object should help too. Actually, I think that video above and articles on Minecraft would show these are things its developer wrestled with and solved as well. And people on the Unity forums have too. So it makes sense to look at all of that stuff.

I'll interject a little about how I am doing the blocks for my MMORPG while 'attempting' to conserve memory.

I wanted the ability to build upward, but if I had a full array of cubes, like 100x100x100 for instance, that wouldn't be very dynamic for various region sizes... and that would be a heck of a lot of wasted memory, since I assumed most of that space would be empty, and we only want to use memory when there are structures to fill it.

So, what I did is create a single one-dimensional array to represent my X,Y ground plane... and then each element of that array has a Z array of elements to account for upward blocks. Here I could adjust my 2D array to whatever X*Y length I needed, and each individual element at every X,Y position had its own dynamic Z array going up. So the only time memory is used is when there's an actual block there; setting an element to 0 means empty space. I store as much data about the block type as I can inside the array, for quick access to information about any particular block.

By not using three-dimensional arrays, and instead using dynamically allocated single arrays, I can create larger voxel spaces while minimizing memory usage. The only problem is that over long-term use the memory gets fragmented, and you'd need to restart the game every few hours if you were doing a lot of building... but it hasn't been an issue in my voxel world yet.

Quote: "It's the amount of objects that affect the frame rate so bad not the polygons ...well the polygons too but you have to find a good balance."

Oh yes, you're right, sorry.

If I understand the problem, the solution lies in having only visible cubes in the pipeline (so creating the visible ones and destroying the invisible ones). The visible frustum then needs to be deduced manually and mathematically before each swap, depending on the camera position/angle.
Maybe it's possible to manage virtual "sectors" (big cubes) which act as bounding boxes to deduce which of the cubes inside are visible or not... if a big cube returns GetObjectInScreen(sectorCube) = 1, we create the cubes inside it (if they are visually invisible, or destroyed by the player, we skip them in this pass and determine that on a second pass). Or a kind of binary space partitioning (BSP).

In all cases, the solution lies in mathematics and grouping, to determine which cubes are to be created or destroyed. That's the only way, indeed (combined with a hash table to get each cube's properties quickly).