Most people don't realize that the demo footage was made for GDC, which means it was meant for developers and not for gamers, so they end up saying shit like "meh, this looks shit blah blah blah", when in fact what they have achieved is quite stunning: the editor, the particle effects, the lighting. Can't wait to see more of the engine!

Man, I remember UED2 and how long it took to recompute the lighting, and it just got slower and slower on every run. I was in awe of some UT maps that had great lighting. And I remember those frustrating moments when I realized the light I'd added to my scene was in the correct XY position but stuck in a wall because I didn't check the Z position, so I had to recompute the lighting again. That editor was buggy as hell too.

Well, he was updating the movement code (or player code), not the rest of the game, but yes, the fact that the game was able to reload the newer version of the code after compiling, without reloading the map, was sick.

What people need to realize is that the faster and easier they make quick testing like this, the higher the quality of the game produced will be. If it takes me 2 minutes to tweak a number, I'll maybe get it roughly close, to within 1-2% error; if it takes me less than a second to test it, I'll adjust that shit up to 0.0001% precision.

When you get tired of implementing your game logic in Kismet for every single level you make and decide to put it in UScript.
(Well, I suppose you could put the game logic in an empty, always loaded level and stream in new levels whenever you want to switch, but that would be a mess to work with.)

Edit: According to another comment in this thread, they're dropping UScript for C++, but what I said still applies.

If you are looking to program on UDK (UnrealScript), then you need good experience in programming. The syntax is a mix of Java and C++, but in order to program well on UDK you need to understand how UnrealScript works and have a good understanding of Object Oriented Programming concepts. :)

Don't get too accustomed to the way things are done in Java; there are some fundamental differences in the programming paradigms between Java and C, less so with C++, but C++ is a lot closer to C than to Java on that spectrum. I'd recommend doing a little bit of C/C++ as you do your Java, so you can always have it in the back of your mind.

I'm not in the games industry but in an industry pretty close to it, and I use both Java and C++ along with OpenGL

This is the important part. Speaking for business software, there is no lack of amazing tools for programming, but as you move further away from the code, you make some things easier at the cost of making other things much, much harder. Visual SQL editors are one example that comes quickly to mind. I'm amazed at the number of people I encounter who only know how to build queries using visual tools and don't really know how to write SQL. Or they use visual controls in their code and don't understand how they are executed.

I still remember getting a big "attaboy" for fixing a piece of software that used one of those query tools. The tool was pulling down the entire table and posting it back just to change one record. Shielding the programmer from the implementation is a noble goal, but there should always be access and a good developer should always understand what's happening under the hood.
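To make the difference concrete, here's a hypothetical sketch (the table, the data, and the specific record are all made up for illustration) of the pattern that query tool generated versus the one targeted UPDATE a developer who knows SQL would write, using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}@example.com") for i in range(1000)])

# What the query tool effectively did: pull down the entire table,
# change one record client-side, and post every row back.
rows = conn.execute("SELECT id, email FROM users").fetchall()
rows = [(i, "new@example.com" if i == 42 else e) for i, e in rows]
conn.executemany("UPDATE users SET email = ? WHERE id = ?",
                 [(e, i) for i, e in rows])        # 1000 statements

# What a developer who knows SQL writes: one targeted statement.
conn.execute("UPDATE users SET email = ? WHERE id = ?",
             ("new@example.com", 42))              # 1 statement
```

Both end up in the same state, but the first round-trips every row through the client just to change one of them.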

Facilitating faster iteration seems especially important for graphics engines these days. I think it's a sign of maturity: just having the best graphics isn't enough to win anymore.

However, my feeling is that whoever is developing graphics engines for ARM GPUs will win. Phones/tablets are just about to pass console graphics quality.

I suppose they can easily target a different platform, especially since they all support the same standards... but if there's anything specific to how ARM developers work (including secondary considerations like pricing and upgrade cycles), that could give them an ongoing advantage that's hard to match. We'll see.

The next GPU versions, Imagination's PowerVR Series 6 (Rogue) and ARM's Mali-T658, are about 10 times more powerful than the previous versions (the SGX543 in the iPhone/iPad, the Mali-400, the GPU in the Tegra SoC, etc.). This puts them on par with the Xbox 360's Xenos GPU, if you have 2 or 4 of them.

Now, maybe there'll be other issues like bandwidth between CPU and GPU (which is getting addressed in the next ARM core, the Cortex-A15); or they might need to reduce clock rates for reasonable battery life, etc. (though process shrinks reduce power consumption considerably). And of course, console game developers have had years of experience in tweaking out performance. But even so, it will be close.

I've read that the 6000 series should be shipping in products in the second half of this year... so it seems pretty likely for the next iPad (i.e. early 2013).

The Xbox 360 and PS3 GPUs were weak when they came out compared to existing PC GPUs. It's impressive what phones can and will do, no doubt, but it's not a fair comparison really, especially talking about 2014. I don't imagine next-gen consoles will be much behind that; they'll likely be about as powerful as the high-end PC GPUs are right now, which are largely going to waste with most newer games, in big part due to consoles holding them back.

I think Hammer is quite nice compared to something like, let's say GtkRadiant. But you know, in my opinion, the entire Source SDK is nice and well-written, too bad it's not as state-of-the-art as it used to be.

No it isn't. GtkRadiant's lineage comes from the original Quake 1 editor on NeXTStep (AFAIK it was a conversion/remake of the original QuakeEd as QE4 for Quake 2, then QERadiant, then Q3Radiant, then GtkRadiant and with each iteration having a bunch of forks/branches like DoomRadiant, DarkRadiant, ZeroRadiant, CoDRadiant, etc).

Hammer is a descendant of WorldCraft, a shareware map editor originally for Quake 1. At the time it was very user-friendly (and compared to many other game editors, the UI is still very minimalistic), and Valve liked it enough to buy it and make it the default editor for Half-Life 1 (it was distributed on the game's CD). At some point later it was renamed to Hammer.

Interestingly with some modifications you can still create Quake 1 maps with Hammer.

I haven't tried it myself, tbh. However, I think you also need to convert the Quake textures to a format Hammer understands and also add definitions for Quake entities so that Hammer can place them properly.

No, Hammer is based on Worldcraft, which is/was used for Quake I and II and GoldSrc. Though they probably share parts in their codebase, since the original HL engine was based on id's current engine at the time (id2 I think, not sure on that). However, the Source engine was a complete rewrite from scratch, but even that bears a lot of similarities with id's engines. Also, I totally read the "I just thought it was interesting" in GLaDOS' voice.

It's not a great alternative, but Croteam's engine is probably one of the best returns on investment you can purchase. They've had real-time work-in-the-editor since before 2001. Same with all variations of the CryEngine, but that came a lot later.

This is pretty amazing, but keep in mind that the linked example is Java, and this is a feature of working with Java. The stuff being done with Unreal Engine 4 on the other hand, is much much more complicated, I think, and a problem a lot of programmers could not solve.

I haven't seen much yet, but I would be amazed if UE4 allows you to compile changes in realtime as you edit the game; being able to edit values in code is a big problem that can be solved in many different ways, but code compiling in realtime seems to me to be a beast of a problem, at least with C++ or C.

[Edit] Just realized they did actually show code recompiling... holy shit!! I'm going to be racking my brain trying to think of how they did it.

It is not technically that insanely difficult to recompile in real time. The problem is designing the entire application around being able to reinitialize itself from a new library/DLL/assembly/whatever while still maintaining state. That is not easy to do at all.

That's essentially what I was referring to, thanks for correcting my poor wording.

So from reading around it seems this problem is solved by doing this: the game code is compiled into a DLL so it can be reloaded at runtime; with the editor running, make some changes in code and recompile it to the DLL; there's a code reflection system to save the existing state of the game; the DLL gets reloaded, and the reflection system fills in the data that was saved from the previous state. Debugging time has thus been saved, yes!!

Oddly enough, I'd be more impressed if it weren't able to do that. The live recompilation still involved a traditional compilation step - recompiling the whole file, relinking the whole codebase - and thus took several seconds; if it really delved into the compiler and recompiled only the changed functions and binary patched them in, it could update instantaneously.

Basically a mixture of shadow mapping to calculate what surfaces are directly illuminated and a small amount of ray tracing to estimate which surfaces are indirectly illuminated and interpolate (aka blur) the hits over the entire screen.

By default you only get one bounce, but that seems like enough to me and you could always make it do extra bounces.

That makes enough sense, but how are the bounced light samples applied to the scene geometry? In the video, he mentions rasterizing a photon volume around each sample. Would you need some kind of technique such as depth fail (commonly used for shadow volumes) to color the scene with these volumes?

Indeed, this is the demo that came to mind when I saw references made to "voxel lighting" in the UE4 editor in the video.

It's impressive that the authors got an animated mesh working at interactive framerates at all, but if you read the paper, they mention:

[...] the interactive update of the sparse octree structure for dynamic objects (in our case the Wald's hand 16K triangles mesh) takes approximately 5.5ms per frame. This time depends on the size and number of dynamic parts.

There goes 1/3rd of your 60 FPS budget on a single (creepy) animated model with 16,000 triangles.
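For reference, the arithmetic behind that fraction:

```python
frame_budget_ms = 1000 / 60          # ~16.67 ms per frame at 60 FPS
octree_update_ms = 5.5               # figure quoted from the paper
fraction = octree_update_ms / frame_budget_ms
print(f"{fraction:.0%}")             # prints "33%"
```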

This is the important thing - that demo shows "traditional" raster rendering of a polygonal scene with voxels used for indirect lighting only (though they're no doubt raycasting through the voxel structure) - the animation of the hand is done the old-fashioned way (skeletal animation, control points, all that jazz).

This basically means that their voxel structure doesn't have to be nearly as detailed as the polygon meshes because you never really see them except as vague splotches of colour on the wall. This makes animating them somewhat simpler.

I would love to know what kind of computer that guy's running — it's probably a $10,000 liquid-cooled rig, but I'd be curious to know if Joe Programmer will get the same performance with his $2,500 machine.

From the comments I've seen in several places, it is either a GTX 680 or a 690 (which makes a lot of difference, however, since the 690 has two GPUs while the 680 has one). You can have a $2,500 computer with either of those :-P.

I actually doubt that. Their editor is basically equivalent to being in-game (assuming it's not actively compiling something). However, in-game (as opposed to just level editing) there will be tons more going on at once (many enemies, physics, AI, networking).

As someone said earlier, this is just as much a demo of their editor as it is of the improved effects, something the average gamer isn't going to think about, so I'm not too surprised they didn't appreciate the demo for what it is.

Very true, the average gamer doesn't really think about the implications of usability -> faster iterations -> faster improvements in game development. My biggest gripe was people complaining about beautiful effects/graphics, acting as if this demo dictated what games would look like and that it was thus Epic's fault.

Having worked with 2.5 and 3, this is a major leap forward. The GPU stuff I am not as excited about as that would have come anyways. I am excited about them getting off their ass to make the editor a million times better. So happy.

This is an awesome step forward. Also some of the better GI I've seen for a real-time application. Any ideas on how they cleanly compiled and reloaded libraries while the application was still running? (I thought I had heard before that replacing a loaded symbol while the application is running can be complex? Never tried...)

If the C++ code is compiling to a DLL, then the application can detect the DLL has changed on disk and FreeLibrary then LoadLibrary the new one. Pretty incredible stuff to see in action though- so seamless!
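A minimal sketch of the detection half of that scheme, using Python and mtime polling as a stand-in (a real engine would more likely use filesystem change notifications, and LoadLibrary/FreeLibrary on the Windows side; all names here are made up):

```python
import os
import tempfile

def make_watcher(path):
    """Remember the file's mtime; report True once each time it changes."""
    last = os.stat(path).st_mtime_ns
    def changed():
        nonlocal last
        now = os.stat(path).st_mtime_ns
        if now != last:
            last = now
            return True
        return False
    return changed

# Stand-in for the compiled game DLL on disk.
fd, lib_path = tempfile.mkstemp(suffix=".dll")
os.close(fd)

changed = make_watcher(lib_path)
print(changed())                 # False: nothing has touched the file yet

# Simulate the compiler finishing a build and rewriting the DLL.
os.utime(lib_path, ns=(1, 1))
print(changed())                 # True: time to FreeLibrary/LoadLibrary
```

Once the watcher fires, the engine can unload the stale module, load the new build, and re-resolve its entry points.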

A PC game generation a whole step ahead of consoles for a few years might be a good change-up in the market. What still amazes me is how the old hardware in consoles, when programmed well, direct to the metal, can look as good as it does. I can't imagine the power of a GTX 680 programmed like a console, without wasteful layers of Windows and DirectX in the way.

I don't know anything about how to code a game, but this honestly looks insanely impressive. I didn't think an editor like that could exist, so efficiently streamlined and having the ability to dynamically change things and playtest it right there on the go, jesus...

It's been around for quite a while. I don't know if Far Cry was able to do it, but versions of the CryEngine were able to. I don't know what the earliest implementation was, but I used it back in 2001 with Serious Sam. Croteam's favorite thing seems to be incorporating as many technologies as possible.

Thank you, Intel, for purchasing and shutting down the most promising game engine/video game I've ever seen: Project Offset. Unreal Engine 4 does today (shy of the sick dev software) what Project Offset was doing 7 years ago.

Right when he says "you can see the reflection of different floor colors on the sphere itself" and sets the sphere down on a bright red carpet, the bottom of the sphere is black. Something is wrong here.

The more advanced video game engines and editors become, the more I think that individual simulators will no longer be required, because a single engine will have the capability to simulate any real-life scenario. I don't know why, but I find that amazing.