It's been a while. But we've been hard at work. Gonna have some really cool stuff for you in the upcoming weeks. However, I want to show you this first:

This is a screenshot of Lambaste mode - a survival/wave mode where you build up your center defenses against endless waves of Guardians. By investing Catalysts (the in-game currency) in rebels who'll fight alongside you, you can earn back even more than you put in. It's a neat little system that will serve the final game well when you want to make a little extra money for your town-building projects.

I've also added a little thing we call the Variable system - it's a lot like the skulls in Halo. A lot of this will be elaborated on in our January 1st video. So stick around, this is gonna be good.

I'm gonna keep this brief since you'll get a complete lowdown in our next video - but until then.

We're still working - I want to try and have something for you guys on the first, but no promises. The team is kind of busy with real life, and since we have literally no funding I can't just pay anyone to work for us. This is an act of love, so a lot of us have to do this in our free time...

But I digress.

Anyway, Forrest and I have been experimenting with motion capture using some Kinects we had lying around. It's been pretty successful - so you should see some cool stuff come out of that. We plan on taking these mocaps and using them as bases to work from for our more subtle animations (i.e. NPCs talking). More complicated animations like combat and movement will still be hand-animated with key-frame animation, however.

I'm also developing the wave-mode along with all that comes with that. Unfortunately, again, my artists are busy with their jobs and own projects, so progress is gonna be slow until I can release something that's presentable to hopefully garner interest. No worries though, NOKORI·WARE is still going strong. No one's quitting or anything, it's just a slow month with finals and holidays - you know how it is.

I've been doing work on AI as usual - but also working on the tower population/war'ing calculations. I've managed to successfully remove almost all of the random elements from the system, and it's now all based on careful calculations to statistically determine how wars are fought when you aren't around to see them.

To do this, I take the DNA of each AI on the floor and pair them with opponents, then calculate who would likely win the battle based on their individual personality traits. I also calculate tension levels on the floors and decide miner casualties based on how reckless and how honorable the AI are - all providing for some neat effects. I'm trying to add more graphs and data as well so you can carefully monitor population data.
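The idea can be sketched roughly like this - note that the trait names and weights below are purely illustrative placeholders, not the game's actual DNA fields:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the off-screen war resolution described above.
// The traits (aggression, skill, recklessness) are assumptions, not the
// game's real DNA fields.
class Combatant {
    final String name;
    final double aggression, skill, recklessness;

    Combatant(String name, double aggression, double skill, double recklessness) {
        this.name = name;
        this.aggression = aggression;
        this.skill = skill;
        this.recklessness = recklessness;
    }

    // A single deterministic "battle power" score derived from traits.
    double power() {
        return skill * 2.0 + aggression - recklessness * 0.5;
    }
}

public class WarResolver {
    // Pair fighters off and decide each duel purely from trait scores,
    // with no random rolls, so the same floor always resolves the same way.
    public static List<String> resolveFloor(List<Combatant> fighters) {
        List<String> log = new ArrayList<>();
        for (int i = 0; i + 1 < fighters.size(); i += 2) {
            Combatant a = fighters.get(i), b = fighters.get(i + 1);
            Combatant winner = a.power() >= b.power() ? a : b;
            log.add(winner.name + " wins");
        }
        return log;
    }
}
```

The nice property of a fully deterministic score is exactly what the post describes: the same population produces the same war outcome every time, so the results can be presented to the player as statistics rather than dice rolls.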

None of the AI in the game respawn - so the war really matters. So I need a good way to present it!

Along with the war'ing systems, I added an "event" system that notifies the player of random happenings on certain floors. It also warns you of the tension levels on the floor so that you can personally go and prevent battles if needed.

This week saw the new pathfinding system being finished, in addition to some small overall improvements. The new pathfinder should allow our AI to find more accurate paths, especially through narrow corridors that the old pathfinder couldn't path through. It has a large number of tweakable features, so we should be able to strike a good balance between performance and precision, which is important to avoid stuttering when a large number of NPCs pathfind simultaneously.

Other changes include small improvements to the SSAO. The flickering during motion has been reduced by weighting the SSAO intensity based on the view direction. We've also added threading to some bottlenecks, which should improve performance a little bit more on multicore CPUs.

That's all for this week! Hopefully I'll get a chance to implement the new order-independent transparency system soon!

So a little over two weeks ago I got this idea on how to draw transparent stuff correctly without having to sort it first. Transparent geometry has always been a big problem in 3D graphics in general. All games have some problems with the ordering of transparent geometry. As an example, it's possible to see frag grenade explosions through smoke (the explosion is always drawn on top) in CS:GO, and in the Call of Duty games you can see through smoke by looking through transparent glass. I've managed to come up with an algorithm that lets me draw my transparent stuff in any order, and the algorithm works out the rest. Such techniques are called "Order-Independent Transparency" techniques, and it's been an unsolved problem for over 20 years.

This algorithm could, with some luck, lead to getting something published in a real computer graphics journal as it does certain things that no other algorithm can do, which, again, if I'm lucky could lead to some acknowledgement in the games and computer graphics industry. As a third-year student at a university, this would be quite an accomplishment, which is why I decided to dedicate most of my time to it for the last two weeks. I've gotten in touch with a professor at my uni who's been willing to assist me, and the whole thing seems quite promising. Yay!

As things have calmed down a bit, it's time to start working a bit more on WSW and Insomnia again. The new pathfinder is still under development, but there has been some progress there. There have been some more optimizations and improved threading of various parts of the engine, which give some solid performance wins. In addition, I've done some bug fixing and improvements on a number of shaders, so quite a few special effects now have significantly less noise.

Sadly, that's all for this week. Again, apologies for the lack of progress.

This week I focused on making new attacks and refining our UI. I've been plowing through the game script and also working closely with our artists on a new look for the game that will be revealed in Demo 7.

In this post I'll be focusing on my new interaction wheel, which I actually just finished.

We Shall Wake is a crazy action game - yes - but that's just one of the focal points of the game. It's obviously taken the longest to make, and with it out of the way, I can focus on content and also adding in our other focal points - atmosphere and AI.
Atmosphere comes into play a lot with Daniel's portion of the game - he's been programming his graphics engine for the last year while I've been focused on the game itself. The artists are now working on concept art and models to fill our environments and help shape the feelings conveyed in our game.

As for the AI, you've all hopefully read my past few posts on the improvements to their personalities and fighting abilities, but what I'm expanding on now is the player's various interactions with the AI. When I wrote them, I wrote them with games like The Sims in mind - where each AI should be a different person and have a degree of feeling real.
So the best way to attain this is an interaction wheel that provides four different ways to interact with the AI, exposing the various traits they're each born with.
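As a rough sketch of what a four-slot wheel might look like (the slot names and the friendliness encoding are my placeholder assumptions, not the final design):

```java
// Hypothetical four-slot interaction wheel. The slots (TALK, GIFT, SPAR,
// RECRUIT) and the 0.0-1.0 friendliness value read from the AI's DNA are
// illustrative placeholders, not the game's real data.
enum Interaction { TALK, GIFT, SPAR, RECRUIT }

public class InteractionWheel {
    // Pick a response line based on a single personality value
    // (0.0 = hostile, 1.0 = friendly).
    public static String respond(Interaction choice, double friendliness) {
        switch (choice) {
            case TALK:
                return friendliness > 0.5 ? "Good to see you." : "What do you want?";
            case GIFT:
                return friendliness > 0.8 ? "Take this, friend." : "I have nothing for you.";
            case SPAR:
                return "Draw your weapon.";
            case RECRUIT:
                return friendliness > 0.5 ? "I'll fight with you." : "Not a chance.";
        }
        return "";
    }
}
```

In the real system each slot would presumably pull from the dialogue pools described further down, keyed off several DNA traits rather than one number.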

Right now it's pretty early, but once you guys get your hands on it, I'm sure you'll get a bit of entertainment out of it. Each AI will give you dialogue based on what kind of person they are - and sometimes they'll even give you gifts if you're well respected among them.

Other times you won't have to trigger the conversation with a dialogue wheel - sometimes you'll come across wandering rebels sitting around a campfire reminiscing over old times - or maybe something philosophical or inspiring.

Anyway, that's it for now. Daniel had to skip Wednesday because he's working on something really cool. So keep your eyes peeled!

This week's been a little slow on the game itself. Daniel and I solved some pathfinding issues with the AI, and now he's gone off to make a cool waypoint pathfinder to see if that'd fit our needs better than our basic traditional grid-based one does. Other than this, I spent the week refining the AI even more - and now we're at a point where I think it's pretty acceptable. However, I still need to code some reactions based on environmental and context-sensitive situations, along with experimenting with some adaptive systems for Decem so that he can be more lifelike than he already is in battle. I found a paper on this, so I'm going to look into it and see if it fits our needs when I get some time.

I also solved an issue with WSWSound that was causing crashes due to too many sources being active at once. OpenAL's specification promised 256 if my memory serves me correctly; however, I was experiencing issues at 226 active sources. So I've capped it at 200 to be on the safe side. This has solved all of the issues I've had with it so far, but I'm keeping a keen eye on it regardless.
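The fix boils down to managing sources through a fixed-size pool instead of trusting the driver's advertised limit. A minimal sketch, with illustrative names rather than WSWSound's real API:

```java
import java.util.ArrayDeque;

// Sketch of a capped source pool: rather than trusting the OpenAL
// implementation's advertised limit, keep a fixed pool and refuse sources
// past a safe ceiling. Names are placeholders, not WSWSound's actual API.
public class SourcePool {
    // Safe cap below the 226 active sources that started failing.
    public static final int MAX_SOURCES = 200;

    private final ArrayDeque<Integer> free = new ArrayDeque<>();
    private int active = 0;

    public SourcePool() {
        for (int id = 0; id < MAX_SOURCES; id++) free.push(id);
    }

    // Returns a source id, or -1 if the cap is reached (the caller can
    // then skip the sound or steal the quietest active source).
    public int acquire() {
        if (free.isEmpty()) return -1;
        active++;
        return free.pop();
    }

    public void release(int id) {
        free.push(id);
        active--;
    }

    public int activeCount() { return active; }
}
```

A pool like this also avoids churning `alGenSources`/`alDeleteSources` calls at runtime, which is a nice side benefit.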

Due to all of our optimizations, I've also bumped up the maximum number of AI per floor from 100 to 250. Technically I could push it to 300, but I want to give us some breathing room on the CPU. We've gone from a 2 ms budget to 4 ms, which is actually pretty awesome for me because it means I can experiment even more with smart AI systems and other gameplay features.

I've been writing up a script for the story due to not being able to find a writer that'd stick around. So I'll talk a little bit about writing the story and dialogue for the game too.

We Shall Wake aims to have a story that's well written but doesn't push itself on the player. The way I'm writing it takes this into account, making dialogue only forcefully happen every ten or so floors when a boss shows up. Otherwise, to talk to people, you can interact with them in towns and whatnot to get more info on the game world and everyone's perspective on what's going on within it.

So, along with having more linear written campaign dialogue, I also have a chapter in the script solely dedicated to just random encounter dialogue, where the AI will say something based specifically on their DNA and the personality data that it contains. Right now I've written around 172 lines of this kind of dialogue, each modifiable in around 1 to 5 ways. Each AI will have a limited amount of dialogue they can use, but for the sake of statistics, if each AI could say 5 things, and we only considered just the base of the 172 lines, we'd have around 1,183,009,464 possible combinations of dialogue from each AI.
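For the curious, that figure is exactly the binomial coefficient C(172, 5) - the number of ways to pick 5 distinct lines out of the 172 base lines. A quick sketch to verify it:

```java
import java.math.BigInteger;

// The "1,183,009,464 possible combinations" above is C(172, 5): the number
// of distinct 5-line repertoires you can pick from 172 base lines.
public class DialogueCombos {
    // Multiplicative formula for the binomial coefficient C(n, k).
    // Each intermediate result is itself a binomial coefficient, so the
    // stepwise division is always exact.
    static BigInteger choose(int n, int k) {
        BigInteger result = BigInteger.ONE;
        for (int i = 0; i < k; i++) {
            result = result.multiply(BigInteger.valueOf(n - i))
                           .divide(BigInteger.valueOf(i + 1));
        }
        return result;
    }
}
```

And that's before counting the 1-5 modifications per line, which multiply the total further.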

Sure is a lot of posts today. With all the Java performance myths out of the way, let's take a look at the threading system of WSW.

TL;DR of this entire post: Insomnia achieves approximately 3.5x scaling on a quad core. It fails to reach 4.0x because the graphics driver's thread is slow and competing with the game's threads. Future versions of OpenGL (pretty much the same as DirectX but cross-platform) should allow us to reach 4.0x scaling.

So what's a thread? To simplify this a lot, threads are essentially what allows your (single-core) processor to run multiple programs at the same time. If a processor has 4 threads to run, it'll switch between them extremely fast, so from the programmer's perspective, it looks like all 4 threads are running at the same time, but 1/4th as fast. However, it's also useful to have a single program using multiple threads. This can allow the program to do heavy calculations in the background while keeping the user interface responsive.

At some point, hardware developers realized that increasing the clock speed of CPUs was starting to become unsustainable. CPUs were getting too hot and used too much power. What they realized was that it was much cheaper to drop the clock rate a little bit and instead have more cores in them. Doubling the clock rate essentially increases power usage (and therefore heat) by a factor of 8. This means that at the same power consumption, you can get

a single core processor at 1.00 GHz with 1.0x total performance.
a dual core processor at 0.80 GHz with 1.6x total performance.
a quad core processor at 0.63 GHz with 2.52x total performance.
an octa core processor at 0.5 GHz with 4x total performance.

These 4 processors all use the same amount of power, but efficiency increases massively as more cores are added.
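Those numbers follow from the approximate cube law mentioned above: at a fixed power budget, an N-core chip can clock at (1/N)^(1/3) of the single core's speed, giving N times that clock in total throughput. A quick sketch of the arithmetic:

```java
// Reproduces the list above from the approximate cube law: per-core power
// scales with clock^3, so at a fixed total power budget an N-core CPU
// clocks at (1/N)^(1/3) of the single-core baseline.
public class CubeLaw {
    // Clock speed relative to the 1.00 GHz single-core baseline.
    static double clock(int cores) {
        return Math.pow(1.0 / cores, 1.0 / 3.0);
    }

    // Total throughput: N cores, each running at the reduced clock.
    static double throughput(int cores) {
        return cores * clock(cores);
    }
}
```

Plugging in 1, 2, 4, and 8 cores gives 1.0x, 1.59x, 2.52x, and 4.0x - matching the (rounded) figures in the list.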

So now, all of a sudden our quad core processors can take those 4 threads our original single-core processor had and actually run all 4 of them at the same time at full speed. It doesn't matter that each core is slower; running 4 of them at more than half speed is still 2.5x faster. This is the theory behind it of course. It's worth noting that in practice, the CPU cores actually share a lot of their resources (particularly the RAM and memory controller), so you're not gonna see a perfect 4x performance boost from utilizing all 4 cores. At best, you might see a 3.5-3.9x increase in performance.

The problem today is that games aren't good at using the resources they have available. Having more cores doesn't mean anything unless you have threads to run on them. Even today, many years after the introduction of multi-core CPUs, most games still don't utilize more than 1 or 2 cores (*cough* Planetside 2 *cough*), but some games do show that it's doable (the recent Battlefield games for example). Insomnia's not going to lose when it comes to threading.

Insomnia's thread system is based on splitting up the game's rendering and logic code into distinct tasks. These tasks are organized similar to a flow chart, with certain tasks requiring other tasks to be completed before they're executed. The tasks are then put into a queue, and a number of threads run these tasks one by one from the queue.

Insomnia directly or indirectly uses a large number of threads.

- N generic worker threads, where N is the number of cores the CPU has.
- 1 main thread, which is the only thread allowed to communicate with the graphics driver.
- The graphics driver has its own internal thread which is beyond Insomnia's control. Insomnia's main thread offloads work to this thread so that the main thread can work on AI and other stuff.

For the graphics and physics code, almost everything can be run on any number of cores. The only tasks that cannot be run on multiple threads are the tasks that require communication with the graphics card. Almost all of these are just small high-fives with the driver to ensure that everything's still correct, but some are pretty large. This is where the graphics driver's thread comes in and splits the work with the main thread automatically. It took a lot of work to avoid stepping on the driver's thread's toes, but I've managed to let the driver thread work completely undisturbed. It's not perfect (as will be evident later), but I'm not sure it's possible to improve this with the current version of OpenGL.
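To illustrate the general shape of such a dependency-aware task queue (this is a minimal sketch, not Insomnia's actual scheduler):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch of a dependency-aware task system: each task counts its
// unfinished prerequisites and becomes runnable only when that count hits
// zero, just like the flow-chart scheduling described above.
public class TaskGraph {
    static class Task {
        final Runnable work;
        final List<Task> dependents = new ArrayList<>();
        int remaining; // unfinished prerequisites

        Task(Runnable work) { this.work = work; }
    }

    private final ExecutorService pool =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    private final CountDownLatch done;
    private final List<Task> tasks;

    public TaskGraph(List<Task> tasks) {
        this.tasks = tasks;
        this.done = new CountDownLatch(tasks.size());
    }

    public static void dependsOn(Task task, Task prerequisite) {
        prerequisite.dependents.add(task);
        task.remaining++;
    }

    // Submit all tasks with no prerequisites, then wait for the whole
    // graph to drain. Finished tasks unlock their dependents.
    public void run() throws InterruptedException {
        for (Task t : tasks) if (t.remaining == 0) submit(t);
        done.await();
        pool.shutdown();
    }

    private void submit(Task t) {
        pool.submit(() -> {
            t.work.run();
            for (Task d : t.dependents) {
                synchronized (d) {
                    if (--d.remaining == 0) submit(d);
                }
            }
            done.countDown();
        });
    }
}
```

The key property is that independent tasks spread across all worker threads automatically, while ordering constraints (like the red main-thread-only tasks) are expressed purely as graph edges.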

Here's a rather large flow-chart-like visualization of the tasks that the rendering code is split up into. Tasks marked with red are tasks that require communication with the graphics thread, so they must be run on the main thread.

How much does this improve performance though? If I run this on a quad core, do I see 4 times higher FPS? Almost.

Here are some of the results I get on my Intel i7-4770K quad core CPU:

- The rendering code achieves 3.64x scaling.

- The physics code achieves a 3.19x scaling.

- The actual increase in frame rate is only 2.82x (which is still a 182% increase).

I blame this on the driver's internal thread, which competes for CPU time with Insomnia's threads. This is evident from the fact that the engine spends around 1/3rd of its time waiting for the driver thread to finish its work. The next generation of OpenGL should remove the restriction on the red tasks and also remove the internal driver thread, which would allow us to improve this scaling even further, but until then, this is about as good as it gets.

Holy shit, someone read all the way down here. Uh, not sure what to say... Hi, mum?

As some of you know, we're using Java to develop WSW and Insomnia. No, we're not using Unreal Engine, but thanks for the compliment. ^^ Now, a lot of people are skeptical of our choice of programming language. Java doesn't exactly have a flawless reputation when it comes to performance (and security, although that only applies to the Java browser plugin, which is not required in any way for Insomnia), but I thought I'd kill the two most common misconceptions about Java here.

a + b is equally fast in Java and C++.

Any basic arithmetic operation is equally fast in Java and C++. The Java Virtual Machine (JVM) compiles those instructions to exactly the same assembly code that C++ is compiled to in the end, although Java requires a few seconds after starting up for all the code to be compiled for optimal performance when the game is first started. There are some special instructions available from C++ that can improve performance in some math-intensive areas (for example matrix math). In our case, we actually take advantage of some of these by using math libraries that have native C++ code for the most performance-heavy places like skeleton animation, so even there our performance with Java is within 90+% of C++.

Java's garbage collection is not a problem.

Many games written in Java have problems with performance and stuttering due to the Java garbage collector, which automatically frees memory that is no longer in use. An automatic collection pass can suddenly trigger and interfere with the game's smoothness. There are three reasons why this is not a problem for us.
First, garbage collection only happens if you're actually generating garbage. It's not hard to make a completely garbage free game loop that allocates all its resources once and then reuses them indefinitely, and this is what we're aiming for.
Secondly, the garbage collection passes are fast and mostly run in parallel with the game, so the actual time the game is paused for a collection is in the range of a few milliseconds, which the CPU easily handles without dropping a single frame in almost all cases. The stuttering we get from garbage collection is 1/10th as frequent and intense as the stuttering we get from deep within the graphics driver, far outside any game developer's control.
Thirdly, the idea that allocating and freeing memory is slower in Java than C++ is a myth in the first place. The fact that memory management is completely left to the JVM is actually an advantage, as it can avoid fragmenting the heap, a common problem for C++ programs that degrades performance over time. Another massive advantage of garbage collection is that it's a lot easier for us developers to work with, so we can spend more time on new features and optimizing our algorithms instead of figuring out where that memory leak is that causes the game to crash and burn after 30 minutes of playing.
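A minimal sketch of what that garbage-free allocation pattern looks like in practice - the Particle type here is a placeholder, not our actual particle class:

```java
import java.util.ArrayDeque;

// Sketch of the "garbage-free game loop" idea: allocate a fixed pool of
// objects up front and recycle them, so steady-state play allocates
// nothing and the collector has nothing to do.
public class ParticlePool {
    public static class Particle {
        float x, y, vx, vy;
        void reset() { x = y = vx = vy = 0f; }
    }

    private final ArrayDeque<Particle> free = new ArrayDeque<>();

    // All allocation happens once, at load time.
    public ParticlePool(int capacity) {
        for (int i = 0; i < capacity; i++) free.push(new Particle());
    }

    // Reuse an existing object instead of allocating a new one.
    public Particle obtain() {
        return free.isEmpty() ? new Particle() : free.pop();
    }

    public void recycle(Particle p) {
        p.reset();
        free.push(p);
    }

    public int available() { return free.size(); }
}
```

As long as `obtain` and `recycle` are balanced each frame, the loop produces zero garbage and the collector simply never runs during gameplay.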

So where is Java actually slower, then? The biggest loss of performance in Java compared to C++ comes from the memory layout. In C++ you can use a number of techniques to force memory locality so that memory that is often used together lies in a contiguous block of RAM. This makes the program more cache-friendly, as the CPU always loads memory in relatively large blocks, so it'll "accidentally" load and cache all the required information when the first piece of memory is accessed. In Java, we have no way of forcing this, as placement in memory is left to the JVM, and the JVM may even reorder things later (again, this has other advantages). If you're aware of all this, it's not that difficult to minimize the impact. In addition, many Intel CPUs have hardware that pretty much eliminates this difference, which I'll go into detail about in my next post.
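One common way to minimize the impact in Java is to store hot data as parallel primitive arrays (structure-of-arrays) so a linear pass touches contiguous memory; a small illustrative sketch, with made-up field names:

```java
// Structure-of-arrays layout: Java guarantees that a primitive array is
// contiguous, so iterating over these arrays in order is cache-friendly
// even though we can't control object placement on the heap.
public class ParticlesSoA {
    final float[] x, y, vx, vy;

    ParticlesSoA(int n) {
        x = new float[n];
        y = new float[n];
        vx = new float[n];
        vy = new float[n];
    }

    // Tight loop over primitive arrays: sequential, predictable accesses
    // that the CPU's prefetcher handles well.
    void integrate(float dt) {
        for (int i = 0; i < x.length; i++) {
            x[i] += vx[i] * dt;
            y[i] += vy[i] * dt;
        }
    }
}
```

Compare this with an array of `Particle` objects, where each element is a separate heap object the JVM may scatter anywhere.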

Due to my lack of time, this week was limited to small optimizations here and there, mostly triggered by the release of a 166 page PowerPoint presentation by the developers of Call of Duty: Advanced Warfare, which focused on the improvements they've made to the post processing of CoD, which I have to admit are pretty damn impressive.

Motion blur has received a brand new, much improved algorithm based on the one used in Call of Duty: Advanced Warfare. Objects in motion are blurred much more accurately with more correct blur weighting, eliminating the sharp edge that was visible at times. The new algorithm is also around 94% faster. I fixed a number of visual artifacts introduced by the new algorithm, the most glaring problem being a clearly visible line that appeared along edges when combined with TSRAA. In addition, I added an optimization that checks which parts of the screen actually need the complete motion blur algorithm - the one that can handle difficult overlapping motion, sharp edges in front of moving objects, etc. The majority of the scene usually doesn't need it, so for those parts a simpler algorithm is used instead. This resulted in yet another 88% performance increase. Compared to the old motion blur algorithm, the new one is approximately 275% faster (!!!) during fullscreen motion, meaning that motion blur no longer cuts your FPS by a large amount when the camera starts moving.

The TSRAA shader also got some love this week. I identified some rather simple bottlenecks that especially slowed down the shader when using a high number of samples as they tricked the compiler into generating very inefficient code, and reworked those parts. This resulted in a massive 132% performance boost for 8xTSRAA, while 4xTSRAA saw a much smaller 10% boost.

Another post processing effect that the CoD slides mentioned was bloom. Thanks to some tips and tricks there, I managed to halve the VRAM usage and increase performance by 13% with just a few simple changes. In addition, as I was looking through the bloom shaders, I noticed a typo which was accidentally making the bloom flicker more than it should. I fixed that and also added more anti-flickering counter measures.

On the CPU side, I worked together with Brayden in an attempt to improve performance of the game logic. We realized that we were doing some redundant updating in the main logic loop, which turned out to account for around 40% of the time it took to run each update. This change will mostly affect slower computers that are limited by their CPUs when there are a significant number of AI enemies around, but in those cases it can increase your frame rate by over 60%.

Finally, shadow filtering was also mentioned in the CoD slides. Although their shadow filtering was neither faster nor better looking, it was more flexible, so instead of an on-off switch for shadow filtering, we now have an off-low-medium-high quality setting.

That's all for now. As you can see, these cumulative optimizations actually had a surprisingly large impact on overall performance. The reduced performance requirements of some of our more advanced graphics effects improves performance a lot for high-end graphics cards while also making it possible for weaker hardware to enable them, while our CPU optimizations mostly reduce the minimum CPU requirement to get smooth frame rates.

Hello. I spent this last week working on some new particle effect systems, along with my usual work on NPC and AI interactions. I also did some UI work, but that's so minimal it's not really worth mentioning.

A large part of action games lies in their particle effect systems. They're meant to give you a visual representation of each attack, to help it be more unique and give it its own identity. Since we're a small team, I can't really afford to give each attack its very own graphical effect, but I can make a large array of effects and give them size and color differences that make them seem unique.

Funnily enough, for the last six demos, I've used our blood particle effect for literally every effect in the game. Our dust, thrusters, explosions, actual blood effects, and sparks were all made with the blood texture. This is due to me not having enough time to actually implement the other effects we have in the textures folder.

However, we've recently picked up an effects artist who can provide us with an array of animated effects, which has produced some pretty cool results so far.

So basically, I spent this week writing some custom animation systems to layer over Daniel's implementation. This includes an "AnimatedGeoSystem" and an "AnimatedParticleSystem," which extend the particle and geo systems already in place. These act as a really nice framework for me to layer on a bunch of other effects that work in much the same way.
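To give an idea of the layering (the class and method names here are placeholders, not the engine's real API), an animated system mostly just adds a frame clock on top of the base system:

```java
// Hedged sketch of an "animated" layer over a base particle system: the
// wrapper steps through texture frames over time, and the renderer asks
// for the current frame index each draw. Names are illustrative only.
public class AnimatedParticles {
    private final int frameCount;
    private final float frameDuration; // seconds per animation frame
    private float time = 0f;

    public AnimatedParticles(int frameCount, float frameDuration) {
        this.frameCount = frameCount;
        this.frameDuration = frameDuration;
    }

    // Advance the animation clock each game update.
    public void update(float dt) {
        time += dt;
    }

    // Which frame of the animated texture to sample, looping at the end.
    public int currentFrame() {
        return (int) (time / frameDuration) % frameCount;
    }
}
```

The appeal of this design is that the base geo/particle systems don't need to know anything about animation; the wrapper only changes which texture region gets sampled.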

AI work mostly consists of me bug-fixing as per usual. It continually impresses me with stuff I didn't expect it to do, such as AI walking and interacting with dead bodies as if to inspect the area, or the pack system I mentioned in the last post. Most of this is a result of its relatively open (yet simple) decision making algorithms, which apply to a large number of activities. So now it's just a matter of me providing a large amount of animations for the AI to have at its disposal to make it even more lifelike.

Apologies for the late post. It's the last week before my exams, so sadly I haven't had much time to work on Insomnia. Most of the time I did spend on Insomnia went into bug fixing the new anti-aliasing and other parts of the engine.

I've modified how the engine stores motion vectors. Before, the engine stored the motion vectors as normalized values, which means they were stored as a percentage of the screen resolution, so a movement of 1 pixel to the right at 1920x1080 was stored as 1/1920. Now it stores them in actual pixels. This has a number of advantages. First of all, the precision does not depend on resolution, which caused problems at extremely high resolutions. At 7680x1440 (3 monitors >__>) I noticed a significant degradation in motion blur quality because of that. Secondly, it's actually very slightly faster, as the motion vectors were converted to pixel values in the end anyway, so this saves a few operations later.
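A small worked example of the precision argument, under the assumption of 16-bit storage (the engine's actual format isn't stated here):

```java
// Worked example of why normalized motion vectors lose precision at high
// resolutions, assuming a 16-bit storage format (an assumption for
// illustration). A normalized value's smallest step is a fixed fraction
// of the screen width, so measured in *pixels* it grows with resolution.
public class MotionVectorPrecision {
    static final int LEVELS = 1 << 16; // 16-bit storage: 65536 steps

    // Smallest representable motion, in pixels, for normalized storage.
    static double normalizedStepInPixels(int screenWidth) {
        return (double) screenWidth / LEVELS;
    }
}
```

At 7680 pixels wide the quantization step is four times coarser than at 1920, while pixel-space storage keeps the same precision at every resolution - which matches the degradation I saw on the triple-monitor setup.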

TSRAA got a few bug fixes and improvements as well. A few temporal reprojection (=estimating where the triangle was in the previous frame using the previously mentioned motion vectors) bugs that reduced quality in motion have been fixed. I've also added a new ghosting prevention system which disables the temporal component for parts of the scene that are in fast motion. This turned out to actually provide a significant performance improvement when the scene is in motion as I skip quite a bit of work there. When paired with motion blur, any remaining aliasing is handled by the motion blur, so this shifts some freed up resources from unnoticeable anti-aliasing over to motion blur which only activates during fast motion. A win-win, as they say.

In addition, we've been getting some quite sneaky problems when utilizing multiple CPU cores in the engine, and I've been doing quite a bit of investigating into what's causing it. I still haven't figured out exactly what the cause is, but I have rewritten and simplified the threading system to reduce the chance of bugs and hopefully fix the problem. So far the bug has yet to appear again, but these kinds of bugs are hard to debug, as they happen randomly due to the timings and distribution of tasks over CPU cores, which is up to the OS to decide. Nothing we can't handle though. =3

That's all for this week. Next week I thought I'd write a more in-depth rant about how Insomnia's threading system works and how it allows the engine to utilize any number of CPU cores for almost linear scaling of the parts of the game that Insomnia takes care of.

Brayden here. This week I'm going to talk a little about refining code.

Basically, what generally takes us the longest when working on the game is refining code. We generally add features every other week, and then spend the following week tweaking and fixing everything we made the week before. That's why it's so hard to keep these blog posts frequent: we often only get to add new cool stuff worth mentioning every other week.

This week, I primarily refined the AI. I did some more work on the input systems and did fixes all around the engine, and I'm quite proud of the result. The AI now works in a somewhat pack-like state, which is kind of interesting because I didn't intend this. For example, in what I'm calling the "warzone floor" - the second floor of the game and part of the tutorial - I noticed that the rebels would fight off guardians and then move forward in a pack to help out their friends who were being pushed back above. Traveling with them and helping turn the tides of the short battle was pretty fun for me, because it felt like I was really part of the offensive.

Ricky has been doing his thing with the texturing - in fact, Novem's new textures are done. I just need Forrest to set up his rig with the new UV coordinates and it'll be in the game. These textures not only add detail, but also establish an art style we're going to start aiming for. The model itself received fixes to the normals from Ivan. No screenshots just yet, though - we're saving the new art style for the upcoming 7th playable demo and video.

Rafael is also working on his new tracks for the game to go with the tutorial, and he hooked us up with some new sound designers who are going to be doing some cool stuff soon as well. By the way, Rafael did some concept tracks for We Shall Wake a while back, so if you're interested in hearing what the game's going to sound like, check them out!

This week saw a lot of progress, with much of my time spent on the new anti-aliasing technique that will be featured in Insomnia, but first I'd like to mention a few minor things.

SSAO has been improved even further to smooth out the normal of the surface being tested, so the intense shimmering and aliasing that occurred in some rare cases have been mostly eliminated. This had a minor performance hit, but the quality improvement allows me to cut back on other parts of SSAO, so performance remains the same.

The UI renderer got a complete rewrite and is now faster and better than ever. It turns out that some of the special effects that we applied to the UI did not result in any visible visual change. They also turned out to be surprisingly expensive on weak hardware (God damn it, Intel; you can't spend 4 of my precious 16 milliseconds on the user interface!) The revamp both makes these special effects look better AND allows them to be turned off as a desperate last resort to save some GPU power.

Other parts of the engine got some minor optimizations as well, but nothing near as significant as last week. For example, particle rendering performance was improved a little bit.

As a result of our optimization push, we've managed to optimize the game for low-end hardware to an almost ridiculous point. Insomnia used to run at under 10 FPS on a weak Intel GPU, but thanks to our optimization efforts, we're now getting almost 40 FPS on that same GPU, meaning that as long as the game starts (i.e. the GPU supports OpenGL 3.2), you should be able to tweak it to a playable FPS.

With that out of the way, let's get down to what this post will mainly be about: anti-aliasing! Be warned, this is a pretty long text, so if you're not ready for a deep dive you may want to just skip to the middle where there are some screenshot links.

Anti-aliasing is almost always featured in games in some form. I suspect that if you find my blog posts interesting enough for you to read this far, you already know what anti-aliasing is and does, but for the sake of it I'd like to give a brief explanation of what anti-aliasing is.

Anti-aliasing techniques attempt to minimize the aliasing in the scene. Aliasing occurs when there are details or movement that are less than a pixel in size. A 3D model is made out of lots of mathematical triangles, each described by 3 points in 3D space. We project this 3D model to the screen and check which pixels the triangles overlap. However, we do not have an infinite number of pixels, so the result is not entirely accurate. If a thin triangle happens to fall between the pixels, it'll completely disappear, and if a triangle moves less than a pixel, the pixels it covers may change in weird patterns that simply put look unnatural and unpleasant - our brains interpret these unnatural patterns as flickering, shimmering, and even objects changing shape.

There are two approaches to anti-aliasing today. The first one is to simply generate and gather more information about the triangles. This is what the famous MSAA, also called multisampling, does. Usually the graphics card simply tests if a pixel's center lies inside the triangle to determine whether the pixel is covered or not, but to better describe the shape of the triangles we can test multiple points inside the pixel. For 4x MSAA, we get 4 times as much information to work with, so we get a much higher quality result.
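To make that concrete, here's a little Java sketch (not engine code; the sample positions and the test triangle are made up for illustration) showing how testing 4 points per pixel recovers coverage that a single center sample misses entirely:

```java
// Sketch: estimating how much of a pixel a triangle covers.
// With 1 sample (the pixel center) coverage is all-or-nothing;
// with 4 samples (as in 4x MSAA) we get a fractional estimate.
public class CoverageDemo {
    // Standard half-plane test: is point (px, py) inside the triangle?
    static boolean inTriangle(double px, double py,
                              double ax, double ay, double bx, double by,
                              double cx, double cy) {
        double d1 = sign(px, py, ax, ay, bx, by);
        double d2 = sign(px, py, bx, by, cx, cy);
        double d3 = sign(px, py, cx, cy, ax, ay);
        boolean hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
        boolean hasPos = d1 > 0 || d2 > 0 || d3 > 0;
        return !(hasNeg && hasPos);
    }

    static double sign(double px, double py, double ax, double ay, double bx, double by) {
        return (px - bx) * (ay - by) - (ax - bx) * (py - by);
    }

    // Coverage of the unit pixel [0,1]x[0,1] using a list of sample positions.
    static double coverage(double[][] samples,
                           double ax, double ay, double bx, double by,
                           double cx, double cy) {
        int hit = 0;
        for (double[] s : samples)
            if (inTriangle(s[0], s[1], ax, ay, bx, by, cx, cy)) hit++;
        return (double) hit / samples.length;
    }

    public static void main(String[] args) {
        double[][] center = {{0.5, 0.5}};
        // A rotated-grid pattern similar in spirit to real 4x MSAA sample positions.
        double[][] msaa4 = {{0.375, 0.125}, {0.875, 0.375}, {0.125, 0.625}, {0.625, 0.875}};
        // A thin sliver along the pixel's left edge that misses the center.
        double ax = 0, ay = 0, bx = 0.45, by = 0, cx = 0, cy = 1;
        System.out.println("center sample: " + coverage(center, ax, ay, bx, by, cx, cy)); // 0.0
        System.out.println("4 samples:     " + coverage(msaa4, ax, ay, bx, by, cx, cy));  // 0.5
    }
}
```

With only the center sample, the sliver vanishes completely; with 4 samples, it gets the partial coverage it deserves.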

The second is the filtering approach. This involves looking at the information we already have and attempting to make the most out of it. FXAA, another famous anti-aliasing technique, falls into this category. FXAA attempts to analyze the final image and reduce the aliasing in it by detecting edges and smoothing them out. It's often called inferior to MSAA, and for good reason: it cannot restore triangles that never covered a single pixel center, as it's limited to the information at hand.
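Here's a toy Java version of FXAA's detection step only (the luma weights are the common Rec. 709 ones; the 0.125 contrast threshold is a made-up tuning value, and real FXAA then goes on to blend along the detected edge):

```java
// Sketch of FXAA's first step: flag edges by local luma contrast.
public class LumaEdge {
    // Rec. 709-style luma weights, commonly used by FXAA implementations.
    static double luma(double r, double g, double b) {
        return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    // An edge is flagged when the luma range across a pixel and its
    // 4 neighbors exceeds an absolute threshold (hypothetical value here).
    static boolean isEdge(double center, double n, double s, double e, double w) {
        double max = Math.max(center, Math.max(Math.max(n, s), Math.max(e, w)));
        double min = Math.min(center, Math.min(Math.min(n, s), Math.min(e, w)));
        return (max - min) > 0.125; // contrast threshold, tuned per game
    }

    public static void main(String[] args) {
        double bright = luma(1, 1, 1), dark = luma(0.1, 0.1, 0.1);
        System.out.println(isEdge(bright, bright, dark, bright, dark)); // true: strong contrast
        System.out.println(isEdge(dark, dark, dark, dark, dark));       // false: flat area
    }
}
```

Pixels that fail this test are skipped entirely, which is part of why FXAA is so cheap.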

Another filtering technique called Temporal Supersampling combines the current frame with the previous frame. This essentially grants us more information to work with for free, but introduces a lot of problems, like ghosting or blurry textures. Regardless, techniques that use the previous frame have been used in many successful commercial games, like Crysis 2. It's also used in Nvidia's new anti-aliasing technique, MFAA, but I digress.
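One common counter-measure against that ghosting, used by many temporal techniques in commercial games, is clamping the history sample to the current frame's local neighborhood before blending. A minimal sketch of the idea (this is an illustration, not our actual shader code, and the blend weight is arbitrary):

```java
// Sketch: temporal blend with neighborhood clamping to fight ghosting.
public class TemporalBlend {
    // Clamp the history sample to the current frame's local min/max;
    // a ghost trail that falls outside that range gets snapped back
    // before it's blended with the current frame.
    static double resolve(double current, double history,
                          double neighborhoodMin, double neighborhoodMax,
                          double blend) {
        double clamped = Math.max(neighborhoodMin, Math.min(neighborhoodMax, history));
        return blend * clamped + (1.0 - blend) * current;
    }

    public static void main(String[] args) {
        // A bright history value (1.0) over a dark region [0.0, 0.2]:
        // without clamping we'd blend toward 1.0 and see a ghost,
        // with clamping the history is capped at 0.2 first.
        System.out.println(resolve(0.1, 1.0, 0.0, 0.2, 0.9)); // ~0.19
    }
}
```

The trade-off is that clamping throws away some of the "free" information, which is exactly the kind of clash TSRAA has to handle gracefully.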

Up until now, Insomnia has only supported FXAA, and for good reason. FXAA may have low quality, but it's cheap and extremely easy to implement in any engine. MSAA, on the other hand, is quite the opposite. MSAA requires modifying approximately 60% of the graphics engine to implement. This is not something your average indie developer can afford the time to do. We'd essentially have TWO engines to maintain at that point, which means adding more features becomes even more time consuming. In addition, MSAA has problems with high-contrast edges, effectively causing bright edges to bleed over darker ones, which reintroduces aliasing! These are the reasons why Insomnia doesn't, and won't, support MSAA.

I believe that the answer lies in hybrid solutions that both generate additional data and do proper filtering of it. Therefore, Insomnia features a new anti-aliasing technique developed by yours truly which combines all three of the above described techniques. I call it TSRAA, Temporal Subpixel Reconstruction Anti-Aliasing, and as of now it is officially part of the Insomnia source code.

Note: Still images are not enough to fully appreciate the benefits of TSRAA. What's REALLY going to blow your mind is how smooth it looks in motion.

What TSRAA builds on is that these three techniques complement each other very well. Thanks to this, the quality of 4xTSRAA in almost all cases is higher than that of 4xMSAA, and in the rare case where there is clashing information in a pixel, the quality drops to ghosting-free temporal supersampling only (which could be argued to provide higher quality than 4xMSAA as well). TSRAA also implements a counter-measure for high-contrast edges (also an innovation from me =p), ensuring smooth gradients at all times. Here's a comparison!

Performance and memory usage are both excellent. Working with the data you already have is always fast, and the extra data we do generate is both cheap to generate and requires little memory. 4xTSRAA has around half the performance impact and memory usage of 4xMSAA. Even 8xTSRAA is faster and uses less memory than 4xMSAA, and less than 1/3rd of the memory used by (shiver) 8xMSAA.

Finally, TSRAA was a lot less complex to integrate into our existing engine. While MSAA essentially requires complete rewrites of large parts of the engine, TSRAA integrates and builds on top of our existing code in a much cleaner way.

So there you have it! Congratulations for making it this far! I know that this got a bit long, but I'm a bit of an anti-aliasing fanatic, and it's really exciting for me to actually get the chance to come up with and even implement all this. This is the culmination of over two months of work! Thank you for your time!

PS:
[04:42:25] Mokyu (TheAgentD): Can I write "buttload" on the blog?
[04:42:29] Mokyu (TheAgentD): Will you get mad at me?
[04:42:44] Brayden: no lol
[04:42:46] Brayden: why would I?
[04:42:51] Mokyu (TheAgentD): BUTTLOAD IT IS
[04:42:52] Brayden: Also why did you put buttload in quotes
[04:43:01] Brayden: I'm sitting here giggling like an idiot lol

We Shall Wake's first twenty minutes of gameplay are nearly complete. The tutorial, second floor, and the first encounter with Decem have all been coded.

Along with this, I spent the last week doing major optimization and revision work on the combat engine. Not only have I made the Input systems more responsive, I've also tried to give the combat more weight in general with more hit reactions and flinch animations - along with less floaty knockbacks.

Specifically, I've removed tapping from the combo systems and replaced it with a "press to activate" system. This is more traditional and has proven to be more responsive than what we originally had.

For hit reactions, I've added a dynamic flinching system that blends in with an enemy's current animation. That means if you shoot an enemy while they're walking towards you, their chest will flinch back as they walk. However, if they're stunned, a more traditional flinch animation will interrupt whatever they were doing. This gives you more feedback on the current condition of your enemy.

I've reduced the slideback from basic hits so that combat will be less slippery and more stationary. However, it's still possible to move across the map at high speeds during combat, as the combat AI has improved dramatically. In fact, when I first coded Decem, Daniel couldn't tell which MORS was the player (so, technically, I passed the Turing test! Ha).

We also improved our pathfinding systems to be more robust. It's a 3D pathfinder instead of a 2D one now, so AI react appropriately to ceiling and ledge heights. This solves the old problem of AI jumping into ceilings and leaping up 30-foot ledges with no strain.
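In sketch form, the per-step walkability test boils down to checking headroom and ledge height. This toy Java version uses made-up agent dimensions and is not our real pathfinder code, but it shows the two checks the 2D version was missing:

```java
// Sketch: per-cell walkability checks for a height-aware pathfinder.
public class ClearanceCheck {
    // Hypothetical per-cell data: floor height and ceiling height.
    static boolean canStand(double floor, double ceiling, double agentHeight) {
        return (ceiling - floor) >= agentHeight;
    }

    // A step is walkable if the target cell has headroom AND the ledge
    // isn't taller than what the agent can jump.
    static boolean canStep(double fromFloor, double toFloor, double toCeiling,
                           double agentHeight, double maxJump) {
        return canStand(toFloor, toCeiling, agentHeight)
            && (toFloor - fromFloor) <= maxJump;
    }

    public static void main(String[] args) {
        System.out.println(canStep(0, 2, 10, 2, 3));  // 2m ledge, jumpable -> true
        System.out.println(canStep(0, 9, 10, 2, 3));  // huge ledge, low headroom -> false
        System.out.println(canStep(0, 0, 1.5, 2, 3)); // ceiling too low -> false
    }
}
```

A 2D pathfinder only ever sees the first condition implicitly; adding the ceiling and ledge tests is what stops AI from pathing into ceilings.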

Once I lay the finishing touches on your first encounter with Decem, I'll turn my focus back to the general gameplay and dungeon generation systems. Demo 7 will be released when this is done and we've added in new graphics to hopefully make things a bit more colorful and interesting - and while we're leaking modelers as usual, the other artists are hard at work to make some cool concept work and music. We're lucky and thankful to have them - thank you Ricky, Anthony, Ivan, and Rafael!

We've gotten a lot of questions in terms of kickstarters and donation boxes, and we're going to try and work something out that works conveniently for us and you so that you can contribute cheaply without hassle. But we'll come back to this later - if you have any suggestions yourself, feel free to email us. We love to hear feedback from people interested in our game regardless, so even if you want to just say hi, feel free.

First of all, sorry for missing the deadline on my first post. x___x Let's hope uni is less intensive in the future...

This week saw the addition of a number of new features. I thought I'd write about the two most visual ones.

The first feature is procedurally generated moss applied to the terrain. It can be used to essentially override certain parts of the terrain's textures with a moss texture. Even cooler, the moss changes the light properties of the terrain, so moss growing on metal will reflect light as moss should, not as the underlying metal.

With proper lighting applied, it all looks decent, but when there is no direct lighting and only ambient lighting is applied, the result is really unpleasant.

Every single detail in the terrain melts together to one incomprehensible mess. I can't even see where the floor ends and the wall starts.

The solution was the (re)implementation of SSAO into the engine. SSAO is short for Screen-Space Ambient Occlusion. It's a technique first introduced in Crysis 1 that essentially analyzes the scene and dims pixels that are deemed to be partially occluded by surrounding pixels. Not only does this look pleasant, it also massively increases the depth perception of the scene and makes it easier to understand the shape of the objects in it, as it more closely mimics how the scene would be shaded in real life.
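The core idea fits in a tiny sketch: count how many sampled neighbors sit in front of the shaded point, and dim the ambient term by that fraction. Real SSAO samples a hemisphere oriented by the surface normal; this toy Java version (with made-up depths and bias) only compares depths, but the principle is the same:

```java
// Sketch: the core comparison behind screen-space ambient occlusion.
public class SsaoSketch {
    // Fraction of sampled depths that occlude the center point.
    // The bias prevents a flat surface from occluding itself.
    static double occlusion(double centerDepth, double[] sampleDepths, double bias) {
        int occluded = 0;
        for (double d : sampleDepths)
            if (d + bias < centerDepth) occluded++; // sample is in front -> blocks ambient light
        return (double) occluded / sampleDepths.length;
    }

    // Dim the ambient lighting term by the occlusion factor.
    static double ambient(double baseAmbient, double occ) {
        return baseAmbient * (1.0 - occ);
    }

    public static void main(String[] args) {
        double[] corner = {9.0, 9.2, 9.1, 9.3};       // neighbors in front of a 10.0-deep pixel
        double[] flatWall = {10.0, 10.0, 10.0, 10.0}; // neighbors at the same depth
        System.out.println(occlusion(10.0, corner, 0.05));   // 1.0: a crease, gets darkened
        System.out.println(occlusion(10.0, flatWall, 0.05)); // 0.0: open surface, untouched
    }
}
```

That's why creases, corners and contact points darken while flat surfaces stay bright - which is exactly what separates the floor from the wall in the ambient-only case.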

Notice especially how the details on the floor and the walls are much easier to see. I also spent quite some time optimizing the shaders and packing the data together more efficiently to improve performance to less than 1ms at 1080p at high quality. The SSAO can also be tweaked to run on significantly weaker hardware than mine at reduced quality.

In addition, for the last month or so we've been doing extensive optimizations of nearly all parts of the engine.

We identified a massive GPU bottleneck when there were lots of AI entities on the screen. Even though they were barely visible, they were made out of tens of thousands of triangles, which slowed things down immensely. To prevent this, I recently added a LOD (Level Of Detail) system to the model renderer, allowing us to switch out high-quality models for lower-quality models once they're at a sufficient distance from the camera that the difference is negligible. Our modelers will be providing multiple versions of each model with different triangle counts, which Brayden in turn will incorporate into the game through the LOD system I made, so this was essentially something everyone on the team helped accomplish.
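The selection logic itself is simple; in sketch form (with made-up distance thresholds, not our actual tuning) it's just picking the last threshold the camera distance has passed:

```java
// Sketch: picking a LOD level from camera distance.
public class LodSelector {
    // Hypothetical distance thresholds (world units) at which LOD 0, 1, 2 kick in.
    static final double[] LOD_DISTANCES = {0.0, 30.0, 80.0};

    // Returns the index of the lowest-detail model justified by the distance.
    static int levelFor(double distance) {
        int level = 0;
        for (int i = 1; i < LOD_DISTANCES.length; i++)
            if (distance >= LOD_DISTANCES[i]) level = i;
        return level;
    }

    public static void main(String[] args) {
        System.out.println(levelFor(5.0));   // 0: full-detail model
        System.out.println(levelFor(45.0));  // 1: medium model
        System.out.println(levelFor(200.0)); // 2: lowest-poly model
    }
}
```

The hard part isn't the code - it's producing good lower-poly versions of every model, which is where the modelers come in.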

On my end, I've also done a number of optimizations specifically targeting low-end hardware. I've gone through almost all shaders the game uses and optimized them as much as possible. The post-processing pipeline has been restructured to be faster, more modular and enable new features in the future. Many features (for example distortions and shadows) that were permanently enabled before can now be toggled on and off to cram out some more frames per second on the slowest of hardware out there.

Here are some examples of performance wins you'll see in the next version.
- LOD system: 200-300% increase in FPS with 100s of AI on the screen at lower graphics settings.
- Motion blur: Approximately 10-20% faster depending on hardware.
- Postprocessing: Several stages have been merged to a single stage, providing a small 5-10% boost.
- Transparency effects: Small optimizations, plus quality improvements to particle motion blur.
- Rendering: Reduced VRAM bandwidth usage by 15-30% depending on settings.
- Lighting: Reduced VRAM bandwidth usage by 15%.
- VRAM usage: Reduced by approximately 10-15%.

We're going to be trying something a little different with the blog now that things are picking back up. From now on, Daniel will be posting on Wednesdays, whereas I'll be posting on Saturdays as per usual! So now you can expect updates not only on the game front, but also on the engine front.

So: Saturdays = updates on We Shall Wake, Wednesdays = updates on Insomnia.
More for you to read, and more incentive for us to not be lazy and to actually get things done!

Now that the formalities are out of the way, I'd like to talk about what I've done this week now that Daniel has gone and asserted his programming dominance. Funnily enough, we joke about having a "programming rivalry" so that we'll both stay at the top of our A-Game - so these blog posts will be taking it to a whole new level. Ha!

I've been doing AI and combat work - along with my usual bugfixes. We Shall Wake was designed from the beginning to have a huge dynamic world, with equally dynamic AI to inhabit it - so I've been making placeholder models and the necessary programming to place them in the world and have all of these aspects interact with each other.

We also have a lot of factions to consider, such as the miners, rebels, and guardians, and the relationships between the MORS brothers themselves. Each faction has its own characteristics that impact how the world is built and what these interactions are - for example, miners are not nomads, so the dungeon generator builds a house for each miner present at the very first generation of the world. However, when these miners are killed, they never respawn - unlike their houses.

What exactly do these interactions consist of, you may ask? Some of them are more mechanical, such as how the pathfinder decides to build its paths based on which static entities are where - along with the general topography of the land. Others can be more visible, such as AIs of the same faction walking around together, talking to each other, hanging out at campsites, or visiting the graves of dead allies and friends.

Interactions are also influenced by the personality of each individual AI - as each one is given a unique DNA set that is saved permanently. This impacts small things such as an AI being impatient and running to his destination rather than walking - to whether or not a Guardian will challenge you to a battle or ignore you if you're hurt and not fit for a true duel. DNA also has values such as their stubbornness, skill at combat, social tendencies, honor, pride, bravery - and many more - which all come into play at some point in their logic systems.
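In sketch form, seeding a random generator with a saved value is one way to make a DNA set both random and permanent. This is only an illustration - the trait names, thresholds and decisions below are made up, not our actual trait list:

```java
import java.util.Random;

// Sketch: a permanent, randomly generated personality "DNA" for an AI.
public class AiDna {
    final double patience, bravery, honor, combatSkill;

    // Seeding makes the DNA reproducible: save the seed, and the same
    // personality comes back every time the AI is loaded.
    AiDna(long seed) {
        Random rng = new Random(seed);
        patience = rng.nextDouble();
        bravery = rng.nextDouble();
        honor = rng.nextDouble();
        combatSkill = rng.nextDouble();
    }

    // Example decision: an honorable Guardian ignores a wounded player
    // rather than challenge someone unfit for a true duel.
    boolean willChallenge(double playerHealthFraction) {
        return playerHealthFraction > 0.5 || honor < 0.3;
    }

    // Impatient AIs run to their destination instead of walking.
    boolean runsToDestination() {
        return patience < 0.4;
    }

    public static void main(String[] args) {
        AiDna dna = new AiDna(42L);
        // Same seed always yields the same personality:
        System.out.println(dna.patience == new AiDna(42L).patience); // true
    }
}
```

Each trait then only costs one saved number per AI, which is what makes keeping hundreds of permanent personalities around cheap.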

Many times you can traverse the floors of Yarib, return, and see familiar faces and personalities - and that's because they are in fact the same AIs. However, a lot of the time you won't see those faces, as while you were gone, they may have been killed in a clash between the guardians and rebels.

These battles and deaths are not random either - each floor operates while you are gone, so if you were to destroy all of the guardians on a floor and on the floors bordering it, there would be no battles while you're away, because the floor is protected by its neighbors. This is what We Shall Wake wants to achieve: a world that you, as the legendary MORS 09, can change and impact.

That's all from me for this week, take care!

P.S. On an unrelated note, Siliconera wrote two articles on us, check them out!

My name is Daniel and I'm the graphics and physics programmer of the We Shall Wake project and the Insomnia engine. I'm here to ramble about the tech that the Insomnia engine uses to maintain playable frame rates and get better graphics to make it look like we know what we're doing. =P These posts are going to be pretty tech-intensive, but I hope some of you appreciate this kind of stuff anyway.

One of the most performance-sensitive areas of graphics programming is getting your data to the graphics card. Insomnia uploads a large amount of data each frame, and it's very important that this is done in a timely manner. This data includes position data for 3D models, as well as a large amount of skeleton animation data that needs to be updated and streamed to the graphics card each frame.

Let's start with some history. The easiest way to upload data to the GPU is to simply pack up the data into a data buffer and tell the driver where the data is. This also happens to be the slowest way. This is because of the number of times the data has to be copied.

1. The engine packs up the data.

2. The engine tells the graphics driver where the data is. The driver creates its own copy of the data to ensure that it doesn't get modified by the engine before the driver has a chance to upload it to the GPU.

3. The driver sends the data to the graphics card's video RAM.

The data is essentially copied 3 times! What a waste!

A much more modern way of uploading data to the GPU is to map memory. This gets rid of one copy, but it also introduces some new problems. Buffer mapping essentially works like this: Instead of telling the driver where the data is, we ask the driver where it wants the data to be placed. The driver gives us a memory pointer which tells us where to place our data. We can then write directly to this place, eliminating point 2 in the above list. When we're done writing, we tell the driver we're done and it ships it off to the graphics card as usual. The problem with buffer mapping is that it requires a large amount of validation and error checking from the driver. It has to ensure that the memory pointer isn't in use already, and that the copy to the graphics card is already done. This can cause some nasty cases where the CPU has to wait for other operations to finish, which is called a "stall".

Just a few months ago, a new technique for uploading data to the graphics card was developed, and driver support for it has finally been implemented by AMD, Nvidia and Intel. The technique is called "persistent buffer mapping", and it's quite revolutionary. Instead of giving us a pointer into the driver's memory (which is still normal RAM), the driver literally hands the engine a pointer directly into the graphics card's video memory. When we write some data, the driver guarantees that it'll immediately be sent to the graphics card, without any additional copies. Even better, the driver now allows us to keep this magic memory pointer forever instead of having us ask for a new one each frame, so the expensive map operation is gone too. The data is therefore sent directly to the GPU without any unnecessary copies. The cost is that we, the game developers, have to take care of the validation that the driver has traditionally done for us, but this is a small price to pay: since we know exactly how the data will be used, we can skip almost all of the validation the driver used to do, so performance is much higher.
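To make "we do the validation ourselves" less abstract, here's the kind of CPU-side bookkeeping this pushes onto the engine, sketched in plain Java. In the real thing, the pointer would come from glMapBufferRange with GL_MAP_PERSISTENT_BIT, and the "fences" would be glFenceSync objects waited on with glClientWaitSync; this toy version just models the offset/fence logic for a common triple-buffered region scheme:

```java
// Sketch: CPU-side bookkeeping for a persistently mapped buffer,
// split into 3 per-frame regions so we never write data the GPU
// might still be reading.
public class PersistentRing {
    static final int FRAMES_IN_FLIGHT = 3; // triple-buffered regions
    final int regionSize;
    final boolean[] fencePending = new boolean[FRAMES_IN_FLIGHT];
    int frame = 0;

    PersistentRing(int regionSize) { this.regionSize = regionSize; }

    // Begin writing this frame's data. Returns the byte offset of the
    // region to write into. If the GPU hasn't signaled that it's done
    // with this region, a real engine would block on glClientWaitSync here.
    int beginFrame() {
        int region = frame % FRAMES_IN_FLIGHT;
        if (fencePending[region]) {
            // glClientWaitSync(fence[region], ...) would go here.
            fencePending[region] = false;
        }
        return region * regionSize;
    }

    // After submitting this frame's draw calls, place a fence so we
    // know when the GPU has finished reading the region.
    void endFrame() {
        fencePending[frame % FRAMES_IN_FLIGHT] = true; // glFenceSync(...) in real code
        frame++;
    }
}
```

With three regions in flight, the CPU is always writing one region while the GPU reads another - which is exactly the synchronization the driver used to enforce for us with stalls.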

But enough theory! Let's see some numbers! The following is the test results from a very heavy scene, with hundreds of shadow-casting lights, a bumpy terrain and over 50 skeleton animated 3D models.

Traditional buffer mapping:

FPS: 46

Frame time: 22.8425 ms

Render time: 20.3548 ms

Persistent buffers:

FPS: 72

Frame time: 13.9157 ms

Render time: 5.5067 ms

The difference is HUGE. We see a large jump in raw FPS (a 56% increase), with the time taken to render a frame dropping from 23 ms to 14 ms. However, the gains are actually far more substantial than a raw increase in frame rate: the time it took to submit all rendering commands dropped to almost 1/4th of what it originally was. We've essentially shifted the bottleneck from the game's rendering code to the driver. This is very beneficial to us, as it leaves a lot more CPU time for the physics engine and AI to play around with.

Sadly, not all graphics cards that we want to support can utilize this new method of uploading data to the GPU. Therefore the engine can easily fall back to traditional buffer mapping if persistent buffers are not supported. In addition, from the testing all you guys have done for us, we've determined that many of you have old drivers with a buggy implementation of persistent buffers, so the engine even tests them to make sure that they work properly before enabling them. Fighting driver bugs - just another day at the office (we don't have an office).

I've also linked Daniel to this blog, so hopefully he'll be making some posts soon to elaborate on his graphics engine optimizations and changes for you nerds out there.

As for AI, I've added a new personality and trait system. Each AI will have its own randomly generated DNA, and all sorts of facets such as confidence, bravery, combat smarts, common sense, respect for authority, and many more are taken into consideration during combat. Every battle will be different in We Shall Wake simply because different AI will opt for different styles of fighting. And this is on top of their randomly generated movesets (which follow the same rules as yours!)

You'll be able to memorize each enemy's attack patterns, but it'll take some minor observation time if you don't plan on winging it.

The team is also now going to be called Nokoriware, or Nokori for short. We've made new emails that you can contact us at:

Sorry for the lack of news - I hope you haven't lost interest in the project!

Since the release of Demo 6, I've been focused on AI improvements, combat, and the beginning of the game as a whole. I've implemented various dungeon systems; as of now, the game will initialize on first run and generate your personalized tower to explore - along with its various communities and landmarks. While most of your experiences will be roughly the same, the order in which events transpire may vary; and for those of you who are put off by "randomly generated," rest assured that our generation is only a means to put (hopefully fun) fluff between the various boss floors and pre-made areas.

Improvements to AI have mostly been optimizations - and I've had a major increase in the number of AI allowed on a floor from the update logic alone. Using LOD logic systems, we can fit about 100 AI on one floor - without LOD models even being enabled - before I experience slowdown on my rig, compared to the maximum of 60 in Demo 6.

As for combat, it's mostly been just bugfixes and new moves.

And finally, I've added our new triangle-list collision systems into the game, meaning that complex environments are now possible on top of our pre-existing tile system.

Once our ideas are more realized and we feel we have something to show, I'll upload a new video. Blog updates may not be consistent for a while, but when I do decide to resume making consistent updates, I'll make sure to leave a post here saying when.

So, I know I didn't update last week. That's OK. We're working on a new video that's going to have a lot of progress in it. This means that we'd like to refrain from posting too many screenshots for a while so you can get the full effect of the massive amount of progress we've made in one quick burst that's going to blow you out of your chair.

We've got a concept artist, two modelers, and a rigger working on assets. Of course, Daniel and I are programming, and I'm also doing the animation.

This may also mean that soon we'll have enough for the sound designers to go off of in terms of engineering us some sound effects.

This week has been rather slow, we've been doing a few internal things but mostly just rough-drafting a lot of ideas. We've got Joshua working on some environmental art, but other than that, there's not much to report.

I've also been animating a ton of combat animations, and I'm almost ready to start implementing them.

On another note, a lot of people have been asking if we have a kickstarter or donation page - so I implemented advertisements into the blog. If you see one that interests you even remotely, clicking on one can benefit both us and the advertiser.

Forrest is currently rigging MORS, and I've been vastly expanding and modifying the combat engine.

The rigging is going well, we ran into a minor problem with the skeleton being too small, but that's an easy fix. So we should be good to go soon.

Along with that, I've adopted a new method of gaining combat momentum by giving you two basic combos to lead into the rest of your attacks: you can start slow and work your way up in speed, start fast and slow down, start slow and end slow, or start fast and end fast. You mix and match your punches and kicks to activate different moves as they become contextually available.

It's pretty dang neat.

We also picked up an artist who's going to be hooking us up with some crazy environmental art:

Although this picture in particular is insanely early. I'm just showing we're actually going to have like, graphics soon. Isn't that awesome?

Anyway, that's all for now. More to come! Stay tuned! Saturday updates are now full throttle again.

While our regular Saturday Updates still won't be resuming until next Saturday (May 31st), I'd like to discuss a little more about what We Shall Wake is and where we're taking it. I often get E-Mails and comments on YouTube asking what it really is.
This has changed a lot since the last time I made one of these posts, so I'll elaborate on it carefully.

First of all, I'd like to mention that Circadian isn't just me; we're actually a couple of guys developing the game. I'm Brayden: the director, gameplay and logic programmer, sound programmer, and animator. My best friend and partner is Daniel, the graphics engine programmer (and his work alone amounts to about as much as all the different things I'm doing, for the most part).

We also have two modelers and a concept artist at the moment.

One of the things a lot of people complained about with Demo 5 was that it was hard to play: the AI was too tough, the controls were difficult to understand, and it was a tad rigid. We also got complaints about the animations being too fast to really enjoy. I actually agree with a lot of these points, so I've gone back and touched up a few of the animations and even made some new ones to make the combat crazier than it already is.

So, what's my new philosophy on directing the game?

First of all, I need to make the controls less of a hassle to use. I want anyone to be able to come in and mash and be successful at playing - but I also want a high skill ceiling for anyone who actually dedicates themselves to learning the system.

This will be achieved with dial-based combos used in combination with Devil May Cry's more "command-based" approach to combat. That means certain attacks will be chained together, and others will always be available. So, for example, you can do your new ground-based combos that lead you into the air after about the fourth animation, or you can activate your "high-time" move about two attacks into that ground combo.
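In sketch form, resolving an input against a dial combo plus a command move might look like the Java below. The move names, inputs and the two-hits-in requirement are lifted from the example above; everything else (the table, the step counter) is made up for illustration and isn't our actual input code:

```java
import java.util.List;

// Sketch: dial combos (entered in order) alongside a command move
// that fires once it's contextually available.
public class ComboResolver {
    // Hypothetical ground combo: three punches, then a launcher.
    static final List<String> GROUND_COMBO = List.of("P", "P", "P", "P-up");
    static final String HIGH_TIME_INPUT = "hold-back+P"; // command-style input

    // Resolve the latest input, given how far into the dial combo we are.
    static String resolve(String input, int comboStep) {
        // Command move: available from two attacks into the ground combo.
        if (input.equals(HIGH_TIME_INPUT) && comboStep >= 2)
            return "high-time";
        // Dial combo: the next attack only triggers in sequence.
        if (comboStep < GROUND_COMBO.size() && input.equals(GROUND_COMBO.get(comboStep)))
            return "combo-hit-" + (comboStep + 1);
        return "none";
    }

    public static void main(String[] args) {
        System.out.println(resolve("P", 0));           // combo-hit-1: mashing works
        System.out.println(resolve("hold-back+P", 2)); // high-time: command move kicks in
        System.out.println(resolve("hold-back+P", 0)); // none: not available yet
    }
}
```

Mashers just walk down the dial combo; dedicated players weave command moves in at the right steps, which is where the skill ceiling comes from.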

As for the movement, a lot of people also complained that the controls were too difficult and that the game, in general, was too fast. I won't be changing MORS' running speed, but I will make the launch slower: his speed will start off lower and build back up to the max speed that was already in place. That way he doesn't just explode off and make it hard for the player to adjust.

We Shall Wake is going to be an action game with exploration-aspects; each floor is generated with its own communities and events, and you'll be free to explore and watch the various AI personalities interact within these environments. There will be various factions at war, and it'll be up to you to decide which side ultimately wins. I want you to feel the combat, I want you to feel like an unstoppable force of nature unleashing its wrath - but I also want there to be a challenging aspect to the game. I'll just have to keep experimenting until I find this balance - but I will find this balance.

I'm not looking to do anything revolutionary with WSW, I just want to make a fun game that anyone can pick up and have fun with after a hard day at work. And that's all there really is to it.

Lastly, I've gotten some E-Mails and comments asking if we have a kickstarter. We do not plan to make anything of the sort, but I'm going to look into alternative ways for anyone who's interested to financially assist the team. I don't believe in free handouts, so I'll have to think of a way to compensate you guys for anything you give us.

The team has finally decided on the route we're taking. We have a lot of things planned, and thanks to some minor publicity from the TwoBestfriendsCast, we were able to expand our team. So, thanks guys, it really meant a lot to us.

We'll be taking a faster, simplified approach to the combat. Realize that the simplification isn't necessarily bad, but also realize the current system is very complicated. I'll be implementing more dial-combos so you button-mashers out there can have just as much fun with the combat engine as the control freaks can.

Weekly updates will resume on May 31st. We'll be having a lot to show, and demo 6 is going to blow you away.

Daniel and I sat down this afternoon to try and kill that AI performance bug I mentioned in our last post - and I'm glad to report we fixed it. The game should run smoothly on a variety of hardware now.

You see, we have pretty powerful AI that was taxing the performance of the game - it was dropping our frame rate by like a factor of ten on MY computer (which is unacceptable!), so we had to sit down and work on it. We set up a control scenario of 500 AIs in one room and ran the game.

Here is the initial data. As you can see, it takes almost 7 milliseconds to update the entities (which are all of the living creatures in the game), which is pretty bad considering our "budget" is around 2 milliseconds. The total update time was 8 milliseconds.

We dropped it by more than half with some quick work - and now it's pretty acceptable. We did this by decreasing the amount of raycasting the AI was doing, and by optimizing their sensory systems and the physics engine to accommodate their large sensory radii (how far they can see and hear).

However, this still wasn't acceptable - we were still going over our 2 millisecond budget. So we decided to prioritize certain thought processes in the AI's brain. For instance, they should only consider sensory stimuli every tenth of a second, instead of 512 times per second.
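The classic way to do that is a simple per-AI time accumulator, so the expensive sensory pass runs at 10 Hz no matter how fast the physics ticks. A sketch of the idea (not our actual brain code):

```java
// Sketch: throttling an AI's sensory pass to every tenth of a second,
// even when the simulation ticks at 512 Hz.
public class SensoryThrottle {
    static final double SENSE_INTERVAL = 0.1; // seconds between sensory passes
    double accumulator = 0.0;

    // Called every physics tick (e.g. 512 Hz -> dt ~ 0.00195 s).
    // Returns true only when it's time to run the expensive sensory pass.
    boolean shouldSense(double dt) {
        accumulator += dt;
        if (accumulator >= SENSE_INTERVAL) {
            accumulator -= SENSE_INTERVAL; // carry the remainder, don't drift
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        SensoryThrottle t = new SensoryThrottle();
        int senses = 0;
        double dt = 1.0 / 512.0;
        for (int i = 0; i < 512; i++)      // simulate one second of ticks
            if (t.shouldSense(dt)) senses++;
        System.out.println(senses);        // ~10 sensory passes instead of 512
    }
}
```

Spread the accumulators' starting offsets across the AI population and the 10 Hz passes stagger themselves, so 500 AIs never all raycast on the same tick.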

We're releasing DEMO 5 on Wednesday. This is why there have been no Saturday updates for about two weeks now - we've been preparing.

I'll be releasing the game and a new demo video on YouTube, so get ready! It's going to be fun. But we need to talk about some things first.

This demo only represents the combat engine and some basic AI - along with parts of the dungeon generator in its infancy. We had to do a lot of engine restructuring and overhauling since our last admittedly crap demo. Of course, we've made a ton of under the hood progress - but you likely won't notice a lot of that.

The purpose of us releasing Demo 5 is for our friends on /cgg/, and those who have been watching us since the start. We've gotten our stuff together - and redone almost everything. Animations, lighting, loading systems - and it's pretty nicely optimized. This is also giving us a chance to get some feedback on combat and hear your ideas, along with getting some bug reports so we can fix anything that goes wrong.

To send a bug report, simply email us the log.txt that is output beside WSW.jar at weshallwake@outlook.com.

One problem I've noticed, however, is that the AI is taking a huge toll on performance - so FPS may not be perfect. I'll have this fixed next week, but I likely won't have time to fix it before Wednesday. The reason the AI does this is that they use a lot of raycasting for the occipital lobes in their "brains," so that they can see and react to external stimuli. As cool as this is, I may need to simplify it due to the bad performance (it can drop the FPS by a factor of ten!)

I'll try my best to fix it, but with school taking up a lot of my time recently, I can't promise anything.

We'll be including a profiler for the graphics engine, so while playing you can press P to output data to the console (I'll include a batch file for opening the game with the console as well). You'll be able to see just how powerful Insomnia is in a textual fashion - and how fast it is. We also have a timestep button mapped to 2, so you can slow down the game and see the motion blur and all of that working - it's really cool.

I made another video in that cool new WebM format if you're interested - really crisp, crystal-clear quality at 60 FPS. It's only 18 megabytes, too, so it won't take long to download.

Here's the link - just be sure you actually download it and open it in Chrome or Windows Media Player so that you don't get stuck with the YouTube player (which bottlenecks the crap out of the video).

What have we done this week? Quite a bit.

I added some new moves to the game (a basic uppercut that extends the basic hand combo), along with some new flips and abilities for Auto-Motion. I also finished making the Auto-Slash look more Casshern-like (which was my original goal). There's a webM below showing its usage (download it here, since the YouTube player butchers the quality), and you can see the scene from Casshern I pulled it from here. Of course, I still need the crazy effects and sounds for it to feel complete, but I'm getting there.

I've also added combat AI to the game, which can be seen in the larger video I posted above. It's actually really basic and needs some tuning - one of the major problems being that the AI is too hard: I can generally only take out about three of them before getting killed, which really isn't acceptable for normal mobs.

I'll be keeping the AI and intensifying its difficulty for a few key boss fights against MORS' brother, however.

Along with all of this, Daniel and I have been working on an editor for the game engine that will provide a few tools for working with Insomnia when we release it as an open-source project. As of now, this editor is called the Circadian Toybox.

The official deadline for the demo is April 30th. I will be releasing it regardless of how complete it is by then. So it'll probably be full of bugs - but that's okay, that's the point of demos. You guys can break the game and email crash logs to us at weshallwake@outlook.com so that we can patch them in later releases.

Hey guys, just a heads up - I'm going to be gone until Tuesday because of a trip. So I'm posting this early, since I decided to cram a week's worth of progress into a day - I'd hate to let you all down. Plus I had a bad day, and bad days are excellent for programming. Your productivity soars.

Anyhow, here's what I did today:

- Finished refurbishing and re-implementing our old physics engine
- Overhauled our engines to be more organized, and added the WSW2D engine as a sister engine for WSW3D
- Did some tweaking and general bug-fixing

This may not seem like much, but replacing a physics engine is no small task. We spent the last weekend getting our old engine ready for implementation, and I've only just now gotten the chance to finish overhauling the game.

Why did we do this? It's actually kind of a funny story. You see, when we released the first demo back in September of last year, we had been using our own custom physics engine. Daniel had a more experienced friend tell him that making your own engine is never a good idea, and suggested we use Bullet.

Daniel presented some facts and opinions, and eventually I gave in and we implemented it. At the time, replacing our entire engine with it seemed like a good idea - we were going to need a more sophisticated physics engine to support features like physics-based particles and ragdolls.

Once it was implemented, I had a lot of trouble using it for gameplay - because as it turns out, Bullet limited us a lot when it came to extreme things like going 40 miles per hour on a 500 x 500 map. I'd often fly straight through walls or out of the map, and the only way to remedy this was to jack up the UPS, or Updates Per Second. It ended up at 256 before it was stable, and by then we had decided that was not going to work - it was too intensive.
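The tunneling itself is easy to see with some quick arithmetic - at roughly 40 mph, a body covers enough ground per discrete step to skip a thin wall entirely. The wall thickness and speed below are hypothetical, just to show the scale of the problem:

```java
// Why fast objects tunnel through walls in a discrete physics step.
// Numbers are illustrative, not measured from WSW.
public class Tunneling {

    // Distance traveled in a single physics update.
    static double stepDistance(double speedMetersPerSecond, int updatesPerSecond) {
        return speedMetersPerSecond / updatesPerSecond;
    }

    public static void main(String[] args) {
        double speed = 17.9; // ~40 mph in meters per second
        // At 60 UPS the body moves ~0.30 m per step - more than a
        // thin wall's thickness, so it can land on the far side
        // without a collision ever being detected.
        System.out.println(stepDistance(speed, 60));
        // At 256 UPS it's ~0.07 m per step - stable, but ~4x the CPU work.
        System.out.println(stepDistance(speed, 256));
    }
}
```

For what it's worth, this is the problem continuous collision detection (CCD) exists to solve - sweeping the body along its path instead of teleporting it step to step - but brute-forcing the update rate, like we did, attacks the same math from the other side.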

So we both agreed: "If you want it done right, you have to do it yourself" - which, hilariously enough, has essentially been our motto from the get-go. We did the same thing with the original graphics engine because jMonkey is terrible, and again with the sound engine because Paul's 3D sound engine wasn't too good either in terms of just working. Respectively, WSW3D and WSWSound are probably Daniel's and my magnum opuses in terms of programming anyway, so it all turned out pretty awesome.

Anyway, we spent last weekend refurbishing our old physics engine, dusting it off, and putting it back in. We're turning the physics into a hybrid engine: Bullet will handle less critical tasks like ragdolls and particles, while ours does the heavy lifting - because unlike Bullet, our engine was built for high-speed action craziness.

So lastly, what does this mean for the game? Well, here are some features I'm going to have the pleasure of possibly adding and re-implementing:

I got really caught up in programming and completely forgot to update the blog for you fellas.

But man, I made a lot of progress this week. First of all, combat is going great. In fact, I didn't expect to be this far into development until like March 20th. So that means more time for refining and bug-fixing. What did I add?

Well, for one, I've filled out the combat quite a bit. Check out this GIF - you'll see a lot of new moves have been added. I've streamlined combat so that it flows more naturally - for example, you can run and slide, then pop back up into a running punch.

Note, by streamline, I don't mean simplify. Some of these moves may be hard to pull off well, in fact, I haven't mastered my own systems yet. But it's all really cool, and I think there are a lot of possibilities here for brutally murdering enemies.

I also added a new system that I've had planned for a while - this is called Auto-Motion Combat Context. I may have mentioned it on here before, but this context system allows you to shift the way Auto-Motion functions in combat situations.

For example, normally tapping the Auto-Motion key allows you to dash and do fancy flips (picture above) - but when you press the combat context shift key, it changes into a combat-based context which makes your dashes offensive.

These dashes are called Auto-Slashes, mainly because the character teleports in a slashing motion, doing minor damage to the enemy. To counter how powerful it can be, it uses your energy - the same resource that powers Bio-Mechulus, a feature I'll discuss more in the future.
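If it helps to picture the context system, here's a tiny hypothetical sketch of the idea - my own names and numbers, not the real WSW code. One dash key, two behaviors, with Auto-Slash draining the shared energy pool:

```java
// Hypothetical sketch of the Auto-Motion combat context shift.
// Class names and the 10-energy cost are made up for illustration.
public class AutoMotion {
    enum Context { TRAVERSAL, COMBAT }

    Context context = Context.TRAVERSAL;
    double energy = 100.0; // same pool that would power Bio-Mechulus

    // The "combat context shift key" flips which context is active.
    void shiftContext() {
        context = (context == Context.TRAVERSAL) ? Context.COMBAT : Context.TRAVERSAL;
    }

    // The same dash key behaves differently depending on context.
    String dash() {
        if (context == Context.COMBAT && energy >= 10.0) {
            energy -= 10.0;      // Auto-Slash spends energy
            return "auto-slash"; // offensive teleport-dash
        }
        return "dash";           // plain traversal dash / flip
    }
}
```

The nice property of a context shift like this is that it adds depth without adding buttons - the same input vocabulary gets reinterpreted rather than extended.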

While in Auto-Combat Context, you also have access to Grapple-Based Combat - which, as it turns out, I'll actually get to implement a lot of for the combat demo in April. Essentially, you'll be able to pull grabbed enemies into your gravitational field - this will be your Vitality Shield. It'll gradually suck their health away, and you'll also be able to use their bodies as platforms for double jumping, or fire them off like bullets. It was made as a nod to Vergil's floating swords in DMC3. This feature isn't implemented yet, so no GIFs - but it will be. If it turns out too buggy near release time, it won't make it into the demo, however.

Lastly, in regards to the above GIFs and the demo in particular - note that a lot of the special effects haven't been implemented yet. We're missing the Auto-Slash effect, as well as a lot of the combat-related effects like sparks and flinches. A lot of people have complained about "the combat not being weighted enough" - this is actually because the AI can't get up off the ground when knocked down (the AI is disabled, and will remain disabled until AI Month during the summer) - so that "seizure" animation that plays when they're hit is essentially temporary. The final AI will not be so easily knocked down - in fact, you'll probably spend more time on the ground than they will.

For the demo in April, AI systems will be turned off, and they won't react to anything you do. This is because we've focused on player controls for the last two months, and that's how I'd like to keep it.