
A flowfield is a data structure used to amortize pathfinding calculations over time. This is useful when you are dealing with a lot of units, as most pathfinding algorithms are quite expensive. In our case we plan to have about 1000 units in play at one time, which means that calculation time per unit is vital for performance.

The algorithm works by generating a node tree for the entire map and then making one instance of it for each separate destination. The instance is usually calculated with Dijkstra’s algorithm down to each node. After this, whenever a unit wants to move towards its goal, it simply samples its current position from the node tree. This means that flowfields trade memory for speed. Depending on the specific implementation and sampling routine, the worst case in terms of cycles is in the region of a few cache misses per unit. The picture above shows blue lines from the centre of each node towards the entry point of the next one; walking in the respective direction gets you to the target at the top.
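With uniform movement costs, the Dijkstra pass degenerates to a breadth-first flood fill from the destination. The sketch below illustrates the grid variant of this idea; the names and layout are illustrative, not our engine’s actual code.

```cpp
#include <queue>
#include <vector>

// Illustrative sketch: flood-fill a grid from the goal cell so that every
// walkable cell knows which neighbouring cell to head towards.
struct FlowField {
    int width, height;
    std::vector<int> toGoal;  // per cell: index of the next cell on the path (-1 = unreached)
};

FlowField buildFlowField(int width, int height,
                         const std::vector<bool>& walkable, int goal) {
    FlowField field{width, height, std::vector<int>(width * height, -1)};
    std::vector<int> dist(width * height, -1);
    std::queue<int> open;
    dist[goal] = 0;
    field.toGoal[goal] = goal;
    open.push(goal);
    while (!open.empty()) {
        int cell = open.front(); open.pop();
        int x = cell % width, y = cell / width;
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
            int n = ny * width + nx;
            if (!walkable[n] || dist[n] != -1) continue;
            dist[n] = dist[cell] + 1;
            field.toGoal[n] = cell;  // a unit standing on n heads towards cell
            open.push(n);
        }
    }
    return field;
}
```

Once the field is built, each unit only does a single array lookup per frame (`field.toGoal[cellOfUnit]`), which is where the speed-for-memory trade comes from.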

We’ve tested two types of implementations: grid-based and node-based. The grid-based tree is self-explanatory and is what we used in the prototype. The node-based solution uses a number of quads to build an arbitrary structure of traversable surface, with a set of tables storing which nodes connect to which, their sizes and so on. While a grid-based solution benefits from simpler sampling and a good relationship to granularity (finer grids give better behaviour), it costs proportionally more memory. A node-based algorithm may need some additional functionality to produce natural behaviour, so that a unit does not just walk to the centre of the next node. We refer to this as ‘sampling’, and it is covered in a bit more detail below.

Our sampling algorithm builds on two cases. In the first, in green, the unit’s position is projected onto the exit edge of the current node. If the projection lands on the edge (between the red dotted lines), the unit approaches the exit edge of the next node. The second case, in red, is when the unit’s projection falls outside the edge. In this situation the unit moves towards a point on the exit edge of the current node. This removes some of the sliding along walls and solves most of the special cases in corners. In yellow is the currently observed movement when traversing a large crossway, including a small turn radius.
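The core of both cases is projecting the unit onto the exit edge and clamping. A simplified sketch, with illustrative names and a minimal 2D vector type of our own:

```cpp
#include <algorithm>

struct Vec2 { float x, y; };

// Returns the point on the exit edge (segment a-b) that the unit steers towards.
// If the projection lands on the edge this is the "green" case; if it falls
// outside, clamping to the nearest end gives a point on the edge itself,
// approximating the "red" case described above.
Vec2 sampleExitPoint(Vec2 unit, Vec2 a, Vec2 b) {
    Vec2 ab{b.x - a.x, b.y - a.y};
    float len2 = ab.x * ab.x + ab.y * ab.y;
    // Scalar projection of the unit position onto the edge, as a fraction t.
    float t = ((unit.x - a.x) * ab.x + (unit.y - a.y) * ab.y) / len2;
    // Clamp t into [0,1]; a real implementation might inset a small margin
    // here so units do not hug the corners.
    t = std::clamp(t, 0.0f, 1.0f);
    return {a.x + t * ab.x, a.y + t * ab.y};
}
```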

The current implementation takes <0.1 ms to assign directions for 1000 units using 650 nodes, and requires about 0.4 ms to generate a new instance. It is worth noting that the engine implementation covers a map that is a lot larger than the prototype, which used a grid solution with 625 nodes. Moreover, there is most likely a performance trend here: a grid-based solution will probably reach the worst case for cache utilization a lot faster than the node-based one, especially for groups that spend a lot of time gathered together.

While the numbers above strongly favour the node solution, the problem with sampling remains. The current implementation uses a number of positioning checks and assigns a direction accordingly.

Furthermore, we also use the node tree together with A* to handle individual pathfinding for the much smaller number of police units, so that they can chase individual units, stay in formation, etc.

Future improvements to the algorithm would involve using splines to smooth out the path, along with ray casts to find directly traversable paths. We are currently implementing a system for blocking certain nodes, which might be suitable for a future abstraction allowing things like buildings collapsing over streets: runtime alterations of the mesh.

As we wanted our graphics engine to be completely separate from the rest of the code, we needed a way to ensure order-independent draw calls. To do this, we queue every draw call made to the graphics engine and store it for later. The actual drawing then happens at one point during the frame, as opposed to at the time the draw call is made.

The idea is to pack every piece of information needed for a draw call into a single key.

In our case we use a 64-bit key to describe object type, mesh, material, depth etc. Other information, such as position, scale, and rotation, will be sent as a pointer along with the key to use with the actual rendering. We are using this key/value-interface for all draw calls to our graphics, including lights.

The values are packed in order of significance. If we want to sort by mesh before material, we put the mesh bits in more significant positions than the material bits. By building the key this way, a simple sorting algorithm can be used to order all keys.

For example, imagine we want to sort our draw calls by mesh, and for each mesh we want to draw all instances with the same material together to minimize state changes. Sorting the bitmask keys before rendering will give us the result we want.
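A minimal sketch of such a key. The field widths and layout here are assumptions for illustration, not our engine’s exact format; the point is only that mesh occupies more significant bits than material, so a plain sort yields the grouping we want:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

using DrawKey = std::uint64_t;

// Pack the sort-relevant state into one 64-bit key. Field widths are made up:
// mesh id in bits 40..63, material id in bits 24..39, quantized depth below.
DrawKey makeKey(std::uint32_t mesh, std::uint32_t material, std::uint32_t depth) {
    return (DrawKey(mesh)     << 40)
         | (DrawKey(material) << 24)
         |  DrawKey(depth);
}

// Because significance encodes priority, an ordinary sort produces the
// mesh-then-material ordering with no renderer logic at all.
void sortDrawCalls(std::vector<DrawKey>& keys) {
    std::sort(keys.begin(), keys.end());
}
```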

So why did we decide to use this method of sending draw calls?

Using this method allows us to have a relatively “dumb” renderer, i.e., a renderer with minimal logic and branching, making it both faster and easier to maintain. It also works well with a data-oriented system.

As we started to implement bitmask sorting, we were surprised by how easy the system was to extend. For example, when we implemented instancing, all that had to be done was pretty much to skip a few of the draw calls in the sorted list. The same applied to lighting, which was easily implemented using the same interface.

To save on development time we decided to make our editor as a plugin for a 3D modeling application. This prevents us from having to reinvent the wheel, as we receive most of the basic functionality expected from a level editor for free. We have people familiar with 3DS Max, Maya and Blender, but in the end we decided to go with Blender.

Here is a video explaining some of the features of the level editor.

There are several reasons that we decided to go with Blender.

It’s free and open source, so if we want to continue using the editor after school is over, we won’t have to invest in expensive software.

It works on both Windows and Linux, which is pretty important since 3 out of 11 of us develop on Linux.

It has hotkey presets for Maya and 3DS Max, so our artists, who all come from different software, won’t have to relearn all the hotkeys.

Even before we began development of Kravall, we knew the game was going to be resource intensive, primarily due to AI systems relying heavily on calculation-intensive algorithms. One way we mitigate this is by moving some of the heaviest algorithms over to the GPU; another is by keeping a performance-focused design philosophy for the primary code paths of the game’s recurring (per-frame) calculations.

In the architecture of Kravall’s engine we follow a design philosophy called DOD (Data-Oriented Design). The primary idea of this philosophy is to minimize memory access delays by designing the program to have memory access patterns that maximize utilization of the CPU cache. This is mainly achieved by allocating memory linearly and tightly packed, having the program access it linearly, minimizing cache misses and exploiting modern processors’ automatic memory pre-fetching mechanisms.

A practical realization of this philosophy, and the primary place this thinking shows, is the implementation of our entity-component based framework, heavily inspired by the Artemis Entity System Framework.

The entity-component framework is the central core for all logic driving the game. An entity is a generic object in the world consisting of one or many instances of different components, each of which contains a set of data. This setup of components is the identity of the entity and defines how the game engine treats it. As an example: the engine might have a WorldPositionComponent and a VelocityComponent. If we create a new entity and give it an instance of each of these components, the engine will assume, simply from the topology of the entity, that the velocity values in the VelocityComponent should be applied to the WorldPositionComponent’s world position data each frame, making the entity move in world space (which is then visualized when the rendering system uses the WorldPositionComponent together with an unmentioned GraphicsComponent). This form of programming allows for some interesting and dynamic combinations of components if they are well designed (e.g. add a sound component to the entity and it will give off a sound from its position).

The manipulation of the components occurs in what we call systems. Systems manipulate data and move it between the components within (and sometimes between) entities. Each system defines an aspect, a list of the components it is interested in, essentially subscribing to all entities that match the aspect and performing its specific calculation on them in sequential order. For the previous example we would define a system, run each frame, that performs the VelocityComponent-to-WorldPositionComponent calculation, moving the entity.
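A stripped-down sketch of the pattern, with the two components from the example stored in packed per-type arrays and a movement system walking them linearly. The layout and names are illustrative, not our actual framework:

```cpp
#include <cstddef>
#include <vector>

struct WorldPositionComponent { float x, y, z; };
struct VelocityComponent     { float dx, dy, dz; };

// One tightly packed, contiguous block per component type; here, for
// simplicity, entity i owns index i in each array.
struct World {
    std::vector<WorldPositionComponent> positions;
    std::vector<VelocityComponent>      velocities;
};

// The movement system: in a full framework it would subscribe to the aspect
// {WorldPositionComponent, VelocityComponent}; here every entity matches.
// The linear walk over both arrays is what makes this cache friendly.
void movementSystem(World& world, float dt) {
    for (std::size_t i = 0; i < world.positions.size(); ++i) {
        world.positions[i].x += world.velocities[i].dx * dt;
        world.positions[i].y += world.velocities[i].dy * dt;
        world.positions[i].z += world.velocities[i].dz * dt;
    }
}
```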

The reason an entity-component framework works so well in combination with Data-Oriented Design is the assumption that one component instance will usually be accessed and processed close in time to other component instances of the same type. This assumption comes from the way systems are designed: each one manipulates only a single setup of components with some specific transformation. We exploit this by allocating all component instance data sequentially in large memory blocks, one block for each type of component.

Another boost comes from running the same code block in quick succession and not jumping too far around the program’s executable memory, which can also cause cache misses if the program is exceedingly large. In contrast, large object-oriented designs can do very poorly when execution chains become long and inheritance chains are deep, possibly causing cache misses that add many unnecessary CPU cycles spent waiting for memory.

In practice we allocate a large block of data for each type of component at the start of the program (reallocating it if we run out of memory), and then assign parts of these memory blocks to entities as they are created and given a component setup, minimizing allocation at run time. As a result, creating, destroying and reclaiming entities and component data requires no system memory allocation. Depending on the creation and destruction patterns of entities, these memory blocks might become fragmented and the execution order might suffer, which can create a performance loss. The scale of this problem under common usage patterns is unknown, so it is not yet addressed by our implementation, but it is of interest for future work.
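A sketch of such a per-type pool with slot recycling. The class and its interface are our invention for illustration, not the engine’s code:

```cpp
#include <cstddef>
#include <vector>

// Reserves a block up front and recycles released slots through a free list,
// so acquiring and releasing component data performs no system allocation
// (until the initial capacity is exceeded).
template <typename Component>
class ComponentPool {
public:
    explicit ComponentPool(std::size_t capacity) { data_.reserve(capacity); }

    std::size_t acquire() {
        if (!freeList_.empty()) {            // reuse a reclaimed slot first
            std::size_t slot = freeList_.back();
            freeList_.pop_back();
            return slot;
        }
        data_.emplace_back();                // within reserved capacity: no allocation
        return data_.size() - 1;
    }

    void release(std::size_t slot) { freeList_.push_back(slot); }
    Component& operator[](std::size_t slot) { return data_[slot]; }

private:
    std::vector<Component> data_;            // one contiguous block per component type
    std::vector<std::size_t> freeList_;      // slots reclaimed from destroyed entities
};
```

Note the fragmentation caveat from above applies here too: heavy acquire/release churn leaves live components scattered through the block, degrading the linear access pattern the systems rely on.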

Creating textures for a model can either be ridiculously easy or devastatingly difficult, depending on the complexity of the model, its UV mapping and the available textures. For this project I was really interested in creating our own texture assets, meaning photographing or painting the textures ourselves. And since I’m not much of a 2D artist, I went down the photography road.

We had to establish the visual design of the game and a general idea of which models would be present in-game in order to decide which textures would be needed. After we had a list of various game objects and an idea of the architecture and different types of buildings in the game environments, I could proceed to jot down a list of textures that would be necessary. Then I simply took half a day of walking around town photographing the materials on my list. For example: brick wall, asphalt, ventilation system, window, door etc.

The camera I’ve been using is a Nikon D3100, a pretty standard DSLR with a basic 55 mm lens. This is one of the photographs I collected: a simple grid I found on the window of a storage room door.

So now we’ve got the stock photo. What next?

Camera lenses are curved, and up close they introduce barrel distortion: straight lines bow outwards in the photograph, as you can see in the image above. It’s kind of bulging. In order to make a decent texture out of it, we need to process it quite a bit in an image editor. In this case I’m using Photoshop, whose Lens Correction filter (found under the Filter menu) fixes this issue and even has predefined settings for my specific camera.

Here’s the result after using Lens Correction. You can probably spot the difference.

Fixing the white balance

I took this image on a cloudy day, and even though I tried to set the white balance in the camera, the image still came out quite pale and bluish. To fix this I follow the steps explained in this tutorial, basically using a Levels adjustment layer and eye-dropping the dark and bright portions of the image.

Result.

Creating a repeating texture

In the case of this particular texture, lens correction might not be absolutely necessary but I still use it just in case. The reason it might not be necessary is that I will now cut out a small portion of the image and create a repeating pattern to make it into a usable texture. I then remove the spaces between the grid loops and add a green background color just to give you the idea.

This particular texture is supposed to be the mesh of a fence, so I have to shrink it a bit horizontally to make the holes in the mesh uniform. After this I simply use the Offset filter to make the texture seamless. This tutorial describes the basics of the technique, but it’s really up to you to fix the details depending on the complexity of the image. I used a combination of the Healing Brush tool and the Eraser, which fixed this texture up nicely.

Here’s the final texture map.

And here is the texture applied on a 3D fence in Autodesk Maya.

Again, it’s not always the same for every texture; this particular one was very easy to create. You always find some wrinkles to iron out in ways you have to invent in the moment, but I hope this post provided some insight into how we create some of our textures.

Last Friday our teachers swung by to pay us a visit and check up on our project. Actually, we invited them. We had prepared a demo of the project in its current state and thought it was time to show off our work. Of course, the game is currently pre-alpha and not very heavy on the graphics side. The main point was to demo the group behaviour of the rioters and how two groups interact when crossing each other’s path, laid out on a very simple map.

We got the feeling we made a good impression. Our programming teacher, Stefan, was mainly concerned with the update delay of the rioters’ potential fields. The potential fields are like invisible magnetic fields that make rioters cling together in a group or move away from police units and other threats. These are updated in real time with a delay of 14 ms as of now. This is an optimization issue that we will deal with in time.

We also received a bunch of questions and suggestions from our project managing teacher, Torbjörn. As always he was very interested in and inquisitive about our organisation within the group. He raised the important point that we should focus on locking down ideas and implementations as soon as we’re able to, and linger as little as possible on choices between technical baselines (for example, choosing between two types of animation techniques).

We have quite a lot of places containing ideas and work tasks for the game: our wiki, Scrum board, idea board, user stories, agreements within specific group responsibilities and individual visions of the game. Even though we already have a rather tight workflow, Torbjörn wanted us to streamline it further and narrow everything down. It’s quite refreshing to get input like that, and I think he pushed us in the right direction.

We received the question of how we’re going to make the rioters look varied and unique, which is always an issue in games containing large groups of individuals. We don’t want rioters looking like clones. Our solution for this is to have a base set of different human models, textures and accessories (for rioters to wear or hold on to), which we will tint with various colors to create variation in their visuals.

In general the teachers seemed to be happy with our progress, and we’re happy too. We’re on track and organized. Basic gameplay interaction is due for implementation in the weeks to come on the programming side, and on the art side we’re going to start creating content such as level design, level concept art, 3D models of entities in the game and mocapping. We’ve come far, but there’s still a long way to go.

There was never a choice whether to use agile methods for our project, as that is dictated by the course the game is being developed for. That said, we probably would have used them given the choice. Agile development is a good fit for many projects, and especially for game projects, in my opinion. The core idea is adaptation: the project is developed in iterations where the path towards a finished product can change direction as new requirements and requests are catered to. This is well suited for game development, as it allows us to easily add features, adapt to results from playtesting and quickly change goals if the game idea needs to be altered.

Our method

The process is mainly Scrum. First we create user stories for our project. A user story is typically written from the perspective of a player, a developer, or another role significant to the project; it describes a feature or technique that a person in that role would want to see in the game. We have divided our project into smaller parts called sprints, each spanning two weeks. Each sprint has a goal with associated user stories and should work towards the very rough monthly milestones we have set up to make sure we make enough progress to reach our final goal. This goal is decided during the sprint planning meeting at the beginning of a sprint. The sprint’s goal is then broken up into smaller tasks (called Product Backlog Items or PBIs), which are estimated and put on our Scrum board for everybody to see. Our team has a velocity, which states how much work we can do each sprint, and the total estimation for the tasks should not exceed this velocity.

To find the velocity, we start by establishing its unit. Scrum suggests choosing a very basic task, giving it an estimation (usually one unit), and comparing all future tasks to it: a task that takes twice as long as the basic task is two units, and so on. Then we estimate how many basic tasks we could do as a group in the allotted time, and that is the team’s velocity. As this is all very approximate, more so the less experience you have with this method or with working as a group, the velocity needs to be adjusted after each sprint. A good time to do this is during the retrospective, where a more realistic value can be chosen as part of the evaluation. More on the retrospective below.

We also have the daily Scrum: a timeboxed, 15-minute meeting at the start of every work day where each team member accounts for what task(s) they worked on the previous day, what they are going to continue working on and what problems they might encounter during the day. This is a very good way of making sure all team members have an appreciation of what everyone else is doing and what state the project is in. It also helps us discover and solve problems quicker. At the end of each sprint we have a retrospective meeting, as the Scrum process suggests, where we evaluate the sprint: what has worked well and should be continued, what did not work and should be changed, and what new approaches we want to try.

Lastly, we have an appointed Scrum master that for learning purposes is cycling through the team members interested in the assignment. Currently, I am the Scrum master and as such, my duties include facilitating meetings and solving problems that may slow team members down.

If we find bugs, these are put on the Scrum board immediately and discussed during the next daily Scrum to determine whether they are pressing and need to be resolved promptly, or can wait until the next sprint, as well as who takes care of them. This is in line with the Scrum policy that tasks are not finished until they are complete and bug free (as far as is possible to determine).

Apart from what we take with us from these agile processes, we also have an idea board for adding any ideas whenever you think of them. These ideas are then considered at the sprint planning meeting and either discarded, left for the future by going back on the board or incorporated into the process in the form of a user story, PBI or addition to the Game Design Document (GDD).

Disadvantages

We have had occasions where some team members have been stuck waiting for others to finish their tasks. This has left us less productive than we could have been, although it is not inherently the fault of Scrum: some dependencies are inevitable, especially at the beginning of a project, and some are due to our inexperience in planning. Such problems will diminish as we get better at planning.

Advantages

Several advantages have been mentioned above, including the easy addition and removal of features, the ability to incorporate changes based on playtesting, all team members being up to speed with what everybody else is doing, and the use of an iterative process. If we come up with new ideas we would really love to see in the game, they are easy to add, and conversely, if we find that time is running out, we get the chance to cut features. The same goes for the results of playtesting.

That all team members know what everybody else is doing is very important and minimises misunderstandings, duplicated work and error resolving. The daily Scrum also propels communication within the group, an extra bonus for a newly formed group.

The iterative process allows us to change the direction of our project if necessary, scrap features and techniques that do not work in the desired way and even rollback entire sprints. Thanks to the retrospective meetings at the end of a sprint, we continuously improve the work process to make it more effective and less error prone.

Further Reading

If you are interested in the Scrum process and want to know more, I recommend the book Agile Game Development with Scrum by Clinton Keith (ISBN 978-0-321-61852-8).

Edit: Removed an erroneous description of our method where Kanban was mentioned.

After the thumbnails for a specific scene or object have been created, we verify them with our graphics lead, Lukas. If necessary, we discuss the various pictures in the thumbnail with Tim, our game designer, to decide which concept would work in the game, which parts we should continue working on, etc. When we’ve decided which concepts seem the most interesting and suitable, we finish the verification step and continue by making the actual concept art.

The biggest issue for me was minimizing the time spent actually drawing the concept art, as I am not an artist in my own right. Finding a work process that would work for me was quite a challenge, and each of the concept images made so far has taken between 2 and 4 hours.

Factory designed for a certain level in the game

I’ve been working mainly with the environments and architecture of the city that is the game world. The city, which is typically cyberpunkish, is divided into three vertically built layers: the top plane for the upper class citizens, the middle plane for trade, markets and middle class citizens, and the bottom plane for the working class and poor people, filled with slums, industries, smoke and dirt. The bottom layer is the part of the city we’re focusing on at this moment of the development.

The slum environments will look somewhat like the industrialism of the early 1800s: brick buildings, high chimneys spewing thick smoke, tall windows and dirty streets. I photomanipulated the picture below, adding some buildings and factories, to visualize how the people would have had to build their homes in chunks right next to the industries polluting their environment because of the lack of space, and to portray the desperation of the people living there.

Concept art for residential area.

For my other artwork, I created 3D scenes in Maya and used them as references for keeping the perspective correct, as can be seen in the concept picture below displaying a market street in the slums. I have used various other references from image-googling keywords like “slums”, “markets” and “stand” just to get a sense of the details needed to portray the scene and the looks of a cyberpunk market.

One obstacle in working with cyberpunk is the amount of detail that needs to be put into the environment. The image I have of a cyberpunk slum is that everything has been built on top of each other, vertically, due to the lack of space for new structures. It’s a challenge to portray this in any authentic way.

Concept art of market street.

The next step in the graphics department will be to create more general concept art for the specific objects in the game levels. We will probably write another post about that in the future.

– Kim Jonsson (technical artist)

—————–

Like Kim said, when the lead technical artist and the game designer have agreed upon a thumbnail, the process of making the concept art begins. You probably have a certain idea of what concept art means, but for us it could mean a bigger, more worked-on image. It could involve colorisation or adding more details, or it could just be cleaning up the previous sketch. A concept art image is not as restricted in time as the thumbnail sketches; however, the time given is not infinite.

When drawing the concept art for the police and the weapons, I had been given feedback from the leads on what they liked. Each final result was a combination of two or three different thumbnails made into one. It felt a bit strange trying to meld several very different sketches into one, as if I each time ended up with a strange mixed breed, especially with the guns.

Compared to the buildings and surroundings, my subjects did not really need to express much mood, and thus they became simpler images with only basic colors.

Can you see the mixes made by looking at these and the thumbnails in the thumbnail post?

When the overall visual design has been set, we start doing thumbnail sketches. These are sketches done within ten minutes per picture. The trick with thumbnail sketches is speed: the moment you slow down, you should consider yourself about finished with your picture. The police sketches, for instance, were made by drawing on top of a basic pose of a man to avoid unnecessary drawing. There should be no time for thinking when doing thumbnail pictures, only drawing. We both used Adobe Photoshop and a Wacom 4 drawing tablet for all the thumbnail drawing.

It was important not to zoom in on the picture when drawing because the harder we made it for ourselves to draw detail, the less detail was going to be drawn. And it worked for the most part.

I felt a bit intimidated by the stress that comes with the time limit. Drawing can be a peaceful experience, but for a lot of people that usually also equates to slow work. I think that description fits me very well, but the more I drew, the more fun it became to push myself to finish a little bit faster.

The thumbnails for the weapons were more complicated and were not hurried as much as the police characters. This might have been unwise, as the pictures became a bit too complex and time consuming, but it helped with getting straighter and clearer lines. With the time aspect in the back of my head, the weapons took shape at a reasonable pace.

Each piece took around 10 minutes to make, give or take.

Each piece here took around 5 minutes to make, sometimes a few minutes were added due to me getting caught up in details.

– Matilda Karlsson (technical artist)

—————–

I agree with what Matilda wrote above: it was fun working with thumbnail sketches because you have to constantly push yourself to finish a sketch a little bit faster every time. For me this was a new way of creating concept art, and even though it was difficult to keep within the time frame of each thumbnail, it’s a great way to improve your drawing skills.

The concept art I’ve made depicts the environments of the slum parts of the city, and some scenario-related structures, architecture and machinery. I tried to finish each thumbnail off with some general shading to add, to a small extent, some depth to the images. Each frame took between 5 and 15 minutes to create, which is a stretch owed to my modest drawing skills.

Thumbnail sketch for residential area

These are the residential areas of the slum, and I was going for three things here: tall buildings emphasizing how the people are really living at the rock bottom of the city, as seen in frames 1, 2 and 6; vertical construction, as seen in frame 7; and early-1800s industrialism, as seen in frames 8, 4 and partially 5. I accidentally drew a concept for transportation in frame 3, but I left it in the thumbnail anyway.

Thumbnail sketch for market area

As with the finalized concept art images that follow these thumbnails, I wanted to focus on some general details typical of the cyberpunk theme. In the image above I wanted to highlight the neon signs and advertisements visible on every wall and stand in the slum markets. And as with the concept for the residential area, I tried to convey a sense of vertical construction: buildings, homes and other premises built on top of one another due to the lack of space (and freedom).

Art is often a big part of a game, so establishing a visual theme and style early in development is very important for the art to be homogeneous and for it to match the narrative and the gameplay as closely as possible.

When we started defining the visual theme we knew what the basic gameplay would be, namely that the player would control riot police squads to keep riots under control in an urban environment. We also knew that the actions the player takes in the game should feel meaningful and have an impact on the player.

With this in mind, the first thing we did was decide on a visual theme that would fit this experience. We ended up with a grounded cyberpunk theme set around 50 years into the future. In this theme we imagine a world in grave imbalance in all areas from social classes to the environment, so finding meaningful conflicts in this realm would be pretty easy. Some examples of conflicts are lack of food and clean water, human labor being worth very little, huge disparities between rich and poor people, transhumanism, etc.

With a theme broadly defined, the visual style was next. After discussing different styles and what we wanted to convey to the player, we decided on going for realism. The riots should feel real and the players’ actions should feel as meaningful as possible, so this style made the most sense. We also wanted to challenge ourselves to create a realistic looking game environment.

After the visual style and theme had been defined, we created moodboards for a variety of different areas. We started by going to image sharing and search websites and downloading any image we thought aligned with our view of what the game would look like in terms of design, colors and atmosphere. These images were then put together into a single image representing a specific area of the game. We made five moodboards in total: for the high-, middle- and low-class housing areas, as well as for the rioters and the riot police forces. These will help inspire and guide us to create art that is homogeneous throughout the development process.