On the first day(ish) of the project I made a working proof-of-concept demo. Today I’m going to pull a Nightdive by throwing everything away and restarting the project in Unity.

This isn’t as stupid as it sounds. I’m only a day or so into the project, so I’m not going to be throwing away a lot of code. Also, I think writing something in C++ and then re-writing it in C# is a good learning exercise. A year ago I took a swing at learning Unity. The problem is that once you’re done with the tutorials, you need to start making something real. But this leaves you with a three-pronged problem:

1. Learning a new programming language.

2. Working in a new programming paradigm, with strictly enforced object-oriented design structure.

3. Trying to solve this new problem. (Whatever it is that I’m currently working on.)

That’s a lot of unknowns to juggle. Things go wrong all the time when you’re programming. In a situation like this, if I do something and I don’t get the result I expect, I won’t even know where to look. Yes, maybe there’s a flaw in my design. But maybe the design is sound and I’ve somehow expressed it incorrectly in the C# language. Or maybe that stuff is fine but I’m misunderstanding Unity. Even trivial problems can take ages to sort out if you don’t know how to find them.

But re-writing something I just wrote is a pretty good exercise. If nothing else, I’ll know the logic is sound.

If you just stick to doing the Unity tutorial programs you’ll end up focused on a very narrow workflow. Unity is built with the idea that you’ll import pre-made art assets and use simple, short scripts to move them around. And it’s pretty good at that. If you take some random models from the asset store and dump them into Unity, you can make something “playable” (in the sense that the player can push buttons and make things happen) in just a few minutes. This makes it feel like you’re making big progress towards “learning” Unity, but you’re not gaining a lot in terms of understanding how to make an actual game.

Creating a procgen city is a pretty complex task that pushes me into doing a lot of things not covered in the tutorials. At the same time, this work is familiar enough that I’m not getting lost in the program logic.

This re-write takes longer than the initial job. I wrote the original demo in one(ish) day, but the re-write / translation takes almost two. That sounds bad, but this is actually really good by the standards of what I’m trying to do. I’m throwing myself into a new language, a new coding style, and a new set of tools. That’s a huge learning curve to deal with. While I’d like to claim I was able to accomplish this because I’ve got a great big programmer brain, the truth is that a lot of the credit for this easy transition should go to Unity. While relentlessly strange, these tools are very easy to use.

What Makes Unity So Strange?

In the very old days – back in the 1970s and 1980s – coding was really inconvenient. You opened up your code in a text editor. You typed in code. Then when you were done editing you exited the text editor and typed some cryptic nonsense to the terminal window[1] to have it compile all of that code into a program you could run. Assuming it worked, you could then type the name of your program to run it, test it out, and then close it again. Then you’d run your text editor to go back to editing code.

All of this was before my time. When I arrived on the scene in 1990, we already had better tools for this, in the form of the Integrated Development Environment (IDE). To me, a “normal” programming environment looks like this:

I'll admit it doesn't look very sexy.

You’ve got your IDE where you type your code. The IDE lets you browse through your source files, edit your code, and compile your program. It helps you look for errors when things go wrong. When you hit “run”, your program will start up as its own standalone program with its own window. If you’re making a game and you need (say) a level editor, then that would be another program you’d need to write yourself. Then you’d give that program to your artists and let them do their thing.

I spent my entire professional life using Microsoft’s programming tools. I started out using Borland’s Turbo C tools in 1990, but in 1994 I bought a copy of Microsoft’s Visual Studio for myself and never looked back. I dabbled with other languages and other tools over the years, but the bulk of my programming time was spent in VS.

In Unity, everything is a bit different. This is Unity:

Uh... where do I type the CODE?!

Unity lets you browse the files in your project, it lets you test your program, and it acts as your level editor. When you run your game, it runs in a window inside the Unity environment. So I guess it’s an Integrated Integrated Development Environment? Everything is integrated now, right?

Well, no. The one thing Unity doesn’t have is a text editor, so you can’t use it to edit your code. When you click on the source file to edit the logic of your space marines, it opens in a separate program called MonoDevelop. I already wrote a bunch of complaints on the shortcomings of MonoDevelop a year ago, so I don’t need to repeat them here.

Having said that: Remember that annoying, glaringly obvious, widespread, easy-to-reproduce bug where you lose the ability to paste text? That is still present. That bug turns five years old pretty soon.

Just shameful.

So Unity integrates everything except source editing, and for that you have to use this fiddly external editor that is apparently abandonware? Or if not abandonware, then “apathyware”. Either way, it’s not a comfortable way to work. Even ignoring the bugs, there are many problems with MonoDevelop that make it painful to use. I’ll probably gripe about them in a later entry when I need to do some debugging, but for now let’s just get back to work…

Texture Mapping

One of the first problems I have to deal with in these types of projects is texture mapping. Without a texture map, everything in the world would be a smooth polygon. It would look a bit like this:

The city wouldn't look EXACTLY like this, because this is a gun and not a city.

Think of texture maps like wallpaper. Imagine you’ve got this wallpaper with a strong pattern on it, and you’re trying to cover all the streets with this pattern. Except, you need to be able to have the textures meet at intersections without forming obvious seams. You need roads to be seamless at two lanes and seamless at eight lanes. That would be a maddening job. Aside from being annoying and fiddly, it would make the code really complex.

Valve discussed a similar problem in the commentary for Episode 2. When designing the caves, the level designers had a hard time getting those square bits of wallpaper to flow naturally on those organically round cave surfaces. Sure, if you’ve got the time and patience you can make that kind of situation work. You make things match up as well as you can[2] and shove all the nasty seams into a corner. Then you can stick a boulder in front of the seams to hide the mess.

Ugh. I would NOT want to try to texture this using regular rectangular images.

But then two days later, gameplay testing reveals we really need a side-tunnel in this one spot. That throws off all that tedious texture-matching, meaning you’ll have to start over.

The solution that Valve came up with is to use a shader program to make a “3D” texture that can wrap around any surface. The artist doesn’t need to line anything up. It “just works”. The trade-off here is that the artist can’t control where specific details go. But who cares? When it comes to caves, you generally don’t want to worry about where all those little surface details go. All you care about is that you don’t see any seams.

I don’t know exactly how Valve did it, but I had to come up with a way to accomplish the same thing during project Octant. You can read that entry to see how I did it, but the short version is that I projected the texture onto the surface along all three axes, and then used the surface normal to fade between these three projections. So a west-facing wall would have the texture mapped so that the polygon’s position on the north-south axis controls the horizontal mapping of the texture. A south-facing polygon will use the position of the polygon on the east-west axis. If the wall is a diagonal that faces southwest, then it would use both of these projections, blended together 50-50. This doesn’t work if the texture is (for example) a picture of words or something else that needs a particular orientation, but since we’re dealing with things like pavement and asphalt it’s no problem.
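The real version of this lives in shader code, but the blending logic is simple enough to sketch in ordinary code. Here’s a rough illustration in Python – a sketch of the idea, not the actual shader:

```python
import math

def triplanar_weights(normal):
    """Blend weights for the three planar projections, from the surface normal.

    A wall facing straight west (normal = (-1, 0, 0)) takes all of its texture
    from the X-axis projection; a diagonal wall facing southwest gets a 50/50
    mix of the X and Z projections, just as described above.
    """
    nx, ny, nz = (abs(c) for c in normal)
    total = nx + ny + nz          # normalize so the weights sum to 1
    return (nx / total, ny / total, nz / total)

def triplanar_uv(position, normal):
    """Texture coordinates for each of the three projections.

    Each projection ignores its own axis: the polygon's position along the
    *other* two axes drives the mapping, so a west-facing wall is textured
    by its north-south and vertical position.
    """
    x, y, z = position
    return ((z, y),   # projection along the X axis (west/east walls)
            (x, z),   # projection along the Y axis (floors, roofs)
            (x, y))   # projection along the Z axis (north/south walls)

# A west-facing wall: 100% of the weight goes to the X projection.
print(triplanar_weights((-1.0, 0.0, 0.0)))   # (1.0, 0.0, 0.0)

# A diagonal wall facing southwest: X and Z projections blended 50/50.
d = 1.0 / math.sqrt(2.0)
print(triplanar_weights((-d, 0.0, -d)))
```

In the shader, each of the three projections samples the texture once and the final color is the weighted sum of the three samples.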

Back in project Octant, the result looked like this:

This bricklayer deserves a medal.

A brick texture has really obvious lines in it. The spacing of those lines varies slightly across the surface, and it varies drastically between the two axes (bricks are wider than they are tall). That makes it a nightmare to get everything to line up. But above you can see I was able to wrap it fully around an irregular surface. So this works, basically. At the bottom of that… pillar thing(?) in the archway you can see the crossfade where it transitions between different mapping systems. It’s a little weird when you do this with a brick texture, but I think this is good enough for a nighttime cityscape. The player would need to be very picky and be looking very closely to be bothered by this. Like all my projects, I’m looking for the 10-minute solution that solves 90% of the problem rather than the ten-hour solution that solves it 100%.

This sort of texture mapping requires making a shader. This turns out to be amazingly hard because the Unity documentation is a disaster. For the sake of getting on with things, let’s save that rant for later and just pretend that this was a straightforward task.

Once I get the shader working, I wind up more or less where I was at the end of the last entry. I’ve got a grid of streets and a “city” of cuboids:

Hmm. Looks like the roof texture is hosed in this particular shot. Don't worry. I sorted that out later.

While a layperson might mistake this graphical feast for a Grand Theft Auto V screenshot, this is actually just my city generator. Who knows where the project could go next? Someday I may even have lighting!

Anyway, this means I can just lazily make polygons and not have to calculate texture coordinates as I go. I don’t have to worry about seams or solving complex mapping problems. Now, if my only goal is to wrap the entire world in concrete, bricks, and pavement, then this would be the end of it. But based on the research I’ve conducted by looking out my window, I’ve learned that cities have more detail than that. Buildings have windows, sidewalks have patterns, and streets have lines.

So what I’m thinking is that I’ll combine two different texture samples. One will put down the base surface, and the other will add the detail.

Shamus! What are you doing, man? You just said you didn’t want to worry about texture mapping and now you’re mapping two different textures onto an object. How is this supposed to be “easier”?

The problem I was trying to avoid was making disparate surfaces line up. So I can have two roads arrive at an intersection (or two walls meet on the side of a building) and not worry that we’ll end up with a seam. A seam would look like this:

*eye twitches involuntarily*

Gross, right? And painstakingly planning out all the texture positions so that I never end up with seams would be a pain in the ass. This base texture system I’ve come up with solves the problem for me. Now I’m going to stick (say) a window on top of that. But when I’m making the window I won’t have problems with seams. Windows won’t form a continuous surface. I can put one window on one section of wall and that bit of wall doesn’t need to worry about what any of its neighbors are doing.
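The combination step itself is trivial. Here’s a rough sketch in Python of a standard alpha-over blend – the real thing is a couple of lines in the shader, and I’m not claiming this is the exact formula:

```python
def combine(base, detail):
    """Composite one detail texel over one base texel.

    Each texel is (r, g, b, a) with components in 0..1. Where the detail
    texture (the window) is opaque it wins; where it's transparent the base
    surface (brick, concrete) shows through. No seam-matching needed: the
    base layer is continuous and the window just sits on top of it.
    """
    br, bg, bb, _ = base
    dr, dg, db, da = detail
    return (dr * da + br * (1.0 - da),
            dg * da + bg * (1.0 - da),
            db * da + bb * (1.0 - da),
            1.0)

brick  = (0.6, 0.3, 0.2, 1.0)   # base texel
window = (0.2, 0.8, 1.0, 1.0)   # opaque part of the window texture
gap    = (0.0, 0.0, 0.0, 0.0)   # transparent part of the window texture

print(combine(brick, window))   # window wins: (0.2, 0.8, 1.0, 1.0)
print(combine(brick, gap))      # brick shows through: (0.6, 0.3, 0.2, 1.0)
```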

So let’s have our shader combine these two textures and see what we get:

There, it's done. Whaddya think?

Yeah. That’s basically what we’re going for. As a reminder, these buildings are just simple cubes that fill the footprint of the building site. A PROPER building would have surface detail and wouldn’t always fill the entire volume of space. Basically, I need to write the next-gen version of the procedural building generator I created for the original Pixel City. But there was no point in writing that until I’d decided how texture mapping was going to work.

The other advantage of this system is that it lets me mix & match base textures and windows. So one building can have brick with window style #1, the next one can be brick with window style #2, then the next one can be concrete with window #1, and so on. I don’t have to make a unique texture entry for every possible combination of surface + window.
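To put some numbers on that: with 3 base textures and 4 window styles you get 12 distinct building looks while only authoring 7 textures. A trivial sketch (the texture names here are made up for illustration):

```python
import itertools
import random

# Hypothetical asset names, just for illustration.
bases   = ["brick", "concrete", "stucco"]
windows = ["window1", "window2", "window3", "window4"]

# Every possible building style, without baking a combined texture for each:
styles = list(itertools.product(bases, windows))
print(len(styles))   # 12 combinations from only 3 + 4 source textures

# Each building just stores a (base, window) pair and hands it to the shader:
random.seed(1)
building_style = random.choice(styles)
print(building_style)
```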

Like I said last time: This project is a little more ambitious than the last one, so we’re going to be stuck in these early experimental stages for a little longer. I know Pixel City showed us a cityscape almost right away, but it’s going to take us some time to get there with this project.

Footnotes:

[1] Obviously this was before the days of mouse-driven environments.

[2] Thankfully, texture maps can be stretched or squashed, which you can’t do with real wallpaper.

I would also like to know this. The reason I’m going through tutorials for Panda3D and not Unity is that I had one too many problems to juggle: not good at C#, plus new to Unity, plus on Linux instead of Windows, plus actually trying to make the thing I want to make. Python sucks for larger projects, and Panda3D is only graphics (not sound, networking, or anything else), but I already know the language, have my IDE set up, and the library works fine on Linux. If the docs were better for things more complex than pre-made assets, but less complex than expert knowledge of the totality of Unity, I’d have started using Unity.

The really useful part of that documentation page is the link to an example project, of which there are many. This stuff has gotten a lot better since the last time I messed around with Unity! (Or else I missed it entirely somehow…)


Haven’t read the whole thing yet (at work), but don’t use MonoDevelop. Unity is ending support for it anyways. You can change your preferred IDE in the preferences, and Visual Studio has a good plugin to make the integration with Unity nearly seamless (in the sense that you can use breakpoints for debugging and they actually get hit). If you use ReSharper (which I would advise for anybody writing C#) it also has a handy Unity plugin to display nice information, like which functions are called by the engine or which fields are visible in the inspector.

JetBrains, who make ReSharper and PyCharm, have a full-fledged C# IDE called Rider. I strongly recommend it over MonoDevelop. (Or anything else, frankly. It’s that much better.) Getting it set up is a bit of a pain, but it offers so many more modern IDE features for doing Unity development.

MonoDevelop absolutely deserves it. Just, uh, don’t add to the mountain of people declaring Unity inferior for issues that were fixed years ago, if you can help it? Pretty please?

…I may be a bit of a fanboy. In my defense, I did my master’s thesis (huh, turns out it’s on the internet? link for the suicidally curious, I guess) in Unity and it was an absolute joy to work with, so…

They more than deserve it. There was a short period where I had some weird problems getting Visual Studio to run after tampering with it too much (it took a month to unravel the mess I made). I tried to use MonoDevelop, really tried hard to give it a shot and live with it. Ultimately I ended up using Notepad++ to write code and edit project files, with msbuild’s command-line compiler. It was so much less painful, despite having to deal with things we normally don’t even think about in C#, like what order to compile our files in.

I agree, there is no need to “declare Unity inferior for issues that have already been fixed years ago” when there are SO MANY issues that have not yet been fixed. I could probably fill a 100-page essay on everything that is (still) wrong with Unity.
That being said, Unity is pretty great for prototyping, which may be just what Shamus needs so I hope it works out for him.

I’ve been using MonoDevelop with the Mac version of Unity for years, and it has none of the problems you mentioned in your old post. Given that Unity was originally Mac only, I guess it’s a problem with the port?

Both times I installed Unity on Windows, it defaulted to VS, so your situation is perplexing to say the least…

When I was using Unity, the (unofficial) tutorial suggested using Visual Studio from the off, so I didn’t even remember MonoDevelop was the default. Still, Unity shouldn’t push, and include by default, stuff that doesn’t work. So I’ll still enjoy it being given both barrels.

Honestly surprised that MonoDevelop is still going, what with Visual Studio Community Edition, VS Code and VS for Mac (which is built on MonoDevelop). Microsoft actually has some good docs on setting up Unity with Visual Studio Community Edition (assuming you don’t already have Pro).

I’ve really fallen in love with VS Code as of late, especially in combination with .NET Core. It still feels a little surreal to be running a bunch of Microsoft tools on my Linux laptop and have everything work so smoothly. Old Microsoft this ain’t.

I just wished they’d named it something more distinct so that I didn’t have to wade through a ton of VS search results whenever I’m trying to look something up.

You make things match up as well as you can[2] and shove all the nasty seams into a corner.

Or, you have your artists draw a single texture that will cover the whole thing.

And thinking of that has made me think about AI. Since there are already AIs that paint stuff, has anyone tried making an AI that would “manually” mesh all the textures together and make a single megatexture that you can then use for your game? Of course, this would not work for a procedural game, but for an already premade level such a thing could theoretically work and cut down the time when the level is being tweaked.

You don’t need anything as complex as AI to do this, though. If you’ve already got the textures, and they’ve already been placed on objects in your game, then you just make some program that stitches all the individual image files together and translates all the coordinates. As for your last point, that this would speed up development: can you describe what you mean here? This should actually slow down development, since you need to shove all your textures back through your megatexture process whenever you change something in the level.

What I had in mind was an actual painting AI that can fill in the mundane stuff. For example, one that knows how to paint walls from a vast library of various walls, internal and external. So whenever you need to build a level in whatever game you are making, you just fire it up and it gives you a bunch of interesting walls, with cracks, stains, bricks for the outside and wallpaper for the inside… Basically an artificial artist.

And yes, I know that current AI tech is not fully up to the task of doing this for a big-budget “photorealistic” game, but still, I can’t be the only one who has had such a thing in mind.

If you want to save time with texturing, you do what Shamus did, or make a texture-generator (usually implemented as a shader program, but you can have it be offline too), or make textures from high-res photos in real life. This is actually what Star Wars: Battlefront did for its textures / bump-maps – all photos from the real world, plus some programs that map that onto existing models, or create new models straight from the real-world objects.

“In the very old days – back in the 1970s and 1980s – coding was really inconvenient. You opened up your code in a text editor. You typed in code. Then when you were done editing you exited the text editor and typed some cryptic nonsense to the terminal window[1] to have it compile all of that code into a program you could run. Assuming it worked, you could then type the name of your program to run it, test it out, and then close it again. Then you’d run your text editor to go back to editing code.”

Heh, I’m still using Make and all that, both at home and at work (Linux, of course). And who doesn’t want to type in various git rituals while developing?

However, IDEs are slow, huge messes that just ooze aggravation, with constant waiting for the screen to update when you click something. Ugh, no.

Perhaps give CodeLite a try. It’s the best IDE I’ve used on Linux. I like it enough that I’d rather use it on Windows for C and C++ than Visual Studio, despite it being significantly less feature-heavy. CodeLite is so close to the command line that you really have to know how to work gcc and its utilities to get the most out of it.

What does HerpesDevelop do to talk to the rest of the development environment, and why can’t e.g. Notepad++ do that part of the thing, or some other text editor that doesn’t fail so horribly at being minimally functional?

Just FYI, a lot of us still work by “changing something in the editor, then going to the console and writing some magic command to re-execute everything”.

Consoles have gotten better. I understand they might not be everyone’s cup of tea. Their biggest upside is that I don’t need to learn a new environment every time I switch technology stacks – I keep using the same editor and console.

Lots of command-line stuff also watches for file changes, and only re-does work for those files instead of everything. That’s how a lot of new IDEs actually do their thing under the hood. It makes it pretty easy to use your editor of choice, and keep using good, solid command-line tools.

I know what you mean about trying to track down problems in that kind of development environment. I’ve been stuck for months on bugs I thought were in my code; sometimes they really were, and sometimes they were from some obscure setting on some obscure GameObject that I hadn’t realized was so important! And that’s how I got into the awkward habit of creating a fresh scene full of fresh testing GameObjects at the drop of a hat. I’m sure it’s worse for me because I lack the real-world coding experience that would help me spot logic problems more intuitively, but the level editor strangeness is definitely contributing.

I certainly know that pain of learning a poorly documented system. I’ve recently taken to developing my own shaderpack for Minecraft, and encountered a lack of any real documentation on how the shader mod actually works. I’ve had to resort to things like rendering all of the output from a particular shader to a buffer and seeing what exactly it draws, trying different combinations in a buffer to see what order they are executed in and what states (like depth testing and the like) are being set, and shotgunning ideas at how things are done in vanilla Minecraft using whatever data is available in a given shader (the sky alone took a nice day-long bout of poking and prodding to figure out). It would probably have been easier to just reverse engineer my own shader system for the game.