I decided to take the quickest approach – just pack all of it up in a zip and release everything. This is good because I don’t have to go through and package everything up nice and neat with a little bow on top, which was the main thing holding me back from releasing all of the stuff in the first place.

Sadly, this also means that there’s a lot of stuff in there that’s well below the level of ‘polished’: broken maps and models, compatibility issues, missing textures. It’s also way too big to put on the Steam Workshop, which cuts out the bulk of my fanbase. That said, this was really the only way I’d ever find the time to release all of it, and as a bonus, I included several source files and dmx scene files as a thank-you for dealing with the lackluster format.

One thing I didn’t count on was the death of both my Dropbox and OneDrive for (just) image hosting – the thread has been getting way more traffic than anticipated. Dropbox is notorious for doing this, but I’ve never hit the limit on OneDrive before, and those links are a pain to get. The only images left standing are those from this blog – so I’m going to change my standard operating procedure and upload release media here from now on.

With that in mind, here are the shots from this release that I’ve uploaded like three times now! It’s a mix of things I’ve posted here previously and things I’ve never put up publicly, randomly sorted, much like the release they’re attached to.

I currently have it up for download at two locations, Mega and SFMLab (Thanks to Ganonmaster for working with me to get it uploaded). It’s 880 MB compressed, but it unpacks as a 4GB SFM directory, complete with a gameinfo.txt. If you’re on FacePunch, you can comment on it here.

I’d like to take a moment to thank a few people that made this possible – Adam Palmer, PhillipK, Nirrti, Taggart, Bloocobalt, Wraithcat, W0rf0x, Kali, Rusty, Squiddy, fury_161, and Nigbone. There are a host of other authors that helped along the way, but these guys helped make this what it is – the best work is never done by one person. I’d also like to thank the G-Mod screenshot community and the Non-TF2 SFM community for their support and use of the content!

If you’ve never been on this blog before, pretty much every article preceding this one talks about the technical aspects of authoring this kind of content – if you make stuff like this or are just plain curious, I’ll give you my highly biased recommendation to browse!

[Image gallery: screenshots from the release.]

Opinion: The Future of Modding with Paid Mods
Fri, 24 Apr 2015

Rarely have I felt so passionate about a subject like this.

Game mods have been my primary source of creative output since I was 10 years old – I’ll be turning 27 this November. I’ve been modding dozens of games for as long as most up-and-coming modders in my primary community have been alive. I have a library of public releases dating back to 2003 that has garnered a traceable count of over half a million downloads. Since that number only accounts for downloads in the last three or so years, though, I suspect well over a million people have downloaded something I’ve made. If you account for derivative works or media that others have created using my content as a cornerstone, that number is truly uncountable. I’ve never directly made a penny from any of this. I’ve never asked for donations, and I only recently started fulfilling private contracts, using my reputation to make legitimate original content for projects – even then, I held on to the right to free public distribution where I could. This year, I moved into the sphere of indie dev as an environment artist, and I don’t plan on stopping now.

Does it bother me that I never made a dime off of my years of work and contributions? No. Would I like to have gotten paid for my efforts? Well, I’m not going to say no… if there’s money on the table, I’m not going to just leave it there. That said, I don’t think I would be half the artist I am today if my main goal was to make money. Allow me to lay out how I see modding communities splitting up as a result of paid mods, and why that environment would have made me quit a long time ago.

Much of my released work has been derivative. That is, in many ways, the core definition of a mod. Most of my work is unsellable because, even if I had the rights from the game developer whose game I’m putting my assets into, I don’t have permission from the other games I might have sourced content from. Even some of the mods where I did 100% of the work from scratch are unsellable because I used tools whose licensing explicitly states that I can’t profit from work done with them. I can’t speak for everybody, but when you look at the content on any workshop and apply those very basic and very clear rules, the amount of sellable content drops to near zero. If you get really strict – and people get really strict when money starts trading hands – a donation could constitute profit, so the amount of legally donatable mods is near zero as well.

Why is that?

Modding is not game development. Not fully. It’s a gateway for people to get into game dev, yes, but it is not a platform for professionals to make a living. It’s a platform for people to learn how to become a professional if they so wish, or it’s a way for people to spend a weekend doing something they love. The incentives in a successful mod community are to experiment and be bold – to improve source material or bring new works to the table in ways the original artists and developers never intended. Modders use any resources at their disposal to dream big and try and deliver on those dreams. Sometimes those resources are cracked or student versions of several thousand dollar tools, fonts with clear licensing rules on paid or free versions, or base assets from games that actively try and harden against extraction or modification. Mods are famous for building on and improving other mods, which is a huge strength, and when there is a dispute over credited work, it can generally be resolved with a few private messages if both parties are civil, or reputation corrosion if the offending party is belligerent. Modding exists with the quality and quantity it does today because it is so good at flying under the radar.

Many people agree – modders should be entitled to earn money for their hard work – but what if that hard work is built on horridly shaky legal ground? Even if the tools are paid for and the artist can make content from scratch, what if what they make is a likeness of a real person, or a mod based on content from a protected IP? What if it’s a team effort and the lead modder puts it up for sale, not knowing that someone on the team wasn’t paying for their tools? Are we going to have to start building contractual agreements to co-op on mods for liability purposes?

So what happens when we disincentivize boldness and tell people that if they play by the rules, they can make a profit? Here’s what I see happening:

People that are good at their craft – the cream of the crop – will go legit and do everything they can to appeal to the masses. Who knows, professional game developers looking to make some side cash might see this as the way to go as well. The quality of modding from these developers will go up, and they will be hailed as success stories, much like the hat makers in CS:GO, TF2, and DOTA 2. What you won’t see them doing anymore is producing things for personal reasons, or stepping outside the bounds of complicated copyright law to make something they’re passionate about – worst case scenario, they may not even have much passion for the game they’re modding. Some of these people are the cornerstone of their communities and a source of wisdom for up-and-coming modders. They may decide that teaching others their craft is akin to building up their own competitors. This will homogenize the type of mods that are well made and stifle community growth and knowledge, but these people will make a profit. They essentially become subcontractors or pseudo-game-developers, just without the stable employment, support backend, overall profit share, or general respect of the authors of the base game.

The average joes that are not quite at the level of the cream of the crop will attempt to sell their mods and become discouraged when they see no returns. Those that do make some money may never make enough to cover their favorite modeling package or photo editor, or they may experiment with or remix others’ work to make up for shortcomings. In a free modding community, these are the people that have the most to gain and learn; these are the guys that could end up in the top tier or move on to game development jobs. This is where everybody is at some point, and the success or failure of the entire community hinges on how it treats the newcomers with a lot more passion than skill. In a paid environment, these are the most burned – liable for fixing their mods when the devs break them, prime targets for DMCAs, and less likely to get the help and support they need from their fellow modders that have made it. If I had been in the environment paid mods will likely create, I would have backed out a long time ago.

In another corner are the amateurs that don’t understand why their stuff sucks and will try to sell terrible products. These people will make the rest of the community look bad in the process and devalue the work of others – not necessarily the top-tier modders, but those in the middle will get the short end of the stick when the overall reputation of ‘legit’ paid mods gets tanked by this bunch.

Next, there will be the idealists. Those that stick to the old ways and offer up their product for free, or help to educate people to enhance the community, regardless of their abilities. Hated by the cream of the crop and loved by the average joes that need their help, they may only get a fraction of the collaborative effort the community once had to offer, and they’ll be seen by some as not valuing their own time, or as making everybody else look greedy. There will be a lot of strife in communities when the inevitable line in the sand is drawn – when core mods that a lot of other people’s work relies on become a ‘do not profit from my work’ type arrangement, or when people trying to freely distribute their mods find that a mod they built on is now paywalled. This will segment the community, create endless drama, and stifle collaborative efforts.

Finally, there are the thieves. I never thought, as a modder, I would have to be wary of piracy or theft of a product I was giving away, but this is my biggest concern. Even if I said ‘hey, I want no part in this, I’m going to give my stuff out for free,’ some leech might release my work anyway and charge for it to make a quick buck. Why do I feel I’m going to have to learn how to issue DMCAs? How many man-hours am I going to waste combing new releases just to weed out my own stolen content? Alternatively, there will be the more traditional form of piracy: people that reupload paid work for free. Sure, it’s a message to the dev and Valve to go screw themselves, but at the end of the day, it’s the modders that get shafted, as it’s their effort and man-hours that are getting squeezed. Let’s not forget that for the devs, it’s a ‘release tools, open storefront, collect profit’ type operation. They’re not going to feel the sting of seeing their own work uploaded against their will.

Modding has worked for so long because it’s free. Not necessarily free as in free beer, but free as in liberty. Not because modders should be selfless and expect to get nothing from their work, but because that’s how you develop a cooperative environment for people producing and asking for highly skilled labor in a legal grey zone. Not every mod is going to go up with a paywall or a tip jar, but the equation for why any person might want to mod a game has fundamentally changed. I’m worried that even if this thing is a flop and somehow gets removed from Steam, it will still haunt the fabric of the craft. Getting paid pennies in ad revenue has killed even the most vibrant mod communities – look at Minecraft, The Sims, or GTA’s forays into paid mods, and how each caused a huge collapse. And instead of being on the periphery like in those cases, this time it’s news headline number one for the gaming world.

Just 24 hours in, and it has split the downloading community from those that work to produce mods, turning friends into customers or corporate lackeys. Those poor souls that dared to spearhead this operation are now recipients of nonstop spam on their mods and personal profiles. The deal on offer shafts all modders, with the percentages and the utter lack of real support from Steam for either the modder or community members if a deal goes south – but what I fear most is how it’ll segment communities. This may kill modding in good faith, and those that wish to continue that tradition will end up sounding like those guys that talk about nothing but their views that all software should be free and open source.

In the future, good mods may be considered and even marketed as third-party DLC – not as products of a strong and passionate community. Although not as hard as getting hired by a game studio, there will be a high barrier to entry in this market space, both financially and skill-wise. After consumers get over being charged today for what was ‘free’ yesterday, they’ll likely appreciate the additional work, polish, and support that will come from professional mods. The modders that end up successfully selling mods will love it too. The people that lose out the most are those that drive a modding community: the unskilled and looking to improve. I was there for years, and hell, I could always improve, but at least I ‘grew up’ in an environment that fostered open sharing of ideas and techniques. I find it sad that the next batch of modders might not get to experience that.

Substantial by Design
Fri, 20 Mar 2015

I took a few extra days I had off recently to explore new tools, specifically Substance Designer (4). In this post, I’ll talk about my first-time experiences with it, directly contrasting them with my experiences with the Quixel Suite. My end target was Marmoset Toolbag 2, if only for general testing.

The Model

For the purposes of this test, I authored a simple near-future RPG round. It’s not an overly complex (or, according to Nirrti, a common sense driven) model, but it does hit on a few points that will inform me about the toolset’s strengths and weaknesses, specifically that of the built-in baker.

Custom cages – the inner spiraling on the low poly is a complex concave shape that is more or less the worst case scenario for overlapping from simple push cages. I had to make a custom cage to make any bake work. I also did this to see how well auto-cages would handle nasty geo like this (hint: not very well)

Traditional high poly – I did the high poly in max and exported that to zbrush for some sculpting, but nothing dramatic enough to force a retopology. The damaged variant I sculpted will be completely executed via texture work. I’m not an expert at zbrush but it’ll provide a nice alternate texture.

Floating bake-only highpoly geo – the front face has some floating inlays that need special consideration in bakes to avoid improper shadow and height map data.

Multiple Subobjects – the fins are a separate object on the same UV space, and a proper bake must be exploded to avoid errant overlaps and bad AO shadows, but ideally combined to the same bitmap.

Low poly cylinder/wave testing – I considered redoing my geo a bit to increase the quality of the bake along the surface of the cylinder by avoiding waviness, but decided to keep it low to see how SD’s baker would handle it.

Custom decals – since being able to work with text and shapes that can’t be easily generated is rather important, I created some decals to work with.

ID map/decals/UV

dDo Pass

With the model done and ready, I made a ‘baseline’ to compare against. Since I’ve got about 8 months of the Quixel Suite under my belt, I decided to push through the entire pipeline using that toolset. I used xNormal as the baking tool and made short work of the model’s texture. I grabbed a few material presets, did some custom dynamask editing to clean up seam overflow, made some tweaks to the wear to make it look less ‘dDo-y’, and imported my custom decals from the PSD I was using to store/composite my bakes. It took me an hour or two in one sitting, with the biggest time sink being a mid-workflow crash that lost about 30 minutes of unsaved work. Such is life with Quixel Suite (1.8).

As an aside, I duplicated the project, re-imported my bakes with the zbrush damage model, tweaked my existing settings for more wear and tear, and re-exported the maps. That took less than half an hour and produced some really nice results with little extra work.

With that completed, I set out to replicate my work with Substance Designer.

Substance Designer Pass

After following the introductory tutorials and analyzing a few sample projects, I set out to texture my RPG. Please keep in mind that this is very much a first-impressions discussion. I’m sure it does not reflect the total power of the tool in the hands of someone with experience; it’s more of a ‘this is what I was able to do on a first try’ breakdown. There are things I likely did wrong, and I’m sure that in a couple of months I’ll look back at this and roll my eyes at how obvious a simpler path would have been.

An included sample object from SD

Baking

One of the appeals of SD is that it has integrated bakers for AO, normals, gradients, etc. built in, so in theory, you can sidestep baking in the modeling package or xNormal. The bakers are fast, but I found the generated AO a little lacking for my taste. It didn’t handle the concave section of the RPG so well; the AO was too subdued for such a deep cavity. Since AO is an important component of procedural textures, I was hoping it would yield better results. I also found that it has a harder time negotiating smoothing errors than xNormal does, and the default settings didn’t leave as much padding as I’m comfortable with. Another caveat was that because the body and fins were baked separately, they got separate outputs. I couldn’t find any setting that allowed me to combine all sub-objects into the same output, so I ended up creating a cheap subobject mask in Photoshop and combined my maps in the node phase of the texture process.
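As a rough illustration, here’s what that ‘cheap subobject mask’ combine amounts to as a hypothetical NumPy sketch – the function and array names are mine, not anything SD or Photoshop exposes:

```python
import numpy as np

def combine_bakes(body_map, fin_map, fin_mask):
    """Composite two per-subobject bake outputs into a single texture.

    fin_mask is 1.0 on texels owned by the fins' UV islands and 0.0
    elsewhere, so each texel takes its value from the bake that owns it.
    """
    return fin_map * fin_mask + body_map * (1.0 - fin_mask)

# Toy 2x2 AO maps: only the top-right texel belongs to the fins.
body = np.array([[0.8, 0.8], [0.8, 0.8]])
fins = np.array([[0.2, 0.2], [0.2, 0.2]])
mask = np.array([[0.0, 1.0], [0.0, 0.0]])
combined = combine_bakes(body, fins, mask)
```

The same lerp-by-mask works for any pair of maps (AO, normal channels, curvature), which is why one grayscale mask is enough to merge all the separate bake outputs.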

I couldn’t find a solid way to incorporate floating geo (the front panel indents) in the SD baker. There are a lot of settings and checkboxes to go over, and I have no doubt there is a way to do it, but I wasn’t able to find it offhand. I’m still partial to 3ds Max as an AO baker, only because I really know how to get a good bake out of it, even if an excellent result takes a couple of hours. If it’s not worth an evening of baking, though, xNormal still has better results and more finely grained options to tweak than the SD AO baker. All that said, it fared quite well overall, and I imagine a good mesh with a good clean cage won’t have a single problem using SD as its sole bake source. I also really like SD’s position (XYZ gradient) map, since it’s the first time I’ve had that as a ‘one click’ bake option.

Texturing

I decided to start from scratch for the actual substances. There were reference substances in the example files that I could have copied, but I found it very rewarding and more informative to create my own.

The tutorial series guided me in the direction of separating subobject materials into separate substance files and combining them in a last pass, using my ID map as a multi-switch masking layer. This is very similar to the dDo workflow, where smart materials are applied to the entire texture and masked off after all is said and done.

A mid-process shot of the masking substance

One thing worth mentioning about the above shot – I was still getting the hang of the I/O system, so it is much messier than it needs to be. Ideally, you should be able to make one connection, but if your sub-substance is missing or has improperly linked nodes, you have to drag some of the links to the inputs manually. Now that I know how to do it right, I shouldn’t have this problem in the future. That’s kind of a theme with SD: you can brute-force a bad workflow because it lets you, but that almost defeats the point of using the tool in the first place.

I used the PBR workflow, and with a realtime previewer, I was able to get nice-looking material definition by pulling a few sliders. If I were going for accuracy, there are tons of PBR reference swatch sheets for known surface types floating around.

Submaterial nodegraphs

My end nodegraphs were really messy and probably very inefficient. In the above image, you can see my materials for paint, plastic, metal, copper, gold, and glow. Beyond getting the very basics from a reference, these were mostly created with a ‘drop, link, and look’ type workflow: drop in a node, link it to something, see what it does to the pipeline, and tweak things until they look okay. It was all pretty fun, and definitely the meat of the program. Even as I went from one substance to the next, I found myself getting a little better and smarter about usage each time. I can easily see myself building a whole library of base substances, both hand-made and downloaded, to cut down on the time it takes to make these things.

I’m pretty familiar with nodes, but I don’t think I’ve ever used them for pure image manipulation before. It’s not an impossible hurdle to jump, but it does require a little rethinking of how to approach the path from A to B. The closest Photoshop analog I can think of is adjustment layers. Just imagine that all the functions Photoshop has – like invert, noise, blur, and such – were available as adjustment layers that you never collapse. That’s kind of similar to the SD texturing experience. Most of the ‘texture’ comes from either a bitmap (normalmaps, baked AO, decals, etc.) or a noise layer of some sort. If you look at the above graphs, you can see that I started with the bitmaps on the left and SD-generated maps at the top center, with the end results on the right and everything in between being the process of going from A to B. (Also, the furthest-left bitmap, the white one with the black hole in the top center, is the masking map for combining the fin and body bakes – that’s why there are two of all of my bitmaps.)
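The ‘adjustment layers you never collapse’ idea can be sketched as plain function composition – a hypothetical Python toy, not anything from SD’s API:

```python
def pipeline(*ops):
    """Chain image operations like adjustment layers that are never
    collapsed: every step stays editable because nothing is baked down."""
    def run(value):
        for op in ops:
            value = op(value)
        return value
    return run

# Hypothetical per-pixel 'layers' operating on a single float value.
invert = lambda v: 1.0 - v
brighten = lambda v: min(v * 1.5, 1.0)

chain = pipeline(invert, brighten)
result = chain(0.8)  # 0.8 inverted, then brightened
```

Swapping, reordering, or re-parameterizing a step rebuilds the whole result, which is exactly what makes the node approach feel different from flattening layers in Photoshop.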

The basic path for all of the edge-wear layers is something like this: curvature from normal + blur + gradient adjustment to brighten/harden edges + noise + gradient to adjust strength = mask for dirt or edge wear. This is essentially what the dynamask editor in dDo does behind the scenes, but instead of using (and/or tweaking) Quixel-tested-and-approved results, you’re rolling your own, for better or for worse.
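That recipe translates almost literally into array operations. A minimal sketch, assuming we already have a curvature map and a noise map as float arrays in [0, 1] – all names and parameter values here are illustrative, not SD’s internals:

```python
import numpy as np

def edge_wear_mask(curvature, noise, hardness=8.0, threshold=0.6, strength=1.0):
    """Turn a curvature map into an edge-wear mask: blur, then
    brighten/harden edges, break them up with noise, and scale."""
    # Box-blur stand-in for SD's blur node (3x3 neighborhood average).
    h, w = curvature.shape
    padded = np.pad(curvature, 1, mode='edge')
    blurred = sum(padded[y:y + h, x:x + w]
                  for y in range(3) for x in range(3)) / 9.0
    # Gradient adjustment: a steep sigmoid around the threshold hardens edges.
    hardened = 1.0 / (1.0 + np.exp(-hardness * (blurred - threshold)))
    # Noise breaks up the uniform edge line; strength scales the final mask.
    return np.clip(hardened * noise * strength, 0.0, 1.0)

rng = np.random.default_rng(0)
curv = np.zeros((8, 8))
curv[:, 4] = 1.0                     # a single hard "edge" down one column
wear = edge_wear_mask(curv, rng.random((8, 8)))
```

The resulting mask concentrates along the high-curvature column and fades everywhere else, which is the behavior you’d then feed into a dirt or bare-metal blend.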

SD node work isn’t without its gripes.

The biggest annoyance was having to be conscious of whether my last node was black/white or color. It’s not like SD cares too much – it will give you converter nodes when it can – but it creates a lot of mess if you don’t at least make an effort to keep things clean. Some nodes, like blur, require a specific input of color or B/W, and you have to swap out the node if you grab the wrong one. You can see from the above graphs that it’s something I grappled with and ended up flip-flopping on when I wasn’t paying attention.

The normalmap blend node is very sensitive; even the slightest variations in the heightmap input can become craters on top of your normalmap. It has an intensity slider that goes from 0 to 10, and I found myself sticking to 0.1–0.9; I can’t imagine what ten times the strength would do, or more accurately, when it would be needed. It’s fine for small things like noise and dirt overlays, but down the road, if I intended to do heavy normalmap adjustments, I would definitely end up using nDo or even CrazyBump to get better results. (Provided the texture is using a bitmap workflow.)
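For context on why that slider is so touchy: a normal-from-height conversion is essentially finite differences on the heightmap, with intensity scaling the slopes before normalization. A hedged sketch of that general technique – not SD’s actual implementation:

```python
import numpy as np

def height_to_normal(height, intensity=1.0):
    """Convert a heightmap to a tangent-space normal map.

    The intensity factor scales the X/Y slopes before normalization,
    which is why tiny height variations become 'craters' at high values.
    Returns an (H, W, 3) array with components in [-1, 1].
    """
    dy, dx = np.gradient(height)
    nx, ny, nz = -dx * intensity, -dy * intensity, np.ones_like(height)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    return np.stack([nx / length, ny / length, nz / length], axis=-1)

flat = np.zeros((4, 4))
normals = height_to_normal(flat)         # flat height -> straight-up normals

bump = np.zeros((4, 4))
bump[2, 2] = 0.1                         # one tiny height spike
low = height_to_normal(bump, intensity=0.5)
high = height_to_normal(bump, intensity=10.0)
```

The same 0.1 spike that barely tilts the normals at intensity 0.5 dominates them at 10, matching the crater behavior described above.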

Some of the adjustment nodes felt a little limiting at times. The default “Blend” node only has some blending modes (multiply, linear add, masked blend from alpha, etc.); other functions like overlay, soft light, and hard light have their own nodes that you have to grab from the toolbox for some reason.

I found myself relying heavily on gradient nodes for most of my adjustments, since I found them a little more powerful and easier to use than the adjustment node, especially if you want to posterize, harden, or invert your edges. This could be due to my preference for gradient maps in PS over the Levels editor.
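That gradient-map behavior – remapping values through a curve to posterize, harden, or invert – boils down to piecewise-linear interpolation through control points. A small illustrative sketch (the function name is mine):

```python
import numpy as np

def gradient_remap(image, in_points, out_points):
    """Remap pixel values through a piecewise-linear gradient curve,
    like a Photoshop gradient map or an SD gradient node.
    Values outside the input range clamp to the nearest output stop."""
    return np.interp(image, in_points, out_points)

img = np.linspace(0.0, 1.0, 5)                       # [0, 0.25, 0.5, 0.75, 1]
inverted = gradient_remap(img, [0.0, 1.0], [1.0, 0.0])
hardened = gradient_remap(img, [0.4, 0.6], [0.0, 1.0])  # steep step near 0.5
```

Squeezing the input stops together hardens edges into a near-threshold; reversing the output stops inverts; adding intermediate stops posterizes.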

Setting inheritance got a little messy for me near the end. SD allows you to change the output resolution (and other things) of the maps on the fly, and will recompute them as needed at any point in the pipeline. If you know what you’re doing, you can probably really cut down on compute time by making size sacrifices at various points in the pipeline, and it’s nice that you can output the same texture anywhere from 256×256 to 4096×4096 by just pulling a slider – but one of my submaterials got confused about its intended size at one point, and I ended up on a wild goose chase to find out where it happened.

The toolbox has a lot of complex nodes that I could spend hours in documentation learning how to use. I imagine I could have saved myself some time and gotten some edge wear masks prebuilt from a single node, but that’ll take more research and time.

Exporting

When all was said and done, my .sbsar file was huge. I’m not sure if it was grabbing all of the baked maps and compiling all of the generated ones or what, but it came to 200 MB+ and took a solid 5 minutes to generate. On top of that, it crashed Marmoset when I tried to link it. I’m pretty sure this was just me not knowing how to do it right – shame on me for thinking I could just click a simple button! I’m going to hold off on making any snap judgments about the export process, since one of the greatest advertised strengths of the package is integration with realtime engines. It’s something I hope to try, but it appears to require more SD knowledge to leverage than I currently have. I took the more traditional bitmap route this time:

Exported Maps

I did get a few anomalies when I loaded my maps in Marmoset – the roughness seemed a little off compared to the SD built-in viewer, particularly on the dull grey metal. I checked sRGB and linear color space settings, but those weren’t the problem. I’m hoping it’s just a checkbox I missed or the difference between IBL skyboxes – it worries me that the interactivity of live updating in the previewer is undermined if it doesn’t match the final result.

Rendered Scene. SD (top) and dDo (bottom)

Final Thoughts

I didn’t get a chance to do everything I wanted to do, but I do have plans for what I want to try next time I use the program, for sure.

I want to give my material subnodes inputs for things like normals, base metalness/roughness, and color, turning them into black-box ‘drop in and get good results’ processes, exposing the base values as parameters down the line.

Importing my damaged variant and adjusting values to see how quickly I can come up with nice results. If I don’t abstract my bitmaps from my materials first, as suggested above, that could take ages.

Figuring out the proper export procedure and getting the sbsar + plugin to generate the textures.

Testing UE4 integration.

Trying out Substance Painter. I think that warrants its own blog post though.

Make procedural bricks or a tiling floor pattern. Everybody seems to do that with this program at some point. When I first saw SD work, I thought that was all it did.
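The first item on that list – black-box materials with exposed parameters – can be sketched as an ordinary function whose keyword arguments play the role of exposed substance parameters. Everything here is hypothetical, just to show the shape of the idea, not SD’s API:

```python
import numpy as np

def metal_material(curvature, base_color=(0.6, 0.6, 0.65),
                   roughness=0.4, edge_wear=0.5):
    """A 'black box' material: feed in a bake, get maps out, with the
    base values exposed as tweakable parameters."""
    h, w = curvature.shape
    wear = np.clip(curvature * edge_wear * 2.0, 0.0, 1.0)
    albedo = np.empty((h, w, 3))
    albedo[:] = base_color
    # Worn edges expose brighter bare metal and read as rougher.
    albedo = albedo * (1.0 - wear[..., None]) + 0.9 * wear[..., None]
    rough = roughness * (1.0 - wear) + 0.8 * wear
    return albedo, rough

curv = np.zeros((4, 4))
curv[0, 0] = 1.0                           # one high-curvature texel
albedo, rough = metal_material(curv, edge_wear=1.0)
```

Swap in a different bake or pull a different `edge_wear` value and the whole map set regenerates – the same promise the exposed-parameter workflow makes, just without the node graph.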

Substance Designer is a complex tool. The difficulty of your SD experience directly relates to how well you know the tool; the learning curve is quite dramatic. This is a program that requires you to watch tutorials and look at examples before you start – the pool is just too deep to jump in unprepared. Contrast this with dDo, where (if you know Photoshop) it’s 0 to attractively textured in a few minutes, even on the first try. dDo’s simplicity is its greatest strength and its heaviest weakness – it comes with a great library of prebuilt, high-quality materials, but you need to dig into the nuts and bolts of the application to make your texture look unique. That means building your own materials and tweaking and painting over custom dynamasks to create a texture that others don’t immediately recognize as auto-generated. It’s at this point that dDo goes from intuitive and easy to clunky and at times frustrating to use.

SD, on the other hand, assumes you are going to be doing the nitty-gritty nuts and bolts work in the first place, and doesn’t really do anything for you. It has a few examples to play with, but you really have to build your own materials and learn how it works to get the most out of it. Substance Designer isn’t just a texture suite; it’s a heavy automation tool that takes time to master, but is worth the journey. Seeing what others have done with it and looking at their example products, I can see that it has much deeper potential than dDo, but it comes at the price of time to learn. Simply using SD for your texture doesn’t guarantee it’s going to look good when you’re done.

I was a little disappointed that I was unable to implement the plugin workflow, as I believe that is where SD starts to break away from other packages. The idea of passing low-level variables that can make dramatic changes to the texture on the fly, in the end package, has the potential to be very powerful. Instead of exporting bitmaps and using, for example, UE4’s powerful node editor to write a shader system that adds variability on top of those bitmaps, you can eliminate the disk space of storing the intermediate steps altogether and let SD take the wheel (with your guidance) from raw model to end result. That’s something I really want to try, but it is obviously going to take more than the 10 or so hours I’ve been able to scrape together in my fleeting spare time.

Let SD do this kind of work, and more!

What I think SD represents is a new approach to texturing: leveraging a procedural workflow to automate the more tedious portions of hand drawing edge wear, in conjunction with randomization and noise to generate a base for various materials instead of relying on photos or hand-drawn bitmaps. Add seed control and the ability to apply the same process to different bases, and you effectively have the ability to generate complex variation that can be reused effortlessly. What’s really cool is that companies appear to be starting to use it in exactly this capacity. If you have an additional hour and are interested in the subject, I highly recommend this video; it’s from someone far more talented than I am, talking about using both programs in modern game development pipelines:
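The ‘seed control plus reusable process’ idea is easy to sketch outside of SD. Here’s a minimal Python stand-in (not SD itself – the function names and the thresholding are my own illustration) of one wear process applied to different bases with reproducible seeds:

```python
import random

def wear_mask(n, seed):
    """Reproducible pseudo-random wear mask: the same seed always
    produces the same mask, so variation is repeatable on demand."""
    rng = random.Random(seed)
    return [1.0 if rng.random() > 0.8 else 0.0 for _ in range(n)]

def apply_wear(base, seed):
    """Darken the base texture wherever the mask marks a 'worn' spot."""
    mask = wear_mask(len(base), seed)
    return [b * (1.0 - 0.5 * m) for b, m in zip(base, mask)]

metal = [0.9] * 16   # stand-in 'metal' base values
paint = [0.6] * 16   # stand-in 'paint' base values

worn_metal = apply_wear(metal, seed=42)  # same seed -> identical wear placement
worn_paint = apply_wear(paint, seed=42)  # same process, different base
variant    = apply_wear(metal, seed=7)   # new seed -> a fresh variation, for free
```

The point is the last three lines: the process is fixed, so swapping the base or the seed gives you a new texture with no extra authoring.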

In short, I like Substance Designer. I need to learn how to use it better before I could consider replacing dDo, but I can see it slipping into my workflow alongside it. Getting the most out of SD means mastering part of an asset creation pipeline, like shader editing or modeling; it’s not a ‘shortcut’ tool like dDo is designed to be. The two tools can fill the same general pipeline slot and approach the same problem of texturing in the same general way, but the target audience and product scope are different. dDo is targeted at people that don’t really mind giving the reins to a program to get quick, good results for textures without investing a ton of time. Substance Designer is for people that want to go deeper – that want total control of the look of a texture, combined with on-the-fly modularity and even programmability in their textures, at the expense of the time to set up and learn.

It’s been a nice change of pace from the work I’ve been doing on Due Process, where everything fits into a 128×128 box, lighting is hand painted on the texture, and every pixel counts. I imagine I’ll be writing about that soon, but right now I’ve got too much work to do to talk about workflow and put together beauty shots. Until then, thanks for reading!

[Gallery: ID map/decals/UV · An included sample object from SD · bakes comparison · A mid-process shot of the masking substance · Submaterial nodegraphs · Exported Maps · Rendered Scene, SD (top) and dDo (bottom)]

January February Rollover Report

(Published Mon, 02 Feb 2015 – https://internethatemachinae.wordpress.com/2015/02/02/january-february-rollover-report-2/)

Okay, so this is a little late going out. The main reason for the delay has been good news, and I wanted to make sure it stuck before I wrote up something on it. Also, here’s a fan!

2014, In Review

My previous post wrapped up my later projects for the year quite nicely, so here I just want to give you a rundown of the year’s numbers. Sorry for the general lack of pictures.

Steam Workshop

2014 was a great year for me on the gmod and SFM workshops. I more or less ‘cashed in’ by releasing the bulk of what I had been working on for the past few years, and saw a terrific response. By the numbers, this was 2014:

Gmod – 7 Addons

Visitors (to workshop page): 169,446

Total Subs: 219,388 (129% of visitors – 29%+ thumbnail-only downloads)

Active Subscriptions: 129,111 (59% of people still have the addons installed from first download)

Favorites: 2,753

Positive ratings:2,851 (~94% positive)

SFM – 7 Addons (85% Overlap with gmod workshop, or one ‘exclusive’ and one excluded for technical reasons)

Visitors (to workshop page): 9,926

Total Subs: 18,734 (189% of visitors – almost half never visited the desc. page.)

Takeaway: These are some impressive numbers – there were just about 250 thousand downloads of something I made this year, and I have seen my work used in projects more than ever before. None of this is monetized, of course, and these combined are a fraction of the single-file downloads of some of the headliner addons, but a few of my more popular works have landed in the top 250-300 downloads on the workshop of a wildly popular platform. All of the trends I noted last year, like thumbnail downloads and the fractional audience of SFM vs. gmod, are still true. I had to omit this year’s most popular gmod download from the SFM workshop for purely technical reasons with the uploader, which, I contend, is one of the reasons why the platform is so anemic compared to its sandbox cousin. I sent an email to the support team, but emailing valve expecting results is about on par with praying to a deity and expecting a miracle.
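For the curious, the percentages quoted above check out; a few lines of arithmetic against the raw numbers from the lists:

```python
# Raw 2014 numbers, taken straight from the lists above.
gmod_visitors, gmod_subs, gmod_active = 169_446, 219_388, 129_111
sfm_visitors, sfm_subs = 9_926, 18_734

# Subscriptions can exceed page visits because the workshop lets you
# subscribe straight from the thumbnail without opening the description.
gmod_sub_rate  = gmod_subs / gmod_visitors   # ~1.29 -> "129% of visitors"
gmod_retention = gmod_active / gmod_subs     # ~0.59 -> "59% still installed"
sfm_sub_rate   = sfm_subs / sfm_visitors     # ~1.89 -> "189% of visitors"

print(round(gmod_sub_rate * 100), round(gmod_retention * 100), round(sfm_sub_rate * 100))
# prints: 129 59 189
```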

I hope you’re happy, I had to break out Excel for this post.

Youtube

Youtube has awesome analytics. Most of my views have been and will continue to be driven by embedded links from the steam workshop. Here are my ’14 numbers:

Views: 13,305

Minutes Watched: 11,344

Subs: 97

Likes: 151

Takeaway: My best videos were promos for workshop addons, with a direct correlation between workshop popularity and video performance. I made one ‘offbeat’ video – an hour-long recording of a streaming video tutorial in SFM fundamentals – which got over 1,000 views, but given its hour-plus running length, only an average watch time of about 2:30. I would like to do more tutorials, but it’s time consuming, and in the future I’d like to do a better job than that offering. Overall, these numbers are twice those of my entire previous youtube performance up to that point.

All of this still feels a little moot as I don’t monetize videos and I see youtube as a secondary source of creative output at best. Still, it’s nice to see a slowly building audience. I just don’t want to get people’s hopes up when my average upload is an 11 second video of some technical proof of concept.

Clearly it’s a transcendent artform and if you don’t get it you’re just uncultured.

Twitter

I started using twitter in 2014. I use it as a news aggregator, a platform to push my game art, and as a game job search tool.

My numbers aren’t all that impressive, but I like posting things and seeing visible progress over time. I’ve made about 100 tweets since May and have about 30 followers.

Believe it or not, it’s actually pretty successful as a job tool provided you follow the right bots – I got a job offer through the service, no joke. It was a serious commitment with some drawbacks, and after a little soul searching, I passed on it. In retrospect, I’m glad I declined.

Other thoughts:
With my slow but steady exit from source, I doubt I’ll be putting up these numbers again in ’15. I’m okay with that, though. What these numbers don’t show are the incredibly dense comments and outright spam that I have to contend with on a daily basis. Another side effect is a more or less constant stream of unsolicited requests for custom content and private tutoring. I’m not hurting for work now that I have two jobs, and what little time I get I like to enjoy. As of writing, I have 5 friend requests on steam, and this just after clearing them out last week; it’s gotten to the point where I avoid launching my steam client if I can get away with it. I really wish valve would revamp the friends system and/or the workshop system to add a layer of privacy between my workshop releases and my personal profile.

Even if only 1 in 10 people that download something I’ve made look at my profile (and given the trend of popular downloads leading to download spikes on other addons, they clearly do) that’s still on the order of 25 thousand people looking at how many, what, and how often I play games I own, plus screenshots and a litany of other info. On the other hand, I can’t just close down my steam page because I do business through it and I do have actual friends I keep up with only through steam. I’ve even had to deal with a personal stalker that has connected the dots and found things like my resume, facebook, and linkedIn profiles – things that – as an adult in 2015 – I need online and searchable but don’t want every ’13 year old kid that liked that thing I made’ knowing.

I’ve made a concerted effort to put my best foot forward with my online presence regardless of the platform or website, but I try to compartmentalize my digital presence – it’s why my steam handle is still Lt_Commander and not Ben Bickle, and it’s why I’ll never connect my facebook there. I get that we all live in digital fishbowls, but there are times that I don’t want to be directly connected to a stream of people begging for me to make more things for them to download or to teach them the ways of photoshop or 3dsmax. It’s not that I don’t like to interface with new people or meet fans (or accept the fact that I have fans, which is still weird to think about), but I just want to enjoy my nights and weekends without having to compose responses to why I’m not interested in making a custom map for a server or why I don’t have time to sit down and hold somebody’s hand through the basics of content authoring. Regardless of how much I deal with, I’m still a teeny-tiny fish in a huge ocean of content creators, and there are scores of people that are better than I am at what I do. I know others have it way worse, but what I deal with now is enough for me to get frustrated to the point of writing about it on a blog. (heh)

2015, Going Forward

Within the first few hours of the new year, I was given a job offer as a contract artist for Giant Enemy Crab, AKA the guys that are working on Due Process. As part of the deal, I have been in a trial period for the past month, and as of this writing, my position on the team is solid enough to talk about. This means that for the first time, I’m actually a (paid) game developer of some sort!

It’s a part time job on top of my dayjob, but it is awesome! It feels like being in a band – working night and weekend gigs and doing the best you can with a group of peers who happen to be some pretty fun dudes. There’s a lot of talent and passion on the team, and I’m happy to be a part of it. Due Process is already a well constructed and fun game – I’m just there to help make it prettier.

As such, my time and effort are going to be focused around this project. Everything else is essentially on hold until I’m done with this. I’m still going to use this blog to do writeups, and I’ll still keep you up to date with things I have the greenlight to show off. The art style is …stylized, so what you’ll likely see out of me for the next several months is going to be like this:

(Note: I’m not making a Star Trek game. This will not be in Due Process. This was a personal primer for the type of aesthetic design we’re shooting for with my work on the project.)

Just a quick comparison of a prop I made previously directly converted to the artstyle, another primer to get a feel for the texture design theory. Here, the texture went from 1024×1024 to 128×128.

This is much more in line with what you’d see in game – as a matter of fact, this rooftop HVAC might end up in the game!

For now, keeping track of my work means keeping on top of Due Process. I believe in the game and heartily recommend keeping up with it – We have a subReddit, a Facebook page, a Twitter account, and a thread on Facepunch (which is how I found out about the project).

My Plans for 2015

At the end of my last rollover, I laid out some goals. I’m actually pretty proud, I did well in meeting most of them.

Last year, I wanted to explore high poly baking in max. It is now integrated into my AAA asset pipeline, and I’m able to leverage the skill to generate better quality models across the board. This year, I want to branch out and explore Maya and zBrush in order to boost my studio hireability. I also want to learn Substance Designer and Painter. They were given to me as gifts for Christmas, and I promptly accepted a job where they’re useless! That doesn’t make the suite any less awesome or less worth learning, though, and I should keep practicing with more traditional artstyles since my current work is fairly unique.

I wanted to explore animation. I still consider it a secondary skill, but I do have at least a baseline ability to rig and animate a model, although it’s not much better than where I was at the end of ’13. This year, I’m not planning on focusing on animation more than required as I move away from source (and by extension SFM). I feel I’ve found a specialty in environmental art and, having given it a shot, don’t really enjoy animating. That said, I would love to explore and improve my rigging and ‘animation prep’ stage, to make my content more animator friendly.

This kind of animator friendly.

I still have small source projects I want to release, some of which I talked about in the last update. However, they take a backseat to my job related obligations. I have a few models that really just need to be polished for release. I don’t want to work with source unless I *have* to, and getting those assets out the door can signify a gracious shutdown of my source engine/last gen pipeline. I understand that most of my following is centered around my work in source and I’ll likely lose a few people along the way, but after a solid decade working with one engine, I am tired. I love the prospect of exploring new engines and self improvement through diversification. That just can’t happen if – after finishing this game – I jump right back into source.

My last rollover talked about starting a mod. That was before Unreal Engine 4 and everything that brought to the table, and I’m now working on a game using the Unity engine. My goals for 15 are to provide the best work I can for the project I’m a part of now and when that’s over, who knows – I’ll just need to see what’s out there. I’m optimistic about what could come next. Here’s to a new(ish) year!

2014 Q4 Updates

(Published Thu, 04 Dec 2014 – https://internethatemachinae.wordpress.com/2014/12/04/2014-q4-updates/)

Hey everybody! It’s been a little while! I have been busy, and although I haven’t done any one thing that I feel is ‘blogworthy,’ I do have enough small things that are worth an update. If you want more granular updates, I post pretty actively on twitter – it’s generally the same content I post here, just smaller and with horrid compression on the images – plus, you can get in contact with me there. This post is just going to jump around a little more than usual; I just want to give an idea of what I’ve been doing without drowning you in transitional text.

Most of what I’ve been doing lately has been small props and individual pieces, as well as testing new software for integration into my workflow. Above is an image of a knife I finally finished about a month ago, as rendered in Marmoset using PBR. Hand-done textures here, no Quixel or dDo – I wanted to make sure I hadn’t lost my touch. This image illustrates a few new tools that are slowly worming their way into my pipeline: Marmoset obviously, as well as zbrush, which was used to sculpt the high poly for the handle. Learning Marmoset has been my focus for the last month; it’s deeply controllable, previews UE4-ready PBR textures quite well, and produces some damn fine results if you set up your scene properly. Compare the above to my SFM output of the same knife using the same model with the same textures, using equivalent source shaders:

Here are some other Marmoset renders using the UE4 PBR texture pipeline, all of which I’d consider works in progress:

Although I’m moving away from source at an accelerated rate, I’m still doing some work with it. I released Meryl from Metal Gear Solid 4 for SFM and Gmod. I also fully released the pertinent source files, including the .max (10+) file and .qc/qci for my FACS face setup. If you need resources to look at and compare against when doing your own, I highly recommend you pick up this file.

Sadly, this release was a victim of feature creep – there were a few finishing touches that would have let me finish this in July, but I couldn’t find the motivation to do them until last month! Such is the way of things when there are no deadlines or incentives beyond a sense of completion.

I plan to have the rest of the existing, source ready sci-fi assets out the door before 2015. I released Norad, and I still need to push out the set pieces and clean up the maps for the other portions of the Synthesis project. I’m going to try and lump in a few new model hacks I’ve been working on as well:

Somebody asked for my AO settings in SFM. I tune them per-camera, per-scene, but I generally go for sharp cavity shading and boost the radius until I start seeing halos around the outer edges of the actors. Here are some examples:

I also wrote up a simple 9-step tutorial for getting realistic chromatic aberration in photoshop. It’s a little ‘brute force,’ but it works:
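I can’t reproduce the 9-step Photoshop recipe here, but the core of the effect – offsetting the color channels against each other – is easy to sketch in code. This is my own minimal stand-in, not the tutorial’s exact method:

```python
def chromatic_aberration(pixels, width, shift=2):
    """Crude chromatic aberration: push the red channel left and the
    blue channel right by `shift` pixels; green stays put.
    `pixels` is a flat, row-major list of 8-bit (r, g, b) tuples."""
    out = []
    for i, (_, g, _) in enumerate(pixels):
        row, col = divmod(i, width)
        # Clamp sample positions to the row so the edges just smear.
        r_col = min(max(col + shift, 0), width - 1)
        b_col = min(max(col - shift, 0), width - 1)
        r = pixels[row * width + r_col][0]
        b = pixels[row * width + b_col][2]
        out.append((r, g, b))
    return out

# A single 3-pixel row, just to show the channels sliding apart:
row = [(0, 1, 2), (10, 11, 12), (20, 21, 22)]
shifted = chromatic_aberration(row, width=3, shift=1)
```

Real lens aberration gets stronger toward the edges, so a fancier version would scale `shift` with distance from the image center – which is roughly what the layered Photoshop approach fakes.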

Photoshop wise, I’ve also done some graphic design doodles:

Fun fact: I reused that crimson logo design on the tan spherical ship that I posted a little higher up. The other designs might pop up here and there in other environments in future work, stuff like this goes in a resource library for later use.

Besides the knife, I’ve been doing some brisk modeling work. Nothing has made it to game engine yet, but here are a few beauty shots/wip shots of some computer hardware and other stuff:

A guided missile that’s been on the back burner for a while:

Early WIPs above and a second round dDo pass below.

I’ve been using this model mostly to test custom dynamasks on dDo and new features as updates roll out, so I’m not in a rush to actually finish this. I was thinking of pushing this model to Star Citizen (cryengine) if/when they allow 3rd party mod support. That should happen around the release of HL3, so the model should be good and ready by then.

An Abrams smoke launcher:

This model was to be commissioned by someone I’ve known for a while, but there was a frankly surprising (and borderline offensive) issue with the amount it was going to cost. This was as far as I got before it hit a dead end.

Computer Hardware:

I’ve been having some fun making some pseudo professional/gamer hybrid computer hardware. I’ve been quite happy with the results so far, take a look:

I’m doing a multi-texture system on these to squeeze out all the graphical fidelity I can spare. The monitor is using 2048×2048 maps; the screen is split into an ‘anti-glare’ layer that’s using part of the body texture, while the actual display is using a 2048×2048 map that’s mapped to 1920×1080 – I mean, why not, right? The split means that even if I’m not using textures, I can still reserve it as a UI plane for real time outputs and stuff. The keys on the keyboard are using shared textures and translucent planes for the actual lettering. Besides the self illumination and color variation properties, it means that I have more room on the texture for things that are not keys. Instead of ~100 separately mapped keyheads, I just have the uniquely sized ones and one for the 90 or so standard ones, as well as this easily readable and modifiable texture sheet:

Alright, that pretty much wraps up what I’ve done for the latter part of 2014. I’ll be doing a more formal recap of the year in January that goes over what I accomplished in ’14 as well as long term goals for 2015. See you then!

The Making of Synthesis – Models

(Published Sat, 18 Oct 2014 – https://internethatemachinae.wordpress.com/2014/10/18/the-making-of-synthesis-models/)

Alright, part two! In this post, I’ll be talking about the various model and texture projects spawned by the cancelled SFM short Synthesis. Just a heads up: I’ll be digging deep into the new Quixel tools and dDo workflow; this project was a case study in producing results in as little time as possible.

Before I really dig in, I just want to underline a few restrictions that drove every design decision. This was designed to be a Saxxy entry, and making a film short for that competition has a few massive asterisks attached to it when it comes to the content used – namely, content from non-valve games is a non-starter. There are exceptions to the rule, but existing content you might have access to is on an unwritten whitelist; assume everything is banned unless explicitly stated as allowed or you made it yourself. I suppose valve made this rule for simple and obvious legal reasons, and to encourage the creation of fresh, original content. What it has done in reality is stifle entries, locking them mostly to TF2, with occasional other Valve IP works. Our goal was to sidestep this rule by mostly making new content from scratch and reusing what we could when it wouldn’t be overly obvious.

The second thing to consider was that we were on a very tight deadline. Most of the design decisions I made as a content creator were to make the best looking assets I could in the shortest amount of time possible. When the decision was made to give up on the saxxy deadline (and eventually cancel outright), the assets were about 90% complete. With that much done, from my perspective, there was no reason not to just finish my part of it. Sadly, I’m not much of an animator, so the film wasn’t going to happen, but what I am capable of doing is making completed assets for source, so once I had the go-ahead from my partner, I released all of the finished content.

Character design – Flight Lieutenant Keller

Nothing encapsulates all of my design and project goals quite like the development of the main character. I’m not much of a character modeler (actually, I’m not a character modeler at all; it’s on my list of things to learn). I am, however, quite adept at putting together existing models and assets and recombining them into something new. In the garrysmod modscene, this is referred to as ‘hacking’ (as in hacking bits and pieces off of things to make new things). The core assets for Keller were sourced, but all of the textures were modified and in some cases completely reworked. Let’s go through them.

The head mesh and all of its flexes started as Zoey, from Left 4 Dead. True to form for anyone fighting a tight deadline, I did not rule out previous work of mine, and in 2012, I had co-released a model that fit our billing to the letter. Zoey has very nice facial flexes and mesh; there was very little I had to do to make the head work. The skin I did in 2012 had the extra bonus of substantially changing her appearance. Below is a shot from the 2012 release of the AVP Synthetic Pilot (coincidence???)

The hair hails from Fuse. It was not my first choice – I’ve seen and used much better options sourced from games in the past, but once again, Saxxy rules, and we didn’t want to push our luck. I don’t currently have the skillset to model hair, nor did I have time to learn it for this project. We ended up choosing two hairstyles that worked.

When we made the decision to take more time and there was still a glimmer of hope for completion, the first thing I did was to spend extra time with the meshes and textures. I scrapped the old hair texture completely and hand drew the new one, and I supplemented that with additional geometry, luckily 3ds max’s ‘flow connect’ tool was quite useful here.

Hand drawn hair. Inset, source texture I was provided.

Source’s handling of transparency has never been ideal. As with all the projects I have that use hair, I used the trick of mesh duplication to get the best effect. This allows for two co-existing meshes with duplicate shaders save for one setting – one gets $alphatest for the correct z-pass sorting, the second gets $translucent to produce the softer edges of a greyscale alpha. The end result I came up with is still not the best hair ever, but at least it’s respectable. Also I added jigglebones for the larger hairstyle, because why not.
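As a sketch of what that duplication trick looks like on the material side, here are two VMTs that are identical except for the transparency flag. `$alphatest` and `$translucent` are the real source shader parameters being described; the file names and texture paths are made up for illustration:

```
// hair_sharp.vmt - assigned to the first mesh copy:
// $alphatest gives hard-edged alpha that z-sorts correctly.
"VertexLitGeneric"
{
    "$basetexture" "models/keller/hair"    // hypothetical path
    "$bumpmap"     "models/keller/hair_n"
    "$alphatest"   "1"
}

// hair_soft.vmt - assigned to the duplicate mesh:
// $translucent gives the soft greyscale-alpha edges, sorting quirks and all.
"VertexLitGeneric"
{
    "$basetexture" "models/keller/hair"
    "$bumpmap"     "models/keller/hair_n"
    "$translucent" "1"
}
```

Layered together, the alphatested copy anchors the depth sorting while the translucent copy supplies the soft fringe.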

Here’s a silly stress test I made to see how the jigglebones would react in a running animation. I may or may not have gotten carried away.

The body is from one of my partner’s previous projects. It was custom made for that film, but the backing company that funded it seemed to have little interest in tight asset control, and it fit our needs perfectly. He did add the condition that I retexture it; I had no problem with this request. The author of the model did a very nice job creating the mesh and rigging – the unwrap was a little wasteful, using three 4096×4096 textures, but there was little stretching and the bakes were solid. The same could not be said for the textures – they had too much high-frequency detail and not enough large shape definition. Although at 4096×3 you could texture every fiber of a jacket, the overall design at a glance was a formless grey mass.

A direct comparison between the initial textures and my spin on them.

With the source AO and normalmap bakes in hand, I remasked everything and fed the texture into dDo. After some work and custom smart layers, I ended up with a ‘mid point’ I liked.

Quixel suite does not directly support the source engine. It is a dream for UE4 – they have built-in profiles that port directly over, with accurate calibrations that’ll look very similar to the 3do previewer. If you want to make things for source, you need to create custom maps and give up on 3do giving you an accurate image. This was the first asset I really spent time converting to source, and the workflow was less than ideal. By the end of the project, I had something far more efficient worked out; I’ll save it for the corridor section.

I did do some modifications of the mesh for the shuttle scene, I decided to just have the model follow the Alien(s) style of synthetic, so I pulled up some reference images from Alien and Aliens and blew an arm off. I just used some basic shapes that looked appropriate for robo-guts and baked out normals and AO. I unwrapped it on the texture where the arm and hand were, so I skipped quixel and just did the texturing by hand in one afternoon.

Revisiting Old Projects

I made some small changes to the shuttle for this project, namely creating alternate skingroups for flickering holograms, appropriate decal sets, and modeling out a blow out panel. Nothing much more to say, just some simple contextual adjustments to fit the needs of the film.

The VR headset was an addon for a previously made and released model. I actually made (modeled, unwrapped, and baked) the main mesh for the VR portion months ago, when I first thought of the idea for the script. It sat, untextured, until the project became a reality. I took it as a challenge to match dDo output to my previous hand painted work. I’ll dig into the dDo workflow in the next section, but I think it turned out relatively seamless. I also propagated the neat little feature that the object can be colored at run time. Not exactly needed for the project, but I think someone might come up with some use for it when it’s out in the wild.

Yes, I know it looks like an Oculus Rift Dev kit, seeing as it was obvious inspiration for the design, but please tell me that it reminds you of one next time you talk to me.

Modeling the Corridor

I’m going to defer the bulk of the design discussion to the previous post where I talked about the influences of the corridor, but I thought I could expand a little on how I generated the textures, and how I pushed a lot of the grunt work to dDo.

Modeling wise, I used a few simple tricks to make things appear more seamless – namely, I tried to break up the seam points where one module met another with girders. This turned out to be a good decision – as I discovered, SFM doesn’t allow precise rotation, so, if you look closely at the scene, you’ll see some of the girder connection points don’t quite line up. Words cannot describe how much this annoys me, but I agreed to just let it go for the sake of completing things. When I unwrapped the model and prepared it for AO baking, I encountered another issue – all of the textured surfaces are concave. Simply putting a single skylight in the scene would yield bad results, so I ended up using three omni-lights with two lighting passes and combined the shadow maps in photoshop. To get per-object AO (and correct normals) for the scene, I ended up exploding everything out and baking. It took more than one try to get something that looked okay.
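Combining several baked shadow/AO passes usually amounts to per-pixel multiplication (which is what a stack of ‘Multiply’ layers in photoshop does). A quick stand-in sketch, with plain Python lists standing in for greyscale maps:

```python
def combine_ao(passes):
    """Multiply several AO/shadow passes together, per pixel. Multiplying
    only ever darkens, so each light's occlusion survives into the result."""
    out = [1.0] * len(passes[0])
    for p in passes:
        out = [a * b for a, b in zip(out, p)]
    return out

# Three hypothetical omni-light shadow passes over the same unwrap,
# greyscale values in 0..1 (1.0 = fully lit).
light_a = [1.0, 0.8, 0.6, 1.0]
light_b = [0.9, 1.0, 0.7, 1.0]
light_c = [1.0, 1.0, 0.5, 0.9]

ao = combine_ao([light_a, light_b, light_c])
```

A pixel only stays bright if every light agrees it is unoccluded, which is why multiple light angles beat a single skylight on concave surfaces.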

The coffin above illustrates simple color material separation on the pre-turbosmooth high poly to split different surfaces. I used the baked diffuse map with additional color layers in photoshop to feed the correct color-ID map for dDo. Another thing you may notice in the low poly is the lack of collapsed edges in the final export – this was purely a time saving measure, using the same mesh for unwrapping and as a base for the high poly. Normally, I model the base shape as all quads, keep my loops as intact as possible, and duplicate it twice – object_low and object_high. From there, the high poly gets control loops and additional geometry for baking, and the low poly gets collapsed edges, partial triangulation, n-gon removal, and a UV map. Here I used the uvmapped ‘mid’ pass as my low pass.

This was the first object that built the ground rules for my current dDo integration pipeline. Let’s go over it.

1. Map Cleanup

As discussed earlier, I generate my color-ID map with a combination of diffuse baking and manual editing. Depending on complexity, I may do all of it in the bake, or all of it in photoshop, with the decision maker usually being map splits on non-UV seams. The more precise the selection needs to be, the easier it is to bake.

AO is straightforward – I usually just have to correct little cage-miss errors, plus possibly a despeckle/noise removal/blur pass if needed. Normals are also straightforward: I do despeckle/cage-miss cleanup if needed. Source uses (-Y) normals (a flipped green channel), and dDo takes both if you specify, so now would be the time to make them game ready – here or at the last step. I find that dDo sometimes doesn’t quite take the hint that the normals need to be inverted, so I generally flip them myself to make it happy.
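The (-Y) flip itself is just an inversion of the green channel. A tiny stand-in sketch in Python (real work would go through an image editor or an imaging library):

```python
def flip_green(pixels):
    """Convert +Y ('OpenGL-style') normal map pixels to -Y by inverting
    the green channel; pixels are 8-bit (r, g, b) tuples."""
    return [(r, 255 - g, b) for (r, g, b) in pixels]

# Flipping twice is an exact round trip, which is why it's safe to do
# this late in the pipeline without degrading the bake.
sample = [(128, 200, 255), (10, 0, 250)]
flipped = flip_green(sample)
```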

2. Modifications

Before I import the maps into dDo, I’ll do a little initial texturing, mostly of shapes and patterns that need to exist on the normalmap, as well as first pass decal placement (for reference). I generally call this a ‘grey pass’ or a ‘height pass’ or ‘that thing I do before I feed it to the beast.’ The important thing to remember here is that all we’re doing is defining additional ‘geometry’ that would otherwise be on the model. This could be anything from surface inlays or panels to simple engraving to grates or grip patterns – things that are easy to texture on but hard to model. You don’t want to add cloth or brick height – or any kind of material definition – at this stage, though.

Most professional modelers do small surface details like this in zbrush or max, but I do them in photoshop, mostly because I’m pretty good at texturing on additional detail this way, and sometimes I want those areas easily masked or otherwise available for selection during the later texture phases. The main drawback of doing it in photoshop is that you don’t get a seamless transition of a contiguous shape across UV seams, but with enough practice you can learn to work around that limitation, especially if your unwrap is decent.

Once the heightmap is completed, I’ll throw it in Crazybump (I’m still using it, although nDo is a viable alternative here) and get my overlay normals. The trick to combining them with the baked normals is that the overlay normal you just created has to have a neutral blue channel. Select just the blue channel and run two brightness/contrast passes on it: switch each to legacy, and darken first by 100 and again by 27. If you did it right, what was previously white in the blue channel will now sit at neutral grey (127–128). Go back to your RGB view and you should see your normals looking a little like the ‘Normal Overlay’ in the next image.
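The two legacy Brightness passes amount to a flat subtraction of 127 from the blue channel. Here’s a small numpy sketch of the same operation (the function name and the flat test pixel are mine, purely for illustration):

```python
import numpy as np

def neutralize_blue(normal_rgb: np.ndarray) -> np.ndarray:
    """Replicate the two legacy Brightness passes (-100, then -27) as a
    single subtraction of 127 on the blue channel, clamped to [0, 255]."""
    out = normal_rgb.astype(np.int16)
    out[..., 2] = np.clip(out[..., 2] - 127, 0, 255)
    return out.astype(np.uint8)

# A flat region of an overlay normal is (128, 128, 255); neutralizing
# drops the blue channel to ~128 grey, ready for hard-light blending.
flat = np.full((4, 4, 3), (128, 128, 255), dtype=np.uint8)
print(neutralize_blue(flat)[0, 0])  # -> [128 128 128]
```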

From here, all you have to do is set the blend mode to hard light and you won’t lose any data in the blue channel – your normals will retain the depth they should have. Whatever you did (or didn’t do) to the green channel on your baked normals, make sure you do the same here.
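If you want to sanity-check why a neutral-grey blue channel survives hard light, the standard per-channel formula shows it directly (a sketch; I’m assuming Photoshop’s usual per-channel hard-light math):

```python
def hard_light(base: float, blend: float) -> float:
    """Per-channel hard light on [0, 255]: multiply below mid-grey,
    screen above it (the standard formula)."""
    if blend <= 127.5:
        return 2 * base * blend / 255
    return 255 - 2 * (255 - base) * (255 - blend) / 255

# With the overlay's blue neutralized to ~128 grey, hard light hands back
# (almost exactly) the baked normal's blue value - no depth is lost:
print(round(hard_light(200, 128)))  # -> 200
```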

AO is another thing you can modify here. I propagate a soft drop shadow or inner shadow to the appropriate layers in the AO; it helps dDo with cavity wear and inlay dirt buildup.

3. dDo pass

Now that I’ve done everything that I need to do with the shape of the model, it’s time to define the materials. dDo does a great job of this, for the most part. I’m not going to make this a step-by-step tutorial on how to use dDo since those tutorials exist. For source, calibration for a specific profile isn’t really all that important. What is important is choosing aging and texture overlays that match the material you want. Dig into the dynamask editor and make something unique. I also make sure that regardless of how ‘new’ the object is, I create a custom dirt layer, starting from the ‘sandy’ dynamic material. I can adjust it by hand in later steps. Since source is ‘last gen,’ I also generate a base definition layer from the ‘stylized’ column.

4. Manual Submap Generation

Once you’re generally happy with what you got from dDo, it’s time to make the masks needed for source. You have two options here – generate them in dDo as custom masks, or make them yourself in photoshop. I tried both for this project, but I found the latter far easier given my existing workflow, plus it allows much more latitude if you want to actually, you know, texture it. If you go the dDo route, you’ll still need to crack open each psd it generates and combine some per channel anyway, so it’s not like using it fully is a huge time saver, and the 3Do previewer really loses its value as an accurate preview at this point.

Think of dDo as a step in a journey.

I’m a layer-comp guy, so I like everything in one easy-to-use psd, but that’s just me. Once again, I could spend 5,000 words breaking down my take on source’s texture pipeline (and spoiler, I have another post that I’ve been writing for a long while that covers exactly that), so I won’t be talking about it here.

5. Import to Source

With the maps complete, all that’s left is to combine them correctly, write up your vmts, and bring it all into source. Once again, doing the submaps on your own is faster for small tweaks when you discover what you thought would look good in photoshop doesn’t actually look good ingame. Don’t forget to flip your normals if you did that in the first step! Remember, on a source friendly normal, an object poking ‘out’ should have a black outline on top and white outline on bottom on your green channel.
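The green-channel flip itself is a one-line invert. A small numpy sketch (the function and sample pixel are hypothetical):

```python
import numpy as np

def flip_green(normal_rgb: np.ndarray) -> np.ndarray:
    """Convert between +Y and -Y normal map conventions by inverting the
    green channel (source expects -Y / flipped-green normals, as above)."""
    out = normal_rgb.copy()
    out[..., 1] = 255 - out[..., 1]
    return out

# A pixel with a bright green value (lit from below in +Y convention)
# inverts to a dark one - the highlight moves to the other edge:
px = np.array([[[128, 200, 255]]], dtype=np.uint8)
print(flip_green(px)[0, 0])  # -> [128  55 255]
```

The operation is its own inverse, so running it twice gets you back to the original map.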

Modeling Norad

With my workflow locked down and an ever-approaching deadline, the visual design for Norad was equal parts creativity and pragmatism. I’ve already talked about the design side, but I wanted to share a little about the models.

Norad breaks down into 7 models. Thanks in part to dDo, I was able to replicate material and surface properties with relative consistency across all of them.

The core of the design for the more stylized objects was a spline workflow. Instead of starting out with a box or another primitive, I started with a max line and created simple geometry from it. I tweaked the shape and flow before collapsing to an editable poly, and once they were done, for the girders especially, I used edge and face bridging extensively.

The one object that defines the workflow and styling of the Norad environment as a whole is the chair. Built from three primitives and two splines, it is simple but stylish, and when you take a close look at things like the connector between the legs and seat, you can see I made some time-saving decisions for areas I knew would not be heavily scrutinized in the final product. It also has a full set of LODs; since it was spline-based, reducing the polycount was as simple as removing existing loops.

dDo mid-pass

How-To: I made the fitted leather cushions by selecting appropriate faces from the back piece, detaching them as a new object, and doing a slight inlay and a dramatic bevel. I then deleted the inlay geometry. That gave me square edges, so from there I applied a single (or double?) iteration of turbosmooth, which gave it a cushion-y look. I collapsed that back to editable poly, removed the unneeded edge loops, and reattached it to the base model. From there, I pushed it down into the wood back piece just enough to cover the seams and treated it the same as the rest of the low-poly geometry.

The real meat of the Norad modeling process was the desk rows, however. The script called for a wide open space filled with people doing VR stuff. Easily said, hard to execute. To fulfill this, I could have simply duplicated the desks/chairs and filled each one with a different character. However, this would have meant 250+ ragdolls and 500 single bone props, plus 250 VR visors. Not ideal. SFM may not be a real time renderer, but ideally, you still want a decent framerate in the preview, and things like that add hours to render time.

I created a second LOD for the desk and chair, cutting polygons by about half. I also reduced the polycount for the character, culling entire sections I knew would never be seen.

In all, there are around 250 bones in the 17-desk variant, but given the idea was to save as many resources as possible, I decided to make a model for each row – the largest hits the bone limit, and each one after is less resource-intensive. I crunched the numbers and there was a substantial savings in both bones (and thus bone animation data) and polygons. For the largest case of 17 desks in one row, the optimized triangle count is 93,621 vs an unoptimized 267,376 – a reduction to 35% of normal. For bones, it’s 242 vs 1,224 – about 20% of normal. Propagate that across all 272 desks needed to fill the scene and the results are quite dramatic: roughly 4.28 million polygons compress to 1.5 million. This doesn’t even account for overhead like vertex animation of facial flexes or the calculation of two eye shaders per model. Bottom line, it was worth doing. Even if time were infinite for the project, the easier-to-navigate animation set viewer is worth the trouble.
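The quoted savings check out with quick arithmetic; a throwaway sketch using the per-row numbers above:

```python
# Per-row figures quoted above for the largest (17-desk) row.
tris_opt, tris_raw = 93_621, 267_376
bones_opt, bones_raw = 242, 1_224

print(f"triangles: {tris_opt / tris_raw:.0%} of normal")    # -> 35%
print(f"bones:     {bones_opt / bones_raw:.1%} of normal")  # -> 19.8%

# Scale up to the 272 desks needed to fill the scene (16 rows' worth of
# the 17-desk figures).
desks = 272
print(f"{desks * tris_raw / 17 / 1e6:.2f}M -> "
      f"{desks * tris_opt / 17 / 1e6:.2f}M triangles")  # -> 4.28M -> 1.50M
```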

The Satellite and Story Holograms

One of the key assets for the project ended up being a satellite that I had modeled as a favor to a friend a week before we decided to do the film.

It was a quick model that utilized a lot of duplication to create the final shape. At the time, dDo wasn’t working very well, so I ended up hand-painting it and giving it a second pass after I got the Quixel suite working again. I also made a holographic version, similar to the treatment I gave the shuttle; I’m glad we got to use it.

dDo redux v2

Speaking of holograms, I decided to make a small number of story-specific ones for this project. Besides the floating briefing set pictured below, I also made a few for Norad.

Freakin’ adorable. I animated it ticking from 6:59 to 7:00 via skingroups and texture frame animation for the colon.

The Norad screens turned out pretty cool. I’ve wanted to do a galaxy map that hints at the deeper story behind the environment for quite a while. A flat texture wasn’t my first choice, and I may end up redoing it with a more 3D one in the future, but it works.

I also convinced my partner to use a hologram for the opening title. Here’s a shot of the initial texture I made up for the ‘pitch.’

Closing Thoughts

This project pushed my ability to produce whole scenes. While no one thing was particularly hard to make, the volume of content I made in the time I had really makes me proud. It is unfortunate that we were not able to finish in time for the 2014 Saxxy awards, but seeing as it was another year where TF2 was the only category that could win (again), I’m not as unhappy about it as I was before the winners were announced.

Putting so much effort into a project that dies can be soul crushing. I’m not happy that all of this highly contextual art will never realistically see the purpose for which it was made, but I take solace in the fact that I was able to use this as a way to test my limits and release the content to the world for free use anyway.

Thanks for reading. I hope something here has been worth the time it took for me to write it, and it was at least a little educational.

The Making of Synthesis – Environments
Sat, 18 Oct 2014

With our Saxxy 2014 entry canceled and a solid month’s worth of source asset development done for a cancelled project, I thought I’d still release what was made. These two posts chronicle my workflow and touch on some of the things I made, since it’s still very solid work that leverages SFM and source in ways I don’t see many people in the community doing. I wanted to do some workflow breakdowns for how I achieved a very specific look for the short, in hopes it’ll help others improve their craft. This post will focus on the environmental design.

The most important (and subtle) part of the environments was an additional push to get reflections working. This post will go into detail on how that was achieved in SFM, the branch of source known for being especially tricky to author assets for (and that’s saying something, considering source as a whole has a reputation for being a struggle to work with). I won’t go too deep into the modeling side of things in this post; instead, this will be targeted towards the mapping side.

SFM’s Hammer, Cubemaps, and YOU

So, if you’ve used Source Filmmaker (SFM) for any length of time, you will have encountered an issue with a map from another branch of source. It will A) not load, B) not have HDR lighting, or C) have no or broken cubemaps.
A and B can be fixed with a simple decompile, fix, and recompile process. C is far more insidious.

That effect is actually a badly broken cubemap reflection. This map is rendered essentially unusable with specular reflections.

I think this is a cubemap issue; I still haven’t figured out exactly what caused it. If you’ve seen this before and can fix it, send an electronic-message to this webzone.

When you get huge flashing colors on reflective surfaces, or surfaces that should be reflective are completely flat, your map doesn’t have cubemaps, or the ones it has are broken. If you ask people on forums how to fix them, they’ll tell you to set mat_specular to 0 and live without them. I’ve even seen people evangelize removing them from shots entirely. This is a travesty – cubemaps are an incredibly powerful (if old) system for improving overall material definition on metallic and smooth reflective surfaces. If you know how to leverage them correctly and accept the limitations they have in source, you can produce some impressive results. Speaking of limitations, here are a few things to keep in mind:
– On models, cubemaps are either shared with phong masking or take up the alpha slot of the diffuse, since these are the only two options for the specular reflection mask in vertexlitgeneric. A solid material definition means you shouldn’t use the first method, and using the second disables a lot of cool features that depend on the diffuse alpha (translucency and dynamic tinting come to mind).
– On environment textures, cubemaps are the only advanced shader technique you get – there is no phong on world geometry, even in 2014. According to my own tests and observations, a masked pixel on the world geometry shader will be about 35% as strong as one on the model shader, so the masks need rebalancing by roughly a third to get a similar level of reflection on both.
– Specular tinting is global; there is no per-pixel way to tint the spec mask in source. This is possibly one of the biggest caveats of the generic source shaders.
– Cubemaps are placed by the mapper and are not parallax-corrected. If you look hard enough at a specular reflection on a model or surface, the offset between where the cubemap was generated and where the object sits can be jarring if poorly placed. Furthermore, switching from one closest cubemap to the next causes a form of texture popping that becomes very obvious once you introduce motion. This problem is further magnified in SFM. There is no solid documentation on it (of course), but I have a hunch that the cubemap applied to all surfaces (map and model) is the one closest to the camera, not to each model.
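To make the first caveat concrete, here’s roughly what the diffuse-alpha route looks like in a minimal vertexlitgeneric vmt (the texture paths and tint value are made up for illustration):

```
VertexLitGeneric
{
	// Alpha channel of the diffuse doubles as the specular reflection mask.
	$basetexture "models/props/my_prop_diffuse"
	$bumpmap "models/props/my_prop_normal"

	// "env_cubemap" resolves to the nearest env_cubemap when buildcubemaps runs.
	$envmap "env_cubemap"
	$basealphaenvmapmask 1

	// Tint is per-material and global - there is no per-pixel tinting.
	$envmaptint "[0.8 0.8 0.8]"
}
```

Note that committing the diffuse alpha to the envmap mask this way is exactly what locks out the alpha-based translucency and tinting tricks mentioned above.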

That last caveat about cubemap popping can produce some very nasty results – imagine you have a cave environment that opens into a small clearing, and you’ve placed a cubemap entity (the point from which the map is rendered) both inside where it is dark and outside where it is bright. In a standard source engine game, the interior of the cave – the rocks, the puddles of water, the static reflective surfaces – has its cubemaps pre-defined at map compile. The player, who, for the purposes of this exercise, is holding a reflective sphere, gets a nasty texture pop as he moves from the mouth of the cave to the clearing, but at least it makes sense: reflections are chosen by whichever cubemap he is closer to. In Source Filmmaker, the cubemap is global to the scene, so if you place your camera outside in the clearing and peer in, the actor with the reflective sphere, the cave walls, the puddles – all of that – uses the outdoor cubemap for reflection. This looks horrible, and as you move the camera inward, all of the textures ‘pop’ to the proper indoor one.
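The cave scenario boils down to which position drives the nearest-cubemap lookup. A toy Python sketch of the two selection rules (coordinates and names are invented, and this is a deliberate simplification of undocumented behavior):

```python
from math import dist

# Hypothetical env_cubemap origins for the cave scenario above.
cubemaps = {"cave_interior": (0, 0, 0), "clearing": (512, 0, 0)}

def nearest_cubemap(pos):
    """Pick the env_cubemap closest to a position - roughly how faces and
    models get their cubemap assigned in a normal source game."""
    return min(cubemaps, key=lambda name: dist(cubemaps[name], pos))

sphere, camera = (64, 0, 0), (480, 0, 0)
# Per-object lookup (regular engine): the sphere deep in the cave reflects
# the interior...
print(nearest_cubemap(sphere))  # -> cave_interior
# ...but a per-camera lookup (SFM's apparent behavior) makes everything,
# sphere included, reflect the clearing:
print(nearest_cubemap(camera))  # -> clearing
```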

The way SFM handles cubemaps is questionable. There have been scenarios where this exact event plays out as described; other times I have seen the global cubemap get ‘stuck’ on whichever one the camera was closest to when the project was opened. Sometimes it works as it does in a game. Long story short, no matter how you cut it, cubemaps in SFM are broken.

A video illustration of this effect can be seen in the first 10 seconds of this video – pay attention to the pipes along the left edge. As the camera moves from a near-black ‘space’ cubemap to a brighter interior one, all of the static surfaces pop.

But fear not! Broken as they may be, if you’re smart about how you map, where you place them, and how well you can cheat the system, they can still produce stunning results.

When mapping for SFM, here are a few things to consider:
- There is no gameplay and no moving objects. Remove anything that moves; nothing is sacred. Seal off doorways with static props or brushes, or leave them completely open. Rule of thumb for keeping props: if it might be animated at some point, strip it out for replacement as an element in SFM. The only props you want to keep are static ones that you want baked into the lighting and that will never move. prop_dynamic and prop_physics have no place in an SFM map, nor do func_trigger, func_door, func_rotating, or anything that depends on player action. (You still need an info_player_spawn, though.)
- With lighting, less is more. This actually solves a lot of the cubemap issues I’ve discussed – the less dramatic the map lighting, the less dramatic the cubemap ‘pop.’ In my experience, you want just enough lighting for a very low-level ambient environment: basic shapes are visible, but it is clearly dark. Let SFM lighting shine; SFM’s dynamic spotlights can recreate everything from laser beams to artificial sunlight. You can also use hammer lighting for low-level (brightness <10) point lights on objects that glow. Remember, lighting in hammer does not use dynamic shadows, so objects illuminated with map lights generally look washed out.

You don’t have to make the map pitch black, but let SFM do the heavy lifting

- Place lots of env_cubemaps, and maximize their render size (double-click the env_cubemap entity and select the largest available render size). Valve’s recommendation is ‘less is more.’ Mine is ‘more is more.’ You don’t have to worry about performance impact in a non-real-time render. In an SFM map, how the environment looks is your only real concern, and I don’t think I have to tell you (again) that cubemaps play a huge role in that.

What are you worried about? File size?

- Don’t bother with sound. Sadly, SFM does not take advantage of soundscapes or leverage any type of audio from environments, unless you map for TF2 and do only ‘ingame’ recording (and if you are reading this post in earnest, you probably don’t).

Once you have your map done, you can run your standard HDR compile. Nothing unique here, except you’ll want to use the version of hammer found in SourceFilmmaker/bin. (You can use SDK 13 hammer too; I just find SFM’s hammer easier to work with when it comes to custom mods/content you have in usermod. If you’re mapping for garrysmod as well, it might be prudent to go the extra steps for SDK 13 hammer.) Once it is compiled, the .bsp will show up in your usermod/maps folder (or the maps folder of your currently active mod). From here, you’ll want to copy it (and all the models, materials, and particles it needs) to your Alien Swarm directory. Yes, Alien Swarm.

Why Alien Swarm (ASW)? SFM’s BSP version is the same as L4D2, ASW, and Portal 2 (for those of you keeping track at home, this is why SFM maps don’t work in gmod). I find Alien Swarm the easiest of the three to throw a map into because A) I don’t play it, and I don’t mind if its directory gets bloated/wiped, B) loading a map is fast, as the AI overhead for a non-Swarm map is low, and it will actually play – unlike a raw, AI-less map in L4D, and C) it’s free, for people who don’t own the other two games for some reason.
Once your map is in the ASW maps folder, run the game, open the console (~), type map my_mapname, select any random character, and hit start mission. You may or may not see your character, as it is a top-down shooter, but all you need to do is open the console and run buildcubemaps. Once you see all your cubemaps flash before your eyes, the .bsp in swarm/maps is now larger than when you put it in, and you have a map that’ll work great in SFM. (For those of you that are mapping-savvy: you only need to compile HDR cubemaps. SFM does not use LDR anything, ever.)
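The in-game portion boils down to two console commands:

```
map my_mapname      // load the copied bsp
buildcubemaps       // render HDR cubemaps and pack them into the bsp
```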
A few caveats about using ASW: it has no shared dependencies with HL2 – none of the default models or textures most mappers take for granted. You may want to extract the HL2 gcf’s and put the content into the ASW directory. Second, the laser beam shooting from the standard marine will bake into your cubemaps, as will bullet holes, blast decals, and effects. Run your character into a corner or wall where the laser doesn’t project into the environment and you’ll be fine.

Taking it further
Alright, now that you have nice, high-quality cubemaps in your map, how do you take it to the next level? What if you just want to impress your friends with source wizardry? Well, I can shine some light on how to do that too.

When you compile your map’s cubemaps, the bsp will balloon in size. If you use VIDE, you can select specific files and it’ll create a zip you can extract – this will include unopenable vtfs following a cXPOS_YPOS_ZPOS.hdr.vtf naming convention, as well as specific vmts that override the defaults and tell the map which cubemaps to use. (Remember in the last section when I talked about how the engine statically assigns reflections to stationary brushes? Yep, bingo: unique vmts.) The vtfs are the textures that correspond to the XYZ of your env_cubemap entities. Here’s the cool part: if you open a .bsp with cubemaps in it, you can rename the cubemaps you extracted from map A to match the names of those in map B and manually overwrite them. When using VIDE, make sure the names are correct and import the textures from the file browser. If you don’t select a relative path, you will be prompted to type one in; make sure the path matches the existing ones exactly. If you did it right, you will get an overwrite confirmation and the .vtf files will turn bright blue until you commit your changes to a new bsp.
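Since the naming convention encodes the env_cubemap origin, matching map A’s extracted files to map B’s expected names is mechanical string work. A sketch of parsing that convention (the coordinates are invented for illustration):

```python
import re

# Packed cubemap textures follow the cXPOS_YPOS_ZPOS.hdr.vtf convention
# described above; the numbers are the env_cubemap entity's origin.
CUBEMAP_RE = re.compile(r"^c(-?\d+)_(-?\d+)_(-?\d+)\.hdr\.vtf$")

def cubemap_origin(filename: str) -> tuple:
    """Parse the env_cubemap origin out of a packed cubemap vtf name."""
    m = CUBEMAP_RE.match(filename)
    if not m:
        raise ValueError(f"not a packed cubemap vtf: {filename}")
    return tuple(int(v) for v in m.groups())

# Transplanting map A's cubemap into map B means renaming A's file to the
# name B expects, i.e. one built from B's env_cubemap origin:
print(cubemap_origin("c128_-256_64.hdr.vtf"))  # -> (128, -256, 64)
```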

These can be replaced/overwritten but not edited directly.

This works exceptionally well for ‘blank space’ maps – map a box with one cubemap at the origin. You can build your box, run buildcubemaps in SFM to get the placeholder, and replace the blank cubemap with one from another map that has substance. If you’re planning a scenebuild, then you can even build a simple version of the environment you want to make in hammer and give it simple lighting specifically for the cubemap – you get the dynamics of a scene built in SFM and the cubemaps you could otherwise only get via traditional mapping.

I created this render box that uses an unlit grey texture for the walls. The cubemap comes from one of my sci-fi maps since I mostly make sci-fi assets, but there’s nothing stopping me from taking a cubemap from almost any map and manually placing it here.

Caveat: this process may not be successful! I believe the uneditable cubemaps are locked by bsp version or vtf version – at the very least, trying to grab cubes from a newer bsp and putting them in an older one (say, an SFM/ASW bsp into an SDK13 bsp) might give you the pink and purple checkerboards of death. Luckily, VIDE saves a backup of your map for exactly these kinds of reasons, but always be ready to undo a few steps. Overwriting auto-generated content always carries risk with any program; this is a really hacky way to work around already broken systems, and you should be prepared for trouble.

Environment Breakdowns

Shuttle Interior
Extending the trickery discussed in the last section, I want to touch on a custom map I did for the interior shots of the shuttle. I made a box map with a static prop version of the shuttle and only minimal point lights (to get definition on interior objects for cubemap baking), extracted the cubemaps of the shuttle interior, recompiled the map without the shuttle, baked cubemaps again, and overwrote the interior cubes with those of the shuttle. What I ended up with is the best of both worlds: a dynamic shuttle environment we could modify in SFM, and reflectivity that makes sense for the bulk of the shots.

Hallway and Shuttle Bay
Not much to say about these; they’re both a bit of traditional mapping combined with custom props and decals. The shuttle bay was used for another project previously – I only made slight modifications to fit a door instead of a window. Given the time constraints of the project, I did everything I could to reuse assets from previous projects.

NEW BRIDGE

The hallway was the first environment made just for this project; the shot below is the first piece of ‘production media’ – made a day after the project began on the 23rd of August.

For reference, the map looks dramatically different without cubemaps or lighting – here’s a first pass of a garrysmod build:

And with cubemaps…

Cryogenic Storage

In the first draft of the script, the film would have started with the pilot getting a briefing and package in the hangar bay before walking onto the shuttle and taking off. As the script took shape, it was decided that there should be a ‘wake up sequence.’ The initial idea was to kitbash something from Portal props; then it moved to a traditional vertical standing cryo-pod she would step out of. For time’s sake, I decided to model something closer to a morgue (without the flashy hinged doors, glass, or glowing interior trappings that seem to be the cryo-pod cliche). The cylindrical, tiered approach of the environment also meant I could turn duplication up to MAXIMUM. The core design is something I made in a few minutes simply to illustrate to my partner how I thought the scene should play out:

They call this a rough cut.

The rest, from that concept to environment completion, took about a week.

I modeled and textured 1/24th of the main ring, and the interior of one ‘coffin.’ The upper rafters are a flipped and inverted version of the corridor, and the center connector is a 1/4th duplication. The only things that are particularly unique are the name plaques, of which I did 9 variants and a ‘power failure’ state.

When completed, I ended up with this:

On the technical side, the environment is two 2048×2048 textures, with a 2048×1024 texture for the center connector. The holograms are two 512×128 planes, with a third, smaller plane that uses scrolling proxies and a shard texture. The entire scene runs about 400k polygons (mostly in ladder rungs), but for a stationary scene, you only need a fraction of that.

Norad
The last act of the story was designed to be artistically different from the rest of the film in order to drive home the monumental shift in tone. I initially wanted to place it in what would have amounted to a modern office with cubicles and slight sci-fi trappings – I had considered sourcing CS:GO office assets in combination with my own work. As development progressed, we decided to shift to the idea of a large, open area.

I referred to three major sources of artistic influence: Norad’s command center, the office environment from the film Gattaca, and a google image search for modern office chairs. I modeled a desk that I found fitting (I’ll elaborate more on this film’s models in another blog post) and used it as a ruler for a simple environment. What I ended up blocking out looked a little like an iron cross from the top.

Once I had this blockout done, I then shaved down my vmf to the quarter of the environment I was actually going to model, and imported the scene into 3dsmax with wallworm’s vmf importer.

At this point, I focused on modeling the core assets in max using a 1/4 repeating system: I essentially modeled 90 degrees of the environment and duplicated the rest.

One trick of note: I decided not to model any kind of entrances or exits – nothing in the scene is really functional. I considered doing some opaque glass doors for the area under the overhang, but that simply did not fit the time constraints, nor would it have added to the scene as used. Instead, I baked my AO such that the entrances under the corner frame would receive no lighting, and used a black texture for the surface. To complete the effect, I used a black gradient overlay on the floor and rotated/stretched it to size. Normally, this is something you’d assume vrad (valve’s map light-baking tool) would take care of, but the cubemaps were so strong on the floor texture that I had to kill them with an overlay. The end result looks natural but stylized, as long as you don’t draw attention to the illusion.

I also saved some time by using Portal 2’s stock elevator model for the central hub; it fits the environment decently and fulfills the need for some way to get to the upper-deck office. The model was placed in SFM not because it needed to be animated, but because placing it in hammer would have covered the lighting origin for the tower’s wall models.

Finally, I used multiple passes to ensure the desks were appropriately spaced in the scene. I did a quick compile of the map with ‘helper brushes’ on the ground that I could place and center the desks over, then removed the helpers in the next compile pass. Since the dmx already had the desks placed on the map, the next load of SFM with the new compile had the placement perfect.

With such a heavy reliance on models for this environment, I tried hard to make source look like anything but source. Considering that it was about 10 days of work from hammer blockout to final pass, I’m happy with the result. I’m going to try and convert the environment to UE4 – it’ll be good practice.

Shots of the garrysmod ready build are also nice…

Thanks for reading! I’ll be putting out a post on the model workflow next – the story of how I efficiently placed and populated all of those desks is worth a separate post.

Postmortem – Death of a Project
Fri, 17 Oct 2014

Truth be told, I’ve had a few posts waiting in the wings, completely done, to be released (along with all the maps/models/textures and some dmx files) with a short film that was slated to be done about now.

It did not pan out.

Projects die; it happens all the time. I know more than a few people who were planning on making and submitting a short for the 2014 Saxxy awards but cancelled due to the early deadline. As a matter of fact, this is the second project this year that I was asked to make content for that was scrapped, both generating a great deal of content on my end before cancellation. The first was a request to make a large space hangar scene; here are some shots from a few months back when it was going strong:

The second was much further along and hit much closer to home. Ideally, the following posts would have been prefaced with a link to the film. In its stead, I’m writing this to give them some context.

It was to be a short film titled Synthesis, with an approximately five-minute run time. It focused on a female pilot, woken from cryo-sleep and tasked with the repair of a downed satellite, only to be attacked and killed by space pirates or rebels. The twist was that the character in space was a mere body double for a VR-like remote pilot program, where operators safely control synthetic versions of themselves – the ending would have been a cold and detached debriefing about the worthlessness of the synthetic and a clear lack of appreciation for the fact that the remote operator had just experienced the trauma of death. It would have mostly focused on UAVs/RPAs/drone warfare and PTSD through a sci-fi lens. (This ends my cold butchering of the plot – sorry, I’m a texture guy, not a writer.)

It was clearly very ambitious for a small competition for an obscure machinima tool, but my partner on the project, Adam Palmer, was ready to give it a shot as director/animator/sound mixer/post editor. I would cover the other half of the spectrum with character and environment design, a cooperative effort on lighting and shot framing, and the outline of the story. I was deeply connected to the project, and we were both very excited for the chance to push the medium. Sadly, it simply didn’t pan out.

It’s not an uncommon story. When you have a side project, regardless of how big or small it is, it falls by the wayside the moment something more important comes up. Even with nothing else to do, we were seriously pushing the realm of possibility: a five-minute short with almost all custom content and a timeline of one month. When it came down to the deadline, I barely met my end of it after kicking into turbo mode. Adam wasn’t so lucky – he got lumped with a real commission, one with the possibility of leading to more work. As much as I wasn’t happy to hear that the project had essentially died a week before the deadline, with my content finished, I can understand the need to put food on the table. This is a hobby, and the bottom line is that it simply comes second to real life. I’ve worked with a lot of people in the Source mod scene and have been on both ends of it; it simply happens.

On the plus side, Adam was okay with us releasing everything from the get-go, so perfectly usable and completed assets don’t have to get locked in a vault or relegated to a few beauty shots and a footnote on my portfolio. At the very least, if we couldn’t make a thought-provoking film, we can provide the means for others to do so.

The following posts will be a breakdown of my workflow in making the assets. I’ve had to change the wording a bit to reflect the project’s cancellation, but they’re still well worth a read (in my opinion). I’ll be putting them up once I release the content. Until then, stay tuned and watch for releases on my profile!

[Image gallery: desk_row_test2, stien_hangar_full, stien_hangar_4.]

Recent Goings On
https://internethatemachinae.wordpress.com/2014/07/29/recent-goings-on/
Tue, 29 Jul 2014

Alright, it’s been a little while since I last made a blog entry, and I aim to rectify that. It’s not that I haven’t been doing anything (truth be told, I’ve been playing a lot of Space Engineers), but I have made some stuff since my last post. Let’s talk about a bit of it!

Misc Work and Stuff
Pictured above is a mil-spec guidon. Show this to anyone who has served in any branch of the U.S. armed services and I guarantee they’ll recognize it from their basic training and/or time in garrison. I used a bit of cloth simulation plus one TurboSmooth modifier for the flag. It’s decadently high poly, but given that the target is SFM, I’m not too worried. I plan to port it over to UE4 at some point to test the cloth shader. The pole, at least, is done well – I’ve got the hang of high poly -> low poly, and the shape was fun to do. I did notice that normal map compression chewed up the surfaces a bit, though. I rigged the pieces separately for posing as well.

On the ragdoll front, my side project has been Meryl from Metal Gear Solid 4. I’ve done everything short of the modeling and texturing – I rigged her, implemented bodygroups, added jigglebones for the earring, even hacked together a dress uniform from a Source model from XCOM, not to mention FACS faceposing.
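For anyone curious what that setup looks like on the Source side, bodygroups and jigglebones are both declared in the model’s QC compile script. The fragment below is a hypothetical sketch – the bone, model, and parameter values are invented for illustration, not taken from the actual Meryl QC:

```
// Hypothetical QC fragment: all names and values are illustrative only.
$bodygroup "headwear" {
    studio "meryl_bandana"
    blank                  // "blank" allows toggling the piece off entirely
}

$jigglebone "bip_earring_L" {
    is_flexible {
        length 4           // jiggle segment length, in units
        tip_mass 10
        pitch_stiffness 50
        pitch_damping 6
        yaw_stiffness 50
        yaw_damping 6
    }
}
```

The stiffness/damping pairs are what you tune to get an earring that sways without turning into a pendulum.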

I tried another foray into getting my Max scene of the model into Maya; this time, instead of using FBX, I used the built-in ‘Send To’ functions in Max 2014/Maya 2014. It still failed to properly compile with DMX and brought great shame to my house, causing me to consider committing sudoku.

I’ll be releasing her for gmod as a ragdoll and playermodel and as a character for SFM as soon as I finish the last little fixes and feature checks.

Although not a game model, I made a high poly + render of the new UAC logo from the upcoming DOOM. It’s a nice logo; I hope I did it justice. I could unwrap a low poly and port it to a game engine, but it’d be wrong to put it in anything other than idtech6(66).

I recently made a practice model: a super-low-LOD version of that shuttle I made a while back. I decided to simplify it as much as possible and bake everything to a new set of UVs. (I know introducing a new texture just for a skybox/distance model is not really smart, but it was worth a try for practicing baking multiple maps and whatnot, as well as layered normal bakes.) There are some obvious baking errors, but I thought it turned out kinda cute.

Another practice model was this holo-platform. It has nice, pretty shapes, and the bake turned out pretty nice. Unwrapping was a ‘ton of fun’ given the zero help Max gives with unwrapping spline geometry. I haven’t had a chance to texture it yet!

nDo? dDo? 3Do? Oh my!
Speaking of textures, I picked up the new Quixel Suite. As a texture artist, I was outspoken against using it in the past – after all, it essentially replaced my position in the team-based workflow I was in – but now, as I transition to controlling the entire pipeline, I can appreciate cutting down on time if the results are good.

Here are some things I’ve made using it:

WD-40
A nice, simple shape with a wide range of materials to recreate, it was a good test platform for experimentation. Just slapping on the default colors and textures that come standard only goes half the distance; when I went in and got my hands dirty with fine details, the texture went from bland to believable.

I even converted it to Source and made an informational and totally-not-overstated comic. I also released it!

I find that dDo is really good at those textures that would otherwise be very tedious to do by hand. I personally believe my work is much more… alive(?) when I do it myself; the dDo results feel accurate but mechanical. For example, I made a recreation of my work desk a while back but never found the motivation to finish it, because I knew the texture would be bland and, with so many surfaces, a bit of a waste of time. The dDo results worked perfectly, and it fills the role of a minor background prop well.

On the other hand, there are some things that I’ve had less luck getting a good result from. I’ll have examples of that in a future update I think.

UE4 Developments
With UE4 work ramping up, I’ve been trying to wind down my work with Source. If I want to realistically do game design and content creation as a job, I simply cannot tie myself to an old engine that is barely relevant to AAA studios. In light of that, I’ve decided on a full release of my Source-compiled sci-fi assets. A few posts back, I talked about making a mod with them, but it would be smarter to pour that effort into making content for UE4 – Epic has a fledgling asset store, and it might be smart to make some modular-asset-heavy sample environments and sell them for peanuts there, so my work gets out to people I’d potentially like to work for.

I’ve continued to make content that fits the overall design those props use, but I find myself starting from scratch for things that need to conform to a grid (read: almost all of the props I’ve made to this point). In the process, I’ve been rendering out full-color ID maps in Max along with the high-poly normals and AO to speed up the pipeline between programs. I recently started going full steam on flat wall and floor textures.
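As an aside, the reason ID maps speed things up: each flat color marks a material zone on the bake, and texturing tools can turn those zones into selection masks automatically instead of you lassoing UV islands by hand. A minimal sketch of that lookup in Python/numpy – the 4x4 array and its two colors are invented purely for illustration:

```python
import numpy as np

# Hypothetical tiny "ID map": each unique RGB color marks one material zone.
id_map = np.zeros((4, 4, 3), dtype=np.uint8)
id_map[:2, :, 0] = 255   # top half flagged red   -> e.g. "painted metal"
id_map[2:, :, 2] = 255   # bottom half flagged blue -> e.g. "rubber trim"

def masks_by_color(id_map):
    """Return {rgb_tuple: boolean mask} for each unique color in the ID map."""
    flat = id_map.reshape(-1, 3)
    out = {}
    for color in np.unique(flat, axis=0):
        out[tuple(int(c) for c in color)] = np.all(id_map == color, axis=-1)
    return out

masks = masks_by_color(id_map)   # two zones -> two per-material masks
```

Each resulting mask can then drive a material layer, which is essentially what dDo does with an ID map under the hood.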

I’m shooting for a new level of modularity with these, allowing everything about color and surface properties to be decided at the last moment with an instanced material in UE4.
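To make “decided at the last moment” concrete: the base textures stay neutral grayscale, and each material instance multiplies in its own color at render time. The toy numpy function below stands in for what the UE4 material graph would do – the values and names are invented for illustration:

```python
import numpy as np

def tint_base(gray_base, tint_rgb):
    """Multiply a grayscale base (H, W, uint8) by an RGB tint, roughly like a
    material instance's color parameter would at render time."""
    gray = gray_base[..., None].astype(np.float32) / 255.0   # (H, W, 1)
    tint = np.asarray(tint_rgb, dtype=np.float32) / 255.0    # (3,)
    return (gray * tint * 255.0).astype(np.uint8)            # (H, W, 3)

# One shared mid-gray panel texture, tinted differently per instance.
base = np.full((2, 2), 128, dtype=np.uint8)
hull_a = tint_base(base, (255, 0, 0))   # this instance reads as red
hull_b = tint_base(base, (0, 255, 0))   # this one as green
```

The payoff is that one set of baked textures serves every color variant, instead of authoring a separate albedo per scheme.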

The shader still lacks definition between things like scratches and dirt overlays (I’m using a built-in function in the shader editor for these, which mostly works), and I want to give the textures more definition before I call them definitively done. Once they are, however, I should be able to usher in a new type of unified texture generation for my personal workflow.

I’m still doing some more traditional hand done texture work too. Right now, I’m mostly done with a door.

Alright – with all that, we’re just about caught up. I find myself posting little snippets to Twitter as well, so if you want something faster paced than the slow-moving glacier that is this blog, you can check me out there.

[Image gallery: guidon poses, Meryl WIPs, UAC render, cheap shuttle, holo platform, WD-40 renders and comic, office/clerk desks, flooring layers and bakes, material parameters, crew door.]

SFM Tutorial Stream
https://internethatemachinae.wordpress.com/2014/06/03/sfm-tutorial-stream/
Tue, 03 Jun 2014

I decided to do a video stream of making a poster in Source Filmmaker last night with the intention of helping people transition from Garry’s Mod to SFM.

It’s a bit outside my normal wheelhouse – I rarely narrate videos and don’t have a very good audio setup for it, so there are quite a few popped ‘p’ sounds, stretches of dead air, and a ton of ‘um’s and ‘like’s. That said, I tried to cover everything needed to make a fully lit and posed still image, and covered the finer points of exporting it properly. I also touched on how to install custom assets, though I assume you already know how to mod Gmod. If you’re just getting into the tool and have about 75 minutes to spare, I humbly suggest you take a look.

On the technical side, I decided to use Google+ Hangouts On Air over, say, Twitch or traditional screen capture. This let me set up an ‘event’ for the live stream, gave me an allowance of 8 hours, and auto-uploaded to my YouTube channel since it ran under 2. I was hoping for a little more viewer interaction, and I did have a few comments that I addressed live, but it played out fairly decently as a static recording of my workflow. I’m not sure I’d do it again of my own volition; that depends more on whether enough people request it. One thing I know for sure: next time I talk for 80 minutes straight, I’m bringing a bottle of water.

As a preview of future posts, I’ll be breaking down the development of a small pistol I modeled – I just want to have a swing at throwing it into UE4 as well as Source, and possibly animating it, before I do a post-mortem.