
A few days ago I was setting up a new resource build pipeline for our games, and wanted to integrate the build directly into Visual Studio. The goal was to include resource manifest files in the project, and have them fed to my compiler as part of the normal VC project build. Often the starting point for this is a simple command line entered as a Custom Build Event, but those are basically just dumb commands that don’t follow the project files at all. The next step up from there is configuring a Custom Build Tool on the files in question. This works well once you have it set up, but there are distinct drawbacks. Each file is configured completely separately, and there’s no way to share configuration. Adding a file to the project doesn’t do anything unless you go in and set several properties for the build tool. There has to be a better way.
Setting all of these fields up gets old real quick.

After asking around for that better way, I was pointed to Nathan Reed’s excellent write-up on custom targets and toolchains in VS. By setting up this functionality, you can configure a project to automatically recognize certain file extensions and execute a predefined build task command line on all of them, with correct incremental builds. This build customization system works great, and is absolutely worth setting up if that’s all you need! I followed those instructions and had my resource manifests all compiling nicely into the project – until I wanted to add an extra command line flag to just one file. It turns out that while the build customization targets are capable of a lot, the approach Nathan describes only takes you so far, and effectively forces you to run the same command line for all of your custom build files.
The file is now recognized as a “Resource Pack” and will build appropriately! But we have no options for how to build it and no ability to tweak the command line sent.

With some help from Nathan and a lot of futzing around with modifications of the custom build targets included with VS, I’ve managed to go one better and integrate my resource builds fully into VS, with property pages and a configurable command line. What follows is mostly just a stripped-down copy of the masm (Microsoft Macro Assembler) build target, but it should offer a good basis to work from.
Now we have property pages for our custom target, along with custom properties for the command line switches.

Command line display, including a box to insert additional options.

For this type of custom build target, there are three files you will need: .props, .xml, and .targets. We’ll look at them in that order. Generally all three files should have the same name with the appropriate extension. Each one has a slightly different purpose and expands on the previous file’s contents. I’m not going to dwell too much on the particulars of MSBuild’s elements and format, but will focus on providing listings and overviews of what’s going on.
The props file provides the basic properties that define our custom build task.
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup>
    <ResourcePackTask>
      <!--Enter Defaults Here-->
      <ComputeHashes>false</ComputeHashes>
      <SuppressJson>false</SuppressJson>
      <BuildLog>false</BuildLog>
      <ManifestFileName>$(OutDir)Resources\%(Filename).pack</ManifestFileName>
      <AdditionalOptions></AdditionalOptions>
      <CommandLineTemplate>$(KataTools)KataBuild.exe [AllOptions] [AdditionalOptions] --manifest %(FullPath) $(OutDir)Resources</CommandLineTemplate>
    </ResourcePackTask>
  </ItemDefinitionGroup>
</Project>
My task is called “ResourcePackTask”, and you’ll see that name recurring throughout the code. What I’m doing here is defining the properties that make up my ResourcePackTask and giving them default values. The properties can be anything you like; in my case they’re just names representing the command line switches I want to provide as options. These are not necessarily GUI-visible options, as that will be configured later. Just think of it as a structure with a bunch of string values inside it, which we can reference later as needed. The key component in this file is the CommandLineTemplate, which uses its own syntax for options that doesn’t seem to appear anywhere else. [AllOptions] will inject the switches configured in the GUI, and [AdditionalOptions] will add the text from the Command Line window. It’s otherwise normal MSBuild syntax and macros.
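To make the template concrete, here is roughly what it should expand to for a hypothetical file data.pack with ComputeHashes enabled and “--verbose” typed into the Additional Options box (the paths and the --verbose flag are invented for illustration):

```
KataBuild.exe --compute-hashes --verbose --manifest C:\Project\data.pack C:\Project\bin\Resources
```

[AllOptions] became --compute-hashes, [AdditionalOptions] became --verbose, and the MSBuild macros resolved to concrete paths.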
Next up is the .xml file. This file’s main role is to configure the Visual Studio GUI appropriately to reflect your customization. Note that VS is a little touchy about when it reads this file, and you may need to restart the IDE for changes to be reflected. We’ll start with this basic version that doesn’t add any property sheets:
<?xml version="1.0" encoding="utf-8"?>
<ProjectSchemaDefinitions xmlns="http://schemas.microsoft.com/build/2009/properties" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:sys="clr-namespace:System;assembly=mscorlib">
  <ItemType
    Name="ResourcePackTask"
    DisplayName="Resource Pack" />
  <ContentType
    Name="ResourcePackTask"
    DisplayName="Resource Pack"
    ItemType="ResourcePackTask" />
  <FileExtension Name=".pack" ContentType="ResourcePackTask" />
</ProjectSchemaDefinitions>
So far we’ve told the IDE that any time it sees a file with the extension “.pack”, it should automatically categorize that under “ResourcePackTask”. (I’m unsure of the difference between ContentType and ItemType and also don’t care.) This will put the necessary settings into place to run our builds, but it would also be nice to have some property sheets. They’re called “Rules” in the XML file for some reason, and the syntax is straightforward once you have a reference:
<?xml version="1.0" encoding="utf-8"?>
<!-- This file tells the VS IDE what a resource pack file is and how to categorize it -->
<ProjectSchemaDefinitions xmlns="http://schemas.microsoft.com/build/2009/properties" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:sys="clr-namespace:System;assembly=mscorlib">
  <Rule Name="ResourcePackTask"
        PageTemplate="tool"
        DisplayName="Resource Pack"
        SwitchPrefix=""
        Order="300">
    <Rule.Categories>
      <Category Name="General" DisplayName="General" />
      <Category
        Name="Command Line"
        Subtype="CommandLine">
        <Category.DisplayName>
          <sys:String>Command Line</sys:String>
        </Category.DisplayName>
      </Category>
    </Rule.Categories>
    <Rule.DataSource>
      <DataSource Persistence="ProjectFile" ItemType="ResourcePackTask" Label="" HasConfigurationCondition="true" />
    </Rule.DataSource>
    <StringProperty
      Name="Inputs"
      Category="Command Line"
      IsRequired="true">
      <StringProperty.DataSource>
        <DataSource
          Persistence="ProjectFile"
          ItemType="ResourcePackTask"
          SourceType="Item" />
      </StringProperty.DataSource>
    </StringProperty>
    <StringProperty
      Name="CommandLineTemplate"
      DisplayName="Command Line"
      Visible="False"
      IncludeInCommandLine="False" />
    <StringProperty
      Subtype="AdditionalOptions"
      Name="AdditionalOptions"
      Category="Command Line">
      <StringProperty.DisplayName>
        <sys:String>Additional Options</sys:String>
      </StringProperty.DisplayName>
      <StringProperty.Description>
        <sys:String>Additional Options</sys:String>
      </StringProperty.Description>
    </StringProperty>
    <BoolProperty Name="ComputeHashes"
      DisplayName="Compute resource hashes"
      Description="Specifies if the build should compute MurMur3 hashes of every resource file. (--compute-hashes)"
      Category="General"
      Switch="--compute-hashes">
    </BoolProperty>
    <BoolProperty Name="SuppressJson"
      DisplayName="Suppress JSON output"
      Description="Specifies if JSON diagnostic manifest output should be suppressed/disabled. (--no-json)"
      Category="General"
      Switch="--no-json">
    </BoolProperty>
    <BoolProperty Name="BuildLog"
      DisplayName="Generate build log"
      Description="Specifies if a build log file should be generated. (--build-log)"
      Category="General"
      Switch="--build-log">
    </BoolProperty>
  </Rule>
  <ItemType
    Name="ResourcePackTask"
    DisplayName="Resource Pack" />
  <ContentType
    Name="ResourcePackTask"
    DisplayName="Resource Pack"
    ItemType="ResourcePackTask" />
  <FileExtension Name=".pack" ContentType="ResourcePackTask" />
</ProjectSchemaDefinitions>
Again I don’t ask too many questions here about this thing, as it seems to like looking a certain way and I get tired of constantly reloading the IDE to see if it likes a particular variation of the format. The file configures the categories that should show in the properties pane, indicates that the properties should be saved in the project file, and then lists the actual properties to display. I’m using StringProperty and BoolProperty, but two others of interest are StringListProperty (which works like the C++ include directories property) and EnumProperty (which works like any number of multi-option settings). Here’s a sample of the latter, pulled from the MASM.xml customization:
<EnumProperty
  Name="ErrorReporting"
  Category="Advanced"
  HelpUrl="https://msdn.microsoft.com/library/default.asp?url=/library/en-us/vcmasm/html/vclrfml.asp"
  DisplayName="Error Reporting"
  Description="Reports internal assembler errors to Microsoft. (/errorReport:[method])">
  <EnumValue
    Name="0"
    DisplayName="Prompt to send report immediately (/errorReport:prompt)"
    Switch="/errorReport:prompt" />
  <EnumValue
    Name="1"
    DisplayName="Prompt to send report at the next logon (/errorReport:queue)"
    Switch="/errorReport:queue" />
  <EnumValue
    Name="2"
    DisplayName="Automatically send report (/errorReport:send)"
    Switch="/errorReport:send" />
  <EnumValue
    Name="3"
    DisplayName="Do not send report (/errorReport:none)"
    Switch="/errorReport:none" />
</EnumProperty>
All of these include a handy Switch parameter, which will eventually get pasted into our command line. At this point the IDE now knows what files we want to categorize, how to categorize them, and what UI to attach to them. The last and most complex piece of the puzzle is to tell it what to do with the files, and that’s where the .targets file comes in. I’m going to post this file in a few pieces and go over what each piece does.
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <PropertyPageSchema
      Include="$(MSBuildThisFileDirectory)$(MSBuildThisFileName).xml" />
    <AvailableItemName Include="ResourcePackTask">
      <Targets>_ResourcePackTask</Targets>
    </AvailableItemName>
  </ItemGroup>
First, we declare that we want to attach property pages to this target, point the IDE to the .xml file from before, and tell it the name of the items we want property pages for. We also give it a Target name (_ResourcePackTask) for those items, which will be referenced again later.
<UsingTask
  TaskName="ResourcePackTask"
  TaskFactory="XamlTaskFactory"
  AssemblyName="Microsoft.Build.Tasks.v4.0, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
  <Task>$(MSBuildThisFileDirectory)$(MSBuildThisFileName).xml</Task>
</UsingTask>
This is the weird part. In Nathan’s write-up, he uses a CustomBuild element to run the outside tool, but CustomBuild doesn’t have a way of getting the command line switches we set up. Instead we’re going to ask the MSBuild engine to read the provided assembly and ask its XamlTaskFactory to generate our ResourcePackTask. That XamlTaskFactory compiles a new C# Task object on the fly by reflecting our definitions from the .xml file (and maybe the .props file). This seems like an insane way to design a build system to me, but what do I know? In any case that seems to be how all of the MS tasks are implemented out of the box, and we’ll follow their lead verbatim. Let’s move on.
<Target Name="_WriteResourcePackTaskTlogs"
        Condition="'@(ResourcePackTask)' != '' and '@(SelectedFiles)' == ''">
  <ItemGroup>
    <_ResourcePackTaskReadTlog Include="^%(ResourcePackTask.FullPath);%(ResourcePackTask.AdditionalDependencies)"
                               Condition="'%(ResourcePackTask.ExcludedFromBuild)' != 'true' and '%(ResourcePackTask.ManifestFileName)' != ''"/>
    <!-- This is the important line to configure correctly for tlogs -->
    <_ResourcePackTaskWriteTlog Include="^%(ResourcePackTask.FullPath);$([MSBuild]::NormalizePath('$(OutDir)Resources', '%(ResourcePackTask.ManifestFileName)'))"
                                Condition="'%(ResourcePackTask.ExcludedFromBuild)' != 'true' and '%(ResourcePackTask.ManifestFileName)' != ''"/>
  </ItemGroup>
  <WriteLinesToFile
    Condition="'@(_ResourcePackTaskReadTlog)' != ''"
    File="$(TLogLocation)ResourcePackTask.read.1u.tlog"
    Lines="@(_ResourcePackTaskReadTlog->MetaData('Identity')->ToUpperInvariant());"
    Overwrite="true"
    Encoding="Unicode"/>
  <WriteLinesToFile
    Condition="'@(_ResourcePackTaskWriteTlog)' != ''"
    File="$(TLogLocation)ResourcePackTask.write.1u.tlog"
    Lines="@(_ResourcePackTaskWriteTlog->MetaData('Identity')->ToUpperInvariant());"
    Overwrite="true"
    Encoding="Unicode"/>
  <ItemGroup>
    <_ResourcePackTaskReadTlog Remove="@(_ResourcePackTaskReadTlog)" />
    <_ResourcePackTaskWriteTlog Remove="@(_ResourcePackTaskWriteTlog)" />
  </ItemGroup>
</Target>
MSBuild operates by executing targets based on a dependency tree. This next section configures a Target that will construct a pair of .tlog files which record the dependencies and outputs, and enable the VS incremental build tracker to function. Most of this seems to be boring boilerplate. The key piece is where [MSBuild]::NormalizePath appears. This little function call assembles the provided directory path and filename into a final path that will be recorded as the corresponding build output file for the input. I have a hard coded Resources path in here for now, which you’ll need to replace with something meaningful. The build system will look for this exact filename when deciding whether or not a given input needs to be recompiled, and you can inspect what you’re getting in the resulting tlog file. If incremental builds aren’t working correctly, check that file and check what MSBuild is looking for in the Diagnostic level logs.
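For reference, with the items above the write tlog should end up containing entries along these lines – a caret-prefixed source path followed by its recorded output, all upper-cased (paths invented for illustration):

```
^C:\GAME\DATA.PACK
C:\GAME\BIN\RESOURCES\DATA.PACK
```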
I should note at this point that the tlog target is optional, and that as written it only understands the direct source file and its direct output. In my case, it will see changes to the resource manifest file, and it will see if the output is missing. But it has no information about other files read by that compile process, so if I update a resource referenced by my manifest it won’t trigger a recompile. Depending on what you’re doing, it may be better to omit the tlog functionality and do your own incremental processing. Another possibility is writing a process that generates the proper tlog.
<Target
  Name="_ResourcePackTask"
  BeforeTargets="ClCompile"
  Condition="'@(ResourcePackTask)' != ''"
  Outputs="%(ResourcePackTask.ManifestFileName)"
  Inputs="%(ResourcePackTask.Identity)"
  DependsOnTargets="_WriteResourcePackTaskTlogs;_SelectedFiles">
  <ItemGroup Condition="'@(SelectedFiles)' != ''">
    <ResourcePackTask Remove="@(ResourcePackTask)" Condition="'%(Identity)' != '@(SelectedFiles)'" />
  </ItemGroup>
  <Message
    Importance="High"
    Text="Building resource pack %(ResourcePackTask.Filename)%(ResourcePackTask.Extension)" />
  <ResourcePackTask
    Condition="'@(ResourcePackTask)' != '' and '%(ResourcePackTask.ExcludedFromBuild)' != 'true'"
    CommandLineTemplate="%(ResourcePackTask.CommandLineTemplate)"
    ComputeHashes="%(ResourcePackTask.ComputeHashes)"
    SuppressJson="%(ResourcePackTask.SuppressJson)"
    BuildLog="%(ResourcePackTask.BuildLog)"
    AdditionalOptions="%(ResourcePackTask.AdditionalOptions)"
    Inputs="%(ResourcePackTask.Identity)" />
</Target>
</Project>
This is the last piece of the file, defining one more Target. This is the target that actually does the heavy lifting, and you’ll see the recurrence of the _ResourcePackTask name from earlier. Two attributes, BeforeTargets and AfterTargets (the latter not used here), control when in the build process this target runs. The target also takes a dependency on the tlog target above, which is how that target gets pulled into the build. Again there is some boilerplate here, but we start the actual build by simply outputting a message reporting which file we’re compiling.
Lastly, the ResourcePackTask entry here constructs the execution of the task itself. I think that %(ResourcePackTask.Whatever) here has the effect of copying the definitions from the .props file into the task itself; the interaction between these three files doesn’t seem especially well documented. In any case what seems to work is simply repeating all of your properties from the .props into the ResourcePackTask and they magically appear in the build. Here’s a complete code listing for the file.
<?xml version="1.0" encoding="utf-8"?>
<!-- This file provides a VS build step for Kata resource pack files -->
<!-- See http://reedbeta.com/blog/custom-toolchain-with-msbuild/ for an overview of what's happening here -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <PropertyPageSchema
      Include="$(MSBuildThisFileDirectory)$(MSBuildThisFileName).xml" />
    <AvailableItemName Include="ResourcePackTask">
      <Targets>_ResourcePackTask</Targets>
    </AvailableItemName>
  </ItemGroup>
  <UsingTask
    TaskName="ResourcePackTask"
    TaskFactory="XamlTaskFactory"
    AssemblyName="Microsoft.Build.Tasks.v4.0, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
    <Task>$(MSBuildThisFileDirectory)$(MSBuildThisFileName).xml</Task>
  </UsingTask>
  <Target Name="_WriteResourcePackTaskTlogs"
          Condition="'@(ResourcePackTask)' != '' and '@(SelectedFiles)' == ''">
    <ItemGroup>
      <_ResourcePackTaskReadTlog Include="^%(ResourcePackTask.FullPath);%(ResourcePackTask.AdditionalDependencies)"
                                 Condition="'%(ResourcePackTask.ExcludedFromBuild)' != 'true' and '%(ResourcePackTask.ManifestFileName)' != ''"/>
      <!-- This is the important line to configure correctly for tlogs -->
      <_ResourcePackTaskWriteTlog Include="^%(ResourcePackTask.FullPath);$([MSBuild]::NormalizePath('$(OutDir)Resources', '%(ResourcePackTask.ManifestFileName)'))"
                                  Condition="'%(ResourcePackTask.ExcludedFromBuild)' != 'true' and '%(ResourcePackTask.ManifestFileName)' != ''"/>
    </ItemGroup>
    <WriteLinesToFile
      Condition="'@(_ResourcePackTaskReadTlog)' != ''"
      File="$(TLogLocation)ResourcePackTask.read.1u.tlog"
      Lines="@(_ResourcePackTaskReadTlog->MetaData('Identity')->ToUpperInvariant());"
      Overwrite="true"
      Encoding="Unicode"/>
    <WriteLinesToFile
      Condition="'@(_ResourcePackTaskWriteTlog)' != ''"
      File="$(TLogLocation)ResourcePackTask.write.1u.tlog"
      Lines="@(_ResourcePackTaskWriteTlog->MetaData('Identity')->ToUpperInvariant());"
      Overwrite="true"
      Encoding="Unicode"/>
    <ItemGroup>
      <_ResourcePackTaskReadTlog Remove="@(_ResourcePackTaskReadTlog)" />
      <_ResourcePackTaskWriteTlog Remove="@(_ResourcePackTaskWriteTlog)" />
    </ItemGroup>
  </Target>
  <Target
    Name="_ResourcePackTask"
    BeforeTargets="ClCompile"
    Condition="'@(ResourcePackTask)' != ''"
    Outputs="%(ResourcePackTask.ManifestFileName)"
    Inputs="%(ResourcePackTask.Identity)"
    DependsOnTargets="_WriteResourcePackTaskTlogs;_SelectedFiles">
    <ItemGroup Condition="'@(SelectedFiles)' != ''">
      <ResourcePackTask Remove="@(ResourcePackTask)" Condition="'%(Identity)' != '@(SelectedFiles)'" />
    </ItemGroup>
    <Message
      Importance="High"
      Text="Building resource pack %(ResourcePackTask.Filename)%(ResourcePackTask.Extension)" />
    <ResourcePackTask
      Condition="'@(ResourcePackTask)' != '' and '%(ResourcePackTask.ExcludedFromBuild)' != 'true'"
      CommandLineTemplate="%(ResourcePackTask.CommandLineTemplate)"
      ComputeHashes="%(ResourcePackTask.ComputeHashes)"
      SuppressJson="%(ResourcePackTask.SuppressJson)"
      BuildLog="%(ResourcePackTask.BuildLog)"
      AdditionalOptions="%(ResourcePackTask.AdditionalOptions)"
      Inputs="%(ResourcePackTask.Identity)" />
  </Target>
</Project>
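One payoff of this setup: because the properties are item metadata persisted in the project file, a single file can override them. As a hypothetical example in the .vcxproj (the file name and the extra --verbose flag are invented), this is exactly the per-file tweak the plain build-customization approach couldn't express:

```
<ItemGroup>
  <ResourcePackTask Include="Special.pack">
    <BuildLog>true</BuildLog>
    <AdditionalOptions>--verbose</AdditionalOptions>
  </ResourcePackTask>
</ItemGroup>
```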
With all of that in place, hypothetically Visual Studio will treat your fancy new file type and its attendant compile chain exactly how you want. There are probably still many improvements to be made – in particular, this scheme as written seems to suppress stdout from the console at the default “Minimal” MSBuild verbosity, which is something I haven’t dug into. But this is a solid start for a fully integrated build process.

V3 build pictured, but the V4 upgrades are all internal to the pad.

If you’re unfamiliar with my DanceForce work or the previous versions, please read the introduction of my V3 build post for the rationale and advantages of this particular approach to a hard pad and what I’m going for. In short, the DF is a slimmer, lighter hard pad that can be more reliable and consistent than conventional designs, thanks to its use of pressure-sensitive sensors that are separated from the “click action” of the actual steps.
I’m now building the DanceForce V4 prototype. V4 is simpler, easier to build, requires fewer parts, and is cheaper. Traditionally I build and design these pads, make a bunch of tweaks, and play on them for a good while. Then I begin working on the draft of the instructional write-up, and eventually publish the full how-to guide. If I followed that timeline again, this V4 guide would appear in *checks notes* summer 2020. Let’s not do that. I began work this past weekend, so I’m just going to post a stream of photos and exactly what I’m doing as I go.
Excluding pad graphics and a few incidentals, this pad costs about $160 to put together.
Current Status: Core pad is done but top hasn’t been installed and control board hasn’t been assembled. These are not changed from V3.
Building the Base
Basic layout sketch of the initial cut pad.

Note: the dimensions in this photo are slightly wrong and I had to go back and fix them. Always triple-check your measurements before cutting and gluing!

The base layer is 1/2″ plywood cut to 34″ x 33″. The extra inch on top will be useful for wiring. I’ve marked off the steps in pencil, and then begun adding the spacer layer. I’m using 1/8″ hardboard this time around, for shallower steps than in the past. My hope is that this will reduce ankle stress and overall impact while playing barefoot. The bottom panels are 10.25″ square; the top panels are 10.25″ x 5.25″. The upper panels are sized to leave space for Start/Select buttons. The next step is beginning to lay out the contacts. I’m using 3″ copper tape today, but 4″ is probably even better because it’s less work and barely costs any more.
I’ve added the hardboard spacers around the Start and Select buttons. Note that these go on AFTER the copper tape, which runs underneath them. Here’s a detail shot of what you’ll end up with:
Finally, all of the contacts get connected together in a plus shape to serve as the common contact for the step sensors.
That concludes the base layer.
Sensor Construction
Start by building the top contact. Cut four 10.5″ squares of Lexan, and cover one side in copper tape.
Add a little strip around to the top side to serve as our connection point for later.
It’s important here to place the contact strip off center. You don’t want it touching the extension strip on the common contact. I also clip the corners to leave space between steps.

Place an 11″ square of Velostat over the bottom of the contact. It does not need to cover it completely.
Then the top contact goes over it. The top contact MUST be insulated by Velostat on every edge or the step will not work. That’s why we cut it a little small. I’ve moved to 6 mil Velostat in the V4 design due to the higher sensitivity of pure copper contacts.
Finally, duct tape secures the sensor in place. I’ve done a couple experiments now and it appears that too much duct tape is a bad idea. This is a pressure sensor and excessive tape applies so much pressure that there isn’t enough range left to reliably detect steps.
The clipped corners leave space for hardboard strips that will fill the space between diagonal steps.

Four assembled sensors. It’s a good idea to test them with a multimeter at this point, while the duct tape still isn’t that strongly bonded. You’re looking for 70+ ohms at rest, and sub-10 with foot pressure.

I also add some corner boundaries at this stage. These are hardboard strips of 1.75″ x 0.5″, and they are important for good corner separation of the steps. The gap is important: wiring is going to run through there.
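Those resistance numbers matter because the controller will presumably read each sensor through a voltage divider. As a quick sanity check of the detection margin, here is the divider math with an assumed 100 Ω fixed resistor and a 10-bit ADC; the component values are illustrative assumptions, not the actual DanceForce control board.

```python
def adc_reading(sensor_ohms, fixed_ohms=100.0, adc_max=1023):
    """ADC count for a divider where Vout is measured across the fixed resistor."""
    return round(adc_max * fixed_ohms / (fixed_ohms + sensor_ohms))

rest = adc_reading(70)     # sensor at rest (70+ ohms)
pressed = adc_reading(10)  # sensor under foot pressure (sub-10 ohms)
margin = pressed - rest    # counts available for a step threshold
```

With these assumed values the rest and pressed readings are separated by a few hundred ADC counts, which is a comfortable window for a step threshold.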
Electrical
Get ready to break out the soldering iron – but we have some prep work to do first. Take a look at the edges where your top contacts are – is copper peeking out past the Velostat?
We don’t want this. It will short if we try to take the contact over this section. A little strip of electrical or duct tape will insulate the boundary.
That’s better. Now I’m going to build a solder pad from two layers of copper tape.
This solder pad I’ve laid down does not connect to the top contact of the step yet. This way, if the step needs to come out, the soldered wire can stay where it is.

Now to solder some wires. It’s important to leave lots of extra length when cutting the wire; I’ve been screwed multiple times by not having enough spare lead.
Be judicious with the heat. The copper tape solders decently enough, but it’s not going to tolerate the iron for an extended period.

Finally, one more layer of copper tape will link the top contact to the solder pad and shield it all in one go.
And now all four arrows wired up:
I’ll finish up Start and Select later. For now, we really need to neaten up those wires. Find a hot glue gun, route the wires nicely up through the top of the pad, and glue them in place.
And with that, the internal construction of the pad is complete.

It's not a question of simpler or more complex. I get the impression you feel that game graphics programmers don't know what they're doing or haven't heard about splines in the forty-five years since Catmull-Rom or the nearly sixty years since NURBS. Our job is to provide the maximum amount of visual quality on the hardware that consumers actually have in hand at the current time. Splines and patches do not accomplish that goal. You keep bringing up CAD as if it's somehow relevant, but their goals and priorities are very different from ours.
You're describing a bunch of things which are mathematically sound on paper, but simply do not reflect the reality of achieving visual quality on a consumer GPU. And to the extent we can push the GPU designers for more powerful tools, geometric handling just isn't that interesting or important anymore. Our triangle budgets are high enough nowadays that there are far more pressing problems than continuous level of detail or analytical ray intersection tests or analytical solutions to normals and tangents.

Just to clarify, the primary reason that game developers don't use spline or patch based meshes at runtime is that the throughput on GPU is relatively poor. In most cases the GPUs are much better at pushing triangles than trying to do adaptive detail tessellation, as there are challenges across different hardware with how much expansion is actually viable and how on-chip buffers for tessellation outputs get sized. It can be more useful to use compute shaders to expand the tessellation ahead of time, but it's not really that helpful at the end of the day for most of the models we actually want in a game.
Please don't take Fulcrum's ignorance as representative of where game developers are at technically. In general, I would put runtime polygonal detail levels waaaay down the list of challenges graphics programmers should be spending their time on. It wouldn't even be close to making my top ten.

I don't care so much about helper libraries (SDL or GLFW or something), but I've really found it much more interesting to see something where the functional components are substantially homegrown. That means all of the architecture, design, and problem solving is really the dev's own solo work, which is not possible in Unity or Unreal and is often not the case with other large engines (Godot, Ogre, what have you). There's real value in being forced to build something (almost) entirely on your own, and not basing on tens or hundreds of thousands of lines of other people's work.
(And yes, you could twist this to talk about standard libraries or operating systems or whatever as being other people's code. I don't think the comparison is valid.)

I'm afraid I have to disagree. This was good advice once upon a time when making finished looking projects was really difficult. That's no longer the case, to be candid about it. Between the big engines and asset stores for those engines and plentiful samples, making a finished simple game is about the least useful thing. It's no longer a good marker of competence or tenacity, and employers are beginning to catch on to this fact.
If your goal is to just get a job on simple games-related work (mobile games or entertainment app type stuff), go ahead and learn Unity and throw some stuff together. But if your goal is serious AAA game development, do something that demonstrates heavy hitting technical competence. Don't write a game - write something that is legitimately challenging. Show off some sophisticated graphics techniques, maybe something with advanced custom physics, or something with interesting complex gameplay.
I'm not going to speak for hiring practices across what's a very diverse industry these days. But we develop our own engine and tech, and at this point I'm outright discarding any candidate who can't show me a track record of building interesting technical work from scratch.

Alright, that's enough. This is a classified, not an engine architecture thread. @DaTueOwner, feel free to post another request that is free of this rather meandering conversation. The rest of you are welcome to take it up in the usual places.

You're computing your fragment position in view space, but your light is presumably defined in world space, and that's where you leave it. Personally I find it much easier and more intuitive to do all my lighting in world space rather than in the old-school view space tradition. Just output the position multiplied by the world matrix to the shader, and then everything else will generally already be in world space.

There are a couple ways to approach this. The simplest, as mentioned above, is to simply implement the deformation effect in the vertex shader. If you're dealing with a simple one in, one out style of effect then this is a great way to do it and this is how skinning for example is done.
The next step up in sophistication is to not supply the vertex directly to the vertex shader, but to give it access to the entire buffer and use the vertex index to look up the vertices in a more flexible format. (Some GPUs only work this way internally.) That way your vertex shader can use multiple vertices or arbitrary vertices to compute its final output.
The most complex version of this is to write a buffer to buffer transformation of the vertices, which can either be done via stream out in the simple cases or a compute shader in the advanced cases. This lets you store the results for later, not just compute them instantaneously for that frame.

As we said on GameDev's Discord (which everyone should join!), what really happened here is that Unity tried to strong-arm a company for a license fee, didn't get it, and then lost the ensuing PR war when they tried to force the issue. While the resolution is probably a good thing and makes Unity a more transparent business, it's alarming that things went this way in the first place and that Unity tried to leverage someone that hard. Improbable really came out on top by being savvy with social media and forging a strong alliance with Unreal and Tim Sweeney to back them up on it.
I'm not inclined to the charitable interpretation of Unity making a mistake here or Spatial doing something that was actually an offense. I think Unity wanted a cut from a certain class and thought they could get it.

The absolute easiest thing to do is to use an archive library like PhysFS to read files like those, maybe with password protection enabled, although it's limited to some common formats that people will generally know how to work with. You could go in and make a modified version of one of the formats that is similar but different enough to require different parsing, though now you'll need to make a tool to output that too. Another option would be to layer some encryption into the files inside the package, and that can range from simple to complex.
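As a minimal sketch of that last "layer some encryption" option, here is the simple end of the spectrum: a symmetric XOR pass over each file before it goes into the package. To be clear, this is obfuscation that deters casual browsing, not real security, and the key is an invented example.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR pass: the same call both obfuscates and restores."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Obfuscate on the way into the archive, reverse on the way out.
payload = b"level data"
scrambled = xor_bytes(payload, b"k3y")
restored = xor_bytes(scrambled, b"k3y")
```

For anything stronger you'd reach for a real cipher library, but this illustrates the layering idea: the archive format handles storage, and a transform of your choosing handles the contents.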

Unity's 2D stack has gotten quite robust in recent years and I don't think you should be reluctant about using it. If you really want to evaluate options, I believe Godot Engine has both a very capable easy to use 2D system and a scripting language.