Last month we started a thread on tech-artists.org about creating a tech artist’s creed. After several weeks of back and forth, we finally came up with something we could all agree upon. Here it is:

I am a Tech Artist,
Every day I will teach, learn, and assist,
And build bridges between teams, people, and ideas.
I will observe without interrupting and mediate without judging.
I may not give exactly what you ask for,
But I will provide what you need.

I am a Tech Artist,
I will approach every problem with mind and ears open
To my colleagues and peers across the industry.
I will solve the problems of today,
Improve the solutions of yesterday,
And design the answers of tomorrow.

I am a Tech Artist,
I am a leader for my team,
And a standard-bearer for my community.
I will do what needs to be done,
I will advocate for what should be done,
And my decisions will be in the best interest of the production.

My goal for the creed was to have the community come up with a code of ethics and standards for tech art in general. We are a diverse group and there are as many specialties as there are TAs. So it was necessary to create something widely applicable, but still meaningful.

My hope is that we can hold ourselves to, and judge our actions against, this creed. I think it says everything vital about what a tech artist should strive for. I know I have not always lived up to it, and I want my fellow TAs to call me out when I do not. I expect that other tech artists will share that sentiment. I want to keep pushing our craft forward, bettering ourselves and our community, and I think this creed embodies that.

So, a short post today because so much brain power and effort went into those words above. They are not mine alone (or even primarily), they are those of the tech-artists.org community which represents and advocates for the tech art community at large. I am just fortunate enough to have the honor and privilege of posting the creed here, on behalf of an amazing and incredibly creative group of people.

So read it over, tell me what you think, and if you have something to suggest, suggest away- the creed should continually grow and evolve just as our role does.


Repost from altdevblogaday. Of note: this was the first blog post of mine that I know of to be reposted on reddit and Hacker News, and on reddit especially the comments were sort of brutal… oh, internets. Anyway, I’d suggest heading over to altdevblogaday to read the comments when you’re done with the article.

It has been commonplace over the past few years to bash Object Oriented Programming. Functional programming is going mainstream. Data-oriented design is becoming commonplace for performance. Dynamic languages are resurgent. OO bastions are going multi-paradigm. Why is everything going wrong for traditional OOP?

The sin of statefulness. ProcessStartInfo, the mutable type that represents the filename, args, standard IO, and other state of the Process, has 20 mutable (get/set) properties. The Process type itself has over 50 properties (mostly read-only). The problem here is that the Process itself transitions between three states- not started, running, and finished- and only a subset of properties is valid at any given time. This whole situation is impossible to reason about- you either need to look at the extensive tests that would need to be written to cover all the combinations of state, or you need to inspect it under the debugger to know what’s going on.
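A Python sketch makes the problem concrete (the Proc class and all its attributes are invented for illustration, not a real API):

```python
# Hypothetical sketch of the same design problem: a type whose
# attribute validity depends on an implicit lifecycle state.
class Proc:
    def __init__(self):
        self.filename = None    # meaningful only before start()
        self.pid = None         # meaningful only while running
        self.exit_code = None   # meaningful only after finishing
        self._state = "not_started"

    def start(self):
        assert self._state == "not_started"
        self.pid = 1234         # stand-in for a real OS call
        self._state = "running"

    def wait(self):
        assert self._state == "running"
        self.exit_code = 0      # stand-in for the real exit code
        self._state = "finished"

p = Proc()
p.filename = "tool.exe"
# Reading p.exit_code here is legal but meaningless -- nothing in the
# type tells the caller which attributes are valid right now.
print(p.exit_code)  # None
p.start()
p.wait()
print(p.exit_code)  # 0
```

Every caller has to carry the lifecycle diagram around in their head; the type itself offers no help.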

Inheritance. That situation is bad enough. But have you ever seen someone subclass Process? I have, a few times, and it makes things even harder to reason about. You presumably subclass it to ensure certain state is set up by default, such as Filename. What if someone mutates that default, though? You either allow it, which makes your class somewhat pointless and breaks its invariant (that Filename won’t change), or you disallow it by raising an exception, or even worse, by silently returning- which breaks the fundamental contract of your base class and the Liskov Substitution Principle (you are quite clearly changing the behavior if you raise an exception or fail to fulfill the contracts the base class makes). There’s no point in inheriting from stateful objects like this, but that is canonical OOP.
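Here is a minimal Python sketch of that broken substitution (Proc and NotepadProc are hypothetical stand-ins, not real types):

```python
class Proc:
    """Hypothetical base: the contract is that filename is freely mutable."""
    def __init__(self, filename=""):
        self._filename = filename

    @property
    def filename(self):
        return self._filename

    @filename.setter
    def filename(self, value):
        self._filename = value


class NotepadProc(Proc):
    """Hypothetical subclass that tries to pin its default filename."""
    def __init__(self):
        super().__init__("notepad.exe")

    @Proc.filename.setter
    def filename(self, value):
        # Raising here (or silently ignoring the write) changes the
        # observable behavior of the base class: LSP is broken.
        raise AttributeError("filename is fixed on NotepadProc")


def configure(proc):
    proc.filename = "tool.exe"  # fine for Proc, explodes for NotepadProc

configure(Proc())               # works
try:
    configure(NotepadProc())    # substitution fails
except AttributeError:
    pass
```

Any code written against the base class can no longer trust it when handed the subclass.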

Code reuse through inheritance/polymorphism. Obviously code reuse is a good thing. The problem is the way OOP encourages it, through polymorphism via inheritance. Process does not implement any interfaces. You could not pass Process to a method or class that, say, is responsible for managing IO and std streams in general, not just for Process. Actually, this isn’t a big problem- just either wrap the Process in something (don’t subclass it!), or pass in only the actual data/methods needed. The ease of getting around this quite clearly demonstrates that, if you were to take away inheritance, it really wouldn’t be such a big deal- would it?
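In Python terms, the wrap-don't-subclass alternative is a few lines (the CaptureRun class is a hypothetical example, built on the real subprocess module):

```python
import subprocess
import sys

class CaptureRun:
    """Hypothetical wrapper: composition instead of inheritance. It owns
    the process machinery rather than subclassing it, and exposes only
    what its callers actually need."""
    def __init__(self, args):
        self.args = list(args)

    def run(self):
        # Delegate to subprocess instead of inheriting from Popen.
        result = subprocess.run(self.args, capture_output=True, text=True)
        return result.returncode, result.stdout

# Anything that needs "a thing that runs and returns output" can take
# CaptureRun -- or any object with a compatible run() -- with no
# inheritance from a process type required.
code, out = CaptureRun([sys.executable, "-c", "print('hello')"]).run()
print(code, out.strip())  # 0 hello
```

The wrapper's interface is exactly as wide as its callers need, which is the point.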

Messy contracts and abstractions. What are the contracts on Process? Good luck trying to figure them out by reading the documentation (which is extensive). I think everyone has put an asynchronous process into a deadlock, even when following MSFT’s directions. Understanding how to use Process still requires a pretty thorough understanding of the underlying system, and it ends up in a no-man’s land between simplicity and power. These messy (not just leaky) abstractions are the major problem when consuming other people’s code- I can’t count how many third-party modules I’ve seen crash or misbehave, when they have a reasonable enough API to figure out in the first place.
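Python’s subprocess has the same trap, for what it’s worth: read one pipe to completion while the child blocks writing to the other, full pipe, and both sides stall forever. communicate() exists precisely to drain both pipes safely:

```python
import subprocess
import sys

# A child that writes far more than an OS pipe buffer to BOTH streams.
child_src = (
    "import sys\n"
    "data = 'x' * 1000000\n"
    "sys.stdout.write(data)\n"
    "sys.stderr.write(data)\n"
)

p = subprocess.Popen(
    [sys.executable, "-c", child_src],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)

# The naive p.stdout.read() can deadlock: we block reading stdout while
# the child blocks writing to a full stderr pipe. communicate() reads
# both pipes concurrently, so it cannot deadlock.
out, err = p.communicate()
print(len(out), len(err))  # 1000000 1000000
```

The contract ("never read the pipes one at a time") is real but invisible in the type; you learn it from the docs, or from a hung build machine.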

I’m aware I’m picking on Process here. It is a .NET 1.0 type, and the .NET framework (and programming in general) has matured immeasurably. I’m sure if the team were to do it again, they would do it quite differently. Process is a simple thing but obviously technically not easy- look at the dozens of ways Python had to launch a process, until subprocess.Popen unified them into a wonderfully simple yet powerful interface. But that’s another good point, isn’t it- even Microsoft, who are supposed to be the leaders in these things (they are the ones training people and publishing the guides), ‘get it wrong,’ if it’s even possible to get right (it isn’t). How is Sammy the Scripter supposed to learn these lessons easily? He won’t. It will take him years, and he’s not going to learn it from OOP, he’s going to learn it (like the C# team did) from other languages and concepts. But this whole time, we’re telling him these fallacies about the wonders of OOP, with inheritance, polymorphism, code reuse, abstraction, patterns, and every other buzzword.

So what are we gonna do? Well, the first thing is to throw out ideological purity when it comes to OOP. The language designers are way ahead of us. Dynamic languages like Python and Ruby have long been multi-paradigm. C# has been making big strides in this area, with anonymous methods and lambdas (C# 2.0 and 3.0), and even dynamic typing support in 4.0. Java and even C++ are following suit. On the opposite end of the spectrum, people are also taking hints from Eiffel, the most thorough and pure OO language around, with things like .NET’s Code Contracts.

We’re still lagging behind with education (the education we give at work, not just universities). We need to expand our toolbox by looking at other languages and other concepts. We need to throw out much of the traditional OOP approach we’ve taken that hasn’t worked. (As a commenter pointed out- ideological purity is an aid for new people, but we too often label it as best practices.) But I also don’t want to throw the baby out with the bathwater and start declaring that OOP is dead, or all around inferior. The practical applications of OOP languages (and not necessarily their ideological underpinnings) make them natural for multi-paradigm implementations, and this is something I think it’d be hard to say of procedural, or even functional, languages.

I’d love to see us start to branch out in how we educate and teach to include these non-OO concepts, so we can better use the generally excellent OO languages available. Let’s take the lack of state from functional programming. That’s easy enough to do. Let’s take the modularity and specificity of data oriented design solutions. Not everything has to fit into some grand, reusable abstraction. Let’s be honest about the fact that most of our code does a particular thing and isn’t reused. Let’s take design by contract from Eiffel, and stress how important contracts are for a clear and well abstracted API. Let’s take duck typing from dynamic languages, so we don’t have to write a new interface to use our code somewhere (interfaces are great, except when you want some small overlap or subset of functionality- look at how even though .Add isn’t part of .NET’s IEnumerable, it gets special treatment by the compiler). On the other hand, let’s not forget that formal interfaces are important, and make sure we have those (like ABCs in Python).
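Python shows both ends of that spectrum in a few lines (the FakeHandle and Closable names here are hypothetical examples):

```python
from abc import ABC, abstractmethod

# Duck typing: this function accepts anything with a close() method.
# No interface declaration is required of the caller's types.
def shutdown_all(resources):
    for r in resources:
        r.close()

class FakeHandle:
    """Hypothetical resource that never declares any interface."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

h = FakeHandle()
shutdown_all([h])
print(h.closed)  # True

# And when a formal interface is worth having, Python's ABCs provide
# one -- FakeHandle can even be registered as a virtual subclass
# after the fact, without touching its code.
class Closable(ABC):
    @abstractmethod
    def close(self): ...

Closable.register(FakeHandle)
print(isinstance(FakeHandle(), Closable))  # True
```

You get the low ceremony of duck typing day to day, and a formal contract when a boundary deserves one.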

We have most of these things already, because the language designers are really quite smart people and are way ahead of where the mainstream usage and understanding of these concepts are. We just need to start using and teaching them more intelligently. Maybe it is a PR thing? Stop calling our languages ‘object oriented’ and take the focus off of the ‘4 principles’, and start teaching people how to program effectively using a variety of paradigms.

Likewise, I’d like to see caution when talking about the style-a-la-mode, whether that’s AOP, DOD, FP, whatever, so we don’t start treating it as a golden hammer. As modern programmers, we live in a complex world, and it is our duty to continually educate ourselves and others using all the information we can find.


I’ve come around on optional parameters after being an opponent of adding them to .NET. They can be very helpful, clean up the code by not needing overloads, and inform the caller what the defaults are. They are an integral part of python and why it is easy to use. This is great.

Except when you abuse them.

And you may be abusing them without knowing it.

Parameters should only be optional if a valid default value can be determined by looking only at the contents of the method.

What do I mean? Well, if your default value is ‘None’, and you then call into another module to create the actual default value, that is not good practice. It hurts two important software metrics: it increases coupling by adding a dependency from your method to that module, and it increases cyclomatic complexity by adding an ‘if’ statement.

It is better, in these cases, to just force the user to pass in a valid value. If you’re jumping through hoops to determine a default value, odds are it is too specific and also breaks the reusability and dependability of the method. The caller has a higher chance of already having a dependency on whatever you would be depending on for your default value (any chance is higher than the 0% chance your method has of needing it). To demonstrate:
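A Python sketch of the difference (the exporter and settings names are all hypothetical):

```python
class FakeExporter:
    """Hypothetical exporter used for illustration."""
    def __init__(self):
        self.ran = []
    def run(self, path):
        self.ran.append(path)

class settings:
    """Stand-in for 'another module' the bad default reaches into."""
    @staticmethod
    def default_exporter():
        return FakeExporter()

# Bad: the default cannot be determined by looking only at the method.
# It couples this code to `settings` and adds a branch.
def export_mesh_bad(path, exporter=None):
    if exporter is None:
        exporter = settings.default_exporter()
    exporter.run(path)

# Better: force the caller -- who likely already depends on settings --
# to pass a valid value. A self-evident default like verbose=False is
# still fine: it is valid on its own, visible in the signature.
def export_mesh(path, exporter, verbose=False):
    exporter.run(path)

e = FakeExporter()
export_mesh("chair.obj", e)
print(e.ran)  # ['chair.obj']
```

The good version has no hidden dependency and no branch; the signature tells the whole story.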


If you read one long blog article this year, make it this one: Rands in Repose’s Bored People Quit. It is one of the most important blog posts I’ve read in a long time, and right on the money.

If you’ve ever worked a shitty professional job (especially programming), and you know you have, you’ll resonate with what’s written there. I told my last job for well over a year that I was bored and actually tried to quit a number of times, and gave exact action items for how to fix my boredom and what my grievances were. Not that any were ever addressed. And in a place like that, they couldn’t realistically be addressed, with so much riding on such an expensive project- they couldn’t give a shit about almost any individual worker or what I perceive as the long term health of the studio.

Anyway, go read it, and pass it on to your managers!


I’ve talked a bit about my problems with OSS as an outsider. Martijn Faassen wrote a great post about his problems with it from the inside: How to Handle Ideas. It’s an informative, lucid post about improving the ways the open source community receives ideas and criticisms, written by an insider.

Even outside of the open source community, it is useful to read and remember his advice and ideas, as they’re useful ways to handle incoming suggestions and criticisms for any internal project you work on.


Ian Cooper, one of the contributing authors at CodeBetter.com, recently wrote an article called ‘Why CRUD might be what they want, but may not be what they need’. While this applies mostly to the world of applications, I’ve been saying the same things about tools and pipeline for a while now. The basic argument goes: the people designing/requesting our apps have a history and understanding of the process, and when we build new systems, they ask for optimized versions of that same process. But there are very likely ways we can rethink that legacy process in the context of much better technology and software, and change the experience profoundly for the better. It is our job, as the people who sit between technology and content development, to do that.

And the good news, as always, is that if we fuck up, no one dies.

Go ahead and read the article and see what I mean.


Raymond Chen over at The Old New Thing had a few blog posts recently about debug/release build behavior. I have never figured out why, but it seems to be an incredibly common practice to not run in debug because there are too many errors.

Perhaps because I haven’t been around too long, I just cannot understand how so many otherwise smart people can have such, such bad ideas. And how common this particular issue is.

The issue was especially bad when I was forbidden to use exceptions. I put in asserts instead, except that no one else ran the debug build, so when people broke those asserts, they never knew. And when someone’s changes broke some new (and pretty fundamental) asserts, I was told ‘oh, we don’t run in debug.’

Wait, what? You have absolutely no way to ensure valid state, or even track state at all, other than in logs- which makes debugging far more difficult than it should be, because the problem that should have asserted or crashed will only manifest much later, and it is unlikely you can determine where the state went bad just by looking through the log, at least without adding a bunch more logging.
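A tiny Python illustration of the difference (the character data is hypothetical): the assert fails at the moment state goes bad, instead of letting a corrupt value surface much later in some distant system.

```python
def set_health(character, value):
    # Catch corrupt state where it is introduced, not three systems
    # later. (Python strips asserts under -O, much like release builds
    # strip them in C++ -- which is exactly why the debug build matters.)
    assert 0 <= value <= character["max_health"], (
        f"health {value} out of range 0..{character['max_health']}"
    )
    character["health"] = value

hero = {"health": 100, "max_health": 100}
set_health(hero, 50)       # fine
try:
    set_health(hero, -10)  # fails HERE, with a pinpointed message...
except AssertionError as e:
    print(e)               # ...instead of corrupting hero silently
```

Skip the assert and hero carries health of -10 until something far away divides by it, renders it, or saves it to disk.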

Does your studio do this (not run in debug because there are too many exceptions or asserts)? If so, you may need to smack some sense into people. This is a god-awful and unforgivable practice for any program- programs with persistent state can corrupt that state, and programs without state can return unexpected results. Decisions like these are indicative of a myopic or insular culture in serious need of a rude exposure and shake-up.


As a small break while I finish my vacation, I’m going to publish my recent post at AltDevBlogADay in three parts. View it there in its entirety.

Not every studio has these problems (I know because I’ve argued with you about this). And I dare say that studios without these problems are simply lucky. I suspect such studios are in a fragile situation, and taking away a key player or two would destroy the precarious dynamic that keeps these problems from arising. If you are at a studio without these problems, ask yourself this: is your setup one that you can describe, export, advocate for, reproduce? How would you do it, without saying “just hire better people?” It is this “coincidence as a solution” that propagates the problems at less lucky studios.

Let’s create real solutions.

We need to create roles and departments that can provide studios with a cohesive tools vision. We need to fill these director-level roles with uniquely qualified individuals who are experienced in art and design, and are excellent programmers. We need to mature our views on tools as an industry, and start looking for concrete solutions for our endemic tools issues rather than relying on chance.
We’re not going to find these people or do these things overnight. We need to, first, decide on this path as our goal- not just you, but your studio’s management- and there’s no helpful formula I can give to convince them. Just nonstop advocacy, education, and reflection.

Then, start discussing what the application of these ideas would mean at your studio. And who is going to fill these key roles? There are people already at your studio who just need a bit of training. Put your tech artists on your programming teams for a while, or put your programmers to work on game design or art. See how quickly you’ll find someone with the unique set of skills for a Tools Director position.

We need people who understand how people work and content flows across a project. We need people who are able to guide its formulation/improvement/reconsideration. This is vision. And the lack of vision in tools development is a deadly disease we must remedy if we are to improve the state of our tools across the industry.


As a small break while I finish my vacation, I’m going to publish my recent post at AltDevBlogADay in three parts. View it there in its entirety.

So how come with Tools and Pipeline we don’t think the same way? There is no Tools Director, so we end up with disparate tools and workflows that fail to leverage each other or provide a cohesive experience. The norm for the tools situation is to look like the type of situation we find in studios with weak leadership at the Director level. A mess. We need a person who understands how everyone at the studio works, and to take ownership of it and provide a vision for improving it.

No longer can this vital role be left to a hodgepodge of other people. Your Art/Technical/Creative Directors and your Lead Programmers/Artists/Designers can no longer be the people expected to provide the vision for a studio’s Tools and Pipeline.

The person who fills this role needs to be someone with enough experience creating art that they can embed with Artists. Someone who can program well enough to have the title of Programmer. Someone flexible enough that they can deal with the needs of Designers. Someone charismatic enough that they can fight and win the battle against the inevitable skepticism, fear, and opposition a change like this would bring.

These people are few and far between, and every one of them I know is happily employed. We’re asking for a unique set of passions and skills, a set that isn’t common in the games industry especially (who gets into games to write tools?!). We need to start training our tools developers (tech artists, tools programmers) to aspire to have these passions and skills.

This won’t happen magically. Unless our studios can promise that these aspirations will be fulfilled, few people will bother, and I cannot blame them. Many studios have made the commitment to having killer tools. Almost as many have failed. And almost as many as that have failed to recognize the lack of a cohesive vision as a primary factor.

It isn’t surprising that resources get moved from tools development, that schedules cannot be stuck to, that these teams cannot attract senior developers. Without a cohesive tools vision, how are resources supposed to be properly allocated? Resources become a fragile compromise between competing departments, rather than being brokered by a separate party without allegiances. How is a schedule supposed to be followed when the people doing the work are not the ones who feel the repercussions? And it is no surprise that it is difficult to attract to these positions the senior talent with the strong programming skills necessary to develop great tools. If there is no career path- and, let’s face it, most studios have no career path for tools developers- they’re going to go into game programming, or into the general software industry (which is, for the most part, some form of tools development in a different environment).


As a small break while I finish my vacation, I’m going to publish my recent post at AltDevBlogADay in three parts. View it there in its entirety.

Every ambitious creative endeavor has at its helm a single individual responsible for providing the vision for its development. In games, we have Art Directors in charge of the aesthetic, Technical Directors in charge of the technology decisions, and Creative Directors in charge of the overall game. Their chief responsibility is to guide the creation of a project that achieves their vision. The most successful directors are able to articulate a clear vision to the team, get buy-in on its merits, and motivate the team to execute with excellence. A project without a director’s vision is uninspired and unsuccessful.

It is no surprise, then, that even though we talk about tools and pipeline as its own niche- and even acknowledging it as its own niche is a big step- we have such uninspired and unsuccessful tools and pipeline at so many places in the industry. We seem to have a mild deficiency of vision in our small community of tools programmers and tech artists, and an absolute famine of vision and representation at the director level.

This situation is unfortunate but understandable, and it underlies all the tools problems at any studio. Fixing it is the vital component in fixing the broken tools cultures many people report. Without anyone articulating a vision, without anyone to be a seed and bastion of culture and ideas, we are doomed not just to repeat the tools mistakes of yesterday, but to be hopelessly blind to their causes and solutions.

Where does this lack of vision come from? What can we do to resolve it?

The lack of vision stems from the team structures most studios have. Who is responsible for tools as a whole- tools as a concept- at your studio? Usually, no one and everyone. We have Tech Art Directors with clever teams that often lack the programming skills or relationships to build large, studio-wide toolsets. We have Lead Tools Programmers who are too far removed from, or have never experienced, actual content development. We have Lead Artists who design tools and processes for their own team, but take no account of other teams or pipelines and are uninspired technically.

There is no one who understands how every content creator works, who also has the technical understanding and abilities to design sophisticated technologies and ideas. No one who understands how content and data flow from concept art and pen and paper into our art and design tools, into the game and onto the release disk.

Without this person, what sort of tools and pipelines would you expect? If there were no Art Director or someone who had final say and responsibility for a cohesive art style across the entire game, how different would characters and environments look in a single game? If there were no Creative Director who had final say over design, how many incohesive features would our games have? If there were no Technical Director to organize the programming team, how many ways would our programming teams come up with to solve the same problems?

So how come with Tools and Pipeline we don’t think the same way? There is no Tools Director, so we end up with disparate tools and workflows that fail to leverage each other or provide a cohesive experience. The norm for the tools situation is to look like the type of situation we find in studios with weak leadership at the Director level. A mess. We need a person who understands how everyone at the studio works, and to take ownership of it and provide a vision for improving it.