
Verifying the scripting docs – Fun with EditorTests

When I encounter an API in Unity that I am unfamiliar with, the first thing I (and most of us) do is go to the Unity Scripting API Manual to see how it works via one of the examples. If that example will not compile when I try it, I assume that I must be doing something wrong. The example couldn’t possibly be broken, could it…?

This is how I discovered that we do indeed have examples in our scripting docs that do not compile, as a result of API changes over time and the odd case of an example that never compiled to start with. At Unity we have a lot of freedom in how we work; if we see a problem we can report it to the relevant team or fix it ourselves. At one of our recent Sustained Engineering team weeks we decided to do our own hackweek and picked several issues we wanted to tackle. Some of us chose to look into a solution for the broken examples in the scripting docs.

There are about 15,000 scripting docs pages. Not all of them contain examples (a different problem which we are working to improve); however, a large portion do. Going through each example and testing it manually would be unachievable in a week. Nor would it solve the problem of API changes or broken examples being written in the future.

Last year as part of the Unity 5.3 release we included a new feature called the Editor Test Runner. This is a unit test framework that can be run from within Unity. We have been using the Editor Test Runner internally for our own automated tests since its introduction. I decided to tackle the problem using an editor test. All our scripting docs are stored in XML files which we edit through an internal Unity project.

The code to parse all these files is already available in this project so it made sense to add the editor test into the same project so we could reuse it.

In our editor test framework (which uses NUnit) there is an attribute that can be applied to a test called TestCaseSource. This lets a test be run multiple times with different source data. In this case the source data would be our list of script examples.
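As a rough sketch of the shape this takes (the helper names and the path value here are illustrative, not Unity's internal code — though `k_PathToApiDocs` is the constant mentioned in the comments below):

```csharp
using System.Collections.Generic;
using System.IO;
using NUnit.Framework;

public class ScriptExampleTests
{
    // Illustrative value; the real docs project stores examples in XML files.
    const string k_PathToApiDocs = "ApiDocs";

    // Each yielded path becomes its own NUnit test case.
    static IEnumerable<string> ExampleSources()
    {
        return Directory.GetFiles(k_PathToApiDocs, "*.xml", SearchOption.AllDirectories);
    }

    [TestCaseSource("ExampleSources")]
    public void ExampleCompiles(string examplePath)
    {
        // TryCompile stands in for the project's real compilation helper.
        Assert.IsTrue(TryCompile(examplePath), "Example does not compile: " + examplePath);
    }

    static bool TryCompile(string path)
    {
        // Invoke the C# compiler on the extracted example here.
        return true;
    }
}
```

With this pattern, NUnit reports one pass/fail result per example file rather than one result for the whole docs set.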

And it worked! We needed to make some small changes in how we compile the examples, though, as some scripts are designed to go together as a larger example. To check for this we compiled them separately; if we found an error, we then compiled them again combined to see if that worked.
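The separate-then-combined check can be sketched like this (the `compile` callback is a stand-in for whatever invokes the compiler; the structure, not the names, is the point):

```csharp
using System;
using System.Collections.Generic;

static class MultiScriptCheck
{
    // Compile each script on its own first; any that fail get a second chance
    // as a single combined compilation, in case they form one multi-file example.
    public static bool ExamplesCompile(string[] scripts, Func<string[], bool> compile)
    {
        var failed = new List<string>();
        foreach (var script in scripts)
        {
            if (!compile(new[] { script }))
                failed.Add(script);
        }
        if (failed.Count == 0)
            return true;
        return compile(failed.ToArray());
    }
}
```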

Some examples are written as single lines of code which are not wrapped in a class or function. We could fix this by wrapping them in our test, but we have a rule that all examples should compile standalone (i.e. if a user copies and pastes it into a new file it should compile and work), so we count those examples as test failures.

The test was now in a state where it could be run as part of our build verification on the path to trunk. However there was one small problem: the test took 30 minutes to run. This is far too long for a test running in build verification, considering we run around 7000 builds a day.

The test was running sequentially, one script after another, but there was no reason we could not run them in parallel: the tests were independent of each other and did not need to make any calls to the Unity API, and we are only testing that the examples compile, not their behaviour. Enter ThreadPool, a .NET API that can be used to execute tasks in parallel. We push the tests as individual tasks into the ThreadPool and they are executed as soon as a thread becomes available. This needs to be driven from a single function, meaning that we can’t have individual NUnit test cases for specific examples from the docs. As a result we lose the ability to run any one of the tests individually, but we gain the ability to run them all quickly.
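A minimal sketch of the ThreadPool approach, assuming a compile callback (the real test drives the project's own compilation code): each example is queued as a work item, and a CountdownEvent lets the single driving function wait for all of them to finish.

```csharp
using System;
using System.Threading;

public static class ParallelCompileSketch
{
    // Queue each example as a ThreadPool work item, wait for all of them,
    // and return how many failed to compile.
    public static int CompileAll(string[] examples, Func<string, bool> compile)
    {
        if (examples.Length == 0)
            return 0;
        int failures = 0;
        using (var done = new CountdownEvent(examples.Length))
        {
            foreach (var example in examples)
            {
                ThreadPool.QueueUserWorkItem(state =>
                {
                    if (!compile((string)state))
                        Interlocked.Increment(ref failures);
                    done.Signal(); // one example finished
                }, example);
            }
            done.Wait(); // the single driving function blocks here until all are done
        }
        return failures;
    }
}
```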

This took the test time from 30 minutes to 2, which is fine for running as part of our build verification.

Since we couldn’t test individual examples with NUnit any more, we added a button to the scripting doc editor to allow developers to test the examples as they write them. The script with an error is now colored red when the test is run and error messages are displayed beneath.

When the test was first run we had 326 failures, which I whitelisted so they could be fixed at a later date. We now have that down to 32, most of which are failures in the test runner, mainly due to not having access to some specific assemblies. No new issues have been introduced, and we can rest assured that when we deprecate parts of the API the test will fail and we can then update the example to use the new API.

Overall I thought this was an interesting use of the Editor Test Runner. It does have some limitations: we only test C# examples, and I have not managed to get JS compilation working, although that won’t be an issue in the future.

18 Comments

What good is it to run 7000 builds per day, considering the typical work day of 8 hours, which is 480 minutes? At my work place we run a build each time someone pushes a commit, which is good enough for us.

Well, we have multiple different types of builds. 7000 is what our build system reported on the day I wrote the post; this is a combination of building releases, running various tests, etc. Build verification on a commit, which we call an ABV, is actually 30+ builds (builds of binaries and test suites).
A work day here is 24 hours, as we have offices all over the world and we share the build system. Running an ABV every time someone pushed would be unsustainable for us; we only run it on selected pushes at the moment and that puts us at the 7000 mark, so imagine if we ran it on every push!
More info in this blog post: https://blogs.unity3d.com/2017/06/15/a-look-inside-evolving-automation/

Sure, but it would only work for tests that are not calling the Unity API, which for us is pretty uncommon. This was a hackweek project so we worked within the existing framework; otherwise we would have spent the time changing the test framework. If it’s something you want then feel free to post the suggestion to https://feedback.unity3d.com/


Nice one!
However there’s probably a typo:
At the part about performance, you first say it took 30 minutes to run, and later you say that multithreading reduced the time needed from 20 to 2.
Is it 20 or 30 minutes?

Just wondering when you guys at Unity are going to conform to proper c# coding standards? Your c# examples are littered with incorrect c# language notation, and the code you show off in this blog post does not look like something created by a real c# programmer. Oh yes, it works, but it looks like sh#t :)

const string k_PathToApiDocs should be const string pathToApiDocs. There is no such thing as a lowercaseletterunderscorecamelcase in the c# coding standards. Have a look for yourself, please. I highly recommend that you install ReSharper on all your dev machines and start getting some nice coding standard warnings and refactoring help.

We just have a different set of coding and naming standards. I am unaware of any plans to change that at this stage. Other than naming conventions what makes the code look like it was written by “not a real c# programmer”?

Why don’t you conform to the official standards suggested by Microsoft? I mean, what is the reason for doing your own thing? It would be a lot easier to read and understand the code examples if they did follow the standards, and it would be a lot less work to clean up your example code and example projects as well if we were to copy your code.

Well, as you say, it’s suggested, not mandatory. We just have a different set of standards. Don’t get so hung up on it; just be happy that we even have some. Good programmers are able to adjust to different standards as and when they need to. Going through examples and reformatting is something I used to do myself, until I learnt that code was not always going to conform to how I wanted it formatted and that I should just accept it. I’m sure the Microsoft standard is fine, and if we were to start again we may have picked it, but at this stage reformatting the whole code base just to conform to a different set of standards is not really a good use of our time.

The first line of the Microsoft link you gave says “The C# Language Specification does not define a coding standard.” In general it’s up to a software company to define their own house style, which is what the Microsoft guidelines are: their own house style, which they’ve shared.

Maybe try to be less patronising (‘real c# programmers’, urgh!) and stop insisting your way is the only way.