I'm working on a project where we're adding automated testing to an existing codebase. We're starting with a central component and setting up both unit tests at the code-module level and tests that exercise the whole component, albeit in the developer environment. The intent is that this test suite must pass before code check-in and will be run by a continuous integration system on each development branch.

What should we call this? Right now we're calling it "dev unit tests", but the pedantic side of me says that's not quite right, because it contains more than unit tests. And our vision is that this suite will grow over time to include full-product acceptance tests.

Any input here? Or should we stop arguing about names and go write tests?

Update

Everyone, thanks for the good discussion! I think the conclusion I'm coming to is that there's not really a "common" definition - each project/team comes up with names that make the most sense for that project, depending on the technologies (Java, C#, Ruby, etc.) and methodologies (old-skool waterfall, Scrum, XP, etc.) in use.

+1 Since patterns proved valuable for defining common wordings for concepts, a good wording for XxxxxCheckins is a good idea. Maybe the community also has an idea for a good wording for the release-candidate check-ins that have to be performed when you want to create a new release candidate. Those tests also contain long-running tests that are not practical on each usual developer Xxxx check-in.
–
k3b Apr 2 '11 at 8:41

Even if you're not using TFS, I think calling them "gate tests" is a good idea. They govern the progress from one state to another. That's different from just seeing if your new code works.
–
Kate Gregory Apr 1 '11 at 19:26

Part of the problem here is that this organization has called lots of things "unit tests". I'm trying to get them to use a more industry-standard definition. So I'd really like to avoid using the term "unit test" to refer to these automated, but higher-level, tests.
–
dpassage Apr 1 '11 at 18:24

In my organization we call them "unit tests" before the build. They're called "integration tests" after the build when all the pieces are tested together. They're called "system integration tests" when the build package goes to the QA team for testing (after integration testing has finished successfully).

I call them "the tests", with the implication that I'm talking about the "automated unit tests", and with the corollary implication that these are fast (a few seconds to a few minutes being what I've experienced most frequently).

Sometimes there will be other tests that run automatically on a check-in, or once at night for example. These tend to take longer. Something between 30 minutes and a few hours. Typically we call those "the functional tests", or "the regression tests" or even "the slow tests".

We usually call them smoke tests, sanity tests, or priority 1 tests. Priority 2 tests happen after a check-in, and then priority 3 tests run on the scheduled builds. Our final set of tests, priority 4 tests, runs when we have a special build that occurs over a weekend, as they take about that long to run.

A unit test is a test or set of tests that covers a particular module. In OO programming, a module is typically a function or class. Unit tests are grouped into test suites. Unit tests are the building blocks of regression and smoke tests, and in some cases can serve as documentation for the intended behavior of a system or pieces of a system.
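To make the definition above concrete, here is a minimal sketch of a unit test covering a single module; the function under test (`parse_version`) and its behavior are hypothetical, chosen only to illustrate the idea, and the example assumes Python's standard `unittest` framework.

```python
import unittest

# Hypothetical module under test (the name and behavior are illustrative).
def parse_version(text):
    """Parse a 'major.minor' string into a (major, minor) tuple of ints."""
    major, minor = text.split(".")
    return int(major), int(minor)

class ParseVersionTest(unittest.TestCase):
    """A unit test suite covering exactly one module: parse_version."""

    def test_parses_major_and_minor(self):
        self.assertEqual(parse_version("1.4"), (1, 4))

    def test_rejects_malformed_input(self):
        # A string without exactly one dot cannot be unpacked: ValueError.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()
```

Because each test pins down one behavior of one module, suites like this double as executable documentation of the intended behavior.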

Regression testing is the act of running most or all of the unit tests on a changed module. This ensures that the changed module continues to work as expected following the changes.

Smoke testing is the act of running a (small - you want smoke testing to be fairly quick) number of unit tests on unchanged modules to ensure that a change to a module did not have any unintended side effects on other modules. The focus is typically on classes that have associations with the changed class as well as the modules that provide key functionality to the application.

Depending on your build environment, any of these might be automated or executed by the developer. A committed change set should be small enough such that regression tests don't take an unacceptably long amount of time, and smoke testing is designed to be fairly quick. Typically, I run at least regression and smoke tests on code before it even gets checked in so I don't break the build. I've usually seen the build system be designed to execute and report on the status of all test cases on a regular basis, ranging from daily to weekly depending on the rate of development and time to build and execute the tests.
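One way to wire up the fast-before-check-in versus full-on-the-build-server split described above is to group tests into named suites, so the developer runs the small fast suite locally and the build system runs everything. This is a sketch using Python's standard `unittest` suites; the test classes and suite names are illustrative, not a prescribed layout.

```python
import unittest

class FastMathTest(unittest.TestCase):
    """A fast test suitable for the pre-check-in (smoke) run."""
    def test_add(self):
        self.assertEqual(1 + 1, 2)

class SlowIntegrationTest(unittest.TestCase):
    """Stands in for a longer-running test reserved for the CI build."""
    def test_roundtrip(self):
        self.assertEqual(sum(range(1000)), 499500)

def smoke_suite():
    """The small, fast subset a developer runs before check-in."""
    suite = unittest.TestSuite()
    suite.addTest(FastMathTest("test_add"))
    return suite

def regression_suite():
    """The full set, run by the CI system on each development branch."""
    slow = unittest.defaultTestLoader.loadTestsFromTestCase(SlowIntegrationTest)
    return unittest.TestSuite([smoke_suite(), slow])

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(regression_suite())
```

The design choice here is that the smoke suite is a strict subset of the regression suite, so the CI run always repeats what the developer already verified plus the slower tests.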