Test

YUI Test is a testing framework for browser-based JavaScript solutions. Using YUI Test, you can easily add unit testing to your JavaScript solutions. While not a direct port from any specific xUnit framework, YUI Test does derive some characteristics from nUnit and JUnit.

Getting Started

To get started, create a new YUI instance for your application and populate it with the
modules you need by specifying them as arguments to the YUI().use() method.
YUI will automatically load any dependencies required by the modules you
specify.

<script>
// Create a new YUI instance and populate it with the required modules.
YUI().use('test', function (Y) {
    // Test is available and ready for use. Add implementation
    // code here.
});
</script>

Using Test Cases

The basis of Test is the Y.Test.Case object. A TestCase object is created by using the
Y.Test.Case constructor and passing in an object containing methods and other information with which
to initialize the test case. Typically, the argument is an object literal, for example:
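A minimal test case might look like this (the test bodies are placeholders):

```javascript
var testCase = new Y.Test.Case({

    name: "TestCase Name",

    // test methods: names begin with "test"
    testSomething: function () {
        // ... assertions go here ...
    },

    testSomethingElse: function () {
        // ... assertions go here ...
    }
});
```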

In this example, a simple test case is created named "TestCase Name". The name property is automatically applied to the test case so that it can be distinguished from other test cases that may be run during the same cycle. The two methods in this example are test methods (testSomething() and testSomethingElse()), which means that they are methods designed to test a specific piece of functional code. Test methods are indicated by their name, either using the traditional manner of prepending the word test to the method name, or using a "friendly name," which is a sentence containing at least one space that describes the test's purpose. For example:
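A sketch of a test case using friendly names (the names themselves are illustrative):

```javascript
var testCase = new Y.Test.Case({

    name: "TestCase Name",

    // friendly names: any property name containing a space is a test method
    "Something should happen here": function () {
        // ... assertions go here ...
    },

    "Something else should happen here": function () {
        // ... assertions go here ...
    }
});
```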

Regardless of the naming convention used for test names, each should contain one or more assertions that test data for validity.

Except for methods and properties following these special rules and a few other reserved names described in the following sections, a test case may contain other utility methods or properties, all reachable as instance members via this.

setUp() and tearDown()

As each test method is called, it may be necessary to set up information before it's run and then potentially clean up that information
after the test is run. The setUp() method is run before each and every test in the test case, and likewise the tearDown() method is run
after each test is run. These methods should be used in conjunction to create objects before a test is run and free up memory after the
test is run. For example:
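A sketch matching the description below (the data values are illustrative):

```javascript
var testCase = new Y.Test.Case({

    name: "TestCase Name",

    // runs before each test method
    setUp: function () {
        this.data = { name: "Nicholas", age: 28 };
    },

    // runs after each test method
    tearDown: function () {
        delete this.data;
    },

    testName: function () {
        Y.Assert.areEqual("Nicholas", this.data.name, "Name should be 'Nicholas'.");
    },

    testAge: function () {
        Y.Assert.areEqual(28, this.data.age, "Age should be 28.");
    }
});
```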

In this example, a setUp() method creates a data object with some basic information. Each property of the data object is checked by
a different test: testName() tests the value of data.name, while testAge() tests the value of data.age. Afterward, the data object is deleted
to free up the memory. Real-world implementations will have more complex tests, of course, but they should follow the basic pattern shown in the code above.

Note: Both setUp() and tearDown() are optional methods and are only used when defined.

Ignoring Tests

There may be times when you want to ignore a test (perhaps the test is invalid for your purposes or the functionality is being re-engineered and so it shouldn't be tested at this time). To specify tests to ignore,
use the _should.ignore property and name each test to skip as a property whose value is set to true:
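For example (the test body is a placeholder):

```javascript
var testCase = new Y.Test.Case({

    name: "TestCase Name",

    _should: {
        ignore: {
            testName: true   // this test will be skipped
        }
    },

    testName: function () {
        // ... this method will not be executed ...
    }
});
```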

Here the testName() method will be ignored when the test case is run. This is accomplished by first defining the special _should
property and within it, an ignore property. The ignore property is an object containing name-value pairs representing the names of the tests
to ignore. By defining a property named "testName" and setting its value to true, it says that the method named "testName"
should not be executed.

Intentional Errors

There may be a time that a test throws an error that was expected. For instance, perhaps you're testing a function that should throw an error if invalid data
is passed in. A thrown error in this case can signify that the test has passed. To indicate that
a test should throw an error, use the _should.error property. For example:
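A sketch of the scenario described below, including a simple sortArray() function to test:

```javascript
// standalone function under test: throws if the argument is not an array
function sortArray(array) {
    if (array instanceof Array) {
        array.sort();
    } else {
        throw new Error("Expected an array");
    }
}

var testCase = new Y.Test.Case({

    name: "sortArray Tests",

    _should: {
        error: {
            testSortArray: true   // this test should throw an error
        }
    },

    testSortArray: function () {
        sortArray(12);   // not an array, so an error is thrown and the test passes
    }
});
```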

In this example, a test case is created to test the standalone sortArray() function, which simply accepts an array and calls its sort() method.
But if the argument is not an array, an error is thrown. When testSortArray() is called, it throws an error because a number is passed into sortArray().
Since the _should.error object has a property called "testSortArray" set to true, this indicates that testSortArray() should
pass only if an error is thrown.

It is possible to be more specific about the error that should be thrown. By setting a property in _should.error to a string, you can
specify that only a specific error message can be construed as a passed test. Here's an example:
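Continuing the sortArray() example, only the _should.error value changes:

```javascript
var testCase = new Y.Test.Case({

    name: "sortArray Tests",

    _should: {
        error: {
            // pass only if an error with this exact message is thrown
            testSortArray: "Expected an array"
        }
    },

    testSortArray: function () {
        sortArray(12);   // sortArray() as defined earlier
    }
});
```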

In this example, the testSortArray() test will only pass if the error that is thrown has a message of "Expected an array".
If a different error occurs within the course of executing testSortArray(), then the test will fail due to an unexpected error.

If you're unsure of the message but know the type of error that will be thrown, you can specify the error constructor for the error
you're expecting to occur:
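For instance, if sortArray() threw a TypeError rather than a generic Error:

```javascript
var testCase = new Y.Test.Case({

    name: "sortArray Tests",

    _should: {
        error: {
            // pass only if a TypeError (of any message) is thrown
            testSortArray: TypeError
        }
    },

    testSortArray: function () {
        sortArray(12);   // sortArray() as defined earlier
    }
});
```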

In this example, the test will pass if a TypeError gets thrown; if any other type of error is thrown,
the test will fail. A word of caution: TypeError is the most frequently thrown error by browsers,
so specifying a TypeError as expected may give false passes.

To narrow the margin of error between checking for an error message and checking the error type, you can create a specific error
object and set that in the _should.error property, such as:
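Again using the sortArray() example:

```javascript
var testCase = new Y.Test.Case({

    name: "sortArray Tests",

    _should: {
        error: {
            // pass only if a TypeError with this exact message is thrown
            testSortArray: new TypeError("Expected an array")
        }
    },

    testSortArray: function () {
        sortArray(12);   // sortArray() as defined earlier
    }
});
```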

Using this code, the testSortArray() method will only pass if a TypeError object is thrown with a message of
"Expected an array"; if any other type of error occurs, then the test fails due to an unexpected error.

Note: If a test is marked as expecting an error, the test will fail unless that specific error is thrown. If the test completes without an error being thrown, then it fails.

Assertions

Test methods use assertions to check the validity of a particular action or function. An assertion method tests (asserts) that a condition is valid; if not, it throws an error that causes the test to fail. If all assertions pass within a test method, it is said that the test has passed. The simplest assertion is Y.assert(), which takes two arguments: a condition to test and a message. If the condition is not true, then an assertion error is thrown with the specified message. For example:
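A sketch of a test using Y.assert() (value and flag stand in for data produced by the code under test):

```javascript
var testCase = new Y.Test.Case({

    name: "Simple Assert Tests",

    testUsingAsserts: function () {
        // illustrative values; in practice these come from the code under test
        var value = 5,
            flag = true;

        Y.assert(value === 5, "Value should be 5.");
        Y.assert(flag, "Flag should be true.");
    }
});
```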

In this example, testUsingAsserts() will fail if value is not equal to 5 or flag is not set to true. The Y.assert() method may be all that you need, but there are advanced options available. The Y.Assert object contains several assertion methods that can be used to validate data.

Equality Assertions

The simplest assertions are areEqual() and areNotEqual(). Both methods accept three arguments: the expected value,
the actual value, and an optional failure message (a default one is generated if this argument is omitted). For example:
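As a sketch, with illustrative values:

```javascript
Y.Assert.areEqual(5, 5);          // passes: identical values
Y.Assert.areEqual(5, "5");        // passes: values are coerced before comparison
Y.Assert.areNotEqual(5, 6, "Values should not be equal.");   // passes
```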

These methods use the double equals (==) operator to determine if two values are equal, so type coercion may occur. This means
that the string "5" and the number 5 are considered equal because the string is converted to a number before
the comparison is made. If you don't want values to be coerced for comparison purposes, use the sameness assertions instead.

Sameness Assertions

The sameness assertions are areSame() and areNotSame(), and these accept the same three arguments as the equality
assertions: the expected value, the actual value, and an optional failure message. Unlike the equality assertions, these methods use
the triple equals operator (===) for comparisons, ensuring that no type coercion will occur. For example:
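With illustrative values:

```javascript
Y.Assert.areSame(5, 5);           // passes: same value and same type
Y.Assert.areSame(5, "5");         // fails: no type coercion, so the types differ
Y.Assert.areNotSame(5, "5", "Values should not be the same.");   // passes
```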

In addition to the assertions above, there are two generic data type assertions.

The isTypeOf() method tests the string returned when the typeof operator is applied to a value. This
method accepts three arguments: the type that the value should be ("string", "number",
"boolean", "undefined", "object", or "function"), the value to test, and an optional failure message.
For example:
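With illustrative values:

```javascript
Y.Assert.isTypeOf("string", "Hello world");   // passes
Y.Assert.isTypeOf("number", 1);               // passes
Y.Assert.isTypeOf("number", "1");             // fails: typeof "1" is "string"
```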

If you need to test object types instead of simple data types, you can also use the isInstanceOf() assertion, which accepts three
arguments: the constructor function to test for, the value to test, and an optional failure message. This assertion uses the instanceof
operator to determine if it should pass or fail. Example:
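With illustrative values:

```javascript
Y.Assert.isInstanceOf(Array, [1, 2, 3], "Value should be an array.");   // passes
Y.Assert.isInstanceOf(Date, {}, "Value should be a Date.");             // fails
```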

Special Value Assertions

There are numerous special values in JavaScript that may occur in code. These include true, false, NaN,
null, and undefined. There are a number of assertions designed to test for these values specifically:

isFalse() - passes if the value is false.

isTrue() - passes if the value is true.

isNaN() - passes if the value is NaN.

isNotNaN() - passes if the value is not NaN.

isNull() - passes if the value is null.

isNotNull() - passes if the value is not null.

isUndefined() - passes if the value is undefined.

isNotUndefined() - passes if the value is not undefined.

Each of these methods accepts two arguments: the value to test and an optional failure message. All of the assertions expect the
exact value (no type coercion occurs), so for example calling isFalse(0) will fail.
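A few illustrative uses:

```javascript
Y.Assert.isTrue(true);             // passes
Y.Assert.isFalse(false);           // passes
Y.Assert.isFalse(0);               // fails: 0 is not exactly false
Y.Assert.isNull(null);             // passes
Y.Assert.isNotUndefined("abc");    // passes
Y.Assert.isNaN(0 / 0);             // passes
```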

Forced Failures

While most tests fail as a result of an assertion, there may be times when
you want to force a test to fail or create your own assertion method. To do this, use the
fail() method to force a test method to fail immediately:
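For example:

```javascript
var testCase = new Y.Test.Case({

    name: "Forced Failure Tests",

    testForcedFail: function () {
        // fail immediately with a custom message
        Y.Assert.fail("I decided this should fail.");
    }
});
```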

When this test fails, the message "I decided this should fail." is reported.

Mock Objects

Mock objects are used to eliminate test dependencies on other objects. In complex software systems, there are often multiple
objects that depend on one another to do their jobs. Perhaps part of your code relies on the XMLHttpRequest
object to get more information; if you're running the test without a network connection, you can't really be sure if the test
is failing because of your error or because the network connection is down. In reality, you just want to be sure that the correct
data was passed to the open() and send() methods because you can assume that, after that point,
the XMLHttpRequest object works as expected. This is the perfect case for using a mock object.

To create a mock object, use the Y.Mock() method to create a new object and then use Y.Mock.expect()
to define expectations for that object. Expectations define which methods you're expecting to call, what the arguments should be,
and what the expected result is. When you believe all of the appropriate methods have been called, you call Y.Mock.verify()
on the mock object to check that everything happened as it should. For example:
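A sketch of a mock XMLHttpRequest (the URL and the logToServer() function under test are hypothetical):

```javascript
// create the mock object
var mockXhr = Y.Mock();

// expect a call to open() with these exact arguments
Y.Mock.expect(mockXhr, {
    method: "open",
    args: ["GET", "/log.php?msg=hi", true]
});

// expect a call to send() with a null argument
Y.Mock.expect(mockXhr, {
    method: "send",
    args: [null]
});

// pass the mock into the code under test in place of a real XMLHttpRequest
logToServer("hi", mockXhr);   // hypothetical function under test

// verify that every expectation was met
Y.Mock.verify(mockXhr);
```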

In this code, a mock XMLHttpRequest object is created to aid in testing. The mock object defines two
expectations: that the open() method will be called with a given set of arguments and that the send()
method will be called with a given set of arguments. This is done by using Y.Mock.expect() and passing in the
mock object as well as some information about the expectation. The method property indicates the method name
that will be called and the args property is an array of arguments that should be passed into the method. Each
argument is compared against the actual arguments using the identically equal (===) operator, and if any of the
arguments doesn't match, an assertion failure is thrown when the method is called (it "fails fast" to allow easier debugging).

The call to Y.Mock.verify() is the final step in making sure that all expectations have been met. It's at this stage
that the mock object checks to see that all methods have been called. If open() was called but send()
was not, then an assertion failure is thrown and the test fails. It's very important to call Y.Mock.verify() to test
all expectations; failing to do so can lead to false passes when the test should actually fail.

In order to use mock objects, your code must be able to swap in and out objects that it uses. For example, a hardcoded
reference to XMLHttpRequest in your code would prevent you from using a mock object in its place. It's sometimes
necessary to refactor code in such a way that referenced objects are passed in rather than hardcoded so that mock objects
can be used.

Note that you can use assertions and mock objects together; either will correctly indicate a test failure.

Special Argument Values

There may be times when you don't necessarily care about a specific argument's value. Since you must always specify the correct
number of arguments being passed in, you still need to indicate that an argument is expected. There are several special values
you can use as placeholders for real values. These values do a minimum amount of data validation:

Y.Mock.Value.Any - any value is valid regardless of type.

Y.Mock.Value.String - any string value is valid.

Y.Mock.Value.Number - any number value is valid.

Y.Mock.Value.Boolean - any Boolean value is valid.

Y.Mock.Value.Object - any non-null object value is valid.

Y.Mock.Value.Function - any function value is valid.

Each of these special values can be used in the args property of an expectation, such as:
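For instance, reusing the mock XMLHttpRequest sketch from earlier (the URL is illustrative):

```javascript
Y.Mock.expect(mockXhr, {
    method: "open",
    // any string, then this exact URL, then any Boolean
    args: [Y.Mock.Value.String, "/log.php?msg=hi", Y.Mock.Value.Boolean]
});
```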

The expectation here will allow any string value as the first argument and any Boolean value as the last argument.
These special values should be used with care as they can let invalid values through if they are too general. The
Y.Mock.Value.Any special value should be used only if you're absolutely sure that the argument doesn't
matter.

Property Expectations

Since it's not possible to create property getters and setters in all browsers, creating a true cross-browser property
expectation isn't feasible. YUI Test mock objects allow you to specify a property name and its expected value when
Y.Mock.verify() is called. This isn't a true property expectation but rather an expectation that the property
will have a certain value at the end of the test. You can specify a property expectation like this:

// expect that the status property will be set to 404
Y.Mock.expect(mockXhr, {
    property: "status",
    value: 404
});

This example indicates that the status property of the mock object should be set to 404 before
the test is completed. When Y.Mock.verify() is called on mockXhr, it will check
the property and throw an assertion failure if it has not been set appropriately.

Asynchronous Tests

YUI Test allows you to pause a currently running test and resume either after a set amount of time or
at another designated time. The TestCase object has a method called wait(). When wait()
is called, the test immediately exits (meaning that any code after that point will be ignored) and waits for a signal to resume
the test.

A test may be resumed after a certain amount of time by passing in two arguments to wait(): a function to execute
and the number of milliseconds to wait before executing the function (similar to using setTimeout()). The function
passed in as the first argument will be executed as part of the current test (in the same scope) after the specified amount of time.
For example:
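A sketch of a timed wait (the data values are illustrative):

```javascript
var testCase = new Y.Test.Case({

    name: "Async Tests",

    setUp: function () {
        this.data = { name: "Nicholas" };
    },

    testAsync: function () {
        Y.Assert.areEqual("Nicholas", this.data.name, "Name should be 'Nicholas'.");

        // pause the test, then resume after 1000 milliseconds;
        // the function runs in the same scope as the test
        this.wait(function () {
            Y.Assert.areEqual("Nicholas", this.data.name, "Name should still be 'Nicholas'.");
        }, 1000);
    }
});
```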

In this code, the testAsync() function does one assertion, then waits 1000 milliseconds before performing
another assertion. The function passed into wait() is still in the scope of the original test, so it has
access to this.data just as the original part of the test does. Timed waits are helpful in situations when
there are no events to indicate when the test should resume.

If you want a test to wait until a specific event occurs before resuming, the wait() method can be called
with a timeout argument (the number of milliseconds to wait before considering the test a failure). At that point, testing will resume only when the resume() method is called. The
resume() method accepts a single argument, which is a function to run when the test resumes. This function
should specify additional assertions. If resume() isn't called before the timeout expires, then the test fails. The following tests to see if the Anim object has performed its
animation completely:
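A sketch of such a test, assuming the anim and node modules are loaded and an element with the id testDiv exists on the page:

```javascript
var testCase = new Y.Test.Case({

    name: "Anim Tests",

    testAnimation: function () {
        var test = this,
            node = Y.one("#testDiv"),   // assumed element on the page
            anim = new Y.Anim({
                node: node,
                to: { width: 400 },
                duration: 3             // 3-second animation
            });

        // resume the test when the animation finishes
        anim.on("end", function () {
            test.resume(function () {
                Y.Assert.areEqual(400, node.get("offsetWidth"),
                    "Width should be 400 pixels.");
            });
        });

        anim.run();

        // wait up to 3100 ms for resume() to be called
        this.wait(3100);
    }
});
```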

In this example, an Anim object is used to animate the width of an element to 400 pixels. When the animation
is complete, the end event is fired, so that is where the resume() method is called. The
function passed into resume() simply tests that the final width of the element is indeed 400 pixels. Once the event handler is set up, the animation begins.
In order to allow enough time for the animation to complete, the wait() method is called
with a timeout of 3.1 seconds (just longer than the 3 seconds needed to complete the animation). At that point, testing stops until the animation completes and resume() is called or until 3100 milliseconds have passed.

Test Suites

For large web applications, you'll probably have many test cases that should be run during a testing phase. A test suite helps to handle multiple test cases
by grouping them together into functional units that can be run together. To create a new test suite, use the Y.Test.Suite
constructor and pass in the name of the test suite. The name you pass in is for logging purposes and allows you to discern which TestSuite instance is currently running. For example:
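A sketch (names and bodies are illustrative):

```javascript
// the string argument is the suite name, used for logging
var suite = new Y.Test.Suite("Example Suite");

// add test cases to the suite
suite.add(new Y.Test.Case({
    name: "First Test Case",
    testSomething: function () {
        // ... assertions go here ...
    }
}));

// a suite may also contain other suites
suite.add(new Y.Test.Suite("Sub Suite"));
```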

By grouping test suites together under a parent test suite you can more effectively manage testing of particular aspects of an application.

Test suites may also have setUp() and tearDown() methods. A test suite's setUp() method is called before
the first test in the first test case is executed (prior to the test case's setUp() method); a test suite's tearDown()
method executes after all tests in all test cases/suites have been executed (after the last test case's tearDown() method). To specify
these methods, pass an object literal into the Y.Test.Suite constructor:
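For example:

```javascript
var suite = new Y.Test.Suite({

    name: "Example Suite",

    setUp: function () {
        // runs once, before the first test in the first test case
    },

    tearDown: function () {
        // runs once, after the last test in the last test case
    }
});
```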

Test suite setUp() and tearDown() may be helpful in setting up global objects that are necessary for a multitude of tests
and test cases.

Running Tests

In order to run test cases and test suites, use the Y.Test.Runner object. This object is a singleton that
simply runs all of the tests in test cases and suites, reporting back on passes and failures. To determine which test cases/suites
will be run, add them to the Y.Test.Runner using the add() method. Then, to run the tests, call the run()
method:
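For example (someTestCase and someTestSuite stand in for objects defined elsewhere):

```javascript
// add test cases and suites to the runner, then run everything
Y.Test.Runner.add(someTestCase);
Y.Test.Runner.add(someTestSuite);
Y.Test.Runner.run();
```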

If at some point you decide not to run the tests that have already been added to the TestRunner, they can be removed by calling clear():

Y.Test.Runner.clear();

Making this call removes all test cases and test suites that were added using the add() method.

TestRunner Events

The Y.Test.Runner provides results and information about the process by publishing several events. These events can occur at four
different points of interest: at the test level, at the test case level, at the test suite level, and at the Y.Test.Runner level.
The data available for each event depends completely on the type of event and the level at which the event occurs.

Test-Level Events

Test-level events occur during the execution of specific test methods. There are three test-level events:

Y.Test.Runner.TEST_PASS_EVENT - occurs when the test passes.

Y.Test.Runner.TEST_FAIL_EVENT - occurs when the test fails.

Y.Test.Runner.TEST_IGNORE_EVENT - occurs when a test is ignored.

For each of these events, the event data object has three properties:

type - indicates the type of event that occurred.

testCase - the test case that is currently being run.

testName - the name of the test that was just executed or ignored.

For Y.Test.Runner.TEST_FAIL_EVENT, the event data object also includes an error property containing the error object
that caused the test to fail.

TestCase-Level Events

There are two events that occur at the test case level:

Y.Test.Runner.TEST_CASE_BEGIN_EVENT - occurs when the test case is next to be executed but before the first test is run.

Y.Test.Runner.TEST_CASE_COMPLETE_EVENT - occurs when all tests in the test case have been executed or ignored.

For these two events, the event data object has two properties:

type - indicates the type of event that occurred.

testCase - the test case that is currently being run.

For TEST_CASE_COMPLETE_EVENT, an additional property called results is included. The results
property is an object containing the aggregated results for all tests in the test case (it does not include information about tests that
were ignored). Each test that was run has an entry in the result object where the property name is the name of the test method
and the value is an object with two properties: result, which is either "pass" or "fail", and message, which is a
text description of the result (simply "Test passed" when a test passes or the error message when a test fails). Additionally, the
failed property indicates the number of tests that failed in the test case, the passed property indicates the
number of tests that passed, and the total property indicates the total number of tests executed. A typical results
object looks like this:
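A sketch of the shape described above (test names, messages, and counts are illustrative):

```javascript
var results = {
    name: "TestCase Name",
    passed: 1,
    failed: 1,
    total: 2,
    testName: {
        result: "fail",
        message: "Values should be equal."
    },
    testAge: {
        result: "pass",
        message: "Test passed"
    }
};
```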

The TEST_CASE_COMPLETE_EVENT provides this information for transparency into the testing process.

TestSuite-Level Events

There are two events that occur at the test suite level:

Y.Test.Runner.TEST_SUITE_BEGIN_EVENT - occurs when the test suite is next to be executed but before the first test is run.

Y.Test.Runner.TEST_SUITE_COMPLETE_EVENT - occurs when all tests in all test cases in the test suite have been executed or ignored.

For these two events, the event data object has two properties:

type - indicates the type of event that occurred.

testSuite - the test suite that is currently being run.

The TEST_SUITE_COMPLETE_EVENT also has a results property, which contains aggregated results for all of the
test cases (and other test suites) it contains. Each test case and test suite contained within the main suite has an entry in the
results object, forming a hierarchical structure of data. A typical results object may look like this:
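A sketch of the hierarchical shape described below (all names and counts are illustrative):

```javascript
var results = {
    name: "Example Suite",
    passed: 3,
    failed: 1,
    total: 4,

    // a test case contained directly in the suite
    testCase0: {
        name: "testCase0",
        passed: 1,
        failed: 1,
        total: 2,
        testName: { result: "fail", message: "Values should be equal." },
        testAge:  { result: "pass", message: "Test passed" }
    },

    // a nested test suite, with its own aggregated results
    testSuite0: {
        name: "testSuite0",
        passed: 2,
        failed: 0,
        total: 2,
        testCase1: {
            name: "testCase1",
            passed: 2,
            failed: 0,
            total: 2,
            testStuff:     { result: "pass", message: "Test passed" },
            testMoreStuff: { result: "pass", message: "Test passed" }
        }
    }
};
```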

In this code, the test suite contained another test suite named "testSuite0", which is included in the results along
with its test cases. At each level, the results are aggregated so that you can tell how many tests passed or failed within each
test case or test suite.

TestRunner-Level Events

There are two events that occur at the Y.Test.Runner level:

Y.Test.Runner.BEGIN_EVENT - occurs when testing is about to begin but before any tests are run.

Y.Test.Runner.COMPLETE_EVENT - occurs when all tests in all test cases and test suites have been executed or ignored.

The data object for these events contain a type property, indicating the type of event that occurred. COMPLETE_EVENT
also includes a results property that is formatted the same as the data returned from TEST_SUITE_COMPLETE_EVENT and
contains rollup information for all test cases and tests suites that were added to the TestRunner.

Subscribing to Events

You can subscribe to particular events by calling the subscribe() method. Your event handler code
should expect a single object to be passed in as an argument. This object provides information about the event that just occurred. Minimally,
the object has a type property that tells you which type of event occurred. Example:
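For example:

```javascript
function handleTestFail(data) {
    // data.testName and data.error are provided for TEST_FAIL_EVENT
    Y.log("Test '" + data.testName + "' failed: " + data.error.message);
}

Y.Test.Runner.subscribe(Y.Test.Runner.TEST_FAIL_EVENT, handleTestFail);
```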

In this code, the handleTestFail() function is assigned as an event handler for TEST_FAIL_EVENT. You can also
use a single event handler to subscribe to any number of events, using the event data object's type property to determine
what to do:
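For example:

```javascript
function handleTestResult(data) {
    // branch on the event type to handle each outcome
    switch (data.type) {
        case Y.Test.Runner.TEST_PASS_EVENT:
            Y.log("Test '" + data.testName + "' passed.");
            break;
        case Y.Test.Runner.TEST_FAIL_EVENT:
            Y.log("Test '" + data.testName + "' failed: " + data.error.message);
            break;
        case Y.Test.Runner.TEST_IGNORE_EVENT:
            Y.log("Test '" + data.testName + "' was ignored.");
            break;
    }
}

Y.Test.Runner.subscribe(Y.Test.Runner.TEST_PASS_EVENT, handleTestResult);
Y.Test.Runner.subscribe(Y.Test.Runner.TEST_FAIL_EVENT, handleTestResult);
Y.Test.Runner.subscribe(Y.Test.Runner.TEST_IGNORE_EVENT, handleTestResult);
```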

Viewing Results

There are two ways to view test results. The first is to output test results to the TestConsole
component. To do so, you need only create a new Test.Console instance; the results will be posted
to the logger automatically:
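A sketch, assuming the test-console module and a #log element on the page (someTestCase is defined elsewhere):

```javascript
YUI().use('test-console', function (Y) {
    // render a console to display results as the tests run
    (new Y.Test.Console()).render('#log');

    Y.Test.Runner.add(someTestCase);
    Y.Test.Runner.run();
});
```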

If you are using a browser that supports the console
object (Firefox with Firebug installed, Safari 3+, Internet Explorer 8+, Chrome), then you can
direct the test results onto the console. To do so, make sure that you've specified your YUI
instance to use the console when logging:
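A sketch, assuming the useBrowserConsole configuration option (someTestCase is defined elsewhere):

```javascript
// useBrowserConsole directs Y.log() output, including test results,
// to the browser's console object
YUI({ useBrowserConsole: true }).use('test', function (Y) {
    Y.Test.Runner.add(someTestCase);
    Y.Test.Runner.run();
});
```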

You can also extract the test result data using the Y.Test.Runner.getResults() method. By default, this method
returns an object representing the results of the tests that were just run (the method returns null if called
while tests are still running). You can optionally specify a format in which the results should be returned. There are four
possible formats:
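The four formats are Y.Test.Format.XML (the default), Y.Test.Format.JSON, Y.Test.Format.JUnitXML, and Y.Test.Format.TAP, which are described further under Test Reporting below. For example, to retrieve JSON-formatted results once testing has completed:

```javascript
Y.Test.Runner.subscribe(Y.Test.Runner.COMPLETE_EVENT, function () {
    // a string of JSON-formatted results
    var results = Y.Test.Runner.getResults(Y.Test.Format.JSON);
    Y.log(results);
});
```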

The XML, JSON, and JUnit XML formats produce a string with no extra white space (white space and indentation shown here is for readability
purposes only).

Test Reporting

When all tests have been completed and the results object has been returned, you can post those results to a server
using a Y.Test.Reporter object. A Y.Test.Reporter object creates a form that is POSTed
to a specific URL with the following fields:

results - the serialized results object.

useragent - the user-agent string of the browser.

timestamp - the date and time that the report was sent.

You can create a new Y.Test.Reporter object by passing in the URL to report to. The results object can
then be passed into the report() method to submit the results:
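For example (the collection URL is hypothetical):

```javascript
var reporter = new Y.Test.Reporter("http://www.example.com/path/to/target");
reporter.report(results);   // results from Y.Test.Runner.getResults()
```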

The form submission happens behind-the-scenes and will not cause your page to navigate away. This operation is
one direction; the reporter does not get any content back from the server.

There are four predefined serialization formats for results objects:

Y.Test.Format.XML (default)

Y.Test.Format.JSON

Y.Test.Format.JUnitXML

Y.Test.Format.TAP

The format in which to submit the results can be specified in the Y.Test.Reporter constructor by passing in the appropriate
Y.Test.Format value (when no argument is specified, Y.Test.Format.XML is used):
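For example (the collection URL is hypothetical):

```javascript
// submit results serialized as JUnit XML instead of the default XML format
var reporter = new Y.Test.Reporter("http://www.example.com/path/to/target",
        Y.Test.Format.JUnitXML);
reporter.report(results);
```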

Custom Fields

You can optionally specify additional fields to be sent with the results report by using the addField() method.
This method accepts two arguments: a name and a value. Any field added using addField() is POSTed along with
the default fields back to the server:
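For example (the URL and field names are hypothetical):

```javascript
var reporter = new Y.Test.Reporter("http://www.example.com/path/to/target");

// extra fields, POSTed alongside results, useragent, and timestamp
reporter.addField("build", "1234");
reporter.addField("branch", "main");

reporter.report(results);
```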