The experiment must be suited to remote use. A proposed experiment must be analysed to determine how the user interacts with the apparatus to achieve the intended learning outcomes, and to ensure that these interactions can be conveyed effectively by the remote infrastructure.

User Interface Design/Abstraction

The remote infrastructure should be as inconspicuous as possible and interfere as little as possible with the user’s interaction with the apparatus.

The User Interface must be designed to convey the physical aspects of the experiment, not hide them behind overly abstract, unnatural-looking graphs and GUI widgets.

The user must feel at all times that their interaction is genuine, not merely a crude simulation.

If an experiment requires real-time video and audio monitoring, the user may need a broadband Internet connection such as cable or ADSL; otherwise they may have to use an on-campus computer lab. If this is not acceptable, the experiment must be designed to support lower-bandwidth options.
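To make the bandwidth requirement concrete, a rough estimate can be computed with a rule-of-thumb bits-per-pixel factor for compressed video. The figures below (resolution, frame rate, compression factor, audio bitrate) are illustrative assumptions, not measurements of any particular system:

```python
# Rough estimate of the bandwidth a remote-lab monitoring session might need.
# All figures are illustrative assumptions, not measured values.

def video_bandwidth_kbps(width, height, fps, bits_per_pixel=0.1):
    """Approximate compressed video bitrate using a rule-of-thumb
    bits-per-pixel heuristic (a common back-of-envelope model)."""
    return width * height * fps * bits_per_pixel / 1000.0

# A 640x480, 15 fps monitoring stream plus a ~64 kbps audio stream:
video = video_bandwidth_kbps(640, 480, 15)   # ~460 kbps
audio = 64.0
total = video + audio
print(round(total), "kbps")
```

Under these assumptions the session needs roughly half a megabit per second downstream, which is comfortable on ADSL or cable but marginal on dial-up, illustrating why lower-bandwidth fallbacks (reduced frame rate or resolution, still images) may be needed.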

Some systems may require client software to be installed on the user's machine. Such software often supports only a single operating system and requires administrative privileges to install.

These issues can often be avoided through the use of standardised, platform-independent technologies such as JavaScript, AJAX, and Flash. The emergence of virtualisation technology in recent years has provided further ways to deal with such issues.
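One way the installed-client problem is avoided is to expose the experiment over standard web protocols, so any browser-based (JavaScript/AJAX) front end can use it without platform-specific software. The sketch below, using only the Python standard library, shows a minimal HTTP endpoint of this kind; the `/status` path and the fields it returns are hypothetical, not taken from any particular remote-lab system:

```python
# Minimal sketch: serving experiment state over plain HTTP/JSON, so a
# browser-based client needs no installed software. Endpoint name and
# payload fields are hypothetical examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ExperimentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"apparatus": "online", "users": 1}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To run the server (blocking):
#     HTTPServer(("", 8080), ExperimentHandler).serve_forever()
```

A browser client would then poll or AJAX-fetch `/status` and render the result, keeping the user's machine free of operating-system-specific client software.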

Reliability

The remote laboratory infrastructure should be designed for fault tolerance, so that the failure or malfunction of a single component does not prevent the entire laboratory from being used. This is often achieved by providing multiple copies of an experiment and/or multiple servers.
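The redundancy scheme above can be sketched as a simple allocation policy: assign an incoming user to the first replicated experiment rig that reports itself healthy. The rig names and the stubbed health check below are hypothetical; a real scheduler would probe the actual hardware and servers:

```python
# Sketch of fail-over among redundant experiment rigs: the scheduler
# assigns a user to the first rig that passes a health check, so a
# single-component failure does not take the whole laboratory offline.

def allocate_rig(rigs, is_healthy):
    """Return the first healthy rig, or None if the whole lab is down."""
    for rig in rigs:
        if is_healthy(rig):
            return rig
    return None

rigs = ["rig-1", "rig-2", "rig-3"]
down = {"rig-1"}                                  # simulate one failed rig
chosen = allocate_rig(rigs, lambda r: r not in down)
print(chosen)  # the session proceeds on a surviving rig
```

The same idea extends to redundant servers: as long as the allocation layer can detect failures, users are routed around them transparently.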

Maintainability

The use of off-the-shelf hardware and standardised, well-supported software technologies helps to maximise the operating life of the laboratory.