Important: this version of the annex may be refined with the feedback of the competitors and distributed to the competitors by means of the mailing list and the call web site. Refinements will be documented in the Refinement of Organisation and Testing Procedures document.

Technologies

Each team should implement a localization system covering the area of the Living Lab, with no limit on the number of devices. Technologies will be accepted based on their compatibility with the constraints of the hosting Living Lab.

Localization systems can be based on different sensing modalities, including but not limited to those listed in the call.

The proposed systems may also include combinations of different technologies.

Moreover, the competition includes the possibility of exploiting existing context information provided within the Living Lab (such as doors opening/closing, lights switching on/off, etc.) in order to refine the proposed localization system.

Competitors are requested to integrate their solution with the existing logging and benchmarking system. The integration will be guided in detail, and an integration package will be delivered to competitors for this purpose. All details about integration are given in the Living Lab and Technical Infrastructure document.

The score for each competing artefact will be evaluated by means of benchmark tests during a dedicated time slot at the Living Lab. The benchmark consists of a set of tests, each of which contributes to the artefact's score. The time slot for benchmark testing is divided into three parts.

In the first part, the competing team will deploy and configure their artefact in the living lab. This part should last no more than 60 minutes.

In the second part, the benchmark will be applied (during this phase the competitors will have the opportunity to perform only short reconfigurations of their systems). The localization systems will be evaluated in three phases:

Phase 1: In this phase each team must locate a single person inside an Area of Interest (AoI). In a typical AAL scenario, an AoI could be a specific room (bathroom, bedroom), the area in front of the kitchen, etc. (see the Living Lab and Technical Infrastructure document for the official AoIs).
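As a rough illustration of the Phase 1 task, an AoI lookup can be reduced to a containment test on the floor plan. The AoI names and rectangle coordinates below are purely illustrative assumptions, not the official AoIs from the Living Lab and Technical Infrastructure document.

```python
# Minimal sketch of an Area-of-Interest (AoI) containment check.
# AoI names and rectangle coordinates are hypothetical examples.

AOIS = {
    "bathroom": (0.0, 0.0, 2.5, 3.0),   # (x_min, y_min, x_max, y_max) in metres
    "bedroom": (2.5, 0.0, 6.0, 4.0),
}

def locate_aoi(x, y, aois=AOIS):
    """Return the name of the first AoI containing point (x, y), or None."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

print(locate_aoi(1.0, 1.5))  # -> bathroom
```

Real AoIs need not be axis-aligned rectangles; irregular rooms would call for a polygon containment test instead.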

Phase 2: In this phase a single moving person must be located and tracked. Measurements relative to the 2D floor plan will be compared against a reference localization system inside the Living Lab. During this phase only a single person to be localized will be inside the Living Lab. Each localization system should produce localization data at a frequency of 2 Hz. The path followed by the person will be the same for each test, and it will not be disclosed to competitors before the benchmarks are applied. The environment will be similar to a household: there will be typical appliances, a nearby WiFi AP, cellular phones switched on, etc.
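Comparing the competitor's 2 Hz position stream against the reference system amounts to computing a per-sample 2D error. The sketch below assumes both streams are already aligned on the same 2 Hz timestamps and uses the mean Euclidean error as an example metric; the official error metric and alignment procedure are defined by the organizers, not here.

```python
import math

def mean_error(estimates, reference):
    """Mean Euclidean distance between estimated and reference 2D positions.

    Both inputs are lists of (x, y) tuples in metres, assumed to be
    sampled at the same 2 Hz timestamps and of equal length.
    """
    errors = [math.hypot(ex - rx, ey - ry)
              for (ex, ey), (rx, ry) in zip(estimates, reference)]
    return sum(errors) / len(errors)

est = [(1.0, 1.0), (1.5, 1.2)]   # competitor output (hypothetical)
ref = [(1.0, 1.1), (1.4, 1.2)]   # reference system (hypothetical)
print(round(mean_error(est, ref), 3))  # -> 0.1
```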

Phase 3: This phase is organized like the former, but the competing artefacts will be evaluated in the presence of another (untracked and uninstrumented) person moving inside the Living Lab. Only one person must be localized by the competing artefacts; the second person will follow a predefined path, unknown to the competitors, starting 5 seconds after the actor starts moving. If the disturber generates an event, he or she will be at least 2 metres away from the actor.

Further details about the paths might be disclosed to all competitors in advance. Moreover, during all phases, the actor may move the appliances in order to make the scenario as close as possible to a real-life scenario.

In the last part, the team will remove its artefact from the Living Lab in order to enable the installation of the next competing artefact.

Teams that fail to meet the deadlines will be awarded the minimum score for part 1 or 3 respectively, and teams that do not complete the tests will be awarded the minimum score for the missing tests.

Evaluation criteria:

In order to evaluate the competing localization systems, the Technical Program Committee (TPC) will apply the following evaluation criteria:

Accuracy (weight 0.35)

Installation complexity (weight 0.10)

User acceptance (weight 0.2)

Availability (weight 0.2)

Interoperability with AAL systems (weight 0.15)

For each criterion, a weighted numerical score will be awarded and added to the overall score.
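The weighted aggregation can be sketched as a dot product between the criterion weights listed above and the per-criterion scores. Only the weights come from this document; the criterion keys, the 0–10 scale, and the example scores below are illustrative assumptions.

```python
# Weights taken from the evaluation criteria list; everything else is hypothetical.
WEIGHTS = {
    "accuracy": 0.35,
    "installation_complexity": 0.10,
    "user_acceptance": 0.20,
    "availability": 0.20,
    "interoperability": 0.15,
}

def overall_score(scores, weights=WEIGHTS):
    """Weighted sum of per-criterion scores (assumed here on a 0-10 scale)."""
    return sum(weights[c] * scores[c] for c in weights)

example = {"accuracy": 8.0, "installation_complexity": 6.0,
           "user_acceptance": 7.0, "availability": 9.0, "interoperability": 5.0}
print(round(overall_score(example), 2))  # -> 7.35
```

Note that the weights sum to 1.0, so the overall score stays on the same scale as the individual criterion scores.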

Where possible, the scores will be measured by direct observation or technical measurement. Where this is not possible, the score will be determined by the Evaluation Committee (EC). The EC will be composed of volunteer members of the Technical Program Committee (TPC) and will be present during the competition at the Living Lab.

The EC will ensure that the benchmark tests are applied correctly to each artefact.

Once both benchmark testing and EC evaluation are completed, the overall score for each artefact will be calculated using the weightings shown above. All final scores will be disclosed at the end of the competition, and the artefacts ranked according to this final score.