In the Internet of Things area, sensor-based smart environments are becoming increasingly ubiquitous. Smart environments can augment users' cognitive abilities and assist them in various tasks, e.g., assembly or cooking.

However, programming applications for smart environments still requires a lot of effort, as many sensors
need to be programmed and synchronized. In this work, we present a novel approach for programming procedures in smart environments by demonstrating a task. We define abstract high-level areas that are triggered by the user while performing a task. Based on the sequence of triggered areas, projected instructions for performing the task again are created automatically. These instructions can then be shared with other users, who can follow them to learn how to assemble a product or cook a meal.
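The core idea above — recording a demonstration as a sequence of triggered areas and replaying it as step-by-step instructions — can be sketched roughly as follows. This is a minimal illustrative sketch with hypothetical names (`Area`, `Procedure`), not the actual implementation described in this work:

```python
from dataclasses import dataclass, field

@dataclass
class Area:
    # A high-level region of the workspace, e.g. a parts bin or the work surface.
    name: str

@dataclass
class Procedure:
    steps: list = field(default_factory=list)

    def record_trigger(self, area: Area) -> None:
        # Each time the demonstrator triggers an area, append it as a step.
        self.steps.append(area)

    def to_instructions(self) -> list:
        # Turn the recorded sequence into human-readable hints that could
        # be projected into the environment for another user.
        return [f"Step {i + 1}: go to {a.name}" for i, a in enumerate(self.steps)]

# Demonstration phase: the user triggers three areas in order.
proc = Procedure()
for name in ["parts bin A", "parts bin B", "work surface"]:
    proc.record_trigger(Area(name))

# Replay phase: the recorded procedure becomes projected instructions.
print(proc.to_instructions())
```

In a real system, `record_trigger` would be driven by the optical sensors detecting the user's hand entering an area rather than by explicit calls.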

We present a prototypical implementation of a smart environment using optical sensors and show how it can be used in a smart factory and in a smart kitchen.