The integration of forecasting and decision-making in real-time Decision-Support Systems (DSS) provides operators of water resources systems with a powerful tool for evaluating the future control of hydraulic structures. Decisions may be supported by presenting information about predicted disturbances, e.g. inflows into the water system, by enabling the operator to try out future trajectories of structure control, or by suggesting an optimum control based on predictive controllers. Ongoing work on the management of flood events is undertaken under the programme Flood Control 2015 (FC2015). This MSc thesis research was supervised jointly by the Operational Water Management research group of Delft University of Technology and the research institute Deltares.
The aim of the MSc project is the transfer and extension of real-time DSS knowledge and techniques to a typical Dutch canal system such as the Twentekanalen, using simulation tools in development at Deltares.
The main research objective is to assess the potential of DSS in this context and to investigate and verify a robust concept for applying Model Predictive Control (MPC) on canal systems, taking into account missing or erroneous data by applying Data Assimilation (DA) techniques.
The main system characteristics and relevant processes of the Twentekanalen system are the following:
• Three canals connected by locks, in which the water level needs to be controlled.
• The water level is chiefly governed by the operation of the locks, which must turn for ships to pass, each time discharging a large quantity of water compared to the other water flows in the system. Measurements of water levels and flows at the locks are relatively complete.
• The water level is regulated by pumps and discharge structures at the locks.
• Other water flows that occur in the system are lateral inflow and outflow. The measurements of these flows are relatively incomplete.
At the start of the research a set of tools was available at Deltares: Delft-FEWS, a data management system, and RTC Tools, a reservoir routing model in development that was later extended with Data Assimilation capabilities. Near the end of the research a detailed model of the system in Sobek, a 1D and 2D water flow model, became available.
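To make the reservoir routing concept behind RTC Tools concrete, the sketch below treats a single canal reach as a storage bucket whose level follows the water balance dV/dt = inflows − outflows. It is a minimal illustration with hypothetical names and values, not the actual RTC Tools implementation.

```python
# Minimal reservoir-routing sketch for one canal reach (hypothetical values;
# not the actual RTC Tools implementation). The reach is a storage bucket:
# d(storage)/dt = inflows - outflows.

def step_reach(level, q_lock, q_lateral, q_pump, area, dt):
    """Advance the reach water level one time step (explicit Euler).

    level     -- current water level [m]
    q_lock    -- discharge from lock turning [m3/s], positive into the reach
    q_lateral -- net lateral inflow [m3/s]
    q_pump    -- pumped/spilled discharge out of the reach [m3/s]
    area      -- storage surface area of the reach [m2]
    dt        -- time step [s]
    """
    dV = (q_lock + q_lateral - q_pump) * dt  # volume change [m3]
    return level + dV / area                 # linear storage relation assumed

# Example: one 15-minute step on a reach with 2.0e6 m2 of surface area.
new_level = step_reach(level=10.0, q_lock=15.0, q_lateral=-2.0,
                       q_pump=10.0, area=2.0e6, dt=900.0)  # ~10.0014 m
```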
A model framework has been designed to assess the potential of applying MPC and DA in a DSS for such a system. The incremental design and verification of this model framework has been the core of this research. The novel element is the addition of Data Assimilation techniques to Model Predictive Control.
In order to show the added value of DA and verify its implementation, a verification approach is needed that addresses the other components in the framework as well.
The first method taken to achieve this was to set up the MPC for the Twentekanalen and integrate it into Delft-FEWS in hindcast mode, assuming a perfect forecast. When the data set was made available it became clear that it contained large water balance errors. Adding DA improved the forecast, but with realistic settings for the DA the forecasts were still far from accurate. By creating a workaround in the DA module it was shown that especially the Eefde-Delden reach had a large balance error that correlated poorly with the known lateral flows.
Considering the low quality of the data set, it was decided to expand the scope of the research and replace the data set by an accurate hydraulic model that became available near the end of this research. This model still uses measurements from the Twentekanalen system as input, but with internal controllers regulating the pump and spill structures the water balance is maintained. With a further extension to inject known errors into the system, a thorough investigation of the effects of Data Assimilation and Model Predictive Control can be executed.
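The effect of state updating on such an injected error can be illustrated with a minimal sketch, reusing the step_reach function from the sketch above; the gain and error values are hypothetical and only serve to show the mechanism.

```python
# Sketch of Data Assimilation by state updating with an injected error
# (hypothetical gain and bias; illustrative only). The "true" system carries
# an unknown lateral-flow bias; a Kalman-style correction nudges the modelled
# level towards the measured level each step.

K = 0.3       # assimilation gain: 0 = no updating, 1 = replace by measurement
bias = 1.5    # injected lateral-flow error, unknown to the model [m3/s]
true_level = model_level = 10.0

for _ in range(96):  # one day of 15-minute steps
    true_level = step_reach(true_level, 15.0, -2.0 + bias, 10.0, 2.0e6, 900.0)
    model_level = step_reach(model_level, 15.0, -2.0, 10.0, 2.0e6, 900.0)
    measurement = true_level                        # noise-free for simplicity
    model_level += K * (measurement - model_level)  # state update

print(f"residual error: {abs(true_level - model_level) * 1000:.1f} mm")
```

Without updating, the bias would accumulate to roughly 65 mm of level error over this day; with the update, the residual settles at (1 − K)/K times the per-step error growth, here around 1.6 mm.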
First results from this expanded approach are promising, but because of practical implementation issues with conflicting software modules, the full results will not be available within this research.
Conclusions: From a theoretical point of view DA has a lot of potential. State updating solves an important issue of real-time control: keeping the model state as close as possible to the real system state. Model training by parameter updating can be a good way to increase model forecasting performance. Online parameter updating can be very effective in systems where a high correlation occurs between measurements and unmeasured processes. These elements make the model more robust, as it can adapt to changing conditions. This also provides the model developer with interesting feedback on the workings of the modelled water system.
From a practical point of view DA has shown improvements in the performance of the DSS as designed within this thesis project. However, because of the large errors in the measurements it is difficult to translate these improvements to their effects in other systems. Implementation of the designed model framework would give a more satisfactory answer to that question.
Recommendations have been made for improvements of the RTC Tools module, for the development of prediction modules for the Twentekanalen system, and for further research using the developed framework with the models, scripts and programs written for this research. Most important are obtaining predictions and real-time measurements of lock turning in the Twentekanalen system, and increasing the flexibility of model design in RTC Tools.

Introduction and problem definition
The IJsselmeer is located in the center of the Netherlands. Because of its relevance to the Dutch economy and society, it is often referred to as the Wet Heart of the country. Looking into the future, the IJsselmeer is under threat from climate change. Wetter winters will bring more water into the system, while sea level rise will lower the gravity discharge to the Waddenzee; together these will generate safety issues. Summers, on the other hand, will be drier, putting the satisfaction of water demand in danger.
Research approach and research question
The goal of the research is to define for the IJsselmeer a dynamic target water level, variable throughout the year, by means of an optimization approach. The optimization uses a single objective function considering dike safety and water demand. This approach was chosen because it follows a different path than the ones mainly used so far to tackle the issue. Where management measures alone are not enough to define a climate-proof IJsselmeer, extra measures are taken into consideration: a pumping station at the Afsluitdijk and early storage in March.
The main research question asks for an evaluation of the optimization methodology used to define efficient alternatives for the IJsselmeer. The sub-question asks for an assessment of the flexibility of the IJsselmeer towards a climate-proof system, and for the definition of extra measures where needed.
Methodology
The definition of the optimum measures is achieved in several steps. First the objective of the problem owner is defined. The Dienst IJsselmeergebied is the only problem owner; its interests are safety and the satisfaction of water demand. Indicators are then derived from the objectives and merged into the objective function. Classes of measures are selected, and a model of the system is designed for their evaluation. Finally, the optimization problems are defined in order to design the optimum alternatives.
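A minimal sketch of what such a merged single-objective function could look like is given below; the indicator shapes, weights and water levels are invented for illustration and are not those used in the study.

```python
# Hypothetical single-objective function merging a dike-safety indicator and a
# water-demand indicator into one scalar cost (all numbers are illustrative).
import numpy as np
from scipy.optimize import minimize

SAFE_LEVEL = -0.30                                            # safest target level [m NAP]
DEMAND_TARGET = np.array([0.0] * 3 + [0.10] * 6 + [0.0] * 3)  # summer needs [m NAP]
W_SAFETY, W_DEMAND = 1.0, 5.0                                 # relative weights

def objective(levels):
    """levels: 12 monthly target water levels for one year."""
    safety = np.sum((levels - SAFE_LEVEL) ** 2)        # dike-safety penalty
    deficit = np.maximum(DEMAND_TARGET - levels, 0.0)  # unmet demand
    return W_SAFETY * safety + W_DEMAND * np.sum(deficit ** 2)

res = minimize(objective, x0=np.full(12, SAFE_LEVEL), method="Nelder-Mead")
print(np.round(res.x, 3))  # optimised monthly target water levels
```

Even in this toy version the optimum is a dynamic target level: winter months stay at the safe level, while summer months settle between the safety and demand preferences.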
Results
A different planning of the target water level alone is not able to satisfy the needs of safety and water demand in the long term. As it is now, the IJsselmeer is flexible in the short term, but not enough to accommodate the impacts over longer horizons: extra measures are needed in order to define a climate-proof system in 2050 and 2100. A pumping station at the Afsluitdijk is an effective measure to guarantee safety under all scenarios. Early storage in March is effective for the medium horizon (2050) but needs high target water levels during the summer for the long term (2100). This might generate safety issues.
Even though applied to a simplified case, the optimization methodology manages to paint a realistic picture of the flexibility of the IJsselmeer, and retrieves efficient options for possible future strategies. For these reasons, the present research can be considered a successful implementation of an optimization approach for the IJsselmeer.
Conclusions and recommendations
For the short term it is recommended to use the flexibility of the system, implementing the changes in summer target water levels that would allow better satisfaction of water demand.
For the medium/long term, options for early storage need to be investigated together with the required summer target water levels. This would probably require reinforcement of the dikes. Options for safety can then be defined for the new reinforced system, considering combinations of a pumping station and raising of the dikes.
A more extensive and detailed optimization tool should be realized for the IJsselmeer, and applied for the definition of the measures above. In particular it is recommended to use a multi-objective analysis and include costs in the definition of the indicators.

This graduation project offers an alternative to the tradition of top-down planning in the Netherlands. The test case for this strategy is the transformation of the former industrial area ‘Schieoevers’ in Delft. Both a spatial framework and a policy framework are created to support bottom-up development of this project location. The spatial framework is an urban design of the main networks, and the policy framework comprises the rules for developing an area enclosed by this network. The bottom-up development is a form of self-organization in which a group of private and public initiatives can develop an area (unit) of the project location. The ‘Schieoevers’ will be developed into a pedestrian-based neighbourhood with a mix of living and working on a small scale. The rules for self-organization should guide the development in this desired direction. To test the bottom-up strategy for densification, three games were organized to simulate the self-organization process.

This three-part paper discusses the analysis and control of legged locomotion in terms of N-step capturability: the ability of a legged system to come to a stop without falling by taking N or fewer steps. We consider this ability to be crucial to legged locomotion and a useful, yet not overly restrictive criterion for stability.
Part 1 introduces the theoretical framework for assessing N-step capturability. Formal definitions of N-step capturability and related terms are given, and general disturbance robustness metrics based on capturability are proposed.
Part 2 uses the theoretical framework developed in Part 1 to analyze the N-step capturability of three simple gait models.
Part 3 describes how the results for the simple models were used to control a complex lower-body humanoid robot with two six-degree-of-freedom legs.
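For the simplest such model, the Linear Inverted Pendulum (LIP), 0-step capturability reduces to the well-known instantaneous capture point; a brief restatement for illustration (standard in this line of work, and assuming the LIP is among the simple gait models considered):

$$ r_{ic} = r + \frac{\dot{r}}{\omega_0}, \qquad \omega_0 = \sqrt{\frac{g}{z_0}} $$

Here $r$ and $\dot{r}$ are the position and velocity of the center of mass projected onto the ground, $g$ is the gravitational acceleration and $z_0$ the constant center-of-mass height. The LIP is 0-step capturable if it can place and keep its center of pressure on $r_{ic}$, which is why the distance from $r_{ic}$ to the reachable base of support is a natural capturability-based robustness margin.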

Endpoint stiffness is known to increase in the direction of instability while learning to accurately execute unstable motor tasks. Reflexes might be present in the time interval used for endpoint stiffness calculations, suggesting a possible influence of reflexes on endpoint stiffness measurements. In addition, changes of reflexes during motor learning are still unknown. The purpose of this research was to investigate the separate contributions of intrinsic and reflexive stiffness to the observed change in endpoint stiffness during learning to move the hand in unstable force fields. A divergent (unstable) force field (DF) was applied with a two-degrees-of-freedom manipulator (ARMANDA). Subjects performed 100 point-to-point arm movements in a null field and 300 point-to-point arm movements in the divergent force field, holding the manipulator with their right hand. In random (catch) trials a minimum-jerk position perturbation was applied in the middle of the movement. Force and EMG responses to the perturbation were used to examine endpoint stiffness and reflexes. Endpoint stiffness is defined as the force response to the imposed perturbation (in the interval 160-200 ms after perturbation onset) divided by the position displacement. Unperturbed trials were analyzed to investigate the decrease of errors (deviations from the straight path between start and target) and changes of co-contraction with motor learning. We found that errors decreased over the first 150 movements in the DF and levelled off from then on. Intrinsic, reflexive and endpoint stiffness increased rapidly before the 35th DF trial. No significant changes of the stiffness parameters were found after this first learning period. An additional investigation of reflex response timing showed variability in reflex timing between subjects. For some subjects, reflexes were seen to influence the endpoint stiffness measurements. For other subjects, reflexes were not present in the time interval used for the endpoint stiffness calculation, and therefore no reflexive contribution to endpoint stiffness was assumed. In conclusion, our results showed a rapid increase of all stiffness parameters during the early phase of learning and suggest the involvement of other (unknown) mechanisms in the later learning phase. Because of the observed variations in reflex response timing, we recommend always including reflex analysis in endpoint stiffness measurements.
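As an illustration of the stiffness definition above, the sketch below computes the endpoint stiffness along the perturbation direction from the 160-200 ms window; the variable names, sampling rate and scalar (single-direction) simplification are assumptions, not the study's actual analysis code.

```python
# Endpoint stiffness along the perturbation direction: force response in the
# 160-200 ms window after perturbation onset divided by the displacement
# (sampling rate and array layout are assumed; illustrative only).
import numpy as np

FS = 1000                                         # sampling rate [Hz], assumed
WINDOW = slice(int(0.160 * FS), int(0.200 * FS))  # 160-200 ms after onset

def endpoint_stiffness(force_pert, force_unpert, displacement):
    """Scalar stiffness [N/m] from one perturbed catch trial.

    force_pert, force_unpert -- (n_samples, 2) hand force [N] in perturbed and
                                matched unperturbed trials, aligned at onset
    displacement             -- (n_samples, 2) imposed hand displacement [m]
    """
    dF = np.mean(force_pert[WINDOW] - force_unpert[WINDOW], axis=0)
    dx = np.mean(displacement[WINDOW], axis=0)
    u = dx / np.linalg.norm(dx)                 # perturbation direction
    return -np.dot(dF, u) / np.linalg.norm(dx)  # restoring force per metre
```

Recovering the full 2x2 endpoint stiffness matrix would require perturbations in at least two independent directions; the point of the reflex analysis above is that reflexive force components can leak into this window and contribute to the estimate.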

Since most muscle-tendons in the finger cross multiple joints, the moment arms (MAs) of these muscle-tendons have a major influence on the distribution of joint moments over these joints. This is crucial for finger balance and therefore for finger functioning. Recent developments in ultrasonic measurement techniques have made it possible to accurately measure tendon displacements over a large range of motion in vivo. The change in tendon displacement divided by the change in corresponding joint angle (dl/dθ) was used to estimate joint-angle-dependent MAs (MAest) of the flexor digitorum superficialis (FDS), flexor digitorum profundus (FDP) and extensor digitorum communis (EDC) tendons at the metacarpophalangeal (MCP) joint of the long finger. In addition, MAs were obtained from the joint geometry of each individual subject (MAgeo). Two sessions of 3 repetitions of active flexion-extension motion of the long finger were conducted by each of the five subjects enrolled in this study. The intra-subject repeatability between the two sessions at 0 degrees of flexion was good for the FDS and EDC tendons (ICC > 0.95, p = 0.004 and 0.101 respectively) but rather weak for the FDP (ICC = 0.630, p = 0.157). The obtained MAest values were underestimated in comparison to the MAgeo values. The method thus proved promising, but should be further validated and developed to yield sufficiently reliable results. Once fully developed, the in-vivo estimation of MAs can be used to determine subject-specific muscle MA balances around the finger joints. These MA balances can be an important parameter for a musculoskeletal finger model. In addition, they can be a generic measure for finger pathologies in which finger balance is disturbed.
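A sketch of the tendon-excursion computation described above follows: the moment arm is estimated as the local slope dl/dθ of tendon displacement versus joint angle. The synthetic data and window size are invented for illustration.

```python
# Tendon-excursion method: MA ~= dl/dtheta, estimated as the local regression
# slope of tendon displacement on joint angle (synthetic data; real input
# would be the ultrasound tendon-displacement and joint-angle recordings).
import numpy as np

def moment_arm(theta_deg, tendon_disp_mm, theta_eval_deg, window_deg=10.0):
    """Estimate the moment arm [mm] at theta_eval_deg by fitting a line to
    tendon displacement vs. joint angle within +/- window_deg."""
    mask = np.abs(theta_deg - theta_eval_deg) <= window_deg
    slope, _ = np.polyfit(np.radians(theta_deg[mask]), tendon_disp_mm[mask], 1)
    return slope  # dl/dtheta [mm/rad], numerically the MA in mm

# Synthetic check: a constant 11 mm moment arm should be recovered exactly.
angles = np.linspace(-10.0, 60.0, 200)  # MCP flexion angle [deg]
excursion = 11.0 * np.radians(angles)   # tendon displacement [mm]
print(moment_arm(angles, excursion, theta_eval_deg=0.0))  # ~11.0
```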

This thesis deals with the separation of freeway traffic using dynamic lane assignment based on a vehicle's destination: whether or not it takes the next downstream exit. At freeway exits a large number of lane changes leads to capacity reduction. Furthermore, when the flow to the exit exceeds the exit capacity (like at an IKEA on Sundays), a queue forms that spills back onto the freeway. If no measures are taken, congestion spreads over all freeway lanes. Traffic flow theory shows that separating exiting traffic from through-going traffic can prevent this total roadway blockage when increasing the exit capacity is not possible. Compared with existing measures, this study explores the opportunities and drawbacks of dynamic flow separation on one roadway. Here dynamic means variable in time and space, so no physical or static separation is used. The goal of this type of separation is to improve outflows for through-going traffic near oversaturated off-ramps.
To this end, two traffic controllers have been designed in this study. Through-going vehicles are guided away from the rightmost lane, while exiting vehicles are guided to the rightmost lane. The dynamic aspect is that the length of this separation measure is based on the location of the tail of the queue. Both controllers switch on when vehicle speeds drop below a threshold value at a specified location. In the first control strategy (a feedback controller) the length of the dynamic separation upstream of the exit is determined by a fixed offset distance upstream of the measured tail. In the second control strategy (a feedforward controller) the location of the tail is predicted using shockwave estimation; the separation length is then determined by the shockwave speed and direction.
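The two strategies can be summarised in a minimal sketch; the coordinate convention, threshold and parameter values below are hypothetical (the actual controllers were implemented in an adapted FOSIM).

```python
# Sketch of the two separation-length strategies (hypothetical parameters;
# positions increase in the driving direction, so the queue tail lies
# upstream of, i.e. at a smaller coordinate than, the exit).

V_THRESHOLD = 50.0  # switch-on speed threshold [km/h], assumed
OFFSET = 1000.0     # fixed offset upstream of the queue tail [m]

def controller_on(measured_speed_kmh):
    """Both controllers switch on below the speed threshold."""
    return measured_speed_kmh < V_THRESHOLD

def feedback_length(tail_pos, exit_pos):
    """Feedback: separate from a fixed offset upstream of the measured tail."""
    return (exit_pos - tail_pos) + OFFSET

def feedforward_length(tail_pos, exit_pos, shockwave_speed, horizon):
    """Feed-forward: extrapolate the tail with the estimated shockwave speed
    (negative = queue growing upstream) over the control horizon."""
    predicted_tail = tail_pos + shockwave_speed * horizon
    return (exit_pos - predicted_tail) + OFFSET

# A queue growing upstream at 5 m/s over a 60 s horizon lengthens the
# separation by 300 m relative to the feedback strategy.
print(feedback_length(8000.0, 10000.0))                 # 3000.0 m
print(feedforward_length(8000.0, 10000.0, -5.0, 60.0))  # 3300.0 m
```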
Both controllers have been tested in an adapted version of FOSIM for different flow/capacity ratios, by altering controller parameters such as the intervention location and (initial) offsets. The simple feedback controller mostly improves the outflow for through-going traffic. The advanced feedforward controller works as well, but its control behaviour is very unstable due to measurement errors in many variables. The benefit in outflow can grow to 30% with a well-specified intervention location, offsets around 1000 m and compliance rates from 80%. The results show a more uniform traffic situation near the exit, reduced congestion spillback and no total roadway congestion after implementing a controller. A clear separation in flows can be seen, resulting in higher speeds on the leftmost lane. The results are suboptimal, however, because the speeds and flows on the lane adjacent to the exit queue are low. This is due to the legal and safety constraints on the maximum speed difference between non-physically separated lanes.

Traditional content creation for computer games is a costly process. In particular, current techniques for authoring destructible behaviour are labour-intensive and often limited to a single-object basis. We aim to create an intuitive approach that allows designers to visually define destructible behaviour for objects in a reusable manner, which can then be applied in real-time.
First we present a short introduction to the way destruction has been done in games for many years. To better understand the physical processes that are being replicated, we present some information on how destruction works in the real world, and on the high-level approaches that have been developed to simulate these processes.
Using criteria gathered from industry professionals, we survey previous research work and determine its usability in a game development context. The approach that best suits these criteria is then selected as the basis for the approach presented in this work. By examining commercial solutions, the shortcomings of existing technologies are determined to establish a solution direction.
To separate destructible behaviour from particular objects, we introduce the concept of destructible materials: where the material of an object usually defines the way an object looks, a destructible material determines how it breaks. Destructible materials provide a reusable definition and an intuitive way of designing and tweaking the destructible behaviour of objects, which can then be applied in real-time.
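A hypothetical sketch of what such a material-based definition could look like follows; the field names and values are invented for illustration and do not reflect the thesis prototype.

```python
# "Destructible material" concept: breakage behaviour lives in a reusable
# material definition rather than in each object (all names/values invented).
from dataclasses import dataclass

@dataclass
class DestructibleMaterial:
    toughness: float     # impact energy [J] needed to trigger fracture
    fragment_count: int  # target number of fragments per fracture event
    pattern: str         # e.g. "voronoi" shattering vs. "splinter" cracking

@dataclass
class GameObject:
    mesh: str
    material: DestructibleMaterial  # the same material is reusable

CONCRETE = DestructibleMaterial(toughness=400.0, fragment_count=12,
                                pattern="voronoi")
pillar = GameObject(mesh="pillar.obj", material=CONCRETE)
wall = GameObject(mesh="wall.obj", material=CONCRETE)  # behaviour reused

def should_fracture(obj: GameObject, impact_energy: float) -> bool:
    """Fracture when the impact energy exceeds the material's toughness."""
    return impact_energy >= obj.material.toughness
```

The point of the separation is visible in the example: tweaking CONCRETE changes how every object made of it breaks, without touching the objects themselves.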
Using a prototype implementation we show the viability of the presented approach and how it extends previous research with reusability, making it more designer-friendly and allowing the same destructible behaviour to be easily applied to different objects. While the prototype can only apply this destructible behaviour in real-time for simple cases, it is a step in the right direction.

Serious Gaming is becoming a popular method for training and problem solving in companies. One of the companies that has taken an interest in this is ProRail. Together with the faculty of Technology, Policy and Management of Delft University of Technology, it started a project to develop a gaming simulation suite for training and decision-making purposes, called the Railway Gaming Suite. In order to connect the games and simulators of the suite, a solid architecture is needed. Three architectures were selected as candidates: Service Oriented Architectures, High Level Architecture and the FAMAS Simulation Backbone.
Using the Railway Gaming Suite as a case study, we have extracted requirements (like performance and flexibility) for an architecture for gaming simulation suites using the Architectural Trade-off Analysis Method. These requirements are used to determine the suitability of the three architectures. In this thesis the research on the suitability of Service Oriented Architectures (SOA) is presented.
A prototype SOA was created, called Service Oriented Gaming and Simulation (SOGS). This prototype was used to test the performance requirement for the evaluation. The suitability was investigated by evaluating whether SOA is able to support the requirements we found. We subsequently also compared the suitability of the other architectures. Intermediate results of this thesis project were used to support the selection of an architecture for the Railway Gaming Suite.

Due to the introduction of a new grid connection policy, transmission system operator TenneT expects congestion to arise on the Dutch transmission grid in the near future. This new connection policy was introduced by the Ministry of Economic Affairs to abolish the discrimination between existing grid users and new entrants, and should improve competition. It allows generators to be connected to the grid directly, without having to wait for transmission capacity expansions that may be required. As this could cause transmission flows as desired by market parties to exceed the available capacity, TenneT must apply congestion management in order to guarantee the safe and reliable operation of the transmission grid.
The Ministry decided that basic system redispatch should be used to manage congestion. This method was regarded as the most appropriate short-term implementable option available, but has some drawbacks nonetheless. In the existing literature it is argued that it potentially leads to high costs, that it is vulnerable to strategic bidding, and that it creates economically sub-optimal outcomes from a grid efficiency perspective. This study has quantitatively evaluated the application of the method in the Netherlands, in terms of congestion costs, their allocation, the incentives it creates, and the opportunities for (and the consequences of) generators bidding strategically. These outcomes were compared to three other congestion management methods (market splitting, market coupling, and the APX-based method), in order to assess the validity of the proposition that market-based methods, which form the current trend in Europe, lead to better outcomes.
Using a quantitative model of the Dutch electricity system, the application of all four congestion management methods was simulated. This was done under four different scenarios, each of which was based on extreme conditions that were expected to contribute to congestion in parts of the grid:
• Low wind availability in Germany
• Cheap natural gas
• Green revolution
• Code red
The simulations revealed that the transmission link between the Maasvlakte region and the Ring is most prone to become congested. However, this study also found that the resulting congestion costs will be low. This is because the variable cost levels of production units in the areas upstream and downstream of the congested grid segment were found to be very similar; a deviation from optimal dispatch therefore results in only slightly higher dispatch costs. Under the most extreme scenario conditions, in which 1292 MW needs to be redispatched from the Maasvlakte to other areas of the Netherlands, net congestion costs were found to be € 231/hr. On a yearly basis this would amount to roughly € 2 mln. (€ 231/hr × 8760 h/yr), which is significantly lower than the cost estimates found in literature, which are in the order of magnitude of € 10–100 mln.
To identify the most appropriate congestion management method for the Netherlands, multi-criteria decision analysis (MCDA) was applied to compare the methods in a pairwise manner on the basis of eleven criteria. The analysis revealed that conflicting objectives preclude the identification of a single most appropriate congestion management method. It found that the APX-based method outranks market splitting and market coupling, but it remained inconclusive with respect to the appropriateness of basic system redispatch in comparison with these methods. The policy objectives of the Ministry thus appear to differ from those presumed elsewhere and in the existing literature, given the explicit preference there for market-based methods.
In order to improve the results of this analysis, the Ministry must reassess its objectives with respect to the conflicting criteria of proportionality and long-term generator and TSO incentives. Also, additional research should improve the conclusiveness of the model results that were used for the MCDA, as this would contribute to a more conclusive recommendation on method appropriateness. In particular, such research should encompass the options for incorporating a renewable energy compensation scheme under market-based methods, and it should provide broader insight and more detailed data on the extent of congestion and the resulting consequences of strategic bidding, by constructing a more extensive, continuous, agent-based model capable of incorporating the strategies pursued by individual generators.

The position of multimodal transfer nodes on the edge of the city is of increasing importance in contemporary daily (city) life. Transfer nodes are moments of condensed movement in the mobile world, and have a high potential for human activity and social interaction. The transfer node could function as an urban node: a place for transfer, work, living, grocery shopping, leisure, travelling, meeting and staying. Aside from these potentials, multimodal transfer nodes often deal with low public space quality and insufficient integration with the (local) environment. How to solve problems like the transfer node being a non-place, or a no-go area for the local inhabitants?
This Master’s Thesis is a search for strategic spatial design interventions to develop a multimodal transfer hub into an urban sub-centre, enhancing a positive exchange between the node and its neighbouring environment.

Communication systems evolve day after day at a very fast pace. People not only have high expectations with regard to conversation quality, but they also demand higher data download speeds and better coverage. The industry tries to meet these expectations by developing state-of-the-art systems that are cost-effective and ensure good profits. Telecommunication operators require top-class, cheap and reliable equipment from vendors for their sites. Vendors, on the other hand, try to cut costs by simulating products before developing them. The aim of this project is to simulate three important wireless systems, LTE, UMTS and GSM/EDGE, at the physical layer level for base stations, according to the implementations described in the 3GPP standards. The most demanding requirements have been derived in this work for each of the transceiver systems, and a realistic system description has been implemented in MATLAB 2008b. The tolerance to RF imperfections (DC offset, I-Q amplitude and phase mismatch, cubic nonlinearity, frequency offset, phase noise, etc.) is taken into consideration. Implementation-specific RF imperfections, like the delay and amplitude misalignment in outphasing transmitters, have also been considered.
The RF imperfections have been considered in equal measure for both the transmitter and the receiver. The resulting study ensured a perfect calibration of the BER curves against the theoretical curves using the uncoded bits. The final system comparison in this thesis has been made only for the LTE standard, considering a classical IQ Tx configuration, a pure outphasing transmitter and an improved-efficiency outphasing Tx, in order to investigate which concept is more tolerant to RF impairments. The parameters used in the simulations to check system performance are: EVM, ACPR, scatter plots and BER. In conclusion, this study offers some suggestions for future research activities, related to topics like estimation, equalization, Rayleigh channels and Doppler-affected Rayleigh channels.
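To illustrate one of the impairments and metrics listed above, the sketch below applies an I-Q gain/phase mismatch to a QPSK baseband stand-in and scores it with EVM. It uses one common formulation of the impairment (in Python rather than the MATLAB used in the thesis), and all parameter values are illustrative.

```python
# I-Q amplitude/phase mismatch on a complex baseband signal, scored with EVM
# (one common impairment model; values are illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Random QPSK symbols as a stand-in for a modulated baseband waveform.
bits = rng.integers(0, 2, size=(10000, 2))
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

def iq_imbalance(x, gain_db=0.5, phase_deg=2.0):
    """Apply amplitude and phase mismatch between the I and Q branches."""
    g = 10.0 ** (gain_db / 20.0)
    phi = np.radians(phase_deg)
    i, q = x.real, x.imag
    return i + 1j * g * (q * np.cos(phi) + i * np.sin(phi))

y = iq_imbalance(x)
evm_rms = np.sqrt(np.mean(np.abs(y - x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"EVM = {100.0 * evm_rms:.2f} %")  # a few percent for these values
```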