To help applicants select one or more potential research topics for the PhD, a list of PhD thesis proposals is available (please consider that additional proposals can be obtained by contacting faculty members directly).

Description:Theatre is a highly structured environment where it is possible to exploit social relationships. A theatrical representation should be realistic and involve the audience. The design and realization of autonomous robot actors requires facing real and challenging problems, typical of autonomous robots (such as localization and scene understanding) and of Human-Robot Interaction (emotion expression, social relationships), which are useful not only in the representation, but also in the real world. This thesis continues a line of work started a few years ago that led to the implementation of a robot actor able to express emotions. The focus will now be on integrating social relationships and on understanding what is actually happening on the stage, so that the robot can face unpredicted events and improvise on a given canvas.

Description:Recent years have seen a rapid growth of interest in enterprise applications built on top of data-intensive technologies such as MapReduce/Hadoop, NoSQL databases, and stream processing systems fed by mobile and sensor data. Moreover, Cloud platform services for Big Data (e.g., Amazon Elastic MapReduce, S3, Kinesis; Microsoft HDInsights) are now creating massive growth opportunities for software vendors to develop and sell novel data-intensive cloud applications in various market segments, from predictive analytics to environmental monitoring, from e-government to smart cities. Since the software market is expected to be dominated by data-intensive cloud applications in the coming years, there is now an urgent need for novel, highly productive software engineering methodologies capable of supporting the design of data-intensive applications.
The focus of the PhD work is to define a quality-driven framework for developing data-intensive applications that leverage Big Data technologies hosted in private or public clouds. The thesis will develop a methodology and tools for data-aware quality-driven development. The work will focus on quality assessment, architecture enhancement, agile delivery and continuous monitoring of data-intensive applications.
Expertise in Model Driven Methodologies, the Hadoop/Spark technology stack, performance evaluation, optimization and operations research would be highly regarded.
The PhD work will be supported by the DICE (Developing Data-Intensive Cloud Applications with Iterative Quality Enhancements) H2020 European project.

Description:The research will focus on aggregating and presenting building monitoring information, based on Building Information Models, to provide a platform for building governance focused on energy saving.

Description:The reduction of carbon dioxide emissions targeted for the coming years is fostering an increased utilization of renewable energy sources (green energies). They not only reduce greenhouse gas emissions, but also create jobs for the community, provide energy security, and supply a virtually infinite amount of energy for the future. However, renewable energy sources pose new challenges to the energy distribution grid due to their inherently intermittent availability. We propose novel mechanisms that use data centers and cloud services, which nowadays represent one of the major consumers of electricity in the US and around the globe, to provide flexibility in managing and integrating renewable energy resources.
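As a purely illustrative sketch of the kind of flexibility mechanism envisioned (the greedy policy, job sizes and forecast values below are invented for illustration, not taken from the proposal), deferrable data-center jobs can be shifted toward the hours with the largest forecast renewable supply:

```python
# Hypothetical sketch: shift deferrable data-center jobs toward the hours
# with the most forecast renewable energy (all numbers are made up).
def schedule_deferrable(jobs_kwh, renewable_forecast_kwh):
    """Greedily place each deferrable job in the hour with the most
    remaining forecast renewable energy; returns (job, hour) pairs."""
    remaining = list(renewable_forecast_kwh)
    placement = []
    for load in sorted(jobs_kwh, reverse=True):   # largest jobs first
        hour = max(range(len(remaining)), key=lambda h: remaining[h])
        remaining[hour] -= load
        placement.append((load, hour))
    return placement

# Three batch jobs (kWh) against a four-hour solar forecast (kWh):
plan = schedule_deferrable([30, 10, 20], [5, 50, 40, 10])
```

A real mechanism would of course also account for job deadlines, electricity prices and grid signals; the sketch only shows the basic load-shifting idea.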

Description:PhD students accepted into the program will work on the EU-funded COMBO project.
COMBO will propose and investigate new integrated approaches for Fixed / Mobile Converged (FMC) broadband access / aggregation networks for different scenarios (dense urban, urban, rural). COMBO architectures will be based on joint optimisation of fixed and mobile access / aggregation networks around the innovative concept of Next Generation Point of Presence (NG-POP).

Description:The thousandfold increase in cellular traffic expected over the next decade, with traffic roughly doubling every year, is feasible only by making LTE base stations pervasive over the territory and interoperable with other protocols. This is the essence of the HetNet paradigm. This viral deployment of small cells over the coming years requires large-capacity backhauling systems such as cables (optical fiber and copper). The PhD research will investigate novel networking architectures and methodologies to relay wireless signals over cable (WoC), integrating the existing last-mile copper into the HetNet to handle multiple small cells as a distributed antenna system. Compared to conventional femto or small-cell systems, WoC is expected to have benefits in terms of cost and flexibility, with a very long obsolescence cycle. In addition, WoC can be easily integrated into network architectures for large-scale antenna systems.
The PhD research will also consider backhauling for LTE/LTE-A where densely deployed micro/femto-cells are mutually connected (mesh-like) using E-band mm-wave over unlicensed spectrum. Specific topics to be addressed include mesh and redundant mm-wave systems, self-calibrating array-processing algorithms, and other radio-access technologies mixing access and backhauling, similar to the so-called "cloud-RAN".
Some papers are available at the web pages of proponents.

Description:The PhD proposal considers wireless communications in a dense relay node scenario where messages are flooded via dense, massively air-interacting nodes in a self-contained cloud. A complex infrastructure cloud creates an equivalent air-interface to the terminal which is as simple as possible. Source and destination air-interfaces are completely blind to the cloud's network structure. The cloud has its own self-contained organizing and processing capability. This concept facilitates energy-efficient, high-throughput and low-latency network communication performed directly at the PHY layer, capable of operating in complicated, dense, randomly defined network topologies and domains. Potential applications range from intelligent transport systems to healthcare and even machine-type communication in wireless networks. This research will focus in particular on industrial monitoring and control applications. The use of the new LTE-Direct protocol will also be investigated.
The aim of the PhD thesis will be the theoretical foundation for cooperative signal processing algorithms that enable the large set of nodes to self-organize and set up efficient connectivity for high-performance information transfer between the two terminal nodes. The research activity will consider in particular the definition of fundamental cooperative techniques for distributed learning of the network state (e.g., timing, location, channel state information). The application scenario and the network model are compliant with the cloud network of the European Research Project Diwine - Dense Cooperative Wireless Cloud Network, 2010-2013 (www.diwine-project.eu).
Some papers are available at the web pages of proponents.

Description:Models play a central role in software engineering. They may be used to reason about requirements, to identify possible missing parts or conflicts. They may be used at design time to analyze the effects and trade-offs of different architectural choices before starting an implementation, anticipating the discovery of possible defects that might be uncovered at later stages, when they might be difficult or very expensive to remove. They may also be used at run time to support continuous monitoring of compliance of the running system with respect to the desired model.
However, models are abstractions of real systems, hence design-time predictions need to be validated once the system is deployed in a real environment. For example, in cloud environments, design-time assumptions on the mix of requests in operation might differ from those observed in the production system, depending on customer preferences. Similarly, the performance or reliability profile of certain Cloud resources in practice may differ from the figures assumed at design time.
The aim of this thesis is to define a feedback loop between the operational systems deployed in the cloud and software design. Quality and cost models used at design time will be kept alive at run time and refined by exploiting the information gathered by monitoring the underlying cloud system. The feedback loop will integrate run-time data into design-time models for fine tuning and will provide recommendations to the software designer to improve the design-time QoS and cost estimates.

Description:Cloud computing is an emerging paradigm which allows the on-demand delivery of software, hardware and data as services, providing end-users with flexible and scalable services accessible through the Internet. Providers deliver these services according to three different models: (1) Infrastructure as a Service (IaaS), (2) Platform as a Service (PaaS) and (3) Software as a Service (SaaS). The IaaS model provides disk storage, databases and/or computation time on demand from Internet-based Data Centers. PaaS allows developers to build and deploy cloud applications exploiting advanced middleware solutions, while the SaaS model provides the opportunity to use/integrate full software applications.
Several issues emerge in this framework due to the changing environment in which cloud-based services live. First of all, at any time instant resources have to be allocated to handle workload fluctuations, since continuous changes occur autonomously and unpredictably. Furthermore, end-users must be guaranteed the Quality of Service (QoS) levels stipulated in Service Level Agreement (SLA) contracts, usually expressed in terms of performance metrics (e.g., response time and throughput) and availability.
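As a hedged illustration of how such SLA performance metrics are often estimated at design time (the proposal does not prescribe this model), a single Cloud resource can be approximated as an M/M/1 queue, whose mean response time is R = 1/(μ − λ):

```python
# Classic M/M/1 approximation of one resource, used here only to
# illustrate a design-time response-time estimate against an SLA.
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time R = 1 / (mu - lambda) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# 80 req/s offered to a server that can serve 100 req/s:
r = mm1_response_time(80.0, 100.0)
```

Such closed-form estimates are exactly what run-time monitoring can later contradict, motivating the adaptive allocation studied in this proposal.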
The Ph.D. work will take the perspective of SaaS providers, which deploy their applications on multiple Clouds and want to maximize their profit while minimizing the cost of the underlying resources. Indeed, since (i) Cloud performance can vary at any point in time, (ii) elasticity may not ramp at the desired speed, and (iii) unavailability problems exist even when 99.9% up-time is advertised (see, e.g., the Amazon EC2 and Microsoft Office 365 outages in 2011), the use of multiple Clouds offered by different providers is needed to support the execution of business-critical applications.
The PhD project will investigate the possibility to distribute the workload among multiple IaaS/PaaS Data Centers, allocating resources at the Data Centers that are less expensive at the considered time instant. Secondly, the interaction among multiple SaaS providers sharing a common PaaS/IaaS infrastructure will be analyzed. New optimization algorithms based on Lagrangian decomposition and distributed techniques will be developed, considering the possibility to redirect workload to the most suitable Data Center and to determine application resource allocation at different time scales.
The interaction among SaaS/PaaS/IaaS will be grounded on game-theoretic methods and approaches. In Cloud systems, SaaS providers behave selfishly and compete with one another for the infrastructural resources supplied by the PaaS/IaaS. To capture their behavior in this conflicting situation, in which the best choice for one depends on the choices of the others, the Generalized Nash Equilibrium concept will be used.
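To make the equilibrium idea concrete, here is a minimal sketch under an assumed toy model (the payoff function, capacity C and best-response dynamics are invented for illustration; the thesis targets the more general Generalized Nash setting with shared constraints):

```python
# Assumed toy model: two SaaS providers pick request rates x1, x2 on shared
# infrastructure of capacity C; provider i maximizes revenue a_i*x_i minus
# a congestion cost x_i*(x_i + x_j)/C.
def best_response(a_i, x_j, capacity):
    # argmax_x of a_i*x - x*(x + x_j)/capacity  ->  x = (a_i*capacity - x_j)/2
    return max(0.0, min(capacity, (a_i * capacity - x_j) / 2.0))

def equilibrium(a1, a2, capacity, rounds=200):
    """Best-response dynamics: alternate optimal replies until they settle."""
    x1 = x2 = 0.0
    for _ in range(rounds):
        x1 = best_response(a1, x2, capacity)
        x2 = best_response(a2, x1, capacity)
    return x1, x2

x1, x2 = equilibrium(1.0, 1.0, 100.0)   # symmetric case settles at C/3 each
```

The fixed point of the two best responses is exactly a Nash equilibrium of this toy game; a Generalized Nash problem additionally couples the players through shared feasibility constraints (e.g., total capacity), which is what the proposed algorithms must handle.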
The effectiveness of the solutions proposed in the Ph.D. work will be evaluated through extensive experimentation on realistic scenarios, for a variety of system and workload configurations, through simulation and by running tests in real Cloud environments.
Expertise in performance evaluation, optimization and operations research would be highly regarded.

Description:In the context of the European Project SysSec we are looking for doctoral candidates willing to work on the creation of algorithms and tools for automating the reverse engineering and analysis of mobile malware.
The work will range from low-level disassembly and reverse engineering to behaviour-oriented analysis, through the use of different types of machine learning algorithms.

Description:The aim of this proposal is to develop methods for the analysis and simulation of energy systems (such as smart grids), with particular focus on the development of algorithms for optimisation and fault forecasting.

Description:Interactive games involving people and autonomous robots are one of the most challenging robotic research issues. The robot should engage the human player, both physically and behaviorally, while acting in the real world. The task is made even harder by market constraints, which require the robot to cost no more than a videogame console. This research requires developments in Robotics technologies and, at the same time, the study of a novel framework of interaction. According to the competences of the candidate(s), the accent might be on robot shape, movement, behavior, interaction, signals, or game design.

Description:A huge amount of biomedical information is available today, and numerous biomolecular data are continuously generated in genomic and proteomic tests via advanced nano-biotechnologies. A great part of bioinformatics concerns the management and mining of such data and information, in order to support a better understanding of complex biological patho-physiological phenomena. A PhD student majoring in bioinformatics data reasoning will first focus on bioinformatics data integration and analysis: examining the different data types and formats available in many biomolecular databanks, exploring the bio-terminologies and bio-ontologies adopted to describe current biomolecular knowledge in a controlled way, and investigating different data and information analysis algorithms to evaluate the available knowledge for specific gene and protein sets and infer new biomedical information. The student will then develop (parts of) a technological framework for experimenting with bioinformatics data reasoning in real biomedical scenarios. This research is part of the VirtualLab project (http://dbgroup.elet.polimi.it/tesi/VirtualLab.pdf).

Description:The Genomic Computing project, started in the fall of 2012, focuses on big data analysis and Web technology for genomic data. It is conducted in collaboration with IEO-IIT (http://www.ifom-ieo-campus.it/), and will take advantage of next-generation sequencing technology, which opens the way to radical innovation in biological and medical research. The project aims at building a powerful data computing infrastructure at the receiving end of DNA sequencing machines, enabling viewing, querying, analyzing, mining, and searching over a world-wide available collection of genomic data. The project goal is generating an open-source, standard, highly efficient and easily usable computational framework - the "Internet of genomes" - to support scientists in genome-related research. Genomic Computing is a follow-up of Search Computing (http://www.search-computing.org/), a project funded by the European Research Council (ERC).
PhD students will be part of a multidisciplinary team with students from DEIB, IEO-IIT and from the Department of Mathematics of Politecnico di Milano; a special instruction track, specifically designed for this team, focuses on basic biological knowledge, on big data management and Web technologies, and on interdisciplinary activities. We are seeking highly motivated candidates who will have the opportunity to develop an interdisciplinary research project, working in close collaboration with many scientists and researchers.

Description:The PhD research program will be centered on the study, design and experimental characterization of new topologies of CMOS mixed-signal Integrated Circuits for the readout of high-performance semiconductor detectors for X-ray spectroscopy applications. The research will focus on identifying and studying all the critical aspects in order to develop new solutions intended to realize circuits with superior performance in terms of low power consumption, ultra-low noise and high signal-rate processing capability. The design activities will be complemented by significant experimental work on the designed ICs.

Description:Heterogeneous computing architectures are a key enabling technology for high-performance, cost-effective computing systems consisting of multicore processors and specialized islands of computation (e.g., hardware accelerators, FPGAs, GPGPUs). When designing this kind of system, several aspects come into play (power, performance and dependability), depending on the target application scenario and the user's requirements. In this context, dependability is receiving a lot of attention, not only for systems devoted to safety- and mission-critical application scenarios, but also for general-purpose ones. Indeed, it is not necessary, and possibly not affordable, to adopt full-fledged fault tolerance solutions able to achieve complete coverage of all possible faults. We focus on analyzing the criticality of the possible faults and aim at dynamically finding an affordable trade-off between the achieved coverage and the other parameters (e.g., performance, power/energy consumption), based on the current working conditions.
This research aims at developing solutions for the implementation of heterogeneous systems able to adapt to changing contexts with respect to dependability properties and working scenario (e.g., user's requirements, workload, system status).

Data mining and other reasoning-based techniques for functional diagnosis: methods and tools
Contact person: BOLCHINI CRISTIANA
Email: bolchini@elet.polimi.it
Area: Computer Science
Other members of the research group: Elisa Quintarelli

Description:Functional fault diagnosis of complex circuits is a critical task, both after production and during the operational life of a system, following the occurrence of a failure. This research activity aims at exploiting reasoning-based and machine learning techniques (e.g., Data mining, Bayesian Networks, Decision Trees) to improve the effectiveness and performance of the diagnosis process.
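As a hedged, self-contained sketch of one of the reasoning techniques mentioned above (the failure-probability table, block names and test names are entirely hypothetical, not from any real circuit), candidate faulty blocks can be ranked from observed functional-test outcomes with a naive-Bayes-style score:

```python
# Hypothetical fault dictionary: P(test fails | fault in block), as could be
# learned from fault-simulation campaigns. All values are invented.
FAIL_PROB = {
    "adder":      {"t1": 0.9, "t2": 0.1, "t3": 0.8},
    "multiplier": {"t1": 0.2, "t2": 0.9, "t3": 0.7},
}

def rank_faults(observed):
    """observed: dict test -> True (failed) / False (passed).
    Returns candidate blocks sorted by naive-Bayes likelihood."""
    scores = {}
    for block, probs in FAIL_PROB.items():
        score = 1.0
        for test, failed in observed.items():
            p = probs[test]
            score *= p if failed else (1.0 - p)
        scores[block] = score
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_faults({"t1": True, "t2": False, "t3": True})
```

A Bayesian network or decision tree, as cited in the proposal, generalizes this idea by modeling dependencies among tests instead of assuming independence.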

Description: Research Vision
Mobile phones are becoming indispensable: they are now cheaper, smaller and more popular than laptops, and are replacing them
in a variety of functions. In fact, mobile phones are emerging as the main means of interaction among people, and
between people and the surrounding environment. Resources in such devices are very limited, e.g., computational power
and disk space are much reduced; furthermore, both internal and external conditions are rapidly changing and may influence
the behavior of the entire system, e.g., switching between network types causes unpredictable power consumption and
network instability. In this scenario, mobile phones need to be ready to react to changes in the surrounding environment
conditions and their internal state. A context-aware mobile device is an active system, capable of reacting to environmental
inputs as well as of inferring its operating conditions and adapting accordingly. It is able to learn and understand the place where
it is operating and the user habits, and to reconfigure itself for the best possible user experience, even anticipating changes
and reactions. With this knowledge, the mobile device shall be able to meet the requirements by taking advantage of the
context to pursue and optimize concurrent goals, such as optimizing the quality of a voice call while consuming less energy.
Research Project
Much information about the surrounding environment can be directly obtained using a mobile device, for example the
current position: sensing the device environment is therefore an easy task. Collecting data about user interactions and the
internal state of the device seems instead to be more complex, due to the lack of enabling technologies. In this work an
important step in this direction is made, introducing a powerful enabling solution for sensing mobile device behaviors. This
research envisions a reality where a self-aware mobile device can interact with the surrounding environment and support
the users in performing their everyday actions. A working prototype, called morphone.os, based on the Android OS, will
be designed and implemented. Real-life working scenarios will be proposed, to prove the effectiveness of the morphone.OS
in different situations. The main objectives and competences involved in the research can be summarized as follows:
From Computer Architectures and OS (Marco Santambrogio): (i) Identify what metrics are worth observing, how
to make the measurements without impacting the execution, and what we can do with the results. (ii) Identify what
parts of a system need to be adapted and to quantify the degree to which adaptation can afford savings in metrics
of interest to us. (iii) Build portable systems where the user does not have to redesign, rewrite, and re-tune code for
each new system or environment, where the system automatically optimizes for different platforms; (iv) Design new
parametric, run time adjustable versions of existing hardware and operating system components; (v) develop a first
implementation of the morphone.OS, a self-aware enabled version of the Android OS.
From Machine Learning (Marcello Restelli): (i) Define, specify and formalize the goals, the metrics and evaluation
models to quantify their value (ii) Analyze the requirements and the goals of the introduction of self-awareness in
a mobile device; (iii) Identify how system components can coordinate their actions to produce emerging behaviors
that pursue system-level goals; (iv) Define and develop adaptation policies that drive elements towards the defined
goals. The kind of policies considered will range from simple value-based and/or policy-search approaches to more
complex actor-critic algorithms.
From Context-Awareness (Letizia Tanca): (i) Define, specify and formalize the goals of context-awareness in this
scenario. Then, along the guidelines of a well-known context-aware system design methodology: (ii) Design the context
model representing the above-elicited goals. (iii) Contribute to the definition and development of the adaptation
policies mentioned above. (iv) Define and develop the relationships between the contexts envisaged by the context
model of (ii) and the adaptation policies of (iii), in order to enact the desired context-aware behaviours.
From Human Computer Interaction (Paolo Paolini): (i) understand the general User Experience (UX) requirements
for self-aware mobile devices, in particular those addressing the following aspects (how is the context-aware behaviour of
applications made perceivable by the user? to what degree is the context-aware behaviour under the user’s control,
in which conditions of use or for which classes of services?); (ii) identify relevant application domains for self-aware
applications and elicit specific UX requirements for these domains; (iii) define UX quality criteria for self-aware mobile
apps, as well as guidelines that help UX designers meet them more effectively; (iv) develop case study prototypes
and empirically evaluate them against the general and domain specific design requirements as well as the UX quality
criteria for self-aware mobile apps.
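One of the simple value-based adaptation policies mentioned in the Machine Learning objectives could, as a first hedged sketch, be tabular Q-learning over a toy power-mode decision (the states, actions, rewards and dynamics below are invented purely for illustration; morphone.OS would learn from real device metrics):

```python
import random

# Toy value-based policy: the device picks a "low"/"high" power mode per
# context; reward trades user experience against energy. All numbers are
# hypothetical.
random.seed(0)

STATES = ["idle", "call"]
ACTIONS = ["low", "high"]
REWARD = {("idle", "low"): 1.0, ("idle", "high"): -0.5,
          ("call", "low"): -1.0, ("call", "high"): 1.0}

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

s = "idle"
for _ in range(5000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(s, x)])
    r = REWARD[(s, a)]
    s2 = random.choice(STATES)          # context changes at random
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                          - Q[(s, a)])
    s = s2

# learned policy: the greedy action in each context
policy = {st: max(ACTIONS, key=lambda a: Q[(st, a)]) for st in STATES}
```

The actor-critic and policy-search approaches cited in the proposal replace this discrete table with parameterized policies, which scale to the continuous metrics a real device observes.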
Research work organization and periods abroad
A brief description of the research work organization, also presenting a proposal for the periods abroad that can be envisioned
thanks to the Proposal Team’s connections:
First Year: Mainly spent in Milan, starting the research and completing the exams for the entire PhD program;
Second Year: Half of it in Milan and half working at Northwestern University with Prof. Gokhan Memik1 on the
power-consumption model and its impact on the mobile architecture;
Third Year: Half of it in Milan and half working at MIT with Prof. Robert Miller2, studying how users
perceive the work done by morphone.OS and how to enhance the context-aware information on the basis of
crowd-related data.
1 http://users.eecs.northwestern.edu/~memik/
2 http://people.csail.mit.edu/rcm/

Description:In recent years we developed a HW/FW framework based on low-cost components for the rapid prototyping of robotics applications (R2P - Rapid Robot Prototyping). This framework pushes the idea of robot components down to the hardware, making it possible to have low-cost, off-the-shelf, plug-and-play modules to build robotics applications. These HW modules communicate using a real-time protocol built upon the CAN field bus and a real-time pub/sub middleware (based on ChibiOS) for the easy development and integration of new modules. This bottom-up approach was meant to speed up low-cost robotics by providing an effective framework for rapid robot prototyping. We are now interested in moving upward, to provide a proper model-driven approach to the development of robots, and of cyber-physical systems in general.

Description:The availability of many small wireless sensors of different types and natures has led to the idea of the Internet of Things, where not only are information and data shared among many different users, but physical objects are also given enough computational and storage power to become "intelligent" and interact with the environment. The resulting cyber-physical systems, whose applications span from smart houses to smart cities, from health-care systems to energy saving and mobile systems, require self-adaptation to the instantaneous environmental situation. Hence the need to provide context-awareness as a capability orthogonal to the other functional and non-functional requirements. The research group has developed, and deployed in several geophysical applications, PerLa, a SQL-like language for managing Wireless Sensor Network devices and data (see http://perlawsn.sourceforge.net/index.php), and is working on an extension of PerLa which adds Context Management capabilities to the language. On this subject various enrichments are possible, leading to a variety of possible PhD theses that can be discussed with the proposing professor.

Description:In a world made of global interconnections and networking systems, the variety and abundance of available data generate the need for effective and efficient gathering, synthesizing, and querying of the data, removing the so-called information noise. The research group has a long experience in building systems where context awareness is integrated with – yet orthogonal to – data management, and where knowledge of the context in which the data are used drives the process of focusing on currently useful information and keeping information noise at bay. The described activity is called context-aware data tailoring.
Building upon the rich research carried out already by the research group (see http://tanca.dei.polimi.it/images/documents/sac2012.pdf), this thesis proposes various enrichments to the existing system towards context-aware social media, automatic learning of unknown contexts, context-aware preferences and recommendations.

Description:Dealing with Big Data is nowadays a most critical task, from both a strategic and a short-term perspective. We are in the era of large, decentralized, distributed environments, where the number of devices, the amount of data and their heterogeneity grow a little more out of control every day. Gartner reports that the worldwide information volume is growing at a minimum rate of 59% annually.
The expression "Information overload" was already used by Alvin Toffler in his book “Future Shock”, back in 1970. It refers to the difficulty of understanding and taking decisions when too much information is available.
This thesis (or set of theses) proposes a novel approach for the exploration of large amounts of data using intensional semantic features. The term intension suggests the idea of denoting objects by means of their properties rather than by exhibiting them.
The thesis’ ambitious objective is to build a novel scientific framework that can support the user in an amazing variety of “scouting” activities, like progressive inspecting, observing, investigating, examining, discovering, searching, surveying of datasets, mimicking a sort of “discussion” with the system.
We should notice that, in real life, intensional knowledge is used quite often, since our brain is much more apt at capturing (and reasoning over) the properties of objects than at memorizing long lists of them. While describing reality with 100% accuracy is practically impossible, an approximate description of a collection of data, with accuracy lower than 100%, is still feasible.
In more detail, the thesis’ goals are:
• building the novel scientific framework described above, supporting the user in a wide variety of “scouting” activities (progressive inspecting, observing, investigating, examining, discovering, searching, and surveying of datasets), mimicking a sort of “discussion” with the system.
The ultimate goal is thus to formulate the theory of intensional datasets, encompassing the notion of “semantic relevance” to synthesize and operate on the relevant features of sets by means of a purposely defined algebra that works on approximate/intensional/extensional set (and feature-relevance) representations.
• Developing an original and innovative technology to support the framework (tools to build applications). The ultimate goal is to build a new generation of “explorative portals”, supporting advanced user experiences never made possible before.
• Developing real life applications, with real life datasets, not only as a way to validate the framework and to test the technology, but, most important, as a way to develop new visions and challenges. Application domains could be Education, Marketing, Culture, Tourism, …
The envisaged result goes well beyond the current state of the art of query systems, search engines, data analysis and recommendation systems, sometimes borrowing methods and techniques from them, but developing a brand-new theory for modeling and manipulating intensional set representations, and the technology to support it.

Description:Autonomous robots and vehicles need a map of the environment they navigate, and their position within it, to plan and accomplish their tasks. Building this map for an unknown place while simultaneously localizing in it poses several challenges, especially when the area to be mapped is large and the sensors are quite noisy. We are interested in developing general SLAM techniques for the efficient 3D reconstruction of any unstructured environment using both cameras and range sensors (e.g., fusing different perception modalities). These techniques will be applied to the vehicles available at the lab: an unmanned aerial vehicle, an all-terrain vehicle, an autonomous wheelchair, a few service robots and a golf cart.
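A basic building block behind fusing camera and range data is combining independent noisy estimates by inverse-variance weighting; here is a one-dimensional hedged sketch (sensor values and variances are invented, and real SLAM of course operates on full 6-DoF poses and maps, not single depths):

```python
# One-dimensional illustration of multi-modal fusion: combine two
# independent Gaussian depth estimates by inverse-variance weighting.
def fuse(z1, var1, z2, var2):
    """Return the optimally fused mean and its (smaller) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mean = (w1 * z1 + w2 * z2) / (w1 + w2)
    return mean, 1.0 / (w1 + w2)

# Hypothetical readings: stereo camera 10.2 m (var 0.25), lidar 10.0 m (var 0.05).
depth, var = fuse(10.2, 0.25, 10.0, 0.05)
```

The fused estimate lands closer to the more confident sensor and always has lower variance than either input, which is why adding a second modality pays off even when one sensor dominates.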

Description:Finding the underlying principle explaining the shape of human walking trajectories is an interesting and challenging problem, motivated by the need to plan human-like trajectories for bipedal humanoid robots and to estimate the intention of humans walking in an area occupied by robots.
A common approach assumes that goal-directed walking may be planned as a whole at the trajectory level, rather than as successive footsteps, neglecting all the biomechanical issues related to motion generation. In this framework, the generation of a human walking path can be recast as the solution of an optimal control problem, whose cost function represents the underlying principle explaining the shape of walking trajectories.
The project aims at defining a cost function, with a clear physical interpretation, that can explain experimental human walking paths. This cost function, together with the kinematic model used to represent the walking human, is then exploited to estimate the human intention.
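As a purely illustrative instance of such an optimal control formulation (the specific smoothness terms, weights and unicycle-like kinematics below are assumptions for illustration, not the thesis's actual model), the cost functional could take the form:

```latex
% Hypothetical cost functional: weighted smoothness (acceleration) terms
% integrated over the walking motion, subject to unicycle-like kinematics.
\min_{u(\cdot)} \; J = \int_{0}^{T} \left( \alpha_1 \, \ddot{x}(t)^2
    + \alpha_2 \, \ddot{y}(t)^2 + \alpha_3 \, \ddot{\theta}(t)^2 \right) dt
\quad \text{s.t.} \quad
\dot{x} = v \cos\theta, \qquad \dot{y} = v \sin\theta, \qquad \dot{\theta} = \omega
```

The research question is then the inverse one: given recorded human paths, which weights and terms best reproduce them while keeping a clear physical meaning.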

Description:It is widely agreed that unmanned vehicles have an increasing role in a number of human activities. Unmanned vehicles, possibly endowed with a robotic arm and proper sensors, could be involved in military missions and services, agriculture automation, forestry works, archaeological exploration, search and rescue operations in disaster scenes, taking routine measurements and samples in contaminated areas, and so on. Adopting a commercial ATV as the mobile platform is, for several reasons, the best solution, but it also poses a number of challenging problems. Among these is the development of proper perception abilities able to reconstruct the state of the vehicle (e.g., position, roll, pitch, yaw, velocity, acceleration, etc.), the surrounding environment (i.e., the map, the terrain characteristics, etc.), and the presence of unexpected obstacles. A single sensing modality is not enough to provide such a rich world perception, and thus multi-sensor approaches are needed.

Description:It is widely agreed that unmanned vehicles have an increasing role in a number of human activities. Unmanned vehicles, possibly endowed with a robotic arm and proper sensors, could be involved in military missions and services, agriculture automation, forestry works, archaeological exploration, search and rescue operations in disaster scenes, taking routine measurements and samples in contaminated areas, and so on.
Adopting a commercial ATV as the mobile platform is, for several reasons, the best solution, but it also poses a number of challenging problems. Compared to special-purpose research vehicles, ATVs feature a more compact structure (shorter and narrower), a higher and varying centre of gravity, higher speed and manoeuvrability, and faster dynamics. For these reasons, they are difficult to ride, especially on uneven terrain, where varying weather and/or soil conditions may strongly affect vehicle stability.
Such features call for a novel control architecture that can ensure vehicle stability and improve drivability.
A varied set of perception abilities is also required. The control system needs high-frequency data about the vehicle state (roll, pitch, yaw, velocity, acceleration, etc.), obtained from proprioceptive sensors, and about the environment, provided by exteroceptive sensors for terrain perception and obstacle detection.
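One classical way to obtain such high-frequency state data from proprioceptive sensors is a complementary filter, sketched below for roll estimation under illustrative assumptions (constant gyro bias, noiseless accelerometer); this is a generic textbook technique, not the architecture the thesis will develop.

```python
def complementary_roll(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse a gyro roll rate (rad/s) with an accelerometer-derived
    roll angle (rad): the integrated gyro dominates at high frequency,
    while the accelerometer corrects the low-frequency drift."""
    roll = accel_angles[0]
    for rate, acc in zip(gyro_rates, accel_angles):
        roll = alpha * (roll + rate * dt) + (1 - alpha) * acc
    return roll

# vehicle holding a constant 0.1 rad roll; the gyro reports pure bias
dt, true_roll = 0.01, 0.1
gyro = [0.05] * 2000             # constant bias, no real rotation
accel = [true_roll] * 2000       # idealized accelerometer reading
est = complementary_roll(gyro, accel, dt)
```

Pure gyro integration would drift by 1 rad over these 20 seconds, while the fused estimate stays within a small bias-dependent offset of the true roll; on a real ATV the same principle is applied with noisy accelerometers and extended to pitch and yaw.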

Description:Research focuses on both models and methods to support the design and evolution of complex information systems.
The research topics of the thesis will be in the areas of information systems and service design, focusing in particular on quality of services and of data, and on energy efficiency in information systems.
The strong connections with leading companies within Italian and European research projects make the mix of theory and application one of the strengths of our PhD.
The candidates should have a general background in modelling of data, processes, and applications, database systems, and software engineering.
A thematic scholarship will be offered in this area.
Please send enquiries to prof. Barbara Pernici (barbara.pernici@polimi.it) enclosing a CV.

Description:Under this theme, the purpose is to provide a methodology and tools to design, reuse, specialize, and run applications able to take advantage of the Internet of Things (IoT), with functional and non-functional properties that take into account the dynamics inherent in such systems.
This topic will focus on:
• Dynamic, evolvable software product lines for use at design time and runtime, to manage an agile service lifecycle and create generic software to be specialized at design time and during execution (to handle the high dynamicity of the IoT). The resulting software will be usable on advanced cloud platforms.
• Creation of a unified cloud gathering sensors and actuators, through DaaS (Device as a Service), together with more classical data centre and cloud services for application implementation.
• Links between the different logical and physical parts of this distributed system, guided by semantics to ensure scalability, dynamicity, and the discovery of new capabilities at runtime.
• Self-adaptive management of functional and non-functional properties, such as security and SLA (Service Level Agreement) management, with multi-layer autonomic management.
Use cases are considered particularly in M2M applications:
- use of a Cloud platform for energy management;
- security and trust in the Cloud.
The areas involved are Cloud Computing, the Semantic Web, Security, Smart Devices, and Ambient Intelligence.

Description:The research activity will aim to improve the energy performance of biogas power plants through advanced modelling, identification, and control techniques applied to the anaerobic digestion process.
The research, conducted within the "Bioenergy Factory" knowledge centre located in Cremona, Northern Italy (Lombardy Region), will be both theoretical and experimental, and will be based on measurements from pilot- and full-scale plants.

Description:The purpose of the project in which the PhD student will be involved is to develop a novel SPECT (Single Photon Emission Computed Tomography) system suitable for insertion into the bore of an existing MRI (Magnetic Resonance Imaging) system, a cost-effective solution for widespread medical applications. The combined system will allow the simultaneous measurement of anatomical (MRI) and functional (SPECT & MRI) information and the evaluation of their correlation in space and time. The SPECT system will allow the in-vivo simultaneous visualisation of spectrally resolved molecular and biochemical tumour properties. The multi-modality SPECT/MRI imaging system will be adopted both in preclinical model studies and in clinical validation on patients.
The PhD activity on this project will concern the development of the detectors and of the SPECT system, as well as the development of the integrated electronics to be used in the system. Close collaboration with the partners involved in instrument development, as well as with the biologists and clinicians involved in experimentation with the instrument, is foreseen.