Automation

Control, Robotics and Automation

Activity in Control, Robotics and Automation at the Engineering Department can be divided into the following research areas.

Additional details, as well as a description of the most recent activities and publications, are available on the ISARLab web pages (isar.unipg.it).

Perception in Robotics: Perception is one of the key enablers for deploying autonomous robots in realistic and unpredictable environments. Research in this area focuses on the development of robot perception systems, with a specific interest in systems employing vision and range sensors. Current activities cover the study of innovative methodologies for sensor fusion, and the use of computer vision and machine learning tools for scene modelling, place recognition, loop closing and, in general, localization problems.
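As an illustrative sketch only (not the group's actual pipeline), the sensor-fusion idea mentioned above can be shown with a minimal complementary filter, which blends integrated gyroscope rates with noisy accelerometer tilt readings; all signal values here are made up for the example.

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyroscope rates (rad/s) with accelerometer tilt angles (rad).

    alpha weights the smooth-but-drifting gyro integration against the
    noisy-but-drift-free accelerometer correction.
    """
    angle = accel_angles[0]  # initialize the estimate from the accelerometer
    estimates = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        estimates.append(angle)
    return estimates

# Hypothetical data: zero gyro motion, noisy accelerometer around 0.1 rad
est = complementary_filter([0.0] * 5, [0.1, 0.12, 0.09, 0.11, 0.10], dt=0.01)
```

The fused estimate stays close to the true 0.1 rad tilt while smoothing the accelerometer noise; a Kalman filter plays the same role in more realistic settings.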

Aerial and Underwater Robotics: Over the years, many lines of research have been activated in the area of Aerial Robotics. These include the design, deployment and testing of UAV systems, flight guidance and control schemes, sensor and actuator fault diagnosis schemes, fault-tolerant control, non-linear adaptive and learning control, and optical-feedback-based control schemes for UAVs in flight operations such as autonomous aerial refuelling. Vision-based localization and navigation problems are also studied in the area of Autonomous Underwater Vehicles (AUVs).
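To give a flavour of the fault diagnosis schemes mentioned above, the following is a minimal residual-based sketch (an assumption for illustration, not the group's method): the measured actuator response is compared against a nominal model, and samples whose residual exceeds a threshold are flagged.

```python
def detect_actuator_fault(commanded, measured, gain=1.0, threshold=0.2):
    """Residual-based fault detection against the nominal model
    measured ~= gain * commanded. Returns (index, residual) pairs
    for samples whose residual exceeds the threshold."""
    faults = []
    for i, (u, y) in enumerate(zip(commanded, measured)):
        residual = abs(y - gain * u)
        if residual > threshold:
            faults.append((i, residual))
    return faults

# Hypothetical data: healthy for 3 samples, then loss of effectiveness
cmd = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
meas = [1.00, 0.98, 1.01, 0.55, 0.50, 0.52]
alarms = detect_actuator_fault(cmd, meas)
```

Real schemes use observers or parity equations instead of a static gain, but the residual-plus-threshold structure is the same.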

Medical and biological applications: Development of engineering tools for medical and biological applications, such as artificial pancreas simulation, hydrocephalus pressure control and management, systems biology and cancer modelling.

Technology transfer: The group is also involved in a number of industrial technology transfer projects covering the above areas, as well as general control and automation problems. In this context, the research mission of the group is to develop solutions and methods of interest for service and industrial robotics and autonomous systems, with applications to mobile robots, underwater vehicles, and unmanned aerial vehicles.

The study of biological processes and systems represents a new challenge for systems and control theory. The life sciences, and biology in particular, are experiencing a profound revolution driven by the sudden availability of data produced by new experimental techniques. Moreover, there is a growing awareness among life scientists that a full understanding of life cannot rely only on the study of the single components of a biological system (as in the classical reductionist approach): every component is embedded in a network of dynamic interactions, and these interactions are as important as the components themselves. Together, these two facts have made dynamic modeling of biological processes an ideal theoretical framework to support biology in this revolution, to aid in the interpretation of data and to extract new information from them. Systems Biology research at SIRALab currently deals with the dynamic modeling of DNA damage and repair processes in single cells. SIRALab works on Systems Biology applied to translational oncology through an ongoing collaboration with biologists and oncologists. The group's current focus is on signal transduction networks and the robustness of cancer cells.
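The dynamic-modeling viewpoint can be illustrated with a deliberately simplified toy model (an assumption for exposition, not the lab's actual DNA damage model): damage accumulates at a constant induction rate and is cleared by a first-order repair process, integrated with the Euler method.

```python
def simulate_damage(d0=0.0, induction=1.0, repair_rate=0.5, dt=0.01, steps=2000):
    """Euler integration of the toy ODE dD/dt = induction - repair_rate * D.

    The damage level D relaxes toward the steady state induction / repair_rate,
    illustrating how a dynamic model turns rate hypotheses into trajectories
    that can be compared with single-cell measurements.
    """
    d = d0
    trajectory = [d]
    for _ in range(steps):
        d += dt * (induction - repair_rate * d)
        trajectory.append(d)
    return trajectory

traj = simulate_damage()  # converges toward induction / repair_rate = 2.0
```

Actual models of DNA damage response involve coupled nonlinear ODEs (e.g. feedback loops in signal transduction), but the workflow of positing rates, integrating, and fitting to data is the same.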

SIRALab group is also active in the development of information technology tools for medical and biological applications, such as artificial pancreas simulation, hydrocephalus pressure control and management and biomedical image processing.

The Service and Industrial Robotics and Automation Laboratory (SIRALab), which is part of DIEI, carries out research activities on control systems, robotics and autonomous systems, engineering for medicine and biology, and systems biology. The group is also involved in a number of technology transfer projects covering the above areas, as well as general control and automation problems.

Modern robotics research focuses on complex robot-human and robot-environment interaction. The ability to perform complex actions depends on the quality of the robot's spatial perception, which in turn relies not only on good sensors, but mostly on good algorithms that enable these machines to interpret the data collected by their cameras, lasers, microphones and other sensors. Our group works actively on robotics research, computer vision for robotics, and machine learning solutions to fundamental robotic problems, such as sensor fusion, robot localization, spatial perception, motion, and human-machine interaction.
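The robot localization problem mentioned above can be sketched with a one-step discrete Bayes filter (a textbook illustration under assumed sensor probabilities, not the group's implementation): cells of a 1-D map consistent with the measurement are up-weighted, the rest down-weighted, and the belief is renormalized.

```python
def bayes_localize(belief, world, measurement, hit=0.6, miss=0.2):
    """One measurement update of a discrete Bayes filter over map cells.

    Cells matching the measurement get likelihood `hit`, others `miss`;
    the posterior belief is then renormalized to sum to one.
    """
    posterior = [b * (hit if cell == measurement else miss)
                 for b, cell in zip(belief, world)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Hypothetical corridor map and a 'door' observation
world = ['door', 'wall', 'door', 'wall', 'wall']
belief = [0.2] * 5                      # uniform prior
belief = bayes_localize(belief, world, 'door')
```

After the update, the two door cells share most of the probability mass; particle filters generalize the same update to continuous poses.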

The objective of this project is to develop a new kind of visual sensor: complex enough to sense and describe the visual world it observes, yet with the characteristics of simple (e.g. temperature) sensors: ultra-low power consumption, easy connectivity, and a small, unobtrusive form factor. The proposed vision system is based on: a custom CMOS ultra-low-power vision sensor, partially embedding a reliable algorithm for people detection and monitoring in the analog stage of the pixel; a low-power sensor interface allowing the sensor to operate for at least 5 months on a minimal energy budget, powered by a tiny battery or a supercapacitor, which can be recharged by a 1 cm² solar cell; and a high-level computing platform which is turned on only when necessary, triggered by the sensor as soon as it detects a potential alert situation.
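The event-triggered architecture described above can be sketched as follows; this is a behavioural simulation under invented names and a toy alert rule, not the project's firmware. The high-level platform stays off and is woken only when the cheap front-end stage classifies a frame as a potential alert.

```python
class DutyCycledPlatform:
    """Models the high-level computing platform: normally off, powered up
    only when the low-power sensor front end signals a potential alert."""
    def __init__(self):
        self.wakeups = 0
        self.processed = []

    def handle_alert(self, frame):
        self.wakeups += 1            # power up, process the frame, power down
        self.processed.append(frame)

def sensor_loop(frames, platform, is_alert):
    """The always-on, low-power stage: run a cheap detector on every frame
    and trigger the platform only on potential alerts."""
    for frame in frames:
        if is_alert(frame):
            platform.handle_alert(frame)

platform = DutyCycledPlatform()
# Hypothetical frame stream; values > 0.8 stand in for "person detected"
sensor_loop([0.1, 0.2, 0.9, 0.3, 0.85], platform, lambda f: f > 0.8)
```

The energy saving comes from the platform running for only 2 of the 5 frames here; in the real system the "cheap detector" lives in the analog pixel stage.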