Democracy was born in the city-states of classical Greece, in the fifth century BC. It reached its most complete form in the city of Athens, in the time of Pericles. The characteristics of Greek democracy are those that come closest to the ideal of direct democracy, in which the body of citizens participates directly and continuously in making decisions about the affairs of the community. From an institutional perspective, however, it was a very simple and rudimentary construction.

In Athens the citizens met several times a year (an estimated forty times at least) on the hill of the Pnyx to discuss the affairs of the community. The agenda of discussions was set by the "Committee of 50", made up of members of the "Committee of 500", who in turn represented the hundred demes that made up the city. Terms of public office were very short (less than two months on the "Committee of 50", one year on the "Committee of 500"), and appointments were made by lottery in the first case and by rotation in the second. Discussion and deliberation among citizens formed the basis of this system of democratic participation. Decisions were normally made by consensus, and at the height of the system in Athens a quorum of 6,000 participants was required for the assembly's decisions to be valid. All this gave rise to a kind of "democracy without a State".

Direct democracy, as practiced in Athens, requires very special conditions of development, which have not occurred again in history. The citizen was a total figure, whose identity admitted no distinction between the public and private spheres: political life appeared as a natural extension of one's very being. Citizens' interests were harmonious, something typical of a homogeneous society that was, moreover, small in size, which favored direct relations among all its members. In classical Greece the existence of a large stratum of slaves was a fundamental condition for the functioning of direct democracy. Thus, citizens were able to meet frequently to decide directly on laws and policy measures.

As Giovanni Sartori points out, after the decline of Greek democracy the word "democracy" practically disappeared for a period of 2,000 years; people spoke instead of the res publica. In Rome, for example, the idea of mixed government was introduced, representing the diverse interests or groups that constituted the community. The system quickly acquired oligarchic features (government by the few), in which the formal commitment to popular participation translated into a very limited capacity for control.

The expansion and consolidation of Christianity in the Western world displaced political reflection towards the universe of theology: the issue of political participation ceased to be a concern for more than a millennium. In the Middle Ages it reappeared in a different form that, at the time, had little to do with democracy. In several European countries, monarchs, urged on by economic needs, convened assemblies to deal with matters of State, fundamentally associated with the raising of taxes and the waging of war. The members of these assemblies very loosely represented the estates that made up the kingdom: the nobility, the clergy and the bourgeoisie.

From there arose the idea of the monarch's responsibility before some of his subjects; this was the beginning of what is now known as Parliament. In England, in the fourteenth century, Parliament forced the king to sacrifice ministers in exchange for granting subsidies, and later to render accounts; similar phenomena occurred in France, Spain and Scandinavia. However, with the consolidation of the absolutist monarchies, parliaments ceased to be convened from the seventeenth and eighteenth centuries onwards; England was the exception. Even so, the idea of political representation (effective or not) was beginning to penetrate Western political thought. Its origin was far from democratic, but it provided a solution to the problem of participation in complex political communities of large size.

At the end of the Middle Ages and during the Renaissance, great transformations began to take shape that, little by little, would once again make political participation an important topic of reflection and a popular demand which, centuries later, would become universal. In the social, economic and political spheres there were changes that would have repercussions in the world of values.

Structural health monitoring (SHM) aims to obtain information about the structural integrity of a system, e.g. via the estimation of its mechanical properties through observations collected with a network of sensors. In the present work, we provide a method to optimally design sensor networks in terms of the spatial configuration, number and accuracy of the sensors. The utility of the sensor network is quantified through the expected Shannon information gain of the measurements with respect to the parameters to be estimated. For an assigned number of sensors to be deployed over the structure, the optimal sensor placement problem is governed by an objective function that is computed and maximized by combining surrogate models and stochastic optimization algorithms. For the general case, two formulations are introduced and compared: (i) the maximization of the information obtained through the measurements, subject to the appropriate constraints (i.e. identifiability, technological and budgetary ones); (ii) the maximization of the utility efficiency, defined as the ratio between the information provided by the sensor network and its cost. The method is applied to a large-scale structural problem, and the outcomes of the two approaches are discussed.
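As an illustration of the criterion described above (not the authors' implementation), the expected Shannon information gain has a closed form for a linear-Gaussian observation model, and can be used to rank candidate sensor layouts. The mode-shape matrix, prior, and layouts below are invented for the sketch.

```python
# Hedged sketch: ranking sensor layouts by expected Shannon information gain
# for a toy linear-Gaussian model y = G @ theta + noise. All numbers invented.
import numpy as np

def expected_info_gain(G, prior_cov, noise_std):
    """Expected gain = 0.5 * log det(I + P @ G.T @ G / sigma^2)."""
    d = prior_cov.shape[0]
    M = np.eye(d) + prior_cov @ G.T @ G / noise_std**2
    _, logdet = np.linalg.slogdet(M)
    return 0.5 * logdet

# Toy sensitivity matrix: rows = candidate sensor locations, cols = parameters.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 3))
prior = np.eye(3)

layout_a = [0, 1, 2]   # one candidate subset of three sensors
layout_b = [0, 4, 9]   # another candidate subset
gain_a = expected_info_gain(Phi[layout_a], prior, 0.1)
gain_b = expected_info_gain(Phi[layout_b], prior, 0.1)
```

Formulation (ii) would divide each gain by the layout's cost before comparing; adding rows to `G` can only increase the gain, which is why a cost term or a sensor budget is needed to make the problem well posed.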

It is well known that analyzing the dynamic behavior of reversible gels is a challenging task, as it requires detailed control of geometry, bond lifetimes, and so on. In this context, we use an optofluidic microrheometer to investigate the properties of a system composed of DNA nanostars.

The device, which allows testing samples with volumes smaller than 1 µL, consists of a square-section microchannel realized in a glass substrate, with a pair of facing waveguides, fabricated by the fs-laser inscription technique, on the two sides of the channel. Using the optical-shooting technique (T. Yang et al., Scientific Reports 6, 23946, 2016; T. Yang et al., Micromachines 8, 65, 2017), we investigated the viscosity of the system as a function of temperature and of the applied optical force, observing the transition from Newtonian to shear-thinning behavior on lowering the temperature below the gelation threshold.

Analysis of the stress-strain curves allowed us to assess the activation energy of the system, which is in good agreement with that obtained by dynamic light scattering measurements.
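The activation energy in such measurements is typically extracted from an Arrhenius-type temperature dependence of the viscosity, eta(T) = eta0 * exp(Ea / (R*T)). A minimal sketch of that fit, on synthetic data (the true values and temperature range here are invented, not the paper's):

```python
# Hedged sketch: Arrhenius fit of viscosity vs temperature to extract an
# activation energy. Data are synthetic, generated with a known Ea.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def activation_energy(T_kelvin, eta):
    """Slope of ln(eta) vs 1/T equals Ea/R for Arrhenius behavior."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(eta), 1)
    return slope * R  # J/mol

# Synthetic data generated with Ea = 50 kJ/mol for illustration.
Ea_true = 50e3
T = np.linspace(290, 330, 9)
eta = 1e-6 * np.exp(Ea_true / (R * T))
Ea_fit = activation_energy(T, eta)  # recovers ~5.0e4 J/mol on noiseless data
```

With real data the same fit would be applied to measured viscosities, and the scatter of ln(eta) about the line indicates how well the Arrhenius form holds.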

While self-managing IT has been the goal of every enterprise since the introduction of autonomic computing, it eluded us until recently. Although there has been much progress in using autonomic computing to model, configure, monitor and control the external world outside of the computing processes themselves, today IT management consumes 70% of the IT budget just to keep applications available, secure and compliant with regulations. For every dollar spent on software development, another $1.38 is spent on managing and maintaining it. Every time a fault occurs, one has to stop, isolate, diagnose and fix it, and this requires an army of experts from different disciplines. The task becomes impossible at large scale, with fluctuations in both workload demands and available resource pools. In this paper we describe a new approach that finally brings self-managing properties to applications and workflows using a new computing model. Using this approach, we have reduced IT complexity and created an interoperable global network of clouds that supports self-provisioning workloads with auto-scaling, auto-failover and live migration, without disrupting the user experience.

This paper investigates the presence of Legionella in the water distribution systems of buildings of the University of Perugia (Italy). Further, as the genus Legionella comprises many different species and serogroups, of which L. pneumophila sg1 is the one most often associated with human lung infections, a molecular characterization of the retrieved Legionella isolates is reported.

Legionella was monitored by standard methods, analyzing more than 300 water samples collected from 100 taps throughout the university campus. Legionella was absent in the great majority of the samples and was found in only five buildings of the entire campus. Molecular analysis indicated that the contaminations were only partially ascribable to L. pneumophila sg1, as other serogroups (sg8 and sg10) as well as other species (L. taurinensis and L. anisa) were also found. Moreover, in only three cases were the contamination levels above the limit at which, according to international guidelines, remedial actions are required. In particular, thermal disinfection, i.e., raising the water temperature to a level at which Legionella cells do not survive, was applied to the hot water supply systems in which a high temperature could be maintained throughout. By contrast, in a building in which the Legionella contamination originated inside the heat exchanger, chemical disinfection with silver hydrogen peroxide was carried out.

The case study reported herein indicates how a multidisciplinary approach that integrates microbiological analysis with a survey of the buildings' plumbing systems can lead to the definition of effective strategies for Legionella prevention and control.

In former studies, we proposed a topology optimization approach to maximize the sensitivity to damage of measurements collected through a network of sensors deployed over flexible, thin plates. In this framework, damage is to be understood as a change in structural health characterized by a reduction of the relevant load-carrying capacity. By properly comparing the response of the healthy, undamaged structure with that of the damaged one, independently of the location of the source of damage, a procedure to optimally deploy a given set of sensors was provided.

In this work we extend the aforementioned approach to a multi-scale framework, to account for (at least) three different length-scales: a macroscopic one, linked to the dimensions of the structure to be monitored; a mesoscopic one, linked to the characteristic size of the damaged region(s); and a microscopic one, linked to the size of the inertial microelectromechanical systems (MEMS) to be used within a marginally invasive health monitoring system. Results are provided for a square plate simply supported along its border, to show how the micro-sensors are to be deployed to maximize the sensitivity of the measurements to damage, and also to discuss the speedup obtained with the proposed multi-scale approach in comparison with a standard single-scale one.

Sensor networks for the health monitoring of structural systems have to be designed to achieve both accurate estimates of the relevant mechanical parameters and a low cost of the experimental equipment. Therefore, the number, type and location of the sensors have to be chosen so that the uncertainties related to the estimated health are minimized. Several deterministic methods based on the sensitivity of the measurements with respect to the parameters to be tuned are widely used; despite their low computational cost, these methods do not take the uncertainties related to the measurement process into account.

In former studies, a method based on the maximization of the information associated with the available measurements has been proposed and the use of approximate solutions has been extensively discussed. Here we propose a robust numerical procedure to solve the optimization problem: in order to reduce the computational cost of the overall procedure, Polynomial Chaos Expansion and a stochastic optimization method are employed.

The method is applied to a flexible plate. First of all, we investigate how the information changes with the number of sensors; then we analyze the effect of choosing different types of sensors (with their relevant accuracy) on the information provided by the structural health monitoring system.
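The two ingredients named above can be combined in a very small sketch (a toy illustration, not the paper's procedure): a least-squares polynomial chaos expansion in one standard-normal variable serves as a cheap surrogate for the mean utility, and a crude random search plays the role of the stochastic optimizer. The objective function and all numbers are invented.

```python
# Hedged sketch: PCE surrogate (probabilists' Hermite basis) for the expected
# value of a toy utility U(x, xi), then random-search maximization over the
# design variable x. Toy objective; not the paper's model.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)

def fit_pce(model, degree=3, n_samples=200):
    """Least-squares fit of model(xi) ~ sum_k c_k He_k(xi), xi ~ N(0, 1)."""
    xi = rng.standard_normal(n_samples)
    A = hermevander(xi, degree)   # Vandermonde-like Hermite basis matrix
    c, *_ = np.linalg.lstsq(A, model(xi), rcond=None)
    return c

def expected_utility(x):
    # Toy utility: design term times a random perturbation in xi.
    model = lambda xi: np.exp(-(x - 0.7) ** 2) * (1.0 + 0.1 * xi)
    c = fit_pce(model)
    return c[0]  # E[He_k(xi)] = 0 for k >= 1, so c_0 is the mean

# Crude stochastic search over the design variable x in [0, 1].
best_x = max((rng.uniform(0.0, 1.0) for _ in range(100)), key=expected_utility)
```

For this toy objective the surrogate mean peaks at x = 0.7, so the search should return a design close to that value; in the real problem `x` would be a vector of sensor positions and the model an expensive structural solver.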

A number of in silico methods have recently been applied to searching for and designing multi-target compounds. The simplest approach consists in docking the compounds into all the targets independently; only those molecules that score highly against all the targets at the same time are then collected as hit compounds. This approach, however, is quite computationally expensive, particularly when more than two proteins are considered as targets. Moreover, it does not furnish any information on the structural features required for multi-target potency, so it is not suitable for the hit optimization process. Several authors have circumvented some of these problems by combining pharmacophore models with docking studies. Due to our interest in multi-kinase inhibitor discovery, we decided to derive a multi-kinase pharmacophore model through a two-stage approach. First, starting from the structures of the ligands, we extracted the features of an appropriate multi-TKI scaffold (scaffold pharmacophore). Then, we decorated this scaffold with information derived from the target structures (multi-TKI pharmacophore). The presented methodology for identifying pharmacophore models could also be applied to other pharmacological targets for which multi-target activity would be valuable.
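The simplest selection step described above (keep only compounds that pass a score threshold against every target) can be sketched as follows; the compound names, target names, and scores are all invented for illustration:

```python
# Hedged sketch of independent-docking hit selection: a compound is a hit
# only if its docking score passes the threshold against every target.
# All names and scores below are invented.
docking_scores = {
    "cmpd_A": {"kinase1": -9.2, "kinase2": -8.7, "kinase3": -9.0},
    "cmpd_B": {"kinase1": -9.5, "kinase2": -6.1, "kinase3": -8.8},
    "cmpd_C": {"kinase1": -8.9, "kinase2": -8.5, "kinase3": -8.6},
}

def multi_target_hits(scores, threshold=-8.0):
    """More negative = stronger predicted binding; require all targets to pass."""
    return sorted(name for name, per_target in scores.items()
                  if all(s <= threshold for s in per_target.values()))

print(multi_target_hits(docking_scores))  # ['cmpd_A', 'cmpd_C']
```

The sketch also makes the abstract's cost argument concrete: every compound must be docked against every target before this filter can run, which is why pharmacophore pre-screening pays off as the number of targets grows.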

Human bone is one of the most common connective tissues in the human body. With respect to its internal microstructure, there are two main types of bone tissue: compact, in the cortical zone, and spongy or trabecular, in the internal zone. The porous structure generally hosts the marrow. Given the relevant function of this tissue, porosity is not uniform: pore diameter increases from the cortical zone towards the centre of the bone, and pore connectivity increases with the thickness of the bone tissue.

The presence of serum inside the porous structure of bone tissue produces a different behaviour in bone under load, depending on the conditions in which the load is applied. The response of the material differs according to the level of serum inside the tissue and to the direction of the applied load. Under the same stress conditions, the loading rate generates different responses, related to the pore dimensions and permeability parameters.

In this work, three different types of bone tissue are investigated, taken from the calcaneus, the skull and the rib of the human skeletal system. The specimens are subjected to compression tests, under displacement control, until they reach the ultimate stress, in both dry and wet conditions, so that the effect of the serum level can be observed.

Three groups (one per tissue type) of 20 specimens each are tested in dry and wet conditions. Maximum stress, strain, elastic deformation energy and total deformation energy are measured. A statistical analysis is conducted, and qualitative relationships are deduced with reference to density and the specific mechanical characteristics.
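For context, the two energy quantities named above are areas under the stress-strain curve. A minimal sketch on a synthetic elastic-perfectly-plastic curve (the modulus and stress values are invented, not the measured ones):

```python
# Hedged sketch: deformation energies per unit volume from a stress-strain
# curve. Total energy = area under the curve up to ultimate stress; the
# elastic (recoverable) part is estimated from the initial modulus.
import numpy as np

def deformation_energies(strain, stress):
    """Return (total, elastic) strain energy densities in J/m^3 (SI inputs)."""
    d_eps = np.diff(strain)
    total = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * d_eps))  # trapezoid rule
    E = stress[1] / strain[1]                      # initial (elastic) modulus
    elastic = float(stress[-1] ** 2 / (2.0 * E))   # recoverable part at final stress
    return total, elastic

# Synthetic curve: linear up to 50 MPa (E = 5 GPa), then a plastic plateau.
strain = np.linspace(0.0, 0.02, 101)
stress = np.minimum(5e9 * strain, 50e6)            # Pa
total, elastic = deformation_energies(strain, stress)
```

On real test data the same integration would be applied to the recorded curve up to the ultimate stress, with the elastic estimate taken from the linear portion of the loading branch.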

The tests show that compact tissue, such as that of the skull, is better suited to carrying loads, whereas the calcaneus works as a reticular structure with high deformation levels.

We have devised a simple optical classification system using the three apparent optical property (AOP) parameters available in the NASA Geospatial Interactive Online Visualization and Analysis Infrastructure (Giovanni). These parameters are produced by the NASA Ocean Biology Processing Group (OBPG). The three AOP parameters are: adg, the absorption coefficient of dissolved and detrital matter; aph, the absorption coefficient of phytoplankton; and bbp, the backscatter coefficient. The Microsoft Excel ternary-diagram plotting spreadsheet Tri-Plot (Graham and Midgley, 2000) was used to visualize the three-parameter ocean optical classification scheme. This simple analysis method can be used by researchers, continuous water-quality monitoring campaigns, citizen scientists, and students.
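The normalization behind any such ternary plot can be sketched in a few lines: each sample's three AOP values are rescaled to fractions summing to one, and the dominant fraction gives a simple optical class. The example values and class labels below are illustrative, not the paper's thresholds:

```python
# Hedged sketch: ternary normalization of the three AOP parameters and a
# naive dominant-component classification. Values and labels are illustrative.
def ternary_fractions(adg, aph, bbp):
    """Rescale the three coefficients to fractions that sum to 1."""
    total = adg + aph + bbp
    return adg / total, aph / total, bbp / total

def optical_class(adg, aph, bbp):
    fracs = ternary_fractions(adg, aph, bbp)
    labels = ("CDOM/detritus-dominated", "phytoplankton-dominated",
              "backscatter-dominated")
    return labels[max(range(3), key=lambda i: fracs[i])]

# A hypothetical river-plume sample with high dissolved/detrital absorption.
print(optical_class(adg=0.08, aph=0.02, bbp=0.004))  # CDOM/detritus-dominated
```

The three fractions are exactly the coordinates plotted in a ternary diagram, so end-member water types appear near the corners and mixtures fall along the interior.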

In this paper we demonstrate the use of this ternary optical classification system by applying it to the seasonal outflow of the Orinoco River into the eastern Caribbean Sea. The end-member optical regimes consist of the river-mouth waters during the rainy season and high-clarity Caribbean Sea waters that are not influenced by the optically active constituents of the Orinoco River plume. The variability of the optical characteristics of the surface waters between the rainy and dry seasons is clearly distinguishable in the ternary plots, particularly in the coastal region adjacent to the northern coast of South America.