This work deals with the probabilistic p-center problem, which aims at minimizing the expected maximum distance between any site with demand and its center, considering that each site has demand with a specific probability. The problem is of interest when emergencies may occur at predefined sites with known probabilities. For this problem we propose and analyze different formulations as well as a Variable Neighborhood Search heuristic. Computational tests are reported, showing the potential and limitations of each formulation, the impact of their enhancements, and the effectiveness of the heuristic.
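
To make the objective concrete, here is a minimal Monte Carlo sketch that evaluates the expected maximum distance for a fixed set of open centers; the distance matrix, demand probabilities and the brute-force choice of two centers are illustrative assumptions, not data or algorithms from the paper.

```python
import itertools
import random

def expected_max_distance(dist, open_centers, demand_prob, n_samples=20000, seed=0):
    """Monte Carlo estimate of E[max_i d(i, nearest open center)], where the
    maximum runs over the sites that actually have demand in each scenario."""
    rng = random.Random(seed)
    n = len(dist)
    nearest = [min(dist[i][c] for c in open_centers) for i in range(n)]
    total = 0.0
    for _ in range(n_samples):
        active = [i for i in range(n) if rng.random() < demand_prob[i]]
        if active:                      # scenarios with no demand contribute 0
            total += max(nearest[i] for i in active)
    return total / n_samples

# Tiny illustrative instance: 4 sites, open p = 2 centers chosen by enumeration
dist = [[0, 4, 7, 9],
        [4, 0, 3, 6],
        [7, 3, 0, 5],
        [9, 6, 5, 0]]
demand_prob = [0.9, 0.5, 0.7, 0.2]
best = min(itertools.combinations(range(4), 2),
           key=lambda C: expected_max_distance(dist, C, demand_prob))
print(best, expected_max_distance(dist, best, demand_prob))
```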

In the present work, a new methodology is presented to translate roughness results from a test machine to different industrial machines without the need to stop production for a long time. First, mathematical models were sought for average roughness Ra in finish honing processes, in both a test machine and an industrial machine. Regression analysis was employed to obtain quadratic models. The main factor influencing average roughness Ra was grain size, followed by pressure. Afterwards, several experiments were simulated in the common range of variables for the two machines using the models for average roughness Ra. A new variable, DifRa, corresponding to the difference between roughness values from the test machine and the industrial machine, was defined, and a quadratic model was obtained for it. Once DifRa is modeled, it is possible to predict roughness in a different industrial honing machine from the results of the test machine by performing a few experiments in the industrial machine and translating the curves. This reduces the number of tests to be performed in industrial machines. The suggested methodology has been tested with two more roughness parameters, maximum height of profile Rz and core roughness depth Rk, confirming its validity.
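
The modelling step described above can be sketched as follows: quadratic response surfaces for Ra as a function of two factors (grain size and pressure), one per machine, plus a quadratic model for the difference DifRa, which is then used to translate predictions; the design points and roughness values below are placeholders, not the experimental data of the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Placeholder design: columns = grain size, pressure (common range of both machines)
X = np.array([[30, 400], [30, 600], [54, 400], [54, 600], [76, 500], [45, 500]], dtype=float)
ra_test = np.array([0.18, 0.22, 0.35, 0.41, 0.55, 0.30])        # Ra on the test machine (illustrative)
ra_industrial = np.array([0.21, 0.26, 0.40, 0.47, 0.63, 0.34])  # Ra on the industrial machine (illustrative)

def quadratic_model(X, y):
    """Full second-order response surface fitted by least squares."""
    return make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

model_test = quadratic_model(X, ra_test)
model_dif = quadratic_model(X, ra_test - ra_industrial)   # DifRa = Ra_test - Ra_industrial

# Translate a test-machine prediction to the industrial machine at a new setting
x_new = np.array([[50, 550]], dtype=float)
print(model_test.predict(x_new) - model_dif.predict(x_new))
```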

The goal of this paper is to introduce facility capacities into the Reliability Fixed-Charge Location Problem in a sensible way. To this end, we develop and compare different models, which represent a trade-off between the extreme models currently available in the literature, where a priori assignments are either fixed or can be fully modified after failures occur. In a series of computational experiments we analyze the obtained solutions and study the price of introducing capacity constraints according to the alternative models, both in terms of computational burden and of solution cost.

We consider the NP-hard problem of scheduling n jobs in F identical parallel flow shops, each consisting of a series of m machines, under a blocking constraint. The criterion is to minimize the makespan, i.e., the maximum completion time of all the jobs in the F flow shops (lines). The Parallel Flow Shop Scheduling Problem (PFSP) is conceptually similar to another problem known in the literature as the Distributed Permutation Flow Shop Scheduling Problem (DPFSP), which allows modeling the scheduling process in companies with more than one factory, each factory with a flow shop configuration. Therefore, the proposed methods can solve the scheduling problem under the blocking constraint in both situations, which, to the best of our knowledge, has not been studied previously. In this paper, we propose a mathematical model along with some constructive and improvement heuristics to solve the parallel blocking flow shop problem (PBFSP) and thus minimize the maximum completion time among lines. The proposed constructive procedures use two approaches that differ substantially from those proposed in the literature. These methods are used as initial solution procedures for an iterated local search (ILS) and an iterated greedy algorithm (IGA), both of which are combined with a variable neighborhood search (VNS). The proposed constructive procedures and improvement methods take into account the characteristics of the problem. The computational evaluation demonstrates that both of them, especially the IGA, perform considerably better than the algorithms adapted from the DPFSP literature.
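
As a sketch of how the criterion is evaluated, the makespan of each line can be computed with the usual blocking flow-shop recurrences, and the PBFSP objective is the maximum over the F lines; the processing times and the job-to-line assignment below are illustrative, and this is an evaluation routine only, not the proposed heuristics.

```python
def blocking_makespan(seq, p):
    """Makespan of a permutation flow shop with blocking.
    p[j][k] = processing time of job j on machine k; seq = job order on the line."""
    m = len(p[0])
    prev = [0.0] * (m + 1)          # departure times of the previous job from machines 0..m
    for j in seq:
        d = [0.0] * (m + 1)
        d[0] = prev[1]              # job enters machine 1 when the previous job leaves it
        for k in range(1, m):
            # blocked on machine k until machine k+1 is freed by the previous job
            d[k] = max(d[k - 1] + p[j][k - 1], prev[k + 1])
        d[m] = d[m - 1] + p[j][m - 1]   # last machine: no blocking
        prev = d
    return prev[m]

def pbfsp_makespan(lines, p):
    """PBFSP objective: maximum completion time over the parallel lines."""
    return max(blocking_makespan(seq, p) for seq in lines)

# Illustrative instance: 5 jobs, 3 machines, 2 identical lines
p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [3, 3, 2], [1, 4, 4]]
lines = [[0, 2, 4], [1, 3]]        # one candidate assignment and sequencing
print(pbfsp_makespan(lines, p))
```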

This article proposes a methodology to assess building behaviour while taking its life cycle into account. Understanding of the system is obtained by combining a well-known energy consumption calculation engine (TRNSYS) with co-simulation processes defined using the Specification and Description Language (SDL). To find the best comfort, energy and cost scenarios for energy rehabilitation, co-simulation is conducted in two phases: first, the best scenarios for the passive systems, which are treated as a priority, are identified; then, the active systems are evaluated by brute-force analysis. The article provides the results for a case study: a single-family home built between 1991 and 2007 and located in a Mediterranean climate zone. The methodology yields a set of passive energy efficiency measures that improve the building by up to two grades in the energy labelling system. Using the methodology and the proposed model has enabled us to reduce the run time by up to 75%.

Statistical tests for Hardy–Weinberg equilibrium have been an important tool for detecting genotyping errors in the past, and remain important in the quality control of next generation sequence data. In this paper, we analyze complete chromosomes of the 1000 genomes project by using exact test procedures for autosomal and X-chromosomal variants. We find that the rate of disequilibrium largely exceeds what might be expected by chance alone for all chromosomes. Observed disequilibrium is, in about 60% of the cases, due to heterozygote excess. We suggest that most excess disequilibrium can be explained by sequencing problems, and hypothesize mechanisms that can explain exceptional heterozygosities. We report higher rates of disequilibrium for the MHC region on chromosome 6, regions flanking centromeres and p-arms of acrocentric chromosomes. We also detected long-range haplotypes and areas with incidental high disequilibrium. We report disequilibrium to be related to read depth, with variants having extreme read depths being more likely to be out of equilibrium. Disequilibrium rates were found to be 11 times higher in segmental duplications and simple tandem repeat regions. The variants with significant disequilibrium are seen to be concentrated in these areas. For next generation sequence data, Hardy–Weinberg disequilibrium seems to be a major indicator for copy number variation.
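
For reference, a minimal sketch of the exact test for a biallelic autosomal variant, based on the standard Levene-Haldane conditional distribution of the heterozygote count; the genotype counts in the example are invented, not taken from the 1000 Genomes data, and the analyses in the paper also cover X-chromosomal variants.

```python
from math import factorial

def hwe_exact_pvalue(n_AA, n_AB, n_BB):
    """Exact Hardy-Weinberg test for a biallelic autosomal marker: sum of the
    probabilities of all heterozygote counts no more likely than the observed one."""
    n = n_AA + n_AB + n_BB
    n_A = 2 * n_AA + n_AB
    n_B = 2 * n_BB + n_AB

    def prob(nab):
        """Levene-Haldane probability of nab heterozygotes given n and n_A."""
        naa = (n_A - nab) // 2
        nbb = (n_B - nab) // 2
        return (factorial(n) * 2 ** nab * factorial(n_A) * factorial(n_B)) / (
            factorial(naa) * factorial(nab) * factorial(nbb) * factorial(2 * n))

    # heterozygote counts compatible with the observed allele counts (same parity as n_A)
    probs = {k: prob(k) for k in range(n_A % 2, min(n_A, n_B) + 1, 2)}
    p_obs = probs[n_AB]
    return min(1.0, sum(p for p in probs.values() if p <= p_obs + 1e-12))

print(hwe_exact_pvalue(n_AA=30, n_AB=60, n_BB=10))   # illustrative genotype counts
```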

Studies of relatedness have been crucial in molecular ecology over the last decades. Good evidence of this is the fact that studies of population structure, evolution of social behaviours, genetic diversity and quantitative genetics all involve relatedness research. The main aim of this article is to review the most common graphical methods used in allele sharing studies for detecting and identifying family relationships. Both IBS- and IBD-based allele sharing studies are considered. Furthermore, we propose two additional graphical methods from the field of compositional data analysis: the ternary diagram and scatterplots of isometric log-ratios of IBS and IBD probabilities. We illustrate all graphical tools with genetic data from the HGDP-CEPH diversity panel, using mainly 377 microsatellites genotyped for 25 individuals from the Maya population of this panel. We enhance all graphics with convex hulls obtained by simulation and use these to confirm the documented relationships. The proposed compositional graphics are shown to be useful in relatedness research, as they also single out the most prominent related pairs. The ternary diagram is advocated for its ability to display all three allele sharing probabilities simultaneously. The log-ratio plots are advocated as an attempt to overcome the problems with the Euclidean distance interpretation in the classical graphics.
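
The two compositional displays mentioned above can be sketched as coordinate transformations of the three sharing probabilities (p0, p1, p2) of a pair of individuals; the probability values below are placeholders, and the ilr basis shown is one standard choice (the paper's exact basis may differ).

```python
import numpy as np

def ternary_xy(p):
    """Map a 3-part composition (p0, p1, p2) summing to 1 to ternary-diagram coordinates."""
    p0, p1, p2 = p
    return p1 + 0.5 * p2, (np.sqrt(3) / 2) * p2

def ilr_2d(p):
    """Isometric log-ratio coordinates of a 3-part composition (one standard basis)."""
    p0, p1, p2 = p
    z1 = (1 / np.sqrt(2)) * np.log(p0 / p1)
    z2 = (1 / np.sqrt(6)) * np.log((p0 * p1) / p2 ** 2)
    return z1, z2

# Placeholder probabilities of sharing 0, 1 or 2 alleles for one pair of individuals
p = (0.15, 0.50, 0.35)
print(ternary_xy(p), ilr_2d(p))
```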

Statistical utilities for the analysis of data by means of the Marshall-Olkin Extended Zipf distribution are presented. The distribution is a two-parameter extension of the widely used Zipf model. When the probabilities are plotted in log-log scale, this two-parameter extension allows a concave as well as a convex behavior of the function at the beginning of the distribution, while maintaining the linearity associated with the Zipf model in the tail.
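
For orientation, the general Marshall-Olkin construction applied to the Zipf survival function takes the following form; this is a sketch using a standard parameterization, and the paper's notation may differ.

```latex
% Zipf model with exponent \alpha > 1 (pmf proportional to x^{-\alpha}) and survival
% function \bar F; Marshall-Olkin extension with additional parameter \beta > 0:
\[
  \bar G(x \mid \alpha, \beta) \;=\; \frac{\beta\, \bar F(x \mid \alpha)}{1 - (1-\beta)\, \bar F(x \mid \alpha)},
  \qquad
  \bar F(x \mid \alpha) \;=\; \sum_{k > x} \frac{k^{-\alpha}}{\zeta(\alpha)} .
\]
% \beta = 1 recovers the Zipf model; other values of \beta change the head of the
% log-log probability plot (concave or convex) while the tail remains linear.
```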

Cosmopolitan pests such as Brevicoryne brassicae, Lipaphis pseudobrassicae, and Myzus persicae (Aphididae) cause significant damage to Brassicaceae crops. Assessment of the important biotic and abiotic factors that regulate these pests is an essential step in the development of effective Integrated Pest Management programs for these aphids. This study evaluated the influence of leaf position, precipitation, temperature, and parasitism on populations of L. pseudobrassicae, M. persicae, and B. brassicae in collard greens fields in the Triângulo Mineiro region (Minas Gerais state), Brazil. Similar numbers of B. brassicae were found on all parts of the collard green plants, whereas M. persicae and L. pseudobrassicae were found in greatest numbers on the middle and lower parts of the plant. While temperature and precipitation were positively related to aphid population size, their effects were not cumulative, as indicated by a negative interaction term. Although Diaeretiella rapae was the main parasitoid of these aphids, hyperparasitism was dominant; the main hyperparasitoid species recovered from plant samples was Alloxysta fuscicornis. Parasitoids seem to have distributions on plants similar to those of their hosts. These results may help predict aphid outbreaks and give clues about the specific intra-plant locations to target when searching for and monitoring aphid populations.

Conclusions from randomized clinical trials (RCT) rely primarily on the primary endpoint (PE) chosen at the design stage of the study. There should generally be only one PE, which should provide the most clinically relevant and scientific evidence regarding the potential efficacy of the new treatment. Therefore, it is of utmost importance to select it appropriately. Composite endpoints (CE), consisting of the union of several endpoints, are often used as PE in RCT. Gomez and Lagakos (2013) developed a statistical methodology to evaluate the convenience of using a CE as opposed to one of its components. Their strategy is based on the asymptotic relative efficiency (ARE), relating the efficiency of the logrank test based on the CE to the efficiency of the logrank test based on one of its components. This paper introduces the freeware online platform CompARE, which facilitates the study of the performance of different candidate endpoints that could be used as PE at the design stage of a trial. CompARE, through an intuitive interface, implements the novel ARE method.
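
The general idea behind the ARE criterion can be sketched in the standard Pitman sense; the concrete expression implemented in CompARE, involving the anticipated event probabilities and the hazard ratios of the components, is given in Gomez and Lagakos (2013) and is not reproduced here.

```latex
% ARE of the logrank test based on the composite endpoint (CE) versus the logrank test
% based on a single relevant endpoint (RE): the limiting ratio of the sample sizes
% needed to attain the same power against the same local alternative,
\[
  \mathrm{ARE}(\mathrm{CE}, \mathrm{RE}) \;=\; \lim \frac{n_{\mathrm{RE}}}{n_{\mathrm{CE}}}
  \;=\; \left( \frac{\mu_{\mathrm{CE}}}{\mu_{\mathrm{RE}}} \right)^{\!2},
\]
% where \mu_{CE} and \mu_{RE} are the noncentrality (efficacy) parameters of the two
% standardized logrank statistics; values above 1 favour using the CE as primary endpoint.
```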

Cities face new challenges and needs in order to satisfy and improve the lifestyle of their citizens under the concept of the "Smart City". To achieve this goal in a global manner, new technologies such as robotics are required. However, public entities are often unaware of the possibilities this technology offers to provide solutions to their needs. In this paper the development of Innovative Public Procurement instruments is explained, specifically the PDTI process (Public end Users Driven Technological Innovation), as a driving force for robotic research and development; a list of robotic urban challenges proposed by the European cities that have participated in this process is also presented. In the next phases of the procedure, this will provide novel robotic solutions addressing public demand, setting an example to be followed by other Smart Cities.

The main goal of this work is to develop a methodology for finding nutritional patterns from a variety of individual characteristics, which can contribute to a better understanding of the interactions between nutrition and health, given that the complexity of the phenomenon leads to poor performance with classical approaches. An innovative methodology based on a combination of advanced clustering techniques and consistent conceptual interpretation of clusters is proposed to find more understandable patterns or clusters. The Interpreted Integrative Multiview Clustering (I2MC) combines the previously proposed Integrative Multiview Clustering (IMC) with a new interpretation methodology, NCIMS. IMC uses crossing operations over the several partitions obtained with the different views. A comparison with other classical clustering techniques is provided to assess the performance of this approach. IMC helps to reduce the high dimensionality of the data based on a multiview division of variables. Two innovative cluster interpretation methodologies are proposed to support the understanding of the clusters: automatic methods to detect the significant variables that describe the clusters, and a mechanism to deal with the consistency between the interpretations of the clusters of a single partition (CI-IMS) or between pairs of nested partitions (NCIMS). Some formal concepts are specifically introduced to be used in the NCIMS. I2MC is used to validate the interpretability of the participants' profiles from a nutritional intervention study. The method has advantages in dealing with complex datasets including heterogeneous variables corresponding to different topics and is able to provide meaningful partitions.
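
As a minimal sketch of the crossing operation over per-view partitions on which IMC builds, each object's crossed class can be taken as the combination of its cluster labels across views; the views and labels below are illustrative, and the full IMC method involves further steps not shown here.

```python
from collections import Counter

def cross_partitions(*partitions):
    """Cross several partitions of the same objects: the crossed class of an object
    is the tuple of its cluster labels, one per view."""
    return [tuple(labels) for labels in zip(*partitions)]

# Illustrative labels for six individuals clustered separately on two views
diet_view = ["A", "A", "B", "B", "B", "A"]      # e.g. clusters from dietary variables
clinical_view = [1, 2, 1, 1, 2, 2]              # e.g. clusters from clinical variables

crossed = cross_partitions(diet_view, clinical_view)
print(crossed)              # [('A', 1), ('A', 2), ('B', 1), ...]
print(Counter(crossed))     # sizes of the crossed classes
```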

Physical activity (PhA) prior to stroke has been associated with good outcomes after the ischemic insult, but data on the molecular mechanisms involved are scarce. Methods: We studied consecutive acute ischemic stroke patients admitted to a single tertiary stroke center. Pre-stroke PhA was evaluated with the International Physical Activity Questionnaire (METS-minute/week). We studied several circulating angiogenic and neurogenic factors at different time-points: Vascular Endothelial Growth Factor (VEGF), Granulocyte Colony-Stimulating Factor (G-CSF) and Brain-Derived Neurotrophic Factor (BDNF) at admission, day 7, and 3 months. We considered good functional outcome at 3 months (mRS ≤ 2) as the primary endpoint, and final infarct volume as the secondary outcome. Results: We studied 83 patients with at least two time-point serum determinations (mean age 69.6 years, median NIHSS 17 at admission). Patients who were more physically active before stroke had a significantly higher increment of serum VEGF at day 7 compared to less active patients. This increment was an independent predictor of good functional outcome at 3 months and was associated with smaller infarct volume in multivariate analyses adjusted for relevant covariates. We did not find independent associations of G-CSF or BDNF levels either with the level of pre-stroke PhA or with stroke outcomes. Conclusions: Although there are probably more molecular mechanisms by which physical activity exerts its beneficial effects on stroke outcomes, our observation regarding the potential role of VEGF is plausible and in line with previous experimental studies. Further research in this field is needed.

Prehospital clinical scales to identify patients with acute stroke with a large vessel occlusion and direct them to an endovascular-capable stroke center are needed. We evaluated whether simplification of the Rapid Arterial oCclusion Evaluation (RACE) scale, a 5-item scale previously validated in the field, could maintain its high performance to identify patients with large vessel occlusion. Methods: Using the original prospective validation cohort of the RACE scale, 7 simpler versions of the RACE scale were designed and retrospectively recalculated for each patient. National Institutes of Health Stroke Scale score and proximal large vessel occlusion were evaluated in hospital. Receiver operating characteristic analysis was performed to test performance of the simplified versions to identify large vessel occlusion in patients with suspected stroke. For each version, the threshold with sensitivity closest to the original scale (85%) was used, and the variation in specificity and correct classification were assessed. Results: The study included 341 patients with suspected stroke; 20% had large vessel occlusion. The 7 simpler versions of the RACE scale had slightly lower area under the curve for detecting large vessel occlusion because of lower specificity at the chosen sensitivity level. Correct classification rate decreased 9% if facial palsy was simplified or if eye or gaze deviation was removed, and decreased 4.5% if the aphasia or agnosia cortical sign was removed. Conclusions: We recommend the original RACE scale for prehospital assessment of patients with suspected stroke for its ease of use and its high performance to predict the presence of a large vessel occlusion. The use of simplified versions would reduce its predictive value.

We introduce a simple and interpretable model for functional data analysis for situations where the observations at each location are functional rather than scalar. This new approach is based on a tensor product representation of the function-valued process and utilizes eigenfunctions of marginal kernels. The resulting marginal principal components and product principal components are shown to have nice properties. Given a sample of independent realizations of the underlying function-valued stochastic process, we propose straightforward fitting methods to obtain the components of this model, and we establish asymptotic consistency and rates of convergence for the proposed estimates. The methods are illustrated by modelling the dynamics of annual fertility profile functions for 17 countries. This analysis demonstrates that the proposed approach leads to insightful interpretations of the model components and interesting conclusions.
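
In generic FPCA notation (the symbols here are illustrative, not necessarily those of the paper), the marginal-kernel tensor product representation can be sketched as follows.

```latex
% Function-valued process X(s,t), e.g. the fertility profile over age t observed in year s.
% Marginal covariance kernels and their eigenfunctions:
\[
  G_S(s,s') = \int \mathrm{cov}\{X(s,t), X(s',t)\}\, dt, \qquad
  G_T(t,t') = \int \mathrm{cov}\{X(s,t), X(s,t')\}\, ds,
\]
% with eigenfunctions \phi_j of G_S and \psi_k of G_T.  Tensor product representation:
\[
  X(s,t) \;\approx\; \mu(s,t) + \sum_{j=1}^{J} \sum_{k=1}^{K} \chi_{jk}\, \phi_j(s)\, \psi_k(t),
  \qquad
  \chi_{jk} = \iint \{X(s,t) - \mu(s,t)\}\, \phi_j(s)\, \psi_k(t)\, ds\, dt,
\]
% where the \chi_{jk} are the product principal component scores.
```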

Objective: To investigate the effect of endovascular treatment on cognitive function as a prespecified secondary analysis of the REVASCAT (Endovascular Revascularization With Solitaire Device Versus Best Medical Therapy in Anterior Circulation Stroke Within 8 Hours) trial.
Methods: REVASCAT randomized 206 patients with anterior circulation proximal arterial occlusion stroke to Solitaire thrombectomy or best medical treatment alone. Patients with established dementia were excluded from enrollment. Cognitive function was assessed in person with Trail Making Test (TMT) Parts A and B at 3 months and 1 year after randomization by an investigator masked to treatment allocation. Test completion within 5 minutes, time of completion (seconds), and number of errors were recorded.
Results: From November 2012 to December 2014, 206 patients were enrolled in REVASCAT. TMT was assessed in 82 of 84 patients undergoing thrombectomy and 86 of 87 control patients alive at 3 months and in 71 of 79 patients undergoing thrombectomy and 72 of 78 control patients alive at 1 year. Rates of timely TMT-A completion were similar in both treatment arms, although patients undergoing thrombectomy required less time for TMT-A completion and had higher rates of error-free TMT-A performance. Thrombectomy was also associated with a higher probability of timely TMT-B completion (adjusted odds ratio 3.17, 95% confidence interval 1.51–6.66 at 3 months; and adjusted odds ratio 3.66, 95% confidence interval 1.60–8.35 at 1 year) and shorter time for TMT-B completion. Differences in TMT completion times between treatment arms were significant in patients with good functional outcome but not in those who were functionally dependent (modified Rankin Scale score >2). Poorer cognitive outcomes were significantly associated with larger infarct volume, higher modified Rankin Scale scores, and worse quality of life.
Conclusions: Thrombectomy improves TMT performance after stroke, especially among patients who reach good functional recovery.

Background: The role of inflammation in mood disorders has received increased attention. There is substantial evidence that cytokine therapies, such as interferon alpha (IFN-alpha), can induce depressive symptoms. Indeed, proinflammatory cytokines change brain function in several ways, such as altering neurotransmitters, the glucocorticoid axis, and apoptotic mechanisms. This study aimed to evaluate the impact on mood of initiating IFN-alpha and ribavirin treatment in a cohort of patients with chronic hepatitis C. We investigated clinical, personality, and functional genetic variants associated with cytokine-induced depression. Methods: We recruited 344 Caucasian outpatients with chronic hepatitis C, initiating IFN-alpha and ribavirin therapy. All patients were euthymic at baseline according to DSM-IV-R criteria. Patients were assessed at baseline and 4, 12, 24, and 48 weeks after treatment initiation using the Patient Health Questionnaire (PHQ), the Hospital Anxiety and Depression Scale (HADS), and the Temperament and Character Inventory (TCI). We genotyped several functional polymorphisms of interleukin-28 (IL28B), indoleamine 2,3-dioxygenase (IDO-1), serotonin receptor-1A (HTR1A), catechol-O-methyl transferase (COMT), glucocorticoid receptors (GCR1 and GCR2), brain-derived neurotrophic factor (BDNF), and FK506 binding protein 5 (FKBP5) genes. A survival analysis was performed, and the Cox proportional hazards model was used for the multivariate analysis. Results: The cumulative incidence of depression was 0.35 at week 24 and 0.46 at week 48. The genotypic distributions were in Hardy-Weinberg equilibrium. Older age (p = 0.018, hazard ratio [HR] per 5 years = 1.21), presence of depression history (p = 0.0001, HR = 2.38), and subthreshold depressive symptoms at baseline (p = 0.005, HR = 1.13) increased the risk of IFN-induced depression. So too did TCI personality traits, with high scores on fatigability (p = 0.0037, HR = 1.17), impulsiveness (p = 0.0200, HR = 1.14), disorderliness (p = 0.0339, HR = 1.11), and low scores on extravagance (p = 0.0040, HR = 0.85). An interaction between HTR1A and COMT genes was found. Patients carrying the G allele of HTR1A plus the Met substitution of the COMT polymorphism had a greater risk for depression during antiviral treatment (HR = 3.83) than patients with the CC (HTR1A) and Met allele (COMT) genotypes. Patients carrying the HTR1A CC genotype and the COMT Val/Val genotype (HR = 3.25) had a higher risk of depression than patients with the G allele (HTR1A) and the Val/Val genotype. Moreover, functional variants of the GCR1 (GG genotype: p = 0.0436, HR = 1.88) and BDNF genes (Val/Val genotype: p = 0.0453, HR = 0.55) were associated with depression. Conclusions: The results of the study support the theory that IFN-induced depression is associated with a complex pathophysiological background, including serotonergic and dopaminergic neurotransmission as well as glucocorticoid and neurotrophic factors. These findings may help to improve the management of patients on antiviral treatment and broaden our understanding of the pathogenesis of mood disorders.

Non-invasive in vivo diffuse optical characterization of human bone opens a new possibility of diagnosing bone-related pathologies. We present an in vivo characterization performed on seventeen healthy subjects at six different superficial bone locations: distal radius, proximal radius, distal ulna, proximal ulna, trochanter and calcaneus. A tailored diffuse optical protocol for high penetration depth, combined with the rather superficial nature of the tissues considered, ensured effective probing of the bone tissue. Measurements were performed using a broadband system for Time-Resolved Diffuse Optical Spectroscopy (TRS) to assess mean absorption and reduced scattering spectra in the 600–1200 nm range, and Diffuse Correlation Spectroscopy (DCS) to monitor microvascular blood flow. Significant variations in tissue constituents were found between locations, with the distal radius being rich in collagen, suggesting it as a prominent location for bone-related measurements, and the calcaneus having the highest blood flow among the body locations considered. By using TRS and DCS together, we are able to probe the perfusion and oxygen consumption of the tissue without any contrast agents. Therefore, we predict that these methods will be able to evaluate impairment of bone oxygen metabolism at the point-of-care.

Trials has 10 years of experience in providing open access publication of protocols for randomised controlled trials. In this editorial, the senior editors and editors-in-chief of Trials discuss editorial issues regarding managing trial protocol submissions, including the content and format of the protocol, timing of submission, approaches to tracking protocol amendments, and the purpose of peer reviewing a protocol submission. With the clarification and guidance provided, we hope we can make the process of publishing trial protocols more efficient and useful to trial investigators and readers.

Randomized clinical trials provide compelling evidence that a study treatment causes an effect on human health. A primary endpoint ought to be chosen to confirm the effectiveness of the treatment and is the basis for computing the number of subjects in a randomized clinical trial. Often a Composite Endpoint based on a combination of individual endpoints is chosen as the Primary Endpoint. As a tool for a more informed decision between using the Composite Endpoint as Primary Endpoint or one of its components, the ARE method is proposed. This method uses the Asymptotic Relative Efficiency (ARE) between the two possible logrank tests to compare the effect of the treatment. CompARE, a web-based interface tool, is presented. CompARE computes the asymptotic relative efficiency in terms of interpretable parameters such as the anticipated probabilities of observing the primary and secondary endpoints and the relative treatment effects on every endpoint given by the corresponding hazard ratios. The ARE method is extended to observational studies as well as to binary Composite Endpoints. The paper concludes with a discussion on how to use the ARE method to derive the sample size when the proportional hazards assumption does not hold.

Sustainable mobility is not only a technological question: automotive technology will be part of the solution, combined with a paradigm shift from car ownership to vehicle usage and the application of Information and Communication Technologies (ICT), which make it possible for a user to access a mobility service from anywhere to anywhere at any time. Multiple Passenger Ridesharing and its variants look like one of the more promising emerging mobility concepts. However, implementations of these systems that specifically account for time dependencies and time windows reflecting users' needs raise challenges in terms of real-time fleet dispatching and dynamic route calculation. This paper analyzes and evaluates both aspects by microscopic traffic simulation, emulating real-time traffic conditions and a real traffic information system, and interacting with a Decision Support System. The simulation framework has been implemented in a model of Barcelona's Central Business District. The paper presents and discusses the achieved results.

This paper presents a class of hub network design problems with profit-oriented objectives, which extend several families of classical hub location problems. Potential applications arise in the design of air and ground transportation networks. These problems include decisions on the origin/destination nodes that will be served as well as the activation of different types of edges, and consider the simultaneous optimization of the collected profit, setup cost of the hub network and transportation cost. Alternative models and integer programming formulations are proposed and analyzed. Results from computational experiments show the complexity of such models and highlight their superiority for decision-making.

One of the important issues in all types of data analysis, whether statistical data analysis, machine learning, data mining, data science or any other form of data-driven modeling, is data quality. The more complex the reality to be analyzed is, the higher the risk of getting low-quality data. Unfortunately, real data often contain noise, uncertainty, errors, redundancies or even irrelevant information. Useless models will be obtained when they are built over incorrect or incomplete data. As a consequence, the quality of the decisions made over these models also depends on data quality. This is why pre-processing is one of the most critical steps of data analysis in any of its forms. However, pre-processing has not been properly systematized yet, and little research is focused on this. In this paper a survey of the most popular pre-processing steps required in environmental data analysis is presented, together with a proposal to systematize the process. Rather than providing technical details on specific pre-processing techniques, the paper focuses on providing general ideas to non-expert users, who can then decide which technique is the most suitable for their problem.

This work provides an important framework for data analysts in assessing the quality of data and its potential to provide meaningful insights through analysis. Analytics and statistical analysis have become pervasive topics, mainly due to the growing availability of data and analytic tools. Technology, however, fails to deliver insights with added value if the quality of the information it generates is not assured. Information Quality (InfoQ) is a tool developed by the authors to assess the potential of a dataset to achieve a goal of interest, using data analysis. Whether the information quality of a dataset is sufficient is of practical importance at many stages of the data analytics journey, from the pre-data collection stage to the post-data collection and post-analysis stages. It is also critical to various stakeholders: data collection agencies, analysts, data scientists, and management.

Functional Data Analysis (FDA) is a statistical field which has gained importance due to progress in modern science, mainly the ability to measure the results of an experiment continuously in time and the possibility of recording them. Many methods, such as discriminant analysis, principal components analysis and regression analysis, that are used on vector spaces for classification, dimension reduction and modelling have been adapted to the functional case. FDA is concerned with variables that are defined on a continuum or that have a continuous structure. Therefore, FDA has an important role in the analysis of spectral data sets and images, which are mostly recorded in the fields of chemometrics, medicine and ecology. Especially in ecology, the analysis of images recorded by satellite sensors informs us in a fast and economical way about land use, crop production, water pollution and the amount of minerals in the water. The aim of this study is to propose the use of the FDA approach and to predict the amount of Total Suspended Solids (TSS) in the estuary of the Guadalquivir river in Cadiz from remote sensing data by using different Functional Linear Regression Models (FLRM). Furthermore, the aim is to compare the results obtained from various FLRMs and classical statistical methods in practice, to design a simulation study in order to support the findings, and to determine the best prediction model.
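
The basic scalar-on-function linear model underlying the FLRMs mentioned above can be sketched as follows, with the remotely sensed reflectance spectrum as the functional predictor; the notation is generic and not taken from the paper.

```latex
% Scalar response Y_i (TSS concentration for sample i) regressed on a functional
% predictor X_i(t), e.g. the reflectance spectrum over wavelength t:
\[
  Y_i \;=\; \alpha + \int_{\mathcal{T}} X_i(t)\, \beta(t)\, dt + \varepsilon_i,
  \qquad
  \beta(t) \;\approx\; \sum_{k=1}^{K} b_k\, B_k(t),
\]
% where the coefficient function \beta is expanded in a basis (e.g. B-splines or the
% leading functional principal components), reducing the fit to least squares on the
% basis coefficients b_1, \dots, b_K.
```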

Classical pre-post intervention studies are often analyzed using traditional statistics. Nevertheless, nutritional interventions have small effects on metabolism, and traditional statistics are not enough to detect these subtle nutrient effects. Generally, this kind of study assumes that the participants adhere to the assigned dietary intervention and directly analyzes its effects on the target parameters; thus, the evaluation of adherence is generally omitted, although sometimes participants do not effectively adhere to the assigned dietary guidelines. For this reason, the Trajectory Map is proposed as a visual tool with which the dietary patterns of individuals can be followed during the intervention and related to the nutritional prescriptions. Trajectory Analysis is also proposed, allowing both analyses: 1) adherence to the intervention and 2) intervention effects. The analysis is made by projecting the differences in the target parameters over the resulting trajectories between states at different time-stamps, which might be considered either individually or by groups. The proposal has been applied to a real nutritional study, showing that some individuals adhere better than others and that some individuals in the control group modify their habits during the intervention. In addition, the intervention effects differ depending on the type of individual, and some subgroups even show opposite responses to the same intervention.

The importance of post-processing the results of clustering when using data mining to support subsequent decision-making is discussed. Both the formal embedded binary logistic regression (EBLR) and the visual profile's assessment grid (PAG) methods are presented as bridging tools for the real use of clustering results. EBLR is a sequence of logistic regressions that helps to predict the class of a new object, while PAG is a graphical tool that visualises the results of an EBLR. PAG interactively determines the most suitable class for a new object and enables subsequent follow-ups. PAG makes the underlying mathematical model (EBLR) more understandable, improves usability and contributes to bridging the gap between modelling and decision support. When applied to medical problems, these tools can perform as diagnostic-support tools, provided that the predefined set of profiles refers to different stages of a certain disease, different types of patients with the same medical problem, etc. Being a graphical tool, PAG enables doctors to quickly and easily determine the profile of a patient in everyday activity, without necessarily understanding the statistical models involved in the process, which used to be a serious limitation for the wider application of these methods in clinical practice. In this work, an application with four functional disability profiles is presented.
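
One possible reading of the EBLR sequence described above is sketched below: a chain of binary logistic regressions, each separating one cluster from the objects not yet assigned, applied in turn to classify a new case. The data, variables, ordering of classes and decision threshold are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_eblr(X, labels, class_order):
    """Fit one binary logistic regression per class, in the given order, each trained
    to separate that class from the objects not yet covered by earlier regressions."""
    models, remaining = [], np.ones(len(labels), dtype=bool)
    for c in class_order[:-1]:                  # the last class acts as fallback
        y = (labels[remaining] == c).astype(int)
        models.append((c, LogisticRegression(max_iter=1000).fit(X[remaining], y)))
        remaining &= labels != c                # drop this class for the next step
    return models, class_order[-1]

def predict_eblr(models, fallback, x, threshold=0.5):
    """Run the sequence on a new object; the first regression exceeding the threshold wins."""
    for c, m in models:
        if m.predict_proba(x.reshape(1, -1))[0, 1] >= threshold:
            return c
    return fallback

# Illustrative data: two variables, three clusters found previously
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.5, size=(30, 2)) for loc in ([0, 0], [3, 0], [0, 3])])
labels = np.repeat(np.array([0, 1, 2]), 30)

models, fallback = fit_eblr(X, labels, class_order=[0, 1, 2])
print(predict_eblr(models, fallback, np.array([2.8, 0.1])))   # expected: cluster 1
```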

Intellectual disability in Down syndrome (DS) is accompanied by altered neuro-architecture, deficient synaptic plasticity, and excitation-inhibition imbalance in brain regions critical for learning and memory. Recently, we demonstrated beneficial effects of a combined treatment with green tea extract containing (-)-epigallocatechin-3-gallate (EGCG) and cognitive stimulation in young adult DS individuals. Although we could reproduce the cognitive-enhancing effects in mouse models, the underlying mechanisms of these beneficial effects are unknown. Here, we explored the effects of a combined therapy with environmental enrichment (EE) and EGCG in the Ts65Dn mouse model of DS at a young age. Our results show that combined EE-EGCG treatment improved corticohippocampal-dependent learning and memory. Cognitive improvements were accompanied by a rescue of cornu ammonis 1 (CA1) dendritic spine density and a normalization of the proportion of excitatory and inhibitory synaptic markers in CA1 and dentate gyrus.