in Perspectives: International Postgraduate Journal of Philosophy (in press), 9

Brian Leiter (2016) throws down two gauntlets to philosophers engaged in dialogue with the broader public. If, with the first, public philosophers recognize that they cannot offer substantive answers but only sophisticated method, they nevertheless fail to realize that said method does not resonate with the very public whom they purport to help. For, with the second, that method does not engage the emotivist and tribalist cast of contemporary public discourse: emotivist because a person’s moral and political beliefs are a function of emotional attitudes or affective responses for which she adduces reasons post hoc; tribalist because the person tracks not the inferential relations between beliefs but her similarity with interlocutors. In order to understand the full extent of this critique, it is necessary, first, to parse strands of public philosophy, distinct discursive sites, and pictures of philosophical practice and, then, to probe the critique’s empirical groundedness and intended scope. With these elements in place, it is possible to sketch public philosophy reconceived along Leiter’s lines as equal parts rigor and rhetoric. That sketch may be somewhat filled out through two tactics employed in Jeffrey Stout’s (2004, 2010) work. These form part of a toolkit for philosophical dialogue whereby philosophers get a discursive grip on non-discursive factors underlying public discourse and push back on Leiter's dilemma.

Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in reality, a small or medium-sized organization often does not have the necessary data to train a reasonably accurate model. For such organizations, a realistic solution is to train machine learning models on a joint dataset (the union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist that let collaborating organizations securely aggregate parameters in the process of training models, we are not aware of any work that provides a rational framework for the participants to precisely balance privacy loss against accuracy gain in their collaboration. In this paper, focusing on a two-player setting, we model the collaborative training process as a two-player game in which each player aims to achieve higher accuracy while preserving the privacy of its own dataset. We introduce the notion of Price of Privacy, a novel measure of the impact of privacy protection on accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of each player's privacy protection. Using recommendation systems as our main use case, we demonstrate how two players can make practical use of the proposed theoretical framework, including setting up the parameters and approximating the non-trivial Nash Equilibrium.
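The game-theoretic setup can be illustrated with a toy model. Everything below (the payoff functions, noise levels, and weights) is a hypothetical sketch of the kind of trade-off the paper formalizes, not the paper's actual model: each player picks a noise level, a pure Nash equilibrium is found by exhaustive best-response checking, and a "Price of Privacy" is read off as the relative accuracy lost at equilibrium.

```python
import itertools

# Hypothetical payoff model: each player chooses a noise level in [0, 1]
# (0 = no privacy protection, 1 = maximal noise). Joint model accuracy
# shrinks with the noise either side adds; a player's privacy loss grows
# as its own noise decreases.
LEVELS = [i / 10 for i in range(11)]

def accuracy(own_noise, other_noise):
    # toy accuracy of the jointly trained model (illustrative assumption)
    return 0.9 - 0.2 * own_noise - 0.2 * other_noise

def payoff(own_noise, other_noise, privacy_weight=0.3):
    # utility = accuracy minus weighted privacy loss from sharing
    return accuracy(own_noise, other_noise) - privacy_weight * (1 - own_noise)

def best_response(other_noise):
    return max(LEVELS, key=lambda n: payoff(n, other_noise))

def pure_nash():
    # exhaustive check: (n1, n2) is an equilibrium when each side best-responds
    return [(n1, n2) for n1, n2 in itertools.product(LEVELS, repeat=2)
            if n1 == best_response(n2) and n2 == best_response(n1)]

equilibria = pure_nash()
full_accuracy = accuracy(0.0, 0.0)
# "Price of Privacy" in this toy reading: relative accuracy lost at equilibrium
pop = [(full_accuracy - accuracy(n1, n2)) / full_accuracy for n1, n2 in equilibria]
```

With these particular payoffs, adding noise is always individually rational, so both players end up at maximal noise and the joint model pays the accuracy price.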

An unsolved debate in the field of usable security concerns whether security mechanisms should be visible to the user or blackboxed away for the sake of usability. However, tying this question only to pragmatic usability factors may be simplistic. This study investigates the impact of displaying security mechanisms on user experience (UX) in the context of e-voting. Two versions of an e-voting application were designed and tested using a between-group experimental protocol (N=38). Version D displayed security mechanisms, while version ND did not reveal any security-related information. We collected UX data using standardised evaluation scales and semi-structured interviews. Version D performed better overall in terms of UX and need fulfilment. Qualitative analysis of the interviews gives further insight into the factors shaping perceived security. Our study adds to existing research suggesting a conceptual shift from usability to UX and discusses implications for designing and evaluating secure systems.

Elucidating the molecular consequences of amino-acid-altering missense variants at scale is challenging. In this work, we explored whether features derived from three-dimensional (3D) protein structures can characterize patient missense variants across different protein classes with similar molecular-level activities. The identified disease-associated features can advance our understanding of how a single amino acid substitution can lead to the etiology of monogenic disorders. For 1,330 disease-associated genes (>80%, 1,077/1,330 implicated in Mendelian disorders), we collected missense variants from the general population (gnomAD database, N=164,915) and from patients (ClinVar and HGMD databases, N=32,923). We mapped the variant positions in silico onto >14k human protein 3D structures and annotated the protein positions of variants with 40 structural, physicochemical, and functional features. We then grouped the genes into 24 protein classes based on their molecular functions and performed statistical association analyses between the features of population and patient variants. We identified 18 (out of 40) features that are associated with patient variants in general. Specifically, patient variants are less exposed to solvent (p<1.0e-100), enriched on β-sheets (p<2.37e-39), frequently mutate aromatic residues (p<1.0e-100), occur in ligand binding sites (p<1.0e-100), and are spatially close to phosphorylation sites (p<1.0e-100). We also observed differential protein-class-specific features. For three protein classes (signaling molecules, proteases, and hydrolases), patient variants significantly perturb disulfide bonds (p<1.0e-100). Only in immunity proteins are patient variants enriched in flexible coils (p<1.65e-06). Kinases and cell junction proteins exhibit enrichment of patient variants around SUMOylation (p<1.0e-100) and methylation sites (p<9.29e-11), respectively.
In summary, we studied shared and unique features associated with patient variants on protein structures across 24 protein classes, providing novel mechanistic insights. We generated an online resource that contains amino-acid-wise feature annotation tracks for 1,330 genes, summarizes the patient-variant-associated features at residue level, and can guide variant interpretation.
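The statistical association analyses mentioned above can be illustrated with a minimal enrichment test for one binary structural feature (say, "residue buried", i.e., low solvent exposure), comparing patient against population variants. The 2x2 counts and the use of a Wald z-statistic here are illustrative assumptions, not the paper's actual data or test:

```python
import math

def odds_ratio(patient_with, patient_without, popul_with, popul_without):
    # odds of carrying the feature among patient vs population variants
    return (patient_with / patient_without) / (popul_with / popul_without)

def log_or_z(a, b, c, d):
    # Wald z-statistic for the log odds ratio (normal approximation);
    # standard error is sqrt of summed reciprocal cell counts
    lor = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return lor / se

# illustrative counts: buried / not-buried among patient and population variants
z = log_or_z(21000, 11923, 60000, 104915)
```

An odds ratio well above 1 with a large z-statistic would indicate the feature is enriched among patient variants, which is the shape of the findings reported above.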

in International Conference on Information Systems Security and Privacy (ICISSP) (2019, January 24)

This paper presents an efficient solution for the booking and payment functionality of a car sharing system that allows individuals to share their personal, underused cars in a completely decentralized manner, eliminating the need for an intermediary. Our solution, named SC2Share, leverages smart contracts to carry out secure and private car booking and payments. Our experiments with SC2Share on the Ethereum testnet confirm that the system offers high security and privacy to its users and is cost-efficient and ready for practical use.

in Proceedings of SPIE : The International Society for Optical Engineering (2019), 10894

We fabricated hollow nanoantennas with varying inner channel sizes on a gold-covered silicon nitride membrane. Our fabrication technique allowed us to narrow the inner channels down to 15 nm. By creating a concentration gradient through the nanoantennas, we managed to decorate exclusively the tips of the antennas with thiol-conjugated dyes. Finally, we characterized the antennas in terms of their effect on the fluorescence lifetime of two dyes, Atto 520 and Atto 590, decorating the antennas with Atto 520 alone, with Atto 590 alone, and with the two dyes at the same time. The antennas decorated with a single dye yielded a lifetime reduction with respect to the confocal case; interestingly, the lifetime reductions for the two dyes were significantly different. When the antennas were decorated with both dyes at the same time, FRET effects were clearly observed even though we could not control the distance between the two dyes, and these effects were found to depend on the size of the inner channel. We believe that our tip-decorated hollow nanoantennas could find application in FRET-based single-molecule nanopore technologies.

in International Journal of Mechanical Engineering and Robotics Research (2019)

Before performing a surface finishing process, human operators analyze the workpiece conditions and react accordingly, i.e. they adapt the contact situation of the tool with respect to the surface. This first step is ignored in most suggested automation concepts. Although their performance is satisfactory for the general process thanks to adaptive position- and force-/torque-control algorithms, they are unable to address specific problematic cases often encountered in practice because of variations in workpiece dimensions or positioning. In this work, a human-mimicking element is developed to overcome this limitation of current control concepts and to transfer human expertise to the robotic manipulator. A rule-based system is designed in which human knowledge is encoded as if-then rules. This system is integrated with a previously suggested control strategy in a hierarchical manner. The developed concept is experimentally validated on a KUKA LWR 4+ robotic manipulator.
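A toy sketch of such a rule-based layer can make the idea concrete: human finishing expertise encoded as if-then rules over the observed contact situation, sitting hierarchically above a nominal controller. The rule conditions, actions, and thresholds below are illustrative, not the paper's actual rule base.

```python
# Each rule: (condition on the observed workpiece/contact state, corrective action).
# Values are hypothetical; a real rule base would encode operator expertise.
RULES = [
    (lambda s: s["contact_force"] > s["force_ref"] * 1.2, "retract_tool"),
    (lambda s: s["contact_force"] < s["force_ref"] * 0.8, "approach_tool"),
    (lambda s: abs(s["tilt_deg"]) > 5.0, "realign_tool"),
]

def decide(state):
    # first matching rule wins; otherwise the nominal force/position
    # controller keeps running untouched (the hierarchical integration)
    for condition, action in RULES:
        if condition(state):
            return action
    return "continue_nominal_control"

action = decide({"contact_force": 9.0, "force_ref": 10.0, "tilt_deg": 1.0})
```

The design point is that the rule layer only overrides the underlying adaptive controller in the problematic cases it was written for, and stays silent otherwise.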

in Proceedings of SPIE : The International Society for Optical Engineering (2019), 10927

We report on the fabrication and optical characterization of hyperbolic nanoparticles on a transparent substrate. These nanoparticles enable a separation of ohmic and radiative channels in the visible and near-infrared frequency ranges. The presented architecture opens novel routes to exploiting light-to-energy conversion channels beyond what is offered by current plasmon-based nanostructures, possibly enabling applications spanning thermal emission manipulation, theragnostic nano-devices, optical trapping and nano-manipulation, non-linear optics, plasmon-enhanced molecular spectroscopy, photovoltaics and solar water treatment, as well as heat-assisted ultra-dense and ultrafast magnetic recording.

in Proceedings of SPIE : The International Society for Optical Engineering (2019), 10894

Here, we propose easy and robust strategies for the versatile integration of 2D material flakes on plasmonic nanoholes by means of site-selective deposition of MoS2. The methods can be applied both to simple flat metallic nanostructures and to complex 3D metallic structures comprising nanoholes. The deposition methods allow the decoration of large ordered arrays of plasmonic structures with single or few layers of MoS2. We show that the plasmonic field generated by the nanohole can interact significantly with the 2D layer, thus representing an ideal system for hybrid 2D-material/plasmonic investigations. The controlled, ordered integration of 2D materials on plasmonic nanostructures opens a pathway towards new investigations of enhanced light emission, strong coupling in plasmonic hybrid structures, hot-electron generation, and 2D-material-based sensors in general.

Concrete-steel composite structures are very efficient in carrying high loads as they combine the benefits of both materials, concrete and steel. Their combination can significantly improve the strength of the composite structure by taking advantage of the high compression resistance of concrete and the high tensile strength of steel. Recently, there has been renewed interest in composite structures used in different forms, such as beams, slabs, sandwich structures and columns, and many methods of structural analysis have been utilised. However, none of them was able to eliminate the concrete material once it fractured. The presented work concerns circular concrete-filled steel tube (CFST) composite columns under eccentric compression. The principal objective of the project was to investigate a straightforward method, based on finite element analysis, for estimating the load carrying capacity of such columns. This study also set out to determine whether the Drucker-Prager material model of concrete, without a crack capability, could be used for analyses of CFST columns with the additional elimination of the concrete material once it is damaged. The elaborated finite element model was verified against existing test data from the literature. The findings show an excellent correlation between the test results and the numerical analysis, confirming the feasibility of the proposed method for the assessment of complex cases of CFST columns. A new part of the work is the employment of an element death feature to eliminate concrete material, which theoretically carries no load after reaching its tensile strength. The criterion for eliminating elements from the model is a maximum principal stress greater than the tensile strength. The obtained results are excellent, and the established goal was fully met.
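The element-death criterion described above, eliminating an element once its maximum principal stress exceeds the concrete's tensile strength, can be sketched for the plane-stress case. The stress components and tensile strength below are illustrative values, not data from the study:

```python
import math

def max_principal_stress(sx, sy, txy):
    # plane-stress principal stress from normal stresses (sx, sy)
    # and shear stress txy, via Mohr's circle: center + radius
    center = (sx + sy) / 2
    radius = math.sqrt(((sx - sy) / 2) ** 2 + txy ** 2)
    return center + radius

def should_kill(sx, sy, txy, f_t):
    # deactivate ("kill") the element once the maximum principal
    # stress exceeds the concrete tensile strength f_t (in MPa here)
    return max_principal_stress(sx, sy, txy) > f_t

# example: element in a combined stress state, assumed f_t = 3.0 MPa
kill = should_kill(sx=2.0, sy=-1.0, txy=1.5, f_t=3.0)
```

In a finite element run this check would be evaluated per element per load step, with killed elements contributing no further stiffness, which mirrors the "no load after tensile failure" assumption stated above.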

in Robotix-Academy Conference for Industrial Robotics 2018 (2018, November 01)

Human-robot interaction technologies are growing rapidly in Industry 4.0 and modern manufacturing. Off-line robot programming methods such as Augmented Reality (AR) can save time and money as well as improve programming and repair tasks. This paper is a study of the use of AR in smart factories.

in 12th International Symposium on Empirical Software Engineering and Measurement (ESEM'18) (2018, October 11)

Background: Code is repetitive and predictable in a way that is similar to natural language. This means that code is ``natural'' and this ``naturalness'' can be captured by natural language modelling techniques. Such models promise to capture the program semantics and identify source code parts that `smell', i.e., that are strange, badly written and generally error-prone (likely to be defective). Aims: We investigate the use of natural language modelling techniques in mutation testing (a testing technique that uses artificial faults). We thus seek to identify how well artificial faults simulate real ones and, ultimately, to understand how natural artificial faults can be. Our intuition is that natural mutants, i.e., mutants that are predictable (follow the implicit coding norms of developers), are semantically useful and generally valuable (to testers). We also expect mutants located in unnatural code locations (which are generally linked with error-proneness) to be of higher value than those located in natural code locations. Method: Based on this idea, we propose mutant selection strategies that rank mutants according to a) their naturalness (naturalness of the mutated code), b) the naturalness of their locations (naturalness of the original program statements) and c) their impact on the naturalness of the code they apply to (naturalness differences between original and mutated statements). We empirically evaluate these strategies on a benchmark set of 5 open-source projects, involving more than 100k mutants and 230 real faults. Based on the fault set we estimate the utility (i.e., capability to reveal faults) of mutants selected on the basis of their naturalness, and compare it against the utility of randomly selected mutants. Results: Our analysis shows that there is no link between naturalness and the fault revelation utility of mutants.
We also demonstrate that naturalness-based mutant selection performs similarly to (in fact slightly worse than) random mutant selection. Conclusions: Our findings are negative, but we consider them interesting as they refute a strong intuition: fault revelation turns out to be independent of the mutants' naturalness.
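The three selection strategies can be sketched with a stand-in naturalness score. In the study this role is played by a language model over code; the token probabilities below are purely illustrative placeholders:

```python
import math

# hypothetical token probabilities standing in for an n-gram language model
TOKEN_PROB = {"return": 0.4, "if": 0.3, "x": 0.2, "0": 0.05}

def cross_entropy(statement):
    # average negative log-probability: higher means "less natural" code
    toks = statement.split()
    return -sum(math.log(TOKEN_PROB.get(t, 0.01)) for t in toks) / len(toks)

def rank_mutants(mutants, strategy):
    # each mutant is a pair (original_statement, mutated_statement)
    if strategy == "mutant":       # a) naturalness of the mutated code
        key = lambda m: cross_entropy(m[1])
    elif strategy == "location":   # b) naturalness of the original statement
        key = lambda m: cross_entropy(m[0])
    else:                          # c) impact of the mutation on naturalness
        key = lambda m: cross_entropy(m[1]) - cross_entropy(m[0])
    return sorted(mutants, key=key, reverse=True)

mutants = [("return x", "return 0"), ("if x", "if return")]
ranked = rank_mutants(mutants, "impact")  # most naturalness-degrading first
```

Under the "impact" strategy, the mutant that replaces a common token with a rare one rises to the top; the study's negative result is precisely that such rankings do not translate into better fault revelation.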

in Movement Disorders : Official Journal of the Movement Disorder Society (2018, October 03), 33(S2), 525

Objective: To leverage a community of researchers and shared wearable data to develop algorithms that estimate the severity of PD-specific symptoms. Background: People with Parkinson’s disease (PwPD) often experience fluctuations in motor symptom severity. Wearable sensors have the potential to help clinicians monitor symptoms over time, outside the clinic. However, to gather accurate and clinically relevant measures, there is a need to develop robust algorithms based on clinically labelled data. Methods: The Levodopa Response Trial captured three-axis acceleration from two wrist-worn sensors and a smartphone located at the waist from 29 PwPD continuously over 4 days. On day 1, in an in-clinic visit, participants performed clinical assessments and motor tasks on their regular medication regimen. During these visits, a clinician also provided symptom severity scores for tremor, bradykinesia, and dyskinesia. On days 2 and 3, sensor data were collected while participants were at home. On day 4, participants returned to the clinic for the same assessments as on day 1, but arrived without having taken their medication for at least 10 hours. Leveraging this dataset, Sage Bionetworks, the Michael J. Fox Foundation and the Robert Wood Johnson Foundation launched the PD Digital Biomarker DREAM Challenge, which made a subset of the data available to researchers to develop robust and accurate algorithms for the estimation of specific symptoms’ severity. Results: Teams participating in the challenge used several technical approaches, from signal processing to deep learning. For the estimation of action tremor severity, 35 submissions were received, achieving areas under the precision-recall curve (AUPR) from 0.444 to 0.75. For dyskinesia during movement, 37 submissions were received, achieving AUPRs from 0.175 to 0.477. Finally, 39 submissions were received for the estimation of bradykinesia, achieving AUPRs from 0.413 to 0.95.
Null expectations for the testing datasets were 0.432, 0.195, and 0.266, respectively. Conclusions: Making datasets available to the community leverages the creativity of different groups to develop robust and accurate algorithms for the estimation of PD symptom severity. This will lead to better quality and interpretability of data collected in unsupervised settings within the community.
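The AUPR metric used to score submissions can be computed from first principles by stepping through predictions sorted by decreasing score and integrating precision over recall step-wise. The scores and labels below are illustrative, not challenge data:

```python
def aupr(scores, labels):
    # area under the precision-recall curve via step-wise integration:
    # walk predictions from highest to lowest score, accumulating
    # precision * (increase in recall) at each step
    pairs = sorted(zip(scores, labels), reverse=True)
    n_pos = sum(labels)
    tp = fp = 0
    area, prev_recall = 0.0, 0.0
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / n_pos
        area += precision * (recall - prev_recall)
        prev_recall = recall
    return area

perfect = aupr([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])  # all positives ranked first
```

A perfect ranking gives an AUPR of 1.0, while the "null expectations" quoted above correspond roughly to the positive-class prevalence in each test set, which is the baseline a scoreless classifier would achieve.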

Objectives: In this study, we review the evidence and discuss how digitalization affects community health worker (CHW) programs for tackling non-communicable diseases (NCDs) in low- and middle-income countries (LMICs). Methods: We conducted a literature review covering two databases, PubMed and Embase. A total of 97 articles were abstracted for full-text review, of which 21 are included in the analysis. Existing theories were used to construct a conceptual framework for understanding how digitalization affects the prospects of CHW programs for NCDs. Results: We identified three benefits and three challenges of digitalization. Firstly, it will help improve the access to and quality of services, notwithstanding its higher establishment and maintenance costs. Secondly, it will add efficiency in training and personnel management. Thirdly, it will leverage the data generated across grass-roots platforms to further research and evaluation. The challenges posed relate to funding, the health literacy of CHWs, and systemic challenges related to motivating CHWs. More than 60 digital platforms were identified, including mobile-based networking devices (used for behavioral change communication), web applications (used for contact tracking, reminder systems, adherence tracing, data collection, and decision support), videoconferencing (used for decision support) and mobile applications (used for reminder systems, supervision, patient management, hearing screening, and tele-consultation). Conclusion: The digitalization efforts of CHW programs face many challenges, yet rapid technological penetration and acceptability, coupled with a gradual fall in costs, are encouraging signals for LMICs. Neither CHW interventions nor digital technologies are inexpensive, but together they may provide better value for the money.

in Proceedings of 10th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes, SAFEPROCESS 2018 (2018, August 29)

In the present paper, a model-based fault/attack-tolerant scheme is proposed to deal with cyber-threats on Cyber-Physical Systems. A common scheme based on observers is designed, and a state feedback control based on an event-triggered framework is given, with control synthesis and a condition on the switching time. An event-based implementation is proposed in order to achieve a novel security strategy. Observer and controller gains are deduced by solving a sufficient Bilinear Matrix Inequality (BMI) condition. Simulation results on a real-time laboratory three-tank system show the attack-tolerant control ability despite data deception attacks on both actuators and sensors.

In this work we present E-EVM, a tool that emulates and visualises the execution of smart contracts on the Ethereum Virtual Machine. By working with the readily available bytecode of smart contracts, we are able to display the program's control flow graph, opcodes and stack for each step of contract execution. This tool is designed to aid the user's understanding of the Ethereum Virtual Machine as well as the analysis of any given smart contract. As such, it functions as both an analysis and a learning tool. It allows the user to view the code in each block of a smart contract and follow possible control flow branches. It is able to detect loops and suggest optimisation candidates, and it is possible to step through a contract one opcode at a time. E-EVM achieved an average of 85.6% code coverage when tested.
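The per-opcode stepping that E-EVM visualises can be illustrated with a toy stack machine. The opcode subset and encoding below are simplified stand-ins for illustration, not the real EVM instruction format:

```python
def step(stack, op, arg=None):
    # execute one opcode against the current stack and return the new stack
    if op == "PUSH":
        return stack + [arg]
    if op == "ADD":
        a, b = stack[-1], stack[-2]
        return stack[:-2] + [a + b]
    if op == "DUP":
        return stack + [stack[-1]]
    if op == "POP":
        return stack[:-1]
    raise ValueError(f"unknown opcode {op}")

def run(program):
    # program: list of (opcode, optional argument) pairs; record the
    # stack snapshot after each step, as a stepping debugger would show
    stack, trace = [], []
    for op, arg in program:
        stack = step(stack, op, arg)
        trace.append(list(stack))
    return trace

trace = run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("DUP", None)])
```

A visualiser like E-EVM essentially renders such a trace alongside the control flow graph, one snapshot per executed opcode.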

Coordination and integration of different traffic control policies have been of considerable research interest over the last decades and have recently been the object of large-scale implementation trials. In the setting of peri-urban motorway systems, however, coordination of various kinds of controllers must be accompanied by accurate prediction of the propagation of flows and queues in the network, as well as of the users’ response in terms of route choice. In this paper, we showcase through a real-life case study how coordination and prediction are both essential when performing hybrid urban-motorway control. Simulation results of a Model Predictive Control application are compared to simpler local control approaches, and the impact of coordinated intersection control and, additionally, Ramp Metering is evaluated.

in Proceedings of the 2018 American Control Conference (2018, June 27)

A decoupling approach for state estimation of nonlinear systems represented in the polytopic Takagi-Sugeno form with unmeasurable premise variables subject to unknown inputs is proposed in this paper. The idea consists in defining state and unknown-input transformations that divide the state vector into two parts: a measurable part and an observable part decoupled from the unknown input. A classical Luenberger observer estimating the unmeasurable part is then designed, with conditions given in terms of Linear Matrix Inequalities (LMIs). A numerical example illustrates the proposed approach.
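For reference, a classical Luenberger observer of the kind mentioned above has the familiar generic linear form (a textbook sketch, not the paper's exact polytopic Takagi-Sugeno construction):

```latex
\dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + L\left(y(t) - C\hat{x}(t)\right),
\qquad e(t) := x(t) - \hat{x}(t) \;\Longrightarrow\; \dot{e}(t) = (A - LC)\,e(t),
```

where the gain $L$ is chosen (in this setting via LMI conditions) so that $A - LC$ is Hurwitz and the estimation error $e(t)$ decays to zero.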

in Proceedings of the 2018 American Control Conference (2018, June 27)

In the present paper, the problem of networked control system (NCS) cyber security is considered. The geometric approach is used to evaluate the security and vulnerability level of the controlled system. The proposed results concern so-called false data injection attacks and show how imperfectly known disturbances can be used to perform undetectable, or at least stealthy, attacks that make the NCS vulnerable to malicious outsiders. A numerical example is given to illustrate the approach.

Fully autonomous driving is one, if not the, killer application for the upcoming decade of real-time systems. However, in the presence of increasingly sophisticated attacks by highly skilled and well-equipped adversarial teams, autonomous driving must not only guarantee timeliness and hence safety. It must also consider the dependability of the software concerning these properties while the system is facing attacks. For distributed systems, fault- and intrusion-tolerance toolboxes already offer a few solutions to tolerate partial compromise of the system behind a majority of healthy components operating in consensus. In this paper, we present a concept for an intrusion-tolerant architecture for autonomous driving. In such a scenario, predictability and recovery challenges arise from the inclusion of increasingly complex software on increasingly less predictable hardware. We highlight how an intrusion-tolerant design can help solve these issues by allowing timeliness to emerge from a majority of complex components being fast enough, often enough, while preserving safety under attack through pre-computed fail-safes.
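The "majority of healthy components" principle can be sketched as a simple replicated-output voter: with 2f+1 replicas, up to f compromised outputs are outvoted. The replica outputs and threshold below are illustrative, not the paper's architecture:

```python
from collections import Counter

def majority_output(outputs):
    # outputs: one value per replica; accept only a value backed by a
    # strict majority, so up to f faulty replicas out of 2f+1 are outvoted
    value, count = Counter(outputs).most_common(1)[0]
    if count > len(outputs) // 2:
        return value
    raise RuntimeError("no majority: too many faulty replicas")

# 2f+1 = 5 replicas tolerate f = 2 compromised outputs
result = majority_output(["brake", "brake", "brake", "accelerate", "accelerate"])
```

Timeliness then hinges on a majority of replicas answering fast enough, often enough, which is exactly the emergent-timeliness argument made above.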

This position paper lays out current and future studies which we conduct on the UX aspects of security and privacy, our goal being to understand which factors influence privacy-related decision-making. We advocate using UX design methods in order to study interindividual differences, system-related and contextual factors involved in privacy and security attitudes and behaviors. These results will contribute to user-tailored and personalized privacy initiatives and guide the design of future technologies.

Autonomous vehicles have the potential to fundamentally change existing transportation systems. Beyond legal concerns, these societal evolutions will critically depend on user acceptance. As an emerging mode of public transportation [7], autonomous mobility on demand (AMoD) is of particular interest in this context. The aim of the present study is to identify the main components of acceptability (before first use) and acceptance (after first use) of AMoD, following a user experience (UX) framework. To address this goal, we conducted three workshops (N=14) involving open discussions and a ride in an experimental autonomous shuttle. Using a mixed-methods approach, we measured pre-immersion acceptability before immersing the participants in an on-demand transport scenario, and eventually measured post-immersion acceptance of AMoD. Results show that participants were reassured about safety concerns; however, they perceived the AMoD experience as ineffective. Our findings highlight key factors to be taken into account when designing AMoD experiences.

Until recently, organic vapor sensors using liquid crystals (LCs) have employed rigid glass substrates for confining the LC, and bulky equipment for vapor detection. Previously, we demonstrated that coaxially electrospinning nematic LC within the core of polymer fibers provides an alternative and improved form factor for confinement. This enables ppm-level sensitivity to harmful industrial organics, such as toluene, while giving the flexibility of textile-like sheets (imparted by the polymer encapsulation). Moreover, toluene vapor responses of the …

The functional interpretation of genetic variation in disease-associated genes is far outpaced by data generation. Existing algorithms for the prediction of variant consequences do not adequately distinguish pathogenic variants from benign rare variants. This lack of statistical and bioinformatics analyses, accompanied by an ever-increasing number of identified variants in biomedical research and clinical applications, has become a major challenge. Established methods to predict the functional effect of genetic variation use the degree of amino acid conservation across species in linear protein sequence alignments. More recent methods include the spatial distribution pattern of known patient and control variants. Here, we propose to combine linear conservation-based and spatial constraint-based scores to devise a novel score that incorporates 3-dimensional structural properties of amino acid residues, such as solvent-accessible surface area, degree of flexibility, secondary structure propensity and binding tendency, to quantify the effect of amino acid substitutions. For this study, we develop a framework for large-scale mapping of established linear sequence-based paralog and ortholog conservation scores onto the tertiary structures of human proteins. This framework can be used to map the spatial distribution of mutations onto solved protein structures as well as homology models. As a proof of concept, using a homology model of the human Nav1.2 voltage-gated sodium channel, we observe spatial clustering, in distinct domains, of mutations associated with Autism Spectrum Disorder (>20 variants) and Epilepsy (>100 variants) that exert opposing effects on channel function. We are currently characterizing all variants (>300k individuals) found in ClinVar, the largest disease variant database, as well as variants identified in >140k individuals from the general population.
The variant mapping framework and our structurally informed score will be useful in identifying structural motifs of proteins associated with disease risk.

Fulfilling the legal requirements of mandated disclosure is a challenge in many contexts. Privacy communication is no exception, especially for those who seek to effectively inform individuals about the use of their data. Lawyers across countries and industries face recurring problems when (re)writing privacy notices and terms. Visual and interactive design patterns have been suggested as the solution, yet our analysis shows that they are absent from most privacy policies. This indicates the need for standardization and an actionable pattern library, which we propose in this paper.

Must the participant in public discourse have knowledge of her beliefs, attitudes and reasons, as well as her belief-formation processes, in order to have justified political belief? In this paper, we test this question with reference to Jeffrey Stout’s (2004) approach to public discourse and public philosophy. After defining self-knowledge and justification along the lines of James Pryor (2004), we map thereon Stout’s view of public discourse and public philosophy as democratic piety, earnest storytelling and Brandomian expressive rationality. We then lay out Brian Leiter’s (2016) naturalistic critique of public philosophy as “discursive hygiene” to see whether Stoutian public philosophy survives the former’s emotivist-tribalist gauntlet. Lastly, we find that Leiter’s critique proves less radical than it may appear and requires the moderating influence of a public philosophy like Stout’s. All in all, Stoutian public discourse and public philosophy powerfully illustrate a strong, necessary connection between self-knowledge and political justification. Post-truth is not post-justification.

In formal (abstract and structured) argumentation theory, a central notion is that of an attack between a counterargument and the argument it challenges. Unlike the notion of an inconsistency between two statements in classical logic, this notion of an attack between arguments can be asymmetric, i.e. an argument A can attack an argument B without B attacking A. While this property of the formal systems studied by argumentation theorists has been motivated by considerations about the human practice of argumentation in natural language, there have not been any systematic studies on the connection between the directionality of attacks in argumentation-theoretic formalisms and the non-symmetric way in which humans actually interpret conflicts between arguments. In this paper, we report on the results of two empirical cognitive studies that aim to fill this gap: one study with ordinary adults (undergraduate students) and one with adult experts in formal argumentation theory. We interpret the results in light of the notions and distinctions defined in the ASPIC+ framework for structured argumentation, and discuss the relevance of our findings to past and future empirical studies on the link between human argumentation and formal argumentation theory.

In abstract argumentation theory, multiple argumentation semantics have been proposed that allow the selection of sets of jointly acceptable arguments from a given set of arguments based on the attack relation between them. The existence of multiple argumentation semantics raises the question of which of these semantics best predicts how humans evaluate arguments, possibly depending on the thematic context of the arguments. In this study, we report on an empirical cognitive study in which we tested how humans evaluate sets of arguments depending on the abstract structure of the attack relation between them. Two pilot studies were performed to validate the intended link between argumentation frameworks and sets of natural language arguments. The main experiment involved a group deliberation phase and made use of three different thematic contexts for the argument sets involved. The data strongly suggest that, independently of the thematic contexts we considered, strong acceptance and strong rejection according to the CF2 and preferred semantics are better predictors of human argument acceptance than the grounded semantics (which is identical to strong acceptance/rejection with respect to the complete semantics). Furthermore, the data suggest that CF2 semantics predicts human argument acceptance better than preferred semantics, but the data for this comparison are limited to a single thematic context.
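For readers unfamiliar with the grounded semantics referred to above: it can be computed by iterating the characteristic function of an argumentation framework to its least fixed point. The following minimal sketch (function and variable names are ours, not from the study) illustrates this for a framework given as a set of arguments and a set of attack pairs:

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function
    F(S) = {a | every attacker of a is attacked by some member of S}."""
    attackers = {a: {b for (b, t) in attacks if t == a} for a in arguments}

    def defended(s, a):
        # a is acceptable w.r.t. s if s counter-attacks all attackers of a
        return all(any((c, b) in attacks for c in s) for b in attackers[a])

    extension = set()
    while True:
        new = {a for a in arguments if defended(extension, a)}
        if new == extension:
            return extension
        extension = new
```

For the chain "A attacks B, B attacks C", the grounded extension is {A, C}: A is unattacked and defends C against B, while a mutual attack between two arguments leaves the grounded extension empty, reflecting the skepticism of this semantics.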

Blockchain is an emerging foundational technology with the potential to create a novel economic and social system. The complexity of the technology poses many challenges, foremost amongst these being the monitoring and management of blockchain-based decentralized applications. In this paper, we design, implement and evaluate a novel system to enable management operations in smart contracts. A key aspect of our system is that it facilitates the integration of these operations through dedicated 'managing' smart contracts that provide data filtering according to the role of the smart contract-based application user. We evaluate the overhead costs of such data filtering operations through post-deployment analyses of five categories of smart contracts on the Ethereum public testnet, Rinkeby. We also build a monitoring tool that displays public blockchain data on a dashboard, coupled with a mechanism that notifies the administrator of the monitored decentralized application of any changes in private data.

Mining pools are collections of workers that collaborate as a group in the proof of work in order to reduce the variance of their mining rewards. To achieve this, mining pools distribute the task of finding a block among the workers so that each works on a different subset of the candidate solutions. In most mining pools, the selection of transactions to be included in the next block is performed by the pool manager, and thus becomes more centralized. A mining pool is expected to give priority to the most lucrative transactions in order to increase the block reward; however, changes to the transaction selection policy made without notifying the workers would be difficult to detect. In this paper, we treat the transaction selection policy performed by miners as a classification problem: for each block we create a dataset, separate the datasets by mining pool, and apply feature selection techniques to extract a vector of importance for each feature. We then track variations in feature importance as new blocks arrive and show, using a generated scenario, how a change in policy by a mining pool could be detected.
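As a toy illustration of the detection idea, and not the paper's actual feature selection pipeline, one can compute a per-block importance vector and flag blocks where it shifts markedly. The importance proxy here (absolute difference of feature means between included and excluded transactions) and all names are hypothetical:

```python
def importance_vector(txs, included, features):
    """Simple per-feature importance proxy: the absolute difference of the
    feature mean between included and excluded transactions."""
    inc = [t for t in txs if t["id"] in included]
    exc = [t for t in txs if t["id"] not in included]

    def mean(rows, f):
        return sum(r[f] for r in rows) / len(rows) if rows else 0.0

    return {f: abs(mean(inc, f) - mean(exc, f)) for f in features}

def policy_changed(prev, curr, threshold=0.5):
    # flag a policy change when any feature's importance shifts markedly
    # between consecutive blocks
    return any(abs(curr[f] - prev[f]) > threshold for f in prev)
```

A pool that suddenly stops prioritizing by fee would show the fee importance dropping sharply between consecutive blocks, which `policy_changed` flags.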

We investigate an architecture where a plasmonic vortex excited in a gold surface propagates on an adiabatically tapered magnetic tip and detaches to the far field while carrying a well-defined optical angular momentum. We analyze the outgoing light and show that, despite the generally high losses of a flat magnetic surface, our 3D structure exhibits high energy throughput. Moreover, we show that once magneto-optical activity is activated inside the magnetic tip, a modulation of the total power transmittance is possible.

In the last several years, computer-based simulation has become an important analysis and design tool in many engineering fields. The common practice involves the use of low-fidelity models, which in most cases are able to provide fairly accurate results while maintaining a low computational cost. However, for complex systems such as nuclear reactors, more detailed models are required for the in-depth analysis of the problem at hand, due for example to the complex geometries of the physical domain. Nevertheless, such models are affected by potentially critical uncertainties and inaccuracies. In this context, the use of data assimilation methods such as the Kalman filter to integrate local experimental data within the numerical model looks very promising as a high-fidelity analysis tool. In this work, the focus is the application of such methods to the fluid-dynamics analysis of the reactor. Indeed, in terms of nuclear reactor investigation, a detailed characterization of the coolant behaviour within the reactor core is of mandatory importance in order to understand, among other things, the operating conditions of the system and the potential occurrence of accident scenarios. In this context, the use of data assimilation methods allows the extraction of information about the thermo-dynamic state of the system during a benchmarked transient in order to increase the fidelity of the computational model. In contrast to the current black-box, control-oriented applications in the nuclear energy community, in this work the integration of the data-driven paradigm into the numerical formulation of the CFD problem is proposed. In particular, the outlined algorithm embeds the Kalman filter into a segregated predictor-corrector formulation, commonly adopted for CFD analysis. By construction, one of the main challenges addressed by the developed method is the preservation of mass conservation for the thermo-dynamic state at each time instant.
As a preliminary verification, the proposed methodology is validated on the lid-driven cavity benchmark. The obtained results highlight the efficiency of the proposed method with respect to the state-of-the-art low-fidelity approach.
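The predictor-corrector structure described above can be illustrated, in a highly simplified scalar form, by a single Kalman filter cycle. This is a generic textbook sketch with illustrative parameter values, not the paper's CFD-embedded formulation (where the predictor is the segregated CFD solver step and the corrector assimilates local experimental data):

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict-correct cycle of a scalar Kalman filter.
    x, P: state estimate and its variance; z: new measurement;
    F: model dynamics, Q: model noise, H: observation map, R: sensor noise."""
    # predictor: propagate the state and its uncertainty through the model
    x_pred = F * x
    P_pred = F * P * F + Q
    # corrector: assimilate the measurement z
    K = P_pred * H / (H * P_pred * H + R)  # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

Repeatedly assimilating a measurement drives the estimate toward it while the variance settles at a small steady-state value, which is the mechanism by which sparse experimental data can raise the fidelity of a cheap model.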

A parallel dual-grid multiscale DEM-VOF coupling is here investigated. Dual-grid multiscale couplings have recently been used to address different engineering problems involving the interaction between granular phases and complex fluid flows. Nevertheless, previous studies did not focus on the parallel performance of such a coupling and were, therefore, limited to relatively small applications. In this contribution, we provide an insight into the performance of the dual-grid multiscale DEM-VOF method for three-phase flows when operated in parallel. In particular, we focus on a well-known benchmark case for three-phase flows and assess the influence of the partitioning algorithm on the scalability of the dual-grid algorithm.

The aim of this work is to compare different turbulence models based on the Reynolds-Averaged Navier-Stokes (RANS) equations in order to find out which model is the most suitable for the study of the channel thermal-hydraulics of the TRIGA Mark II reactor. Only the steady-state behaviour (i.e. the full-power stationary operational conditions) of the reactor has been considered. To this end, the RAS (Reynolds-Averaged Simulation) models available in the open-source CFD software OpenFOAM have been applied to the innermost channel of the TRIGA and assessed against a Large Eddy Simulation (LES) model. The results of the latter approach, expressed in terms of axial velocity, turbulent viscosity, turbulent kinetic energy and temperature, have been compared with the results obtained by the RAS models available in OpenFOAM (k − ε, k − ω and Reynolds Stress Transport). Heat transfer is taken into account as well by means of the turbulent energy diffusivity parameter. The simulation results demonstrate that, amongst the RAS models, the k − ω SST is the one whose results are closest to those of the LES simulation. This model seems to be the best one for the treatment of turbulent flow within the TRIGA subchannel, offering a good compromise between accuracy and computational requirements. Since it is much less expensive than an LES model, it can be applied even to full-core calculations, in order to obtain accurate results with less computational effort.

Cognitive biases are a core component of contemporary cognitive-affective models that seek to explain pain experience, distress and disability in children and adults experiencing pain. The idea that children and adults with pain show cognitive biases for pain-related information, i.e. that they selectively attend to pain-related information at the cost of other information (attentional bias), interpret ambiguous stimuli as pain-related (interpretation bias) or have biased memories of painful events (memory bias), has been particularly influential in this context. Notwithstanding the considerable progress made in the understanding of cognitive biases related to pain and threat, a number of questions remain unanswered and future challenges linger. A first challenge is to further delineate the characteristics of cognitive biases, including their content specificity and dynamics. A second challenge relates to the understanding of how cognitive biases interrelate with each other and possibly reinforce one another. A third challenge relates to the translation of findings on cognitive biases for pain into clear strategies and recommendations to optimize and evaluate pain treatment programs. Presenters in this symposium will address each of the above-mentioned lingering challenges by both critically reviewing the available evidence on cognitive biases in children and/or adults experiencing pain and presenting novel research using innovative study set-ups and unique methods for assessing and modifying cognitive biases in children and adults experiencing pain.

in Proceedings of 7th International Energy and Sustainability Conference (IESC) (2018)

In recent years, the possibility of combining photovoltaics (PV) and solar thermal collectors into one solar hybrid module (PVT-module) has been increasingly investigated. PVT-modules produce thermal and electrical energy at the same time. Since the efficiency of a photovoltaic module decreases with increasing temperature, the temperature of the heat transfer medium is often limited to about 30 °C and the PVT-module is combined with a heat pump, which increases the temperature on the “warm side”. A common approach is to integrate the PVT-module directly as an evaporator in a heat pump system (PVT-direct). This paper presents the development of a control strategy for a PVT-based CO2 heat pump that takes into account solar radiation, ambient temperature, wind speed, evaporator temperature and compressor power. The developed control strategy provides different operating modes depending on the available solar radiation as well as the ambient temperature.

Germline and brain-specific somatic variants have been reported as an underlying cause in patients with epilepsy-associated neuropathologies, including focal cortical dysplasias (FCDs) and long-term epilepsy-associated tumors (LEAT). However, the evaluation of identified neuropathology-associated variants in genetic screens is complex, since not all observed variants contribute to the etiology of neuropathologies, not even in genuinely disease-associated genes. Here, we critically reevaluated the pathogenicity of 12 previously published disease-related genes and of 79 neuropathology-associated missense variants listed in the PubMed and ClinVar databases. We (1) assessed the evolutionary gene constraint using the pLI and missense z scores, (2) applied the latest American College of Medical Genetics and Genomics (ACMG) guidelines, and (3) performed bioinformatic variant pathogenicity prediction analyses using PolyPhen-2, CADD and GERP. Constraint analysis classified only seven out of the 12 genes as likely disease-associated. Furthermore, 78 (89%) of the 88 neuropathology-associated missense variants were classified as being of unknown significance (VUS) and only 10 (11%) as likely pathogenic (LPII). Pathogenicity prediction discriminated both LPII variants and VUS from rare variants observed in individuals in the Genome Aggregation Database (gnomAD). In summary, our results demonstrate that the interpretation of variants associated with neuropathologies is complex, while the application of the current ACMG guidelines, including bioinformatic pathogenicity prediction, can help improve variant evaluation. Furthermore, at the conference we will augment this set of literature-identified variants with results from our variant screen using self-generated deep sequencing data covering >150 candidate genes in >50 patients not yet analyzed.

The authors’ first challenge is to decipher the complexity of Islamic Finance despite the opacity of the sector. A second focal point is the agent’s agenda; in the Islamic Finance industry, contributors mandate intermediaries (agents) to transfer their contributions to social causes according to the Shariah; in principle, Islamic financial institutions must create value for their stakeholders by offering Shariah-compliant products and services. An underlying assumption of agency theory is that agents attempt to maximize their personal welfare and compensation, but such behaviour may not always be in the best interests of other stakeholders, and an analysis of the agent’s agenda can help explain how agents can fall off the pedestal of altruism. Relationships between Islamic banks and three key stakeholders (contributors, beneficiaries and regulators) are also explored via a complexity-aware monitoring process. Contributors provide funds to an Islamic bank (agent), and in return, the agent should be accountable to the contributors, but the form and degree of accountability can vary depending on the organization’s mission. There are many unanswered questions regarding the monitoring process. One objective of the article is to consider whether agents act in the best interests of the stakeholders. Finally, the authors explore the following question: Can blockchain technology and smart contracts support and enhance the transparency feature, which is the core underlying principle of all transactions in the Islamic Finance industry? A qualitative research framework was adopted because of the constraints of the enigmatic, secretive Islamic Finance culture.

in Proceedings of 2017 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM) (2017, December)

In numerous sectors and industries worldwide, there is a trend towards an intercompany and often international division of value creation and related work tasks. To overcome the challenges of complex cross-enterprise supply chain networks, innovative approaches to visualize, assess and enhance value streams are sought. The StreaM method, which is described in this paper, enables a comprehensive analysis, design and planning of cross-company product and information flows on different levels of value stream detail. At the same time, the entire methodology is based on a common understanding of key symbols, parameters and calculation procedures. In addition, the use of the developed StreaM method and the associated model in a case study proves its practical applicability in an industrial setting. In further validation projects, the transfer of the “Standardized cross-enterprise Value Stream Management Method” to other industry sectors is envisaged to continuously improve energy, trade or service processes.

The fast growth of Internet content and the availability of electronic devices such as smartphones and laptops have created an explosive content demand. As one of the 5G technology enablers, caching is a promising technique to off-load the network backhaul and reduce the content delivery delay. Satellite communications provide immense area coverage and high data rates; hence, they can be used for large-scale content placement in the caches. In this work, we propose using a hybrid mono/multi-beam satellite-terrestrial backhaul network for off-line edge caching of cellular base stations in order to reduce the traffic of the terrestrial network. The off-line caching approach comprises a content placement phase and a content delivery phase. The content placement phase is performed based on local and global content popularities, assuming that content popularity follows a Zipf-like distribution. In addition, we propose an approach to generate local content popularities based on a reference Zipf-like distribution in order to preserve the correlation of content popularities. Simulation results show that the hybrid satellite-terrestrial architecture considerably reduces the content placement time compared to the satellite-only method, while keeping the cache hit ratio quite close to the upper bound.
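To make the popularity model concrete, the following sketch (illustrative names and parameter values, not the paper's code) builds a Zipf-like popularity distribution over ranked contents and computes the cache hit ratio obtained by caching the most popular items, the idealized placement when popularity is known:

```python
def zipf_popularity(n_contents, alpha=0.8):
    """Normalized Zipf-like popularity for contents ranked 1..n_contents;
    alpha is the skewness exponent."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_ratio(popularity, cache_size):
    # popularity is sorted by rank, so caching the first cache_size
    # contents is the best placement under known, static popularity
    return sum(popularity[:cache_size])
```

The heavier the skew (larger alpha), the larger the share of requests captured by a small cache of top-ranked contents, which is what makes popularity-driven placement attractive for edge caching.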

in Abstract book of the 20th International Conference on Intelligent Transportation Systems (2017, October)

It is intuitive that there is a causal relationship between human mobility and signaling events in mobile phone networks. Among these events, not only the initiation of calls and data sessions can be used in analyses, but also handovers between different locations, which reflect mobility. In this work, we investigate whether handovers can be used as a proxy metric for flows in the underlying road network, especially in urban environments. More precisely, we show that characteristic profiles of handovers within and between clusters of mobile network cells exist. We base these profiles on models from road traffic flow theory, and show that they can be used for traffic state estimation, using floating-car data as ground truth. The presented model can be beneficial in areas with good mobile network coverage but sparse road traffic counting infrastructure, e.g. in developing countries, but it can also serve as an additional predictor for existing traffic state monitoring systems.

in IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (2017, October)

We present a precoded multi-user communication test-bed that demonstrates forward link interference mitigation techniques in a multi-beam satellite system scenario, enabling a full frequency reuse scheme. The developed test-bed provides an end-to-end precoding demonstration, which includes a transmitter, a multi-beam satellite channel emulator and user receivers. Each of these parts can be reconfigured according to the desired test scenario. Precoded communications allow full frequency reuse in multiple-input multiple-output (MIMO) channel environments, where several coordinated antennas simultaneously transmit to a number of independent receivers. The developed real-time transmission test-bed assists in the demonstration, design and benchmarking of new Symbol-Level Precoding (SLP) techniques, where the data information is used, along with the channel state information, to exploit the multi-user interference and transform it into useful power at the receiver side. The demonstrated SLP techniques are designed to be computationally efficient and can be generalized to other multi-channel interference scenarios.

in Proceedings of the European Conference on Cognitive Ergonomics 2017 (2017, October)

"I hope that this survey is a joke because it made me laugh so much". This quote is just one example of the many negative respondents' reactions gathered during a large-scale user experience (UX) study. Unfortunately, the survey was no joke, but rather a well-constructed and validated standardized UX scale. This paper critically reflects on the use and relevance of standardized UX scales for the evaluation of UX in business contexts. We report on a real-world use case where the meCUE questionnaire was used to assess employees' experience (N=263) with their organization's intranet. Users' strong reactions to the survey's items and our statistical analyses both suggest that the scale is unsuitable for the evaluation of business-oriented systems. Drawing on the description of this inadequacy, we discuss the quality of academic UX tools, calling into question the practical relevance of academic methods.

in Proceedings of the 43rd International Conference on Very Large Data Bases 2017 (2017, August), 10

Due to their promise of delivering real-time network insights, today's streaming analytics platforms are increasingly being used in communications networks, where the impact of the insights goes beyond sentiment and trend analysis to include real-time detection of security attacks and prediction of network state (i.e., whether the network is transitioning towards an outage). Current streaming analytics platforms operate under the assumption that arriving traffic is on the order of kilobytes, produced at very high frequencies. However, communications networks, especially telecommunication networks, challenge this assumption because some of the arriving traffic in these networks is on the order of gigabytes, but produced at medium to low velocities. Furthermore, these large datasets may need to be ingested in their entirety to render network insights in real time. Our interest is to subject today's streaming analytics platforms, constructed from state-of-the-art software components (Kafka, Spark, HDFS, ElasticSearch), to traffic densities observed in such communications networks. We find that filtering on such large datasets is best done at a common upstream point instead of being pushed to, and repeated in, downstream components. To demonstrate the advantages of such an approach, we modify Apache Kafka to perform limited native data transformation and filtering, relieving the downstream Spark application from doing this. Our approach outperforms four prevalent analytics pipeline architectures with negligible overhead compared to standard Kafka.
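The architectural point, filter once at a common upstream stage rather than repeatedly in each downstream component, can be illustrated with a language-agnostic toy (illustrative code, unrelated to Kafka's actual API). Counting predicate evaluations makes the saved work visible as the number of consumers grows:

```python
def run(records, predicate, n_consumers, upstream=True):
    """Deliver filtered records to n_consumers, counting how many times
    the filter predicate is evaluated in total."""
    evaluations = 0

    def counted(r):
        nonlocal evaluations
        evaluations += 1
        return predicate(r)

    if upstream:
        shared = [r for r in records if counted(r)]  # filter once, share
        outputs = [shared] * n_consumers
    else:
        # each downstream consumer repeats the same filtering work
        outputs = [[r for r in records if counted(r)]
                   for _ in range(n_consumers)]
    return outputs, evaluations
```

With four consumers, upstream filtering evaluates the predicate once per record while downstream filtering evaluates it four times per record; with gigabyte-scale records, the repeated evaluations (and the repeated data movement, which this toy does not model) dominate pipeline cost.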

in Proceedings of IEEE International Conference on Communications (ICC) 2017 (2017, July 31)

Cognitive Radio (CR) communication has been considered one of the promising technologies to enable dynamic spectrum sharing in the next generation of wireless networks. Among several possible enabling techniques, Spectrum Sensing (SS) is one of the key aspects for enabling opportunistic spectrum access in CR Networks (CRN). From a practical perspective, it is important to design a low-complexity wideband CR receiver having a low-resolution Analog to Digital Converter (ADC) working at a reasonable sampling rate. In this context, this paper proposes a novel spatio-temporal wideband SS technique employing multiple antennas and one-bit quantization at the CR node, which subsequently enables the use of a reasonable sampling rate. In our analysis, we show that for the same sensing performance requirements, the proposed wideband receiver can have lower power consumption than the conventional CR receiver equipped with a single antenna and a high-resolution ADC. Furthermore, the proposed technique exploits the spatial dimension by estimating the direction of arrival of Primary User (PU) signals, which is not possible with conventional SS methods and can be of significant benefit in a CRN. Moreover, we evaluate the performance of the proposed technique and analyze the effects of one-bit quantization with the help of numerical results.
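A toy numerical sketch shows why one-bit samples still carry detection information across antennas (illustrative only; the two-antenna setup, Gaussian signal model, and sign-correlation detector below are simplifying assumptions, not the paper's receiver): a PU signal common to both antennas raises the cross-antenna correlation of the sign-quantized samples above its noise-only level.

```python
import random, math

random.seed(0)

def one_bit(x):
    """One-bit ADC: keep only the sign of the sample."""
    return 1.0 if x >= 0 else -1.0

def sign_correlation(n_samples, snr, antennas=2):
    """Average cross-antenna correlation of one-bit samples.
    A common PU component (snr > 0) raises it above the noise-only level."""
    acc = 0.0
    for _ in range(n_samples):
        s = random.gauss(0, math.sqrt(snr))  # common PU signal (snr=0: absent)
        q = [one_bit(s + random.gauss(0, 1)) for _ in range(antennas)]
        acc += q[0] * q[1]
    return acc / n_samples

corr_h1 = sign_correlation(20000, snr=1.0)  # PU present: roughly 0.33
corr_h0 = sign_correlation(20000, snr=0.0)  # PU absent: roughly 0.0
print(corr_h1 > corr_h0 + 0.1)  # True
```

The gap follows the arcsine law for quantized Gaussian pairs, E[q1 q2] = (2/pi) arcsin(rho), so the sign correlation is a usable (if compressed) test statistic even after one-bit quantization.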

In this paper, a constructive procedure to design a functional unknown input observer for nonlinear continuous-time systems in the polytopic Takagi-Sugeno framework (also known as multiple-model systems) is proposed. Applying Lyapunov theory, Linear Matrix Inequality (LMI) conditions are deduced and solved for feasibility to obtain the observer design matrices. To reject the effect of the unknown input, the classical decoupling approach from the linear case is used. A comparative study between single and polytopic Lyapunov functions is made in order to prove the relaxing effect of multiple functions. A solver-based solution is then proposed. It is shown through an application example (quadrotor aerial robot landing) that even if the proposed LMI solver-based solution may look conservative, an adequate choice of solver makes it suitable for the application of the proposed approach.
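For orientation, the standard Lyapunov-based LMI that such constructive procedures build on can be written as follows (a generic continuous-time form, not the paper's exact polytopic unknown-input conditions):

```latex
% With estimation-error dynamics \dot{e} = (A - LC)\,e and V(e) = e^{\top} P e,
% quadratic stability of the observer holds if there exist P and Y such that
P \succ 0, \qquad
A^{\top}P + PA - C^{\top}Y^{\top} - YC \prec 0, \qquad
L = P^{-1}Y .
% In the polytopic Takagi-Sugeno case the inequality is imposed at every
% vertex (A_i, C_i), with either a single P or multiple (polytopic) Lyapunov
% matrices -- the relaxation compared in the paper.
```

The substitution Y = PL is what turns the bilinear stability condition into an LMI that standard semidefinite solvers can check for feasibility.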

in Proceedings of 2017 5th IEEE International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS) (2017, June 28)

In this paper, the authors present a Two-Step approach that sequentially adjusts generation and distribution values of the (dynamic) OD matrix. While the proposed methodology has already provided excellent results for updating demand flows on a motorway, the aim of this paper is to validate this conclusion on a real network: Luxembourg City. This network represents a typical middle-sized European city in terms of network dimension. Moreover, Luxembourg City has the typical structure of a metropolitan area, composed of a city centre, a ring, and suburban areas. An innovative element of this paper is the use of mobile network data to create a time-dependent profile of the demand generated inside and outside the ring. To support the claim that the model is ready for practical implementation, it is interfaced with PTV Visum, one of the most widely adopted software tools for traffic analysis. Results of these experiments provide solid empirical ground for further developing this model and for understanding whether its assumptions hold for urban scenarios.
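The sequential generation/distribution logic can be sketched on a toy OD matrix (an illustration of the two-step idea only, not the authors' estimator): the first step rescales each origin's generation total, the second re-balances destination shares while preserving the adjusted generation.

```python
def adjust(od, gen_target, dist_target):
    """Two-step OD adjustment sketch: generation first, then distribution."""
    # Step 1: generation -- scale each origin row to its target total.
    od = [[c * gen_target[i] / sum(row) for c in row]
          for i, row in enumerate(od)]
    # Step 2: distribution -- scale columns to the target destination shares,
    # then re-normalize rows so the adjusted generation totals are preserved.
    col = [sum(od[i][j] for i in range(len(od))) for j in range(len(od[0]))]
    tot = sum(col)
    od = [[od[i][j] * dist_target[j] * tot / col[j] for j in range(len(row))]
          for i, row in enumerate(od)]
    od = [[c * gen_target[i] / sum(row) for c in row]
          for i, row in enumerate(od)]
    return od

od = [[10.0, 30.0], [20.0, 40.0]]                       # hypothetical seed OD
adjusted = adjust(od, gen_target=[50.0, 50.0], dist_target=[0.4, 0.6])
print([round(sum(r), 1) for r in adjusted])  # [50.0, 50.0]
```

In practice the generation targets would come from the time-dependent mobile-network-data profiles, and the distribution step would be iterated against observed link flows.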

Objective: To unravel the genetic factors that play a role in Parkinson's disease (PD), we used the whole-exome sequencing data available as part of the Parkinson Progression Markers Initiative (PPMI). Background: PD is a complex disease. Besides variants in high-risk genes such as LRRK2 and PARK2, multiple genes associated with sporadic PD were discovered via genome-wide association studies. Yet a large number of genetic factors remain to be deciphered. Methods: We analysed the PPMI whole-exome sequencing data, a dataset comprising 435 PD cases and 162 ethnically matched controls. We performed burden tests at the single-variant, gene, and gene-set levels on common and rare exonic and splice variants. We also assessed the severity of rare, highly deleterious variants (CADD Phred score > 30) using the CADD score, as well as singleton rare variants (variants seen in only one individual across cases and controls). Additionally, we performed functional enrichment analysis on the genes harboring rare, highly deleterious variants present only in cases ("case-unique" genes). Results: We observed an increased mutational burden of singleton variants in PD cases compared to controls for nonsynonymous+LOF variants (empirical P-value 0.005) but not for synonymous variants (empirical P-value 0.09). We observed a significantly higher burden (P-value 0.028) as well as significantly higher severity (empirical P-value 0.027) of rare, highly deleterious nonsynonymous variants, but not of synonymous variants, in the candidate genes (P-value 0.686 and empirical P-value 0.556 for burden and severity, respectively). The network analysis of case-unique genes showed a significant increase in connectivity compared to random networks (P-value 0.0002).
Pathway analysis of those genes showed significant enrichment of pathways and biological processes implicated in nervous system functioning and the etiology of PD. Conclusions: Our study supports the complex-disease notion of PD by highlighting its convoluted architecture, in which case-unique genes including LRRK2 are implicated in several biological processes and pathways related to PD, thereby probing the complex genetics of PD at an exome-wide level.
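The empirical P-values reported above are typically obtained by label permutation; a minimal one-sided sketch with made-up singleton counts (not the PPMI data) is:

```python
import random

random.seed(1)

def empirical_p(case_counts, ctrl_counts, n_perm=2000):
    """One-sided empirical P-value for a higher mean variant burden in cases,
    estimated by permuting case/control labels."""
    pooled = case_counts + ctrl_counts
    n_case = len(case_counts)
    observed = (sum(case_counts) / n_case
                - sum(ctrl_counts) / len(ctrl_counts))
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        perm = (sum(pooled[:n_case]) / n_case
                - sum(pooled[n_case:]) / (len(pooled) - n_case))
        if perm >= observed:
            hits += 1
    # +1 correction keeps the estimate away from an impossible P of zero.
    return (hits + 1) / (n_perm + 1)

cases = [3, 4, 2, 5, 4, 3, 4, 5, 3, 4]   # hypothetical singleton counts
ctrls = [2, 1, 3, 2, 2, 1, 2, 3, 2, 2]
p = empirical_p(cases, ctrls)
print(p < 0.05)  # True: cases carry a clearly higher burden
```

Permutation keeps the test valid without assuming a parametric distribution for the burden statistic, which matters for sparse singleton counts.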

in Proceedings of 6th IEEE International Conference on Systems and Control (ICSC) 2017 (2017, May)

In this paper, a step-by-step algorithm is given to design a functional unknown input observer for nonlinear discrete-time systems in the polytopic Takagi-Sugeno framework (also known as multiple-model systems). Applying Lyapunov theory and L2 attenuation, Linear Matrix Inequality (LMI) conditions are deduced and solved for feasibility to obtain the observer design matrices. To reject the effect of the unknown input, the classical decoupling approach from the linear case is used. A solver-based solution is proposed. The novelty of the proposed approach consists in solving both the structural constraints and the LMIs simultaneously, which provides a means for efficiently designing the observer gains. To illustrate the proposed theoretical results, an application example of model-reference tracking control applied to an electro-mechanical model of a motor with a time-varying parameter is discussed.

in Abstract book of the 4th IEEE/ACM International Conference on Mobile Software Engineering and Systems (MobileSoft 2017) (2017, May)

To devise efficient approaches and tools for detecting malicious packages in the Android ecosystem, researchers increasingly need a deep understanding of malware. There is thus a need for a framework for dissecting malware and locating malicious program fragments within app code in order to build a comprehensive dataset of malicious samples. To address this need, we propose in this work a tool-based approach called HookRanker, which provides ranked lists of potentially malicious packages based on the way malware behaviour code is triggered. In experiments on a ground-truth set of piggybacked apps, we were able to automatically locate the malicious packages of piggybacked Android apps with an accuracy of 83.6% when verifying the top five reported items.

In the present paper, a networked control system (NCS) under both cyber and physical attacks is considered. An adapted formulation of the problem under physical attacks, data deception, and false data injection attacks is used for controller synthesis. Based on classical fault tolerant detection (FTD) tools, an observer-based residual generator for attack/fault detection is proposed. An event-triggered and Bilinear Matrix Inequality (BMI) implementation is proposed in order to achieve a novel and better security strategy. The purpose of this implementation is to reduce (limit) the total number of transmissions to only those instances when the NCS needs attention. The main contribution of this paper is to establish an adequate event-triggered and BMI-based methodology so that the particular structure of the mixed attacked/faulty system can be reformulated within the classical FTD paradigm. Experimental results on a pilot three-tank system illustrate the efficiency of the developed approach. The plant model is presented and the proposed control design is applied to the system.
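The event-triggered principle can be reduced to a few lines (a conceptual sketch with hypothetical measurements, not the paper's BMI-based design): a residual between the measurement and the observer estimate is checked against a threshold, and the NCS transmits or raises an alarm only at the instants that need attention.

```python
def event_triggered_residuals(measurements, estimates, threshold):
    """Return (instant, residual) pairs where the residual exceeds
    the threshold -- the only instants requiring transmission."""
    events = []
    for k, (y, y_hat) in enumerate(zip(measurements, estimates)):
        r = abs(y - y_hat)
        if r > threshold:          # transmit only when the residual is large
            events.append((k, round(r, 2)))
    return events

y     = [1.0, 1.1, 1.0, 3.5, 1.2, 1.0, 4.0, 1.1]   # hypothetical tank level
y_hat = [1.0, 1.0, 1.0, 1.0, 1.1, 1.0, 1.0, 1.0]   # observer estimate
events = event_triggered_residuals(y, y_hat, threshold=0.5)
print(events)  # [(3, 2.5), (6, 3.0)] -- only the suspected attack instants
```

Here 2 of 8 samples trigger a transmission; the BMI machinery in the paper is what makes such a threshold rule provably safe for the closed loop.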

Introduction: The project focuses on the integration of device-based assessment (DBA) via a mobile application (mPower) into the longitudinal, deeply phenotyped HELP-PD (Health in the Elderly Luxembourgish Population with a focus on Parkinson's disease) cohort for patients with Parkinsonism in Luxembourg and the Greater Region, in order to monitor the frequency and degree of variation in symptoms of Parkinsonism, to identify potential sources and modulators of variation, and to evaluate how symptoms correlate with these modulators across patients. Methods: We integrate, for the first time, the mPower iOS app into a deeply phenotyped cohort. mPower is one of the first apps to use Apple's ResearchKit framework and combines a traditional survey-based approach with more granular and precise data gained from a person's iPhone through sensor-based (e.g. step count, GPS tracking) or task-based assessments (e.g. finger tapping, tremor detection, sustained phonation, simple gait analysis, memory test). Anonymized longitudinal data are sent to a repository, then retrieved, matched, and correlated with conventional HELP-PD data from a total of 47 screening instruments for motor and non-motor functions in Parkinsonism obtained at annual visits of study participants. 14 patients with clinically confirmed IPD are currently included in the pilot phase. Results/Discussion: We modified the mPower app and successfully integrated it into HELP-PD's novel database infrastructure, allowing for a wide variety of analyses. The reporting system can handle multiple DBAs, with the implementation of an in-depth gait analysis system currently pending. Considerable attention was given to data protection. The system is currently fully functional, with the pilot phase having started in June 2016. First correlations with traditional clinical data are planned for early 2017.

In times of wide availability of yearly mortality information for age and period groups all over the world, we lack tools that detect and graph fine-grained deviations from mortality trends. We provide a new age-period-cohort (APC) based methodology, combining information from age-period (AP) and APC-detrended (APCD) analyses, to detect all-cause mortality increases. Plotting the resulting AP coefficients and APCD residuals in equilateral Lexis diagrams, mortality patterns can easily be distinguished as age, period, or cohort trends and fluctuations. Additionally, we detect abnormalities as interactions of age and period ('big red spots'). We then investigate the 'red spots' of mortality of young-adult cohorts in the early 1990s in Spain, other southern European countries, and the U.S. to delineate their simultaneously occurring public health crises. Additional analyses with WHO mortality data show that the mortality increases are mostly due to increased HIV/AIDS mortality. We discuss possible applications of the new method.
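The detection step can be illustrated on a toy surface (a sketch of the idea, not the full APCD model): removing age and period main effects from a small grid of (log-)mortality rates leaves residuals whose large positive values mark the age-period interactions plotted as 'big red spots'.

```python
def ap_residuals(rates):
    """Residuals after removing the grand mean and the age and period
    main effects from an age x period rate surface."""
    n_age, n_per = len(rates), len(rates[0])
    grand = sum(map(sum, rates)) / (n_age * n_per)
    age_eff = [sum(row) / n_per - grand for row in rates]
    per_eff = [sum(rates[i][j] for i in range(n_age)) / n_age - grand
               for j in range(n_per)]
    return [[rates[i][j] - grand - age_eff[i] - per_eff[j]
             for j in range(n_per)] for i in range(n_age)]

# Hypothetical log-rates with an excess in one age-period cell.
rates = [[1.0, 1.0, 1.0],
         [2.0, 2.8, 2.0],   # bump for young adults in the middle period
         [3.0, 3.0, 3.0]]
res = ap_residuals(rates)
spot = max((res[i][j], i, j) for i in range(3) for j in range(3))
print(spot[1], spot[2])  # 1 1 -- the residual flags the excess cell
```

On real data the same logic runs on far larger Lexis grids, with the cohort (diagonal) dimension handled by the detrended APC model rather than this simple two-way decomposition.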

Explicitly including the dynamics of users' route choice behaviour in optimal traffic control applications has been of interest for researchers in the last five decades. This has been recognized as a very ...

in Proceedings of 2017 International Energy and Sustainability Conference (IESC 2016) (2017)

In recent years, the possibility of combining photovoltaics (PV) and solar thermal collectors into one solar hybrid module (PVT-module) has been increasingly investigated. PVT-modules produce thermal and electrical energy at the same time. Since the efficiency of a photovoltaic module decreases with increasing temperature, the temperature of the heat transfer medium is often limited to about 30 °C and the PVT-module is combined with a heat pump, which increases the temperature on the “warm side”. A common approach is to integrate the PVT-module directly as an evaporator in a heat pump system (PVT-direct). This paper presents a thermal model of a PVT-direct module as the heat source for an R744/CO2 heat pump. Due to the combined effect of flow channel patterns, solar radiation, the ambient conditions, and possible condensation and frost formation, the heat transfer and thermal distribution conditions of the PVT-direct evaporator are inevitably complicated to determine. The proposed thermal model of this hybrid solar module, which has direct CO2 evaporation in microchannels, will be used to simulate the behavior of the module under different climatic operating conditions. Furthermore, it will quantify all energy inputs and/or losses as well as their influence on the total energy supplied by the PVT-module. This will be used to investigate the overall CO2-PVT heat pump system performance in prospective simulations.
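The temperature sensitivity that motivates limiting the medium to about 30 °C follows the standard linear PV efficiency model (a textbook relation with assumed reference values, not the paper's full evaporator model): eta(T) = eta_ref * (1 - beta * (T_cell - T_ref)).

```python
def pv_efficiency(t_cell, eta_ref=0.18, beta=0.004, t_ref=25.0):
    """Linear temperature model of PV electrical efficiency.
    eta_ref, beta, t_ref are typical crystalline-silicon values (assumed)."""
    return eta_ref * (1.0 - beta * (t_cell - t_ref))

# Keeping the heat-transfer medium near 30 C instead of letting the module
# run hot preserves most of the electrical yield.
print(round(pv_efficiency(30.0), 4))  # 0.1764
print(round(pv_efficiency(65.0), 4))  # 0.1512
```

The roughly 14% relative efficiency loss between 30 °C and 65 °C in this sketch is the electrical-side motivation for coupling the PVT-module to a heat pump that lifts the temperature downstream instead.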

Risk treatment is an important part of risk management, and deals with the question of which security controls shall be implemented in order to mitigate risk. Indeed, most notably when the mitigated risk is ...

in Proceedings of SPIE : The International Society for Optical Engineering (2017), 10346

Nanoporous gold is a very promising material platform for several plasmonic applications. It can be formed by dealloying Au–Ag alloys previously grown by means of Ag-Au co-sputtering. The optical response is completely determined by the features of the nanostructured film, which depend only on the initial alloy composition. Nanoporous gold has been extensively used as a SERS substrate, both as a thin film and in nanofabricated designs. Here we explore the potential of nanoporous gold as a SERS substrate when coupled with and decorated by Ag nanoparticles. Significant enhancement has been observed in comparison with the bare nanoporous film.

in Abstract book of the 16th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom) (2017)

App updates and repackaging are recurrent in the Android ecosystem, filling markets with similar apps that must be identified and analysed to accelerate user adoption, improve development efforts, and prevent malware from spreading. Despite the existence of several approaches for improving the scalability of detecting repackaged/cloned apps, researchers and practitioners are eventually faced with the need for a comprehensive pairwise comparison to understand and validate the similarities among apps. This paper describes the design of SimiDroid, a framework for multi-level comparison of Android apps. SimiDroid is built with the aim of supporting the understanding of similarities/changes among app versions and among repackaged apps. In particular, we demonstrate the need for and usefulness of such a framework based on different case studies implementing different analysis scenarios to reveal various insights on how repackaged apps are built. We further show that the similarity comparison plugins implemented in SimiDroid yield more accurate results than the state of the art.
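As a flavour of what a pairwise comparison plugin computes, consider a Jaccard similarity over the component sets of two app versions (an illustrative stand-in with hypothetical class names, not SimiDroid's actual plugins):

```python
def jaccard(a, b):
    """Jaccard similarity of two component sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical component lists extracted from two APKs.
original   = {"com.app.Main", "com.app.Net", "com.app.Ui", "com.app.Db"}
repackaged = {"com.app.Main", "com.app.Net", "com.app.Ui", "com.ads.Push"}
print(round(jaccard(original, repackaged), 2))  # 0.6
```

A multi-level framework applies such signals at several granularities (components, methods, resources) and, unlike a single score, also reports *which* items were added or removed, which is what reveals how a repackaged app was built.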

In this case study we describe the iterative process of paper prototyping, using a board game, to co-design a location-based mobile application. The end goal of the application is to motivate reflection on historical topics about migration. The board game serves to capture the core concerns of this application by simulating movement through the city. Three play tests highlighted the users' interest in and issues with the historical content, the way this content is represented, and the players' responses to the interactions and motivating mechanisms of the application. Results show that the board game helped capture important design preferences and problems, ensuring the improvement of our scenario. This feedback can help reduce development effort and produce a future technology prototype closer to the needs of our end users.

Bitcoin is currently the most popular digital currency. It operates on a decentralised peer-to-peer network using an open source cryptographic protocol. In this work, we create a model of the selection process performed by mining pools on the set of unconfirmed transactions and then attempt to predict whether an unconfirmed transaction will be part of the next block by treating this as a supervised classification problem. We identified a vector of features obtained through service monitoring of the Bitcoin transaction network and performed our experiments on a publicly available dataset of Bitcoin transactions.
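A minimal version of the supervised-classification setup (with a single hypothetical fee-rate feature and made-up labels, not the paper's monitored feature vector) learns a threshold that separates transactions included in the next block from deferred ones:

```python
def fit_threshold(samples):
    """Pick the fee-rate threshold with the best training accuracy."""
    best = (0.0, None)
    for thr in sorted({fee for fee, _ in samples}):
        acc = sum((fee >= thr) == included
                  for fee, included in samples) / len(samples)
        if acc > best[0]:
            best = (acc, thr)
    return best[1]

# (fee_rate, included_in_next_block) -- made-up training examples
train = [(5, False), (8, False), (12, False), (20, True),
         (25, True), (30, True), (40, True), (10, False)]
thr = fit_threshold(train)
predict = lambda fee: fee >= thr
print(thr, predict(22), predict(7))  # 20 True False
```

A real predictor would replace this one-dimensional rule with a classifier over the full monitored feature vector, but the framing is the same: learn from labeled past blocks, then score new unconfirmed transactions.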

Innovative denoising techniques based on the Stationary Wavelet Transform (SWT) have started to be applied to Pulsed Thermography (PT) sequences, showing marked potential for improving defect detection. In this contribution, an SWT-based denoising procedure is performed on high- and low-resolution PT sequences. The samples under test are two composite panels with known defects. The denoising procedure undergoes an optimization step: an innovative criterion for selecting the optimal decomposition level in multi-scale SWT-based denoising is proposed. The approach is based on a comparison, in the wavelet domain, of the information content of the thermal image with the propagated noise. The optimal wavelet basis is selected according to two performance indexes, based respectively on the probability distribution of the information content of the denoised frame and on the Energy-to-Shannon-Entropy ratio. After the optimization step, denoising is applied to the whole thermal sequence. The approximation coefficients at the optimal level are moved to the frequency domain and low-pass filtered. Linear Minimum Mean Square Error (LMMSE) estimation is applied to the detail coefficients at the optimal level. Finally, Pulsed Phase Thermography (PPT) is performed. The performance of the optimized denoising method in improving defect detection capability with respect to the non-denoised case is quantified using the Contrast-to-Noise Ratio (CNR) criterion.

Anorexia nervosa is characterized by fear of weight gain. This is reflected in amygdala activation during confrontation with distorted photographs of oneself simulating weight gain. In contrast, photographs of emaciated women induce startle attenuation, suggesting a positive valuation of extreme slimness. To combine these findings, we applied an affective startle modulation paradigm containing photos of the participants simulating weight gain and photos simulating weight loss. We assessed eye-blink startle responses via EMG in 20 women with anorexia nervosa (AN; mean age = 25 years; mean BMI = 23) and 20 healthy control women (HC; mean age = 25 years; mean BMI = 23). We were able to replicate affective startle modulation of standard positive, negative, and neutral pictures, except for an absence of startle attenuation for positive pictures in AN. Body images did not modulate the startle response in either group. This was in contrast to the subjective ratings, in which the AN group indicated negative valence and high arousal for distorted body images. The body photographs used in our study emphasized general body shape and it appears that this was not threatening to AN patients. Photos highlighting body details might produce different results. Considering that body image exposure, a frequently used intervention tool for AN, aims at fear reduction through habituation, it is essential to determine which aspects of the body actually elicit fear responses to maximize therapy outcome.

In this work, a numerical approach to predicting the behavior of a pure water jet developing inside a nozzle for Abrasive Water Jet Cutting (AWJC) is investigated. In a standard AWJC configuration, the water jet carries the major energy content of the entire system and is responsible for accelerating the abrasive particles that perform the cutting action on hard materials. Therefore, an accurate simulation of a pure water jet can bring significant insight into the overall AWJC process. Capturing the behavior of a multiphase high-speed flow in a complex geometry is, however, particularly challenging. In this work, we adopt a combined approach based on the Volume of Fluid (VOF) and Large Eddy Simulation (LES) techniques in order to capture the water/air interface and to model the turbulent structures of the flow, respectively. The aim of this contribution is to investigate how the two techniques apply to this specific problem and to offer general guidelines for practitioners willing to adopt them. Cost considerations are then presented, with particular reference to the use of the OpenFOAM® environment. The reported results are meant to provide guidance for AWJ applications and future developments of AWJ nozzles.