School of Informatics (http://hdl.handle.net/1842/102)

Understanding mobile network quality and infrastructure with user-side measurements (http://hdl.handle.net/1842/33238, 2019-11-29)
Fida, Mah-Rukh
Measurement collection is a primary step towards analyzing and optimizing the performance
of a telecommunication service. In a Mobile Broadband (MBB) network,
the measurement process has not only to track the network's Quality of Service (QoS)
features but also to assess a user's perspective on its service performance. The latter
requirement leads to "user-side measurements", which assist in discovering the performance
issues that make a user of a service dissatisfied and, ultimately, switch to another
network.
User-side measurements also serve as a first-hand survey of the problem domain. In
this thesis, we exhibit the potential of measurements collected at the network edge by
considering two well-known approaches, namely crowdsourced and distributed testbed-based
measurements. Our primary focus is on exploiting crowdsourced measurements
while dealing with their associated challenges. These challenges include differences
in sampling density across a region, skewed and non-uniform
measurement layouts, inaccuracy in sampling locations, differences in Received Signal Strength (RSS) readings
due to device diversity, and other non-ideal measurement sampling characteristics. In
the presence of these heterogeneous characteristics of user-side measurements, we propose
how to accurately detect mobile coverage holes, how to devise a sample-selection process
that generates a reliable radio map at reduced sampling cost, and how to identify cellular
infrastructure in places where this information is not public. Finally, the thesis unveils
the potential of a distributed measurement testbed in retrieving performance features
from several domains, including the user's context, the service content and network features, and in understanding
the impact of these features on the MBB service at the application layer.
Taking web browsing as a case study, it further presents an objective web-browsing
Quality of Experience (QoE) model.
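The radio-map construction from crowdsourced samples can be illustrated with a minimal sketch: interpolating RSS estimates at unmeasured grid points from scattered measurements. Inverse-distance weighting (IDW) is used here purely as an illustrative stand-in, not the thesis's actual sample-selection or interpolation method, and all names and values are hypothetical.

```python
import math

def idw_radio_map(samples, grid, power=2.0, eps=1e-9):
    """Estimate RSS (dBm) at each grid point from scattered crowdsourced
    samples using inverse-distance weighting: closer samples dominate.

    samples: list of (x, y, rss_dbm) crowdsourced measurements
    grid:    list of (x, y) points where an estimate is wanted
    """
    estimates = []
    for gx, gy in grid:
        num, den = 0.0, 0.0
        for sx, sy, rss in samples:
            d = math.hypot(gx - sx, gy - sy)
            if d < eps:              # grid point coincides with a sample
                num, den = rss, 1.0
                break
            w = 1.0 / d ** power     # inverse-distance weight
            num += w * rss
            den += w
        estimates.append(num / den)
    return estimates

# Toy example: three samples, estimate the RSS at an interior point
samples = [(0, 0, -70.0), (10, 0, -90.0), (0, 10, -80.0)]
print(idw_radio_map(samples, [(3, 3)]))
```

A real radio map would also have to weight samples by location accuracy and correct for device diversity in the RSS readings, which is precisely where the challenges listed above arise.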
Active provenance for data intensive research (http://hdl.handle.net/1842/33181, 2018-11-29)
Spinuso, Alessandro
The role of provenance information in data-intensive research is a significant topic of
discussion among technical experts and scientists. Typical use cases addressing traceability,
versioning and reproducibility of research findings are extended with more
interactive scenarios in support, for instance, of computational steering and results
management. In this thesis we investigate the impact that lineage records can have on
the early phases of the analysis, for instance when performed through near-real-time systems
and Virtual Research Environments (VREs) tailored to the requirements of a specific
community. By positioning provenance at the centre of the computational research
cycle, we highlight the importance of having mechanisms on the data scientists' side
that, by integrating with the abstractions offered by the processing technologies, such
as scientific workflows and data-intensive tools, facilitate the experts' contribution to
the lineage at runtime. Ultimately, by encouraging the tuning and use of provenance for
rapid feedback, the thesis aims to improve the synergy between different user groups,
increasing productivity and understanding of their processes.
We present a provenance model, called S-PROV, that uses and further extends
PROV and ProvONE. The relationships and properties characterising the workflow
abstractions and their concrete executions are re-elaborated to include aspects related
to delegation, distribution and steering of stateful streaming operators. The model is
supported by the Active framework for tuneable and actionable lineage, which ensures the
user's engagement by fostering rapid exploitation. Here, concepts such as provenance
types, configuration and explicit state management allow users to capture complex
provenance scenarios and to activate selective controls based on domain and user-defined
metadata. We outline how the traces are recorded in a new comprehensive system,
called S-ProvFlow, which enables different classes of consumers to explore the provenance
data with services and tools for monitoring, in-depth validation and comprehensive
visual analytics. The work of this thesis is discussed in the context of an existing
computational framework and the experience gained in implementing provenance-aware
tools for seismology and climate VREs. It will continue to evolve through
newly funded projects, thereby providing generic and user-centred solutions for data-intensive
research.
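The idea of capturing lineage at runtime, with user-defined domain metadata attached to each processing step, can be sketched in heavily simplified form as a tracing decorator. This is an illustrative toy, not the S-ProvFlow API; every name here is hypothetical.

```python
import time
import uuid

class LineageStore:
    """Toy in-memory store of lineage records (a stand-in for a
    provenance back-end such as S-ProvFlow; illustrative only)."""
    def __init__(self):
        self.records = []

    def add(self, record):
        self.records.append(record)

def traced(store, metadata_fn=None):
    """Decorator recording, at runtime, each invocation of a processing
    step: its inputs, output, timing, and optional user-defined domain
    metadata (a crude analogue of selective, tuneable lineage)."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.time()
            out = fn(*args, **kwargs)
            record = {
                "id": str(uuid.uuid4()),
                "activity": fn.__name__,
                "inputs": args,
                "output": out,
                "duration_s": time.time() - start,
            }
            if metadata_fn:                    # user-defined metadata hook
                record["metadata"] = metadata_fn(out)
            store.add(record)
            return out
        return inner
    return wrap

store = LineageStore()

@traced(store, metadata_fn=lambda y: {"magnitude_ok": y > 0})
def scale(x):
    return 2 * x

scale(3)
print(store.records[0]["activity"], store.records[0]["output"])
```

In the thesis's setting the traced steps would be stateful streaming operators inside a workflow engine, and the store would be queried by monitoring and visual-analytics tools rather than inspected directly.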
Cryptographic techniques for hardware security (http://hdl.handle.net/1842/33148, 2018-11-29)
Tselekounis, Ioannis
Traditionally, cryptographic algorithms are designed under the so-called black-box model, which considers adversaries that receive black-box access to the hardware implementation. Although a "black-box" treatment covers a wide range of attacks, it fails to capture reality adequately, as real-world adversaries can exploit physical properties of the implementation, mounting attacks that enable unexpected, non-black-box access to the components of the cryptographic system. This type of attack, widely known as a physical attack, has proven to be a significant threat to the real-world security of cryptographic systems.

The present dissertation deals, in part, with the problem of protecting cryptographic memory against physical attacks via the use of non-malleable codes, a notion introduced in preceding work that aims to provide privacy of the encoded data in the presence of adversarial faults. In this thesis we improve the current state of the art on non-malleable codes and provide practical solutions for protecting real-world cryptographic implementations against physical attacks. Our study focuses primarily on the following adversarial models: (i) the extensively studied split-state model, which assumes that the private memory splits into two parts and the adversary tampers with each part independently, and (ii) the model of partial functions, introduced by this thesis, which models adversaries that access arbitrary subsets of codeword locations of bounded cardinality. Our study is comprehensive, covering both one-time and continuous attacks, while for the case of partial functions we achieve a stronger notion of security, which we call non-malleability with manipulation detection: in addition to privacy, it also guarantees integrity of the private data.
Our techniques are also useful for the problem of establishing private, keyless communication over adversarial communication channels. Besides physical attacks, another important concern related to cryptographic hardware security is that the hardware fabrication process is assumed to be trusted. In reality, though, when aiming to minimize production costs, or whenever access to leading-edge manufacturing facilities is required, the fabrication process involves several, potentially malicious, facilities. Consequently, cryptographic hardware is susceptible to so-called hardware Trojans: hardware components maliciously implanted into the original circuitry with the purpose of altering the device's functionality while remaining undetected. Part of the present dissertation deals with the problem of protecting cryptographic hardware against Trojan-injection attacks by (i) proposing a formal model for assessing the security of cryptographic hardware whose production has been partially outsourced to a set of untrusted, and possibly malicious, manufacturers, and (ii) proposing a compiler that transforms any cryptographic circuit into another that can be securely outsourced.
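The split-state setting and the flavour of manipulation detection can be illustrated with a toy sketch: the codeword lives in two independently tampered states, and decoding either returns the original message or flags tampering. This MAC-based construction only conveys the shape of the tampering experiment; it is not a provably non-malleable code and is not the construction of the thesis.

```python
import hashlib
import hmac
import os

def encode(msg: bytes):
    """Toy split-state encoding: the left state holds a random key,
    the right state holds the message plus a MAC tag over it.
    Illustrative of manipulation *detection* only."""
    key = os.urandom(32)
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    return key, msg + tag          # (left state, right state)

def decode(left: bytes, right: bytes):
    """Return the message, or None ("bottom") if tampering is detected."""
    msg, tag = right[:-32], right[-32:]
    expected = hmac.new(left, msg, hashlib.sha256).digest()
    if hmac.compare_digest(expected, tag):
        return msg
    return None

# Tampering experiment: the adversary may modify each state independently
key, blob = encode(b"secret key material")
assert decode(key, blob) == b"secret key material"

tampered = bytes([blob[0] ^ 1]) + blob[1:]   # flip one bit of the message
assert decode(key, tampered) is None          # tampering detected
```

A genuine non-malleable code must additionally guarantee that whatever a tampered codeword decodes to is unrelated to the original message, a property this sketch does not attempt to achieve.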
Neurocomputational model for learning, memory consolidation and schemas (http://hdl.handle.net/1842/33144, 2018-11-29)
Dupuy, Nathalie
This thesis investigates how, through experience, the brain acquires and stores memories,
and uses these to extract and modify knowledge. This question is studied
by both computational and experimental neuroscientists, as it is relevant not only to neuroscience
but also to artificial systems that need to develop knowledge about the world
from limited, sequential data. It is widely assumed that new memories are initially
stored in the hippocampus, and later are slowly reorganised into distributed cortical
networks that represent knowledge. This memory reorganisation is called systems consolidation.
In recent years, experimental studies have revealed complex hippocampal-neocortical
interactions that have blurred the lines between the two memory systems,
challenging the traditional understanding of memory processes. In particular, the prior
existence of cortical knowledge frameworks (also known as schemas) was found to
speed up learning and consolidation, which is seemingly at odds with previous models
of systems consolidation. However, the underlying mechanisms of this effect are not
known.
In this work, we present a computational framework to explore potential interactions
between the hippocampus, the prefrontal cortex, and associative cortical areas
during learning as well as during sleep. To model the associative cortical areas, where
memories are gradually consolidated, we have implemented an artificial neural network
(a Restricted Boltzmann Machine) to gain insight into potential neural mechanisms
of memory acquisition, recall, and consolidation.
We analyse the network’s properties using two tasks inspired by neuroscience experiments.
The network gradually built a semantic schema in the associative cortical
areas through the consolidation of multiple related memories, a process promoted by
hippocampal-driven replay during sleep. To explain the experimental data we suggest
that, as the neocortical schema develops, the prefrontal cortex extracts characteristics
shared across multiple memories. We call this information meta-schema. In our model,
the semantic schema and meta-schema in the neocortex are used to compute consistency,
conflict and novelty signals. We propose that the prefrontal cortex uses these
signals to modulate memory formation in the hippocampus during learning, which in
turn influences consolidation during sleep replay.
Together, these results provide a theoretical framework that explains experimental findings
and produces predictions for hippocampal-neocortical interactions during learning
and systems consolidation.