Exploring distance properties of large graphs, such as real-world social networks consisting of millions of nodes, is computationally expensive, and recomputing distances in subgraphs of the original graph adds to the cost. One way to avoid this is to use subgraphs in which the distance between any pair of vertices is the same as in the original graph. Such a subgraph is called "isometric". A connected graph is "distance preserving", for which we use the abbreviation dp, if it has an isometric subgraph of every order. In this framework we study dp graphs from both the structural and algorithmic perspectives. First, we study the structural nature of dp graphs. This involves classifying graphs based on the dp property and relating dp graphs to other graph classes. Second, we study the recognition problem for dp graphs. We intend to develop efficient algorithms for finding isometric subgraphs as well as deciding whether a graph is dp.
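To make the definition concrete, here is a minimal sketch (ours, not from the dissertation) that checks whether an induced subgraph is isometric by comparing BFS distances in the subgraph against those in the original graph; a dp graph is then one with at least one such subgraph of every order:

```python
# Minimal sketch: test whether an induced subgraph preserves all pairwise
# distances of the original (unweighted) graph. Adjacency-dict layout and
# function names are our assumptions for illustration.
from collections import deque

def bfs_distances(adj, source):
    """Single-source shortest-path distances via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def induced_subgraph(adj, vertices):
    keep = set(vertices)
    return {u: [v for v in adj[u] if v in keep] for u in keep}

def is_isometric(adj, vertices):
    """True if the subgraph induced by `vertices` preserves all pairwise
    distances of `adj` (a disconnected subgraph can never qualify)."""
    sub = induced_subgraph(adj, vertices)
    for u in sub:
        d_full = bfs_distances(adj, u)
        d_sub = bfs_distances(sub, u)
        for v in sub:
            if d_sub.get(v) != d_full.get(v):
                return False
    return True

# Example: in the 4-cycle a-b-c-d-a, the induced path a-b-c is isometric,
# since dist(a, c) = 2 both in the cycle and in the path.
C4 = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
print(is_isometric(C4, ["a", "b", "c"]))  # True
```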

Species diversity is an important measurement of ecological communities. Scientists believe that there is a strong relationship between species diversity and ecosystem processes. However, efforts to investigate microbial diversity using whole-genome shotgun read data are still scarce. With novel applications of data structures and the development of novel algorithms, we first developed an efficient k-mer counting approach, along with approaches to enable scalable streaming analysis of large and error-prone short-read shotgun data sets. Based on these efforts, we then developed a statistical framework allowing for scalable diversity analysis of large, complex metagenomes without the need for assembly or reference sequences. This method is evaluated on multiple large metagenomes from different environments, such as seawater, the human microbiome, and soil. Given the rapid growth of sequencing data, this method is promising for analyzing highly diverse samples with relatively low computational requirements. Further, as the method does not depend on reference genomes, it also provides opportunities to tackle the large amounts of unknowns we find in metagenomic datasets.
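For illustration, a toy k-mer counter in plain Python follows; the dissertation's streaming approach relies on probabilistic data structures to scale to large, error-prone shotgun data, which this dict-based sketch deliberately omits:

```python
# Toy k-mer counting sketch (ours). Real large-scale counters replace the
# exact Counter with probabilistic structures (e.g., Count-Min sketches).
from collections import Counter

def count_kmers(reads, k):
    """Count canonical k-mers: a k-mer and its reverse complement are
    treated as one, so strand does not matter."""
    comp = str.maketrans("ACGT", "TGCA")
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            rc = kmer.translate(comp)[::-1]
            counts[min(kmer, rc)] += 1  # canonical form
    return counts

reads = ["ACGTACGT", "CGTACGTA"]
print(count_kmers(reads, 4).most_common(3))
```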

Preserving the privacy and integrity of private data has become a core requirement for many distributed systems that span different parties. In these systems, one party may try to compute or aggregate useful information from the private data of other parties. However, this party may not be fully trusted by the other parties; therefore, it is important to design security protocols that preserve such private data. Furthermore, one party may want to query the useful information computed from such private data, but query results may be modified by a malicious party. Thus, it is important to design query protocols such that the integrity of query results can be verified. In this dissertation, we study four important privacy- and integrity-preserving problems for different distributed systems. For two-tiered sensor networks, where storage nodes serve as an intermediate tier between sensors and a sink for storing data and processing queries, we propose SafeQ, a protocol that prevents compromised storage nodes from gaining information from both sensor-collected data and sink-issued queries, while still allowing storage nodes to process queries over encrypted data and the sink to detect compromised storage nodes when they misbehave. For cloud computing, where a cloud provider hosts the data of an organization and returns query results to the customers of the organization, we propose novel privacy- and integrity-preserving schemes for multi-dimensional range queries such that the cloud provider can process encoded queries over encoded data without knowing the actual values, and customers can verify the integrity of query results with high probability. For distributed firewall policies, we propose the first privacy-preserving protocol for cross-domain firewall policy optimization: for any two adjacent firewalls belonging to two different administrative domains, our protocol can identify in each firewall the rules that can be removed because of the other firewall. For network reachability, one of the key factors for capturing end-to-end network behavior and detecting violations of security policies, we propose the first cross-domain privacy-preserving protocol for quantifying network reachability.
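As a rough illustration of the integrity side (not the SafeQ or cloud schemes themselves), the following sketch chains sorted values with HMACs so that a client sharing a key with the data owner can detect when an untrusted server drops items from a range query result; all names and the record layout are our assumptions:

```python
# Illustrative only: HMAC chaining over sorted values lets a client detect
# incomplete range-query answers from an untrusted server.
import hmac, hashlib

KEY = b"owner-secret"  # shared by data owner and client, never the server

def tag(item, nxt):
    return hmac.new(KEY, f"{item}|{nxt}".encode(), hashlib.sha256).hexdigest()

def publish(values):
    """Owner stores (item, successor, tag) records at the untrusted server;
    sentinels make boundary completeness checkable too."""
    s = [-float("inf")] + sorted(values) + [float("inf")]
    return [(s[i], s[i + 1], tag(s[i], s[i + 1])) for i in range(len(s) - 1)]

def answer_range(records, lo, hi):
    """Untrusted server: every record whose (item, successor) interval
    touches [lo, hi]; an honest answer forms one unbroken chain."""
    return [r for r in records if r[1] >= lo and r[0] <= hi]

def verify(lo, hi, reply):
    reply = sorted(reply)
    if not reply or reply[0][0] > lo or reply[-1][1] < hi:
        return False                       # boundaries not covered
    for (a, b, t), (c, _, _) in zip(reply, reply[1:]):
        if b != c:
            return False                   # gap: an item was dropped
    return all(t == tag(a, b) for a, b, t in reply)

records = publish([3, 7, 12, 20])
reply = answer_range(records, 5, 15)
print(verify(5, 15, reply))                               # True
print(verify(5, 15, [r for r in reply if r[0] != 7]))     # False: 7 dropped
```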

Information retrieval addresses the fundamental problem of identifying the objects in a database that satisfy the information needs of users. Facing information overload, the major challenge in search algorithm design is to ensure that useful information can be found both accurately and efficiently in large databases. To address this challenge, different indexing and retrieval methods have been proposed for different types of data, namely sparse data (e.g., documents), dense data (e.g., dense feature vectors), and bag-of-features data (e.g., images represented by local features). For sparse data, inverted indexes and document retrieval models have proved very effective for large-scale retrieval problems. For dense data and bag-of-features data, however, there are still open problems. For example, Locality Sensitive Hashing, a state-of-the-art method for searching high-dimensional vectors, often fails to make a good tradeoff between precision and recall: it tends to achieve high precision with low recall, or vice versa. The bag-of-words model, a popular approach for searching objects represented as bags of features, has limited performance because of the information loss during the quantization procedure. Since the general problem of searching objects represented as dense vectors and bags of features may be too challenging, in this dissertation we focus on near-duplicate search, in which the matched objects are almost identical to the query. By effectively exploiting the statistical properties of near duplicates, we are able to design more effective indexing schemes and search algorithms. Thus, the focus of this dissertation is to design new indexing methods and retrieval algorithms for near-duplicate search in large-scale databases that accurately capture data similarity and deliver more accurate and efficient search. Below, we summarize the main contributions of this dissertation. Our first contribution is a new algorithm for searching near-duplicate bag-of-features data. The proposed algorithm, named random seeding quantization, is more efficient in generating bag-of-words representations for near-duplicate images. The new scheme is motivated by approximating the optimal partial matching between bags of features, and thus produces a bag-of-words representation that captures the true similarities of the data, leading to more accurate and efficient retrieval of bag-of-features data. Our second contribution, termed Random Projection Filtering (RPF), is a search algorithm designed for efficient near-duplicate vector search. By explicitly exploiting the statistical properties of near duplicates, the algorithm projects high-dimensional vectors into a lower-dimensional space and filters out irrelevant items. Our effective filtering procedure makes RPF more accurate and efficient at identifying near-duplicate objects in databases. Our third contribution is to develop and evaluate a new randomized range search algorithm for near-duplicate vectors in high-dimensional spaces, termed Random Projection Search. Different from RPF, this algorithm is suitable for a wider range of applications because it does not require sparsity constraints for high search accuracy. The key idea is to project both the data points and the query point into a one-dimensional space by a random projection, and perform a one-dimensional range search, using binary search, to find the subset of data points that are within the range of a given query. We prove a theoretical guarantee for the proposed algorithm and evaluate its empirical performance on a dataset of 1.1 billion image features.
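A minimal sketch of the one-dimensional projection idea described above follows (our illustration; parameter names and structure are assumed). Because a projection onto a unit vector is 1-Lipschitz, the 1-D window around the projected query can lose only false positives, never true neighbors within range r:

```python
# Random-projection range search sketch: project onto one random unit
# vector, binary-search the sorted projections, then verify candidates.
import bisect
import numpy as np

rng = np.random.default_rng(0)

def build_index(points):
    """Project all points onto a random unit vector; sort the projections."""
    w = rng.standard_normal(points.shape[1])
    w /= np.linalg.norm(w)
    proj = points @ w
    order = np.argsort(proj)
    return w, proj[order], order

def range_search(points, index, query, r):
    """All points within Euclidean distance r of query. By Cauchy-Schwarz,
    |w.(x-q)| <= ||x-q||, so the window [qp-r, qp+r] misses no true hit."""
    w, sorted_proj, order = index
    qp = query @ w
    lo = bisect.bisect_left(sorted_proj, qp - r)
    hi = bisect.bisect_right(sorted_proj, qp + r)
    cand = order[lo:hi]
    keep = np.linalg.norm(points[cand] - query, axis=1) <= r
    return cand[keep]

pts = rng.standard_normal((10000, 64))
q = pts[42] + 0.01 * rng.standard_normal(64)   # a near duplicate of point 42
print(range_search(pts, build_index(pts), q, r=0.5))
```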

Non-protein-coding RNAs (ncRNAs) are RNA molecules that function directly at the level of RNA without being translated into protein. They play important biological roles in all three domains of life, i.e., Eukarya, Bacteria, and Archaea. To understand the working mechanisms and the functions of ncRNAs in various species, a fundamental step is to identify both known and novel ncRNAs from large-scale biological data. Large-scale genomic data includes both genomic sequence data and NGS sequencing data, and both types provide great opportunities for identifying ncRNAs. For genomic sequence data, many ncRNA identification tools based on comparative sequence analysis have been developed. These methods work well for ncRNAs that have strong sequence similarity, but they are not well suited for detecting ncRNAs that are remotely homologous. Next-generation sequencing (NGS), while it opens a new horizon for annotating and understanding known and novel ncRNAs, also introduces many challenges. First, existing genomic sequence search tools cannot be readily applied to NGS data because NGS technology produces short, fragmentary reads. Second, most NGS data sets are large-scale, and existing algorithms are infeasible on them because of high resource requirements. Third, metagenomic sequencing, which uses NGS technology to sequence uncultured, complex microbial communities directly from their natural habitats, further aggravates these difficulties. Thus, the massive amounts of genomic sequence data and NGS data call for efficient algorithms and tools for ncRNA annotation. In this dissertation, I present three computational methods and tools to efficiently identify ncRNAs from large-scale biological data. Chain-RNA is a tool that combines both sequence similarity and structure similarity to locate cross-species conserved RNA elements with low sequence similarity in genomic sequence data. It achieves significantly higher sensitivity in identifying remotely conserved ncRNA elements than sequence-based methods such as BLAST, and is much faster than existing structural alignment tools. miR-PREFeR (miRNA PREdiction From small RNA-Seq data) utilizes expression patterns of miRNA and follows the criteria for plant microRNA annotation to accurately predict plant miRNAs from one or more small RNA-Seq data samples. It is sensitive, accurate, fast, and has a low memory footprint. metaCRISPR focuses on identifying Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) in large-scale metagenomic sequencing data. It uses a k-mer hash table to efficiently detect reads that belong to CRISPRs in the raw metagenomic data set. Overlap-graph-based clustering is then conducted on the reduced data set to separate different CRISPRs. A set of graph-based algorithms is used to assemble and recover CRISPRs from the clusters.
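To illustrate the k-mer hash table filtering step in spirit (a simplification with assumed thresholds, not metaCRISPR's actual pipeline): reads carrying several high-multiplicity k-mers, the signature of the repeated units in a CRISPR array, are kept as candidates:

```python
# Illustrative read filter: CRISPR repeats occur many times, so reads that
# contain several high-count k-mers are likely CRISPR candidates.
from collections import Counter

def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def filter_repeat_reads(reads, k=21, min_count=5, min_hits=3):
    """Keep reads with >= min_hits k-mers seen >= min_count times overall.
    Thresholds are assumptions for this sketch."""
    table = Counter()                      # the k-mer hash table
    for r in reads:
        table.update(kmers(r, k))
    kept = []
    for r in reads:
        hits = sum(1 for km in kmers(r, k) if table[km] >= min_count)
        if hits >= min_hits:
            kept.append(r)
    return kept
```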

Bluetooth wireless technology is today present in billions of smartphones, mobile devices, and portable electronics. With the prevalence of personal Bluetooth devices, a practical Bluetooth traffic sniffer is of increasing interest for two reasons. First, it has been reported that a traffic sniffer is an essential, day-to-day tool for Bluetooth engineers and application developers. Second, as the communication between Bluetooth devices is privacy-sensitive in nature, exploring the possibility of Bluetooth traffic sniffing in practical settings sheds light on potential user privacy leakage. To date, sniffing Bluetooth traffic has been widely considered an extremely intricate task due to Bluetooth's wideband spread spectrum, the pseudo-random frequency hopping adopted at baseband, and the interference in the open 2.4 GHz band. This thesis addresses these challenges by introducing novel traffic sniffers that capture Bluetooth packets in practical environments. In particular, we present BlueEar, the first practical Bluetooth traffic sniffing system that uses only general-purpose, inexpensive wireless platforms. BlueEar features a novel dual-radio architecture in which two inexpensive, Bluetooth-compliant radios coordinate with each other to eavesdrop on hopping subchannels in indiscoverable mode. Statistical models and lightweight machine learning tools are integrated to learn the adaptive hopping behavior of the target. Our results show that BlueEar consistently maintains a packet capture rate higher than 90% in dynamic settings. In addition, we discuss the implications of the BlueEar approach for Bluetooth LE sniffing and present a practical countermeasure that effectively reduces the sniffer's packet capture rate by 70%; the countermeasure can be easily implemented on the Bluetooth master while requiring no modification to slave devices like keyboards and headsets.

Geo-replicated data stores are one of the integral parts of today's Internet services. Service providers usually replicate their data on different data centers worldwide to achieve higher performance and data durability. With this approach, however, the consistency between replicas becomes a concern. At the highest level, we want strong consistency, which provides the illusion of having only a single copy of the data; but strong consistency comes with high performance and availability costs. In this work, we focus on weaker consistency models that allow us to provide high performance and availability while preventing certain inconsistencies. Session guarantees (also known as client-centric consistency models) are one such class of weaker consistency models, preventing some inconsistencies from occurring in a client session. We provide modified versions of session guarantees that, unlike traditional session guarantees, do not cause the problem of slowdown cascade in partitioned systems. We present a protocol that provides session guarantees for eBay NuKV, a key-value store designed for eBay's internal services with high performance and availability requirements. We utilize Hybrid Logical Clocks (HLCs) to provide wait-free write operations while providing session guarantees. Our experiments, done on the eBay cloud platform, show that our protocol does not cause significant overhead compared with eventual consistency. In addition to session guarantees, a large portion of this dissertation is dedicated to causal consistency. Causal consistency is especially interesting, as it has been proven to be the strongest consistency model that allows the system to remain available even during network partitions. We provide the CausalSpartanX protocol that, using HLCs, improves current time-based protocols by eliminating the effect of clock anomalies such as clock skew between servers. CausalSpartanX also supports non-blocking causally consistent read-only transactions that allow applications to read a set of values that are causally consistent with each other. Read-only transactions provide a powerful abstraction that cannot be replaced by a set of basic read operations. CausalSpartanX, like other causal consistency protocols, assumes sticky clients (i.e., clients that never change the replica they access). We prove that if one wants immediate visibility of local updates in a data center, clients have to be sticky. Based on the structure of CausalSpartanX, we provide our Adaptive Causal Consistency Framework (ACCF), a configurable framework that generalizes current consistency protocols. ACCF provides a basis for designing adaptive protocols that constantly monitor the system and clients' usage patterns and adapt themselves to provide better performance and availability. Finally, we present our Distributed Key-Value Framework (DKVF), a framework for rapid prototyping and benchmarking of consistency protocols. DKVF lets protocol designers focus only on their high-level protocols, delegating all lower-level communication and storage tasks to the framework.
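For reference, here is a sketch of the Hybrid Logical Clock rules these protocols build on, following the published HLC algorithm (the class structure itself is ours): timestamps are (l, c) pairs that stay close to physical time while capturing causality, which is what enables wait-free writes:

```python
# Hybrid Logical Clock sketch: l tracks the largest physical time seen,
# c breaks ties among events sharing the same l.
import time

class HLC:
    def __init__(self, now=time.time):
        self.now = now          # physical clock source
        self.l = 0              # logical part
        self.c = 0              # tie-breaking counter

    def send_or_local(self):
        """Timestamp a local event or an outgoing message."""
        l_old = self.l
        self.l = max(l_old, self.now())
        self.c = self.c + 1 if self.l == l_old else 0
        return (self.l, self.c)

    def receive(self, l_m, c_m):
        """Merge an incoming message's timestamp (l_m, c_m)."""
        l_old = self.l
        self.l = max(l_old, l_m, self.now())
        if self.l == l_old == l_m:
            self.c = max(self.c, c_m) + 1
        elif self.l == l_old:
            self.c += 1
        elif self.l == l_m:
            self.c = c_m + 1
        else:
            self.c = 0
        return (self.l, self.c)
```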

"The goal of this thesis is to develop statistical and learning algorithms for the design, analysis, measurement, and modeling of networking and security systems with specific focus on RFID systems, network performance metrics, user security, and software security. Next, I give a brief overview of these four areas of focus." -- Abstract.

The goal of this thesis is to identify measurement, modeling, and optimization opportunities for large-scale networks, with specific focus on cellular networks and online social networks. These networks are facing unprecedented operational challenges due to their very large scale. Cellular networks have experienced an explosive increase in traffic volume over the last few years. This unprecedented increase is attributed to the growing subscriber base, improving network connection speeds, and improving hardware and software capabilities of modern smartphones. In contrast to traditional fixed IP networks, mobile network operators face the constraint of limited radio frequency spectrum at their disposal. As communication technologies evolve beyond 3G to Long Term Evolution (LTE), the competition for the limited radio frequency spectrum is becoming even more intense. Therefore, mobile network operators increasingly focus on optimizing different aspects of the network through customized design and management to improve key performance indicators (KPIs). Online social networks are growing at a very rapid pace while trying to provide more content-rich and interactive services to their users. For instance, Facebook currently has more than 1.2 billion monthly active users and offers news feed, graph search, groups, photo sharing, and messaging services. The information for such a large user base cannot be efficiently and securely managed by traditional database systems, so social network service providers are deploying novel large-scale infrastructure to cope with these scaling challenges. In this thesis, I present novel approaches that tackle these challenges by revisiting the current practices for the design, deployment, and management of large-scale network systems using a combination of theoretical and empirical methods. I take a data-driven approach in which the theoretical and empirical analyses are intertwined. First, I measure and analyze the trends in data and then model the identified trends using suitable parametric models. Finally, I rigorously evaluate the developed models and the resulting system design prototypes using extensive simulations, realistic testbed environments, or real-world deployment. This methodology is used to address several problems related to cellular networks and online social networks.

Evolution is a highly contingent process, where the quality of the solutions produced is affected by many factors. I explore and describe the contributions of three such aspects that influence overall evolutionary potential: the prior history of a population, the type and frequency of mutations that the organisms are subject to, and the composition of the underlying genetic hardware. I have systematically tested changes to a digital evolution system, Avida, measuring evolutionary potential in seven different computational environments that range in the complexity of their underlying fitness landscapes. I have examined the trends and general principles that these measurements demonstrate and used my results to optimize the evolutionary potential of the system, broadly enhancing performance. The results of this work show that history and mutation rate play significant roles in evolutionary potential, but the final fitness levels of populations are remarkably stable to substantial changes in the genetic hardware and a broad range of mutation types.

Finite-state verification (FSV) techniques are intended for proving properties of software systems, and those properties must first be written as formal specifications. Although significant progress has been made over the last decade in automating FSV techniques, the adoption of these techniques by software developers is low. The Specification Pattern System (SPS) is intended to assist users in creating such specifications. It identifies common specification patterns and indicates how to translate the patterns into a variety of different specification languages. However, the patterns in the SPS are defined informally and their translations are not verified. This work discusses the informal nature of these definitions, proposes a formalization for them, and provides formal proofs for the translation of patterns to Linear Temporal Logic.
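For example, two of the best-known SPS pattern-to-LTL mappings (for the "globally" scope, as published by Dwyer et al.) read:

```latex
\begin{align*}
  \textbf{Absence:}  &\quad \Box\,\lnot P
      && \text{$P$ never holds}\\
  \textbf{Response:} &\quad \Box\,(P \rightarrow \Diamond Q)
      && \text{every $P$ is eventually followed by some $Q$}
\end{align*}
```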

The Internet of Things (IoT), which networks versatile devices for information exchange, remote sensing, monitoring, and control, is finding promising applications in nearly every field. However, due to its high density and enormous spectrum requirements, the practical deployment of IoT technology hardly seems feasible until the release of the large millimeter wave (mmWave) band (30 GHz-300 GHz). Compared to existing lower-band systems (such as 3G and 4G), mmWave band signals generally require a line-of-sight (LOS) path and suffer from severe fading effects, leading to a much smaller coverage area. For network design and management, this implies that: (i) the mmWave band alone cannot support IoT networks, but has to be integrated with the existing lower-band systems through secure and effective spectrum sharing, especially in the lower frequency bands; and (ii) IoT networks will have very high-density node distributions, which is a significant challenge in network design, especially with the scarce energy budget of IoT applications. Motivated by these observations, in this dissertation we consider three problems: (1) How can we achieve secure and effective spectrum sharing? (2) How can we accommodate energy-limited IoT devices? (3) How can we evaluate the Quality of Service (QoS) in high-density IoT networks? We aim to develop innovative techniques for the design, evaluation, and management of future IoT networks under both benign and hostile environments. The main contributions of this dissertation are outlined as follows. First, we develop a secure and efficient spectrum sharing scheme for single-carrier wireless networks. Cognitive radio (CR) is a key enabling technology for spectrum sharing, where unoccupied spectrum is identified for secondary users (SUs) without interfering with the primary user (PU). A serious security threat to CR networks is the primary user emulation attack (PUEA), in which a malicious user (MU) emulates the signal characteristics of the PU, thereby causing the SUs to erroneously identify the attacker as the PU. Here, we consider full-band PUEA detection and propose a reliable AES-assisted DTV scheme, where an AES-encrypted reference signal is generated at the DTV transmitter and used as the sync bits of the DTV data frames. For PU detection, we investigate the cross-correlation between the received sequence and the reference sequence. MU detection can be performed by investigating the auto-correlation of the received sequence. We further develop a secure and efficient spectrum sharing scheme for multi-carrier wireless networks. We consider sub-band malicious user detection and propose a secure AES-based DTV scheme, where the existing reference sequence used to generate the pilot symbols in the DVB-T2 frames is encrypted using the AES algorithm. The resulting sequence is exploited for accurate detection of the authorized PU and the MU. Second, we develop an energy-efficient transmission scheme for CR networks using energy harvesting. We propose a transmission scheme for the SUs such that each SU can perform information reception and energy harvesting simultaneously. We perform sum-rate optimization for the SUs under PUEA, and observe that the sum rate of the SU network can be improved significantly with the energy harvesting technique. Potentially, the proposed scheme can be applied directly to energy-constrained IoT networks. Finally, we investigate QoS performance analysis methodologies, which can provide insightful feedback for IoT network design and planning. Taking the spatial randomness of the IoT network into consideration, we investigate the coverage probability (CP) and blocking probability (BP) in relay-assisted OFDMA networks using stochastic geometry. More specifically, we model the inter-cell interference from the neighboring cells at each typical node, and derive the CP for downlink transmissions. Based on their data rate requirements, we classify incoming users into different classes, and calculate the BP using the multi-dimensional loss model.
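A numeric sketch of the correlation-based detection idea follows; the keyed-hash keystream below is a stand-in for the AES construction, and all signal parameters are illustrative assumptions:

```python
# Illustrative detection sketch: correlate received samples against the
# shared (encrypted) reference sequence. A high peak indicates the
# legitimate PU; an attacker without the key cannot reproduce it.
import hashlib
import numpy as np

def reference_bits(key: bytes, n: int) -> np.ndarray:
    """Pseudo-random +/-1 reference sequence from a keyed hash keystream
    (a stand-in PRF, not the AES scheme from the dissertation)."""
    out, ctr = bytearray(), 0
    while len(out) * 8 < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    bits = np.unpackbits(np.frombuffer(bytes(out), dtype=np.uint8))[:n]
    return bits.astype(np.float64) * 2 - 1

def normalized_correlation(received, reference):
    r = received - received.mean()
    denom = np.linalg.norm(r) * np.linalg.norm(reference) + 1e-12
    return float(np.dot(r, reference) / denom)

key = b"shared-secret"
ref = reference_bits(key, 4096)
noise = np.random.default_rng(1).standard_normal(4096)
pu_signal = ref + 0.5 * noise          # legitimate PU embeds the reference
mu_signal = noise                      # emulation attacker cannot
print(normalized_correlation(pu_signal, ref))  # high (~0.9): PU present
print(normalized_correlation(mu_signal, ref))  # near zero: not the PU
```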

The past few decades have witnessed incredible advancements in modeling, digitizing, and visualizing techniques for three-dimensional shapes. Those advancements led to an explosion in the number of three-dimensional models being created for design, manufacture, architecture, medical imaging, etc. At the same time, the structure, function, stability, and dynamics of proteins, subcellular structures, organelles, and multiprotein complexes have emerged as a leading interest in structural biology, another major source of large and complex geometric models. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. We first propose, for tessellated volumes of arbitrary topology, a compact data structure that offers constant-time-complexity incidence queries among cells of any dimension. Our data structure is simple to implement, easy to use, and allows for arbitrary, user-defined 3-cells such as prisms and hexahedra, while remaining highly efficient in memory usage compared to previous work. We also analyze its time complexity for commonly used incidence and adjacency queries such as vertex and edge one-rings. We then introduce a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation for biomolecular complexes. Particular emphasis is given to the modeling of Electron Microscopy Data Bank (EMDB) data and Protein Data Bank (PDB) data. Lagrangian and Cartesian representations are discussed for the surface representation. Based on these representations, practical algorithms are developed for surface area and surface-enclosed volume calculation, and for curvature estimation. Methods for volumetric meshing are also presented. Because technological developments in computer science and mathematics have led to a variety of choices at each stage of the geometric modeling, we discuss the rationales behind the design and selection of the various algorithms. Analytical test models are designed to verify the computational accuracy and convergence of the proposed algorithms. We selected six EMDB datasets and six PDB datasets to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and to explore their capability for geometric characterization of binding targets. Thus, our toolkit offers a comprehensive protocol for the geometric modeling of proteins, subcellular structures, organelles, and multiprotein complexes. Furthermore, we present a method for computing "choking" loops: a set of surface loops that describe the narrowing of the volumes inside/outside of the surface and extend the notion of surface homology and homotopy loops. The intuition behind their definition is that a choking loop represents the region where an offset of the original surface would get pinched. Our generalized loops naturally include the usual 2g handles/tunnels computed based on the topology of the genus-g surface, but also include loops that identify chokepoints or bottlenecks, i.e., boundaries of small membranes separating the inside or outside volume of the surface into disconnected regions. Our definition is based on persistent homology theory, which gives a measure to topological structures, thus providing resilience to noise and a well-defined way to determine topological feature size. Finally, we explore the application of persistent homology theory to protein folding analysis. The extremely complex process of protein folding brings challenges for both experimental study and theoretical modeling. The persistent homology approach studies the Euler characteristics of the protein conformations during the folding process. More precisely, the persistence is measured by the variation of the van der Waals radius, which leads to the change of protein 3D structures and uncovers their interconnectivity. Our results on fullerenes demonstrate the potential of our geometric and topological approach for protein stability analysis.
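As a small, self-contained illustration of two of the geometric measurements above, the following sketch computes total surface area and surface-enclosed volume (via the divergence theorem) for a closed, consistently oriented triangle mesh; the array layout is our assumption:

```python
# Surface area and enclosed volume of a closed triangle mesh.
# V: (n, 3) vertex positions; F: (m, 3) triangle vertex indices,
# oriented consistently outward.
import numpy as np

def surface_area(V, F):
    a = V[F[:, 1]] - V[F[:, 0]]
    b = V[F[:, 2]] - V[F[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

def enclosed_volume(V, F):
    """Sum of signed tetrahedron volumes (origin, v0, v1, v2); by the
    divergence theorem this equals the enclosed volume for a closed,
    outward-oriented mesh."""
    v0, v1, v2 = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    return np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum() / 6.0

# Unit cube sanity check (12 outward-oriented triangles).
V = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
             dtype=float)
F = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
              [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
              [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
print(surface_area(V, F), enclosed_volume(V, F))   # 6.0 1.0
```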

In this dissertation, we concentrate on the problem of automated addition of fault-tolerance, which transforms a fault-intolerant program into a fault-tolerant program. We solve this problem via model repair. Model repair is a correct-by-construction technique that revises an existing model so that the revised model satisfies the given correctness criteria, such as safety, liveness, or fault-tolerance. We consider two problems in using model repair to add fault-tolerance. First, if the repaired model violates the assumptions (e.g., partial observability, inability to detect crashed processes, etc.) made in the underlying system, then it cannot be implemented. We denote these requirements as realizability constraints. Second, the addition of fault-tolerance may fail if the program cannot fully recover after certain faults occur. In this dissertation, we propose a lazy repair approach to address realizability issues in adding fault-tolerance. Additionally, we propose a technique to automatically add graceful degradation to a program, so that the program can recover with partial functionality (identified by the designer as the critical functionality) if full recovery is impossible. A model repair technique transforms a model into another model that satisfies a new set of properties. Such a transformation should also maintain the mapping between the model and the underlying program. For example, in a distributed program, every process is restricted in which variables of other processes it can read (or write). A model that represents this program should likewise disallow the process from reading (or writing) those inaccessible variables. If these constraints are violated, the corresponding model is unrealizable; an unrealizable model (in this context, a model that violates the read/write restrictions) may make it impossible to obtain the corresponding implementation. Resolving realizability constraints increases the complexity of model repair. Existing model repair techniques introduce heuristics to reduce the complexity. However, this heuristic-based approach is designed and optimized specifically for distributed programs; we need a more generic model repair approach for other types of programs, e.g., synchronous programs, cyber-physical programs, etc. Hence, in this dissertation we propose a model repair technique, lazy repair, to add fault-tolerance to programs with different types of realizability constraints. It involves two steps. First, we focus only on repairing to obtain a model that satisfies the correctness criteria while ignoring realizability constraints. In the second step, we repair this model further by removing behaviors, while ensuring that the desired specification is preserved. The lazy repair approach simplifies the process of developing heuristics, and provides a tradeoff between the time saved in the first step and the extra work required in the second step. We demonstrate that lazy repair is applicable in the context of distributed systems, synchronous systems, and cyber-physical systems. In addition, safety-critical systems such as airplanes, automobiles, and elevators should operate with high dependability in the presence of faults. If the occurrence of faults breaks down some components, the system may not be able to fully recover. In this scenario, the system can still operate with the remaining resources and deliver partial but core functionality, i.e., display graceful degradation. Existing model repair approaches, such as addition of fault-tolerance, cannot transform a program to provide graceful degradation. In this dissertation, we propose a technique to add fault-tolerance to a program with graceful degradation. In the absence of faults, such a program exhibits ideal behaviors; in the presence of faults, the program is allowed to recover with reduced functionality. This technique involves two steps: first, it automatically generates a program with graceful degradation based on the input fault-intolerant program; second, it adds fault-tolerance to the output program of the first step. We demonstrate that this technique is applicable in the context of high-atomicity programs as well as low-atomicity programs (i.e., distributed programs). We also present a case study on adding multi-graceful degradation to a dangerous-gas detection and ventilation system. Through this case study, we show that our approach can assist the designer in obtaining a program that behaves like the deployed system.

The networking industry is facing enormous challenges in scaling devices to support the exponential growth of Internet traffic as well as the increasing number of features being implemented inside the network. Algorithmic hardware improvements to networking components have largely been neglected due to the ease of leveraging increased clock frequency and compute power and the risks of implementing complex hardware designs. As clock frequency slows its growth, algorithmic solutions become important to fill the gap between current-generation capability and next-generation requirements. This dissertation presents algorithmic solutions to networking problems in three domains: Deep Packet Inspection (DPI), firewall (and other) ruleset compression, and non-cryptographic hashing. The improvements in DPI are two-pronged. The first is in the area of application-level protocol field extraction, which allows security devices to precisely identify packet fields for targeted validity checks. By using counting automata, we achieve precise parsing of non-regular protocols with small, constant per-flow memory requirements, extracting at rates of up to 30 Gbps on real traffic in software while using only 112 bytes of state per flow. The second DPI improvement is on the long-standing regular expression matching problem, where we complete the HFA solution to the DFA state explosion problem with efficient construction algorithms and an optimized memory layout for hardware or software implementation. These methods construct, in seconds, automata too complex to be constructed by previous methods, while being capable of 29 Gbps throughput with an ASIC implementation. Firewall ruleset compression enables more firewall entries to be stored in a fixed-capacity pattern matching engine, and can also be used to reorganize a firewall specification for higher-performance software matching. A novel recursive structure called TUF is given to unify the best known solutions to this problem and suggest future avenues of attack. These algorithms, with little tuning, achieve a 13.7% improvement in compression on large, real-life classifiers, and can achieve the same results as existing algorithms while running 20 times faster. Finally, non-cryptographic hash functions can be used for anything from hash tables that track network flows to packet sampling for traffic characterization. We give a novel approach to generating hardware hash functions in between the extremes of expensive cryptographic hash functions and low-quality linear hash functions. To evaluate these mid-range hash functions properly, we develop new evaluation methods to better distinguish non-cryptographic hash function quality. The hash functions described here achieve low-latency, wide hashing with good avalanche and universality properties at a much lower cost than existing solutions.
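As an example of the kind of quality evaluation described above, a small avalanche test follows: flip one input bit and count how many output bits change, which should approach 50% on average for a hash with good avalanche behavior. The hash under test here is a stand-in, not one of the thesis's hardware functions:

```python
# Avalanche test sketch: fraction of output bits flipped per single-bit
# input change, averaged over random trials. ~0.5 indicates good avalanche.
import hashlib
import random

def h32(x: int) -> int:
    """Stand-in 32-bit hash of a 32-bit input (truncated SHA-256)."""
    d = hashlib.sha256(x.to_bytes(4, "little")).digest()
    return int.from_bytes(d[:4], "little")

def avalanche(trials=1000):
    rng = random.Random(0)
    flipped = 0
    for _ in range(trials):
        x = rng.getrandbits(32)
        bit = rng.randrange(32)
        flipped += bin(h32(x) ^ h32(x ^ (1 << bit))).count("1")
    return flipped / (trials * 32)   # fraction of output bits flipped

print(avalanche())   # ~0.5 for a well-mixing hash
```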

The exponential cost reduction of next-generation sequencing (NGS) has enabled researchers to sequence a large number of organisms in order to answer various questions in biology, ecology, health, etc. For newly sequenced genomes, gene prediction and homology search against characterized protein sequence databases are two fundamental tasks for annotating functional elements in the genomes. The main goal of gene prediction is to identify gene loci and their structures. As there is accumulating evidence showing important functions of non-coding RNAs (ncRNAs), comprehensive gene prediction should include both protein-coding genes and ncRNAs. Homology search against protein sequences can aid the identification of functional elements in genomes. Although there is intensive research in the fields of gene prediction, ncRNA search, and homology search, unaddressed challenges remain. In this dissertation, I make contributions in all three areas. For gene prediction, I designed an HMM-based ab initio gene prediction tool that considers the G+C gradient in grass genomes. For homology search, I designed a method that can align short reads against protein families using profile HMMs. For ncRNA search, I designed an ncRNA alignment tool that can align highly structured ncRNAs using only sequence similarity. Below I summarize my contributions. Despite decades of research on gene prediction, existing gene prediction tools are not carefully designed to deal with variant G+C content and 5'-3' changing patterns inside coding regions. Thus, these tools can miss genes with positive or negative G+C gradients in grass genomes such as rice, maize, sorghum, etc. I implemented a tool named AUGUSTUS-GC that accounts for the 5'-3' G+C gradient. Our tool can accurately predict protein-coding genes in plant genomes, especially grass genomes. A large number of sequencing projects produce short reads from whole genomes or transcriptomic data. I designed a short-read homology search tool that employs paired-end reads to improve homology search sensitivity. The experimental results show that our tool achieves significantly better sensitivity and accuracy in aligning short reads that are part of remote homologs. Despite the extensive studies of ncRNA search, existing tools that heavily depend on secondary structure in homology search cannot efficiently handle RNA-Seq data, which is accumulating rapidly. It would be ideal to have a faster ncRNA homology search tool with accuracy similar to that of tools adopting secondary structure. I implemented an accurate ncRNA alignment tool called glu-RNA that achieves accuracy similar to structural alignment tools while keeping the same running-time complexity as sequence alignment tools. The experimental results demonstrate that our tool achieves more accurate alignments than the popular sequence alignment tools and a well-known structural alignment program.
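Since the gene prediction tool is HMM-based, a compact Viterbi decoder, the core inference step in ab initio HMM gene finders, may help make the approach concrete; the two-state coding/noncoding model below is a toy of ours, not AUGUSTUS-GC's actual grammar:

```python
# Viterbi decoding sketch: recover the most likely hidden state path
# (e.g., coding vs. noncoding) for a DNA sequence under a toy HMM.
import math

def viterbi(obs, states, start, trans, emit):
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            col[s] = V[-1][best] + math.log(trans[best][s]) + math.log(emit[s][o])
            ptr[s] = best
        V.append(col)
        back.append(ptr)
    # Trace back from the best final state.
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy model: "C" = G+C-rich coding state, "N" = noncoding.
states = ("C", "N")
start = {"C": 0.5, "N": 0.5}
trans = {"C": {"C": 0.9, "N": 0.1}, "N": {"C": 0.1, "N": 0.9}}
emit = {"C": {"G": 0.3, "C": 0.3, "A": 0.2, "T": 0.2},
        "N": {"G": 0.2, "C": 0.2, "A": 0.3, "T": 0.3}}
print("".join(viterbi("GCGCGCATATAT", states, start, trans, emit)))
```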

In the last decade, we have witnessed the wide adoption of a variety of wireless technologies like WiFi, Cellular, Bluetooth, ZigBee, and Near-field Communication (NFC). However, the fast growth of wireless networks generates significant cross-technology interference, which leads to network performance degradation and potential security breaches. In this dissertation, we propose two novel physical-layer techniques to deal with the interference, improving the performance and security of sensor networks and mobile systems, respectively. First, we exploit WiFi interference as a "blessing" in the design of sensor networks and develop novel WiFi interference detection techniques for ZigBee sensors. Second, utilizing these techniques, we design three efficient network services: WiFi discovery, which detects the existence of nearby WiFi networks using ZigBee sensors; WiFi performance monitoring, which measures and tracks the performance of WiFi networks using a ZigBee sensor network; and time synchronization, which provides synchronized clocks for sensor networks based on WiFi signals. Third, we design a novel, noninvasive NFC security system called nShield to reduce the transmission power of NFC radios, which protects NFC against passive eavesdropping. nShield implements a novel adaptive RF attenuation scheme, in which the extra RF energy of NFC transmissions is determined and absorbed by nShield. At the same time, nShield scavenges the extra RF energy to sustain perpetual operation. Together with its extremely low-power design, this enables nShield to provide the host uninterrupted protection against malicious eavesdropping. The above systems are implemented and extensively evaluated on a testbed of sensor networks and smartphones.

Bluetooth is a wireless interface that enables electronic devices to establish short-range, ad-hoc wireless connections. This kind of short-range wireless networking is known as a Wireless Personal Area Network (WPAN). Because of its attractive features of small size, low cost, and low power, Bluetooth has gained worldwide usage: it is embedded in many portable computing devices and considered a good replacement for local wire connections. Since wireless data is inherently exposed to eavesdropping, security and confidentiality are central issues for wireless standards, Bluetooth included. To maintain the security and confidentiality of wireless packets, the Bluetooth system mainly relies on its frequency hopping mechanism to confound an adversary: for each wireless packet, a single channel is selected in a pseudo-random way and accessed for transmitting that packet. This randomness in channel selection makes it difficult for an eavesdropper to predict the next channel to be accessed; hence, capturing Bluetooth wireless packets is a challenge. In this work, we investigate the frequency hopping sequence, and specifically the hop selection kernel. We analyze the operation of the kernel hardware by partitioning it into three parts. Based on this modeling, we propose an attack on the hop selection kernel. The proposed method shows how to expose the clock value hidden in the kernel. This makes it possible to predict the Bluetooth hopping sequence and, hence, to capture Bluetooth wireless packets.
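To make the attack setting concrete, here is a deliberately simplified model (ours): the channel is a deterministic function of the master's address and clock, so recovering the hidden clock reduces to matching observed channels against candidate clock values. This toy kernel is not the real Bluetooth kernel, which mixes address and clock through XOR, permutation, and addition stages over 79 channels:

```python
# Toy hop-selection model and clock-recovery sketch, for illustration only.
def toy_hop(address: int, clock: int) -> int:
    x = (clock >> 1) ^ address
    x = (x * 0x9E3779B1) & 0xFFFFFFFF      # cheap mixing stand-in
    return x % 79                          # 79 channels, as in Bluetooth BR

def recover_clock(address, observed, clock_bits=12):
    """Brute-force the hidden clock offset that reproduces an observed
    channel sequence (one hop per clock tick of 2)."""
    for clk in range(1 << clock_bits):
        if all(toy_hop(address, clk + 2 * i) == ch
               for i, ch in enumerate(observed)):
            return clk
    return None

addr, secret_clock = 0x2A96EF25, 1234
observed = [toy_hop(addr, secret_clock + 2 * i) for i in range(8)]
print(recover_clock(addr, observed))   # 1234 (unique match w.h.p.)
```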

We explore methods for visually plausible fluid simulation on deforming surfaces with inhomogeneous diffusion properties. While there are methods for fluid simulation on surfaces, little research effort has focused on the influence of the motion of the underlying surface, in particular when it is not rigid, such as knitted or woven textiles in motion. The complexity involved makes it challenging for the simulation to account for the non-inertial local frames typically used to describe the motion, and for the anisotropic effects in diffusion, absorption, and adsorption. Thus, our primary goal is to enable a fast and stable method for such scenarios. First, in preparation of the material properties for the surface domain, we describe textiles with salient feature directions by bulk material property tensors in order to reduce the complexity, employing a 2D homogenization technique that effectively turns microscale inhomogeneous properties into homogeneous properties in macroscale descriptions. We then use standard texture mapping techniques to map these tensors to triangles in the curved surface mesh, taking into account the alignment of each local tangent space with the correct feature directions of the macroscale tensor. We show that this homogenization tool is intuitive, flexible, and easily adjusted. Second, for an efficient description of the deforming surface, we offer a new geometry representation for the surface based solely on angles instead of vertex coordinates, to reduce storage for the motion of the underlying surface. Since our simulation tool relies heavily on long sequences of 3D curved triangular meshes, it is worthwhile exploring such efficient representations to make our tool practical by reducing memory access during real-time simulations as well as reducing file sizes. Inspired by angle-based representations for tetrahedral meshes, we use a spectral method to restore the curved surface using both the interior angles of the triangles and the dihedral angles between adjacent triangles in the mesh. Moreover, in many surface deformation sequences, it is often sufficient to update the dihedral angles while keeping the triangle interior angles fixed. Third, we propose a framework for simulating various effects of fluid flowing on deforming surfaces. We apply our simulator directly on curved surface meshes instead of in parameter domains, whereas many existing simulation methods require a parameterization of the surface. We further demonstrate that fictitious forces induced by the surface motion can be added to the surface-based simulation at a small additional cost. These fictitious forces can be decomposed into different components, of which only the rectilinear and Coriolis components are relevant to our choice of local frames. Other effects, such as diffusion, adsorption, absorption, and evaporation, are also incorporated for realistic stain simulation. Finally, we explore the extraction of Lagrangian Coherent Structures (LCS), often referred to as the skeleton of fluid motion. The LCS are often described by ridges of the finite-time Lyapunov exponent (FTLE) fields, which describe the extremal stretching of fluid parcels following the flow. We propose a novel improvement to the ridge marching algorithm, which extracts such ridges robustly from the typically noisy FTLE estimates, even in well-defined fluid flows. Our results are potentially applicable to visualizing and controlling fluid trajectory patterns. In contrast to current methods for LCS calculation, which are only applicable to flat 2D or 3D domains and are sensitive to noise, our ridge extraction is readily applicable to curved surfaces even when they are deforming. The collection of these computational tools will facilitate the generation of realistic and easy-to-adjust surface fluid animations with various physically plausible effects on surfaces.
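For reference, these are ridges of the standard FTLE field from the LCS literature; with flow map \Phi_{t_0}^{t_0+T} over integration time T, the field is:

```latex
% Right Cauchy-Green deformation tensor of the flow map:
C(x) \;=\; \bigl(\nabla\Phi_{t_0}^{t_0+T}(x)\bigr)^{\top}\,
           \nabla\Phi_{t_0}^{t_0+T}(x)
% FTLE: logarithm of the largest stretching rate over time T:
\sigma_{t_0}^{T}(x) \;=\; \frac{1}{|T|}\,
           \ln\sqrt{\lambda_{\max}\bigl(C(x)\bigr)}
```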

The rapid growth of the Internet has helped popularize video streaming services, and video streaming has now become the most dominant content on the Internet. The management of video streaming traffic is complicated by its enormous volume, diverse communication protocols and data formats, and the widespread adoption of encryption. In this thesis, the aim is to develop a novel firewall framework, named Soft-margined Firewall, for managing encrypted video streaming traffic while avoiding violations of user privacy. The system distinguishes itself from conventional firewall systems by incorporating machine learning and Traffic Analysis (TA) as a traffic detection and blocking mechanism. The goal is to detect unknown network traffic, including traffic that is encrypted, tunneled through a Virtual Private Network, or obfuscated, in realistic application scenarios. Existing TA methods are limited in that they can deal only with simple traffic patterns: usually, only a single source of traffic is allowed in a tunnel, and a trained classifier is not portable between network locations, requiring redundant training. This work aims to address these limitations with new techniques in machine learning. The three main contributions of this work are: 1) new statistical features around traffic surge periods that can better identify websites with dynamic contents; 2) a two-stage classifier architecture that solves the mixed-traffic problem with state-of-the-art TA features; and 3) a novel natural-language-inspired feature that solves the mixed-traffic problem using deep-learning methods. A fully working Soft-margined Firewall with the above distinctive features has been designed, implemented, and verified for both conventional classifiers and the proposed deep-learning-based classifiers. The efficacy of the proposed system is confirmed via experiments conducted on actual network setups with a custom-built prototype firewall and OpenVPN servers. The proposed feature-classifier combinations show superior performance compared to previous state-of-the-art results. The solution that combines the natural-language-inspired traffic feature with deep learning is demonstrated to solve the mixed-traffic problem and is capable of predicting multiple labels associated with one sample. Additionally, the classifier can classify traffic recorded at locations different from where the training traffic was collected. These results are the first of their kind and are expected to lead the way toward next-generation TA-based firewall systems.
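As a purely illustrative sketch of the kind of flow statistics TA classifiers consume (the feature names and the surge heuristic below are our assumptions, not the thesis's feature set):

```python
# Toy traffic-analysis feature extractor: per-flow statistics of packet
# sizes and inter-arrival gaps, plus a crude count of burst ("surge") starts.
import statistics

def extract_features(packets, surge_gap=0.01):
    """packets: list of (timestamp_seconds, size_bytes, direction +1/-1)."""
    if not packets:
        return {}
    times = [t for t, _, _ in packets]
    sizes = [s for _, s, _ in packets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    # A long gap followed by a short gap marks the start of a burst.
    surges = sum(1 for a, b in zip(gaps, gaps[1:])
                 if a >= surge_gap and b < surge_gap)
    return {
        "n_packets": len(packets),
        "bytes_up": sum(s for _, s, d in packets if d > 0),
        "bytes_down": sum(s for _, s, d in packets if d < 0),
        "mean_size": statistics.fmean(sizes),
        "mean_gap": statistics.fmean(gaps) if gaps else 0.0,
        "surges": surges,
    }
```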