Wills, Eric David, 1977-
2008-10-16T17:38:23Z
2008-10-16T17:38:23Z
2008-03
http://hdl.handle.net/1794/7508
xx, 287 p. : ill. (some col.) A print copy of this title is available through the UO Libraries under the call number: SCIENCE QP310.W3 W55 2008
Digital three-dimensional (3D) models are useful for biomechanical analysis because they can be interactively visualized and manipulated. Synthesizing and analyzing animal locomotion with these models, however, is difficult due to the large number of joints in a fully articulated skeleton, the complexity of the individual joints, and the huge space of possible configurations, or poses, of the skeleton taken as a whole. A joint may be capable of several biological movements, each represented by a degree of freedom (DOF). A quadrupedal model may require up to 100 DOFs to represent the limbs and trunk segments only, resulting in extremely large spaces of possible body configurations. New methods are presented here that allow limbs with any number of biomechanical DOFs to be kinematically exercised and mapped into a visualization space. The spaces corresponding to the ranges of motion of the left and right limbs are automatically intersected and pruned using biological and locomotion constraints. Hind and fore spaces are similarly constrained so that Genetic Algorithms (GAs) can be used to quickly find smooth, and therefore plausible, kinematic quadrupedal locomotion paths through the spaces. Gaits generated for generic dog and reptile models are compared to published gait data to determine the viability of kinematics-only gait generation and analysis; gaits generated for Apatosaurus, Triceratops, and Tyrannosaurus dinosaur models are then compared to those generated for the extant animals. These methods are used for several case studies across the models, including: isolating scapulothorax and shoulder joint functionality during locomotion, determining optimal ankle heights for locomotion, and evaluating the effect of limb phase parameters on quadrupedal locomotion.
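The abstract above describes a GA that favors smooth paths through pose space. As a toy illustration of that idea only (my own sketch with made-up parameters, not the dissertation's system), a candidate gait below is a cyclic sequence of joint angles, and fitness rewards frame-to-frame smoothness:

```python
import random

random.seed(0)
STEPS, POP, GENS = 12, 40, 200  # illustrative sizes, not from the thesis

def fitness(path):
    # Negative total squared change between consecutive poses (cyclic);
    # smoother paths score closer to zero.
    return -sum((path[i] - path[i - 1]) ** 2 for i in range(STEPS))

def mutate(path):
    child = list(path)
    child[random.randrange(STEPS)] += random.uniform(-5.0, 5.0)
    return child

# Elitist GA: keep the smoothest half, refill with mutated copies.
pop = [[random.uniform(0.0, 90.0) for _ in range(STEPS)] for _ in range(POP)]
init_best = max(map(fitness, pop))
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]
    pop = elite + [mutate(random.choice(elite)) for _ in range(POP // 2)]
best = max(pop, key=fitness)
print(fitness(best) >= init_best)  # elitism never regresses: True
```

A real system would add the biological range-of-motion and left/right intersection constraints as hard bounds on each angle; here smoothness alone stands in for plausibility.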
Adviser: Kent A. Stevens
58178 bytes
19632923 bytes
application/pdf
application/pdf
en_US
University of Oregon
University of Oregon theses, Dept. of Computer and Information Science, Ph. D., 2008
Biomechanics
Kinematics
Genetic algorithm
Visualization
Dinosaurs
Walking
Gaits
Gait animation
Paleontology
Anatomy and physiology
Animals
Computer science
Gait animation and analysis for biomechanically-articulated skeletons
Thesis

Rejaie, Reza
Alur, Abhijit
2015-01-14T15:55:45Z
2015-01-14T15:55:45Z
2015-01-14
http://hdl.handle.net/1794/18700
DNS (Domain Name System) names contain a wide variety of information, such as geographic location, speed of the interface, and type of interface. Extracting this information is challenging, however, because it does not have a consistent format across different ISPs (Internet service providers), or even within a single ISP.
We present a new tool, GINIE, which extracts useful information and some common dictionary words from a DNS name. We use three ISPs and a CAIDA (Center for Applied Internet Data Analysis) dataset to demonstrate these capabilities.
Information extracted with GINIE provides valuable insight about the infrastructure of the three ISPs and shows the availability and type of information in a collection of DNS names from many ISPs that exist in a typical dataset. The embedded information from DNS names can be used (with some additional active measurements) to infer the geo-aware topology of an ISP.
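A hypothetical example of the kind of extraction GINIE performs (the naming convention below is invented for illustration; real ISP conventions vary, which is precisely the difficulty stated above):

```python
import re

# Hypothetical hostname convention (illustrative only):
#   <interface>.<router>.<city-code><n>.<domain>
NAME = re.compile(r"^(?P<iface>[a-z]+[\d-]*)\.(?P<router>\w+)\."
                  r"(?P<city>[a-z]{3})\d*\.")

def parse(dns_name):
    """Return embedded fields from a DNS name, or None if it doesn't match."""
    m = NAME.match(dns_name)
    return m.groupdict() if m else None

print(parse("xe-0-1-0.gw1.sea1.example.net"))
```

On the example name this yields an interface ("xe-0-1-0"), a router label ("gw1"), and a city code ("sea"); a tool like GINIE must maintain many such patterns plus a dictionary of common terms, since no single regex covers all providers.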
en_US
University of Oregon
All Rights Reserved.
dig
DNS
GINIE
Gathering Information about Network Infrastructure from DNS Names and Its Applications
Electronic Thesis or Dissertation
M.S.
masters
Department of Computer and Information Science
University of Oregon

Dou, Dejing
Liu, Haishan
Liu, Haishan
2012-12-07T23:15:58Z
2012-12-07T23:15:58Z
2012
http://hdl.handle.net/1794/12567
Data mining is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data. It is widely acknowledged that the role of domain knowledge in the discovery process is essential. However, the synergy between domain knowledge and data mining is still at a rudimentary level. This motivates me to develop a framework for explicit incorporation of domain knowledge in a data mining system so that insights can be drawn from both data and domain knowledge. I call such technology "semantic data mining."
Recent research in knowledge representation has led to mature standards such as the Web Ontology Language (OWL) by the W3C's Semantic Web initiative. Semantic Web ontologies have become a key technology for knowledge representation and processing. The OWL ontology language is built on the W3C's Resource Description Framework (RDF) that provides a simple model to describe information resources as a graph. On the other hand, there has been a surge of interest in tackling data mining problems where objects of interest can be best described as a graph of interrelated nodes. I notice that the interface between domain knowledge and data mining can be achieved by using graph representations. Therefore I explore a graph-based approach for modeling both knowledge and data and for analyzing the combined information source from which insight can be drawn systematically.
In summary, I make three main contributions in this dissertation to achieve semantic data mining. First, I develop an information integration solution based on metaheuristic optimization for when data mining tasks require accessing heterogeneous data sources. Second, I describe how a graph interface for both domain knowledge and data can be structured by employing the RDF model and its graph representations. Finally, I describe several graph-theoretic analysis approaches for mining the combined information source. I showcase the utility of the proposed methods on finding semantically associated itemsets, a particular case of frequent pattern mining. I believe these contributions in semantic data mining can provide a novel and useful way to incorporate domain knowledge.
This dissertation includes published and unpublished coauthored material.
en_US
University of Oregon
All Rights Reserved.
domain knowledge
graph mining
ontology
semantic data mining
A Graph-based Approach for Semantic Data Mining
Electronic Thesis or Dissertation

Thomas, Kristine A.
2010-09-15T23:34:12Z
2010-09-15T23:34:12Z
2010-06
http://hdl.handle.net/1794/10724
xi, 56 p. : ill. (some col.) A print copy of this thesis is available through the UO Libraries. Search the library catalog for the location and call number.
Image processing is a powerful tool for increasing the reliability and reproducibility of disease diagnostics. In the hands of pathologists, image processing provides quantitative data from histological images which supplement the qualitative data currently used by specialists. This thesis presents a novel method for analyzing digitized images of hematoxylin and eosin (H&E) stained histology slides to detect and quantify inflammatory polymorphonuclear leukocytes to aid in the grading of acute inflammation of the placenta, as an example of the use of image processing in aid of diagnostics.
Methods presented in this thesis include segmentation, a novel threshold selection technique, and shape analysis. The most significant contribution is the automated color threshold selection algorithm for H&E stained histology slides, which is the only unsupervised method published to date.
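For context, a classic unsupervised threshold selector is Otsu's method, which maximizes between-class variance over an intensity histogram. The thesis's color threshold algorithm is a different, H&E-specific technique; the sketch below is background illustration only:

```python
def otsu_threshold(hist):
    """Otsu's method: pick the threshold maximizing between-class variance.
    `hist` is a list of pixel counts per intensity level."""
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var, w0, sum0 = 0, 0.0, 0, 0.0
    for t, h in enumerate(hist):
        w0 += h                      # pixels at or below threshold t
        sum0 += t * h
        w1 = total - w0              # pixels above threshold t
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum0 / w0               # mean of the dark class
        m1 = (total_mean * total - sum0) / w1  # mean of the bright class
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy histogram: dark peak at level 1, bright peak at level 8.
print(otsu_threshold([0, 10, 5, 0, 0, 0, 0, 6, 12, 0]))  # → 2
```

Otsu operates on a single grayscale channel; part of what makes H&E images harder is that the stains separate in color space, not along one intensity axis.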
Committee in charge:
Dr. John Conery, Chair;
Dr. Matthew J. Sottile
en_US
University of Oregon
University of Oregon theses, Dept. of Computer and Information Science, M.S., 2010;
Medical diagnostic imaging
Diagnostic imaging
Image processing
Image Processing as Applied to Medical Diagnostics
Thesis

Brittell, Megen E.
2012-04-19T00:48:34Z
2012-04-19T00:48:34Z
2011-12
http://hdl.handle.net/1794/12168
xi, 79 p. : ill. (some col.)
Graphics provide a rich display medium that facilitates identification of spatial patterns but are inaccessible to people who are blind or have low vision. Audio provides an alternative medium through which to display information. Prior research has explored audio display of lines representing functions and the location of screen objects within a graphical user interface; however, presentation of spatial attributes of lines (angle, number of segments, etc.) of geographic data has received limited attention.
This thesis explores a theoretical foundation for designing audio displays and presents an experimental evaluation of line symbology. Sighted users who were blindfolded and blind users performed a line following task and a matching task to evaluate the line symbology. Observed differences between the conditions did not reach statistical significance. User preferences and observed strategies are discussed.
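One concrete way to parameterize audio for line symbology (a hypothetical mapping of my own, not the design evaluated in the thesis) is to map a segment's angle to pitch and its length to duration:

```python
import math

def tone_for_segment(x0, y0, x1, y1, base_hz=220.0):
    """Map one line segment to a (frequency, duration) audio parameter pair.

    Hypothetical parameterization: one semitone of pitch per 15 degrees of
    slope, and 0.1 s of audio per unit of segment length.
    """
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180
    length = math.hypot(x1 - x0, y1 - y0)
    freq = base_hz * 2 ** ((angle / 15) / 12)  # equal-tempered semitones
    duration = 0.1 * length
    return freq, duration

# A 45-degree diagonal segment of length sqrt(2).
print(tone_for_segment(0, 0, 1, 1))
```

Playing such tones in sequence would render a polyline; number of segments then becomes audible as the number of pitch changes, one of the spatial attributes the thesis evaluates.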
Committee in charge: Dr. Michal Young, Chair
en_US
University of Oregon
University of Oregon theses, Dept of Computer and Information Science, M.S., 2011;
All Rights Reserved.
Computer science
Applied science
Assistive technology
Audio interfaces
Human computer interaction
Improving Accessibility of Spatial Information: A Technique Using Parametrized Audio to Symbolize Lines
Thesis

Malony, Allen
Poliakoff, David
2015-08-18T23:14:38Z
2015-08-18T23:14:38Z
2015-08-18
http://hdl.handle.net/1794/19352
Modern computational software is increasingly large in terms of lines of code, number of developers, intended longevity, and complexity of intended architectures. While tools exist to mitigate the problems this type of software causes for the development of functional software, no solutions exist to deal with the problems it causes for performance. This thesis introduces a design called the Software Development Performance Analysis System, or SDPAS. SDPAS observes the performance of software tests as software is developed, tracking builds, tests, and developers in order to provide data with which to analyze a software development process. SDPAS integrates with the CMake build and test suite to obtain data about builds and provide consistent tests, and with git to obtain data about how software is changing. SDPAS also integrates with TAU to obtain performance data and store it along with the data obtained from other tools. The utility of SDPAS is observed on two pieces of production software.
en_US
University of Oregon
All Rights Reserved.
Computer Science
HPC
Performance
Regression
TAU
Version control
Integrating Performance Analysis in Parallel Software Engineering
Electronic Thesis or Dissertation
M.S.
masters
Department of Computer and Information Science
University of Oregon

Rejaie, Reza
Rasti Ekbatani, Hassan
2013-07-11T19:59:39Z
2013-07-11T19:59:39Z
2013-07-11
http://hdl.handle.net/1794/12979
During the past decade, the Internet has witnessed a dramatic increase in the popularity of Peer-to-Peer (P2P) applications. This has caused a significant growth in the volume of P2P traffic. This trend has been particularly alarming for the Internet Service Providers (ISPs) that need to cope with the associated cost but have limited control in routing or managing P2P traffic. To alleviate this problem, researchers have proposed mechanisms to reduce the volume of external P2P traffic for individual ISPs. However, prior studies have not examined the global effect of P2P applications on the entire network, namely the traffic that a P2P application imposes on individual underlying Autonomous Systems (ASs). Such a global view is particularly important because of the large number of geographically scattered peers in P2P applications.
This dissertation examines the global effect of P2P applications on the underlying AS-level Internet. Toward this end, first we leverage a large number of complete overlay snapshots from a large-scale P2P application, namely Gnutella, to characterize the connectivity and evolution of its overlay structure. We also conduct a case study on the performance of BitTorrent and its correlation with peer- and group-level properties. Second, we present and evaluate respondent-driven sampling as a promising technique to collect unbiased samples for characterizing peer properties in large-scale P2P overlays without requiring the overlay's complete snapshot. Third, we propose a new technique leveraging the geographical location of peers in an AS to determine its geographical footprint and identify the cities where its Points-of-Presence (PoPs) are likely to be located. Fourth, we present a new methodology to characterize the effect of a given P2P overlay on the underlying ASs. Our approach relies on the large-scale simulation of BGP routing over AS-level snapshots of the Internet to identify the imposed load on each transit AS. Using our methodology, we characterize the impact of the Gnutella overlay on the AS-level underlay over a 4-year period. Our investigation provides valuable insights on the global impact of a large-scale P2P overlay on individual ASs.
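The transit-load idea can be sketched on a toy AS graph (topology and peers invented for illustration; the dissertation simulates BGP over real AS-level snapshots): for every pair of overlay peers, count the interior ASes on the routing path.

```python
from collections import Counter, deque

# Made-up AS-level topology (adjacency lists) and peer locations.
GRAPH = {1: [2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3, 5], 5: [4]}
PEER_ASES = [1, 3, 5]

def shortest_path(src, dst):
    """BFS shortest AS path, a crude stand-in for BGP route selection."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in GRAPH[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)

transit = Counter()
for i, s in enumerate(PEER_ASES):
    for d in PEER_ASES[i + 1:]:
        transit.update(shortest_path(s, d)[1:-1])  # interior hops only
print(sorted(transit.items()))  # [(2, 2), (4, 2)]
```

Here ASes 2 and 4 each carry two peer-to-peer paths despite hosting no peers, which is exactly the kind of imposed load on transit ASes the methodology measures at Internet scale.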
This dissertation includes my previously published and co-authored material.
en_US
University of Oregon
All Rights Reserved.
AS-level Topology
Geo-IP location
P2P Characterization
P2P Overlay
Investigating the Mutual Impact of the P2P Overlay and the AS-level Underlay
Electronic Thesis or Dissertation
Ph.D.
doctoral
Department of Computer and Information Science
University of Oregon

Dou, Dejing
Jiang, Shangpu
2016-02-24T00:32:13Z
2016-02-24T00:32:13Z
2016-02-23
http://hdl.handle.net/1794/19724
Machine learning and data mining have provided plenty of tools for extracting knowledge from data. Yet, such knowledge may not be directly applicable to target applications and might need further manipulation: The knowledge might contain too much noise, or the target application may use a different representation or terminology.
In this dissertation, we study three problems related to knowledge management and manipulation. First, given a knowledge base (KB) automatically extracted from the text, we explore how to refine it based on the dependencies among the possible KB instances and their confidence values. Second, when the target application to which we want to apply our knowledge uses a different schema, we explore how to translate the knowledge based on the mapping between the schemas. Sometimes, the mapping between two schemas can be discovered automatically, so the third problem we consider is whether we can find the mapping more accurately using the corresponding knowledge contained in the two schemas.
We notice that a large fraction of data and knowledge can be represented in relational models, which can be formalized with first-order logic. Moreover, uncertainty is a common feature existing in these problems, e.g., the confidence values associated with the KB instances, the probabilistic knowledge rules to be translated, or the schemas not perfectly aligned with each other. Therefore, we adopt statistical relational learning, which combines first-order logic with probabilistic models, to resolve these problems. In particular, we use Markov logic networks (MLNs), which consist of sets of weighted first-order formulas. MLNs are a powerful and flexible language for representing hard and soft constraints of relational domains.
We develop the MLN formulations for each of these problems, and we use the representation, inference, and learning approaches in the literature with certain adaptations to solve them. The experimental results show that MLNs successfully provide solutions to these problems or achieve better performance than the existing methods.
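A toy illustration of the weighted-formula semantics described above (my own example, not from the dissertation): an MLN assigns each possible world a probability proportional to the exponential of the weighted count of satisfied ground formulas.

```python
import itertools
import math

# Ground atoms: Smokes(A), Cancer(A). One weighted rule:
#   w = 1.5 : Smokes(A) => Cancer(A)
w = 1.5

def satisfied(smokes, cancer):
    # The implication holds unless smokes is true and cancer is false.
    return not (smokes and not cancer)

# Unnormalized weight of each world: exp(w * number of satisfied groundings).
worlds = list(itertools.product([False, True], repeat=2))
scores = {wld: math.exp(w * satisfied(*wld)) for wld in worlds}
Z = sum(scores.values())  # partition function

# P(Cancer(A) | Smokes(A)): condition by restricting to smoking worlds.
num = scores[(True, True)]
den = scores[(True, False)] + scores[(True, True)]
print(round(num / den, 3))  # → 0.818
```

The soft rule shifts probability mass toward worlds it satisfies without forbidding the exception world, which is what distinguishes MLN soft constraints from hard first-order clauses; real MLN systems avoid this exhaustive world enumeration with approximate inference.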
This dissertation includes previously published and unpublished coauthored material.
en_US
University of Oregon
All Rights Reserved.
Knowledge Base Refinement and Knowledge Translation with Markov Logic Networks
Electronic Thesis or Dissertation
Ph.D.
doctoral
Department of Computer and Information Science
University of Oregon

Huck, Kevin A., 1972-
2010-01-13T01:58:13Z
2010-01-13T01:58:13Z
2009-03
http://hdl.handle.net/1794/10087
xvi, 231 p. : ill. A print copy of this thesis is available through the UO Libraries. Search the library catalog for the location and call number.
Parallel applications running on high-end computer systems manifest a complex combination of performance phenomena, such as communication patterns, work distributions, and computational inefficiencies. Current performance tools compute results that help to describe performance behavior, as well as to understand performance problems and how they came about. Unfortunately, parallel performance tool research has been limited in its contributions to large-scale performance data management and analysis, automated performance investigation, and knowledge-based performance problem reasoning.
This dissertation discusses the design of a performance analysis methodology and framework which integrates scalable data management, dimension reduction, clustering, classification and correlation analysis of individual trials of large dimensions, and comparative analysis between multiple application executions. Analysis process workflows can be captured, automating what would otherwise be time-consuming and possibly error prone tasks. More importantly, process automation provides an extensible interface to the analysis process. The methods also integrate context metadata and a rule-based system in order to capture expert performance analysis knowledge about known anomalous behavior patterns. Applying this knowledge to performance analysis results and associated metadata provides a mechanism for diagnosing the causes of performance problems, rather than just summarizing results. Our prototype implementation of our data mining framework, PerfExplorer, and our data management framework, PerfDMF, are applied in large-scale performance studies to demonstrate each thesis contribution. The dissertation concludes with a discussion of future research directions.
Adviser: Allen D. Malony
en_US
University of Oregon
University of Oregon theses, Dept. of Computer and Information Science, Ph. D., 2009;
Parallel performance
Data mining
Dimension reduction
Clustering
Computer science
Knowledge support for parallel performance data mining
Thesis

Boothe, Peter Mattison, 1978-
2010-04-24T00:59:02Z
2010-04-24T00:59:02Z
2009-09
http://hdl.handle.net/1794/10328
xiv, 183 p. : ill. A print copy of this thesis is available through the UO Libraries. Search the library catalog for the location and call number.
As the Internet has evolved over time, the interconnection patterns of the members of this "network of networks" have changed. Can we characterize those changes? Have those changes been good or bad? What does "good" mean in this context? Has market power been centralizing or decentralizing? How certain can we be of our answer? What are the limitations of our data? These are the questions which motivate this dissertation. In this dissertation, we answer these questions and more by carefully undertaking a long-term quantitative study of the evolution of the topology of the Internet's AS graph. In order to do this study, we spend most of the dissertation developing methods of data processing and data analysis, all informed by ideas from networking, data mining, graph theory, and statistics. The contributions are both theoretical and practical. The theoretical contributions include an in-depth analysis of the complexity of AS graph measurement as well as of the difficulty of reconstructing the AS graph from available data. The practical contributions include the design of graph metrics to capture properties of interest, usable approximation algorithms for several AS graph analysis methods, and an analysis of the evolution of the AS graph over time.
It is our hope that these methods may prove useful in other domains, and that the conclusions about the evolution of the Internet topology prove useful for Internet operators, network researchers, policy makers, and others.
Committee in charge: Andrzej Proskurowski, Chairperson, Computer & Information Science;
Arthur Farley, Member, Computer & Information Science;
Jun Li, Member, Computer & Information Science;
Anne van den Nouweland, Outside Member, Economics
en_US
University of Oregon
University of Oregon theses, Dept. of Computer and Information Science, Ph. D., 2009;
Data mining
Graph theory
AS graph
Computer science
Measuring the Internet AS graph and its evolution
Thesis

Conery, John
Burkhart, Joshua
2013-10-03T23:38:06Z
2013-10-03T23:38:06Z
2013-10-03
http://hdl.handle.net/1794/13338
How to assess the quality of a genome assembly without the help of a reference sequence is an open question. Only a few techniques are currently used in the literature and each has obvious bias. An additional method, restriction enzyme associated DNA (RAD) marker alignment, is proposed here. With high enough density, this method should be able to assess the quality of de novo assemblies without the biases of current methods.
With the growing ambition to sequence new genomes and the accelerating ability to do so cost effectively, methods to assess the quality of reference-free genome assemblies will become increasingly important. In addition to the existing methods of EST and conserved sequence alignment, RAD marker alignment may contribute to this effort.
en_US
University of Oregon
All Rights Reserved.
assembly
assessment
free
genome
quality
reference
A Method for Reference-Free Genome Assembly Quality Assessment
Electronic Thesis or Dissertation
M.S.
masters
Department of Computer and Information Science
University of Oregon

Dunn, Nathan A.
2008-02-10T03:22:54Z
2008-02-10T03:22:54Z
2006-08
http://hdl.handle.net/1794/3263
145 p. Advisers: John Conery (Computer and Information Science) and Shawn Lockery (Biology)
A print copy of this title is available through the UO Libraries under the call number: SCIENCE QA76.87 .D96 2006
This thesis makes two major contributions: it introduces a novel method for analysis of artificial neural networks and provides new models of the nematode Caenorhabditis elegans nervous system. The analysis method extracts neural network motifs, or subnetworks of recurring neuronal function, from optimized neural networks. The method first creates models for each neuron relating network stimulus to neuronal response, then clusters the model parameters, and finally combines the neurons into multi-neuron motifs based on their cluster category. To infer biological function, this analysis method was applied to neural networks optimized to reproduce C. elegans behavior, which converged upon a small number of motifs. This allowed both a quantitative exploration of network function and discovery of larger motifs.
Neural network models of C. elegans anatomical connectivity were optimized to reproduce two C. elegans behaviors: chemotaxis (orientation towards a maximum chemical attractant concentration) and thermotaxis (orientation towards a set temperature). Three chemotaxis motifs were identified. Experimental evidence suggests that chemotaxis is driven by a differentiator motif with two important features. The first feature was a fast, excitatory pathway in parallel with one or more slow, inhibitory pathways. The second feature was inhibitory feedback on all self-connections and recurrent loops, which regulates neuronal response. Six thermotaxis motifs were identified. Every motif consisted of two circuits, each a previously discovered chemotaxis motif, with most having a dedicated sensory neuron. One circuit was thermophilic (heat-seeking) and the other was cryophilic (cold-seeking). Experimental evidence suggests that the cryophilic circuit is a differentiator motif and the thermophilic circuit functions by klinokinesis.
NSF: IBN-0080068
10809715 bytes
245871 bytes
2263 bytes
application/pdf
text/plain
text/plain
en_US
University of Oregon theses, Dept. of Computer and Information Science, 2006, PhD
Neural networks (Computer science)
Caenorhabditis elegans
Computational biology
Biological models
C. elegans
Neural network analysis
Biological modeling
A Novel Neural Network Analysis Method Applied to Biological Neural Networks
Thesis

Li, Jun
Memon, Ghulam
2014-06-17T19:41:21Z
2014-06-17T19:41:21Z
2014-06-17
http://hdl.handle.net/1794/17907
The Internet has evolved into a medium centered around content: people watch videos on YouTube, share their pictures via Flickr, and use Facebook to keep in touch with their friends. Yet, the only globally deployed service to discover content - i.e., the Domain Name System (DNS) - does not discover content at all; it merely translates domain names into locations. The lack of persistent naming, in particular, makes content discovery, instead of domain discovery, challenging. Content Distribution Networks (CDNs), which augment DNS with location-awareness, suffer from the same lack of persistent content names. Recently, several infrastructure-level solutions to this problem have emerged, but their fundamental limitation is that they fail to preserve the autonomy of network participants. Specifically, the storage requirements for resolution within each participant may not be proportional to their capacity. Furthermore, these solutions cannot be incrementally deployed. To the best of our knowledge, content discovery services based on peer-to-peer (P2P) networks are the only ones that support persistent content names. These services also come with the built-in advantages of scalability and deployability. However, P2P networks have been deployed in the real world only recently, and their real-world characteristics are not well understood. It is important to understand these characteristics in order to improve performance and to propose new designs by identifying the weaknesses of existing designs. In this dissertation, we first propose a novel, lightweight technique for capturing P2P traffic. Using our captured data, we characterize several aspects of P2P networks and draw conclusions about their weaknesses. Next, we create a botnet to demonstrate the lethality of those weaknesses. Finally, we address the weaknesses of P2P systems to design a P2P-based content discovery service, which resolves the drawbacks of existing content discovery systems and can operate at Internet scale.
This dissertation includes both previously published/unpublished and co-authored material.
en_US
University of Oregon
All Rights Reserved.
botnet
content discovery
monitoring p2p networks
persistent names
structured overlay
unstructured overlay
On P2P Networks and P2P-Based Content Discovery on the Internet
Electronic Thesis or Dissertation
Ph.D.
doctoral
Department of Computer and Information Science
University of Oregon

Le Pendu, Paea Jean-Francois, 1974-
2010-08-03T23:39:45Z
2010-08-03T23:39:45Z
2010-03
http://hdl.handle.net/1794/10575
xi, 89 p. : ill. A print copy of this thesis is available through the UO Libraries. Search the library catalog for the location and call number.
On the one hand, ontologies provide a means of formally specifying complex descriptions and relationships about information in a way that is expressive yet amenable to automated processing and reasoning. When data are annotated using terms from an ontology, the formal semantics inhere in the instances. Compared to an ontology, which may have as few as a dozen or as many as tens of thousands of terms, the annotated instances for the ontology are often several orders of magnitude larger, from millions to possibly trillions of instances. Unfortunately, existing reasoning techniques cannot scale to these sizes.
On the other hand, relational database management systems provide mechanisms for storing, retrieving, and maintaining the integrity of large amounts of data. Relational database management systems are well known for scaling to extremely large sizes of data, some claiming to manage over a quadrillion data items.
This dissertation defines ontology databases as a mapping from ontologies to relational databases in order to combine the expressiveness of ontologies with the scalability of relational databases. This mapping is sound and, under certain conditions, complete. That is, the database behaves like a knowledge base which is faithful to the semantics of a given ontology. What distinguishes this work is the treatment of the relational database management system as an active reasoning component rather than as a passive storage and retrieval system.
The main contributions this dissertation will highlight include: (i) the theory and implementation particulars for mapping ontologies to databases, (ii) subsumption-based reasoning, (iii) inconsistency detection, (iv) scalability studies, and (v) information integration (specifically, information exchange). This work is novel because it is the first attempt to embed a logical reasoning system, specified by a Semantic Web ontology, into a plain relational database management system using active database technologies. This work also introduces the not-gadget, which relaxes the closed-world assumption and increases the expressive power of the logical system without significant cost. This work also demonstrates how to deploy the same framework as an information integration system for data exchange scenarios, which is an important step toward semantic information integration over distributed data repositories.
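The active-database treatment can be illustrated with a toy sketch (my own illustration in SQLite, not the dissertation's implementation): a subclass axiom compiles into an insert trigger, so the database itself performs the subsumption step rather than acting as passive storage.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# One table per ontology class; the axiom "Dog subClassOf Animal" becomes
# a trigger that propagates every Dog instance into Animal automatically.
cur.executescript("""
CREATE TABLE Dog(id TEXT PRIMARY KEY);
CREATE TABLE Animal(id TEXT PRIMARY KEY);
CREATE TRIGGER dog_is_animal AFTER INSERT ON Dog
BEGIN
  INSERT OR IGNORE INTO Animal(id) VALUES (NEW.id);
END;
""")
cur.execute("INSERT INTO Dog VALUES ('rex')")
# Querying the superclass finds the instance without any external reasoner.
print(cur.execute("SELECT id FROM Animal").fetchall())  # [('rex',)]
```

Chains of such triggers give transitive subsumption; the scalability question the dissertation studies is how far this pushdown of reasoning into the RDBMS can be taken for realistic ontologies and instance volumes.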
Committee in charge: Dejing Dou, Chairperson, Computer & Information Science;
Zena Ariola, Member, Computer & Information Science;
Christopher Wilson, Member, Computer & Information Science;
Monte Westerfield, Outside Member, Biology
en_US
University of Oregon
University of Oregon theses, Dept. of Computer and Information Science, Ph. D., 2010;
Ontology databases
Semantic Web
Knowledge representation
Information science
Artificial intelligence
Computer science
Ontology databases
Thesis

Butler, Kevin
Mood, Benjamin
Mood, Benjamin
2012-12-07T23:10:27Z
2012-12-07T23:10:27Z
2012
http://hdl.handle.net/1794/12505
Secure function evaluation (SFE) on mobile devices, such as smartphones, allows for the creation of new privacy-preserving applications. Generating the circuits on smartphones which allow for executing customized functions, however, is infeasible for most problems due to memory constraints. In this thesis, we develop a new methodology for generating circuits that is memory-efficient. Using the standard SFDL language for describing secure functions as input, we design a new pseudo-assembly language (PAL) and a template-driven compiler, generating circuits which can be evaluated with the canonical Fairplay evaluation framework. We deploy this compiler and demonstrate that larger circuits can now be generated on smartphones. We show our compiler's ability to interface with other execution systems and perform optimizations on those execution systems. We show how runtime generation of circuits can be used in practice. Our results demonstrate the feasibility of generating garbled circuits on mobile devices.
This thesis includes previously published co-authored material.
en_US
University of Oregon
All Rights Reserved.
computation
phones
privacy
Optimizing Secure Function Evaluation on Mobile Devices
Electronic Thesis or Dissertation

Magharei, Nazanin, 1979-
2011-04-15T01:05:19Z
2012-03-06T18:41:15Z
2010-12
http://hdl.handle.net/1794/11089
xxii, 413 p. : ill.
Streaming multimedia content over the Internet is extremely popular, mainly due to emerging applications such as IPTV, YouTube, and e-learning. All these applications require simultaneous streaming of multimedia content from one or multiple sources to a large number of users. Such applications impose unique requirements in terms of server bandwidth and playback delay which are difficult to achieve in a scalable fashion with the traditional client-server architecture. Peer-to-peer (P2P) overlays offer a promising approach to support scalable streaming applications, which we broadly refer to as "P2P streaming". Design of a scalable P2P streaming mechanism that accommodates heterogeneity of peers' bandwidth and copes with dynamics of peer participation while ensuring in-time delivery of the multimedia content to individual peers is extremely challenging. Besides these fundamental challenges, P2P streaming applications face practical issues such as encouraging peers' contribution and decreasing costly inter-ISP P2P traffic.
In this dissertation, we study several aspects of live P2P streaming with the goal of improving the performance of such systems. The dissertation falls into two parts. (i) We present the design and evaluation of a mesh-based live P2P streaming mechanism, called PRIME. Further, we perform a head-to-head comparison of the two approaches to live P2P streaming, namely tree-based and mesh-based, and demonstrate the superiority of the mesh-based approach. In the quest for a systematic comparison of existing mesh-based solutions for live P2P streaming, we leverage the insights from our design of PRIME and propose an evaluation methodology, which we then use to compare the performance of existing mesh-based live P2P streaming solutions. (ii) From a more practical perspective, we tackle some of the outstanding issues in the deployment of live P2P streaming applications, namely providing incentives for participating peers to contribute their resources and designing ISP-friendly live P2P streaming protocols with the ultimate goal of reducing costly inter-ISP traffic. In the end, this dissertation reveals fundamental trade-offs in the design, comparison, and meaningful evaluation of basic and practical live P2P streaming mechanisms under realistic settings.
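In mesh-based live P2P streaming, each peer periodically requests missing blocks from its neighbors; a common heuristic (sketched here for illustration only, with a hypothetical `schedule_requests` helper, not PRIME's actual packet scheduler) is to request rarer blocks first while balancing load across neighbors:

```python
from collections import Counter

def schedule_requests(missing, neighbor_buffers, per_neighbor_cap=2):
    """Rarest-first pull scheduling: assign each missing block to a
    neighbor that holds it, preferring blocks that few neighbors have
    and capping requests sent to any single neighbor."""
    availability = Counter()
    for buf in neighbor_buffers.values():
        for blk in buf:
            availability[blk] += 1
    load = {n: 0 for n in neighbor_buffers}
    requests = {}
    # rarest blocks first; blocks no neighbor has yet must wait
    for blk in sorted(missing, key=lambda b: availability[b]):
        if availability[blk] == 0:
            continue
        candidates = [n for n, buf in neighbor_buffers.items()
                      if blk in buf and load[n] < per_neighbor_cap]
        if not candidates:
            continue
        n = min(candidates, key=lambda x: load[x])  # least-loaded neighbor
        requests[blk] = n
        load[n] += 1
    return requests
```

Requesting rare blocks first keeps block diversity high across the mesh, which is what allows peers to exchange content with one another instead of overloading the source.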
This dissertation includes previously published, co-authored material.
Committee in charge: Prof. Reza Rejaie, Chair; Prof. Virginia Lo; Prof. Jun Li; Prof. David Levin; Prof. Markus Hofmann
en_US
University of Oregon
University of Oregon theses, Dept. of Computer and Information Science, Ph. D., 2010;
Peer-to-peer architecture (Computer networks)
Streaming technology (Telecommunications)
Computer science
Streaming multimedia
Peer-to-peer streaming: Design and challenges
Thesis

Norris, Boyana
Shaila, Nashid
2016-10-27T18:37:45Z
2016-10-27T18:37:45Z
2016-10-27
http://hdl.handle.net/1794/20452
Understanding the performance of applications on modern multi- and manycore platforms is a difficult task that involves complex measurement, analysis, and modeling. The Roofline model is used to assess an application's performance on a given architecture, yet little work has applied it to real measurements. Because the model can be a very useful tool for understanding application performance on a given architecture, in this thesis we demonstrate the use of architectural roofline data together with measured data to analyze the performance of different benchmarks. We first explain how to use different toolkits to measure the performance of a program. Next, these data are used to generate roofline plots, from which we can decide how to make the application more efficient and remove bottlenecks. Our results show that this can be a powerful tool for analyzing the performance of applications across different architectures and different code versions.
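The core of the Roofline model is a single formula: attainable performance is the minimum of the machine's peak compute rate and its memory bandwidth multiplied by the kernel's arithmetic intensity (FLOPs per byte moved). A minimal sketch, with illustrative numbers rather than any platform measured in the thesis:

```python
def roofline(peak_gflops, peak_bw_gbs, arithmetic_intensity):
    """Attainable performance (GFLOP/s) under the Roofline model:
    min(peak compute, memory bandwidth x arithmetic intensity).
    Also report which resource bounds the kernel."""
    memory_roof = peak_bw_gbs * arithmetic_intensity
    attainable = min(peak_gflops, memory_roof)
    bound = "memory-bound" if memory_roof < peak_gflops else "compute-bound"
    return attainable, bound

# hypothetical machine: 100 GFLOP/s peak, 50 GB/s memory bandwidth
print(roofline(100.0, 50.0, 0.5))  # low intensity -> memory-bound at 25 GFLOP/s
print(roofline(100.0, 50.0, 4.0))  # high intensity -> compute-bound at 100 GFLOP/s
```

The "ridge point" where the two roofs meet (here at intensity 100/50 = 2 FLOPs/byte) separates kernels worth optimizing for data movement from those worth optimizing for compute, which is exactly the decision the roofline plots in the thesis support.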
en_US
University of Oregon
All Rights Reserved.
high performance computing
performance analysis
performance modeling
roofline model
TAU
PAPI
Vtune
SDE
Performance Analysis and Modeling of Parallel Applications in the Context of Architectural Rooflines
Electronic Thesis or Dissertation
M.S.
masters
Department of Computer and Information Science
University of Oregon

Fickas, Stephen
Arab Yar Mohammadi, Mahshid
Arab Yar Mohammadi, Mahshid
2012-10-26T01:43:56Z
2012-10-26T01:43:56Z
2012
http://hdl.handle.net/1794/12349
My interest is in applying a domain model to help elicit personal requirements for the problem of community travel for people with cognitive impairments. The domain model I used is the ACT model, which is embedded in the tool I designed for defining required prompts for travel. I set up a study to look at the use of the domain model to help travel-planners generate personalized prompts for a traveler. My goal is to better understand the mechanics of running a human-performance study and to get a first look at how the domain model can be understood by travel-planners. The study shows that most participants prefer the ACT-based tool to free-thinking and writing down prompts. I found that the tool helps participants define more organized and concise prompts, but not necessarily a higher number of prompts, compared to the free-think approach. The tool captures prompts for some steps that are neglected while free-thinking. However, some steps of the ACT model need to be disambiguated or presented more effectively in the tool.
en_US
University of Oregon
All Rights Reserved.
Assistive Technology
Requirements Engineering
Personalized Requirements Elicitation Using a Domain Model
Electronic Thesis or Dissertation

Malony, Allen
Ozog, David
2013-10-03T23:31:43Z
2013-10-03T23:31:43Z
2013-10-03
http://hdl.handle.net/1794/13239
While the message-passing paradigm, seen in programming models such as MPI and UPC, has provided a solution for efficiently programming distributed-memory computer systems, this approach is not a panacea for the needs of all scientists. The traditional method of developing parallel applications in C/C++ and Fortran leaves behind the high-level and heterogeneous environments that are most conducive to modern computational science. PRESTO alleviates this problem with an easy-to-use framework that provides multi-language adapters to a flexible MPI middleware supporting common computational models, such as asynchronous master/worker and ring pipeline, in heterogeneous environments.
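The asynchronous master/worker model mentioned above can be sketched in a few lines. This is a generic thread-based illustration of the pattern, not PRESTO's MPI middleware or its API; the function name `master_worker` is invented for the example:

```python
import queue
import threading

def master_worker(tasks, work_fn, n_workers=4):
    """Asynchronous master/worker: the master enqueues all tasks,
    workers pull tasks as they become free, and results are
    collected without any ordering guarantee."""
    task_q, result_q = queue.Queue(), queue.Queue()
    for t in tasks:
        task_q.put(t)

    def worker():
        while True:
            try:
                t = task_q.get_nowait()
            except queue.Empty:
                return  # no tasks left; this worker retires
            result_q.put((t, work_fn(t)))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return dict(result_q.queue)
```

Because workers pull from a shared queue rather than receiving a fixed partition, slow tasks and fast tasks balance themselves, which is the property that makes the pattern attractive in heterogeneous environments.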
en_US
University of Oregon
All Rights Reserved.
PRESTO: A Parallel Runtime Environment for Scalable Task-Oriented Computations
Electronic Thesis or Dissertation
M.S.
masters
Department of Computer and Information Science
University of Oregon