Abstract—Deep neural networks have achieved near-human accuracy in a wide range of classification and prediction tasks on image, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators,

Abstract: The poor performance of TCP in wireless networks is a well-known problem, and a large amount of research effort has been devoted to it. However, our own experiments show that existing solutions including split TCP and recently developed congestion control

Abstract: Recent spatio-temporal data applications, such as car-sharing and smart cities, impose new challenges regarding the scalability and timeliness of data processing systems. Trajectory compression is a promising approach for scaling up spatio-temporal databases.
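The abstract does not say which compression algorithm the work uses; as one common illustration of trajectory compression, the Douglas-Peucker algorithm simplifies a polyline by keeping only points that deviate from a straight-line approximation by more than a tolerance. This is a generic sketch, not necessarily the paper's method:

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # |cross product| / base length gives the point-to-line distance
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, epsilon):
    """Simplify a trajectory, keeping points farther than epsilon from the chord."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord between the two endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    # Recurse on both halves and merge, dropping the duplicated split point
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right

track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(track, 1.0))  # keeps 4 of the 8 points
```

The trade-off is the tolerance epsilon: larger values yield stronger compression but larger spatial error in the stored trajectory.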

Abstract Distributed file systems built for Big Data Analytics and cluster file systems built for traditional applications have very different functionality requirements, resulting in separate storage silos. In enterprises, there is often the need to run analytics on data generated by

Abstract Media uploads and downloads, even those on the order of a few hundred kilobytes, commonly fail when attempted over lossy, low-bandwidth, and high-latency connections. These conditions, which are common for networks in rural, resource-poor areas, result in the

Abstract: Fast service placement, i.e., finding a set of nodes with enough free computation, storage, and network capacity, is a routine task in daily cloud administration. In this work, we formulate this as a subgraph matching problem. Different
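To make the subgraph-matching formulation concrete, here is a minimal backtracking sketch under assumed data structures (the service, node, and link names are illustrative, and the paper's algorithm is not given in the abstract): each requested service must map to a distinct infrastructure node with enough free capacity, and every edge between services must be backed by a physical link.

```python
def place(request, infra, capacity, links):
    """Backtracking subgraph matching for service placement.

    request  : dict service -> (demand, set of neighbor services)
    infra    : list of infrastructure node names
    capacity : dict node -> free capacity
    links    : set of frozensets {u, v} of physically connected node pairs
    Returns a service->node mapping, or None if no placement exists.
    """
    services = list(request)

    def backtrack(i, mapping):
        if i == len(services):
            return dict(mapping)
        s = services[i]
        demand, neighbors = request[s]
        for node in infra:
            # Node must be unused and have enough free capacity
            if node in mapping.values() or capacity[node] < demand:
                continue
            # Every already-placed neighbor must share a physical link
            if all(frozenset((node, mapping[n])) in links
                   for n in neighbors if n in mapping):
                mapping[s] = node
                result = backtrack(i + 1, mapping)
                if result is not None:
                    return result
                del mapping[s]
        return None

    return backtrack(0, {})

request = {"web": (2, {"db"}), "db": (4, {"web"})}
infra = ["n1", "n2", "n3"]
capacity = {"n1": 3, "n2": 5, "n3": 1}
links = {frozenset(("n1", "n2"))}
print(place(request, infra, capacity, links))  # → {'web': 'n1', 'db': 'n2'}
```

Exhaustive backtracking like this is exponential in the worst case, which is precisely why fast placement over large infrastructure graphs is a hard and interesting problem.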

ABSTRACT Monitoring and troubleshooting a large wireless mesh network presents several challenges. Diagnosis of problems related to wireless access in these networks requires a comprehensive set of metrics and network monitoring data. Collection and offloading of a large amount of data are infeasible in a bandwidth-constrained mesh network. Additionally, the processing required to analyze data from the entire network restricts the scalability of the system and impacts the ability to perform real-time fault diagnosis. To this end, we propose MeshMon, a network monitoring framework that includes a multi-tiered method of data collection. MeshMon dynamically controls the granularity of data collection based on observed events in the network, thereby achieving significant bandwidth savings and enabling real-time automated management. Our evaluation of MeshMon on a real testbed shows that we can diagnose a majority (87%) of network faults with a 66% savings in bandwidth required for network monitoring.
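The abstract does not describe MeshMon's tiers in detail; as an illustrative sketch of multi-tiered, event-driven collection, a monitor can report coarse summaries by default and escalate to fine-grained collection only when an observed metric crosses a threshold (the tier names, metrics, and threshold below are hypothetical):

```python
class TieredMonitor:
    """Illustrative multi-tiered collector: report coarse summaries by
    default, escalate to detailed collection when an anomaly is observed,
    and de-escalate once readings return to normal. Hypothetical tiers,
    not MeshMon's actual design."""

    def __init__(self, loss_threshold=0.1):
        self.loss_threshold = loss_threshold
        self.tier = "summary"

    def collect(self, stats):
        # Escalate on high packet loss; otherwise stay coarse to save bandwidth
        if stats["packet_loss"] > self.loss_threshold:
            self.tier = "detailed"
        else:
            self.tier = "summary"
        if self.tier == "summary":
            return {"tier": "summary", "loss": stats["packet_loss"]}
        return {"tier": "detailed", **stats}

mon = TieredMonitor(loss_threshold=0.1)
print(mon.collect({"packet_loss": 0.02, "retries": 3, "rssi": -60}))
print(mon.collect({"packet_loss": 0.25, "retries": 41, "rssi": -82}))
```

The bandwidth saving comes from the common case: most reports carry only the summary fields, and the full metric set crosses the mesh only while a fault is suspected.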

ABSTRACT Power delivery, electricity consumption, and heat management are becoming key challenges in data center environments. Several past solutions have individually evaluated different techniques to address separate aspects of this problem, in hardware and software, and at local and global levels. Unfortunately, there has been no corresponding work on coordinating all these solutions. In the absence of such coordination, these solutions are likely to interfere with one another, in unpredictable (and potentially dangerous) ways. This paper seeks to address this problem. We make two key contributions. First, we propose and validate a power management solution that coordinates different individual approaches. Using simulations based on 180 server traces from nine different real-world enterprises, we demonstrate the correctness, stability, and efficiency advantages of our solution. Second, using our unified architecture as the base, we perform a detailed quantitative sensitivity analysis and draw conclusions about the impact of different architectures, implementations, workloads, and system design choices.
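The abstract motivates coordination without spelling out the mechanism; as a minimal sketch of why a coordinating layer matters, consider a global power cap arbitrating among per-server demands. Without coordination, independent controllers could collectively oversubscribe the cap; with it, budgets are scaled down consistently (the proportional policy and names below are assumptions, not the paper's architecture):

```python
def coordinate_budgets(global_cap, demands):
    """Illustrative coordinator: if the sum of per-server power demands
    exceeds a global cap, scale every server's budget down proportionally.
    A stand-in for a real coordination architecture, whose details the
    abstract does not give."""
    total = sum(demands.values())
    if total <= global_cap:
        return dict(demands)  # enough headroom: no throttling needed
    scale = global_cap / total
    return {server: demand * scale for server, demand in demands.items()}

demands = {"s1": 300.0, "s2": 500.0, "s3": 400.0}  # watts requested
budgets = coordinate_budgets(1000.0, demands)
print(budgets)  # each demand scaled by 1000/1200
```

A real coordinator must also handle the interactions the abstract warns about, e.g. a software consolidation policy and a hardware power-capping loop reacting to the same signal at different timescales.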

ABSTRACT As the utility of wireless technology grows, wireless networks are being deployed in more widely varying conditions. The monitoring of these networks continues to reveal key implementation deficiencies that need to be corrected in order to improve protocol operation and end-to-end performance. Using data we collected from the 67th Internet Engineering Task Force (IETF) meeting held in November 2006, we show that under conditions of high medium utilization and packet loss, handoffs can be incorrectly initiated. Using the notion of persistence and prevalence for the association of a client to an Access Point (AP), we show that although the clients were predominantly static, the handoff rate is surprisingly high. Through the analysis of the data set, we show that unnecessary handoff events not only increase the amount of management traffic in the network, but also severely impact client performance.
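The persistence/prevalence analysis can be made concrete with a small sketch: from a time-ordered log of a client's AP associations, compute each AP's prevalence (fraction of samples spent associated with it) and count handoffs as AP changes between consecutive samples. The log format is hypothetical and the paper's exact definitions may differ in detail:

```python
from collections import Counter

def association_metrics(samples):
    """samples: time-ordered list of (timestamp, ap) association samples
    for one client. Returns per-AP prevalence and the handoff count."""
    counts = Counter(ap for _, ap in samples)
    prevalence = {ap: n / len(samples) for ap, n in counts.items()}
    # A handoff is any change of AP between consecutive samples
    handoffs = sum(1 for (_, a), (_, b) in zip(samples, samples[1:]) if a != b)
    return prevalence, handoffs

log = [(0, "AP1"), (1, "AP1"), (2, "AP2"), (3, "AP1"), (4, "AP1")]
prevalence, handoffs = association_metrics(log)
print(prevalence, handoffs)  # AP1 dominates, yet 2 handoffs occurred
```

This is exactly the pattern the abstract reports: a static client with one highly prevalent AP can still show a high handoff rate, each handoff adding management traffic and disrupting the client.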