About me

Hi! My name is Anastassios Nanos and this is my personal home
page. I am currently a researcher doing really interesting stuff around
hypervisors and microservers, and struggling with emerging networking and storage
technologies. Previously, I was a post-doctoral research fellow at the Computing Systems Laboratory
at the National Technical University of Athens,
where I obtained my PhD in Computer Engineering under the supervision of
Professor Nectarios Koziris.

You can get an idea of what I'm currently involved in here, or you can take a look at my short bio.
From time to time I wrap up various snippets of code I have been working on (or not) and create repos on GitHub or Bitbucket.

You can reach me in various ways -- I mostly prefer e-mail ;-)
In case you're looking for my old home page, you can find it here.

Research stuff

In the HPC context, applications often scale to a large number of nodes, creating the need for a high-performance interconnect that provides low-latency and high-bandwidth communication. In the cloud context, distributed applications execute in a set of Virtual Machines (VMs) placed on physical nodes across the data center. These VMs incur communication overheads due to the various intermediate layers that abstract away the physical characteristics of the underlying hardware and multiplex the application's access to I/O resources.

Moreover, these VMs are unaware of their physical placement. This is a problem because application instances running on the same physical node but in different VMs cannot exploit locality: data exchanged between instances of the same application already reside in the node's physical memory, yet, since the VMs do not know this, the data flow through various unneeded layers (the network stack, etc.) before the actual message exchange is realized.

V4VSockets

We develop V4VSockets, a socket-compliant, high-performance intra-node communication framework for co-located Virtual Machines. V4VSockets reduces or eliminates several problems associated with traditional PV drivers in an HPC context. Specifically, it simplifies the data path between co-located VMs by creating a peer-to-peer communication channel between each VM and the hypervisor. V4VSockets removes the overhead of page exchange/mapping and improves throughput by moving the actual copy operation to the receiver VM. It also improves security by enforcing a shared-nothing policy: pages are never shared between VMs; the hypervisor alone is responsible for transferring data to the peer VM. Moreover, V4VSockets complies with the classic network layering concept, thus simplifying the interface to applications.
Preliminary code is available online.

Xen2MX

As we move towards the standardization of Ethernet in both worlds, cloud
computing and high-performance computing, we need a way to study the effect
of message-passing protocols in the cloud without having to suffer TCP/IP's
complexity. However, current approaches do not provide a software solution that
efficiently exploits hypervisor abstractions to access hardware. We therefore move
towards a more generic design, in order to understand and optimize the way VMs
communicate with the network in an HPC context.

We design Xen2MX, a high-performance interconnection protocol for
virtualized environments. Xen2MX is binary-compatible with MX and
wire-compatible with MXoE, the Ethernet mode of Myrinet's MX protocol. Although our
prototype implementation is in its early stages, results from the original MX
benchmarks over Xen2MX are promising: virtualization overheads are almost
eliminated compared to a software-bridge setup, the generic way of
communicating in virtualized environments. We are in the process of finalizing
our implementation and examining possible ways to optimize our prototype. The
code is available at github.

WiP

We believe that modern high-performance interconnection networks provide
abstractions that can be exploited in Virtual Machine execution environments,
but lack support for sharing architectures. Previous work has shown that
integrating virtualization semantics into specialized software running
on network processors can isolate, and ultimately minimize, the overhead that
guest VM device access imposes on the hypervisor. Direct I/O has been
proposed as the solution to the CPU overhead imposed by guest-VM-transparent
services, which can lead to low throughput on high-bandwidth links. However,
minimizing the CPU overhead comes at the cost of giving away the benefits of
the device driver model. Integrating protocol-offload support (present in most
modern NICs) into virtual network device drivers can lead to performance
improvement. Bypassing the hypervisor while moving data around can also
minimize the overhead imposed by heavy I/O, but at the cost of security and
isolation.

We envision a Virtualization-enabled High performance Network Interface that
can achieve line-rate throughput and optimized sharing of Network I/O in
Virtual Machines by utilizing commodity hardware and innovative
resource-sharing virtualization architectures.

Data access in HPC infrastructures is realized via user-level
networking and OS-bypass techniques, through which nodes can communicate
with high bandwidth and low latency. Virtualizing physical
components requires hardware-aided software hypervisors to control I/O
device access. As a result, line-rate bandwidth or low-latency message
exchange over 10GbE interconnects hosted in cloud computing infrastructures
can only be achieved by alleviating the software overheads imposed
by the virtualization abstraction layers, namely the VMM and the driver
domains, which hold direct access to I/O devices.
We have designed MyriXen, a framework in which Virtual Machines
efficiently share network I/O devices, bypassing the overheads imposed
by the VMM or the driver domains. MyriXen permits VMs to efficiently
exchange messages with the network via a high-performance NIC,
leaving security and isolation issues to the virtualization layers. Smart
Myri-10G NICs provide hardware abstractions that facilitate the integration
of the MX semantics into the Xen split driver model. With MyriXen,
multiple VMs exchange messages using the MX message-passing protocol
over Myri-10G interfaces as if the NIC were assigned solely to each of them. We
believe that MyriXen can
support message-passing-based applications in clusters of VMs provided by
cloud computing infrastructures with near-native performance.