
At the controller, there are no limitations on the number of header fields that can be accessed when computing the route. The trivial approach, then, is to send every packet to the controller and let it decide how each packet should be forwarded. However, this does not scale: the controller can be overwhelmed with traffic as the number of OpenFlow switches under its control grows. Moreover, when a MobilityFirst chunk is transmitted as multiple packets, only the first packet carries the routing header, so there is no incentive to send all packets to the controller. Since all packets belonging to a chunk carry the same Hop ID, only the first packet of each chunk needs to be sent to the controller; once the controller computes the outbound port for that packet, a flow rule can be set up on the switch which says,


{{{Hop ID = x and Source MAC Address = y => Out bound Port = z}}}
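The per-chunk fast path described above can be sketched as follows. This is a minimal illustration, not a real OpenFlow controller API: {{{FlowTable}}}, {{{compute_route}}}, and {{{handle_packet}}} are hypothetical names, and the route computation is stubbed out to read a precomputed port from the routing header.

```python
class FlowTable:
    """Minimal stand-in for a switch's flow table."""
    def __init__(self):
        self.rules = {}  # (hop_id, src_mac) -> outbound port

    def install(self, hop_id, src_mac, out_port):
        self.rules[(hop_id, src_mac)] = out_port

    def lookup(self, hop_id, src_mac):
        # None means "no rule": the packet would be punted to the controller.
        return self.rules.get((hop_id, src_mac))


def compute_route(routing_header):
    """Placeholder for the controller's route computation; here it simply
    reads a precomputed next-hop port out of the routing header."""
    return routing_header["next_hop_port"]


def handle_packet(pkt, table):
    """Decide the outbound port for a packet, installing a per-chunk rule
    keyed on (Hop ID, source MAC) when the first packet is seen."""
    port = table.lookup(pkt["hop_id"], pkt["src_mac"])
    if port is not None:
        return port  # fast path: rule already installed on the switch
    # Slow path: only the first packet of the chunk (which carries the
    # routing header) reaches the controller.
    port = compute_route(pkt["routing_header"])
    table.install(pkt["hop_id"], pkt["src_mac"], port)
    return port
```

With this in place, only one packet per chunk incurs a controller round trip; the rest match the installed rule.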


However, since the switch does not understand what a Hop ID is, we need to set up flows using only layer 2 header fields (the EtherType of MobilityFirst packets, 0x27C0, prevents the switch from matching on any higher-layer fields). Hence, we insert the Hop ID of each packet into the VLAN tag field of the layer 2 header. This way, the switch can look at the source MAC address and the VLAN tag and decide which port the packet has to be forwarded through. The rule now looks like,

{{{VLAN Tag = x and Source MAC Address = y => Out bound Port = z}}}
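The layer-2-only match can be sketched as below. The helper names ({{{vlan_encode}}}, {{{make_rule}}}, {{{matches}}}) are illustrative, not an OpenFlow API; the one real constraint encoded here is that the 802.1Q VLAN ID field is only 12 bits wide, so the Hop ID must fit in that range.

```python
VLAN_ID_BITS = 12  # the 802.1Q VLAN ID field is 12 bits wide

def vlan_encode(hop_id):
    """Place the Hop ID into the VLAN ID field; it must fit in 12 bits."""
    if hop_id >= (1 << VLAN_ID_BITS):
        raise ValueError("Hop ID too large for the VLAN ID field")
    return hop_id

def make_rule(hop_id, src_mac, out_port):
    """Build a flow rule that uses only layer 2 fields: the VLAN tag
    (carrying the Hop ID) and the source MAC address."""
    return {"match": {"dl_vlan": vlan_encode(hop_id), "dl_src": src_mac},
            "action": {"output": out_port}}

def matches(rule, pkt):
    """Check whether a packet's layer 2 fields hit this rule."""
    m = rule["match"]
    return pkt["dl_vlan"] == m["dl_vlan"] and pkt["dl_src"] == m["dl_src"]
```

The 12-bit limit is worth noting as a design constraint: Hop IDs larger than 4095 cannot be carried in the VLAN ID field without some additional encoding.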

The OpenFlow switch has a MobilityFirst router connected to it. As long as chunks need not be stored for any purpose, this router is never used and the switch takes care of forwarding packets. However, when a chunk needs to be stored, possibly because the link to the destination goes down, the switch can forward the packets to the router, which has storage functionality built into it. When the link comes up again, the packets can then be transmitted from the router to the destination. Additionally, while implementing this scheme, there is a choice of making the MobilityFirst router transparent or visible to the source and destination. Both of these methods have their own advantages and drawbacks, as described below.


- '''MobilityFirst Router Transparent: ''' In this case, the source and destination see each other, and whenever no storage is required, the OpenFlow switch acts as a conventional layer 2 switch. When chunks have to be stored for some reason, the switch will forward the packets to the MobilityFirst router, after making the necessary changes to the layer 2 header (specifically, the layer 2 destination) of the packets. This implementation is efficient in the sense that hardly any processing needs to be done (beyond layer 2 forwarding) when chunks need not be stored. The downside is that if the link to the destination goes down, the source will stop receiving link probe messages and hence will stop transmitting packets. This issue will have to be worked around for this implementation to work.


- '''MobilityFirst Router Visible: ''' In this case, the source and destination see only the MobilityFirst router and not each other. Hence, the sender will always send packets to the router, no matter what the state of the link to the destination is. However, for every packet that does not have to be stored, the switch has to rewrite the layer 2 header and forward the packet directly to the destination instead of forwarding it to the router. The storage case becomes trivial, as it is just conventional layer 2 forwarding. The downside of this implementation is the need to rewrite the header fields of every packet that is cut through, which might impact the throughput achieved.
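The two modes above differ only in which packets get their layer 2 header rewritten. A minimal sketch, with an assumed router MAC and illustrative function names (not the actual switch implementation):

```python
ROUTER_MAC = "00:00:00:00:00:0f"  # assumed MAC of the MobilityFirst router

def forward_transparent(pkt, dest_up, dest_port, router_port):
    """Transparent mode: packets pass untouched unless they must be
    stored, in which case the layer 2 destination is rewritten so the
    router absorbs the chunk."""
    if dest_up:
        return pkt, dest_port  # plain layer 2 forwarding, no rewrite
    rewritten = dict(pkt, dl_dst=ROUTER_MAC)  # divert to router storage
    return rewritten, router_port

def forward_visible(pkt, dest_up, dest_mac, dest_port, router_port):
    """Visible mode: the source addresses the router, so every cut-through
    packet needs its layer 2 destination rewritten; the storage case is
    plain forwarding toward the router."""
    if dest_up:
        rewritten = dict(pkt, dl_dst=dest_mac)  # rewrite on every packet
        return rewritten, dest_port
    return pkt, router_port  # storage case: conventional forwarding
```

The asymmetry is visible directly: transparent mode rewrites only in the (rare) storage case, while visible mode rewrites on every cut-through packet.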


'''Performance Evaluation of Various OpenFlow Actions:'''


Given that the difference between the two methods above lies in the number of header fields that have to be rewritten in each packet, experiments were conducted on Mininet and ORBIT to evaluate how this affects throughput. The setup was a single OpenFlow switch with two nodes attached to it. {{{iperf}}} was then used to measure the throughput between these two nodes under different OpenFlow actions.

It can be seen that on a hardware switch, where the OpenFlow actions are performed in TCAMs, an increase in the complexity of the actions does not reduce throughput. Hence, making the MobilityFirst router visible (and having to rewrite the header fields of a larger number of packets) has no drawback in terms of performance.