Why packet fragmentation happens, the consequences, and how to correct for it in a monitoring system

Visual representation of Packet Fragmentation

Continuing on from the previous article, this piece looks at yet another common issue that dogs unintelligent monitoring systems and reduces the effectiveness of the analytic tools connected to them.

What is packet fragmentation and what causes packets to become fragmented?

Within a network it is sometimes necessary to concatenate either the body or header of an existing packet with a different header, or to encapsulate it completely within a separate packet, normally to better facilitate the packet's passage through the network. There are many reasons to do this: perhaps two network types have different policies or different MTU sizes, or one network uses encryption and the other does not. More commonly, the two networks use dissimilar transport protocols, for example MPLS-TE, SCTP, TCP or UDP, or even VLAN tagging. Either way, something has to be done to transport a packet from one network type, through a dissimilar network type, to the final destination network.
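A rough, illustrative sketch of the arithmetic involved, assuming a GTP-U tunnel over IPv4/UDP transport (outer IPv4 20 bytes + outer UDP 8 bytes + minimal GTP-U header 8 bytes, an assumed 36-byte overhead; real overheads vary with options and extension headers):

```python
# Illustrative only: encapsulation adds fixed overhead, so an inner packet
# that already fills the MTU no longer fits on the wire afterwards.
MTU = 1500
GTP_OVERHEAD = 20 + 8 + 8  # assumed: outer IPv4 + outer UDP + minimal GTP-U


def fits_after_encapsulation(inner_packet_len: int, mtu: int = MTU) -> bool:
    """True if the encapsulated packet still fits in a single MTU."""
    return inner_packet_len + GTP_OVERHEAD <= mtu


print(fits_after_encapsulation(1400))  # True  - room for the tunnel header
print(fits_after_encapsulation(1500))  # False - must be fragmented
```

The point is simply that a packet built to exactly fit one network's MTU is pushed over the limit the moment a tunnel header is prepended.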

What causes this to happen?

In 4G/LTE mobile networks specifically, there is the mandated use of GTP (GPRS Tunneling Protocol). If the packet to be encapsulated has already reached the maximum MTU size and encapsulation is then added, the packet becomes too long to pass through the network as a single unit and must be split into two or more fragments. This creates a headache for the analytic tools that sit downstream of the traffic capture system. Take into account that networks have multiple routes available, and that certain traffic types are very often sent over certain routes, and a packet's fragments can arrive out of order, or even greatly separated in time, at the network element that must process them for analytic information. In a network with multiple transport routes between source and destination, the middle fragment may arrive before the first or after the third, and the third fragment may be badly delayed in another part of the network, holding up processing of the first two.
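The splitting step itself can be sketched as follows. This is a simplified model, not a full IPv4 implementation; the field names are hypothetical, though the rule that fragment offsets must fall on 8-byte boundaries is taken from IPv4 fragmentation:

```python
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    """Split a payload into fragments whose on-wire size fits the MTU.

    IPv4 fragment offsets must be multiples of 8 bytes, so the data
    carried per fragment is rounded down to a multiple of 8.
    """
    chunk = (mtu - header_len) // 8 * 8
    frags, offset = [], 0
    while offset < len(payload):
        piece = payload[offset:offset + chunk]
        frags.append({
            "offset": offset,
            "more_fragments": offset + chunk < len(payload),
            "data": piece,
        })
        offset += chunk
    return frags


# A 3000-byte payload over a 1500-byte MTU yields 1480-byte chunks,
# hence three fragments; only the last has more_fragments=False.
frags = fragment(b"x" * 3000, mtu=1500)
print(len(frags))  # 3
```

Each fragment is a valid packet in its own right, which is exactly why the network is free to route and reorder them independently.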

Why is this an issue?

Suddenly, and for no apparent reason, the traffic no longer resembles how it was sent; its characteristics have changed completely because the network has affected the packet's transport. For a tool to run at full capacity it expects well-formed packets, and that capacity is reduced if the tool has to reassemble the fragments itself. A more serious situation often arises where the tool may be unable to extract the body of a fragment from its encapsulation, leading to incomplete or inaccurate packet analysis. Analytic tools very often need to see the whole interchange between sender and receiver, so losing even one packet, or one fragment of a packet, renders the whole session null and void: the traffic collected and backhauled for it is useless.

How do you avoid fragmented packets flowing into your tool and reducing its performance?

Analytic tools are not built from the ground up to reassemble packets, nor to de-encapsulate them. This function is better left to another network element, ideally in the traffic capture layer. When you select a monitoring system, make sure it has the capability to reassemble fragmented packets as a base feature.
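As a hypothetical sketch of what that capture-layer reassembly has to do, the buffer below keys fragments by packet ID, accepts them in any arrival order, and releases one whole packet to the downstream tool only once every byte is present (real reassemblers also need timeouts and overlap handling, omitted here):

```python
from typing import Dict


class Reassembler:
    """Buffer fragments per packet ID; emit the payload once complete."""

    def __init__(self):
        self.buffers: Dict[int, dict] = {}

    def add(self, pkt_id: int, offset: int, more: bool, data: bytes):
        buf = self.buffers.setdefault(pkt_id, {"parts": {}, "total": None})
        buf["parts"][offset] = data
        if not more:  # the last fragment fixes the total payload length
            buf["total"] = offset + len(data)
        received = sum(len(d) for d in buf["parts"].values())
        if buf["total"] is not None and received == buf["total"]:
            payload = b"".join(d for _, d in sorted(buf["parts"].items()))
            del self.buffers[pkt_id]
            return payload  # complete: hand one whole packet downstream
        return None  # still waiting for missing fragments


r = Reassembler()
assert r.add(7, 1480, True, b"b" * 1480) is None   # middle arrives first
assert r.add(7, 2960, False, b"c" * 40) is None    # last arrives second
out = r.add(7, 0, True, b"a" * 1480)               # first arrives last
assert out == b"a" * 1480 + b"b" * 1480 + b"c" * 40
```

Because the capture layer absorbs the reordering, the analytic tool only ever sees complete, in-order packets and can spend its capacity on analysis rather than reassembly.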