This improves performance by allowing more guest VMs to run on the same hardware. As traffic increases (100 pps), the short sleeps won't happen. In a lightly loaded scenario (usually lab testing), you may see high latency.

After investigation, we found that the customer had never enabled the q-pic-large-buffer small-scale knob on the previous IQ PIC, so it was able to accept relatively high-PPS traffic. The IQE PIC, however, does not support this knob.

However, it is also important to make sure that the device has the capacity to switch/route as many packets as required to achieve wire-rate performance. This metric is called ‘Packets per Second’, or PPS for short.
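As a rough illustration of how line rate and frame size determine the required PPS, the sketch below computes the wire-rate packet load for an Ethernet link. The function name is hypothetical, and the 20 bytes of per-frame overhead (preamble/SFD plus inter-frame gap) is an assumption of standard Ethernet framing:

```python
# Per-frame overhead on the wire for standard Ethernet:
# 8-byte preamble/SFD + 12-byte inter-frame gap = 20 bytes (assumption).
ETH_WIRE_OVERHEAD = 20

def wire_rate_pps(link_bps: int, frame_bytes: int) -> int:
    """Packets per second needed to fill link_bps with frames of frame_bytes."""
    bits_per_frame = (frame_bytes + ETH_WIRE_OVERHEAD) * 8
    return link_bps // bits_per_frame

# A 1 Gbps link carrying minimum-size 64-byte frames must forward
# roughly 1.49 million packets per second to run at wire rate.
print(wire_rate_pps(1_000_000_000, 64))  # -> 1488095
```

Note that the same link at maximum-size 1518-byte frames needs only about 81 kpps, which is why minimum-size frames are the worst case for a forwarding engine.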

This can occur if the packets are arriving at a very high rate. The sampling packet rate is configured under forwarding-options sampling input family inet with the max-packets-per-second knob. The default value is 1000, so if the sampled traffic exceeds 1000 pps, the excess sampled packets are dropped and a "samples dropped due to high packet rate" message is logged.
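The drop behavior described above can be sketched as a simple rate budget (a simplified model, not Junos code; the function name is hypothetical, and the 1000-pps default is taken from the text):

```python
DEFAULT_MAX_PPS = 1000  # default max-packets-per-second, per the text

def sample_budget(offered_pps: int, max_pps: int = DEFAULT_MAX_PPS):
    """Return (sampled_pps, dropped_pps) for a given offered sampling rate.

    Samples above the configured ceiling are dropped; that is when the
    'samples dropped due to high packet rate' message would be logged.
    """
    sampled = min(offered_pps, max_pps)
    dropped = offered_pps - sampled
    return sampled, dropped

print(sample_budget(2500))  # -> (1000, 1500): 1500 pps of samples dropped
print(sample_budget(800))   # -> (800, 0): under the ceiling, nothing dropped
```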

This would represent the most recent load-interval data for the interface. At the CLI, each aggregate count is taken and divided by the load-interval duration to get the input/output bps/pps. It should be noted that the accuracy of the bit rate is 64 bits per second.
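The per-interval arithmetic is simply the counter delta divided by the interval length; a minimal sketch with hypothetical names:

```python
def interval_rates(byte_delta: int, packet_delta: int, load_interval_s: int):
    """Compute (bps, pps) from interface counter deltas over one load interval."""
    bps = byte_delta * 8 / load_interval_s
    pps = packet_delta / load_interval_s
    return bps, pps

# 3750 bytes and 300 packets counted over a 30-second load interval:
print(interval_rates(3750, 300, 30))  # -> (1000.0, 10.0)
```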

If, in the same period, the in overrun counter does not increase, the frame is not dropped. If the in overruns are incrementing as well, then it is most likely a function of traffic volume hitting the firewall, and frames can be dropped. Check the packets/sec (pps) rate on the interface to see if

CTP platforms have a maximum limit of 12500 packets per second (PPS). This should be kept in mind when configuring CTP bundles. When configuring a CTP2056 to support 56 E1 SAToP bundles, the packet size should be increased to 1456 bytes.
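A quick payload-only check shows why the larger packet size keeps 56 E1 SAToP bundles under the platform limit. This sketch ignores PSN/control-word overhead, which adds bandwidth but does not change the packet rate, and the helper name is hypothetical:

```python
E1_RATE_BPS = 2_048_000  # E1 payload rate (2.048 Mbps)
CTP_MAX_PPS = 12_500     # platform-wide PPS limit from the text

def satop_pps(bundles: int, packet_payload_bytes: int) -> float:
    """Aggregate packet rate for N E1 SAToP bundles at a given payload size."""
    per_bundle_pps = E1_RATE_BPS / (packet_payload_bytes * 8)
    return bundles * per_bundle_pps

# With 1456-byte packets, each bundle runs at about 176 pps, so 56
# bundles total roughly 9846 pps -- safely under the 12500 pps limit.
total = satop_pps(56, 1456)
print(round(total))         # -> 9846
print(total < CTP_MAX_PPS)  # -> True
```

A smaller packet size would raise the per-bundle packet rate proportionally and could push the aggregate past the 12500 pps ceiling, which is why the packet size must be increased for 56 bundles.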