I'm trying to network-profile some embedded applications on four different devices. I'm using a managed switch to mirror the ports connected to each device to my capture PC. During a 2-hour capture, the traffic never exceeds 731 kbit/s.

I typically use a display filter to isolate the traffic for one device and export the specified packets to a new .pcapng file that is smaller and easier to work with. While trying to find the peak data rates of short traffic bursts, I noticed a discrepancy between the IO graph of the original capture file and that of the exported capture file. For each capture I added a new graph and applied the same display filter that was used to export the packets.
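As a sketch of that export step from the command line (filenames and the filter are placeholders, not the actual ones from my setup; "File > Export Specified Packets" in the Wireshark GUI produces an equivalent file):

```shell
# Write only the packets matching the display filter to a new capture file.
# original.pcapng, device1.pcapng, and the MAC are hypothetical examples.
tshark -r original.pcapng -Y "eth.addr == 00:11:22:33:44:55" -w device1.pcapng
```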

Here is an example display filter (the MAC address has been changed):
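A per-device filter of this kind would typically look something like the following (placeholder MAC address):

```
eth.addr == 00:11:22:33:44:55
```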

For one device, the difference in data rates for the same burst of traffic is 10031 bit/s. For another device, the difference was 72280 bit/s. Even more confusing is that in the capture file properties, the "Displayed" statistics from the original capture (with that device's display filter applied) exactly match the "Captured" statistics in the exported capture file. I should mention that this is all UDP traffic.

If I change the Y axis from bits/s to packets/s, these also do not match...

I think this is the result of a change in start time, which produces a different alignment of the sampling intervals. For example, in the original capture file you might have packets split between two intervals, whereas in the filtered file they could fall within the same interval.

For example, suppose you had this distribution of packets in the original capture file (here X represents where those packets fall within each interval):
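This hypothesis can be sketched numerically. The snippet below (my own illustration, not from the captures; the timestamps are made up) bins the same packet arrival times twice: once with the interval grid anchored at the original capture's start, and once anchored at the first packet that survives the filter, as happens when the filtered packets are exported to a new file. The totals agree, but the per-interval counts, and hence the graphed peak rates, differ.

```python
# Hypothetical packet timestamps (seconds) for one burst of traffic.
timestamps = [0.95, 1.05, 1.10, 1.15, 1.90, 2.05]

def bin_counts(times, interval, start):
    """Count packets per interval, with the interval grid anchored at `start`."""
    counts = {}
    for t in times:
        idx = int((t - start) // interval)
        counts[idx] = counts.get(idx, 0) + 1
    return counts

# Original capture: the capture's first packet arrives at t = 0.0,
# so the 1-second grid is anchored there.
original = bin_counts(timestamps, 1.0, 0.0)

# Exported capture: the first *remaining* packet (t = 0.95) anchors the grid.
filtered = bin_counts(timestamps, 1.0, 0.95)

print(original)  # {0: 1, 1: 4, 2: 1} -> peak of 4 packets in one interval
print(filtered)  # {0: 5, 1: 1}       -> peak of 5 packets in one interval
print(sum(original.values()) == sum(filtered.values()))  # True: totals match
```

Note that the summary statistics (total packet count, timespan) are identical in both cases; only the way packets are distributed across the graph's intervals changes.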

I have defined my sampling interval within the display filter. If what you're suggesting were true, wouldn't the "Displayed" statistics (from the original pcap) and the "Captured" statistics (from the exported pcap) differ? In my case they match exactly: the # of packets, the timespan, everything...