On Fri, Jul 28, 2006 at 10:04:24AM +1000, Tim Eden (te) wrote:
> Hi Peter,
>
> Thanks for the reply, I've read the FAQ on the argus page and also this
> thread which discusses argus performance:
>
> http://blog.gmane.org/gmane.network.argus/month=20040401
>
> But still can't seem to find the info I'm after on performance. Our
> primary link to the Internet peaks at just over 100 Mbit/sec in plus 50
> Mbit/sec out and 15,000 packets/sec in both directions. Averages for a
> 24 hour period are around 35 Mbit/sec in, 20 Mbit/sec out and 6,000
> packets/sec. Do you know what sort of machine we would need to handle
> this amount of traffic? Is it safe to assume that if the machine has a
> disk subsystem that can handle the write speed equivalent to the amount
> of traffic it sees (i.e. 150 Mbit/sec or 18.75 MByte/sec) that it will
> be able to handle the load? What sort of CPU and memory requirements
> does it have?
>
> Cheers,
> Tim
That about matches our link speeds. My current sensor is a 4 or so
year old dual 1.2 GHz Athlon box with three 3c905B cards and two SysKonnect
fibre gig cards, which feeds the data over a crossover cable on the third
3c905B card to a 600 MHz P3 box that then archives it (archiving to disk on
the sensor box tends to cause packet loss even at my speeds). The same box
running Linux with the PF_RING code in it successfully kept up with a
jumbo-frame gig link at 995 megabits per second. With special hardware (DAG
cards) it has been seen (not by me :-)) to do half or better of an OC-192,
so with sufficient money performance isn't a problem. Disk I/O isn't an
issue; by the time the write to disk takes place you are a couple of orders
of magnitude off line rate.
The P3 has an 80 gig IDE disk in it and does fine. This 7+ gig tcpdump file
(128-byte snaplen, I think):
7579131905 2006-07-13 13:44 178.2.tcp
reduces to around 100 megs after the argus daemon processes it, so disk I/O
(as long as it isn't on the sensor) isn't much of an issue:
108905604 2006-07-24 20:23 r178.2.argus
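As a quick back-of-the-envelope check, the two file sizes above work out to roughly a 70:1 reduction:

```python
# Back-of-the-envelope check of the pcap -> argus size reduction,
# using the two file sizes listed above.
pcap_bytes = 7_579_131_905   # 178.2.tcp (raw tcpdump capture)
argus_bytes = 108_905_604    # r178.2.argus (after argus processing)

ratio = pcap_bytes / argus_bytes
percent = argus_bytes / pcap_bytes * 100

print(f"reduction: {ratio:.0f}:1 ({percent:.1f}% of the raw capture)")
# -> reduction: 70:1 (1.4% of the raw capture)
```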
The sensor is way overpowered; when I bought it I was expecting traffic
on the C4 link (which was new then), which in practice is mostly
occurring on lightpaths instead.
From the sensor machine (512 megs of memory):
6:38PM up 696 days, 2:49, 1 user, load averages: 0.09, 0.11, 0.09
last pid: 70973; load averages: 0.05, 0.10, 0.08 up 696+02:50:07 18:38:30
25 processes: 1 running, 24 sleeping
CPU states: 2.2% user, 0.0% nice, 1.1% system, 4.5% interrupt, 92.1% idle
Mem: 391M Active, 31M Inact, 54M Wired, 24M Cache, 61M Buf, 1000K Free
Swap: 1007M Total, 195M Used, 812M Free, 19% Inuse
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND
621 root 2 0 442M 337M select 1 236.3H 3.42% 3.42% argus_bpf
70973 vanepp 28 0 1904K 1036K CPU1 1 0:00 1.32% 0.29% top
627 root 2 0 2528K 412K select 0 59.7H 0.05% 0.05% argus_bpf
73170 root 2 0 130M 51136K select 1 115.0H 0.00% 0.00% argus_bpf
622 root 2 0 2872K 892K select 0 48.5H 0.00% 0.00% argus_bpf
73171 root 2 0 2868K 340K select 0 25.4H 0.00% 0.00% argus_bpf
84078 root 2 0 2868K 248K select 1 17.7H 0.00% 0.00% argus_bpf
97 root 2 0 3056K 692K select 0 36:21 0.00% 0.00% sendmail
92 root 10 0 1028K
So start with whatever cheap PC is lying around to see if you like it
and then upgrade if you need to :-).
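The sensor/collector split described above (argus capturing on one box, archiving on a second machine over the crossover link) can be sketched like this. The interface name, port number, and archive path are assumptions for illustration, and rasplit comes from the argus-clients package:

```shell
# On the sensor: run argus as a daemon on the capture interface and
# serve flow records over the network instead of writing to local disk.
# (eth0 and port 561 are assumptions; adjust for your setup.)
argus -d -i eth0 -P 561

# On the collector (reached over the crossover link): pull records from
# the sensor and archive them into hourly files.
rasplit -S sensor:561 -M time 1h \
        -w '/data/argus/com_argus.%Y.%m.%d.%H.00.00'
```

Keeping the disk writes on the collector is what avoids the packet loss mentioned above.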
This set is 2 hours of the commodity link, probably about 50 megabits
per second at the peak this morning:
-rw-r--r-- 1 argus argus 155775538 Jul 27 12:06 com_argus.2006.07.27.11.00.00.0.gz
-rw-r--r-- 1 argus argus 150585105 Jul 27 13:06 com_argus.2006.07.27.12.00.00.0.gz
And this is from the dual gig links onto CA*net4, which see much less
traffic, although they have clear-channel gig rather than the
traffic-shaped 100 Mbit on commodity:
-rw-r--r-- 1 argus argus 19819348 Jul 27 12:01 c4_argus.2006.07.27.11.00.00.0.gz
-rw-r--r-- 1 argus argus 18227184 Jul 27 13:01 c4_argus.2006.07.27.12.00.00.0.gz
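For a sense of how far the archive write rate sits below line rate, here is the arithmetic on the first hourly commodity file above (a sketch; the 50 Mbit/s figure is the peak quoted earlier, and the files are gzipped, so the uncompressed record rate is somewhat higher but still far below the link):

```python
# Effective archive write rate implied by one hourly commodity-link file
# (155775538 bytes per hour) versus the ~50 Mbit/s peak quoted above.
archive_bytes_per_hour = 155_775_538
archive_mbit_s = archive_bytes_per_hour * 8 / 3600 / 1e6
link_mbit_s = 50.0

print(f"archive rate: {archive_mbit_s:.2f} Mbit/s "
      f"(~{link_mbit_s / archive_mbit_s:.0f}x below line rate)")
# -> archive rate: 0.35 Mbit/s (~144x below line rate)
```

That gap is the "couple of orders of magnitude off line rate" mentioned earlier.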
I just managed to score a pair of IBM p510 dual-CPU, dual-core POWER5
machines on a two-for-one deal (argus likes whichever endian Power is better
than Intel :-)), each with 4 gigs of RAM. With DAG cards I expect I could
keep up with most of an OC-192 if I needed to. Here are a couple more
references that may be of interest:
http://www.usenix.org/publications/login/2001-11/pdfs/epp.pdf
http://www.malmedal.net/Malmedal_Master_Thesis.pdf
http://www.internet2.edu/presentations/jtvancouver/20050720-Argus-VanEpp.pdf
Peter Van Epp / Operations and Technical Support
Simon Fraser University, Burnaby, B.C. Canada