which show traffic in and out through a peering point or network border.
The SubNetIO report updates RRD files for each of the subnets that you specify (so that
you can produce CampusIO-style graphs broken down by subnet).

The idea behind the distinct report modules is that users will be able to
write new report modules that are either classes derived from CampusIO
or altogether new ones. For instance, one may wish to write a report module
called Abuse which would send email when it detected potentially abusive things going
on, like Denial-of-Service attacks and various scans.

FlowScan is freely-available under the GPL, the GNU General Public License.

Please help me to help you. It is, unfortunately, not uncommon for one to
have questions or problems while installing FlowScan. Please do not send
email about such things to my personal email address, but instead check the
FlowScan mailing list archive, and join the FlowScan mailing list.
Information about the FlowScan mailing lists can be found at:

If you have previously installed and properly configured
FlowScan-1.005, you need only perform a subset of the steps that one would normally have
to perform for an initial installation.

This release of FlowScan uses more memory than previous releases. That is,
the flowscan process will grow to a larger size than that in
FlowScan-1.005. In my recent experience while testing this release, the flowscan process grew to approximately 128MB when I used the new experimental BGPDumpFile option to produce ``Top'' reports by ASN. This is understandable,
since flowscan carries a full Internet routing table when configured in this way. The
memory requirements are significantly lessened if you do not use the
BGPDumpFile option. The flowscan process' size is also a function of the number of active hosts in your
network.

Upgrading perl Modules
Upgrade the Cflow perl module to Cflow-1.030 or later for improved performance. Install HTML::Table if you want to produce the new ``Top Talkers'' reports. Details on how
to obtain and install these modules can be found in Software Requirements, below.

Upgrading FlowScan
Of course, when upgrading you will need to obtain the current FlowScan.
When you run configure, you should specify the same value with --prefix that you did when installing your existing FlowScan, e.g. /var/local/flows, or wherever your time-stamped raw flow files are currently being written
by cflowd.
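
For example, if your existing --prefix was /var/local/flows (substitute your own path), the configure step would look like this:

$ ./configure --prefix=/var/local/flows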

There are new TopN and ReportPrefixFormat directives for
CampusIO and SubNetIO. These directives enable the production of ``Top Talker'' reports.
Furthermore, there are new experimental BGPDumpFile and ASNFile options for CampusIO which are used to produce ``Top'' reports by Autonomous System. You will
need access to a Cisco carrying a full BGP routing table to produce such
reports. See the CampusIO configuration documentation for more info about
configuring this feature. If you have trouble with it, remember that it is
experimental, so please join the discussion in the mailing list.

Secondly, the Napster_subnets.boulder file has changed significantly since the one provided with FlowScan-1.005. If you
have FlowScan configured to measure Napster traffic, replace your old
Napster_subnets.boulder with the one from the newer distribution:

$ cp cf/Napster_subnets.boulder $PREFIX/bin/Napster_subnets.boulder

Upgrading your RRD Files

If you are upgrading, it is necessary to add two new Data Sources to
some of your existing RRD files. Before running flowscan, back up your RRD
files, e.g.:
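
(This is a minimal sketch which assumes your .rrd files live directly in your --prefix directory; adjust the paths if you keep them elsewhere.)

$ cd $PREFIX
$ mkdir RRD.backup
$ cp *.rrd RRD.backup/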

Cisco routers
If you don't have a Cisco at your border, you're probably barking up the
wrong tree with this package. Also, FlowScan currently requires that your
IOS version supports NetFlow version 5. Try this command on your router if
you are unsure:

ip flow-export version ?
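
For orientation, a minimal NetFlow version 5 export configuration looks roughly like the sketch below. The interface name, collector address, and UDP port shown are placeholders; use the interfaces, host, and port appropriate to your own setup:

interface FastEthernet0/0
 ip route-cache flow
!
ip flow-export version 5
ip flow-export destination 10.0.0.1 2055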

a GNU/Linux or Unix machine
If you have a trivial amount of traffic being exported to cflowd, such as a
T1's worth, perhaps any old machine will do.

However, if you want to process a fair amount of traffic (e.g. at ~OC-3
rates) you'll want a fast machine.

I've run FlowScan on a SPARC Ultra-30 w/256MB running Solaris 2.6, a Dell
Precision 610 (dual Pentium III, 2x450MHz) w/128MB running Debian Linux
2.1, and most recently a dual PIII Dell server, 2x600MHz, w/256MB running
Debian Linux 2.2r2. The Intel machines are definitely preferable in the
sense that flowscan processes flows in about 40% of the time that it took the SPARC. (The main flowscan script itself is currently single-threaded.)

In an early performance test of mine, using 24 hours of flows from our
peering router here at UW-Madison, here's a comparison of their average time
to process 5 minutes of flows:

SPARC - 284 sec
Intel - 111 sec

Note that it is important that flowscan not take longer to process the
flows than your network's activity and exporting Cisco routers take to
produce them. So, you want to keep the time to process 5 minutes of
flows under 300 seconds on average.

My recent testing has indicated that 600-850MHz PIII machines can usually
process 3000-4000 flows per second, if flowscan doesn't have to compete with too many other processes.

Disk Space
I recommend devoting a file-system to cflowd and FlowScan. Both require
disk space and the amount depends upon a number of things:

To find the characteristics of your environment, you'll just have to run
the patched cflowd for a little while to see what you get.

Early in this project (c. 1999), we were usually collecting about
150,000-300,000 flows from our peering router every 5 minutes. Recently, our
5-minute flow files average ~15 to 20 MB in size.

During a recent inbound Denial-of-Service attack consisting of 40-byte TCP
SYN packets with random source addresses and port numbers, I've seen a
single ``5-minute'' flow file greater than 500MB! Even on our fast machine,
that single file took hours to process.

YMMV, of course; currently a 35GB file-system allows us to preserve
gzip(1)ped flow files for about 2 weeks.

Network Interface Card
Regarding the host machine configuration, consider the amount of traffic
that may be exported from your Cisco(s) to your collector
machine if you have enabled ip route-cache flow on very many fast interfaces. With lots of exported flow data (e.g. 15-20
MB of raw flow file data every 5 minutes) and only a 10 Mb/s ethernet NIC,
I found that the host was dropping some of the incoming UDP packets, even
though the rate of incoming flows was less than 2 Mb/s. This was evidenced
by a constantly-increasing number of udpInOverflows in the
netstat -s output under Solaris. I addressed this by reconfiguring my hosts with a 100
Mb/s fast ethernet NIC or 155 Mb/s OC-3 ATM LANE interface and have not
seen that problem since. Of course, one should ensure that the requisite
bandwidth is available along the full path between the exporting
Cisco(s) and the collecting host.

The packages and perl modules required by FlowScan are numerous. Their
presence or absence will be detected by FlowScan's configure script, but you'll save yourself some frustration by getting ahead of the
game and collecting and installing them first. Below, I've attempted to
present them in a reasonable order in which to obtain, build, and install
them.

As of arts++-1-1-a5, the arts++ build appears to require GNU make 3.79
because its Makefiles use glob for header dependencies, e.g. ``*.hh''. From
my cursory look at the GNU make ChangeLog, perhaps any version >=
3.78.90 will suffice. Also there may be trouble if you don't have flex
headers installed in your ``system'' include directory, such as
``/usr/include'', even though ``configure.in'' appears to be trying to
handle this situation. Since mine were in the ``local'' include directory,
I hand-tweaked the classes/src/Makefile's ``.cc.o'' default rule to include
that directory as well.
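
For what it's worth, the tweak amounted to appending another -I flag to that rule, roughly as sketched below (the exact variable names in classes/src/Makefile may differ, and /usr/local/include is simply where my flex headers happened to live):

.cc.o:
	$(CXX) $(CXXFLAGS) -I/usr/local/include -c $< -o $@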

In my experience with building cflowd, you're most likely to have
success in a GNU development environment such as that provided with
GNU/Linux or FreeBSD.

I have not had problems building the patched cflowd-2-1-a9 or
cflowd-2-1-a6 under Debian Linux 2.2.

I've also managed to build the patched cflowd-2-1-a6 with gcc-2.95.2 and
binutils-2.9.1 on a sparc-sun-solaris2.6 machine with GNU make 3.79 and
flex-2.5.4.

As of cflowd-2-1-a6, beware that the build may pause for minutes
while as(1) uses lots of CPU and memory to build
``CflowdCisco.o''. This is apparently `normal'. Also, the build appears to
be subtly reliant on GNU ld(1), which is available in the GNU
``binutils'' package. (I was unable to build cflowd-2-1-a6 with the
sparc-sun-solaris2.6 ``/usr/ccs/bin/ld'' although earlier cflowd releases
built fine with it.)

perl 5
If you don't have this already, you're probably way over your head, but
anyway, check out the Comprehensive Perl Archive Network (CPAN):

I recommend that you install rrdtool from source, even if it is available as an optional binary package for
your operating system distribution. This is because FlowScan expects that you've
built and installed RRDTOOL something like this:

$ ./configure --enable-shared
$ make install site-perl-install

That last bit is important, since it makes the rrdtool perl modules available to all perl scripts.

I recommend that you create a user just for the purpose of running these
utilities so that all directory permissions and created file permissions
are consistent. You may find this useful especially if you have multiple
network engineers accessing the flows.

I suggest that the FlowScan --prefix directory be owned by an appropriate user and group, and that the
permissions allow write by other members of the group. Also, turn on the
set-group-id bit on the directory so that newly created files (such as the
flow files and log file) will be owned by that group as well, e.g.:
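
(In this sketch, ``flows'' is a hypothetical group name and /var/local/flows stands in for your --prefix directory; substitute your own.)

$ chgrp flows /var/local/flows
$ chmod g+ws /var/local/flows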

The current FlowScan graphing stuff likes the
80/tcp service on your machine to be called http. Try running this command:

$ perl -le "print scalar(getservbyport(80, 'tcp'))"

You can continue with the next step if this command prints http. However, if it prints some other value, such as www, then I suggest you modify your /etc/services file so that the line containing
80/tcp looks something like this:

http 80/tcp www www-http #World Wide Web HTTP

Be sure to leave the old name such as www as an ``alias'', like I've shown here. This will reduce the risk of
breaking existing applications which may refer to the service by that name.
If you decide not to modify the service name in this way, FlowScan should
still work, but you'll be on your own when it comes to producing graphs.

Lastly, in complicated environments, choosing which particular interfaces
should have ip route-cache flow enabled is somewhat difficult. For FlowScan, one usually wants it enabled
for any interface that is an ingress point for traffic that is from inside
to outside or vice-versa. You probably don't want flow-switching enabled
for interfaces that carry policy-routed traffic, such as that being
redirected transparently to a web cache. Otherwise, FlowScan could count
the same traffic twice because of multiple flows being reported for what
was essentially the same traffic making multiple passes through a border
router. E.g. user-to-webcache, webcache-to-outside world (on behalf of that
user).

By the way, in the above commands, it is OK if make says ``Nothing to
be done for `target'''; as long as make completes without an error, all is well.

Subsequently in this document, the ``prefix'' directory will be referred to
as the ``--prefix directory'' or using the environment variable
$PREFIX. FlowScan does not require or use this environment variable; it's just a
documentation convention so you know to use the directory that you passed
with --prefix.

The FlowScan Package ships with sample configuration files in the cf
sub-directory of the distribution. During initial configuration you will
copy and sometimes modify these sample files to match your network
environment and your purposes.

FlowScan looks for its configuration files in its bin directory - i.e. the directory in which the flowscan perl script and FlowScan report modules are installed. I don't really like this, but that's
the way it is for now. Forgive me.

A number of the directives have paths to directory entries as their
values. One has a choice of configuring these as either relative or
absolute paths. The sample configuration files ship with relative path
specifications to minimize the changes a new user must make. However, in
this configuration, it is imperative that flowscan be run in the --prefix directory if these relative paths are used.

Decide which FlowScan Reports to Run
The FlowScan package contains the CampusIO and SubNetIO reports. These two reports are mutually exclusive - SubNetIO does everything that CampusIO does, and more.

Initially, in flowscan.cf I strongly suggest you configure:

ReportClasses CampusIO

rather than:

ReportClasses SubNetIO

The CampusIO report class is simpler than SubNetIO, requires less configuration, and is less CPU/processing intensive. Once
you have the
CampusIO stuff working, you can always go back and configure
flowscan to use SubNetIO instead.

There is POD documentation provided with the CampusIO and
SubNetIO reports. Please use that as the definitive reference on configuration
options for those reports, e.g.:

$ cd bin
$ perldoc CampusIO

Copy and Edit CampusIO.cf
Copy the template to the bin directory. Adjust the values using the required and optional configuration
directives documented therein.
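
For example, following the same pattern as the other configuration files:

$ cp cf/CampusIO.cf $PREFIX/bin/CampusIO.cf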

The most important thing to consider configuring in CampusIO.cf is the method by which CampusIO should identify outbound flows. In order of preference, you should define NextHops, or
OutputIfIndexes, or neither. Beware that if you define neither, CampusIO will resort to
using the flow destination address to determine whether or not the flow is
outbound. This can be troublesome if you do not accurately define your
local networks (below), since flows forwarded to any non-local addresses
will be considered outbound. If possible, it's best to define the list of NextHops to which you know your outbound traffic is forwarded.
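
As a purely hypothetical illustration, a NextHops setting in CampusIO.cf might look something like the line below; the addresses are placeholders, and you should check perldoc CampusIO for the authoritative syntax:

NextHops 10.0.1.1, 10.0.1.2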

For most purposes, the default values for the rest of the CampusIO
directives should suffice. For advanced users that export from multiple
Ciscos to the same cflowd/FlowScan machine, it is also very important to
configure LocalNextHops.

The local_nets.boulder file must contain a list of the networks or subnets within your
organization. It is imperative that this file is maintained accurately
since flowscan will use this to determine whether a given flow represents
inbound traffic.

You should probably specify the networks/subnets in as terse a way as
possible. That is, if you have two adjacent subnets that can be coalesced
into one specification, do so. (This is different from the similarly
formatted our_subnets.boulder file mentioned below.)

The format of an entry is:

SUBNET=10.0.0.0/8
[TAG=value]
[...]

Technically, SUBNET is the only tag required in each record. You may find it useful to add
other tags such as DESCRIPTION for documentation purposes. Entries are separated by a line containing a
single =.
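
For instance, a local_nets.boulder file with two records might look like this (the addresses and descriptions are made up for illustration):

SUBNET=10.0.0.0/8
DESCRIPTION=main campus address space
=
SUBNET=192.0.2.0/24
DESCRIPTION=DMZ servers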

FlowScan identifies outbound flows based on the list of nexthop addresses
that you'll set up below.

Copy the template to the bin directory from which you will be running flowscan. The supplied content seems to work well as of this writing (Mar 10,
2000). No warranties. Please let me know if you have updates regarding
Napster IP address usage, protocol, and/or port usage.

The file Napster_subnets.boulder should contain a list of the networks/subnets in use by Napster, i.e. napster.com.

The our_subnets.boulder file is used by the SubNetIO report class, and therefore is only necessary if you have defined ReportClasses SubNetIO rather than ReportClasses CampusIO.

The file our_subnets.boulder should contain a list of the subnets on which you'd like to gather I/O
statistics.

You should format this file like the aforementioned
local_nets.boulder file. However, the SUBNET tags and values in this file should be listed exactly as you use them in
your network: one record for each subnet. So, if you have two subnets with
different purposes, they should have separate entries even if they are
numerically adjacent. This will enable you to report on each of those user
populations independently. For instance:
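
(The subnets and descriptions below are hypothetical; list your own subnets exactly as they are deployed.)

SUBNET=10.10.1.0/24
DESCRIPTION=Library
=
SUBNET=10.10.2.0/24
DESCRIPTION=Residence halls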

If you'd like to have FlowScan save your flow files, make a sub-directory
named saved in the directory where flowscan has been configured to look for flow files.
This has been specified with the
FlowFileGlob directive in flowscan.cf and is usually the same directory that is specified using the FLOWDIR directive in your
cflowd.conf.

If you do this, flowscan will move each flow file to that saved
sub-directory after processing it. (Otherwise it would simply remove them.)
e.g.:

$ mkdir $PREFIX/saved
$ touch $PREFIX/saved/.gzip_lock

The .gzip_lock file created by this command is used as a lock file to ensure that only one
gzip cron job runs at a time.

Be sure to set up a crontab entry as is mentioned below in Final Setup. I.e. don't complain to the author if you're saving flows and your
file-system fills up ;^).

At this point, the RRD files should have been created and will be updated as the flow
files are processed. If not, you should use the diagnostic warning and
error messages or the perl debugger (perl -d flowscan) to determine what is wrong.

Look at the above output carefully. It is imperative that the number of
seconds that Cflow::find took does not usually approach or exceed 300. If, as in the example above, your log
messages indicate that it took more than 300 seconds, FlowScan will not be
able to keep up with the flows being collected on this machine (if the
given flow file is representative). If the total of usr + sys CPU seconds
is more than 300 seconds, then this machine is not even capable of
running FlowScan fast enough, and you'll need to run it on a faster machine
(or tweak the code, rewrite it in C, or mess with process priorities using
nice(1), etc.).

Here are some hints on getting the most out of your hardware if you find
that FlowScan can process 300 seconds' worth of flows in an average of 300
CPU seconds or less, but not in 300 seconds of real time; i.e. the flowscan process is not being scheduled to run often enough because of context
switching or because it is competing for CPU with too many other
processes.

On a 2-processor Intel PIII, to keep flowscan from having to compete with other processes for CPU, I have recently had
good luck with setting the flowscan process' nice(1) value to -20.
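
For example, as root (the PID shown is hypothetical; use the PID of your running flowscan process):

$ renice -20 -p 12345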

Furthermore, I applied this experimental patch to the Linux 2.2.18pre21
kernel:

Once you feel that flowscan is working correctly, you can set it (and cflowd) to start up at system boot time. Sample rc scripts for Solaris and Linux are supplied in the rc sub-directory of this distribution. You may have to edit these scripts
depending on your ps(1) flavor and where various commands have
been installed on your system.

Also, if you're saving your flow files, you should set up crontab entries
to handle the ``old'' flows. I use one crontab entry to
gzip(1) recently processed files, and another to delete the files older than a
given number of hours. The ``right'' number of hours is a function of your
file-system size and the rate of flows being exported/collected. See the example/crontab file.
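
The following is only a rough sketch of those two entries; the shipped example/crontab is the authoritative reference, and it also makes use of the .gzip_lock lock file, which this simplified version omits. The paths and flow file naming assume the patched cflowd writing flows.* files under /var/local/flows/saved:

# compress saved flow files once an hour
0 * * * * cd /var/local/flows/saved && gzip flows.* 2>/dev/null
# remove compressed flow files more than two days old
30 * * * * find /var/local/flows/saved -name 'flows.*.gz' -mtime +2 -exec rm {} \;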

This should produce the ``Campus I/O by IP Protocol'' and ``Well Known
Services'' graphs in PNG files. GIF files may be produced using the
filetype option mentioned below.

If this command fails to produce those graphs, it is likely that some of
the requisite .rrd files are missing, i.e. they have not yet been created by FlowScan, such as http_dst.rrd. If this is the case, it is probably because you skipped the configuration
of /etc/services
in Configuring Your Host. Stop flowscan, rename your
www_*.rrd files to http_*.rrd, modify /etc/services, and restart flowscan.

Alternatively, you may copy and customize the graphs.mf Makefile to remove references to the missing or misnamed .rrd files for those targets. Also, you could produce your graphs using a
graphing tool such as RRGrapher mentioned below in Custom Graphs.

Note that the graphs.mf template Makefile has options to specify such things as the range of time,
graph height and width, and output file type. Usage:
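
For instance, run from your graphs directory, a basic invocation might be the first command below; the second assumes filetype is a make variable (as the mention of the filetype option above suggests), so check the comments in graphs.mf itself for the exact variable names:

$ make -f graphs.mf
$ make -f graphs.mf filetype=gif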

There is a new graphing feature which allows you to specify events that
should be displayed in your graphs. These events are simply a list of
points in time at which something of interest occurred.

For instance, one could create a plain text file in the graphs
directory called events.txt containing these lines:

2001/02/10 1538 added support for events to FlowScan graphs
2001/02/12 1601 allowed the events file to be named on make command line

Then to generate the graphs with those events included one might run:

$ make -f graphs.mf events=events.txt

This feature was implemented using a new script called event2vrule
that is supplied with FlowScan. This script is meant to be used as a
``wrapper'' for running rrdtool(1), similarly to how one might
run nohup(1). E.g.:

$ event2vrule -h 48 events.txt rrdtool graph -s -48h ...

That command will cause these VRULE arguments to be passed to rrdtool, at the end of the argument list:

COMMENT:\n
VRULE:981841080#ff0000:2001/02/10 1538 added support for events to FlowScan graphs
COMMENT:\n
VRULE:982015260#ff0000:2001/02/12 1601 allowed the events file to be named on make command line
COMMENT:\n

For other custom graphs, if you use the supplied graphs.mf Makefile, you can use the examples therein to see how to build ``Campus
I/O by Network'' and ``AS to AS'' graphs. The examples use UW-Madison
network numbers, the names of the networks with which we peer, and such, so it will be
non-trivial for you to customize them, but at least there's an example.

Currently, RRD files for the configured ASPairs contain a : in the file name. This is apparently a no-no with RRDTOOL since, although
it allows you to create files with these names, it doesn't let you graph
using them because of how the API uses : to separate arguments.

For the time being, if you want to graph AS information, you must manually
create symbolic links in your graphs sub-dir. i.e.
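
(The file name below is purely hypothetical; substitute the actual name of your AS-pair RRD file, and note that this sketch assumes the .rrd files live one level up from the graphs sub-directory.)

$ cd $PREFIX/graphs
$ ln -s ../1234:5678.rrd 1234_5678.rrd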