How can I insert mac with rasqlinsert?

modversion <modversion <at> gmail.com>
2010-08-01 12:00:41 GMT

Hi list:

How can I insert the MAC addresses with rasqlinsert? I've set ARGUS_GENERATE_MAC_DATA=yes to collect the mac addresses, but I cannot store smac and dmac in the mysql database via the command "/usr/local/bin/rasqlinsert -S localhost:561 -m none -d -M time 1d -w mysql://argus <at> localhost/argusData/argusTable_%Y_%m_%d".

Could anybody be kind enough to do me a favor and tell me how I can get the mac addresses into the mysql database? Thank you very much!

This will add smac and dmac to your database schema, which you can verify using the mysql command, 'desc table'.

% mysql

mysql> use argusData;

mysql> desc argusTable_2010_08_01;

Adding this and running it may cause rasqlinsert() to try to insert the fields into an existing table that doesn't have these fields in it, and it will fail, so be sure to drop any tables that cause problems.

mysql() will allow you to add attributes to existing tables, so if it's really important, you can make a legacy table usable.

Remember, the more fields you "expose" in mysql(), the more cycles it will take to insert a record, so expose fields in mysql only if you will have queries that use them. If not, use rasql() to grab the fields when you need them.

Carter

On Aug 1, 2010, at 8:00 AM, modversion wrote:

Hi list:

How can I insert the MAC addresses with rasqlinsert? I've set ARGUS_GENERATE_MAC_DATA=yes to collect the mac addresses, but I cannot store smac and dmac in the mysql database via the command "/usr/local/bin/rasqlinsert -S localhost:561 -m none -d -M time 1d -w mysql://argus <at> localhost/argusData/argusTable_%Y_%m_%d".

Could anybody be kind enough to do me a favor and tell me how I can get the mac addresses into the mysql database? Thank you very much!


In the Splunk app, inputs.conf defines which directory to monitor for argus data, props.conf defines the argus data, transforms.conf helps to extract the argus data fields, and savedsearches.conf contains all the reports.

data/ui/views/Argus.xml contains the link to the Argus report. I'm going to discuss the setup here -

For the argus part, I use argus, rastream, racluster and ra to export the data in csv format, and rsync it to the splunk server's /var/log/argus directory, which splunk will monitor. I will also release all the scripts I have to make the whole thing work. Basically the setup is

Okay, the reason I have most of the processing done on the sensor and only send the data to the splunk server is that splunk is usually under high load if it needs to process everything, and it's not so wise to put more load on the splunk server by running the argus client tools on it.

If you have more argus probes, you can consider using a radium setup. I will release all the stuff together in one shot; however, my current documentation for the setup just sucks ;)

Carter and the rest of you, if you have a better idea of how to implement the whole thing, that would be great.

Yep, still in edu, and still trying to rise to your level of geekiness.

I'd be glad to test it out. We're making increased use of argus, but searching the logs is time-consuming. Being able to search in Splunk and locate exactly which log I need to go to would be quite helpful.

We're presently storing about 15 days of logs, capturing the first 400 bytes of every packet. It's been quite useful.

I was having the same problem until I worked out how to get argus data into splunk; in fact I now have almost all the argus fields extracted and sent to splunk. I always put the suser and duser data in the last fields. My argus data is in csv form, and this is how I have done it with splunk -

I don't expect everyone to get the idea at first glance; however, if you are familiar with splunk or regex this won't be too hard.

I'm not trying to promote splunk here, but since the two can be glued together so well, I just want to be able to perform analysis on every field I can obtain from an argus record, graph them, and generate reports. On top of that, you can still keep the argus records in their own format and process them with ra-like tools when you need to do some other post-processing that splunk web doesn't offer.

Hey CS Lee,
Yes, the user buffers do need some work. So how do other systems, like csv,
deal with delimiters in the output? Is there a universal escape strategy?
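For what it's worth, the common CSV answer is RFC 4180-style quoting: any field containing the delimiter, a double quote, or a newline is wrapped in double quotes, and embedded quotes are doubled. A minimal Python sketch of that strategy (an illustration only, not a proposal for how argus output should work):

```python
import csv
import io

# A row whose fields contain the delimiter and an embedded quote,
# as user data buffers easily can.
row = ["tcp", "GET /index.html, HTTP/1.1", 'say "hi"']

# QUOTE_MINIMAL applies RFC 4180 escaping only where needed.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(row)
encoded = buf.getvalue().strip()

# Round-trip: the reader recovers the original fields exactly.
decoded = next(csv.reader(io.StringIO(encoded)))
assert decoded == row
```

The point is that the escape strategy is universal in the sense that any byte content survives the round trip, at the cost of a quoting layer the parser must understand.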

Good to see you around.

Carter

On Jul 28, 2010, at 11:23 AM, CS Lee wrote:

hi Carter,

How's life? I think I'm back and will blog more about argus and flow stuff!

Regarding raconvert, the tricky part I see would be converting the user data field that is printed, because I used to have problems when using ',' or another character as the delimiter, and ended up needing additional parsing to get the user data extracted properly from the ascii flow records.

Gentle people,
There is a new program in the clients distribution, raconvert(), with a manpage.

This program is designed to convert ASCII-based argus files to binary argus data records. The ASCII must have a single-character delimiter, such as a ',', but you can specify the delimiter using the "-c char" option.

raconvert() is not complete. Currently I'm handling maybe 50 of the 180-something fields that we can print out, but it's time to put it out there, so if you try to use it and some fields don't get converted, send me a sample ascii file, and I'll add support for your field.

The records that we generate may not be complete. It depends on how much information you provide in the ascii records. For instance, if you only have the "StartTime" field, without the "LastTime" field, the resulting binary argus record will have a duration of 0, so you want to ensure that you have enough information in the ascii output to convey all that you want.
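The duration point can be illustrated with a small sketch (the field names and fallback behavior mirror the description above; this is not raconvert's actual code):

```python
# Hypothetical sketch: with only "StartTime" in the ascii record,
# the reconstructed duration collapses to 0, as described above.
def record_duration(fields):
    """fields: dict mapping ASCII column name -> string value."""
    stime = float(fields["StartTime"])
    ltime = fields.get("LastTime")
    if ltime is None:
        return 0.0          # not enough information to recover a duration
    return float(ltime) - stime

assert record_duration({"StartTime": "1280664000.0",
                        "LastTime": "1280664012.5"}) == 12.5
assert record_duration({"StartTime": "1280664000.0"}) == 0.0
```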

Also, the name suggests that it should be able to do conversion, which may
imply that it converts more than just one thing to another, so, ......,
if you have any ideas as to what you would like to convert, just holler, and
I'll see what I can do.

I will try to add XML conversion before the summer is done.

So why this program? The primary reason is to support moving argus data around in environments that don't like binary data. You convert the records to ASCII, printing as many fields as practical, move the file to the next location, and then convert them back to binary records so you can do work with them. Some high-security places need this type of support. But you could also use it as a means to create an argus data editor, if you wanted.

Paul Schmehl, Senior Infosec Analyst
As if it wasn't already obvious, my opinions
are my own and not those of my employer.
*******************************************
"It is as useless to argue with those who have
renounced the use of reason as to administer
medication to the dead." Thomas Jefferson


IP Correlation

CS Lee <geek00l <at> gmail.com>
2010-08-03 12:47:10 GMT

hi guys,

Additionally, I have the IP data in Argus correlating with other data sources such as the "emerging-threats" lists, spambot lists and so forth. All you need to do is convert those data sources to csv format (2 columns) (I have scripts to convert them too) and dump them into the lookup directory; there is also a simple example config that I put in props.conf, iirc.

So with that setup you can correlate any IP that you obtain from argus data with external data sources, which helps you immediately spot any bad ip that appears in a list, and it is done automatically. However, if you want to run ip address matching quickly on an argus data file itself, use rafilteraddr, as it is freaking fast.
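The 2-column lookup idea can be sketched in a few lines (the file contents, addresses, and category names are made-up illustration values, not a real feed):

```python
import csv
import io

# Hypothetical 2-column lookup file (ip, category), in the shape
# described above for Splunk's lookup directory.
lookup_csv = """ip,category
198.51.100.7,spambot
203.0.113.9,emerging-threats
"""

bad_ips = {row["ip"]: row["category"]
           for row in csv.DictReader(io.StringIO(lookup_csv))}

# Source addresses pulled from argus csv output (made-up sample).
flows = ["192.0.2.1", "203.0.113.9", "198.51.100.7"]

# Correlate: keep only the flows whose address appears in the lookup.
hits = {ip: bad_ips[ip] for ip in flows if ip in bad_ips}
assert hits == {"203.0.113.9": "emerging-threats",
                "198.51.100.7": "spambot"}
```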

argus aggregation and CIDR addresses

Carter Bullard <carter <at> qosient.com>
2010-08-03 17:58:29 GMT

Gentle people,
I've added "CIDR address format" printing for IPv4 addresses and will have it for IPv6 addresses later in the week. I would like to know if anyone has an opinion as to whether it should be the default printing mode for ra* programs.

Currently we do not print aggregated IP addresses using CIDR formats. While the CIDR mask length has been available in aggregated argus data, it was not uniformly preserved in all operations. That has been resolved, and for data aggregated using argus-clients-3.0.4, the address mask length should be considered reliable.

In order to maintain legacy behavior, there are three modes for printing CIDR addresses, and they are configured using the RA_CIDR_ADDRESS_FORMAT variable in the ~/.rarc file.
1) Printing disabled, "no", where we will not report the mask. (legacy mode)
2) Printing enabled, "yes", where we will print "/masklen" when the mask is < full address bits.
3) Printing enabled, "strict", where we will always print the "/masklen".

The idea behind "yes" and "strict" is that unless you aggregate the data, all IP addresses in the flow data have full-address-bit CIDR masklens, so there is no need to print the "/32" or "/128". On the other hand, some people don't like variability in their output formats, so forcing the "/%d" to be at the end of every IP address could be a desired feature for some.
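The three modes described above can be sketched like this (an illustration of the described behavior, not the ra* implementation):

```python
# Sketch of the three RA_CIDR_ADDRESS_FORMAT behaviors:
#   "no"     -> never print the mask (legacy)
#   "yes"    -> print /masklen only when masklen < full address bits
#   "strict" -> always print /masklen
def format_addr(addr, masklen, full_bits, mode):
    if mode == "no":
        return addr
    if mode == "strict":
        return f"{addr}/{masklen}"
    # "yes": only aggregated addresses carry a visible mask
    return f"{addr}/{masklen}" if masklen < full_bits else addr

assert format_addr("10.1.0.0", 16, 32, "no") == "10.1.0.0"
assert format_addr("10.1.0.0", 16, 32, "yes") == "10.1.0.0/16"
assert format_addr("10.1.2.3", 32, 32, "yes") == "10.1.2.3"
assert format_addr("10.1.2.3", 32, 32, "strict") == "10.1.2.3/32"
```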
I will leave the default to "no" unless we can come to consensus that "yes" is appropriate.
This is important, as we start to work on IP address indexing, as we have for time indexing.
This effort will be very interesting, but before we start, being able to print the CIDR masklen is
going to be really important.
Hope all is most excellent, and if you do have an opinion, don't hold it back.
Carter
Here is a simple description of how we could do IP address indexing. I'm going down this path:

One simple strategy for IP address indexing is to have a mysql table, for each day, that has entries for the occurrences of all the /16 CIDR address aggregates. This fixed-address strategy has some advantages; primarily, it limits the database to a maximum of 64K entries per index, which is a good thing. Using our existing database tool, rasqlinsert(), we can formulate the address aggregates and poke the aggregate argus records into a single table, and get some really good information:

rasqlinsert -M rmon -m srcid smac saddr/16 -s stime dur srcid smac saddr -M cache time 1d -w mysql://user <at> host/db/ipIndex_%Y_%m_%d

At any time, we can search the table for the occurrence of the /16 network for an address, and if it's a relatively unique address, we'll be able to find it very quickly:

rasql -M time 1d -r mysql://user <at> host/db/ipIndex_%Y_%m_%d -t -30d -M sql="saddr='network in question/16'"

A more elegant solution would allow us to have different CIDR mask lengths, depending on how many addresses are represented by the aggregate and the duration of the aggregate. If using a "/8" entry keeps the range of the aggregate to, say, 30 seconds in a day, then that is a good index representation. If the "/8" covers the whole day, but a "/9" generates two ranges, a short one in the morning and a short one in the evening, then using "/9" for the index would be the right thing to do. We'll be developing IP address indexing strategies that try to minimize the number of entries, but also minimize the time range covered by the entries.
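The fixed /16 strategy can be illustrated with a small sketch (a plain dict stands in for the per-day mysql table; this is an illustration of the idea, not rasqlinsert itself):

```python
import ipaddress

# Every address maps to one of at most 64K /16 buckets, which is what
# bounds the index size per day.
def bucket16(addr):
    return str(ipaddress.ip_network(f"{addr}/16", strict=False))

# Stand-in for the daily index tables: bucket -> set of days seen.
index = {}
for day, saddr in [("2010_08_01", "192.0.2.17"),
                   ("2010_08_01", "192.0.2.200"),
                   ("2010_08_02", "198.51.100.5")]:
    index.setdefault(bucket16(saddr), set()).add(day)

# Searching for an address means searching for its /16 bucket, which
# narrows the days (and tables) worth reading in full.
assert index[bucket16("192.0.2.99")] == {"2010_08_01"}
assert bucket16("198.51.100.5") == "198.51.0.0/16"
```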

Re: which is the best front web interface for me ?

Thank you, Carter. I will try to do something with Periscope, but could you tell me where I can find the commercial web interface for argus?

If we cannot find a suitable web interface, we will build one ourselves for our company, but we cannot keep it open, because of a confidentiality agreement.

In my opinion, the visualization maps are not the best bet for us; we only want to know which systems have been hacked (botnet detection) and which systems are hacking (scanning, brute-forcing, spoofing) in our company, then block the ip with the firewall and locate the person with the smac.

All of this can be found out by analyzing the network behavior data collected with argus. It is not very difficult: just count the number of flows from the same source address to the same destination address and port.
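That counting idea can be sketched like this (the addresses and the threshold are made-up illustration values):

```python
from collections import Counter

# Tally flows per (saddr, daddr, dport) triple; a triple that repeats
# far more often than normal suggests scanning or brute-forcing.
flows = ([("10.0.0.5", "192.0.2.1", 22)] * 40 +   # repeated ssh attempts
         [("10.0.0.7", "192.0.2.2", 80)] * 3)     # ordinary browsing

counts = Counter(flows)

THRESHOLD = 20  # arbitrary cutoff for this illustration
suspects = [key for key, n in counts.items() if n >= THRESHOLD]
assert suspects == [("10.0.0.5", "192.0.2.1", 22)]
```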

For the botnet detection, we will use a black list and a white list to make it better.

Radium to multiple Argi on the same host

Phillip Deneault <deneault <at> WPI.EDU>
2010-08-05 19:37:13 GMT

In situations where multiple Argi are running on the same sensor, which
is being collected by a Radium instance on a server, is there a good way
to either designate the destination directories and/or set the $srcid in
such a way as to allow radium to separate the flows on its own?
I can hard code an integer ID number into the monitor id of each Argi,
but then I need to keep some external list. I don't think using the IP
or hostname will work since the directory structure will probably be
identical for the two without a further index of some kind.
I tried setting an arbitrary string... just on the off-chance it might
work, but was unsuccessful.
Thanks,
Phil