
As you can see, there is a first block with meta information, followed by an endless block of metrics. I would like to extract the meta information from the header of each file and add it as fields to every record. The last line in the sample is the first data point.

I could not find this in the documentation, but I imagine it is possible.

3 Answers

I'm not an expert in Splunk, but here is an example that could help you: in the REGULAR EXPRESSION part, type "....(?...).*", and in the TEST STRING part, type your sample text; it will extract the word you want. It could be a good exercise for you. For this example, you can run this command in the Splunk search bar: rex field=_raw "....(?...).*"

First I tried something with multivalued fields, but that didn't work out well, so I ended up with this sorry excuse for a hacky workaround: you do a lookup on the file, remove everything except the actual line you are interested in, and rex the contents out of it.

You can then think about how to get these fields onto every event, maybe by doing a subsearch and then some magic with eval or eventstats. That would require an idea I can't come up with at the moment, though. Perhaps it's easier to export the results of the above search with outputlookup somehow, and then do regular lookups on that exported data during the actual search. However, this looks like it would more sensibly be done with some script on the filesystem rather than with Splunk, as you could simply extract lines 2 and 3... unfortunately, I don't know how to do that off the top of my head either.

Since the data is usually not that big, that might well be a way. If one is doing some steps manually anyway, one might as well use a script to separate the second and third line into a lookup and strip them from the .csv. I was hoping for a pure Splunk way, as this seems to me to be a relatively common case. Still, this seems to be a good workaround. Alternatively, if you copy the .csv into a folder where it can be indexed, one could also set some tags manually. If the .csv file were in /opt/splunk/var/run/splunk, one could use inputcsv.
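The script idea mentioned above could look like the following minimal sketch. It assumes the file layout described in this thread (a title line, meta keys on line 2, meta values on line 3, the column header on line 4, data from line 5 onward); the function and file names are hypothetical, not from the original post. It writes lines 2 and 3 out as a small lookup CSV and writes the remaining lines to a cleaned file whose first line is the real column header, so Splunk can ingest it as a normal CSV.

```python
import csv
from pathlib import Path

def split_export(src: Path, lookup_out: Path, data_out: Path) -> None:
    """Split a metrics export into a meta lookup and a clean data CSV.

    Assumed layout (hypothetical, matching the thread's description):
      line 1: title, line 2: meta keys, line 3: meta values,
      line 4: column header, line 5+: data points.
    """
    lines = src.read_text().splitlines()
    meta_keys = next(csv.reader([lines[1]]))   # line 2: meta field names
    meta_vals = next(csv.reader([lines[2]]))   # line 3: meta field values

    # One-row lookup file holding the extracted meta fields.
    with lookup_out.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(meta_keys)
        writer.writerow(meta_vals)

    # Cleaned data file: header (original line 4) onward.
    data_out.write_text("\n".join(lines[3:]) + "\n")
```

The lookup file could then be placed where Splunk picks up lookups for your app and joined back onto the indexed events with the lookup command, which gets you the "meta fields on every record" effect without subsearch tricks.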

Yes, I use the "default" csv input as tabular data and it works fine for the payload, but alas the (also tabular) header contains the nice stuff... Naturally the header lines get eliminated by defining the header line as the 4th line (unless I am mistaken).
