
What is the best architecture setup to handle multiple log sources and apply all data to specific field headers?


Hello - as you may see by my account status, I'm a complete newbie to Splunk.

I apologize for any confusion or incorrect terminology. Here are my questions as I've written them down along my journey.

What is the best architecture setup to handle multiple log sources and apply all data to specific field headers? I ask this knowing that I want to define the fields first. These fields will cover the vast majority of incoming log sources; I know some fields will be null, and that's OK.

I need this data to be under its respective field at index time (I think?), because I need the data viewable not only in Splunk Web but also defined/assigned correctly on the back end for further, separate processing.

1. Do I need multiple indexers? If so, how do I go about setting this up (step by step, please)?
2. How do I connect a multi-system Splunk architecture? A tutorial reference should be fine.
3. How do I ensure specific data is assigned to specific field headers?


1 Answer

You can ingest many different types of logs on a single indexer, and the beauty of Splunk is that you don't need to define your fields ahead of time. Only a few basic fields such as source, sourcetype, host, and time are captured at index time; all other fields can be extracted on the fly at search time.
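As a concrete illustration of search-time extraction, here is a minimal props.conf sketch. The sourcetype name `cisco:asa`, the field name `vpn_user`, and the sample log pattern are illustrative assumptions, not from this thread; adjust them to your actual data.

```ini
# props.conf -- search-time field extraction (illustrative sketch)
# Assumes events of sourcetype "cisco:asa" contain text like: User <bob>
[cisco:asa]
# Define a field named vpn_user via a named capture group
EXTRACT-vpn_user = User\s+<(?<vpn_user>[^>]+)>
```

Nothing is written to the index here: the regex runs each time a search touches events of that sourcetype, which is why you can add or change such extractions without re-ingesting data.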

Thank you for the response. I should clarify that I'm more of a newbie with administration and data handling than with using Splunk and its resources within the web interface.

I have a bit better grasp now on how the architecture will be set up. I do have a question about data ingestion, though. Can you point me in the right direction if I want to accomplish the following tasks?

I want to receive data from a Cisco ASA FW for VPN logging purposes. This data comes in with different formats depending on the VPN log.

I want to have each uniquely formatted VPN log indexed as a separate sourcetype. How is this possible and where do I start making config changes to accomplish this?
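One common way to do this (a sketch, not a definitive recipe) is to override the sourcetype at parse time on the indexer or heavy forwarder, using a props.conf/transforms.conf pair keyed on a regex that recognizes each log format. The input stanza, transform name, message ID, and resulting sourcetype below are all illustrative assumptions:

```ini
# props.conf -- applied to the incoming source (assumed syslog on udp:514)
[source::udp:514]
TRANSFORMS-vpn_sourcetypes = set_asa_anyconnect_sourcetype

# transforms.conf -- rewrite the sourcetype when the regex matches
[set_asa_anyconnect_sourcetype]
REGEX = %ASA-6-722051
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:asa:anyconnect
```

You would add one transform stanza per uniquely formatted VPN log, each with its own REGEX and FORMAT. These rules must live on the first "heavy" Splunk instance the data passes through (indexer or heavy forwarder), since sourcetype rewriting happens in the parsing pipeline.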

Furthermore, I want to ingest more VPN data from another Cisco ASA FW, but I need to tag this data so it stays separate from the first Cisco FW. I'd prefer to tag it with a custom ID rather than differentiating by the 'host' or 'source' field. If this is done through specific config files, can you point out which ones and which locations?
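If each firewall arrives on its own network input, one way to attach such an ID (a sketch; the ports, index name, and the field name `fw_id` are assumptions) is a custom indexed field via the `_meta` setting in inputs.conf on the receiving instance:

```ini
# inputs.conf -- one stanza per firewall's syslog feed (illustrative)
[udp://9514]
sourcetype = cisco:asa
index = network
# fw_id becomes an indexed field on every event from this input
_meta = fw_id::asa_site_a

[udp://9515]
sourcetype = cisco:asa
index = network
_meta = fw_id::asa_site_b
```

Events tagged this way can be searched with the indexed-field syntax `fw_id::asa_site_b`; to search it as an ordinary `fw_id=asa_site_b` field, also declare it in fields.conf with `INDEXED = true`.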
