A little post explaining how I managed to deploy a Splunk forwarder on a Raspberry Pi 3.

First, install Splunk Enterprise on your laptop.
Then configure it to receive data on port 9997.
In the upper right, click the “Settings” dropdown. Under Data, click Forwarding and receiving; you will be taken to the configuration page where you can set Splunk to listen for data from your Pi.

Click “Configure receiving”, and you will be taken to the receive data configuration page. Assuming this is a brand new installation of Splunk, you will have no configurations. Click “New” and you will be taken to the new configuration wizard. For now, we will just add a new listener on port 9997 and click Save.
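If you prefer the command line, the same receiver can be enabled with Splunk's CLI instead of the web UI. A minimal sketch, run on the laptop; the admin credentials are placeholders:

```shell
# Run on the laptop (the indexer): enable listening on TCP/9997
cd "$SPLUNK_HOME/bin"
./splunk enable listen 9997 -auth admin:changeme
```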

Then install the Universal Forwarder on your RPi:

Download the Universal Forwarder from http://apps.Splunk.com/app/1611 to your Pi

You’ll find some help on http://docs.splunk.com/Documentation/Splunk/6.0/Forwarding/Deployanixdfmanually

but it's not necessary:

just download the .tar file and extract it with tar -xvf <file>.tar

One important thing to know when installing the Universal Forwarder on *nix is that the default install does NOT autorun on boot.
You can set it to autostart by running the following as root: $SPLUNK_HOME/bin/splunk enable boot-start

To start Splunk on your forwarder, navigate to $SPLUNK_HOME/bin/ and run ./splunk start. You’ll see the standard output for startup.
At the next prompt, run ./splunk version, and you should see the version output for ARM Linux.
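Starting the forwarder isn't enough by itself: you still need to point it at the receiver on your laptop and tell it what to monitor. Something like the following should work (the indexer IP and the log path are placeholders for your own values):

```shell
# Run on the Pi, from $SPLUNK_HOME/bin
./splunk add forward-server 192.168.1.10:9997 -auth admin:changeme
./splunk add monitor /var/log/syslog

# Verify the indexer is configured
./splunk list forward-server
```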

Our container is named webstack_nginx_1; port TCP/8080 on our host is forwarded to port TCP/80 in the container (the default port for NGINX), and the container's volume directory /usr/share/nginx/html is mounted read-only from the host directory $HOME/data/www.
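As a sketch, the docker run invocation behind that description would look roughly like this (with docker-compose the name webstack_nginx_1 is generated automatically; with plain docker run we set it by hand):

```shell
docker run -d --name webstack_nginx_1 \
  -p 8080:80 \
  -v "$HOME/data/www:/usr/share/nginx/html:ro" \
  nginx
```

The :ro suffix on the -v flag is what makes the mount read-only inside the container.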

Track Your Work Automatically
Your Waffle board shows your GitHub Issues and Pull Requests in real time. Never wonder if an Issue is still in progress or not. Waffle listens to the actions in your workflow to know when work is finished and updates your status automatically.

Testing your open source project is 10000% free

Otto is the single solution to develop and deploy any application, with first class support for microservices.

Otto automatically builds development environments without any configuration; it can detect your project type and has built-in knowledge of industry-standard tools to set up a development environment that is ready to go. When you’re ready to deploy, Otto builds and manages an infrastructure, sets up servers, and builds and deploys the application.

With the growing trend of microservices, Otto knows how to install and configure service dependencies for development and deployment. It automatically exposes these dependencies via DNS for your application to consume.
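In practice that workflow boils down to a handful of subcommands. A rough sketch, assuming otto is installed and you are in the project directory:

```shell
otto compile   # detect the project type and generate Otto's metadata
otto dev       # build a local development environment
otto infra     # create the target infrastructure
otto build     # build a deployable artifact of the application
otto deploy    # deploy the artifact to that infrastructure
```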

cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.

cAdvisor has native support for Docker containers and should support just about any other container type out of the box. We strive for support across the board, so feel free to open an issue if that is not the case. cAdvisor’s container abstraction is based on lmctfy’s, so containers are inherently nested hierarchically.

To quickly try out cAdvisor on your machine with Docker, we have a Docker image that includes everything you need to get started. You can run a single cAdvisor instance to monitor the whole machine. Simply run:
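The quick-start command from the cAdvisor README looks roughly like this (the image tag may have changed since this was written):

```shell
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest
```

The web UI should then be reachable at http://localhost:8080.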