A Developer's Blog

Category Archives: Linux

Here is a list of my most used and most useful Kubernetes commands.

kubectl config current-context # Get the Kubernetes context you are currently operating in

kubectl get services # List all services in the namespace
kubectl get pods # Get all pods
kubectl get pods --all-namespaces # List all pods in all namespaces
kubectl get pods -o wide # List all pods in the namespace, with more details
kubectl get deployment my-dep # List a particular deployment
kubectl get pods --include-uninitialized # List all pods in the namespace, including uninitialized ones

I’ve started a new course on ethical hacking to get a better understanding of the internet, software security, personal security etc.

I’ll post a series of posts where I will write down my notes on what I’ve learned.

I’ll start today with some basic terminology:


White Hat Hacker

People who hack to help others; their activities are legal and ethical

Black Hat Hacker

Unethical and illegal activities

Grey Hat Hacker

Between White and Black hat

Footprinting

Information gathering on your target: figuring out network-related information, software-related details, or collecting information from real-world sources and people. In short, general information gathering about your chosen target

DoS (Just you)

Denial of Service – One person sends more requests than the server can handle, to make the server crash. Servers can handle only a certain number of requests, so requests that do not fit into the request pool limit are dropped. If the attack comes from a single location/machine, it should normally not succeed, because that one source can simply be blocked.

DDoS (multiple people)

Distributed Denial of Service – When multiple computers/machines perform the attack, it is much harder for the software to know whom to kick out.

The attack itself is not hard to do, but the preparation is: you need multiple machines, which usually means infecting other computers to create a bot farm (botnet) of machines.

RAT

Remote Administration Tool – Software that can be distributed to other computers, which is needed for DDoS attacks. It gives you control of a computer and allows you to hide your identity. Its operations are not visible to a normal user, and it can even hide itself so that it does not show up in the operating system's normal diagnostic tools.

FUD (anti-virus cannot detect)

Fully Undetectable – Also needed for DDoS attacks: the software is not labeled as malicious by anti-virus programs

Phishing

Placing a bait and waiting for someone to act on it. Example: you get an email, you click a link in it, and either something malicious is downloaded or you do something that compromises your data or security.

Usually these links look authentic, but once you click on them you are redirected to some other server, not the one you would expect.

An easy way to spot these is to look at the address. If it is not an HTTPS address, you are probably dealing with a fake one; HTTPS addresses are much harder to fake.

SQL Injections

Injecting SQL queries through HTTP requests, allowing SQL commands to run on a server to read or alter data that is not otherwise intended to be seen or used.
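As a minimal local sketch of the idea (the database file, table, and input value are all made up for illustration, using the sqlite3 command-line tool): an unescaped input value turns a query for one user's data into a query that matches every row.

```shell
rm -f /tmp/demo.db
# Hypothetical table with per-user secrets
sqlite3 /tmp/demo.db "CREATE TABLE users(name TEXT, secret TEXT);
INSERT INTO users VALUES('alice','a1'),('bob','b2');"

INPUT="alice' OR '1'='1"   # attacker-controlled value
# Naive string concatenation: the quote in INPUT closes the string literal
# and appends a condition that is always true, so every secret is returned.
sqlite3 /tmp/demo.db "SELECT secret FROM users WHERE name = '$INPUT';"
```

A parameterized query (or proper escaping) would treat the whole input as a literal name and return nothing.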

VPN

Virtual Private Network – Routes and encrypts traffic between you and the VPN server/provider. A way of anonymizing yourself.

There is no easy way to identify you unless the VPN provider gives up your identity.

Proxy

A less reliable way of staying anonymous. You can route your traffic through many proxies, but the more proxies you chain, the slower your connection becomes; there is simply not enough available bandwidth, and it will slow down your actions.

You can use free proxies or paid proxies, but paid ones leave a trace of who you are.

Tor

Open source – Another way to hide your identity; faster than proxies but slower than VPNs. It routes traffic through different routes, routers, and places to hide your trail.

There is a very high chance of staying hidden (99.99%); tools and techniques to trace Tor users exist, but success is highly unlikely.

VPS

Virtual Private Server – a "security layer". Example: a virtual machine inside a physical machine that serves as the database server for your web server, so that the database is not directly accessible from the outside.

This way you can be specific about who can access that virtual machine, and from where.

Key Loggers

Tools used to extract information from a machine. They need to be deployed to the target machine, where they gather keystrokes and send that information to a location for analysis.

Key loggers can extract existing information as well. You can modify a key logger's settings (what, where, and how to act), take screenshots, use a device's camera, its microphone, etc.

Terminal

An interface for controlling your operating system. GUI tools are not nearly as powerful as terminal tools.

Most hacking tools are designed for the terminal. Once you know how to do it in the terminal, you’ll know how to do it in the GUI.

Firewall

A firewall is configured through iptables commands.

The Linux firewall is open source and has a HUGE number of options. On Windows you have some of these options by default, but you will need to buy a package or application to get more.
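As an illustration (not from my notes), a minimal iptables policy could look like this; it assumes the default filter table and must be run as root:

```shell
iptables -A INPUT -i lo -j ACCEPT                                  # allow loopback traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # keep existing connections alive
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                      # allow incoming SSH
iptables -P INPUT DROP                                             # drop everything else by default
```

The order matters: rules are evaluated top to bottom, and the policy line only applies to packets no rule matched.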

Root Kit

A rootkit is a collection of computer software, typically malicious, designed to enable access to a computer or areas of its software that would not otherwise be allowed (for example, to an unauthorized user) and often masks its existence or the existence of other software.

Reverse-shells

There are thousands of reverse shells. A program infects another device, and that program opens a reverse connection from the device back to you. Therefore, you can keep controlling the external device.

Usually you first need to break through a router and reconfigure it to give you more access to the network and its machines.

For this Spark cluster installation, I used the following packages and tools:

· Oracle Java SDK 8

· Scala 2.12.1

· Apache Spark 2.1.0

Even if you encounter URLs pointing to versions other than those listed above, ignore them. I was just too lazy to update my own notes, which I gathered from several sources. So change the file names and URLs to match the versions you want and need.

Before I go on describing the process, I just want to mention that while this installation was a bit difficult, it was fun once I got it to work.

My biggest problems were that I come from the Microsoft ecosystem, where most things are pretty much behind a UI, so you don't necessarily have to understand or know that much.

Is this good or bad? Well, it depends. Sometimes having a button that does things is nice, but on the other hand it takes away from actually knowing what you are doing. For example, managing users, privileges, the file system etc. is a totally different thing on Linux, and you actually have to know what you are doing.

I found the experience with Linux very fun and enjoyable. What I struggled with most was Spark and Hadoop (not the topic of this post). It was difficult to understand which configurations were needed. I had problems getting Hadoop to do anything, and the errors were, as usual with any software, obscure, or in other words a pain in the ass.

I really had to focus and want to make it work. I felt like giving up at times.

Anyway, learning to use Linux was the most fun, so much fun that I ended up installing a Linux distro dual-booting with Windows.

In the top-right corner, right-click the network indicator (the two arrows pointing up and down). Then select "Wireless & Wired Network Settings", select the eth0 interface, and configure the network settings you desire.

Notice that I have removed everything else and left only the localhost definition. This is a strange thing: with Spark I could not get the workers to communicate properly without the localhost definition, but with Hadoop it was the other way around. I am still not sure why this is.

After looking at several tutorials on installing Hadoop, all of them mentioned that it is good practice to disable IPv6 support; apparently Hadoop does not support IPv6 properly, or at all. Since Apache Spark works on top of Hadoop, I applied the same change for Spark as well.

Open the /etc/sysctl.conf file for editing and add the following at the end of the file:
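The lines commonly used in Hadoop tutorials to disable IPv6 are:

```
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
```

Apply them with sudo sysctl -p; afterwards, cat /proc/sys/net/ipv6/conf/all/disable_ipv6 should print 1.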

This is to avoid password authentication when using Spark. If you do not do this, you might run into problems, and you will also constantly be asked to type in the username and password of the account you want to run Spark under.
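What this boils down to, as a sketch assuming a standard OpenSSH setup, is generating a key pair with an empty passphrase and authorizing it for the account that runs Spark:

```shell
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Empty passphrase (-P "") so Spark's scripts can ssh without prompting
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# Authorize the new key for logins to this same account
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

For a cluster, the public key also has to be copied to each worker node, for example with ssh-copy-id.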

This is something I had to do for the Odroid C2. Even though the Odroid has double the RAM of a Raspberry Pi, it runs Ubuntu, which takes up nearly half the memory at boot. Raspbian takes a little over 150 MB, so about 15% of the Raspberry Pi's total RAM.

I ran into problems when I wanted to use the Odroid both as the master and as a slave node for calculations. Because I wanted to use as much memory as possible on the actual Raspberry Pi nodes, I allocated 768 MB, which is fine for the Pis. On the Odroid, however, I could not allocate less than 512 MB, and allocating 512 MB caused it to swap; since no swap had been created, the OS crashed or became unresponsive.

To combat this problem I created a swap file for the Odroid, sized at double the RAM, 4 GB:

Before we begin, we will first check the swap space available on the system.

We can use the command below to see whether the system already has a swap partition:

$ free -h

We can also run the command below; if no swap partition exists, it prints no information.

$ sudo swapon -s

If the command above shows that swap is not enabled or configured on this server, we can configure it ourselves. First, check the free disk space available with the command below:

$ df -h

Creating a Swap File

Knowing the disk space available, we can go ahead and create a swap file on the filesystem. To create it we can use 'fallocate', a utility that instantly creates a file of a preallocated size. As we have little space on this server, we will create a 512 MB swap file with the command below:

$ sudo fallocate -l 512M /swapfile

And to check the swap file we will use the below command

$ ls -lh /swapfile
-rw-r--r-- 1 root root 512M Sep 6 14:22 /swapfile

Enabling the Swap File

Before enabling the swap, we need to fix the file permissions so that nobody other than root can read or write the file. The command to change the permissions is:

$ sudo chmod 600 /swapfile

Once we have changed the permissions, we can run ls -lh /swapfile again to verify them.
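One step worth making explicit (run as root): the file still has to be formatted as swap and then enabled before the system can use it.

```shell
sudo mkswap /swapfile   # write a swap signature into the file
sudo swapon /swapfile   # enable it immediately
sudo swapon -s          # verify that /swapfile is now listed
```

Without the mkswap step, swapon will refuse the file because it carries no swap signature.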

In the steps above we created the swap and can use it as temporary memory, but once the machine is rebooted the swap setting is lost. To keep using this swap file, we will make it permanent.

We will edit /etc/fstab and add the information needed to mount the swap file even after the machine reboots.

$ sudo vi /etc/fstab

Add the below line to the existing file.

/swapfile none swap sw 0 0

For better performance when using the swap memory, we can make some tweaks.
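One common tweak (my assumption of what is meant here) is lowering vm.swappiness so the kernel prefers RAM and only touches the swap file when it has to:

```shell
cat /proc/sys/vm/swappiness      # the default is usually 60
sudo sysctl vm.swappiness=10     # apply the new value immediately
# To persist across reboots, add "vm.swappiness=10" to /etc/sysctl.conf
```

A low value like 10 suits a small cluster node, where swapping out Spark's working set would hurt far more than evicting file cache.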

Installing other packages for Spark

For my installation, I needed a few other packages to make things work:

If you are having problems with access to the Spark folder, or if you are installing Hadoop and have configured namenode filesystem locations etc., use the following command to grant more privileges on the desired locations:
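The command would be something along these lines (the user, group, and path are placeholders for your own setup):

```shell
# Give your Spark/Hadoop user ownership of the installation directory
sudo chown -R myuser:mygroup /usr/local/spark
# Owner: full access; group: read and enter; others: nothing
sudo chmod -R 750 /usr/local/spark
```

Running chown -R on the directories named in your namenode configuration avoids most "permission denied" errors at startup.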