Tuesday, 5 November 2013

For several years now the Squid proxy has been able to act as a transparent proxy for HTTP and also HTTPS. As I was curious how this works and how hard it is to set up, I installed and configured it myself.

First I installed a fresh virtual machine with Debian 7.2. In Debian you can install either Squid 2.7 or Squid 3.1 via apt-get (apt-get install squid or apt-get install squid3). Unfortunately, a transparent proxy that also supports all HTTPS features requires at least version 3.2, so I downloaded the latest sources (version 3.3.10) directly from squid-cache.org. Before building, the following packages should be installed on Debian, otherwise errors will pop up during configure or make:

# apt-get install build-essential
# apt-get install libssl-dev

After unpacking the Squid sources it is important to use a configure statement that activates SSL, because it is disabled by default:
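A minimal sketch of such a build, assuming Squid 3.3.x with its --enable-ssl and --enable-ssl-crtd options (the crtd helper is needed later for generating host certificates on the fly):

# ./configure --prefix=/usr/local/squid --enable-ssl --enable-ssl-crtd
# make
# make install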

Now Squid is installed in /usr/local/squid. As the next step, the user squid should be created and the log directory assigned to that user:

# useradd squid
# chown -R squid:squid /usr/local/squid/var/logs/

The next steps I've copied from the Squid documentation (2):

Afterwards you must create the swap directories. Do this by running Squid with the -z option:

# /usr/local/squid/sbin/squid -z

Once the creation of the cache directories completes, you can start Squid and try it out. Probably the best thing to do is run it from your terminal and watch the debugging output. Use this command:

# /usr/local/squid/sbin/squid -NCd1

If everything is working okay, you will see the line:

Ready to serve requests.

If you want to run squid in the background, as a daemon process, just leave off all options:

# /usr/local/squid/sbin/squid

Now you should have Squid running on port 3128. But we still do not support HTTPS requests and the proxy is still not transparent. The next steps are modifying squid.conf and putting in some iptables rules. But first we need to create our own CA (Certificate Authority):
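A minimal sketch with openssl (key size, validity and path are assumptions; the resulting myCA.pem contains both the private key and the certificate):

# mkdir -p /usr/local/squid/ssl_cert
# cd /usr/local/squid/ssl_cert
# openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes -x509 -keyout myCA.pem -out myCA.pem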

This PEM file can now be imported into the certificate store of your browser. Then you will not get any certificate errors later when surfing HTTPS sites via our transparent Squid.
Next we need to replace the line "http_port 3128" with the following lines in /usr/local/squid/etc/squid.conf:
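A sketch of what these lines can look like for Squid 3.3 with SslBump, assuming the CA created above and the two ports that show up later in this post (3128 for HTTP, 3127 for HTTPS):

http_port 3128 intercept
https_port 3127 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/ssl_cert/myCA.pem
ssl_bump server-first all
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 4MB

The certificate cache referenced by ssl_crtd has to be initialized once and handed to the squid user:

# /usr/local/squid/libexec/ssl_crtd -c -s /usr/local/squid/var/lib/ssl_db
# chown -R squid:squid /usr/local/squid/var/lib/ssl_db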

To fully work as a transparent HTTPS proxy, the clients in your network now need the IP of this proxy as their gateway address, and the PEM certificate needs to be imported into the browsers of the clients. On the proxy itself, IP forwarding must be enabled and the client traffic has to be redirected to Squid's ports via iptables.
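A sketch of such rules, assuming eth0 is the interface facing the clients and the ports from the squid.conf above:

# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3127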

Now you can start Squid by executing:

# /usr/local/squid/sbin/squid

Debugging:

If you have any problems you should check whether Squid is running and listening on its ports. You can do this by using netstat:

# netstat -tulpen

You should then see ports 3128 and 3127. If not, execute "killall squid" several times and restart Squid in debugging mode with

# /usr/local/squid/sbin/squid -NCd9

You can also have a look at the access.log while browsing, or use tcpdump, to see whether the packets really arrive at your proxy.
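For example (the interface name is an assumption):

# tail -f /usr/local/squid/var/logs/access.log
# tcpdump -i eth0 -n "tcp port 3128 or tcp port 3127"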

Hint:

This was just a quick'n'dirty how-to on how a transparent proxy supporting HTTPS can be built. This setup is for lab environments, to get to know Squid and its capabilities, and not for productive use. For example, your private key is contained in the PEM certificate, which should be separated from the certificate you are deploying to your browsers.

Sunday, 20 October 2013

Hey there,

I've mentioned in one of my last posts that it is possible to forward X via SSH. In my case I'm connecting from my Mac OS X client to my Raspberry Pi running Kali Linux. I'm using the X forwarding feature of SSH to start tools that need X on my Raspberry Pi, while the window pops up in Mac OS X, as long as X11 is started on my Mac. If this was too confusing, you can just read this, and I think you will get it ;-)

I've just got one problem when doing this: when I log into Kali Linux I'm using an unprivileged account; let's say the account name is alice. The problem is that some tools need root privileges, like Wireshark (of course you can also run tcpdump, but Wireshark is just an example). If I switch to the root account via su, X forwarding for the application I want to start no longer works:

I'm getting this error because when the SSH connection is initiated, a file called .Xauthority is created in the home directory of alice. This file contains a session cookie, the so-called magic cookie. When I now want to start the application as root, the content of this file is not available to the root account, so I have to copy the .Xauthority file to the home folder of the root account:

# su -
# cp /home/alice/.Xauthority /root/

Then the magic cookie will also be available to the root account and Wireshark can be started. If it is still not working, you should check the environment variable DISPLAY: the DISPLAY variable of root needs to have the same value as that of alice.
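An alternative sketch that copies only the cookie instead of the whole file, assuming DISPLAY is already set correctly in the root shell:

# xauth -f /home/alice/.Xauthority extract - "$DISPLAY" | xauth merge -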

To automate this task, I've created the file .bash_profile in root's home directory:

# touch /root/.bash_profile
# vim /root/.bash_profile

and added the following content:

cp /home/alice/.Xauthority /root/

Now, every time I change to the root account, the .Xauthority file is copied into root's home folder and the X forwarding feature keeps working.

If you have better/other solutions for this problem, feel free to leave a comment.
Cheers.

Sunday, 13 October 2013

Hi there,

With tmux you can make your life a little easier if you have to work on the command line or manage one or more servers. So here is what I did:

When I connect to one of my servers via SSH, I always do this via my SSH key. Here you can find a detailed guide on how to set up an SSH connection using a key and a passphrase for the key. With this kind of authentication you just have to remember one password (the one for your private key) and you can log in to any server you have distributed your public key to. So you can easily connect to your server(s) without creating yet another password for your user on yet another server.

But after login you still have just one shell available, and sometimes you need more shells without wanting to log in again for each one. That's the moment when you should start tmux; a few of the basic commands are sketched below.
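A minimal sketch of a tmux session, using the default prefix Ctrl-b (the session name is a placeholder):

tmux new -s admin
tmux attach -t admin

The first command starts a new session named admin; after detaching with Ctrl-b d, the second one re-attaches to it. Inside a session, Ctrl-b c opens a new window, Ctrl-b n switches to the next one, and Ctrl-b % or Ctrl-b " splits the current window into panes.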

As there are already some good tutorials and explanations, I don't want to write my own here, so just visit here, here, here (German) or here as a starting point.

There is also a book available about tmux. I didn't read it, but it may be useful for someone who wants to dive deeper into tmux.

Saturday, 12 October 2013

The main purpose for me using Dropbox was always to store documents like PDFs (books, whitepapers etc.) and to read them on my iPad. It was very convenient and I didn't have to worry about backups, as the files were on my mobile devices, on my laptop and in my Dropbox, and it was also very convenient to share files with others.

Here is the configuration that I'm using now instead of Dropbox:

iOS: I'm using an app called "Good Reader" on my iPad to read all kinds of documents, and Good Reader also provides an interface to connect to your Dropbox. Every app that can talk to a WebDAV server can connect to your OwnCloud. In Good Reader I just needed to add a new WebDAV server, insert the URL according to the OwnCloud manual (e.g. https://example.com/owncloud/) and put in my credentials; afterwards you can sync all data with Good Reader. You can sync data that is already available in your OwnCloud with Good Reader or upload files to OwnCloud via Good Reader. For me it now works as conveniently as Dropbox.

Mac OS X: I wanted to use the Mac OS X Finder for connecting to OwnCloud, as described here. Unfortunately it does not work as expected. I'm able to connect to my server via WebDAV and I can navigate through the directories, but when I want to create a folder or upload a file, it takes minutes and then the operation does not succeed. I couldn't find the cause, so I switched to Cyberduck. With Cyberduck I'm not having any problems and I get good performance.

Windows: In Windows it was no problem to map the WebDAV share to a drive letter. Maybe you need to tweak the registry, but I didn't have to on my Windows 7 Professional laptop.
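A sketch of that mapping from the Windows command line (drive letter, URL path and user name are assumptions; the OwnCloud manual lists the exact WebDAV endpoint for your version):

net use Z: https://example.com/owncloud/remote.php/webdav /user:alice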

With this configuration I can now access all my files via iOS, Mac OS X and Windows. But I need to be online to access the files, otherwise they are not available. To access your files when you're offline, you can use the sync clients by OwnCloud. On Windows it worked without any errors, but on Mac OS X I always got the following error when I wanted to connect to my server via HTTPS:

Wednesday, 18 September 2013

Be your own cloud provider and kick out Google Calendar, Dropbox and co. - Part 1 Calendar

Hey there,

I want to make a little experiment: to get as much data as possible into my own cloud and not use services like Google Calendar or iCloud. Especially because everything Edward Snowden disclosed confirmed all our paranoid thoughts about a big-brother scenario and total surveillance, I want to try to be the master of my data as much as possible now. And of course I'm curious what can be done with services like OwnCloud.

What I want is to be my own cloud provider, using my own root server so that all my devices (laptop, smartphone and tablet) can use this server to sync their data. I have used cloud services like Google Calendar or Dropbox, but I never trusted them and always felt uncomfortable; that's why I didn't use cloud services, e.g. for syncing my contacts. I've always synced my contacts directly from my laptop to my other devices.

My goal is to get as much data as possible onto my own server, so it's under my control and not stored on some server somewhere in the US that is monitored by some agency. Of course this server will be a single point of failure, and if it gets hacked all my data will be disclosed or compromised, but hey, at least I'm responsible for my data now.

The first thing I did was install OwnCloud on my Debian server; see the link here for further installation instructions. Afterwards you can navigate to your web server by appending /owncloud to your URL, e.g. https://www.dummy.org/owncloud, for further configuration.

I wanted to use MySQL as the database for OwnCloud, as it is already running on my server:

1. Connect to MySQL and create a database for OwnCloud:

root@kali # mysql -u root -p
mysql> create database owncloud;

2. Create a user for the new database owncloud and grant all privileges on it to that user:
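A sketch of the statements, assuming the user is called owncloud, connects from localhost, and the password is a placeholder:

mysql> CREATE USER 'owncloud'@'localhost' IDENTIFIED BY 'secret';
mysql> GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost';
mysql> FLUSH PRIVILEGES;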

3. Now you can go to https://www.dummy.org/owncloud again and type in the name of the database you want to use for OwnCloud, along with the user and password for it. An administrator user for the web interface will also be created.

Afterwards you can finish the installation and your OwnCloud is ready. You should also use SSL for your OwnCloud, so that your communication channel is encrypted. If you don't use SSL yet and don't want to spend money on an SSL certificate, you should consider creating a server certificate at CAcert. Don't forget to import the root certificate of CAcert into your browser and into the devices that will use OwnCloud, so you have a trusted connection to your server.

So what can you do now with OwnCloud? After logging into your OwnCloud you have the opportunity to share a calendar, contacts, files, pictures and music.

I just wanted to use the calendar service for now. To synchronize the calendar with your iOS device, just follow the manual at owncloud.org. You can also synchronize it with iCal on OS X and with Lightning in Thunderbird. In Lightning you need the CalDAV link that points directly to your calendar. In OwnCloud 5 you can find that link by navigating to the calendar, clicking on the settings symbol in the right corner, and then clicking on the little globe symbol in the row of your calendar; the CalDAV link will then appear. In Lightning you just need to create a new calendar, choose a network calendar, select CalDAV as the format and paste the URL into the address field. Then fill in the credentials in the login dialog that pops up, and you have the OwnCloud calendar in Thunderbird Lightning as well.

You can also sync the calendar with Android devices, but you need a third-party app like Card-Dav Sync. As I've got no Android device I could not test it, so if there are better apps, or if it is supported by the OS by now, feel free to leave a comment.

For me this setup is working fine now. I'm using it in iCal on OS X and on my iPhone and iPad, and another Windows laptop has access to the calendar via Thunderbird Lightning, all via SSL. The first step towards your own secure datastore is done.

For backup purposes, here is a little hint about what you should back up to another machine in order to restore your data in OwnCloud if the server crashes.

Cheers.

Saturday, 31 August 2013

Hey there,

Just today I found a useful Linux command called "mtr". OK, this tool has been available since the late 90s of the last century, but it was new to me. It is an enhanced traceroute and much quicker, as it combines traceroute with ping, and you can gather much more information with mtr than with traceroute. As I also install some more useful Unix commands via apt-get on my Kali Linux for Raspberry Pi, here is a short overview of them (also as a reminder for me), with an install one-liner after the list:

- mtr (as explained above, much more powerful than traceroute)
- htop (nicer view than the normal top)
- dstat (nice view of resource consumption with timestamps, e.g. dstat -tclmgry)
- tree (a shorter and much more powerful version of "find . -type d")
- links (if you need a browser in the shell; it is no fun to surf the web in a CLI, but sometimes it can be useful)
- bc (little calculator for the shell)
- colordiff (you can guess it by the name, it enhances diff by adding color)
- tmux (alternative to screen)
- vim (no, I don't use emacs ;-)
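A sketch of installing them all in one go (package names as found in the Debian/Kali repositories):

# apt-get install mtr htop dstat tree links bc colordiff tmux vim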

A really great tool is tmux, which makes your life in the shell much easier. You should read the OpenBSD FAQ on tmux for a quick'n'dirty introduction.

If you have any commands that are also useful for you regarding pentesting or to work more efficiently just leave a comment.

Another very useful SSH-related tool for Mac OS X is csshX. You can install it easily via Homebrew on your Mac, and you can manage several SSH sessions at once, with a master window that sends all input to every SSH session. Pretty neat.
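For example (host and user names are placeholders):

brew install csshx
csshX admin@server1 admin@server2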

Wednesday, 28 August 2013

Hey everybody,

I've had a Raspberry Pi for one year now. At the beginning I was just playing around with it as a media center, but then it was lying around and I didn't use it for several months.

This had to change, so I ordered an HDMI-to-DVI cable from Amazon, as I wanted to use the Pi on my monitor, which only has DVI and no HDMI. I also ordered a 16 GB SanDisk Class 10 Ultra SDHC memory card. You can find a detailed overview of memory cards that work with the Raspberry Pi here.

Here you can find a list of several distributions available for the Raspberry Pi, together with detailed general instructions for writing an image to a memory card on Linux, Windows and Mac OS.

There are some Raspberry Pi distributions available that can be used for pentesting:
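On Linux, writing such an image to the card comes down to something like this sketch (the Kali image file name is an assumption):

dd if=kali-linux-rpi.img of=/dev/sdb bs=4M
sync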

Of course you need to change /dev/sdb to the actual device you want to write the image to.

If you write the Kali image to the memory card on a Windows system, you can use Win32 Disk Imager.

After installation just plug the memory card into your Raspberry Pi and boot up Kali Linux. After logging in with user root and password toor, you should reset the root password and start the SSH service. The basics for Kali can be found here.
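A sketch of those two steps on the Kali console:

passwd
service ssh start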

If you now connect to your Raspberry Pi via SSH and ask yourself "How can I start tools that need an X server?", just do the following on your Linux / Mac OS X client:

ssh -X <username>@<IP-of-Raspberry-Pi>

After you have connected, you can start, for example, Wireshark; it will run on your Raspberry Pi, but the window will pop up on your client. So you don't need any monitor or keyboard on the Pi, you can do everything remotely.

If you are using Windows, you can do the same trick. You just need to install an X server on your Windows machine, like Xming, and connect via PuTTY with X11 forwarding enabled.

To automatically start ssh during the boot process, just execute the following command:

update-rc.d ssh enable

Now you have a simple little pentesting gadget that you can use either to support you during onsite penetration tests, or as an intruder showcase to scare your management/customer by showing how easily an attacker could hide such a gadget in the suspended ceiling of the office and eavesdrop on the network.

Monday, 19 August 2013

Hi there,

I had been registered at Facebook since 2009. Now I've killed my account, for several reasons:

- Since I registered, the spam and ads have been increasing, and now Facebook wants its users to watch ad videos in their timeline. So Facebook is just an advertising platform.
- I'm registered at ADN and like it much more than Twitter or Facebook. I have to pay for it, but that's totally worth it as there are no ads.
- I don't want Facebook to track me and my behavior, and Facebook has no real value to me anymore when compared to all the privacy issues.
- Instead of clicking dumb Like buttons on the things people talk about, I want to talk to a few people in real life or have a chit-chat via phone without any distractions. I couldn't even concentrate on a conversation when chatting via Facebook, as I was always doing something on the side, like googling or scrolling the Facebook timeline. Of course that's no real argument against Facebook, but a behavior that I want to change, and Facebook is not helping me achieve this.
- Several weeks ago I read a tweet (unfortunately I don't have a link to it) that Facebook is the new "going to the kitchen and looking in the fridge". And that's exactly how I feel when I'm using Facebook; sometimes I just think it's a waste of time.

So this is what I've done:

1. I requested a copy of all my Facebook data. You can make this request by going to your preferences and clicking on the link "download your Facebook data". An e-mail with a download link will be sent to you. I got the e-mail after a few minutes and downloaded the archive.

2. As I couldn't find a button for deleting my account in the preferences (I had expected the delete button to be hard to find), I found the following link in a blog:

Sunday, 20 January 2013

Hey there,

I'm using VMware Fusion version 4 and wanted to open an .ova file. I just wanted to play around a little on https://www.hacking-lab.com/, and they provide a full virtual machine that is ready to connect to their test network via VPN. Unfortunately VMware Fusion 4 won't open it. According to the docs of VMware Fusion 5 you can simply import an .ova file (http://pubs.vmware.com/fusion-5/index.jsp?topic=%2Fcom.vmware.fusion.help.doc%2FGUID-275EF202-CF74-43BF-A9E9-351488E16030.html), but that's not working in VMware Fusion 4.

I just found a tool called OVF Tool by VMware. You can download it here:

https://my.vmware.com/group/vmware/get-download?downloadGroup=OVF-TOOL-3-0-1 (you will need an account on VMware to download it)

After installation, the ovftool command is available in "/Applications/VMware OVF Tool" on the CLI.
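A sketch of converting the downloaded .ova into a .vmx that Fusion 4 can open (the file names are assumptions):

cd "/Applications/VMware OVF Tool"
./ovftool ~/Downloads/hackinglab.ova ~/Documents/hackinglab.vmx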
