A web-log on Q3J5cHRvZ3JhcGh5, alert(document.cookie), and screensaver.exe


NSDL and UTI-ITSL are the two bodies under the Indian Government that serve as the official PAN card service providers. Recently I had the privilege of using UTI-ITSL's services for a PAN update.

After waiting for some time for the processing of my card, I went to the website of UTI-ITSL for checking the status. I entered the application number, and instantly got the status of my query. Cool!

As a fuzzer, in the form field for ‘Application Coupon Number’ I entered the next number (my application number + 1). And yes, it gave the results. I entered some more numbers in the sequence and got results for each query. I could get results for applications as far back as 2011. This means that if someone runs a tiny script to scrape the data of applicants for the last 8 years, they can easily get the details – full name, PAN number, application number.

Name, PAN No, Courier Tracking Details

As shown in the above image, all these details are visible to everyone without any kind of authentication; you just need to input a 9-digit application number.

And there is more to it – you can find the PIN code and city of the applicant through the courier tracking number:

This PAN Card was delivered to some guy in RANPUR (Gujarat) on 09-03-2017, most probably he lives there

If you are luckier, you will get the birth date and spouse's/father's name of the applicant:

The applicant above has a name mismatch between the Income Tax Department's data and the data provided in the application. So which fields need to be shown to the applicant – only the field that has a conflict, right? No: even the DOB, which is totally irrelevant in this case of a name mismatch, is shown. Proof below:

In case of a name mismatch (field highlighted in pink by the UTI folks), the father's name and DOB are also displayed

With some modification to the script to scrape all this data, we can fetch the DOBs of all the people who have such a mismatch in their application. Later, through correlation, we can get the below details for a single applicant:

Applicant’s full name

Applicant’s Father’s full name

Applicant’s DOB

Applicant’s PAN Number

Applicant’s PIN Code and City

This counts as a huge flaw in the design of their application, which gives up such golden data with very little effort and exposes the PII of millions of applicants.

Some suggestions for the UTI developers:

Randomize the application numbers, if possible, and

Please do not allow anyone to query your database with a single key. At least use two keys (e.g. 1. Application Number & Date–Time of application, 2. Application Number & UID Number)

Don’t provide the status once a month has passed after the PAN card was received by the applicant

(I tried to contact the people at UTI ITSL: their email (utiitsl.gsd@utiitsl.com) bounces back, no-one picks up the phone, and for snail-mail I don’t have the postal stamps)

You are using Instagram, right? And you might have kept your posts private, so that only your followers can view your posts. Yes, even I have ticked the option to allow only my followers to view my posts.

That option works well as long as you are browsing within Instagram only. But what if you post a link to your Instagram picture, like this:

The post on your Instagram profile was limited only to your followers (maybe 150, 1500 or 150k), but now your tweet has made that picture available to millions of people who are on the Internet. Anybody can click on the link and see your picture.

rsyslog is “the rocket-fast system for log processing”. Succeeding syslog, rsyslog now comes pre-installed on Linux systems and handles both local and remote logging.
On any system, you will want to (a) log the system and application logs on the local machine, and/or (b) forward the system and application logs to a remote machine.

Below are two cases, useful for forwarding OS logs and application logs:

Forwarding only OS logs:

Add the line below at the bottom of the /etc/rsyslog.conf file, then restart the rsyslog service:

*.info;authpriv.*;cron.*;mail.* @remote_ip:514
By default, rsyslog uses port number 514 for its activities. If the logs need to be forwarded through UDP, mention a single '@' before the remote_ip, and for TCP, mention '@@' before the remote_ip.

*.info – all logs with info severity

authpriv.* – all logs related to authorization and privileges

cron.* – all logs related to cron – scheduled jobs

mail.* – all logs related to mail and mail servers

Forwarding OS and Application logs:

# Add the following module - it is the module for forwarding logs from a file.
# Add this along with the other $ModLoad tags at the top of the file
$ModLoad imfile

# Add 'local7.none' to the below line as shown below.
# This will stop the logging of local7 messages in /var/log/messages, as we need to forward our application logs through local7 service
*.info;mail.none;local7.none;authpriv.none;cron.none /var/log/messages
# Comment the local7 for boot logs, to stop logging the application logs to /var/log/boot.log which we are forwarding through local7 service
#local7.* /var/log/boot.log
# Add the below lines to forward the logs from their respective files. First 3 lines are variable, the other 2 are static.
# $InputFileName takes the path to log file (absolute path of the file)
# $InputFileTag will attach the mentioned tag (here: tag_jio.com) to the original log
# $InputFileStateFile is the State file where the logs are stored before forwarding (for eg. useful in case of network failure)
$InputFileName /path/to/log/file
$InputFileTag tag_website.com:
$InputFileStateFile buffer_file_name
$InputFileFacility local7
$InputRunFileMonitor
# Add this line at the bottom of the file, for forwarding
# local7.* (all logs of local7 - application),
# *.info (all logs with info level),
# authpriv.* (all logs of authorization-privilege) and
# cron.* (all logs of cron)
# - to the receiver IP and Syslog port 514.
# Add '@' for sending logs through UDP, '@@' for TCP.
local7.*;*.info;authpriv.*;cron.* @receiver_IP:514

(The above configuration is for Red Hat-based systems only. It may differ on Debian-based systems.)
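Once rsyslog is restarted, the forwarding path can be exercised from an application. A minimal sketch using Python's logging.handlers.SysLogHandler, assuming a receiver on UDP port 514 (the address below is a placeholder for your receiver_IP):

```python
import logging
import logging.handlers

# Syslog PRI for local7.info: facility * 8 + severity = 23 * 8 + 6 = 190
pri = (logging.handlers.SysLogHandler.LOG_LOCAL7 * 8
       + logging.handlers.SysLogHandler.LOG_INFO)
print(pri)  # 190

# UDP by default -- the equivalent of a single '@' in rsyslog.conf.
handler = logging.handlers.SysLogHandler(
    address=("127.0.0.1", 514),  # placeholder for receiver_IP
    facility=logging.handlers.SysLogHandler.LOG_LOCAL7,
)
log = logging.getLogger("myapp")
log.addHandler(handler)
log.warning("test message via local7")
```

If the imfile/local7 rules above are in place, the message should land on the receiver tagged with the local7 facility.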

SSL/TLS adds an extra layer of security to HTTP, making it HTTP Secure (HTTPS). It works at the Application Layer (OSI model) alongside HTTP. HTTPS is not a different protocol, but the underlying HTTP with SSL/TLS implemented for security.

Public Key Infrastructure and Certificate Authorities are used for making it possible.

How does HTTPS work?

Short Version
Just like the TCP Handshake, a handshake happens in SSL between the server and the client. We can break this handshake into three steps: Hello, Certificate exchange and Key exchange.

Hello

The client sends a Hello message and the server responds with its Hello message. These messages contain information like the SSL version supported, cipher suite and some random data for key generation.

Certificate Exchange

To prove its authenticity, the server has to send its SSL certificate to the client. On receiving the certificate, the client checks whether it is verified and trusted by some Certificate Authority, and takes the decision accordingly. For some sensitive applications, the server can ask for a certificate from the client too.

Key Exchange

A symmetric key is exchanged between the two parties. The client computes a key, encrypts it with the server’s public key, and sends it to the server. Only the server can decrypt it, with its own private key. All further communication is then encrypted with this symmetric key.

Long Version

Client Hello

After the TCP connection is established, the client starts the SSL handshake. The important data in the client’s Hello message includes:

Version Number (e.g. SSL 2.0, SSL 3.0, TLS 1.0)

Random Data (which is later used with the Server’s Random Data to generate a secret key)

Cipher Suite (the list of cipher suites available to the client, each of which includes – the protocol version, the algorithm for key exchange, the algorithm for encryption, and a hash function)

Along with similar details of its own, the server does the following in the Server Hello message:

The server sends its digital certificate to the client, which has the server’s public key

The server creates and sends a temporary key to the client

Server asks the client for its certificate, to validate the client’s authenticity

End of Hello – the server’s Hello message is done, and the client can respond
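As an aside, you can inspect the cipher suites your own client would offer in its Hello, using Python's ssl module (the exact list depends on your OpenSSL build):

```python
import ssl

# Build a default client-side TLS context and list the cipher suites
# it is prepared to offer during the handshake.
ctx = ssl.create_default_context()
ciphers = ctx.get_ciphers()
for c in ciphers[:3]:
    print(c["name"], "-", c["protocol"])
```

Each entry is a dict with fields such as the suite name and the protocol version it belongs to.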

Client Response

After getting the server’s Hello Done message, the client starts talking. It sends the necessary messages in the following sequence:

Client certificate – contains the client’s public key, to prove its authenticity to the server

Client Key Exchange – the most important part of the communication. The client computes a premaster secret from both the random values previously exchanged. It is encrypted with the server’s public key before sending, so that only the server can decrypt it and recover the original key with its private key.

Change cipher spec – all the further messages will be encrypted using keys and algorithms negotiated

Client Finished – is the hash of the entire conversation. This is the first message which is encrypted and hashed for the session.

Server Final Response

This is the final message in the conversation between the server and the client to have a secured connection. The server’s final response will have:

Change cipher spec – will notify the client that the server will start encrypting the messages with the negotiated keys and algorithms

Server Finished – is the hash of the entire conversation to this point. If the client can decrypt this message and validate the hashes, it means that the SSL/TLS handshake was successful.

After the SSL/TLS handshake is done, further communication is secure between the server and the client.

Example

A representation of how your browser starts a HTTPS connection with website example.com-

Firefox (your browser, for example) connects with the server of example.com with HTTP and asks for the login page which uses HTTPS

For the communication, the server sends Firefox a certificate, which contains the server’s public key

Firefox verifies the public key of the server from the certificate

Firefox chooses a random symmetric key and encrypts it with the public key of the server

On receiving the encrypted message, the server decrypts it with its private key. Nobody else on the network who has received the encrypted message can decrypt it, because they don’t have the server’s private key. Now the server has the symmetric key with it

Every time Firefox wants to send something to example.com in a secured manner, it will encrypt it with the symmetric key. On the other end, the server will decrypt it with the same key

Every website/server that wants to implement HTTPS (i.e. SSL/TLS security) has to buy SSL certificates from authorities like VeriSign, Comodo, etc. Many websites implement HTTPS only for some important pages (like login or payment) while the rest of the site works on plain HTTP. Implementing HTTPS for the whole website is not very costly, but the CPU overhead of processing requests increases. Hence many website owners stay away from HTTPS because of the cost factor or the overhead factor. Recently Google announced that it will reward HTTPS webpages with a higher ranking in its search results (source).

Why not use asymmetric key encryption for the handshake?

There’s an answer on StackExchange: (1) asymmetric encryption is much slower than symmetric encryption; (2) for the same key length, asymmetric encryption is weaker than symmetric encryption.

What can an attacker see if you are using SSL/TLS during your connection?

If you are using SSL/TLS correctly, the attacker can see only some of your metadata. That includes the domain you are connected to, and the related IP addresses and port numbers.

For example, if you are doing a Google search using HTTPS, the URL in the browser will be https://www.google.co.in/?gws_rd=ssl#q=what+is+https, and you can see the full URL. But on the wire, only the domain name google.co.in is sent to the DNS for name resolution, not the full query/URL. Hence you can say that HTTPS hides your full URL; only the domain name is revealed.

HTTPS provides confidentiality of data, but not anonymity of who is sending / receiving the data.

This interactive image by EFF provides clear understanding of what can be seen by the eavesdroppers while you are using HTTPS and while you are using Tor.

I had been working comfortably with Snort on eth0 during my previous Ubuntu installation. Later I changed to Fedora, and eth0 was replaced with eno1. And the other change – I started using a direct DSL line, which uses a PPP connection.

Now while doing ifconfig for the DSL connection, I get the interface as ppp0 instead of eno1.

The limitation with Snort is that it considers only Ethernet packets, ignoring the ppp0 interface. Even though my ppp0/DSL connection runs through the Ethernet port, the traffic is not seen on eno1.

If you try starting the Snort instance with the command

# snort -c /etc/snort/snort.conf -l /var/log/snort/

it will give the following error:

ERROR: Cannot decode data link type 113
Fatal Error, Quitting..

If you search for the error, you will find a variety of solutions. If your Snort version is 2.9.6.1, none of them are going to work for you. The reason: support for --enable-non-ether-decoders has been dropped.

If you add that argument to your command for starting Snort, you will be shown a list of available arguments, but --enable-non-ether-decoders will not be among them. I was furiously looking for a solution to this problem. After going through some forums, it came to my mind to try a workaround.

The easiest option available was to make Snort listen on eno1, even though the ppp0 connection runs over it. Try the command with an additional argument, -i eno1:

# snort -D -i eno1 -c /etc/snort/snort.conf -l /var/log/snort/

This will start the Snort Daemon on the eno1 interface, capturing all the packets and dumping them to your desired location. The logs will be located in files named snort.log.xxxx. For every instance there will be a new log file, which has the packets logged in Binary PCAP format to be readable by Wireshark, Snort, or other similar applications.

If you try to read these logs with some text reader/editor, it will be like reading Webdings fonts. Don’t do that. Snort has a better reader, invoked as snort -r.

Give the command:

# snort -r snort.log.1405955899

This will give you a nice analysis of the packets with all the logs available to you. You can also export the readable content to a .txt file by the normal methods.
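Since the logs are plain pcap, their structure is easy to poke at. A minimal sketch that parses just the 24-byte pcap global header with Python's struct module (the byte layout is the classic pcap format; the sample header here is synthetic, not from a real capture):

```python
import struct

def parse_pcap_global_header(data: bytes):
    """Parse the 24-byte global header of a classic pcap file."""
    if len(data) < 24:
        raise ValueError("not enough data for a pcap global header")
    magic = struct.unpack("<I", data[:4])[0]
    if magic == 0xA1B2C3D4:
        endian = "<"   # written little-endian
    elif magic == 0xD4C3B2A1:
        endian = ">"   # written big-endian
    else:
        raise ValueError("not a classic pcap file")
    major, minor, _tz, _sigfigs, snaplen, linktype = struct.unpack(
        endian + "HHiIII", data[4:24]
    )
    return {"version": (major, minor), "snaplen": snaplen, "linktype": linktype}

# Synthetic header: pcap 2.4, snaplen 65535, linktype 1 (Ethernet)
header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_global_header(header))
```

The linktype field is the same DLT value Snort complained about earlier (113 is Linux cooked capture, 1 is Ethernet).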

Choose the rules you apply in Snort very wisely. As this was a test environment, I enabled all the available rules; that gave me 5 MB of logs from running Snort for just 25 seconds. You need to cut that down, Roger!

Parsing and getting the required information from these logs is one more task. Have you tried Splunk, lately? Here: http://apps.splunk.com/app/340/

TL;DR list your interface as eno1 even if you are using a ppp0 connection

I was going through some missions, and came across one with a SQL truncation vulnerability. It is an often-ignored vulnerability; many applications have patched it, but there are lots of websites which are still vulnerable. Here I explain (ELI5) the basics of SQL truncation and how the vulnerability is exploited.

Let’s take the example of a website where a user can register with a username and password, and later log in with the same username–password combination. Let’s name this website pikachu.com.

Whenever a user registers, the username and password are stored in a table using SQL. The table defines a specific maximum length for the username and password columns; let’s say it is 20 characters for each. In the HTML form, a maxlength="20" attribute would be set on both fields.

This restricts the user to a username and password of at most 20 characters.

Now, suppose a user registers ‘pokemon’ as the username with some random password. The username column is checked for whether ‘pokemon’ already exists. If it does not, the table stores ‘pokemon’ in the username column and the password in the password column. Here, pokemon is the administrator of the website.

Now, we are the attackers and we want to log in to that site with the username ‘pokemon’. Possible? Yes, if it is vulnerable to SQL truncation. The scenario:

Use the add-on Web Developer (for Firefox) or something similar in your browser, to break the ‘maxlength=20’ barrier.

Create a new user ‘pokemon             b’, which exceeds 20 characters: after ‘pokemon’ you need whitespace filling out the 20-character limit, and then some character.

The application searches the username column for ‘pokemon             b’, finds no match, and stores it in the database with our password. But since the maximum limit is 20 characters, only ‘pokemon             ’ is stored, and because the tail is all whitespace it is trimmed to ‘pokemon’. (If we had entered just ‘pokemon ’ at registration, the whitespace would have been trimmed before the duplicate check and the existing user found; the trailing ‘b’ is what prevents that trimming during the check.)

Thus we inserted the user ‘pokemon’ into the database with our password, and from now on we can log in with our own password and the username ‘pokemon’.

Whenever we now use ‘pokemon’ as the username, the application finds two different rows in the table with the same username, and validates our credentials against either of them.
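The whole flow can be simulated in a few lines. This is a toy in-memory model of the vulnerable logic, not a real database; in practice the truncation is done by a DBMS running in a non-strict mode:

```python
MAX_LEN = 20
rows = []  # (username, password) rows, standing in for the users table

def register(username, password):
    # Flawed duplicate check: runs on the raw, untruncated input...
    if any(u == username for u, _ in rows):
        raise ValueError("username taken")
    # ...while storage truncates to the column width and trims whitespace.
    rows.append((username[:MAX_LEN].rstrip(), password))

def login(username, password):
    return any(u == username and p == password for u, p in rows)

register("pokemon", "admin-secret")            # the real administrator
register("pokemon" + " " * 13 + "b", "pwned")  # attacker: 21 chars, passes the check
print(login("pokemon", "pwned"))               # the attacker now logs in as pokemon
```

The fix is equally small: run the duplicate check on exactly the value that will be stored (and reject over-length input outright).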

SQL truncation is a type of SQL injection, and a low-hanging fruit. If it is not properly patched in the application, it can cause severe damage to the application’s data.

These days, the most used servers are Apache and Nginx (ref: Netcraft). For Apache there have been several security tips, and a few modules for providing security. One of them is mod_evasive. Basic server-hardening guides recommend installing mod_evasive to secure Apache against Denial of Service attacks. mod_evasive comes with default settings that don’t need to be touched if you run a general-purpose website.

How mod_evasive works:
(ref: /var/httpd/conf.d/mod_evasive.conf)

DOSPageCount, default: 2 – in 1 second

This is the threshold for the number of requests for the same page (or URI) per page interval. Once the threshold for that interval has been exceeded, the IP address of the client is added to the blocking list.

DOSSiteCount, default: 50 – in 1 second

This is the threshold for the total number of requests for any object by the same client on the same listener per site interval. Once the threshold for that interval has been exceeded, the IP address of the client is added to the blocking list.

DOSBlockingPeriod, default: 10

The blocking period is the amount of time (in seconds) that a client will be blocked for if they are added to the blocking list. During this time, all subsequent requests from the client will result in a 403 (Forbidden) and the timer being reset (e.g. another 10 seconds). Since the timer is reset for every subsequent request, it is not necessary to have a long blocking period; in the event of a DoS attack, this timer will keep getting reset.

Explanation: if an IP address requests a page more than 2 times in 1 second, or requests objects more than 50 times on the same listener in 1 second, the IP address gets blocked. It will be blocked for 10 seconds, and all requests during that time will result in a 403.

What I did:

Copied a website and all its objects using ‘wget’ and hosted the website from its source on my Apache server in the folder /var/www/html/

Created the Python script below to open an HTTP connection to the server and GET the requested object.

lst is the list of site objects which were to be accessed using GET.

It randomly requests an object from the given list, avoiding repetition.

Checking mod_evasive with default settings, requesting from the same machine (localhost):

Server: Apache 2.4.6
OS: Fedora
Client: Fedora, Python script


Running this script on the Fedora (localhost) machine caused the machine's temperature to rise to 87 degrees Celsius (the process was Ctrl+Z'ed to avoid overheating, as the point was proved). mod_evasive does stop serving this script as soon as it finds the threshold exceeded, but it keeps returning 403s: the 200 responses stop, the 403s start, and Apache continues processing and serving every request. So what is the use of mod_evasive? It is built for protecting against DoS attacks, but here mod_evasive itself is the victim: it keeps processing, stays busy, and a single script provides enough load on the server.

Checking mod_evasive with default settings, requesting from a Windows machine:

Server: Apache 2.4.6
OS: Fedora
Client: Windows 7, Python script


The same thing that happened from localhost occurs when sending requests from a Windows machine. After some time, Windows reports that either it lacked sufficient buffer space or the queue was full.

Checking mod_evasive with default settings, requesting from a Linux machine:

Server: Apache 2.4.6
OS: Fedora
Client: Kali-Linux_x86, Python script

The story continues here. Testing from Kali Linux, running the same Python script, will DoS the Apache server. The main task was to flood the Apache server running the default-configured mod_evasive module, and it was accomplished. Mr. mod_evasive, what is the point of sending a 403 to the blacklisted IP every time? It achieves the exact reverse: clogging the server and leaving very little time for other clients’ requests.

One more trick is to request a non-existent object (e.g. /hello-admin.html), so the server stays busy responding with 404 Not Found. We just need to keep the server busy with our requests, and this tiny, simple script does it all.

In the below screenshot it can be seen how much processing is done by apache/httpd while processing for the single script.

Here it can be seen how the temperature rises by 20 degrees in just 1 minute:

In plain text: using mod_evasive with default settings is of NO use, as it does not stop serving the DoSing client but just responds with a 403. The processing load remains (more or less) the same.

Here is a portal by BSNL where you can pay your Telephone bills online: https://portal1.bsnl.in/aspxfiles/instaPay.aspx. After a long time BSNL people have started making use of technology for public services, apart from providing basic broadband.

I have been paying my land-line bill online for the last 6–7 months through this portal. Initially I had to provide my phone number and account number, and later my bank details for making the payment. I guess people were confused by the account number field, and so last month BSNL made some changes to the portal’s text fields. Now we don’t have to provide the account number, and the portal serves as a Truecaller of sorts for getting the owner’s name. Along with the owner’s name, it gives the outstanding payment details. In this way BSNL’s portal is not taking our privacy seriously: anybody can get the name of the owner and the bill details just by providing a phone number. It works for individual bills, not corporate ones.

Compared with Truecaller, BSNL’s portal actually provides better facilities – we get the verified name of the phone’s owner (as in the BSNL database) and the current bill details. And the best part: unlike Truecaller, we don’t need to provide our own details or install an app to use BSNL’s portal. This may not be a security issue for the customers, but it totally violates their privacy.

(You can give it a try. Visit the Instapay portal, enter the BSNL land-line number of a friend, and the captcha code. You don’t need to provide any mobile number or email address. Click ‘Submit’ and you will be shown the land-line owner’s name and their outstanding amount.)

After a Codecademy course which teaches the game of Rock Paper Scissors step-by-step in Python, last month I created a Phonebook utility in Python and put it on Codecademy. The Phonebook exercise teaches users to create a file for storing contact names and numbers and later retrieve them as desired. The exercise is with the Codecademy team for beta-testing, and will be available in the track listing soon after review. You can test the exercise here: Phonebook on Codecademy.
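The exercise boils down to persisting a name-to-number mapping in a file. A minimal sketch of the idea (the file name, JSON storage format, and entries are my own, not the exercise's):

```python
import json
from pathlib import Path

BOOK = Path("phonebook.json")  # hypothetical storage file

def load():
    """Read the phonebook from disk, or start empty."""
    return json.loads(BOOK.read_text()) if BOOK.exists() else {}

def save(book):
    BOOK.write_text(json.dumps(book, indent=2))

def add(name, number):
    book = load()
    book[name] = number
    save(book)

def lookup(name):
    return load().get(name, "not found")

add("Ash", "91-9090-90909")  # made-up contact
print(lookup("Ash"))
```

Because every call reloads the file, entries survive between runs of the script.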

DNS is the Domain Name System, the mapping of domain names (e.g. for websites like http://example.com) to IP addresses. As IP addresses are hard to remember, we have adopted the system of mapping them to their respective FQDNs (Fully Qualified Domain Names). We just type the website address in the URL bar and the DNS server converts it to its respective IP address, which then serves us.

We all have phone numbers stored in our phone’s memory, mapped to names. When we want to contact someone, the phone serves as a DNS server of sorts: we just tap the name and the phone dials the number. So we all carry a tiny DNS server in our pocket – but what if we combined all these tiny servers into a globalized service? Not like Truecaller, but a more concrete and reliable service.

There has been a Telephone Number Mapping (ENUM) service for unifying international telephone numbers with Internet addressing and name spaces (ref: Wikipedia). There could be something different of the same kind. Primarily, each person owns one phone number, or at most two. Each number can be mapped to a username like @bhumish; the second number can have a different username like @bhumishgajjar. This mapping can be stored by every ISP, and later combined at a higher level to make it global – the same way DNS works up to the root servers. For example, my number 91 9090 90909 is mapped to @bhumishg, so when giving out my number I’ll say ‘my contact number is bhumishg’. Whenever you dial @bhumishg from your phone, your ISP will first check the mapping and then connect the call to me. In the current scenario our phones remember these things for us, but what if we lessened the burden on our phones too? Just like our Twitter handles, we could have unique usernames for phone numbers. Also, it would be cool to have phone numbers like @h4ck3r or @cutegirl.
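A toy sketch of the lookup this would imply: each ISP keeps its own mapping, and a root directory delegates handles to the owning ISP, roughly as DNS delegates to authoritative servers. All numbers besides the @bhumishg example from the text are invented:

```python
# Per-ISP handle -> number mappings (numbers are illustrative only).
ISP_A = {"@bhumishg": "91 9090 90909"}
ISP_B = {"@cutegirl": "91 9876 54321"}  # hypothetical

# Root directory: which ISP is authoritative for which handle.
ROOT = {"@bhumishg": ISP_A, "@cutegirl": ISP_B}

def resolve(handle):
    """Two-step lookup: root delegation, then the ISP's own table."""
    isp = ROOT.get(handle)
    if isp is None:
        raise LookupError(f"unknown handle {handle}")
    return isp[handle]

print(resolve("@bhumishg"))  # 91 9090 90909
```

In a real deployment the ROOT table would itself be distributed and hierarchical, like the DNS root zone.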

Why can’t we have such a system?

There are some issues with setting up such a system in the current scenario, given the growth of technology/devices and the big data of phone numbers. In the early days of the Internet and mobile phone systems, the possibilities for a phone DNS would have been higher. The number of phone numbers is increasing at a very fast pace, and the initial setup necessary for all these numbers is difficult. Everybody is used to the normal phone-number-and-name system; implementing the new (easier) system would be tough for the companies and hard to adapt to for the users. Plus, the infrastructure to be set up by the ISPs is huge. As ages have passed since the evolution of Internet DNS and phone numbers, it is now hardly possible to set it all up again.

The idea is great, but the time has passed. Just like IPv6, now we are in need of extending the phone number range also. The ISPs are taking appropriate steps country-wide, but if they can apply some global changes while designing the extended numbers, it would be better.

When an application accepts only specific kinds of uploads, it should check each upload for actually being of that kind. For example, if you want to allow users to upload only .doc files, you should check each file thoroughly for really being a .doc file. At a basic level there’s no special programming or resources needed; just match the file signature with its extension.
For example, the file signature for .doc (Microsoft document file) is “D0 CF 11 E0” (ref: File Signatures on Wikipedia).
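At that basic level, the check is only a few lines of code. A minimal sketch – the signature map is tiny and illustrative; real checkers such as libmagic know hundreds of formats:

```python
# Magic bytes for a few common formats (illustrative subset).
SIGNATURES = {
    ".doc": b"\xd0\xcf\x11\xe0",  # MS compound file (doc, xls, ppt)
    ".pdf": b"%PDF",
    ".gif": b"GIF8",
    ".zip": b"PK\x03\x04",
}

def matches_extension(data: bytes, ext: str) -> bool:
    """Return True only if the file's leading bytes match its extension."""
    sig = SIGNATURES.get(ext.lower())
    return sig is not None and data.startswith(sig)

print(matches_extension(b"%PDF-1.7 sample", ".pdf"))   # True
print(matches_extension(b"MZ\x90\x00 payload", ".doc"))  # False: an EXE renamed to .doc
```

An upload handler would run this on the first bytes of the uploaded file before accepting it, instead of trusting the file name.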

WordPress allows users to upload only a limited set of file types, like doc, pdf, gif, jpg. But while uploading, it checks only the extensions, not the file signatures. Hence anybody can upload any kind of file by changing the extension, and WordPress will host it.
If WordPress is restricting files like .exe or .rar to be safe from hosting malware, viruses, and trojans, it is doing it wrong. The concern here is not downloading such files with valid extensions, but uploading them to a WordPress blog in the first place. If any kind of file can be uploaded without the signature being considered, the bad guys can upload anything and lure users into downloading it. In the Insert Media menu they mention ‘Allowed File Types’; they should instead say ‘Allowed File Extensions’ – as they are checking the extensions only.

Take the case of GMail: while attaching a file, it checks it thoroughly (kind of!) so that users can’t attach an .exe file. Even if someone zips the exe, GMail inspects the archive's contents and skips attaching the file. But in the case of WordPress, you now know what happens on file upload. Thus WordPress can act as a file-sharing site too (extensions notwithstanding).

Below is a link to the Win32.Polip.A virus, which was a .rar file that I uploaded by altering the extension to .doc. (Download it at your own risk! This is purely a virus and I am not responsible for any harm.)

(I tried to contact WordPress Support, but I read that I need to post in the forums and can’t contact the support team directly unless I am a paying customer. Hence, here I am, making this information public.)

The last time I made a hashing utility, it was in my mind to create a new tool which takes a list of passwords and gives their hashes. Now imagine a scenario: you have found a hash of some common password, and you are in a hurry to get the hashes of words like ‘admin’, ‘root’, ‘admin@123’, ‘passw0rd’, ‘toor’. You can’t take them one by one, find each hash, and copy it to a file for matching against your hash.

Here I present a tiny utility which takes your words through the command line and creates a file with a list of password : matching_hash. And not just words through the command line – you can keep a file with common passwords for future reference, and the utility will give you a new file with the passwords matched with their respective hashes.

At present it supports just the md5 hash function, but the next update (coming soon) will add other hash functions like sha256, sha512 and more. Right now the utility takes as input either your words or a file with a list of words, and outputs a new file with the words matched with their hashes.
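The core of such a utility is a few lines around hashlib. A minimal sketch (the function name and output file name are my own):

```python
import hashlib

def hash_wordlist(words, path="hashes.txt"):
    """Write 'password : md5' lines to path and return them."""
    lines = [f"{w} : {hashlib.md5(w.encode()).hexdigest()}" for w in words]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

for line in hash_wordlist(["admin", "root", "toor"]):
    print(line)
```

Supporting sha256/sha512 later is a matter of swapping in hashlib.sha256 or hashlib.sha512 for hashlib.md5.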

According to Wikipedia, Transposition Cipher is a method of encryption by which the positions held by units of plaintext are shifted according to a regular system, so that the ciphertext constitutes a permutation of the plaintext. That is, the order of the units is changed. Transposition ~ the position of each character is modified according to the key and method used.

Examples of transposition ciphers are Rail Fence, Route cipher, Double Transposition, and Myszkowski Transposition. There are drawbacks associated with some transposition ciphers, the worst being vulnerability to frequency analysis: if the ciphertext exhibits a frequency distribution very similar to plaintext, it is most likely a transposition. They can then be attacked with anagramming – sliding pieces of ciphertext around, looking for sections that look like anagrams, and solving them.

Transposition can be made more secure by combining it with other techniques like the substitution cipher. It is also mentioned that fractionation can enhance the technique, and a binary technique is mentioned last, but no considerable work has been done on the binary side. Last night, while solving some challenges at my favourite site, it occurred to me that transposition techniques could be enhanced by working with binary numbers. When we convert the plaintext to binary, we have a better chance of making the ciphertext unpredictable through transposition. Here I present the outline of how to randomize the transposition cipher using two symmetric keys and the hash of the plaintext, computed with a hash function like md5, sha256/512 or whirlpool.

We will need the following:

Plaintext

Key-1 (alphanumeric)

Key-2 (numeric – even length)

Hash Function

1. Convert the plaintext(ASCII) to binary.
It can be done with a simple Python function (ref: a Stack Overflow post). Here a space is used to separate the different ASCII characters, but in practice we don't put spaces between them.
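A small Python sketch along those lines (the function names `to_binary`/`from_binary` are my own, illustrative choices):

```python
def to_binary(text, sep=""):
    """Convert ASCII text to binary, 8 bits per character.
    Pass sep=" " to separate the characters for readability."""
    return sep.join(format(ord(ch), "08b") for ch in text)

def from_binary(bits):
    """Inverse: turn an unseparated bit string back into ASCII text."""
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
```

For example, `to_binary("Hi", sep=" ")` gives `01001000 01101001`, and `from_binary` recovers the original text from the space-free form.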

1(a). Take the Hash of plaintext and store it in a variable.

2. The user provides an alphanumeric key of random length. For example, let's take it of length 10 bits, while we assume that the plaintext is 20 bits.

3. Make the total length a multiple of 4. The total length here is 20 + 10 = 30 bits, so we add 2 bits; preferably, in this case, two '1' bits. Total length = 32.

4. Now the first-last step comes. We have a string of 32 bits, without any spaces. We create a new string by placing the bits in this order: first bit – last bit – second bit – second-last bit – third bit – third-last bit – … – sixteenth bit – seventeenth bit. This step partially randomizes the string.

5. The user provides a numeric key of random length. Suppose the key here is 317325.

6. Our string is 32 bits long (a multiple of 4), hence there are 8 (= 32/4) groups of bits. Let's number them 1 2 3 4 5 6 7 8. Transposition is done once again, in a different manner, using the key 317325. First we take '3-1' from 317325: the groups at positions 3 and 1 are swapped, giving
3 2 1 4 5 6 7 8
Next, according to the key, one more swap from 31'7-3'25. (Here comes a small trick: the 3rd group became the first group, and the 1st group is now at position 3. So the groups at positions 7 and 3 are swapped.)
3 2 7 4 5 6 1 8
The last transposition, according to 3173'2-5':
3 5 7 4 2 6 1 8

7. The string is randomized. To make it more complex, we reverse the first-last step. The new arrangement of bits is: first bit – third bit – fifth bit – … – sixth bit – fourth bit – second bit. The string is randomized again.

8. Now we convert it back to ASCII for some more computation. The hash of the plaintext is available to us. We take one character of our string, one character of the hash, the next character of the string, the next character of the hash, and so on. Continue this process until the end of the hash, then keep the remaining characters as they are. Hence, if we denote our string characters as s(1,2,3,4,…) and the hash as h(1,2,3,4,…), the new string becomes
s1 h1 s2 h2 s3 h3 …
The length of the hash depends on the hash function used: for md5 it is 128 bits, and for whirlpool it is 512 bits.

9. Send the string to the receiver. The receiver knows which hash function was used, and hence can directly strip out the hash characters and save them for verification of the plaintext.

10. Reversing the above steps decrypts the ciphertext.
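The core permutation steps above can be sketched in Python. This is an illustrative, unoptimized sketch of my outline (function names are mine; only the first-last step and the numeric-key group swaps are shown, not the full scheme):

```python
def first_last(bits):
    """Step 4: first bit - last bit - second bit - second-last bit - ..."""
    out, i, j = [], 0, len(bits) - 1
    while i < j:
        out.append(bits[i])
        out.append(bits[j])
        i += 1
        j -= 1
    if i == j:              # odd length: the middle element goes last
        out.append(bits[i])
    return "".join(out)

def first_last_inverse(bits):
    """Step 7: even positions in order, then odd positions reversed."""
    return bits[0::2] + bits[1::2][::-1]

def swap_groups(bits, key):
    """Step 6: split into groups of 4 bits and swap group positions
    according to successive digit pairs of the numeric key."""
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    digits = [int(d) for d in key]
    for a, b in zip(digits[0::2], digits[1::2]):
        groups[a - 1], groups[b - 1] = groups[b - 1], groups[a - 1]
    return "".join(groups)
```

With the worked example above, swapping groups 1 2 3 4 5 6 7 8 under the key 317325 yields the order 3 5 7 4 2 6 1 8, and `first_last_inverse` exactly undoes `first_last`, which is what makes decryption possible.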

Why one more transposition cipher?
The well-known ciphers which currently exist have some flaw or other, and one is common to all of them: frequency analysis. In the technique given above, frequency analysis is nearly impossible, and it is much better reinforced against anagram attacks.

Why one more transposition cipher in the era of asymmetric-key ciphers?
Take email as an example. People have public-key encryption available in their mail clients, but they seldom use it. The reason is complexity: they don't like configuring keys for each user and spending time decrypting received messages. The technique given above requires less computation than public-key ciphers, and a one-time setup will work indefinitely, though the user should keep changing the keys and hash functions.

How is it different from the other techniques?
The security: its shield against attacks. A cryptanalyst will need to spend more time computing, guessing, and playing with the binary data. Although the scheme is vulnerable to brute-force attack, that attack needs intensive resources compared to the resources that can crack traditional transposition ciphers. One more advantage is that this technique can be used for any kind of data: text, video, or image. Further, the data can be disguised as a different kind of data because the encryption is done at the bit level, so it becomes expensive for an attacker to detect the type of data before doing the cryptanalysis.

I have just started working on the technique; implementation in real-world scenarios and cryptanalysis through brute-forcing and other methods are yet to be performed. Here I have simply presented my idea of how binary translation can provide better security in transposition ciphers, without any intention of criticising the prevalent cipher techniques.

Honeyd is a small daemon for Linux (now also available for Windows) to simulate multiple virtual hosts on a single machine. It is a kind of an interactive honeypot. The latest release can be downloaded from Honeyd release page.

For my project, I have been working with honeypots, and Honeyd is one of them. During the initial stage, I faced some problems while starting the basic setup of some personalities with Honeyd. Here I recall those problems and some misconfigurations which can result in errors (mainly: config file parse error) and can be a problem for first time users.

Here, honey.conf is my configuration file and -f points to that file. -d tells honeyd to run as a daemon.

eth0 not an IP

Reason: Your ethernet connection does not have an IP address.

When you are testing on a single machine, the first thing you need to do is give your interface an IP address. The below command will take care of it. Replace ‘eth0’ with your respective interface.

# ifconfig eth0 192.168.1.1

(If you are using a different interface like eth2, you need to mention it while starting honeyd: -i <interface>, for example -i eth2.)

Now, here is my sample configuration file:

Let's dissect the file line by line.
1: creates a personality, which we will refer to as windows.
2: names the personality Windows XP, meaning someone scanning our honeypot will see it as such.
3: includes the ftp.sh script, which will simulate an FTP server.
4,5,6: open the TCP ports 135, 139 and 445.
7: binds the IP address to our personality.

Try running the honeyd while using our honey.conf file. Error?

parsing configuration file failed

Now, in my initial days I took help for the FTP server from a blog post on linux.com, "Weekend Project: Use HoneyD to fool attackers". As it is a tutorial on linux.com, chances are this post will be at the top of your Google search for HoneyD on Linux. My point is: they have simplified the configuration process and explained it well, but there is one small error. I have highlighted it in the screenshot below:

The error you will get is: parsing configuration file failed, on line 3. set is used for setting our personality to some predefined condition, while add is used to provide something extra. If you use set for attaching preloaded scripts, you will surely face a parsing error.

Solution: replace set with add.
This should be your configuration:
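Reconstructed from the line-by-line description above, the corrected file should look something like this (the ftp.sh path and the bound IP address here are examples, not the exact values from my setup; adjust them to yours):

```
create windows
set windows personality "Microsoft Windows XP Professional SP1"
add windows tcp port 21 "sh /usr/share/honeyd/scripts/ftp.sh"
add windows tcp port 135 open
add windows tcp port 139 open
add windows tcp port 445 open
bind 192.168.1.2 windows
```

Note that the FTP script line uses add, not set.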

Now, your honeypot will start its work without any error. Time to rejoice? Kind of.

Logging

How do you log attacks or scans on your honeypot? Use -l <filename>. Normally, logs go under a directory named honeyd under the /tmp directory. If you don't have that directory, create it with mkdir. The command I used for logging the attempts was:
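The invocation was along these lines (honey.conf being the configuration file from earlier, and the log path being the /tmp/honeyd directory mentioned above):

```
# honeyd -d -f honey.conf -l /tmp/honeyd/log
```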

Ah, permission denied!
How to solve this? You guessed right: the log file is write-protected, so give everyone write permission using the chmod command.

# chmod 766 /tmp/honeyd/log

Can’t detect Ping?

As you have seen in the configuration file, I have not yet given my honeypot a MAC address, so it is not yet accessible to the outside world. Try pinging it from a different computer; it will fail.
Provide a MAC address to your honeypot with a line as shown in the screenshot below. Check the MAC address of your host machine, and give your honeypot an address as near as possible to the host's.
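The line is of this form (the MAC address shown is a placeholder; substitute one close to your host's address):

```
set windows ethernet "00:1c:23:44:55:66"
```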

It is fine if you have given the personality name "Microsoft Windows XP Professional SP1". But if you have given a name like Windows XP (as I have, in the screenshot below) or Linux Ubuntu 13.10, you are likely to get an error while parsing the configuration file.

There are conventions for naming the personalities: there is a list of fingerprints (names for such personalities) which should be used for naming the honeypot personality. The fingerprints are located in the nmap.prints file. Honeyd uses the fingerprints identified by nmap scans, so when someone scans the honeypot, they will see the name you provided.

Locate the nmap.prints file with the locate command. Then you can use more to view the whole file, or if you simply want to view the fingerprint names, use the grep command as shown in the screenshot below (ref: Honeyd FAQ):

You can use any of the personalities in the list displayed by the above command.
Sometimes there is a need to specify the fingerprint file on the command line. The command should include -p <fingerprint.file>:
# honeyd -d -f honey.conf -l /tmp/honeyd/log -p /usr/share/honeyd/nmap.prints

Again, start your honeypot with a new personality.
Ping the honeypot from a remote machine. It will log everything, along with displaying it on the console.
Try doing FTP to your honeypot. It will show you the FTP login screen. (As usual, anonymous login is not allowed!)
Let me know if you face any other problems in configuring your honeypot.

Conclusion: HoneyD is very easy to work with, and hence the choice of many. But common mistakes like typos can bug you till infinity; you mostly need to take care with the initial configuration.
Adios!

Until now, I had been using 3G internet from Tata Docomo. They were generous and gave me IP addresses without any kind of translation: whatever IP I got on my ppp0 interface with ifconfig was the same IP I got from the Google search 'what's my ip'. Though they were dynamic IPs, they reached me without any translation.

Last week I switched to a new provider, Vodafone 3G. I don't know what kind of addressing scheme they are using, but they definitely gave me something more with the IP. Here on my laptop, the ppp0 interface has the private IP address 10.119.69.xx, which is translated by NAT on their side to 1.38.29.123. Most of us (here 'we' refers to the whole group of people whose addresses are NATed to that IP, and you can consider the number of people in a class-A scheme) are given that IP address on the outside, while the inside address keeps changing.

Now, what's the problem with that IP? Spamhaus has blacklisted it. Here's the link: Spamhaus haz my IP black-listed. The reason? Some days/months/years ago, that IP was a member of the Cutwail spambot and kept sending spam mails. My first reaction on reading this was to check whether I am infected with that spyware/bot. Additionally, there are downsides to being blacklisted by Spamhaus: online services (port-scan sites, some forums) will block you, and the main problem will be with SMTP.

Two things need to be done:

Spamhaus should have some kind of dynamic listing

Else ISPs should take action to get the IP off Spamhaus' blacklist.

After developing a tiny game of Rock Paper Scissors Lizard Spock in Python, in my free time today I made a module for getting the hash of a user-provided string. It uses Python's built-in hashlib, and provides options for using any of the hash functions md5 (128 bits), sha1 (160 bits), sha256 (256 bits) and sha512 (512 bits). It is interactive, and can take either of two inputs: a file or a string. Unless told otherwise, the program keeps giving hashes through the chosen function.

I am willing to add more hash functions (like RIPEMD, md6, whirlpool) in the next update. Plus, thoughts on some encryption mixology module are in progress. I have uploaded the hasher module here: simplyhash.py on PyPI
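A minimal sketch of the module's core (illustrative only; the actual simplyhash.py on PyPI is interactive and may differ in structure):

```python
import hashlib

SUPPORTED = ("md5", "sha1", "sha256", "sha512")

def hash_string(text, algo="md5"):
    """Hash a string with the chosen algorithm and return the hex digest."""
    if algo not in SUPPORTED:
        raise ValueError("unsupported hash function: %s" % algo)
    h = hashlib.new(algo)
    h.update(text.encode())
    return h.hexdigest()

def hash_file(path, algo="md5"):
    """Hash a file in chunks, so large files don't fill memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

For example, `hash_string("abc", "sha1")` returns `a9993e364706816aba3e25717850c26c9cd0d89d`.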

Phishtank is a project by the OpenDNS community. OpenDNS is a company which provides services for safe and fast browsing of the Internet, while Phishtank is a community where anyone can share or check phishing data.
Phishtank is not a technology to filter phishing/spam or to protect against phishing attacks, but a platform to submit, verify, check and share phishing details; it serves as a repository of phishing data.

How to support Phishtank?
You can support Phishtank in either of two ways:

If you come across a website or an URL which you think is a phishing attempt, report it to Phishtank.
How to report?

If you are lucky and don't come across many phishes, you can support Phishtank by verifying them. Whenever you have some free time, jump in and verify a phish. There will be a list of the latest phish links submitted by users like us, which need to be verified by users (like us) to mark them as valid or invalid phishes.

What happens after we submit or verify a phish?
From the users' reviews, Phishtank knows whether a link is a valid phish. If it is, Phishtank stores it in its repository; otherwise it discards the data after some time.
Through that repository, either via the Is it a Phish button or via their APIs, people can verify whether a given link is a phish without getting their hands dirty by visiting it. When a link is verified as a valid phish, OpenDNS takes appropriate action to block that address, thus making the web more secure against phishing attempts.
On Phishtank, anyone can check or search for active phishing sites in the Phish Archive. It is the repository of phishes submitted by users, showing whether each phish is valid/invalid and online/offline. Phishtank also provides nice statistics: total submissions, suspected phishes, and graphs of phish verifications and submissions.

Whenever you are having some free time, do some work at Phishtank for making our Internet a better and safer place.

There are great moments: birthday parties, weddings, or any memorable moment on an ordinary day. To cherish these moments, we capture them visually as photographs and videos. Even after years, these moments excite us, bringing back the memories and feelings.

Some aromas have the same effect. Certain scents make us nostalgic: the smell of a particular food, miles away from home, can bring back memories of mom's cooking, while other smells stimulate us in different ways. I always felt that if we could store these scents in some form and retrieve them whenever we want to smell them again, that would be great. Once I had a soap with a very nice aroma, but I had only one piece of it, so I wished the soap would never run out (because of the aroma).

Speaking of today's markets: for visuals we have cameras, for our voices we have recorders, but for scents? There is vast scope for devices or products which can store scents and give back the smell whenever we want it.

These days I have been busy with college work and exams. I learnt Python some months back and found it very interesting to work with. My sources for learning Python were Head First Python (O'Reilly) and Beginning Python (Wiley), plus some online tutorials, while my first and favourite source was the Python track on Codecademy. Afterwards, I made an exercise on that website for playing Rock Paper Scissors. The coding of such a program is easy, but the behind-the-scenes validation of user inputs was quite tricky. I am willing to make some more exercises at an advanced level. The Rock Paper Scissors exercise, after some beta testing by the website folks, is available here: Rock Paper Scissors on Codecademy

Snort, is an Intrusion Detection and Prevention System for Windows and *nix machines. You can download it from here: Snort Download.

Well, on Debian we don't need to download it from there. The command to download and install it is:

# apt-get install snort

This will download and install Snort to your Debian.

The next step is to configure Snort to generate alerts for some activity. For example, consider ICMP ping requests: whenever someone pings our machine, an alert will be logged.

For configuration, 3 directories are necessary. If they are not created automatically, create them with the mkdir command. They are:

/etc/snort

/etc/snort/rules

/var/log/snort

Now, our configuration file is: /etc/snort/snort.conf

If you need, you can take a backup of the original file, and then create a new file and edit it as below:

include /etc/snort/rules/icmp.rules

We don't need to add other lines; since right now we are concerned only with ICMP requests, we will configure only the icmp.rules file, and hence it is the only file referenced in snort.conf.

Now, the icmp.rules file contains the below content:

alert icmp any any -> any any (msg:"Hey, someone pinged!"; sid:477; rev:3;)

This line will log any ICMP request from any source, with the given message. The sid and rev uniquely identify a Snort rule and its revision.

Now, to start Snort listening on interface eth1, the command will be:

# snort -c /etc/snort/snort.conf -l /var/log/snort -i eth1

The -c option gives the location of the Snort configuration file, -l tells Snort where to store the alerts, and -i selects the interface.

Now, ping the machine from some other machine, and you will find an entry in the alert file located in /var/log/snort. It will contain the source and destination IP addresses, the time and date of the incident and other information related to the query.

Similarly, you can configure Snort to generate alerts on various incidents like FTP login, SSH attempts, Telnet requests.

Yesterday I took the CloudU final test for the certificate in Cloud Computing (Cloud University – Rackspace OpenCloud). It was a nice experience studying for the lesson tests from their informative white papers. Many of the questions were a cakewalk for me, having learnt Cloud Computing as a core subject at university. Just one more certificate to add to my pool; the next may be CCNA.

A distributed botnet: around tens of thousands of bots, each with its own IP address
A pass file of around 1000 entries with some normal passwords
Default username: ‘admin’

Steps:

WordPress 3.0 was released three years ago; users carry on with 'admin' as the default username and some usual password

A brute-force with username: ‘admin’ and password from the above mentioned file

The botnet tries this attack on each and every WordPress portal available on the Internet

Objective:

A well-planned distributed attack (just like itsoknoproblembro shook the banking world) against some hot-spot over the Internet.

How:

WordPress web servers have very high bandwidth, practically unlimited. Any attack triggered from these servers will have a great impact. This can be done to build a bigger and better zombie-net.

Conclusion:

Save your WordPress! Change your password if your username is admin (and also change the username from admin to something else, to be secure).

Some more tips:

If you are using WordPress.com, change your password and enable 2-step authentication.

If you are the admin of a WordPress installation on your own server, you have some more steps to follow, like creating a password for the .wpadmin file and making some security modifications in the .htaccess file.

MAC addresses travel like a relay race while IP addresses travel like a marathon runner.

MAC addresses:

The MAC address is the permanent address of the Network Interface Card. It is normally exchanged among switches for local communication.

IP address:

The IP address is a temporary address assigned to a device for communicating across the Internet. It is mostly handled by routers, for long-distance communication.


Now, for a packet traveling a long route, the MAC addresses keep changing from network to network. In each network, the packet is handled by different MAC addresses and is temporarily given the destination address of some local machine, while the source and destination IP addresses remain unchanged.

Even today, if you search for that equation on Google, it returns results with xxx titles. Some more contradictory search queries which return the same type of results are:

“1 2” -1

“1 2” -2

The explanation given for the query "-4^(1/4)" is that we are asking Google to return pages containing a 1 next to a 4, but which do not contain a 4.

A Google engineer working on search quality explained that this should return zero results, because it is impossible to satisfy both requirements. However, they uncovered a bug that causes some web pages to "match" these contradictory queries. Since these are the only results that "match" the query, they are the results that get shown.

It's really a bizarre bug in Google search, which needs to be fixed soon. Though it doesn't affect many users, it is helping porn websites get higher ranks.

The workshop on High Performance Computing was a really nice arrangement by CDAC for students to learn and become familiar with parallel processing. They offered supercomputer access for running the OpenMP and MPI programs, along with nice practical teaching from the HPC experts.

Suffered typhoid for half a month

The fever, caused by some sort of food poisoning, made me suffer for a fortnight. I lost weight in notable proportions, but now I'm doing all good! Due to the fever I missed some lectures at college, but that's no problem, as they were of my favourite subjects: Cloud Computing and Network Defense.

The trip to Mumbai was great! Travelling the whole night in the train, tea at every station, a new experience with excitement in the land of dreams, an awesome event by Google, a ride in the BEST bus, an empty local train, Mumbai vada-pav, key chains, halwa, all in a single day!!

Performed nicely in the mid-semester exams and practicals

Before the exams I was not expecting to score much, but that pushed me to work hard for a nice rank in the exams, the first of my M.Tech. I faced online exams for the first time, with totally practical-oriented subjects, and I am happy that I stood 3rd in the class.

Ah, how can I forget the haste I made that day! I badly wanted to attend the Microsoft TechDay event, while the exams were going on. The next day's exam was Advanced Operating Systems, and yes, I wanted to attend an event which was about operating systems. Still, I managed to travel to Ahmedabad for the event and attend the Windows 8 and Server 2012 sessions, though I missed the Visual Studio part.

Cisco Learning Network

The Cisco Learning Network is the best thing for a networking guy! All the Cisco networking folks in one place, helping each other and boosting spirits to perform better in certifications and to solve problems at their workplaces. I like the friendly atmosphere the VIPs and managers have created there, along with the points system. It also feels like a true social networking site: adding friends, updates, messages, discussions, games. I had been a member of the community for a long time, but only got the feel of it when I became active during the last few months.

Apart from the normal reasons for keeping our email accounts secure, there are many more which we ignore, or whose possibilities we are not aware of.

Take this scenario: why keep your work-related and social email accounts separate and confidential (if possible)?

If someone knows basic information about you, your social networking account can be hacked. The main ingredient is your email id, so it's better to keep the id you use for social networking secure. If your work and social email ids are the same, there are more chances of people guessing or finding out your basic information, giving your account a greater chance of being compromised.

I just wanted to let you know that nobody is secure.

Some minutes back, I received a DM on my twitter by a friend. The DM contained –

And it was from a girl who has been in the network security field for 12 years. Clearly, her account was hacked, and the victim's account was used to send DMs to compromise some more accounts.

What will be the result of clicking that link? Some Metasploit exploit, abusing the vulnerabilities on your computer.

The point is: do not share email ids with anyone, do not click any link (even if it's from a friend, verify the link with some online checker), change your password every 2 weeks, keep separate email accounts, and patch your system regularly.