Bitcrack's Bl0g

Monday, January 23, 2017

DISCLAIMER: The following information, illustrations and designs are for educational purposes and for the furtherance of protecting secure areas with the explicit permission of their owners. Please do not use such devices for any illegal purposes.

At Bitcrack, we often find ourselves conducting a red team exercise or penetration test that involves access control assessments, Wi-Fi assessments, RFID and so forth. One thing that often gets in the way of a successful assessment is having to stop and take stock of collected data, process logs and so forth. We have thus embarked on a project to consolidate our attack hardware into a platform that can be easily used and deployed in the field. In this blog post we detail our HID RFID clone tool. It is loosely based on the Tastic RFID Thief, with some modifications.

The Problem

We liked the Tastic RFID Thief (thanks to BishopFox) for our assessments, but it had some issues for us. A major one is that you have to capture HID tag IDs, then stop somewhere, eject the SD card and open it on a computer/laptop to clone the HID tag using a Proxmark or something similar.

The Solution

Build an all-in-one solution to capture and write cards on demand with nothing more than your backpack and a mobile phone/tablet. To do this, we did the following:

1. Built our own Tastic version and modified it to suit our needs.
2. Built a central control unit to manage our "captured" RFID cards and write new cards on the fly.
3. Wrote all the necessary programs and scripts to run it.

Below is a picture of our final products, shown individually:

The items above are:

1. a HID ProxPro II Reader
2. a 2.1A 5V Li-Ion battery pack
3. an Elec House Proxmark 3 RDV2
4. a custom-built Tastic RFID Thief with a home-made 3D-printed box and LCD display. We removed the SD card and its associated program code, modified the code for our serial data requirements, and added a Li-Ion battery.
5. a Raspberry Pi with our code running on it.

The Tastic RFID unit, close up and powered on, looks like this:

We take all our components and put them in a backpack to create an easy-to-use, walk-around HID read-and-clone system.

Backpack

HID Card Reader in Front of Bag

Tastic + Raspberry Pi + Battery Pack in center of backpack

Proxmark in side pocket

The Attack Process

The attack process is:

STEP 1

Get the bag near a HID RFID card (if the card is being worn, simply stand in close proximity to people carrying cards, around 20-25 cm away).

STEP 2

As the system obtains RFID cards from people around you, the website open on the Phone/Tablet automatically updates to show you what cards you've captured.
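The capture step above can be sketched roughly as follows. This is my own illustration, not the authors' actual code: the line format "CARD,&lt;facility&gt;,&lt;number&gt;" and the function name are assumptions, since the post does not publish the serial protocol between the modified Tastic unit and the Raspberry Pi.

```python
# Hedged sketch: the Pi reads lines from the Tastic's serial output and parses
# each into a card record for the web UI. The "CARD,<facility>,<number>"
# format is an assumption for illustration only.
def parse_card_line(line):
    parts = line.strip().split(",")
    if len(parts) != 3 or parts[0] != "CARD":
        return None  # ignore noise on the serial line
    facility, number = parts[1], parts[2]
    return {"facility": int(facility), "number": int(number)}

print(parse_card_line("CARD,123,45678"))  # {'facility': 123, 'number': 45678}
print(parse_card_line("garbage"))         # None
```

In the real system, records like these would be appended to a store that the phone/tablet web page polls for updates.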

STEP 3

Take a "blank" card out of your pocket, hold it against the side of the bag against the Proxmark and click CLONE ME for the corresponding captured card you wish to clone.

DONE!

The cloned card can now be used to access the areas the original card would have access to. A Red-Team simulated attack or Penetration Test can now continue. The system verifies that the cloned card is a match of the captured card you chose.

TODO:

Still on our list of things to add to our system are:

1. Support for other TAG types by simply holding them against the side of our bag to the Proxmark and using the phone interface to modify/clone them.

2. Addition of a Wi-Fi dongle to our Raspberry Pi and the building of a Wi-Fi attack module into the website to allow for Wi-Fi audits directly from the hand-held via the Backpack while walking around in the environment.

Monday, October 31, 2016

NOTE: Edited to include a measurement of the next optimization done on build 3.10-620. See table at the end.

Hashcat, the de-facto password cracking tool that recently went open-source, works very well on both AMD and Nvidia GPUs.

One problem, however, was that when everyone went out to buy their GTX 980s and other Maxwell-based cards, they discovered that rule-based attacks on wordlists were slower than brute-force attacks for some algorithms. They were also just slow compared to similar-spec cards from AMD. Obviously, wordlist and rule-based attack speeds depend on how many hashes you have and how many wordlist candidates are keeping the GPUs busy. However, even when properly optimized, and due to OpenCL constraints, the speed of Maxwell GPUs in wordlist+rules modes lagged behind their AMD cousins.

Until now that is.

Thanks to a recent tweak by atom (Hashcat's developer), we are enjoying a major speed boost on Maxwell-based cards. The tweak was a workaround for how OpenCL is used by Hashcat with Maxwell-based Nvidia cards.

I decided to do some benchmarks to show the difference.
The benchmarks were done using:

1 hash each of SHA256(p.s), MD5, NTLM and PHPass.

A 1GB wordlist to ensure that all GPUs are 100% utilized during our measurement run.

The d3ad0ne.rule ruleset that ships with Hashcat

A timer of 60 seconds to let everything settle and run.

No reboots, no driver changes, no extra Hashcat settings (all on automatic).

4 measurements of speed during the 60 seconds, averaged to a final speed.
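The averaging and comparison steps above amount to the following. Note that the sample numbers in this sketch are hypothetical, chosen only to illustrate the calculation; they are not my measured results.

```python
# Four speed samples taken during the 60-second run are averaged into a final
# speed, then the old and new builds are compared as a percentage gain.
# All numbers below are hypothetical illustrations.
def avg_speed(samples):
    return sum(samples) / len(samples)

old = avg_speed([10.1, 10.0, 9.9, 10.0])   # GH/s, hypothetical "old Hashcat" run
new = avg_speed([14.4, 14.6, 14.5, 14.5])  # GH/s, hypothetical "new Hashcat" run
gain_pct = (new - old) / old * 100
print(round(gain_pct))  # 45
```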

The benchmarks were done in the following manner:

Old Hashcat + 1 980 GPU

New Hashcat + 1 980 GPU

Old Hashcat + 6 x 980 GPUs

New Hashcat + 6 x 980 GPUs

Note: Old Hashcat refers to the version built from GitHub before 3.10; New Hashcat is the newly optimized 3.10 version built from GitHub.

All the results are published in the graphs below, which indicate the changes:

As can clearly be seen with 1 GPU, NTLM and MD5 have received an awesome speed boost from the new optimization. PHPass and SHA256(p.s) remained much the same.

Let's look at 6 GPUs...

With 6 GPUs the increase remains the same (which is what we want!) - we see a speed increase across all 6 Maxwell cards doing NTLM and MD5.

How much increase did we measure?

Clearly, atom's changes have given Hashcat a major boost :)

Check that 45% speed increase in NTLM on Maxwell cards!

-- snip --

Atom pushed some more optimizations that increase speed, although not as much as the initial changes did. For brevity's sake, and to avoid redoing all the graphs, below is a table showing the differences between the current optimized version (3.10-611) and the one after it (3.10-620).

--snip--

Your numbers may be higher or lower than mine, given that you can still tweak the utilization, overclock the GPUs or change other settings. However, it's clear everyone with Maxwell-based GPUs is in for a nice treat with these changes.

Wednesday, October 26, 2016

As cybercrime becomes more sophisticated and widespread, demand for cyber security services is ever increasing. With this, thousands of cyber security companies have emerged, offering all sorts of services using powerful words and brilliant marketing schemes. This can be really confusing to businesses who want to fortify their cyber infrastructure.

All you need to fix your cyber security posture in the most basic way is two critical assessments, namely vulnerability assessments and penetration tests.

Vulnerability Assessments

This is a technique for discovering the IT security vulnerabilities that hackers use to harm your business.

The goal of a vulnerability assessment is to identify vulnerabilities, quantify their impact should they be exploited by malicious hackers, chart a risk matrix with classifications based on impact and business value, and mitigate them to reduce the business's risk exposure.

Penetration tests

This is a simulation of an intrusion on your business IT network, carried out as a hacker would.

The goal of a penetration test is to identify how a hacker would break into your business and what kind of harm the attacker could do; for example, reaching into your customer database, which can cause massive damage to your business and reputation, not to mention compliance issues in your country. The second goal is to put your security systems through a test of their effectiveness and efficiency.

What should your business start with?

If your IT team has never focused on security, it is crucial to start with a vulnerability assessment. This will map your business's critical assets onto a security risk matrix and determine the current status of your IT infrastructure.

Penetration tests are more effective after a vulnerability assessment. This is because you can test not only your infrastructure but also all the security measures you put in place to reduce the risks discovered in the vulnerability assessment findings.

Going the next step…

By now, with regular vulnerability assessments and periodic penetration tests, your defenses are quite strong, and you will have a cyber security team in place to maintain the security measures and overall security posture of your business network infrastructure.

Occasionally, it is very beneficial for both your cyber security team and the support staff of your network infrastructure to run a red team-blue team simulation, an offensive-defensive exercise. The blue team comprises your internal cyber security team, and the red team comprises an external cyber security team.

The simulation can be run as a planned or an unplanned event. The latter is always advised, as it will test how effective your internal cyber security team is at identifying intrusions and stopping them before they do more damage.

This will give the business the most practical view of how well it can withstand a fully-fledged cyber-attack.

Sunday, September 25, 2016

-snip- update 16 DEC: With the recent announcement of yet another Yahoo! breach, this one from 2013, I fully expect that the information below applies to data from 2013 as well, not just the 2014 breach.

Following-on from my previous Blog post, I decided to give more attention to the domains aspect of the Yahoo! data leak.

Side Note: This blog post is not intended to discourage, or force anyone to stop using, Yahoo! services. Like any other provider, Yahoo! maintains a high level of security and complies with international laws and best practices. However, this article does address the issue of data having been leaked in 2014 - which has been confirmed by Yahoo! - and tries to provide more insight into the people possibly affected by the leak.

As articles like the one on CNN Money (click here) state, many people may have Yahoo! accounts without even knowing it. A prime example is email hosting that Yahoo! allows you to do via their business email services. This gives you your own email address, while Yahoo! manages all the back-end work.

Similar to how Google allows you to host your domain with Google Apps, Yahoo! allows you to host your domain and thus email and other services with them. What this means of course, is that the login account Yahoo! kept in its database for your "custom" domain was also stolen in the leak.

I decided to do an analysis to see what domains are hosting their services with Yahoo!. The best way for me to achieve this, as someone who loves password cracking, was to use a wordlist of domains - and compare their MX records to see who they host their email with.

My research led me to believe that Yahoo! email services would point to MX records like the ones below:

am0.yahoodns.net

mx-biz.mail.am0.yahoodns.net.

The common pattern there is that am0.yahoodns.net is associated with Yahoo! accounts - in particular email since we are looking at MX records here.

Using a wordlist of 560 000 domain names, I set out to find which ones are hosting their email at Yahoo!. I already knew of some, so I used those to double-check my script's findings. Keep in mind that this is not necessarily a complete list, since my data source was 560 000 domains, not all domains.

A Python script was written to perform MX lookups on all 560 000 domains and log the ones hosted at Yahoo!. The results of my findings are shown below:
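The classification step of such a script boils down to a suffix check on each MX host. Here is a minimal sketch of that check (the function name is mine, and I omit the DNS MX lookups the real script performs):

```python
# A domain counts as Yahoo-hosted email if any of its MX hosts falls under
# am0.yahoodns.net (e.g. mx-biz.mail.am0.yahoodns.net.).
def is_yahoo_hosted(mx_hosts):
    return any(
        h.rstrip(".").lower().endswith("am0.yahoodns.net")
        for h in mx_hosts
    )

print(is_yahoo_hosted(["mx-biz.mail.am0.yahoodns.net."]))  # True
print(is_yahoo_hosted(["aspmx.l.google.com."]))            # False
```

The real script would feed this function the MX records returned by a DNS library for each of the 560 000 domains.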

Number of Domains using Yahoo! Email Services

My research shows that at least 572 162* domains are using Yahoo! as their email provider, and thus Yahoo!'s web-based account services and portals. (* Thanks to Royce Williams (@TychoTithonus on Twitter) for contributing a large number of additional domains to our checker.)

Country-Domain Breakdown (or other TLD's)

Which countries are using Yahoo! Email for their domains?

.COMs accounted for the most - 461 911 domains.
Following that were .NETs with 44 128 and .ORGs with 36 150.

Note: Only countries with 10+ domains were counted; there are many more in the 1-10 category.

The USA is in the graph below, as there were too many to include with other countries.

I did not want to release the list of domains I found pointing to Yahoo!'s mail services. I therefore decided to allow users to search for their domain in my list and see if it is hosted with Yahoo!.

If your domain is hosted with Yahoo!, and you used it on or before 2014, there is a chance your data was compromised in the leak.

Conclusion

It is clear that with the stolen login information, attackers have had 2 years to get into not only @yahoo.com accounts but also a vast array of domains belonging to other companies and organizations. Clearly a major impact for people and companies - one that may only be realized much later on.

Friday, September 23, 2016

-snip- update 16 DEC: With the recent announcement of yet another Yahoo! breach, this one from 2013, I fully expect that the information below applies to data from 2013 as well, not just the 2014 breach.

There is little on the "oh no!" scale that can beat waking up to hearing that over 500 million user accounts have been compromised on a very popular portal.

Unfortunately for YAHOO!, such is the case. Well, such was the case - in 2014 - when the breach supposedly occurred. Comments from the company seemed to indicate there was an idea, albeit unconfirmed, that there might have been a compromise, but until now it was not cast in stone. As of yesterday, YAHOO! has confirmed that the breach did in fact occur, that real data was stolen, and that the data can/could include email addresses, hashed passwords, address information, telephone information and so forth.

The prime suspect appears to be a nation state actor, or actors. Until more information on this becomes evident however, it may not be wise to start pointing at possible targets.

What we want to focus on, is the passwords aspect of the breach. The passwords were hashed using the bcrypt algorithm, although some information coming forward indicates there may be mixed hash types - possibly from imported/merged sites that YAHOO! took over. Either way, the majority are bcrypt according to YAHOO!.

Bcrypt, based on the Blowfish cipher, is no small fish when it comes to password security. It is harder to crack (especially with brute-force techniques) than many other widely used hash algorithms, and it is generally very "slow" to attack, as it is not efficiently processed by GPU-based cracking software.

Of course, the real matter is not so much the algorithm as the passwords. Bcrypt may be slow to crack, but if passwords like 123456 or password1 are in use, they are going to fall pretty quickly, bcrypt or not. At 500+ million hashes - assuming all accounts had a hash - there are going to be a lot of "easy" passwords.

So what is an "easy" password for YAHOO!? Let's examine their password rules from around 2015;

Note: I obtained this information from YAHOO! help sites, and other sources. If incorrect please let me know.

From the above, a few things are clear. Firstly, YAHOO! is not the worst candidate for user password requirements out there. Secondly, they avoided and checked for plain words so hopefully, "password" will not account for hundreds or more of the passwords in the leak once cracked.

However, the above requirements do fall a bit short in a few places:

A minimum password length of 8 should really be at least 9 or higher.

Their "tip" says to use lowercase, uppercase, symbols, numbers etc. However, their password verification only checked that you used alphanumeric characters, i.e. instead of )ThE00Big#BrownFOX$ you would be allowed to use thebigbrownfox1.

A maximum of 32 characters may not fit the needs of those using password managers such as 1Password, which can generate much longer candidates and allow high levels of complexity in a password.
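The gap between the tip and the actual check can be made concrete with a short sketch. The rule below ("8-32 characters, at least one letter and one digit") is my reading of the published requirements, not YAHOO!'s actual validation code:

```python
# Hedged sketch of an alphanumeric-only policy check, as described above.
# The exact rule is assumed: 8-32 chars, at least one letter and one digit.
import re

def policy_ok(pw):
    return (8 <= len(pw) <= 32
            and bool(re.search(r"[A-Za-z]", pw))
            and bool(re.search(r"\d", pw)))

print(policy_ok("thebigbrownfox1"))      # True - passes despite low complexity
print(policy_ok(")ThE00Big#BrownFOX$"))  # True - but nothing *enforces* this strength
```

Both passwords pass, which is exactly the problem: the check cannot tell a lazy candidate from a strong one.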

Given the above, we therefore expect to see some of the passwords in the leak cracked as follows:

The other risk we expect to see realized is that people who shared their passwords among a few sites will still be using them there, allowing attackers to "script" scanners to try those credentials on other popular sites.

Additionally, since the YAHOO! account was probably used for email, the entire account and probably the same password could have been used by many to register on other sites.

The result for YAHOO!? I think Per Thorsheim puts it well in this excerpt from CNN Money:

"This is massive," said cybersecurity expert Per Thorsheim on the scale of the hack. "It will cause ripples online for years to come."

Thursday, March 17, 2016

Defence in Depth has been an information assurance approach used for many years to protect networks and systems within ICT environments.

In short, it involves adding multiple layers of security around your “core” protected system/environment so that attacks can be fended off as attackers meet these layers one after the next. Should a control fail to protect a system, another control is there to prevent further attacks. It was introduced by the NSA and has been used by many companies in the past, and of course the present.

However, does DID (Defence-in-Depth) protect against hackers? Does it even slow them down?

This question is asked a lot because many companies that implemented this kind of security approach nonetheless found themselves hacked, or had breaches and other security-related problems.

Like any security methodology or process, DID has to yield to certain business requirements in order to be an enabler of business (this is a topic for another blog post, but suffice it to say, Security has to be a business enabler, not just a defender of business from risks).

In the process of enabling business, certain facilities have to be implemented for DID to allow business services to be provided properly. For example, for a user to use a website with an accounting system, that user needs access to the website. So a firewall rule is created to allow users to reach the web application. Now, for the web application to work, it needs to see the database server behind the DMZ (look the term up if it is new to you), so access is opened for the web server to reach the database server. Layer by layer, we grant access to various systems for them to work together properly.
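To make that pair of openings concrete, here is a hedged sketch of the two firewall rules described above. The addresses and ports are invented for illustration; the syntax is standard iptables:

```shell
# Layer 1: anyone on the internet may reach the web application over HTTPS.
iptables -A FORWARD -p tcp -d 10.0.1.10 --dport 443 -j ACCEPT

# Layer 2: the web server may reach the database server behind the DMZ.
iptables -A FORWARD -p tcp -s 10.0.1.10 -d 10.0.2.20 --dport 5432 -j ACCEPT
```

Each rule is individually legitimate, yet together they form a trusted path from the internet all the way to the database.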

I won't go into more detail, but you get the point. Our sphere of protection, with its layers of defence in depth, now has a sharp "needle" pushed through it to the core to allow business to operate. This is not a problem in itself, since of course the company's security would be useless if it did not enable business systems to function for the company.

However, think of an attacker now. The attacker has access to the same website as “normal users”. Say this hacker finds a vulnerability to exploit in the web application that is available on the internet.

The hacker starts to exploit functions on the web application to access the database within the company. Defence in Depth is no longer a viable method of protection because, as mentioned above, we have opened a hole between the layers for our systems to work. The hacker is thus able to exploit the database and steal data via the web application front-end - all while the company had a Defence in Depth approach in place.

The short fact is that trusted paths have to be carved through your DID implementation for business and other IT functions to operate. And it is through these paths that attackers attack your systems.

In conclusion, DID should be viewed as a piece of a much larger pie. It cannot be viewed as a check-box approach that guarantees security. In fact, it is an approach that should form a foundation towards a tailor-made solution that is implemented to suit your needs – both security, and business.

Add Defence in Depth to your overall strategy, but do not rely solely on it. Cyber Security is an ever-changing, evolving and granular entity that needs the correct input and advisory partners to make it work.

We operate in various countries globally. Contact us for any Cyber Security related needs such as Architecture, Advisory, Ethical Hacking, Threat Intelligence and more.

Sunday, September 28, 2014

No sooner had the dust settled on the first bash bug than a few more vectors were found. And so the term "Shellshock" has been coined to refer to the recent spate of vulnerabilities affecting bash.

Unfortunately, OS X (Mavericks) is not immune to this, and the version of bash included with your installation is also vulnerable. I'm confident Apple will patch it soon, but in case you want it patched ASAP you can follow the steps below.

NOTE: If you don't have Xcode, you will need it to compile the replacement bash, and it's a LARGE download. Be aware of this before proceeding.

Monday, February 10, 2014

So, you've tried to install AMD's Linux driver 13.12 on an Ubuntu Server (12.04.4 or similar) and find yourself with a failed driver install. My blog entry this week will give you an option to get past that.

After investigating, you open the log file and find the following errors from the installer:

Building module:
cleaning build area....
cd /var/lib/dkms/fglrx/13.251/build; sh make.sh --nohints --uname_r=3.11.0-15-generic --norootcheck.....(bad exit status: 1)
[Error] Kernel Module : Failed to build fglrx-13.251 with DKMS
[Error] Kernel Module : Removing fglrx-13.251 from DKMS
------------------------------
Deleting module version: 13.251
completely from the DKMS tree.
------------------------------

The problem is that the DKMS scripts are calling methods that are not supported in this kernel version. The kernel version I've found to fail is 3.11.0-15-generic.

Now there are a few options here, one of which is to extract and edit the Catalyst driver (simply add --extract &lt;foldername&gt; to get the AMD driver installer to just extract its files to the target folder).

However for those who just want to get moving, especially in my case since this machine is only for running GPU-based Password Crackers (Hashcat), there is a simpler way.

You need to use a kernel below 3.11. Therefore, simply do:

aptitude search linux-image | grep -i 3.9

(or grep for 3.8 if you wish, etc. The key is to stay at 3.10 or below.)

In my case, I'm going to take kernel linux-image-3.8.0-35-generic, so I run:

aptitude install linux-image-3.8.0-35-generic

Once that is done, you need to modify GRUB to boot that kernel. In a stock install of Ubuntu Server, you will have two boot-up kernel options: 0 is the default (3.11.x) and 1 is its recovery mode. Therefore we want to tell GRUB to boot kernel option 2 - which is now our new 3.8.0-35 kernel.

Therefore:

vi /etc/default/grub

Change this line:

GRUB_DEFAULT=0

to:

GRUB_DEFAULT=2

(or another number if you have more kernels installed)

Save the file, and then install your new GRUB config by running:

update-grub

Now, reboot to boot your new kernel. If you see at boot-time on the local console that the wrong option is being selected, just choose the right boot kernel and then make a note of which option you should change the default to.

Once rebooted, confirm the right kernel is now running with:

uname -r

It should say 3.8.0-35-generic - or whichever kernel under 3.11 you chose.

The only thing left to do is to install the linux-headers for your new kernel. To do this, just execute:

aptitude install linux-headers-`uname -r`

Once that is done, you're all set! Go to your AMD driver directory and re-run the installer:

./amd-catalyst-13.12-linux-x86.x86_64.run

Thursday, September 5, 2013

We've all been in this position: you're at your wits' end trying to figure out why you can't crack any more passwords in the list you have.

And then someone mentions "By the way, they're not English characters..."

Now, usually that's enough to scare you away pretty quickly, and rightfully so. Different platforms all have different ways of using, displaying and identifying non-English characters. But all is not lost; there is a way.

Below is an example of how to run a crack against the Arabic alphabet and characters. This is done on an Ubuntu machine, using hex codes based on the UTF-8 character set.

First, a primer:
We are using oclHashcat-plus, which has a parameter called "--hex-charset". This very important and handy switch tells Hashcat that the custom character sets you specify are actually in hex, not normal English characters.

For example, a normal character set would be -1 ?l?u123, which means Hashcat will brute-force using all lower-case letters, all upper-case letters and only the numbers 1, 2 and 3. From this, the word "ILOVEu123" could be derived.

Now, in the --hex-charset case, Hashcat treats all the character sets as hex. Therefore -1 ABBBBC means Hashcat will take AB as a character, BB as a character and BC as a character, and brute-force their representative text from hex.

If we look at the Arabic character set in UTF-8 in the encoding table we have:

Notice that column 3 has the hex representation of the Arabic character. In this case it's D886 in the first line.

If you examine the list closely, you will notice that there's a base hex code (shared across the character set) and then the actual character hex code. So D8 is the base, and 86 is a character, 8A is a character, etc.

Feeding this into Hashcat is a two-fold process. Remember, it's two hex codes that make one character. But we don't know when D8 is used and what might be paired with it, so to get around this we make two custom character sets in Hashcat: one is our base hex, the other is the character hex.
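As a sanity check on that two-byte structure, you can derive both hex sets directly from the Arabic letters themselves. This Python sketch is my own illustration (the original approach read the values off the UTF-8 table instead):

```python
# Each Arabic letter encodes to two UTF-8 bytes: a base (lead) byte and a
# character (continuation) byte. Splitting them yields the two Hashcat charsets.
arabic = "ابتثجحخدذرزسشصضطظعغفقكلمنهوي"

lead_bytes = sorted({ch.encode("utf-8").hex()[:2] for ch in arabic})
char_bytes = sorted({ch.encode("utf-8").hex()[2:] for ch in arabic})

print(lead_bytes)                 # base hex codes for -1, e.g. ['d8', 'd9']
print("".join(char_bytes))        # character hex codes for -2
print("ت".encode("utf-8").hex())  # 'd8aa': two hex codes, one letter
```

The two printed sets are what you would concatenate into the -1 and -2 custom charsets alongside --hex-charset.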

This will brute-force up to 9 Arabic characters. (NOTE: I left out everything else oclHashcat-plus would need to run; fill that in yourself, as it's not in scope here.)

Sample outputs are:تفاحة الترانزستور

etc etc.

Using this approach, I managed to get a very high hit rate on an all-Arabic hash list. Customize it as you need, and remember you can apply it to any hex character set, or even add English UTF-8 hex codes to brute-force a mix of English and Arabic.

Special thanks to Atom, and to http://www.utf8-chartable.de for their easy-to-read and use UTF-8 tables.