One of the Most Secure Applications of the BIBIFI Contest

Build it, Break it, Fix it (BIBIFI) is the name of the Maryland Cybersecurity Center's contest, used as the Cybersecurity Specialization Capstone Project on Coursera for those who successfully passed the Usable Security, Software Security, Cryptography and Hardware Security courses.

In this post I'd like to share the lessons learned during this contest and how we ended up with one of the most secure applications at the end of the break it phase. It was very interesting because, more than ever, I was the developer: even with a security background, I built an application against security professionals. Some conflicts between development and information security became crystal clear to me after this contest. I will start by explaining the contest itself and then move on to the details of each phase.

Contest

In this contest, participants pass through 3 rounds:

Build it: build an application according to the contest's specification

Break it: break applications from other teams to score points. Breaks are scored as either correctness bugs or security bugs (exploits)

Fix it: fix the bugs found during the break it phase

Before any round began, everyone had to join a team of 1 to 5 people. So everybody published their skills and looked for candidates who could help, either by coding in the build phase or by breaking code in the break phase. In the end, more than 100 teams from all over the world were formed.

This is where I met @zeafonso, @tche, @apolishc and @lucas. We called ourselves "CyberMarmitex". The word "marmita" is Brazilian for lunch box, so "CyberMarmitex" doesn't make any sense. Don't worry about that, just keep reading hehe.

Build It

One thing I can say for sure is that everybody embraced the Rugged Software Manifesto right off the bat. We all knew we were going to be attacked two weeks after the start of the build it phase. Still, was it even possible to get pwned when we all knew the attacks were on their way? That's what I wondered. After completing a security specialization and knowing the importance of information security, I'd have bet that few vulnerabilities would be found. But boy, was I wrong.

During this phase we needed to develop 2 command line programs: a client and a server. Any language could be used. I won't go into much detail, so as not to spoil the excitement in case you want to try it yourself. Build points were given for correctness, i.e., passing unit tests, and for performance.

It took a lot of time. I coded 90% of both programs in Ruby, and we implemented basic validation to pass the correctness tests, which included security verifications to some extent, e.g., rejecting invalid inputs, but not all of them. @apolishc made some code improvements, implemented replay protection on the ATM, and helped us pass some tests as well.

Code wasn't everything. We had a threat model made by @lucas. @zeafonso and @tche were primarily responsible for building a fuzzer to get us ahead of other teams in the break it phase. But things changed too much in the middle of the week and we didn't use the fuzzer in the end.

Near the deadline to submit our code, I was somewhat crazy. I implemented encryption from client to server, but not from server to client. @zeafonso pointed this out, along with the replay attack protection and the correct AES mode to prevent padding oracle attacks.
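As an illustration of that last point (this is not our actual contest code, just a sketch), an authenticated mode such as AES-256-GCM sidesteps CBC padding oracles entirely, since there is no CBC padding to probe, and its auth tag lets the receiver detect tampering:

```ruby
require 'openssl'
require 'securerandom'

# Encrypt with AES-256-GCM: no CBC-style padding for an oracle to attack,
# and the auth tag lets the receiver detect any tampering with the ciphertext.
def encrypt(key, plaintext)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
  cipher.key = key
  iv = cipher.random_iv                       # fresh 12-byte nonce per message
  ciphertext = cipher.update(plaintext) + cipher.final
  [iv, cipher.auth_tag, ciphertext]
end

def decrypt(key, iv, tag, ciphertext)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
  cipher.key = key
  cipher.iv = iv
  cipher.auth_tag = tag
  cipher.update(ciphertext) + cipher.final    # raises CipherError if tampered
end
```

The key point is that decryption of a modified ciphertext fails loudly instead of returning garbage or leaking padding validity.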

Skipping building details, the takeaway in this phase for me was that development and information security need to be executed by 2 different people, even if the developer has an information security background. The reason for that is that in the end, when the deadline is near, the developer focuses on making the application work more than anything else. And it's not wrong for the developer to think that way. It's his ultimate responsibility. Furthermore, security for a broken app doesn't have any value at all.

As a consequence, stupid things happen, like forgetting to encrypt from server to client. It's funny. So, even in your company, in your team, if you have someone who is a good software engineer and has a security background, make sure to find one more security person to support them. Otherwise sloppy security controls may take the place of effective ones.

Break It

The war had begun. A lot of teams started scoring points from correctness bugs, like trying variations of the parameters accepted by the client. Some were very dumb, e.g., using an invalid port like 9999999. Some were clever: the parameter "-a <account>" must accept the characters [a-zA-Z\-], so a valid value could be "-b". Trying "-a -b" usually makes programs crash, but they should work normally, accepting "-b" as the value of "-a", which was rare.

Our team suffered from correctness bugs, but then security bugs (exploits) started to show up across many teams a few days later. Security bugs could be split into confidentiality bugs and integrity bugs. If information between client and server could be read during a man-in-the-middle (MITM) attack, that's a confidentiality exploit. If some information could be manipulated, tampered with by a MITM attack, that would be an integrity exploit.

The majority fell to replay attacks, both from client to server and from server to client. It was hard for teams to remember to implement this protection by hand. Choosing the crypto algorithm was hard enough already.

To prevent replay attacks we used a random number generator to create a unique string (a nonce) to be sent to the server. The server would store this string to prevent double processing and also return it to the client, so the client could tell which request a given response belonged to. This solved replay attacks in both directions.
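A minimal Ruby sketch of that scheme (the class and method names are mine, not our contest code):

```ruby
require 'securerandom'
require 'set'

# Server side: remember every nonce already processed and reject duplicates,
# so a captured request replayed by a MITM is not processed twice.
class ReplayGuard
  def initialize
    @seen = Set.new
  end

  # True the first time a nonce is seen, false on any replay.
  def fresh?(nonce)
    return false if @seen.include?(nonce)
    @seen.add(nonce)
    true
  end
end

# Client side: attach a fresh random nonce to every request, and require the
# response to echo it back, so a replayed *response* is also detectable.
def new_request(payload)
  { nonce: SecureRandom.hex(16), payload: payload }
end

def response_matches?(request, response)
  response[:nonce] == request[:nonce]
end
```

In a real deployment the server's nonce set would need expiry or a timestamp window to keep it bounded; the sketch keeps everything in memory for clarity.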

So, even after all the protections we put in place, one team, "b01lers", found one vulnerability in our application: our only security bug. We couldn't know what it was until the break it round ended, so we started to guess, and our best guess was a brute-force attack on some parameter passed from the client to the server or vice versa. We were right, but we could have been more specific.

B01lers took advantage of the lack of padding in our ciphertext. Error messages were shorter and successful messages were longer; based on that, they used brute force to guess some parameters. That was clever. To fix it, we had to add padding so that all ciphertexts would have the same length. However, there is a trick to it. Suppose we fix the padded length at some value, e.g., 50 bytes, and a client parameter can also be up to 50 bytes long: if the attacker forces a huge value for that parameter, the padding won't be effective. So there is math to be done before setting the padding length, based on the maximum possible message size. This is one of the practical details that we, as security consultants, tend to overlook most of the time. "Hey, put the damn padding", you'd say. And in the end, the padding wasn't well implemented.
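A sketch of the idea in Ruby. PAD_TO is a hypothetical bound: the math mentioned above means it must be chosen at or above the largest message the protocol can ever produce, otherwise lengths still leak at the top end:

```ruby
# Pad every plaintext to one fixed size before encryption, so every
# ciphertext has the same length and its size leaks nothing about success
# vs. error. PAD_TO must be >= the maximum possible message size.
PAD_TO = 1024

def pad(msg)
  raise ArgumentError, 'message exceeds padding bound' if msg.bytesize > PAD_TO
  # A 4-byte length prefix lets the padding be stripped unambiguously.
  [msg.bytesize].pack('N') + msg + "\x00" * (PAD_TO - msg.bytesize)
end

def unpad(padded)
  len = padded[0, 4].unpack1('N')
  padded[4, len]
end
```

Every padded message is exactly 4 + PAD_TO bytes, whether it carries a short error or a long success response.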

There were 3 other teams with no reported vulnerability at all, as they added padding to prevent side channel attacks based on ciphertext length. However, from what I saw in their code, they were vulnerable to timing-based side channel attacks, although no one wrote exploits for them.
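A typical example of such a timing leak is comparing a secret (a token, a MAC) with `==`, which returns as soon as the first byte differs, so the attacker can guess it byte by byte. A common fix is a constant-time comparison that always touches every byte (illustrative sketch, not any team's actual code):

```ruby
# Compare two strings in time independent of where they first differ.
# OR-ing the XOR of every byte pair means the loop never exits early.
def secure_compare(a, b)
  return false unless a.bytesize == b.bytesize
  diff = 0
  a.bytes.zip(b.bytes) { |x, y| diff |= x ^ y }
  diff.zero?
end
```

The length check still leaks the length, which is fine for fixed-size values like MACs and why secrets compared this way are usually hashed to a fixed size first.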

The maximum number of exploits against a single application was 26. So the answer to the question I asked myself in the build it phase is a HUGE YES. Even in 2015, after all the security awareness, teams were pwned very badly. Imagine developers WITHOUT a security specialization. The landscape is fuc**** dirty. Holy cow ...

Fix It

There is not much to say here. We had to fix bugs and dispute reports that weren't actually bugs. We fixed 60% of the reports, the b01lers exploit and the correctness bugs (basically fixing regular expressions), with support from @tche and from @lucas's spreadsheet organizing all the bugs, and we disputed the other 40%.

@tche also helped a lot during the break phase, earning points for our team. He basically carried the entire team on his back in that phase.

Anyway, we had good teamwork, despite having to juggle our day-to-day work with this capstone project. I confess that when we got to 5 members, I thought the point of having a team was to recruit 5 people and pray that at least 3 of them would work well, but in the end everybody contributed and the team proved itself strong enough to earn the most secure application award.

Actually, this award of being one of the most secure applications is self-given; there is nothing official from BIBIFI about it. The contest counted performance, correctness bugs and security bugs to select the best builders. As correctness bugs hurt us a bit, teams with more exploits but fewer correctness bugs could rank better than us, even though exploits carry more impact. We also wrote in Ruby, which wasn't the most performant language at all.

I personally don't mind much, even knowing that good software is Reliable (Correctness), Resilient (Withstands Attacks) and Recoverable (Rapidly responding to exploits). The challenge for me was the security attacks. The ability to withstand attacks from professionals around the globe was the most exciting thing for me.

I'd like to congratulate CyberMarmitex, thank @apolishc and @lucas for reviewing the English of this post, and thank the BIBIFI contest and its sponsors. I hope we can change this landscape of insecure development, which is very, very ugly. It's very hard to find that amount of security bugs at the end of a security specialization and not be disappointed.

So, if you develop applications, go learn security, spread the knowledge and make the internet a safer place, please.

Thank you.

* Note: in case you're wondering whether, after reading this post, you'd perform better if you participated in the next BIBIFI, I'd like to point out that yes, this post helps, but only as much as any client/server security article. There are no answers here, no specs, no source code. We (all teams) knew that we needed a secure application, and all those controls were explained in the courses we had to pass, so one way to see this article is as a bunch of notes that I've taken. Still, knowing the controls represents only 50% of what needs to be done. There is a huge gap between naming a security control and implementing it. I discuss such controls on this blog separately, not related to BIBIFI, as you can see on the homepage.