Friday, 17 July 2015

The following should resolve the most common questions and points of confusion with respect to Javascript cryptography:

WHAT DO YOU MEAN, "JAVASCRIPT CRYPTOGRAPHY"?

We mean attempts to implement security features in browsers using cryptographic algorithms implemented in whole or in part in Javascript.

You may now be asking yourself, "What about Node.js? What about non-browser Javascript?". Non-browser Javascript cryptography is perilous, but not doomed. For the rest of this document, we're referring to browser Javascript when we discuss Javascript cryptography.

WHY DOES BROWSER CRYPTOGRAPHY MATTER?

The web hosts most of the world's new crypto functionality. A significant portion of that crypto has been implemented in Javascript, and is thus doomed. This is an issue worth discussing.

WHAT ARE SOME EXAMPLES OF "DOOMED" BROWSER CRYPTOGRAPHY?

You have a web application. People log in to it with usernames and passwords. You'd rather they didn't send their passwords in the clear, where attackers can capture them. You could use SSL/TLS to solve this problem, but that's expensive and complicated. So instead, you create a challenge-response protocol, where the application sends Javascript to user browsers that gets them to send HMAC-SHA1(password, nonce) to prove they know a password without ever transmitting the password.

Or, you have a different application, where users edit private notes stored on a server. You'd like to offer your users the feature of knowing that their notes can't be read by the server. So you generate an AES key for each note, send it to the user's browser to store locally, forget the key, and let the user wrap and unwrap their data.

WHAT'S WRONG WITH THESE EXAMPLES?

They will both fail to secure users.

REALLY? WHY?

For several reasons, including the following:

Secure delivery of Javascript to browsers is a chicken-egg problem.

Browser Javascript is hostile to cryptography.

The "view-source" transparency of Javascript is illusory.

Until those problems are fixed, Javascript isn't a serious crypto research environment, and suffers for it.

If you don't trust the network to deliver a password, or, worse, don't trust the server not to keep user secrets, you can't trust them to deliver security code. The same attacker who was sniffing passwords or reading diaries before you introduce crypto is simply hijacking crypto code after you do.

THAT ATTACK SOUNDS COMPLICATED! SURELY, YOU'RE BETTER OFF WITH CRYPTO THAN WITHOUT IT?

There are three misconceptions embedded in that common objection, all of them grave.

First, although the "hijack the crypto code to steal secrets" attack sounds complicated, it is in fact simple. Any attacker who could swipe an unencrypted secret can, with almost total certainty, intercept and alter a web request. Intercepting requests does not require advanced computer science. Once an attacker controls the web requests, the work needed to fatally wound crypto code is trivial: the attacker need only inject another <SCRIPT> tag to steal secrets before they're encrypted.
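The injected script's job is trivial. A sketch (with a hypothetical `encryptNote` standing in for the page's real crypto entry point; the "exfiltration" here just records the secret locally so the example is self-contained):

```javascript
// What an injected <SCRIPT> might do: wrap the page's own crypto routine
// so the secret leaks before it is ever encrypted. encryptNote is a
// hypothetical stand-in for the page's legitimate crypto function.
const stolen = [];

let encryptNote = (plaintext, key) => "<ciphertext>";

// Attacker-injected code: a few lines are enough to defeat the crypto.
const original = encryptNote;
encryptNote = (plaintext, key) => {
  stolen.push({ plaintext, key }); // exfiltrate (here: just record it)
  return original(plaintext, key); // behave normally otherwise
};

encryptNote("my diary entry", "secret-key");
console.log(stolen.length); // 1
```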

Second, the difficulty of an attack is irrelevant. What's relevant is how tractable the attack is. Cryptography deals in problems that are intractable even stipulating an attacker with as many advanced computers as there are atoms composing the planet we live on. On that scale, the difficulty of defeating a cryptosystem delivered over an insecure channel is indistinguishable from "so trivial as to be automatic". Further perspective: we live and work in an uncertain world in which any piece of software we rely on could be found vulnerable to new flaws at any time. But all those flaws require new R&D effort to discover. Relative to the difficulty of those attacks, against which the industry deploys hundreds of millions of dollars every year, the difficulty of breaking Javascript crypto remains imperceptibly different from "trivial".

Finally, the security value of a crypto measure that fails can easily fall below zero. The most obvious way that can happen is for impressive-sounding crypto terminology to convey a false sense of security. But there are worse ways; for instance, flaws in login crypto can allow attackers to log in without ever knowing a user's password, or can disclose one user's documents to another user.

WHY CAN'T I USE TLS/SSL TO DELIVER THE JAVASCRIPT CRYPTO CODE?

You can. It's harder than it sounds, but you can safely transmit Javascript crypto to a browser using SSL. The problem is, having established a secure channel with SSL, you no longer need Javascript cryptography; you have "real" cryptography. Meanwhile, the Javascript crypto code is still imperiled by other browser problems.

WHAT'S HARD ABOUT DEPLOYING JAVASCRIPT OVER SSL/TLS?

You can't simply send a single Javascript file over SSL/TLS. You have to send all the page content over SSL/TLS. Otherwise, attackers will hijack the crypto code using the least-secure connection that builds the page.

HOW ARE BROWSERS HOSTILE TO CRYPTOGRAPHY?

In a dispiriting variety of ways, among them:

The prevalence of content-controlled code.

The malleability of the Javascript runtime.

The lack of systems programming primitives needed to implement crypto.

The crushing weight of the installed base of users.

Each of these issues creates security gaps that are fatal to secure crypto. Attackers will exploit them to defeat systems that should otherwise be secure. There may be no way to address them without fixing browsers.

WHAT DO YOU MEAN BY "CONTENT-CONTROLLED CODE"? WHY IS IT A PROBLEM?

We mean that pages are built from multiple requests, some of them conveying Javascript directly, and some of them influencing Javascript using DOM tag attributes (such as "onmouseover").

OK, THEN I'LL JUST SERVE A CRYPTOGRAPHIC DIGEST OF MY CODE FROM THE SAME SERVER SO THE CODE CAN VERIFY ITSELF.

This won't work.

Content-controlled code means you can't reason about the security of a piece of Javascript without considering every other piece of content that built the page that hosted it. A crypto routine that is completely sound by itself can be utterly insecure hosted on a page with a single, invisible DOM attribute that backdoors routines that the crypto depends on.

This isn't an abstract problem. It's an instance of "Javascript injection", better known to web developers as "cross-site scripting". Virtually every popular web application ever deployed has fallen victim to this problem, and few researchers would take the other side of a bet that most will again in the future.

Worse still, browsers cache both content and Javascript aggressively; caching is vital to web performance. Javascript crypto can't control the caching behavior of the whole browser with specificity, and for most applications it's infeasible to entirely disable caching. This means that unless you can create a "clean-room" environment for your crypto code to run in, pulling in no resource tainted by any other site resource (from layout to UX), you can't even know what version of the content you're looking at.

WHAT'S A "MALLEABLE RUNTIME"? WHY IS THAT BAD?

We mean you can change the way the environment works at runtime. And it's not bad; it's a fantastic property of a programming environment, particularly one used "in the small" like Javascript often is. But it's a real problem for crypto.

The problem with running crypto code in Javascript is that practically any function that the crypto depends on could be overridden silently by any piece of content used to build the hosting page. Crypto security could be undone early in the process (by generating bogus random numbers, or by tampering with constants and parameters used by algorithms), or later (by spiriting key material back to an attacker), or --- in the most likely scenario --- by bypassing the crypto entirely.

There is no reliable way for any piece of Javascript code to verify its execution environment. Javascript crypto code can't ask, "am I really dealing with a random number generator, or with some facsimile of one provided by an attacker?" And it certainly can't assert "nobody is allowed to do anything with this crypto secret except in ways that I, the author, approve of". These are two properties that often are provided in other environments that use crypto, and they're impossible in Javascript.
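The "bogus random numbers" scenario fits in a few lines. This sketch (with a hypothetical `generateKey`, run in Node only so it's self-contained) shows how one line of earlier-loaded content silently makes every key predictable:

```javascript
// The malleable-runtime problem in miniature: any script that loaded
// earlier on the page can silently replace the primitives the crypto
// depends on. generateKey is a hypothetical key-generation routine.
function generateKey(bytes) {
  let key = "";
  for (let i = 0; i < bytes; i++) {
    key += Math.floor(Math.random() * 256).toString(16).padStart(2, "0");
  }
  return key;
}

// Attacker-controlled content that ran first:
Math.random = () => 0.5;

// Every "random" key is now completely predictable.
const k1 = generateKey(16);
const k2 = generateKey(16);
console.log(k1 === k2); // true
```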

WELL THEN, COULDN'T I WRITE A SIMPLE BROWSER EXTENSION THAT WOULD ALLOW JAVASCRIPT TO VERIFY ITSELF?

You could. It's harder than it sounds, because you'd have to verify the entire runtime, including anything the DOM could contribute to it, but it is theoretically possible. But why would you ever do that? If you can write a runtime verifier extension, you can also do your crypto in the extension, and it'll be far safer and better.

"But", you're about to say, "I want my crypto to be flexible! I only want the bare minimum functionality in the extension!" This is a bad thing to want, because ninety-nine and five-more-nines percent of the crypto needed by web applications would be entirely served by a simple, well-specified cryptosystem: PGP.

The PGP cryptosystem is approaching two decades of continuous study. Just as all programs evolve towards a point where they can read email, and all languages contain a poorly-specified and buggy implementation of Lisp, most crypto code is at heart an inferior version of PGP. PGP sounds complicated, but there is no reason a browser-engine implementation would need to be (for instance, the web doesn't need all the keyring management, the "web of trust", or the key servers). At the same time, much of what makes PGP seem unwieldy is actually defending against specific, dangerous attacks.

YOU WANT MY BROWSER TO HAVE MY PGP KEY?

Definitely not. It'd be nice if your browser could generate, store, and use its own PGP keys though.

WHAT SYSTEMS PROGRAMMING FUNCTIONALITY DOES JAVASCRIPT LACK?

Here's a starting point: a secure random number generator.

HOW BIG A DEAL IS THE RANDOM NUMBER GENERATOR?

Virtually all cryptography depends on secure random number generators (crypto people call them CSPRNGs). In most schemes, the crypto keys themselves come from a CSPRNG. If your PRNG isn't CS, your scheme is no longer cryptographically secure; it is only as secure as the random number generator.

BUT HOW EASY IS IT TO ATTACK AN INSECURE RANDOM GENERATOR, REALLY?

It's actually hard to say, because in real cryptosystems, bad RNGs are a "hair on fire" problem solved by providing a real RNG. Some RNG schemes are pencil-and-paper solvable; others are "crackable", like an old DES crypt(3) password. It depends on the degree of badness you're willing to accept. But: no SSL system would accept any degree of RNG badness.

BUT I CAN GET RANDOM NUMBERS OVER THE INTERNET AND USE THEM FOR MY CRYPTO!

How can you do that without SSL? And if you have SSL, why do you need Javascript crypto? Just use the SSL.

I'LL USE RANDOM.ORG. THEY SUPPORT SSL.

Imagine a system that involved your browser encrypting something, but filing away a copy of the plaintext and the key material with an unrelated third party on the Internet just for safekeeping. That's what this solution amounts to. You can't outsource random number generation in a cryptosystem; doing so outsources the security of the system.

WHAT ELSE IS THE JAVASCRIPT RUNTIME LACKING FOR CRYPTO IMPLEMENTORS?

Two big ones are secure erase (Javascript is usually garbage collected, so secrets are lurking in memory potentially long after they're needed) and functions with known timing characteristics. Real crypto libraries are carefully studied and vetted to eliminate data-dependent code paths --- ensuring that one similarly-sized bucket of bits takes as long to process as any other --- because without that vetting, attackers can extract crypto keys from timing.

BUT OTHER LANGUAGES HAVE THE SAME PROBLEM!

That's true. But what's your point? We're not saying Javascript is a bad language. We're saying it doesn't work for crypto inside a browser.

BUT PEOPLE RELY ON CRYPTO IN LANGUAGES LIKE RUBY AND JAVA TODAY. ARE THEY DOOMED, TOO?

Some of them are; crypto is perilous.

But many of them aren't, because they can deploy countermeasures that Javascript can't. For instance, a web app developer can hook up a real CSPRNG from the operating system with an extension library, or call out to constant-time compare functions.

If Python were the standard browser content programming language, browser Python crypto would also be doomed.

WHAT ELSE IS JAVASCRIPT MISSING?

A secure keystore.

WHAT'S THAT?

A way to generate and store private keys that doesn't depend on an external trust anchor.

EXTERNAL WHAT NOW?

It means, there's no way to store a key securely in Javascript that couldn't be expressed with the same fundamental degree of security by storing the key on someone else's server.

WAIT, CAN'T I GENERATE A KEY AND USE IT TO SECURE THINGS IN HTML5 LOCAL STORAGE? WHAT'S WRONG WITH THAT?

That scheme is, at best, only as secure as the server that fed you the code you used to secure the key. You might as well just store the key on that server and ask for it later. For that matter, store your documents there, and keep the moving parts out of the browser.
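In outline, the scheme looks like this (the browser's `localStorage` API is sketched with a plain object so the example runs anywhere; the key name is hypothetical). The point is that the key is only ever touched by code the server delivered, so the server is the trust anchor either way:

```javascript
// Minimal stand-in for the browser's window.localStorage.
const localStorage = {
  store: {},
  setItem(k, v) { this.store[k] = v; },
  getItem(k) { return this.store[k] ?? null; },
};

// Code delivered by the server on first visit stores the key locally:
localStorage.setItem("note-key", "deadbeefdeadbeefdeadbeefdeadbeef");

// Code delivered by the server on every later visit reads it back --
// and a compromised server can just as easily deliver code that
// uploads the key instead.
const key = localStorage.getItem("note-key");
console.log(key !== null); // true
```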

THESE DON'T SEEM LIKE EARTH-SHATTERING PROBLEMS. WE'RE SO CLOSE TO HAVING WHAT WE NEED IN BROWSERS, WHY NOT GET TO WORK ON IT?

Check back in 10 years when the majority of people aren't running browsers from 2008.

THAT'S THE SAME THING PEOPLE SAY ABOUT WEB STANDARDS.

Compare downsides: using Arial as your typeface when you really wanted FF Meta, or coughing up a private key for a crypto operation.

We're not being entirely glib. Web standards advocates care about graceful degradation, the idea that a page should at least be legible even if the browser doesn't understand some advanced tag or CSS declaration.

"Graceful degradation" in cryptography would imply that the server could reliably identify which clients it could safely communicate with, and fall back to some acceptable substitute in cases where it couldn't. The former problem is unsolved even in the academic literature. The latter recalls the chicken-egg problem of web crypto: if you have an acceptable lowest-common-denominator solution, use that instead.

THIS IS WHAT YOU MEANT WHEN YOU REFERRED TO THE "CRUSHING BURDEN OF THE INSTALLED BASE"?

Yes.

AND WHEN YOU SAID "VIEW-SOURCE TRANSPARENCY WAS ILLUSORY"?

We meant that you can't just look at a Javascript file and know that it's secure, even in the vanishingly unlikely event that you were a skilled cryptographer, because of all the reasons we just cited.

NOBODY VERIFIES THE SOFTWARE THEY DOWNLOAD BEFORE THEY RUN IT. HOW COULD THIS BE WORSE?

Nobody installs hundreds of applications every day. Nobody re-installs each application every time they run it. But that's what people are doing, without even realizing it, with web apps.

This is a big deal: it means attackers have many hundreds of opportunities to break web app crypto, where they might only have one or two opportunities to break a native application.

BUT PEOPLE GIVE THEIR CREDIT CARDS TO HUNDREDS OF RANDOM PEOPLE INSECURELY.

An attacker can exploit a flaw in a web app across tens or hundreds of thousands of users at one stroke. They can't get a hundred thousand credit card numbers on the street.

YOU'RE JUST NOT GOING TO GIVE AN INCH ON THIS, ARE YOU?

Nobody would accept any of the problems we're dredging up here in a real cryptosystem. If SSL/TLS or PGP had just a few of these problems, it would be front-page news in the trade press.

YOU SAID JAVASCRIPT CRYPTO ISN'T A SERIOUS RESEARCH AREA.

It isn't.

HOW MUCH RESEARCH DO WE REALLY NEED? WE'LL JUST USE AES AND SHA256. NOBODY'S TALKING ABOUT INVENTING NEW CRYPTOSYSTEMS.

AES is to "secure cryptosystems" what uranium oxide pellets are to "a working nuclear reactor". Ever read the story of the radioactive boy scout? He bought an old clock painted with radium and found a vial of radium paint inside. Using that and a strip of beryllium swiped from his high school chemistry lab, he built a radium gun that irradiated pitchblende. He was on his way to building a "working breeder reactor" before moon-suited EPA officials shut him down and turned his neighborhood into a Superfund site.

The risks in building cryptography directly out of AES and SHA routines are comparable. It is capital-H Hard to construct safe cryptosystems out of raw algorithms, which is why you generally want to use high-level constructs like PGP instead of low-level ones.

WHAT ABOUT THINGS LIKE SJCL, THE STANFORD CRYPTO LIBRARY?

SJCL is great work, but you can't use it securely in a browser for all the reasons we've given in this document.

SJCL is also practically the only example of a trustworthy crypto library written in Javascript, and it's extremely young.

The authors of SJCL themselves say, "Unfortunately, this is not as great as in desktop applications because it is not feasible to completely protect against code injection, malicious servers and side-channel attacks." That last example is a killer: what they're really saying is, "we don't know enough about Javascript runtimes to know whether we can securely host cryptography on them". Again, that's painful-but-tolerable in a server-side application, where you can always call out to native code as a workaround. It's death to a browser.

AREN'T YOU CREATING A SELF-FULFILLING PROPHECY ABOUT JAVASCRIPT CRYPTO RESEARCH?

People don't take Javascript crypto seriously because they can't get past things like "there's no secure way to key a cryptosystem" and "there's no reliably safe way to deliver the crypto code itself" and "there's practically no value to doing crypto in Javascript once you add SSL to the mix, which you have to do to deliver the code".

THESE MAY BE REAL PROBLEMS, BUT WE'RE TALKING ABOUT MAKING CRYPTO AVAILABLE TO EVERYONE ON THE INTERNET. THE REWARDS OUTWEIGH THE RISKS!

DETROIT --- A man who became the subject of a book called "The Radioactive Boy Scout" after trying to build a nuclear reactor in a shed as a teenager has been charged with stealing 16 smoke detectors. Police say it was a possible effort to experiment with radioactive materials.

The world works the way it works, not the way we want it to work. It's one thing to point at the flaws that make it hard to do cryptography in Javascript and propose ways to solve them; it's quite a different thing to simply wish them away, which is exactly what you do when you deploy cryptography to end-users using their browser's Javascript runtime.

You could try ChipGenius, which claims to inspect the USB flash controller chip and repair it if it reports the wrong VID/PID information.

Step 3 : Repairing Your Fake Flash disk

If the flash disk is real and not fake, you can try the following to repair it:

Operating System Disk :

It involves removing the existing hard disk from a computer or laptop, booting from the operating system disk, and then reformatting the flash disk. It appears to be very successful. You can't use an OEM disk provided with your computer or laptop; it must be a full Windows operating system CD or DVD.

Primary Partitioning For The Reported Flash disk :

The alternative option is to use the information provided by H2testw to build a fence: create a primary partition on the flash disk slightly smaller than the real capacity reported by H2testw. The Windows operating system sees the balance of the capacity as unallocated. Never touch or format that additional unallocated capacity, because it is the capacity that is fake; it does not really exist! If you own Acronis Disk Director, you can use it instead.

Other options: you could use the testdrive method from Instructables.

Interconnectivity and by what means, e.g. T1, satellite, wide area network, leased line, dial-up, etc.

Encryption/ VPN’s utilized etc.

Role of the network or system

Scope of test

Constraints and limitations imposed on the team, e.g. out-of-scope items, hardware, IP addresses.

Constraints, limitations or problems encountered by the team during the actual test

Purpose of Test

Deployment of new software release etc.

Security assurance for the Code of Connection

Interconnectivity issues.

Type of Test

Compliance Test

Vulnerability Assessment

Penetration Test

Test Type

White-Box

The testing team has carte blanche access to the testing network and has been supplied with network diagrams, hardware, operating system and application details etc., prior to a test being carried out. This does not equate to a truly blind test but can speed up the process a great deal and leads to more accurate results being obtained. The amount of prior knowledge leads to a test targeting specific operating systems, applications and network devices that reside on the network, rather than spending time enumerating what could possibly be on the network. This type of test equates to a situation whereby an attacker may have complete knowledge of the internal network.

Black-Box

No prior knowledge of the company network is known. In essence, an example of this is when an external web-based test is to be carried out and only the details of a website URL or IP address are supplied to the testing team. It would be their role to attempt to break into the company website/network. This would equate to an external attack carried out by a malicious hacker.

Grey-Box

The testing team would simulate an attack that could be carried out by a disgruntled, disaffected staff member. The testing team would be supplied with appropriate user level privileges and a user account and access permitted to the internal network by relaxation of specific security policies present on the network i.e. port level security.

Example: A FAT partition was found. FAT by default does not give the ability to set appropriate access control permissions on files. In addition, moving files to this area removes the protection of the current ACLs applied to the files.

Recommendation and fix

Example: Format the file system to NTFS.

Password Policy

Details of finding

Example: LM Hashes found still being utilized on the network.

Recommendation and fix

Example: Ensure NTLM2 is enforced by means of the correct setting in Group Policy.

Auditing Policy

Details of finding

Example: Logon success and failure was not enabled

Recommendation and fix

Example: Amend appropriate Group Policy Objects and ensure it is tested and then applied to all relevant Organizational Units etc.

Patching Policy

Details of finding

Example: Several of the latest Microsoft patches were found to be missing

Recommendation and fix

Example: Ensure a rigorous patching policy is instigated after first being tested on a development LAN to ensure stability. Review the settings on the WSUS server and ensure that it is regularly updated and an appropriate update strategy is instigated for the domain.

Anti-virus Policy

Details of finding

Example: Several workstations were found to have out of date anti-virus software. In addition where it was found to be installed the actual product was found to be mis-configured and did not provide on-access protection.

Recommendation and fix

Example: Ensure all workstations are regularly updated and configured correctly to ensure maximum protection is afforded

Trust Policy

Details of finding

Example: Users from one domain were unable to access resources on another tree.

Recommendation and fix

Example: Review transitive and non-transitive trusts and ensure that all relevant trusts have been established.

Web Server Security

File System Security

Details of finding

Example: Incorrect permissions on the www root.

Recommendation and fix

Example: Apply more stringent permissions or remove various users/groups that currently have access to this area.

Password Policy

Details of finding

Example: Areas of the website that should be protected did not have any password mechanism enforced.

Recommendation and fix

Example: Ensure areas that require access to be limited are password protected.

Auditing Policy

Details of finding

Example: Web server logs were not being reviewed for illicit behaviors.

Recommendation and fix

Example: Regularly review all audit logs.

Patching Policy

Details of finding

Example: The latest patch was not applied to the server leaving it susceptible to a Denial of Service Attack.

Recommendation and fix

Example: Apply the latest patch after testing on a development server to ensure compatibility with installed applications and stability of the server is maintained.

Lockdown Policy

Details of finding

Example: The IIS lockdown tool has not been applied to the web server.

Recommendation and fix

Example: Apply the IIS lockdown tool to the server after first testing on a development server to ensure compatibility with installed applications and stability of the server is maintained.

Database Server Security

File System Security

Details of finding

Example: Loose access control permissions were found on directories containing important configuration files that govern access to the server.

Recommendation and fix

Example: Ensure stringent access control permissions are enforced.

Password Policy

Details of finding

Example: Clear text passwords were found stored within the database.

Recommendation and fix

Example: Ensure all passwords, if required to be stored within the database are encrypted and afforded the maximum protection possible.

Auditing Policy

Details of finding

Example: Audit logs from the TNS Listener were not being reviewed.

Recommendation and fix

Example: Ensure all relevant audit logs are regularly inspected. Audit logs may give you the first clue to possible attempts to brute force access into the database.

Patching Policy

Details of finding

Example: The latest Oracle CPU was not installed, leaving the system susceptible to multiple buffer and heap overflows and possible Denial of Service attacks.

Recommendation and fix

Example: Install the latest Oracle CPU after first testing on a development server to ensure adequate compatibility and stability.

Lockdown Policy

Details of finding

Example: Numerous extended stored procedures were directly accessible by the public role.

Recommendation and fix

Example: Ensure the public role is revoked from all procedures where direct access is not required or utilized.

Trust Policy

Details of finding

Example: Clear text Link passwords were discovered.

Recommendation and fix

Example: Ensure all Link passwords are encrypted, review the requirement to utilize these Links on a regular basis.

General Application Security

File System Security

Details of finding

Recommendation and fix

Password Policy

Details of finding

Recommendation and fix

Auditing Policy

Details of finding

Recommendation and fix

Patching Policy

Details of finding

Recommendation and fix

Lockdown Policy

Details of finding

Recommendation and fix

Trust Policy

Details of finding

Recommendation and fix

Business Continuity Policy

Backup Policy

Details of finding

Recommendation and fix

Replacement premises provisioning

Details of finding

Recommendation and fix

Replacement personnel provisioning

Details of finding

Recommendation and fix

Replacement software provisioning

Details of finding

Recommendation and fix

Replacement hardware provisioning

Details of finding

Recommendation and fix

Replacement document provisioning

Details of finding

Recommendation and fix

Annexes

Glossary of Terms

Buffer Overflow

Normally takes the form of inputting an overly long string of characters or commands that the system cannot deal with. Some functions have a finite space available to store these characters or commands, and any extra characters over and above this will start to overwrite other portions of code; in worst-case scenarios this will enable a remote user to gain a command prompt with the ability to interact directly with the local machine.

Denial of Service

This is an aimed attack designed to deny a particular service that you rely on to conduct your business. Such attacks are designed to, say, overtax a web server with multiple requests intended to slow it down and possibly cause it to crash. Traditionally such attacks emanated from one particular source.

Directory Traversal

Basically when a user or function tries to “break” out of the normal parent directory specified for the application and traverse elsewhere within the system, possibly gaining access to sensitive files or directories in the process.

SQL Injection

Basically when a low-privileged user interactively executes PL/SQL commands on the database server by adding additional syntax into standard arguments, which is then passed to a particular function, enabling enhanced privileges.

Network Map/Diagram

Accompanying Scan Results – CD-ROM

Vulnerability Definitions

Critical

A vulnerability allowing remote code execution, elevation of privilege or a denial of service on an affected system.

Important

A security weakness, whose exploitation may result in the compromise of the Confidentiality, Integrity or Availability of the company’s data.

Information Leak

Insecure services and protocols are being employed by the system, potentially allowing unrestricted access to sensitive information, e.g.:

a. The use of the Finger and Sendmail services may allow enumeration of user IDs.

b. Anonymous FTP and web-based services are being offered on network devices or peripherals.

c. Disclosure of operating system and application version details, and personal details of system administration staff.

Concern

The current system's configuration poses a potential risk to the network concerned, though the ability to exploit it is mitigated by factors such as default configuration, auditing, or the difficulty or access level required to carry out an exploit. This includes the running of network-enabled services that are not required by the current business continuity process.

Unknowns

An unknown risk is an unclear response to a test, or an action whose impact can be determined as having minimal impact on the system. The test identifying this risk may or may not be repeatable. While the results do not represent a security risk per se, they should be investigated and rectified where possible. Unknowns may also be due to false positives being reported; however, they do require a follow-up response.

Details of Tools Utilized.

Methodology Utilized.

Reconnaissance

The tester would attempt to gather as much information as possible about the selected network. Reconnaissance can take two forms i.e. active and passive. A passive attack is always the best starting point as this would normally defeat intrusion detection systems and other forms of protection etc. afforded to the network. This would usually involve trying to discover publicly available information by utilizing a web browser and visiting newsgroups etc. An active form would be more intrusive and may show up in audit logs and may take the form of an attempted DNS zone transfer or a social engineering type of attack.

Enumeration

The tester would use varied operating system fingerprinting tools to determine what hosts are alive on the network and more importantly what services and operating systems they are running. Research into these services would then be carried out to tailor the test to the discovered services.

Scanning

By use of vulnerability scanners, all discovered hosts would be tested for vulnerabilities. The results would then be analyzed to determine if there are any vulnerabilities that could be exploited to gain access to a target host on the network.

Obtaining Access

By use of published exploits or weaknesses found in applications, operating system and services access would then be attempted. This may be done surreptitiously or by more brute force methods. An example of this would be the use of exploit engines i.e. Metasploit or password cracking tools such as John the Ripper.

Maintaining Access

This is done by installing a backdoor into the target network to allow the tester to return as and when required. This may be by means of a rootkit, backdoor trojan or simply the addition of bogus user accounts.

Erasing Evidence

It should ideally not be possible to erase the logs that recorded the testing team's attempts to access the network. These logs are the first piece of evidence that a breach of company security may have occurred and should be protected at all costs. Any attempt to erase or alter them should prove unsuccessful, ensuring that if a malicious attacker did gain access to the network, their every movement would be recorded.

an authenticator, signature, or message authentication code (MAC) is sent along with the message

the MAC is generated via some algorithm which depends on both the message and a secret key known only to the sender and receiver

the message may be of any length

the MAC may be of any length, but is more often some fixed size, requiring the use of a hash function to condense the message to the required size if this is not achieved by the authentication scheme itself
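The scheme above (a fixed-size tag computed from the message and a shared secret key) can be sketched with Python's standard hmac module; the key and message values here are illustrative only:

```python
# HMAC sketch: sender and receiver share a secret key; the tag is
# sent alongside the message and recomputed by the receiver.
import hmac
import hashlib

key = b"shared-secret-key"        # known only to sender and receiver
message = b"Transfer 100 to Bob"

# Sender computes the MAC over the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes it and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True for an unmodified message
```

Note the constant-time comparison: comparing tags with `==` can leak timing information to an attacker.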

process the message in 16-word (512-bit) chunks, using 3 rounds of 16 steps each on the chunk & buffer

output hash value is the final buffer value

some progress at cryptanalysing MD4 has been made, with a small number of collisions having been found

MD5 was designed as a strengthened version, using four rounds, a little more complex than in MD4

a little progress at cryptanalysing MD5 has been made with a small number of collisions having been found

both MD4 and MD5 are still found in legacy applications, but practical collisions have since been demonstrated for both, so neither should be considered secure for new designs

both are specified as Internet standards (MD4 in RFC1320, MD5 in RFC1321)
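The 128-bit digests these functions produce can be seen directly via Python's hashlib (MD5 only; MD4 is generally not exposed). The input below is the "abc" test vector from RFC 1321:

```python
# MD5 produces a 128-bit (32 hex digit) digest.
import hashlib

digest = hashlib.md5(b"abc").hexdigest()
print(digest)            # 900150983cd24fb0d6963f7d28e17f72
print(len(digest) * 4)   # 128 bits
```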

SHA (Secure Hash Algorithm)

SHA was designed by NIST & NSA and is the US federal standard for use with the DSA signature scheme (nb the algorithm is SHA, the standard is SHS)

it produces 160-bit hash values

SHA overview

pad message so its length is a multiple of 512 bits

initialise the 5-word (160-bit) buffer (A,B,C,D,E) to

(67452301,efcdab89,98badcfe,10325476,c3d2e1f0)

process the message in 16-word (512-bit) chunks, using 4 rounds of 20 steps each on the chunk & buffer

output hash value is the final buffer value

SHA is a close relative of MD5, sharing much common design, but each having differences

SHA has very recently been subject to modification following NIST identification of some concerns, the exact nature of which is not public

current version is regarded as secure
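The 160-bit output described above can be checked with Python's hashlib, again using the standard "abc" test vector (note that, like MD5, SHA-1 has since been shown not to be collision-resistant):

```python
# SHA-1 produces a 160-bit (40 hex digit) hash value.
import hashlib

digest = hashlib.sha1(b"abc").hexdigest()
print(digest)            # a9993e364706816aba3e25717850c26c9cd0d89d
print(len(digest) * 4)   # 160 bits
```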

Other Hash Functions

HAVAL

a variable length one-way hash function designed by Uni of Wollongong and recently published at Auscrypt'92

it processes messages in 1024-bit blocks, using an 8-word buffer and 3 to 5 rounds of 16 steps each, creating hash values of 128, 160, 192, 224, or 256 bits in length

uses highly non-linear 7-variable functions in each step

is faster than MD5

it is not subject to MD5-type analysis, and no attack is known

Using Private Key Ciphers

a large number of "Modes of Use" have been proposed which use a block cipher to create a hash value

original proposal was by Davies and Meyer

many other proposals

most have been broken using a birthday attack

the design of fast, secure hash functions of this form is still being studied, with many questions unresolved
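The birthday attack mentioned above succeeds after roughly 2^(n/2) trials against an n-bit hash, which is why block-cipher-based constructions with short outputs keep falling to it. A toy demonstration (assumption for illustration: SHA-256 truncated to 24 bits stands in for a weak short-output hash):

```python
# Birthday attack sketch: with a 24-bit hash, a collision is expected
# after about 2^12 = 4096 trials, far fewer than the 2^24 outputs.
import hashlib

def weak_hash(data: bytes, bits: int = 24) -> int:
    # Truncate SHA-256 to `bits` bits to simulate a short hash.
    full = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return full >> (256 - bits)

seen = {}
i = 0
while True:
    msg = str(i).encode()
    h = weak_hash(msg)
    if h in seen:
        print(f"collision after {i + 1} trials: {seen[h]!r} vs {msg!r}")
        break
    seen[h] = msg
    i += 1
```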

Digital Signature Schemes

public key signature schemes

the private-key signs (creates) signatures, and the public-key verifies signatures

only the owner (of the private-key) can create the digital signature, hence it can be used to verify who created a message

anyone knowing the public key can verify the signature (provided they are confident of the identity of the owner of the public key - the key distribution problem)

usually don't sign the whole message (doubling the size of information exchanged), but just a hash of the message

digital signatures can provide non-repudiation of message origin, since an asymmetric algorithm is used in their creation, provided suitable timestamps and redundancies are incorporated in the signature

RSA

RSA encryption and decryption are commutative, hence it may be used directly as a digital signature scheme

given an RSA scheme {(e,R), (d,p,q)}

to sign a message, compute:

S = M^d (mod R)

to verify a signature, compute:

M = S^e (mod R) = M^(e.d) (mod R) = M (mod R)

thus the verifier knows the message was signed by the owner of the private key corresponding to the public key

it would seem obvious that a message may be encrypted, then signed using RSA without increasing its size

but there is a blocking problem, since the message is encrypted using the receiver's modulus but signed using the sender's modulus (which may be smaller)

several approaches possible to overcome this

more commonly, a hash function is used to create a separate MDC which is then signed
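The hash-then-sign approach can be sketched end to end with the textbook primes 61 and 53 (a toy illustration only: real RSA requires large random primes and a padding scheme such as PSS):

```python
# Toy hash-then-sign RSA, following S = H(M)^d mod R and
# verification via S^e mod R. Primes are illustrative, never
# this small in practice.
import hashlib

p, q = 61, 53            # toy primes
R = p * q                # modulus R = 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, gcd(e, phi) == 1
d = pow(e, -1, phi)      # private exponent, e*d ≡ 1 (mod phi)

def sign(message: bytes) -> int:
    # Hash first (the MDC), then sign the digest: S = H(M)^d mod R.
    m = int.from_bytes(hashlib.sha256(message).digest(), "big") % R
    return pow(m, d, R)

def verify(message: bytes, sig: int) -> bool:
    # Recover H(M) = S^e mod R and compare against a fresh hash.
    m = int.from_bytes(hashlib.sha256(message).digest(), "big") % R
    return pow(sig, e, R) == m

sig = sign(b"hello")
print(verify(b"hello", sig))  # True
```

Signing the short fixed-size digest rather than the message itself also sidesteps the blocking problem described above, since the digest always fits under either party's modulus.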