Chapter 21  Code Review

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies.
This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.


See the Landing Page for the starting point and a complete overview of Improving Web Application Security: Threats and Countermeasures.

Summary: This chapter shows you how to review code built using the .NET Framework for potential security vulnerabilities. It shows you the specific review questions to ask and discusses the tools that you should use. In addition to general coding considerations, the chapter includes review questions to help you review your applications for cross-site scripting, SQL injection and buffer overflow vulnerabilities.

Overview

Code reviews should be a regular part of your development process. Security code reviews focus on identifying insecure coding techniques and vulnerabilities that could lead to security issues. The review goal is to identify as many potential security vulnerabilities as possible before the code is deployed. The cost and effort of fixing security flaws at development time is far less than fixing them later in the product deployment cycle.

This chapter helps you review managed ASP.NET Web application code built using the Microsoft .NET Framework. In addition, it covers reviewing calls to unmanaged code. The chapter is organized by functional area, and includes sections that present general code review questions applicable to all types of managed code as well as sections that focus on specific types of code such as Web services, serviced components, data access components, and so on.

This chapter shows the questions to ask to expose potential security vulnerabilities. You can find solutions to these questions in the individual building chapters in Part III of this guide. You can also use the code review checklists in the "Checklists" section of the guide to help you during the review process.

FxCop

A good way to start the review process is to run your compiled assemblies through the FxCop analysis tool. The tool analyzes binary assemblies (not source code) to ensure that they conform to the .NET Framework Design Guidelines, available on MSDN. It also checks that your assemblies have strong names, which provide tamperproofing and other security benefits. The tool comes with a predefined set of rules, although you can customize and extend them.

Performing Text Searches

To assist the review process, make sure that you are familiar with a text search tool that you can use to locate strings in files. This type of tool allows you to quickly locate vulnerable code. Many of the review questions presented later in the chapter indicate the best strings to search for when looking for specific vulnerabilities.

You may already have a favorite search tool. If not, you can use the Find in Files facility in Visual Studio .NET or the Findstr command line tool, which is included with the Microsoft Windows operating system.

Note If you use the Windows XP Search tool from Windows Explorer, and use the A word or phrase in the file option, check that you have the latest Windows XP service pack, or the search may fail. For more information, see Microsoft Knowledge Base article 309173, "Using the 'A Word or Phrase in the File' Search Criterion May Not Work."

Search for Hard-Coded Strings

Before you perform a detailed line-by-line analysis of your source code, start with a quick search through your entire code base to identify hard-coded passwords, account names, and database connection strings. Scan through your code and search for common string patterns such as the following: "key," "secret," "password," "pwd," and "connectionstring."

For example, to search for the string "password" in the Web directory of your application, use the Findstr tool from a command prompt as follows:

findstr /S /M /I /d:c:\projects\yourweb "password" *.*

Findstr uses the following command-line parameters:

/S  include subdirectories.

/M  list only the file names.

/I  use a case insensitive search.

/D:dir  search a semicolon-delimited list of directories. If the file path you want to search includes spaces, surround the path in double quotes.

Automating Findstr

You can create a text file with common search strings. Findstr can then read the search strings from the text file, as shown below. Run the following command from a directory that contains .aspx files.

findstr /N /G:SearchStrings.txt *.aspx

/N prints the corresponding line number when a match is found. /G indicates the file that contains the search strings. In this example, all ASP.NET pages (*.aspx) are searched for strings contained within SearchStrings.txt.

ILDASM

You can also use the Findstr command in conjunction with the ildasm.exe utility to search binary assemblies for hard-coded strings. The following command disassembles an assembly to text and searches the output for the ldstr intermediate language opcode, which loads string constants. If the assembly contains a hard-coded database connection string, the output reveals it, including any embedded password such as that of the well known sa account.

ildasm.exe YourAssembly.dll /text | findstr ldstr

Note Ildasm.exe is located in the \Program Files\Microsoft Visual Studio {version number}\SDK\{Framework Version number}\bin folder. For more information about the supported command-line arguments, run ildasm.exe /?.

Cross-Site Scripting (XSS)

Your code is vulnerable to cross-site scripting (XSS, also referred to as CSS) attacks wherever it uses input parameters in the output HTML stream returned to the client. Even before you conduct a code review, you can run a simple test to check if your application is vulnerable to XSS. Search for pages where user input information is sent back to the browser.

XSS bugs are an example of maintaining too much trust in data entered by a user. For example, your application might expect the user to enter a price, but instead the attacker includes a price and some HTML and JavaScript. Therefore, you should always ensure that data that comes from untrusted sources is validated. When reviewing code, always ask the question, "Is this data validated?" Keep a list of all entry points into your ASP.NET application, such as HTTP headers, query strings, form data, and so on, and make sure that all input is checked for validity at some point. Do not test for incorrect input values because that approach assumes that you are aware of all potentially risky input. The most common way to check that data is valid in ASP.NET applications is to use regular expressions.
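The allow-list validation approach described above can be sketched as follows. This is a minimal illustration; the method name and the pattern (letters and digits, 1 to 40 characters) are assumptions that you would tailor to each field you validate.

```csharp
using System;
using System.Text.RegularExpressions;

public class InputValidator
{
    // Allow-list check: accept only the characters and length you expect,
    // rather than trying to filter out known-bad input.
    public static bool IsValidName(string input)
    {
        if (input == null)
            return false;
        return Regex.IsMatch(input, @"^[a-zA-Z0-9]{1,40}$");
    }
}
```

Apply a check like this at every entry point you identified, before the input is used to generate output or build queries.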

You can perform a simple test by typing a distinctive text string such as "XYZ" in form fields and then examining the output. If the string appears unmodified in the rendered page or in the HTML source, the application is echoing input to the browser without encoding it, which indicates it may be vulnerable to XSS. If you want to see something more dynamic, inject <script>alert('hello');</script>. This technique might not work in all cases because it depends on how the input is used to generate the output.

The following process helps you to identify common XSS vulnerabilities:

Identify code that outputs input.

Identify potentially dangerous HTML tags and attributes.

Identify code that handles URLs.

Check that output is encoded.

Check for correct character encoding.

Check the validateRequest attribute.

Check the HttpOnly cookie option.

Check the <frame> security attribute.

Check the use of the innerText and innerHTML properties.

Identify Code That Outputs Input

View the page output source from the browser to see whether your injected text is placed inside an HTML attribute value. If it is, inject the following code and retest to view the output.

"onmouseover= alert('hello');"

A common technique used by developers is to filter for < and > characters. If the code that you review filters for these characters, then test using the following code instead:

&{alert('hello');}

If the code does not filter for those characters, then you can test the code by using the following script:

<script>alert(document.cookie);</script>;

You may have to close a tag before using this script, as shown below.

"></a><script>alert(document.cookie);</script>

Searching for ".Write"

Search for the ".Write" string across your .aspx source code and the code contained in any additional assembly you have developed for your application. This locates occurrences of Response.Write, as well as any internal routines that generate output through a response object variable.
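For illustration, a hypothetical helper routine of the kind this search would flag is sketched below. Because it echoes its argument without encoding, any caller that passes user input through it creates an XSS opportunity.

```csharp
using System.Web.UI;

public class OrderPage : Page
{
    // Found by the ".Write" search: output is generated through the
    // response object with no encoding applied.
    void SomeFunction(object anObject)
    {
        Response.Write(anObject.ToString());   // vulnerable: no HtmlEncode
    }
}
```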

The <style> tag can also be a source of injection when its MIME type is changed to a script type, as shown below.

<style TYPE="text/javascript">
alert('hello');
</style>

Check to see if your code attempts to sanitize input by filtering out certain known risky characters. Do not rely upon this approach because malicious users can generally find an alternative representation to bypass your validation. Instead, your code should validate for known secure, safe input. The following table shows various ways to represent some common characters:

Table 21.2 Character Representation

| Characters | Decimal | Hexadecimal | HTML Character Set | Unicode |
|------------|---------|-------------|--------------------|---------|
| " (double quotes) | &#34; | &#x22; | &quot; | \u0022 |
| ' (single quotes) | &#39; | &#x27; | &apos; | \u0027 |
| & (ampersand) | &#38; | &#x26; | &amp; | \u0026 |
| < (less than) | &#60; | &#x3C; | &lt; | \u003c |
| > (greater than) | &#62; | &#x3E; | &gt; | \u003e |

Identify Code That Handles URLs

Code that handles URLs can be vulnerable. Review your code to see if it is vulnerable to the following common attacks:

If your Web server is not up-to-date with the latest security patches, it could be vulnerable to directory traversal and double slash attacks.

If your code filters for "/", an attacker can easily bypass the filter by using an alternate representation of the same character. For example, the overlong UTF-8 representation of "/" is "%c0%af", and this could be used in the following URL:

http://www.YourWebServer.com/..%c0%af../winnt

If your code processes query string input, check that it constrains the input data and performs bounds checks. Check that the code is not vulnerable if an attacker passes an extremely large amount of data through a query string parameter.

http://www.YourWebServer.com/test.aspx?var=InjectHugeAmountOfDataHere

Check That Output Is Encoded

While not a replacement for checking that input is well-formed and correct, you should check that HtmlEncode is used to encode HTML output that includes any type of input. Also check that UrlEncode is used to encode URL strings. Input data can come from query strings, form fields, cookies, HTTP headers, and input read from a database, particularly if the database is shared by other applications. By encoding the data, you prevent the browser from treating the HTML as executable script.
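A minimal sketch of this encoding practice is shown below, assuming an ASP.NET page context; the page name and query string parameters are illustrative.

```csharp
using System.Web;
using System.Web.UI;

public class EncodingExample : Page
{
    protected void RenderGreeting()
    {
        // HTML-encode untrusted data written into the page body so the
        // browser treats it as text, not executable markup.
        string name = Request.QueryString["name"];
        Response.Write("Hello, " + HttpUtility.HtmlEncode(name));

        // URL-encode untrusted data placed into a URL or query string.
        string target = "results.aspx?q=" +
            HttpUtility.UrlEncode(Request.QueryString["q"]);
        Response.Redirect(target);
    }
}
```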

Check for Correct Character Encoding

To help prevent attackers using canonicalization and multi-byte escape sequences to trick your input validation routines, check that the character encoding is set correctly to limit the way in which input can be represented.

Check that the application Web.config file sets the requestEncoding and responseEncoding attributes on the <globalization> element, for example:

<globalization requestEncoding="ISO-8859-1" responseEncoding="ISO-8859-1"/>

Check the validateRequest Attribute

Web applications that are built using the .NET Framework version 1.1 or later perform input filtering to eliminate potentially malicious input, such as embedded script. Do not rely on this alone; use it as a defense in depth measure. Check the <pages> element in your configuration file to confirm that the validateRequest attribute is set to true. This can also be set as a page-level attribute. Scan your .aspx source files for validateRequest, and check that it is not set to false for any page.

Check the HttpOnly Cookie Option

Internet Explorer 6 SP 1 supports a new HttpOnly cookie attribute that prevents client-side script from accessing the cookie from the document.cookie property. Instead, an empty string is returned. The cookie is still sent to the server whenever the user browses to a Web site in the current domain. For more information, see the "Cross-Site Scripting" section in Chapter 10, "Building Secure ASP.NET Pages and Controls."

Check the <frame> Security Attribute

Internet Explorer 6 and later supports a new security attribute on the <frame> and <iframe> elements. You can use the security attribute to apply the user's Restricted Sites Internet Explorer security zone settings to an individual frame or iframe. For more information, see the "Cross-Site Scripting" section in Chapter 10, "Building Secure ASP.NET Pages and Controls."

Check the Use of the innerText and innerHTML Properties

If you create a page with untrusted input, verify that you use the innerText property instead of innerHTML. The innerText property renders content safe and ensures that script is not executed.

SQL Injection

Your code is vulnerable to SQL injection attacks wherever it uses input parameters to construct SQL statements. As with XSS bugs, SQL injection attacks are caused by placing too much trust in user input and not validating that the input is correct and well-formed.

A common approach is to develop filter routines to add escape characters to characters that have special meaning to SQL. This is an unsafe approach, and you should not rely on it because of character representation issues.
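Instead of escaping input, pass it to SQL as a parameter. The following sketch shows the parameterized-command approach; the connection string, table, and column names are illustrative.

```csharp
using System.Data;
using System.Data.SqlClient;

public class AccountData
{
    // The user-supplied value is bound as data, so SQL metacharacters in
    // userName cannot alter the structure of the statement.
    public static object GetBalance(string connectionString, string userName)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT Balance FROM Accounts WHERE UserName = @userName", conn);
            cmd.Parameters.Add("@userName", SqlDbType.VarChar, 40).Value = userName;
            conn.Open();
            return cmd.ExecuteScalar();
        }
    }
}
```

Combine parameters with the input validation described earlier; the parameter also constrains the value's type and length.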


Buffer Overflows

When you review code for buffer overflows, focus your review efforts on your code that calls unmanaged code through the P/Invoke or COM interop layers. Managed code itself is significantly less susceptible to buffer overflows because array bounds are automatically checked whenever an array is accessed. As soon as you call a Win32 DLL or a COM object, you should inspect the API calls closely.

The following process helps you to locate buffer overflow vulnerabilities:

Locate calls to unmanaged code.

Scan your source files for "System.Runtime.InteropServices," which is the namespace name used when you call unmanaged code.

Check the string parameters passed to unmanaged APIs.

These parameters are a primary source of buffer overflows. Check that your code checks the length of any input string to verify that it does not exceed the limit defined by the API. If the unmanaged API accepts a character pointer, you may not know the maximum allowable string length unless you have access to the unmanaged source.

Note Buffer overflows can still occur if you use strncpy because it does not check for sufficient space in the destination string and it only limits the number of characters copied.

If you cannot inspect the unmanaged code because you do not own it, rigorously test the API by passing in deliberately long input strings and invalid arguments.
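A managed wrapper that guards a string parameter before the interop call might look like the sketch below. The DLL name, entry point, and length limit are assumptions for illustration.

```csharp
using System;
using System.Runtime.InteropServices;

public class NativeWrapper
{
    // Assumed limit of the unmanaged buffer; confirm against the real API.
    private const int MaxNameLength = 255;

    [DllImport("somelib.dll", CharSet = CharSet.Unicode)]  // hypothetical DLL
    private static extern bool DoWork(string name);

    public static bool SafeDoWork(string name)
    {
        // Reject over-long input before it crosses the interop boundary,
        // where managed bounds checking no longer protects you.
        if (name == null || name.Length > MaxNameLength)
            throw new ArgumentException("name exceeds the unmanaged API limit.");
        return DoWork(name);
    }
}
```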

Check file path lengths.

If the unmanaged API accepts a file name and path, check that your wrapper method checks that the file name and path do not exceed 260 characters. This is defined by the Win32 MAX_PATH constant. Also note that directory names and registry keys can be 248 characters maximum.

Check output strings.

Check if your code uses a StringBuilder to receive a string passed back from an unmanaged API. Check that the capacity of the StringBuilder is long enough to hold the longest string the unmanaged API can hand back, because the string coming back from unmanaged code could be of arbitrary length.
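The pattern to look for is sketched below, using the Win32 GetWindowsDirectory API as an example: the StringBuilder is allocated at full capacity and that same capacity is passed to the API, so the unmanaged code cannot write past the buffer.

```csharp
using System.Runtime.InteropServices;
using System.Text;

public class PathHelper
{
    private const int MAX_PATH = 260;

    [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
    private static extern uint GetWindowsDirectory(StringBuilder buffer, uint size);

    public static string GetWindowsDir()
    {
        // Capacity and the size argument must agree.
        StringBuilder buffer = new StringBuilder(MAX_PATH);
        GetWindowsDirectory(buffer, (uint)buffer.Capacity);
        return buffer.ToString();
    }
}
```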

Check array bounds.

If you use an array to pass input to an unmanaged API, check that the managed wrapper verifies that the array capacity is not exceeded.

Check that your unmanaged code is compiled with the /GS switch.

If you own the unmanaged code, use the /GS switch to enable stack probes to detect some kinds of buffer overflows.

Managed Code

Use the review questions in this section to analyze your entire managed source code base. The review questions apply regardless of the type of assembly. This section helps you identify common managed code vulnerabilities. For more information about the issues raised in this section and for code samples that illustrate vulnerabilities, see Chapter 7, "Building Secure Assemblies."

If your managed code uses explicit code access security features, see "Code Access Security" later in this chapter for additional review points. The following review questions help you to identify managed code vulnerabilities:

Is your class design secure?

Do you create threads?

Do you use serialization?

Do you use reflection?

Do you handle exceptions?

Do you use cryptography?

Do you store secrets?

Do you use delegates?

Is Your Class Design Secure?

An assembly is only as secure as the classes and other types it contains. The following questions help you to review the security of your class designs:

Do you limit type and member visibility?

Review any type or member marked as public and check that it is an intended part of the public interface of your assembly.

Are non-base classes sealed?

If you do not intend a class to be derived from, use the sealed keyword to prevent your code from being misused by potentially malicious subclasses.

For public base classes, you can use code access security inheritance demands to limit the code that can inherit from the class. This is a good defense in depth measure.

Do you use properties to expose fields?

Check that your classes do not directly expose fields. Use properties to expose non-private fields. This allows you to validate input values and apply additional security checks.

Do you use read-only properties?

Verify that you have made effective use of read-only properties. If a field is not designed to be set, implement a read-only property by providing a get accessor only.

Do you use virtual internal methods?

These methods can be overridden from other assemblies that have access to your class. Use declarative checks or remove the virtual keyword if it is not a requirement.

Do you implement IDisposable?

If so, check that you call the Dispose method when you are finished with the object instance to ensure that all resources are freed.
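Several of the class design points above (sealing, property-based field access, read-only properties with validation) can be combined in a short sketch; the type and member names are illustrative.

```csharp
using System;

// Sealed: cannot be subclassed by potentially malicious code.
public sealed class Account
{
    private readonly string owner;

    public Account(string owner)
    {
        // Validation lives in one place because the field is never
        // exposed directly.
        if (owner == null || owner.Length == 0)
            throw new ArgumentException("owner is required.");
        this.owner = owner;
    }

    // Read-only property: a get accessor only, no setter.
    public string Owner
    {
        get { return owner; }
    }
}
```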

Do You Create Threads?

Multithreaded code is prone to subtle timing-related bugs or race conditions that can result in security vulnerabilities. To locate multithreaded code, search source code for the text "Thread" to identify where new Thread objects are created, as shown in the following code fragment:

The following review questions help you to identify potential threading vulnerabilities:

Does your code cache the results of a security check?

Your code is particularly vulnerable to race conditions if it caches the results of a security check, for example in a static or global variable, and then uses the flag to make subsequent security decisions.

Does your code impersonate?

Is the thread that creates a new thread currently impersonating? The new thread always assumes the process-level security context and not the security context of the existing thread.

Note In the .NET Framework 2.0, by default, the impersonation token still does not flow across threads. However, for ASP.NET applications, you can change this default behavior by configuring the ASPNET.config file in the %Windir%\Microsoft.NET\Framework\{Version Number}\ directory. For more information, see the "Threading" section in "Security Guidelines: .NET Framework 2.0."

Does your code contain static class constructors?

Check static class constructors to check that they are not vulnerable if two or more threads access them simultaneously. If necessary, synchronize the threads to prevent this condition.

Do you synchronize Dispose methods?

If an object's Dispose method is not synchronized, it is possible for two threads to execute Dispose on the same object at the same time. This can present security issues, particularly if the cleanup code releases unmanaged resource handles such as file, process, or thread handles.
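A synchronized Dispose can be sketched as follows; the handle field and lock object are illustrative.

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private IntPtr handle;           // unmanaged handle (illustrative)
    private bool disposed;
    private readonly object syncRoot = new object();

    public void Dispose()
    {
        // Serialize cleanup so two threads cannot release the same
        // handle twice.
        lock (syncRoot)
        {
            if (disposed)
                return;
            disposed = true;
            // ... release the unmanaged handle here ...
            handle = IntPtr.Zero;
        }
    }
}
```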

Do You Use Serialization?

Classes that support serialization are either marked with SerializableAttribute or implement ISerializable. To locate classes that support serialization, perform a text search for the "Serializable" string. Then, review your code for the following issues:

Does the class contain sensitive data?

If so, check that the code prevents sensitive data from being serialized, either by marking the sensitive fields with the [NonSerialized] attribute or by implementing ISerializable and controlling which fields are serialized.

If your classes need to serialize sensitive data, review how that data is protected. Consider encrypting the data first.

Does the class implement ISerializable?

If so, does your class support only full trust callers, for example because it is installed in a strong named assembly that does not include AllowPartiallyTrustedCallersAttribute? If your class supports partial-trust callers, check that the GetObjectData method implementation authorizes the calling code by using an appropriate permission demand. A good technique is to use a StrongNameIdentityPermission demand to restrict which assemblies can serialize your object.

Note In the .NET Framework 2.0, StrongNameIdentityPermission works only for partial trust callers. Any demand, including a link demand, always succeeds for full trust callers regardless of the strong name of the calling code.

Does your class validate data streams?

If your code includes a method that receives a serialized data stream, check that every field is validated as it is read from the data stream.
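The [NonSerialized] technique mentioned above can be sketched as follows; the type and field names are illustrative.

```csharp
using System;

[Serializable]
public class UserSession
{
    public string UserName;

    // Excluded from any serialized stream, so the secret is never
    // persisted or sent across the wire in clear form.
    [NonSerialized]
    public string Password;
}
```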

Do You Use Reflection?

To help locate code that uses reflection, search for "System.Reflection", the namespace that contains the reflection types. If you do use reflection, review the following questions to help identify potential vulnerabilities:

Do you dynamically load assemblies?

If your code loads assemblies to create object instances and invoke types, does it obtain the assembly or type name from input data? If so, check that the code is protected with a permission demand to ensure that all calling code is authorized. For example, use a StrongNameIdentityPermission demand or demand full trust.

Note In the .NET Framework 2.0, StrongNameIdentityPermission works only for partial trust callers. Any demand, including a link demand, always succeeds for full trust callers regardless of the strong name of the calling code.

Do you create code dynamically at runtime?

If your assemblies dynamically generate code to perform operations for a caller, check that the caller is in no way able to influence the code that is generated. For example, does your code generation rely on caller-supplied input parameters? This should be avoided, or if it is absolutely necessary, make sure that the input is validated and that it cannot be used to adversely affect code generation.

Do you use reflection on other types?

If so, check that only trusted code can call you. Use code access security permission demands to authorize calling code.

Do You Handle Exceptions?

Secure exception handling is required for robust code, to ensure that sufficient exception details are logged to aid problem diagnosis and to help prevent internal system details being revealed to the client. Review the following questions to help identify potential exception handling vulnerabilities:

Do you fail early?

Check that your code fails early to avoid unnecessary processing that consumes resources. If your code does fail, check that the resulting error does not allow a user to bypass security checks to run privileged code.

How do you handle exceptions?

Avoid revealing system or application details to the caller. For example, do not return a call stack to the end user. Wrap resource access or operations that could generate exceptions with try/catch blocks. Only handle the exceptions you know how to handle and avoid wrapping specific exceptions with generic wrappers.

Do you log exception details?

Check that exception details are logged at the source of the exception to assist problem diagnosis.

Do you use exception filters?

If so, be aware that the code in a filter higher in the call stack can run before code in a finally block. Check that you do not rely on state changes in the finally block, because the state change will not occur before the exception filter executes.

For an example of an exception filter vulnerability, see "Exception Management" in Chapter 7, "Building Secure Assemblies."

Do You Use Cryptography?

If so, check that your code does not implement its own cryptographic routines. Instead, code should use the System.Security.Cryptography namespace or use Win32 encryption such as Data Protection Application Programming Interface (DPAPI). Review the following questions to help identify potential cryptography related vulnerabilities:

Do you use symmetric encryption?

If so, check that you use Rijndael (now referred to as Advanced Encryption Standard [AES]) or Triple Data Encryption Standard (3DES) when encrypted data needs to be persisted for long periods of time. Use the weaker (but quicker) RC2 and DES algorithms only to encrypt data that has a short lifespan, such as session data.

Do you use the largest key sizes possible?

Use the largest key size possible for the algorithm you are using. Larger key sizes make attacks against the key much more difficult, but can degrade performance.

Do you use hashing?

If so, check that you use MD5 and SHA1 when you need a principal to prove it knows a secret that it shares with you. For example, challenge-response authentication systems use a hash to prove that the client knows a password without having the client pass the password to the server. Use HMACSHA1 with Message Authentication Codes (MAC), which require you and the client to share a key. This can provide integrity checking and a degree of authentication.

Do you generate random numbers for cryptographic purposes?

If so, check that your code uses the System.Security.Cryptography.RNGCryptoServiceProvider class to generate random numbers, and not the Random class. The Random class produces numbers that are repeatable and predictable, and is therefore unsuitable for cryptographic purposes.
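A minimal sketch of the recommended approach:

```csharp
using System.Security.Cryptography;

public class RandomHelper
{
    // Fills a buffer with cryptographically strong random bytes. Unlike
    // System.Random, the output is not derived from a predictable
    // time-based seed.
    public static byte[] GetRandomBytes(int count)
    {
        byte[] buffer = new byte[count];
        RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
        rng.GetBytes(buffer);
        return buffer;
    }
}
```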

Do You Store Secrets?

If your assembly stores secrets, review the design to check that it is absolutely necessary to store the secret. If you have to store a secret, review the following questions to do so as securely as possible:

Do you store secrets in memory?

Do not store secrets in plaintext in memory for prolonged periods. Retrieve the secret from a store, decrypt it, use it, and then substitute zeros in the space where the secret is stored.

Note The .NET Framework 2.0 supports the new ProtectedMemory class, which is a managed wrapper to DPAPI for protecting data in memory. Additionally, the .NET Framework 2.0 supports the SecureString type for storing sensitive text values securely in memory.

Do you store plaintext passwords or SQL connection strings in Web.config or Machine.config?

Do not do this. Use aspnet_setreg.exe to store encrypted credentials in the registry on the <identity>, <processModel>, and <sessionState> elements. For information on obtaining and using Aspnet_setreg.exe, see Microsoft Knowledge Base article 329290, "How To: Use the ASP.NET Utility to Encrypt Credentials and Session State."

Check that the code uses DPAPI to encrypt connection strings and credentials. Do not store secrets in the Local Security Authority (LSA), as the account used to access the LSA requires extended privileges. For information on using DPAPI, see "How To: Create a DPAPI Library" in the "How To" section of "Microsoft patterns & practices Volume I, Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication" at http://msdn.microsoft.com/en-us/library/aa302402.aspx.

Do you store secrets in the registry?

If so, check that they are first encrypted and then secured with a restricted ACL if they are stored in HKEY_LOCAL_MACHINE. An ACL is not required if the code uses HKEY_CURRENT_USER because this is automatically restricted to processes running under the associated user account.

Note It is much easier to use DPAPI in .NET 2.0 because the ProtectedData class provides a managed wrapper to DPAPI.

Note Do not rely on an obfuscation tool to hide secret data. Obfuscation tools make identifying secret data more difficult but do not solve the problem.

Do You Use Delegates?

Any code can associate a method with a delegate. This includes potentially malicious code running at a lower trust level than your code.

Do you accept delegates from untrusted sources?

If so, check that you restrict the code access permissions available to the delegate methods by using security permissions with SecurityAction.PermitOnly.

Do you use assert before calling a delegate?

Avoid this because you do not know what the delegate code is going to do in advance of calling it.

Code Access Security

All managed code is subject to code access security permission demands. Many of the issues are only apparent when your code is used in a partial trust environment, when either your code or the calling code is not granted full trust by code access security policy.

Use the following review points to check that you are using code access security appropriately and safely:

Do you support partial-trust callers?

Do you restrict access to public types and members?

Do you use declarative security?

Do you call Assert?

Do you use permission demands when you should?

Do you use link demands?

Do you use Deny or PermitOnly?

Do you use particularly dangerous permissions?

Do you compile with the /unsafe option?

Do You Support Partial-Trust Callers?

If your code supports partial-trust callers, it has even greater potential to be attacked and as a result it is particularly important to perform extensive and thorough code reviews. Review the <trust> level configuration setting in your Web application to see if it runs at a partial-trust level. If it does, the assemblies you develop for the application need to support partial-trust callers.

The following questions help you to identify potentially vulnerable areas:

Is your assembly strong named?

If it is, then default security policy ensures that it cannot be called by partially trusted callers. The Common Language Runtime (CLR) issues an implicit link demand for full trust. If your assembly is not strong named, it can be called by any code unless you take explicit steps to limit the callers, for example by explicitly demanding full trust.

Note Strong named assemblies called by ASP.NET applications must be installed in the Global Assembly Cache.

Do you use APTCA?

If your strong named assembly contains AllowPartiallyTrustedCallersAttribute, partially trusted callers can call your code. In this situation, check that any resource access or other privileged operation performed by your assembly is authorized and protected with other code access security demands. If you use the .NET Framework class library to access resources, full stack walking demands are automatically issued and will authorize calling code unless your code has used an Assert call to prevent the stack walk.

Do you hand out object references?

Check method returns and ref parameters to see where your code returns object references. Check that your partial-trust code does not hand out references to objects obtained from assemblies that require full-trust callers.

Do You Restrict Access to Public Types and Members?

You can use code access security identity demands to limit access to public types and members. This is a useful way of reducing the attack surface of your assembly.

Do you restrict callers by using identity demands?

If you have classes or structures that you only intend to be used within a specific application by specific assemblies, you can use an identity demand to limit the range of callers. For example, you can use a demand with a StrongNameIdentityPermission to restrict the caller to a specific set of assemblies that have been signed with a private key that corresponds to the public key in the demand.

Note In .NET 2.0, StrongNameIdentityPermission works only for partially trusted callers. Any demand, including a link demand, always succeeds for fully trusted callers, regardless of the strong name of the calling code.

Do you use inheritance demands to restrict subclasses?

If you know that only specific code should inherit from a base class, check that the class uses an inheritance demand with a StrongNameIdentityPermission.

Do You Use Declarative Security Attributes?

Declarative security attributes can be displayed with tools such as Permview.exe. This greatly helps the consumers and administrators of your assemblies to understand the security requirements of your code.

Do you request minimum permissions?

Search for ".RequestMinimum" strings to see if your code uses permission requests to specify its minimum permission requirements. You should do this to clearly document the permission requirements of your assembly.
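A minimum permission request is declared at the assembly level. The following sketch shows the general form; the directory path is a placeholder, not a value from this guide:

```csharp
using System.Security.Permissions;

// Sketch: declare the assembly's minimum permission requirements.
// The directory path is illustrative only.
[assembly: FileIOPermission(SecurityAction.RequestMinimum,
                            Read = @"C:\YourAppDir")]
```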

Do you request optional or refuse permissions?

Search for ".RequestOptional" and ".RequestRefuse" strings. If you use either of these two actions to develop least privileged code, be aware that your code can no longer call strong named assemblies unless they are marked with the AllowPartiallyTrustedCallersAttribute.

Do you use imperative security instead of declarative security?

Sometime imperative checks in code are necessary because you need to apply logic to determine which permission to demand or because you need a runtime variable in the demand. If you do not need specific logic, consider using declarative security to document the permission requirements of your assembly.

Do you mix class and member level attributes?

Do not do this. Member attributes, for example on methods or properties, replace class-level attributes with the same security action and do not combine with them.

Do You Call Assert?

Scan your code for Assert calls. This may turn up instances of Debug.Assert. Look for where your code calls Assert on a CodeAccessPermission object. When you assert a code access permission, you short-circuit the code access security permission demand stack walk, which is a risky practice. What steps does your code take to ensure that malicious callers do not take advantage of the assertion to access a secured resource or privileged operation? Review the following questions:

Do you use the demand, assert pattern?

Check that your code issues a Demand prior to the Assert. Code should demand a more granular permission to authorize callers prior to asserting a broader permission such as the unmanaged code permission.

Do you match Assert calls with RevertAssert?

Check that each call to Assert is matched with a call to RevertAssert. The Assert is implicitly removed when the method that calls Assert returns, but it is good practice to explicitly call RevertAssert, as soon as possible after the Assert call.

Do you reduce the assert duration?

Check that you only assert a permission for the minimum required length of time. For example, if you need to use an Assert call just while you call another method, check that you make a call to RevertAssert immediately after the method call.
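The pattern above can be sketched as follows; the file path and the called method are placeholders for illustration:

```csharp
using System.Security;
using System.Security.Permissions;

// Sketch: assert only around the call that needs it, then revert.
FileIOPermission perm = new FileIOPermission(
    FileIOPermissionAccess.Read, @"C:\SomeDir");
perm.Assert();
ReadConfigurationFile();              // placeholder call that triggers a file demand
CodeAccessPermission.RevertAssert();  // revert immediately after the call
```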

Do You Use Permission Demands When You Should?

Your code is always subject to permission demand checks from the .NET Framework class library, but if your code uses explicit permission demands, check that this is done appropriately. Search your code for the ".Demand" string to identify declarative and imperative permission demands, and then review the following questions:

Do you cache data?

If so, check whether or not the code issues an appropriate permission demand prior to accessing the cached data. For example, if the data is obtained from a file, and you want to ensure that the calling code is authorized to access the file from where you populated the cache, demand a FileIOPermission prior to accessing the cached data.

Do you expose custom resources or privileged operations?

If your code exposes a custom resource or privileged operation through unmanaged code, check that it issues an appropriate permission demand, which might be a built-in permission type or a custom permission type depending on the nature of the resource.

Do you demand soon enough?

Check that you issue a permission demand prior to accessing the resource or performing the privileged operation. Do not access the resource and then authorize the caller.

Do you issue redundant demands?

Code that uses the .NET Framework class libraries is subject to permission demands. Your code does not need to issue the same demand. This results in a duplicated and wasteful stack walk.

Do You Use Link Demands?

Link demands, unlike regular demands, only check the immediate caller. They do not perform a full stack walk, and as a result, code that uses link demands is subject to luring attacks. For information on Luring Attacks, see "Link Demands" in Chapter 8, "Code Access Security in Practice."

Search your code for the ".LinkDemand" string to identify where link demands are used. They can only be used declaratively. An example is shown in the following code fragment:
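The following sketch shows the general form of a declarative link demand; the public key value is truncated and purely illustrative:

```csharp
using System.Security.Permissions;

// Sketch: a declarative link demand that restricts the immediate caller
// to assemblies signed with a specific key. The public key value is
// truncated and illustrative only.
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
    PublicKey = "00240000048000009400...")]
public void SomeProtectedMethod()
{
}
```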

For more information about the issues raised in this section, see "Link Demands" in Chapter 8, "Code Access Security in Practice." The following questions help you to review the use of link demands in your code:

Why are you using a link demand?

A defensive approach is to avoid link demands as far as possible. Do not use them just to improve performance and to eliminate full stack walks. Compared to the costs of other Web application performance issues such as network latency and database access, the cost of the stack walk is small. Link demands are only safe if you know and can limit which code can call your code.

Do you trust your callers?

When you use a link demand, you rely on the caller to prevent a luring attack. Link demands are safe only if you know and can limit the exact set of direct callers into your code, and you can trust those callers to authorize their callers.

Do you call code that is protected with link demands?

If so, does your code provide authorization by demanding a security permission from the callers of your code? Can the arguments passed to your methods pass through to the code that you call? If so, can they maliciously influence the code you call?

Have you used link demands at the method and class level?

When you add a link demand to a method, it overrides the link demand on the class. Check that the method-level link demand also includes the permissions demanded at the class level.

Do you use link demands on classes that are not sealed?

Link demands are not inherited by derived types and are not used when an overridden method is called on the derived type. If you override a method that needs to be protected with a link demand, apply the link demand to the overridden method.

Do you use a link demand to protect a structure?

Link demands do not prevent the construction of a structure by an untrusted caller. This is because default constructors are not automatically generated for structures, and therefore the structure level link demand only applies if you use an explicit constructor.

Do you use explicit interfaces?

Search for the interface keyword to find out. If so, check whether the method implementations are marked with link demands. If they are, check that the interface definitions contain the same link demands. Otherwise, it is possible for a caller to bypass the link demand by calling the method through the interface.

Do You Use Potentially Dangerous Permissions?

Check that the following permission types are only granted to highly trusted code. Most of them do not have their own dedicated permission type, but use the generic SecurityPermission type. You should closely scrutinize code that uses these types to ensure that the risk is minimized. Also, you must have a very good reason to use these permissions.

Table 21.3 Dangerous Permissions

Permission

Description

SecurityPermission.UnmanagedCode

Code can call unmanaged code.

SecurityPermission.SkipVerification

The code in the assembly no longer has to be verified as type safe.

SecurityPermission.ControlEvidence

The code can provide its own evidence for use by security policy evaluation.

SecurityPermission.ControlPolicy

Code can view and alter policy.

SecurityPermission.SerializationFormatter

Code can use serialization.

SecurityPermission.ControlPrincipal

Code can manipulate the principal object used for authorization.

ReflectionPermission.MemberAccess

Code can invoke private members of a type through reflection.

SecurityPermission.ControlAppDomain

Code can create new application domains.

SecurityPermission.ControlDomainPolicy

Code can change domain policy.

Do You Compile With the /unsafe Option?

Use Visual Studio .NET to check the project properties to see whether Allow Unsafe Code Blocks is set to true. This sets the /unsafe compiler flag, which tells the compiler that the code contains unsafe blocks and requests that a minimum SkipVerification permission is placed in the assembly.

If you compiled with /unsafe, review why you need to do so. If the reason is legitimate, take extra care to review the source code for potential vulnerabilities.

Unmanaged Code

Give special attention to code that calls unmanaged code, including Win32 DLLs and COM objects, due to the increased security risk. Unmanaged code is not verifiably type safe and introduces the potential for buffer overflows. Resource access from unmanaged code is not subject to code access security checks. This is the responsibility of the managed wrapper class.

Generally, you should not directly expose unmanaged code to partially trusted callers. For more information about the issues raised in this section, see the "Unmanaged Code" sections in Chapter 7, "Building Secure Assemblies," and Chapter 8, "Code Access Security in Practice."

Use the following review questions to validate your use of unmanaged code:

Do you assert the unmanaged code permission?

If so, check that your code demands an appropriate permission prior to calling the Assert method to ensure that all callers are authorized to access the resource or operation exposed by the unmanaged code. For example, the following code fragment shows how to demand a custom Encryption permission and then assert the unmanaged code permission:
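The fragment referred to above can be sketched as follows. EncryptionPermission is a hypothetical custom permission type, used here for illustration only:

```csharp
using System.Security.Permissions;

// Sketch of the demand, assert pattern. EncryptionPermission is a
// hypothetical custom permission type.
public string Encrypt(string plainText)
{
    // First demand the custom permission to authorize all callers.
    new EncryptionPermission(EncryptionPermissionFlag.Encrypt).Demand();

    // Then assert the unmanaged code permission so that the P/Invoke
    // call below does not trigger a full stack walk.
    new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();

    // ... call the unmanaged encryption routine here ...
    return null; // placeholder
}
```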

For more information see "Assert and RevertAssert" in Chapter 8, "Code Access Security in Practice."

Do you use SuppressUnmanagedCodeSecurityAttribute?

This attribute suppresses the demand for the unmanaged code permission issued automatically when managed code calls unmanaged code. If P/Invoke methods or COM interop interfaces are annotated with this attribute, ensure that all code paths leading to the unmanaged code calls are protected with security permission demands to authorize callers. Also check that this attribute is used at the method level and not at the class level.

Note Adding SuppressUnmanagedCodeSecurityAttribute turns the implicit demand for the UnmanagedCode permission issued by the interop layer into a link demand, which leaves your code vulnerable to luring attacks.
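A method-level use of the attribute inside a P/Invoke wrapper class might look like the following sketch; the Win32 Beep function stands in for a real native call:

```csharp
using System.Runtime.InteropServices;
using System.Security;

// Sketch: the attribute applied at the method level inside a P/Invoke
// wrapper class. Callers should go through a public managed wrapper
// that demands an appropriate permission first.
internal sealed class NativeMethods
{
    [DllImport("kernel32.dll")]
    [SuppressUnmanagedCodeSecurity]
    internal static extern bool Beep(uint frequency, uint duration);
}
```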

Is the unmanaged entry point publicly visible?

Check that your unmanaged code entry point is marked as private or internal. Callers should be forced to call the managed wrapper method that encapsulates the unmanaged code.

Do you guard against buffer overflows?

Unmanaged code is susceptible to input attacks such as buffer overflows. Unmanaged code APIs should check the type and length of supplied parameters. However, you cannot rely on this because you might not own the unmanaged source. Therefore, the managed wrapper code must rigorously inspect input and output parameters. For more information, see "Buffer Overflows" in this chapter.

Note All code review rules and disciplines that apply to C and C++ apply to unmanaged code.

Do you range check enumerated types?

Verify that all enumerated values are in range before you pass them to a native method.

Do you use naming conventions for unmanaged code methods?

All unmanaged code should be inside wrapper classes that have the following names: NativeMethods, UnsafeNativeMethods, and SafeNativeMethods. You must thoroughly review all code inside UnsafeNativeMethods and parameters that are passed to native APIs for security vulnerabilities.

Do you call potentially dangerous APIs?

You should be able to justify the use of all Win32 API calls. Dangerous APIs include:

Threading functions that switch security context

Access token functions, which can make changes to or disclose information about a security token

Do You Disable Detailed Error Messages?

If you let an exception propagate beyond the application boundary, ASP.NET can return detailed information to the caller. This includes full stack traces and other information that is useful to an attacker. Check the <customErrors> element and ensure that the mode attribute is set to "On" or "RemoteOnly".

<customErrors mode="On" defaultRedirect="YourErrorPage.htm" />

Do You Disable Tracing?

Trace information is also extremely useful to attackers. Check the <trace> element to ensure that tracing is disabled.
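A disabled trace configuration looks similar to the following sketch; if tracing must be enabled temporarily, localOnly="true" restricts the output to the local machine:

```xml
<!-- Tracing disabled; if it must be enabled, restrict output to the local machine -->
<trace enabled="false" localOnly="true" />
```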

Do You Validate Form Field Input?

Attackers can pass malicious input to your Web pages and controls through posted form fields. Check that you validate all form field input including hidden form fields. Validate them for type, range, format, and length. Use the following questions to review your ASP.NET input processing:

Does your input include a file name or file path?

You should generally avoid this because it is a high risk operation. Why do you need the user to specify a file name or path, rather than the application choosing the location based on the user identity?

If you accept file names and paths as input, your code is vulnerable to canonicalization bugs. If you must accept path input from the user, then check that it is validated as a safe path and canonicalized. Check that the code uses System.IO.Path.GetFullPath.

Do you call MapPath?

If you call MapPath with a user supplied file name, check that your code uses the override of HttpRequest.MapPath that accepts a bool parameter, which prevents cross-application mapping.
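The safe overload can be sketched as follows; the variable names are illustrative:

```csharp
// Sketch: the three-parameter MapPath overload. Passing false for the
// final (allowCrossAppMapping) parameter prevents mapping a path
// outside the current application.
try
{
    string physicalPath = Request.MapPath(userSuppliedFileName,
                                          Request.ApplicationPath,
                                          false);
}
catch (HttpException)
{
    // Cross-application mapping was attempted; reject the input.
}
```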

Check that your code validates the data type of the data received from posted form fields and other forms of Web input such as query strings. For non-string data, check that your code uses the .NET Framework type system to perform the type checks. You can convert the string input to a strongly typed object, and capture any type conversion exceptions. For example, if a field contains a date, use it to construct a System.DateTime object. If it contains an age in years, convert it to a System.Int32 object by using Int32.Parse and capture format exceptions.
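The type checks described above can be sketched as follows; the form field names are illustrative:

```csharp
// Sketch: use the type system to validate non-string input and catch
// conversion failures. Field names are illustrative.
try
{
    DateTime birthDate = DateTime.Parse(Request.Form["birthDate"]);
    int ageYears = Int32.Parse(Request.Form["age"]);
}
catch (FormatException)
{
    // The input is not in the expected format; reject the request.
}
catch (OverflowException)
{
    // The numeric value is out of range; reject the request.
}
```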

How do you validate string types?

Check that input strings are validated for length and an acceptable set of characters and patterns by using regular expressions. You can use a RegularExpressionValidator validation control or use the RegEx class directly. Do not search for invalid data; only search for the information format you know is correct.
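An allow-list check with the Regex class can be sketched as follows; the pattern shown (letters, apostrophes, periods, and spaces, up to 40 characters) is one plausible rule for a name field:

```csharp
using System.Text.RegularExpressions;

// Sketch: allow-list validation. Only search for what you know is
// correct; reject everything else.
if (!Regex.IsMatch(userName, @"^[a-zA-Z'.\s]{1,40}$"))
{
    // Reject the input; it contains characters outside the allowed set.
}
```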

Do you use validation controls?

If you use a validation control such as RegularExpressionValidator, RequiredFieldValidator, CompareValidator, RangeValidator, or CustomValidator, check that you have not disabled server-side validation and are not relying purely on client-side validation.

Do you rely on client side validation?

Do not do this. Use client-side validation only to improve the user experience. Check that all input is validated at the server.

Are You Vulnerable to XSS Attacks?

Be sure to review your Web pages for XSS vulnerabilities. For more information, see "Cross-Site Scripting (XSS)" earlier in this chapter.

Check that input is validated for type, range, format, and length using typed objects, and regular expressions as you would for form fields (see the previous section, "Do You Validate Form Field Input?"). Also consider HTML or URL encoding any output derived from user input, as this will negate any invalid constructs that could lead to XSS bugs.
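Output encoding can be sketched as follows; the form field name is illustrative:

```csharp
using System.Web;

// Sketch: HTML-encode any output derived from user input before
// writing it to the response.
Response.Write(HttpUtility.HtmlEncode(Request.Form["comment"]));
```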

Do You Secure View State?

If your application uses view state, is it tamperproof? Review the following questions:

Is view state protection enabled at the application level?

Check the enableViewState attribute of the <pages> element in the application Machine.config or Web.config file to see if view state is enabled at the application level. Then check that enableViewStateMac is set to "true" to ensure it is tamperproof.

<pages enableViewState="true" enableViewStateMac="true" />

Do you override view state protection on a per page basis?

Check the page-level directive at the top of your Web pages to verify that view state is enabled for the page. Look for the EnableViewStateMac setting and, if it is present, check that it is set to "true". If EnableViewStateMac is not present, the page assumes the application-level default setting specified in the Web.config file. If you have disabled view state for the page by setting EnableViewState to "false", the protection setting is irrelevant.

Do you override view state protection in code?

Check that your code does not disable view state protection by setting Page.EnableViewStateMac property to false. This is a safe setting only if the page does not use view state.

Are Your Global.asax Event Handlers Secure?

The global.asax file contains event handling code for application-level events generated by ASP.NET and by HTTP modules. Review the following event handlers to ensure that the code does not contain vulnerabilities:

Application_Start. Code placed here runs under the security context of the ASP.NET process account, not the impersonated user.

Application_BeginRequest. Code placed here runs under the security context of the ASP.NET process account, or the impersonated user.

Application_EndRequest. If you need to modify the properties of outgoing cookies, for example to set the "Secure" bit or the domain, Application_EndRequest is the right place to do it.

Application_AuthenticateRequest. This performs user authentication.

Application_Error. The security context when this event handler is called can have an impact on writing the Windows event log. The security context might be the process account or the impersonated account.

Session_End. This event is fired non-deterministically and only for the in-process session state mode.

Do You Provide Adequate Authorization?

Review the following questions to verify your authorization approach:

Do you partition your Web site between restricted and public access areas?

If your Web application requires users to complete authentication before they can access specific pages, check that the restricted pages are placed in a separate directory from publicly accessible pages. This allows you to configure the restricted directory to require SSL. It also helps you to ensure that authentication cookies are not passed over unencrypted sessions using HTTP.

How do you protect access to restricted pages?

If you use Windows authentication, have you configured NTFS permissions on the page (or the folder that contains the restricted pages) to allow access only to authorized users?

Have you configured the <authorization> element to specify which users and groups of users can access specific pages?

How do you protect access to page classes?

Have you added principal permission demands to your classes to determine which users and groups of users can access them?

Do you use Server.Transfer?

If you use Server.Transfer to transfer a user to another page, ensure that the currently authenticated user is authorized to access the target page. If you use Server.Transfer to a page that the user is not authorized to view, the page is still processed.

Server.Transfer uses a different module to process the page rather than making another request from the server, which would force authorization. Do not use Server.Transfer if security is a concern on the target Web page. Use HttpResponse.Redirect instead.

Web Services

ASP.NET Web services share many of the same features as ASP.NET Web applications. Review your Web service against the questions in the "ASP.NET Pages and Controls" section before you address the following questions that are specific to Web services. For more information about the issues raised in this section, see Chapter 12, "Building Secure Web Services."

Do you expose restricted operations or data?

How do you authorize callers?

Do you constrain privileged operations?

Do you use custom authentication?

Do you validate all input?

Do you validate SOAP Headers?

Do You Expose Restricted Operations or Data?

If your Web service exposes restricted operations or data, check that the service authenticates callers. You can use platform authentication mechanisms such as NTLM, Kerberos, Basic authentication or Client X.509 Certificates, or you can pass authentication tokens in SOAP headers.

If you pass authentication tokens, you can use the Web Services Enhancements (WSE) to use SOAP headers in a way that conforms to the emerging WS-Security standard.

Do You Constrain Privileged Operations?

The trust level of the code access security policy determines the type of resource the Web service can access. Check the <trust> element configuration in Machine.config or Web.config.

Do You Use Custom Authentication?

Use features provided by Web Service Enhancements (WSE) instead of creating your own authentication schemes.

Do You Validate All Input?

Check that all publicly exposed Web methods validate their input parameters if the input is received from sources outside the current trust boundary, before using them or passing them to a downstream component or database.

Do You Validate SOAP Headers?

If you use custom SOAP headers in your application, check that the information cannot be tampered with or replayed. Digitally sign the header information to ensure that it has not been tampered with. You can use the WSE to help sign Web service messages in a standard manner.

Check that SoapException and SoapHeaderException objects are used to handle errors gracefully and to provide minimal required information to the client. Verify that exceptions are logged appropriately for troubleshooting purposes.

Serviced Components

This section identifies the key review points that you should consider when you review the serviced components used inside Enterprise Services applications. For more information about the issues raised in this section, see Chapter 11, "Building Secure Serviced Components."

Do you use assembly level metadata?

Do you prevent anonymous access?

Do you use a restricted impersonation level?

Do you use role-based security?

Do you use method level authorization?

Do you use object constructor strings?

Do you audit in the middle tier?

Do You Use Assembly Level Metadata?

Check that you use assembly level metadata to define Enterprise Services security settings. Use the assemblyinfo.cs file and use attributes to define authentication and authorization configuration. This helps to ensure that the settings are established correctly at administration time. Although the administrator can override these settings, it provides the administrator with a clear definition of how you expect the settings to be configured.

Do You Prevent Anonymous Access?

Check that your code specifies an authentication level using the ApplicationAccessControl attribute. Search for the "AuthenticationOption" string to locate the relevant attribute. Check that you use at least call-level authentication to ensure that each call to your component is authenticated.
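An assembly-level authentication setting of this kind can be sketched as follows:

```csharp
using System.EnterpriseServices;

// Sketch: require at least call-level authentication so that every
// call into the application is authenticated.
[assembly: ApplicationAccessControl(
    Authentication = AuthenticationOption.Call)]
```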

Do You Use a Restricted Impersonation Level?

The impersonation level you define for your serviced components determines the impersonation capabilities of any remote server that you communicate with. Search for the "ImpersonationLevel" string to check that your code sets the level.
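An assembly-level setting of the identify impersonation level can be sketched as follows:

```csharp
using System.EnterpriseServices;

// Sketch: the remote server can identify the caller for authentication
// purposes, but cannot impersonate the caller.
[assembly: ApplicationAccessControl(
    ImpersonationLevel = ImpersonationLevelOption.Identify)]
```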

Check that you set the most restricted level necessary for the remote server. For example, if the server needs to identify you for authentication purposes, but does not need to impersonate you, use the Identify level. Use delegation-level impersonation with caution on Windows 2000 because there is no limit to the number of times that your security context can be passed from computer to computer. Windows Server 2003 introduces constrained delegation.

Note In Windows Server 2003 and Windows 2000 Service Pack 4 and later, the impersonation privilege is not granted to all users.

If your components are in a server application, the assembly-level ApplicationAccessControl attribute controls the initial configuration for the component when it is registered with Enterprise Services.

If your components are in a library application, the client process determines the impersonation level. If the client is an ASP.NET Web application, check the comImpersonationLevel setting on the <processModel> element in the Machine.config file.

Do You Use Role-Based Security?

Check that role-based security is enabled. It is disabled by default on Windows 2000. Check that your code includes the following attribute:

[assembly: ApplicationAccessControl(true)]

Do you use component level access checks?

COM+ roles are most effective if they are used at the interface, component, or method levels and are not just used to restrict access to the application. Check that your code includes the following attribute:
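The attribute referred to above can be sketched as follows:

```csharp
using System.EnterpriseServices;

// Sketch: enable access checks at the process and component level so
// that COM+ roles can be applied to interfaces, components, and methods.
[assembly: ApplicationAccessControl(
    AccessChecksLevel = AccessChecksLevelOption.ApplicationComponent)]
```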

If your method code calls ContextUtil.IsCallerInRole, check that these calls are preceded with calls to ContextUtil.IsSecurityEnabled. If security is not enabled, IsCallerInRole always returns true. Check that your code returns a security exception if security is not enabled.
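The fail-closed check described above can be sketched as follows; the role name is illustrative:

```csharp
using System.EnterpriseServices;
using System.Security;

// Sketch: fail closed when role-based security is not enabled,
// because IsCallerInRole always returns true in that case.
if (!ContextUtil.IsSecurityEnabled)
{
    throw new SecurityException("Role-based security is not enabled.");
}
if (!ContextUtil.IsCallerInRole("Manager"))
{
    // The caller is not authorized; reject the request.
}
```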

Do You Use Object Constructor Strings?

Search your code for "ConstructionEnabled" to locate classes that use object construction strings.

If you use object constructor strings, review the following questions:

Do you store sensitive data in constructor strings?

If you store data such as connection strings, check that the data is encrypted prior to storage in the COM+ catalog. Your code should then decrypt the data when it is passed to your component through the Construct method.

Do you provide default construction strings?

Do not do this if the data is in any way sensitive.

Do You Audit in the Middle Tier?

You should audit across the tiers of your distributed application. Check that your service components log operations and transactions. The original caller identity is available through the SecurityCallContext object. This is only available if the security level for your application is configured for process and component-level checks by using the following attribute:
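The attribute referred to above can be sketched as follows:

```csharp
using System.EnterpriseServices;

// Sketch: component-level access checks make SecurityCallContext
// information, including the original caller identity, available.
[assembly: ApplicationAccessControl(
    AccessChecksLevel = AccessChecksLevelOption.ApplicationComponent)]
```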

Remoting

This section identifies the key review points that you should consider when you review code that uses .NET Remoting. For more information about the issues raised in this section, see Chapter 13, "Building Secure Remoted Components."

Do you pass objects as parameters?

Do you use custom authentication and principal objects?

How do you configure proxy credentials?

Do You Pass Objects as Parameters?

If you use the TcpChannel and your component API accepts custom object parameters, or if custom objects are passed through the call context, your code has two security vulnerabilities.

If the object passed as a parameter derives from System.MarshalByRefObject, it is passed by reference. In this case, the object requires a URL to support call backs to the client. It is possible for the client URL to be spoofed, which can result in a call back to an alternate computer.

If the object passed as a parameter supports serialization, the object is passed by value. In this instance, check that your code validates each field item as it is deserialized on the server to prevent the injection of malicious data.

To prevent custom objects being passed to your remote component either by reference or by value, set the TypeFilterLevel property on your server-side formatter channel sink to TypeFilterLevel.Low.

To locate objects that are passed in the call context, search for the "ILogicalThreadAffinative" string. Only objects that implement this interface can be passed in the call context.

Do You Use Custom Authentication and Principal Objects?

If you use custom authentication, do you rely on principal objects passed from the client? This is potentially dangerous because malicious code could create a principal object that contains extended roles to elevate privileges. If you use this approach, check that you only use it with out-of-band mechanisms such as IPSec policies that restrict the client computers that can connect to your component.

How Do You Configure Proxy Credentials?

Review how your client code configures credentials on the remoting proxy. If explicit credentials are used, where are those credentials maintained? They should be encrypted and stored in a secure location such as a restricted registry key. They should not be hard-coded in plain text. Ideally, your client code should use the client process token and use default credentials.

Data Access Code

This section identifies the key review points that you should consider when you review your data access code. For more information about the issues raised in this section, see Chapter 14, "Building Secure Data Access."

Do you prevent SQL injection?

Do you use Windows authentication?

Do you secure database connection strings?

How do you restrict unauthorized code?

How do you secure sensitive data in the database?

Do you handle ADO.NET exceptions?

Do you close database connections?

Do You Prevent SQL Injection?

Check that your code prevents SQL injection attacks by validating input, using least privileged accounts to connect to the database, and using parameterized stored procedures or parameterized SQL commands. For more information, see "SQL Injection" earlier in this chapter.

Do You Use Windows Authentication?

By using Windows authentication, you do not pass credentials across the network to the database server, and your connection strings do not contain user names and passwords. Windows authentication connection strings either use Trusted_Connection='Yes' or Integrated Security='SSPI' as shown in the following examples.
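The examples referred to above can be sketched as follows; server and database names are placeholders:

```csharp
using System.Data.SqlClient;

// Sketch: Windows authentication connection strings. Server and
// database names are placeholders.
SqlConnection conn1 = new SqlConnection(
    "server=YourServer; database=Northwind; Integrated Security='SSPI';");
SqlConnection conn2 = new SqlConnection(
    "server=YourServer; database=Northwind; Trusted_Connection='Yes';");
```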

Do You Secure Database Connection Strings?

Review your code for the correct and secure use of database connection strings. These strings should not be hard coded or stored in plaintext in configuration files, particularly if the connection strings include user names and passwords.

Search for the "Connection" string to locate instances of ADO.NET connection objects and review how the ConnectionString property is set.

Do you encrypt the connection string?

Check that the code retrieves and then decrypts an encrypted connection string. The code should use DPAPI for encryption to avoid key management issues.

Do you use a blank password?

Do not. Check that all SQL accounts have strong passwords.

Do you use the sa account or other highly privileged accounts?

Do not use the sa account or any highly privileged account, such as members of sysadmin or db_owner roles. This is a common mistake. Check that you use a least privileged account with restricted permissions in the database.

Do you use Persist Security Info?

Check that the Persist Security Info attribute is not set to true or yes because this allows sensitive information, including the user name and password, to be obtained from the connection after the connection has been opened.

How Do You Restrict Unauthorized Code?

If you have written a data access class library, how do you prevent unauthorized code from accessing your library to access the database? One approach is to use StrongNameIdentityPermission demands to restrict the calling code to only that code that has been signed with specific strong name private keys.

Note In .NET 2.0, StrongNameIdentityPermission works only for partially trusted callers. Any demand, including a link demand, always succeeds for fully trusted callers, regardless of the strong name of the calling code.

How Do You Secure Sensitive Data in the Database?

If you store sensitive data, such as credit card numbers, in the database, how do you secure the data? You should check that it is encrypted by using a strong symmetric encryption algorithm such as 3DES.

If you use this approach, how do you secure the 3DES encryption key? Your code should use DPAPI to encrypt the 3DES encryption key and store the encrypted key in a restricted location such as the registry.

Do You Handle ADO.NET Exceptions?

Check that all data access code is placed inside try/catch blocks and that the code handles SqlException, OleDbException, or OdbcException, depending on the ADO.NET data provider that you use.
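With the SQL Server data provider, the pattern to look for is along the following lines (the logging call is a hypothetical helper):

```csharp
using System.Data.SqlClient;

SqlConnection conn = new SqlConnection(connString);
try
{
    conn.Open();
    // Execute commands against the database here.
}
catch (SqlException ex)
{
    // Log the details privately; do not return ex.Message to the
    // client, because it can reveal database and server details.
    LogException(ex);  // hypothetical logging helper
}
finally
{
    conn.Close();  // release the connection even if an exception occurs
}
```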

Do You Close Database Connections?

Check that your code cannot leave database connections open if, for example, an exception occurs. Check that the code closes connections inside a finally block, or that the connection object is constructed inside a C# using statement as shown below, which automatically ensures that the connection is closed.

using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open();
    // The connection is closed if an exception is generated or if
    // control flow leaves the scope of the using statement normally.
}

Summary

Security code reviews are similar to regular code reviews or inspections except that the focus is on the identification of coding flaws that can lead to security vulnerabilities. The added benefit is that the elimination of security flaws often makes your code more robust.

This chapter has shown you how to review managed code for top security issues including XSS, SQL injection, and buffer overflows. It has also shown you how to identify other more subtle flaws that can lead to security vulnerabilities and successful attacks.

Security code reviews are not a panacea. However, they can be very effective and should feature as a regular milestone in the development life cycle.
