Author: Dan

This tutorial will guide you through installing Node.js and Angular 2.0 on your Windows system so you can begin developing Angular 2.0 web applications. The setup is fairly simple, so there isn't much room for things to go wrong. For setup on Linux, the process is quite similar, but the commands will differ.

To install Node.js, download and run the Windows installer from nodejs.org; npm comes bundled with it. Once installed, you should be able to use it right away without needing to restart your PC, but you should test it first to make sure. Open a command prompt and run the following two commands to confirm that Node.js and npm are correctly installed with the path variables set up.

node --version

npm --version

You now have the means to get started.

Install TypeScript for Angular 2.0

TypeScript is the language used for Angular 2.0 development. You can install it using npm. Run the following command in the Windows command prompt to install TypeScript.

npm install -g typescript

And that's it; your system is now set up to begin developing applications with Node.js and Angular 2.0. To make sure everything works, here are some extra steps to set up a simple project.

Creating a Test Project

You should try to keep all of your development under one main directory. Make sure the command prompt is open as an administrator and run the following commands to set up a development area on the C drive.

cd C:\

mkdir Angular2 && cd Angular2

mkdir testproject && cd testproject

You now have the directory set up and ready to store a test project. You will now need to set this directory up so it can correctly host an Angular 2.0 project.

To get a project going, you need a TypeScript config file in the root of the project folder. This needs to be created for each project. You can create it manually once you are familiar with what goes inside it; for now, you can run the following command to build it.
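Assuming TypeScript was installed globally in the earlier step, the compiler itself can generate a starter config file. Run this from inside the testproject directory:

```shell
# generate a default tsconfig.json in the current directory
tsc --init
```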

Navigate to your testproject directory and you will see that there is now a file in this folder called tsconfig.json. Open this file and modify it so that it contains the following content.

JSON

{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es5",
    "noImplicitAny": false,
    "sourceMap": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "outDir": "build"
  },
  "exclude": [
    "node_modules"
  ]
}

From the command prompt, make sure you are in the same directory as the tsconfig.json file. Once here, run the following command.

npm init

You will be prompted to enter some information; the defaults will suffice for most of it. Once complete, you will have a file called package.json inside the project directory. Open this file and add some additional data so that it is structured like this when you are finished.

JSON

{
  "name": "testproject",
  "version": "1.0.0",
  "description": "Test Project",
  "main": "index.js",
  "scripts": {
    "start": "concurrent \"npm run tsc:w\" \"npm run lite\" ",
    "tsc": "tsc",
    "tsc:w": "tsc -w",
    "lite": "lite-server"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "angular2": "2.0.0-beta.7",
    "systemjs": "0.19.22",
    "es6-promise": "^3.0.2",
    "es6-shim": "^0.33.3",
    "reflect-metadata": "0.1.2",
    "rxjs": "5.0.0-beta.2",
    "zone.js": "0.5.15"
  },
  "devDependencies": {
    "concurrently": "^2.0.0",
    "lite-server": "^2.1.0",
    "typescript": "^1.7.5",
    "typings": "^0.6.8"
  }
}

From the same project directory as before, run the following command to install all of the project dependencies.

npm install

This will take a few minutes to complete; once it finishes, you will be ready to add some components to your app.
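With the dependencies in place, the start script defined in the package.json above can be used to compile the TypeScript in watch mode and serve the project with lite-server at the same time:

```shell
# runs "tsc -w" and "lite-server" concurrently, as defined in package.json
npm start
```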

Convert a C# DateTime to a Unix Timestamp

Quite often when dealing with an API or some other external service, you will find that Unix timestamps are returned to you, or you have to provide a timestamp in Unix form when sending a request back. Older versions of .NET do not have a built-in way to get the time in Unix format (from .NET 4.6 onward there is DateTimeOffset.ToUnixTimeSeconds()), but it is pretty simple to obtain one yourself. Here is what you need to do to convert a C# DateTime object to a Unix timestamp.

The following helper method converts a standard DateTime object into a Unix timestamp value. The value is quite large and will keep growing as time goes on, so make sure to use an appropriate variable type; long works fine for this scenario.

C#

public static long ConvertToUnixTime(DateTime datetime)
{
    DateTime sTime = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    return (long)(datetime - sTime).TotalSeconds;
}

It's a pretty simple method with little to explain. It is worth noting that you need to keep this as a long; a 32-bit int can only hold timestamps up to January 2038.

If you want to do things in reverse, i.e. convert a Unix timestamp back to a C# DateTime object, you can use the following helper method.
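The original snippet for this did not survive in this copy of the post, but a minimal counterpart to the method above might look like this (the method name is my own):

```csharp
public static DateTime ConvertFromUnixTime(long timestamp)
{
    // the Unix epoch, matching the sTime value used in the forward conversion
    DateTime sTime = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    return sTime.AddSeconds(timestamp);
}
```

Note that the result is in UTC; call ToLocalTime() on it if you need local time.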

Benchmark a Website Using ApacheBench

Back in the early days of DDoS attacks, this would have been a highly dangerous tool. Thankfully it is very easy to block a basic attack like this, so I don't see any risk in explaining how to use it. Apache has a tool built in that will send a predefined number of requests to a website to see if it can handle the load. Here is what you need to do to benchmark a website using Apache.

Apache has a fantastic benchmark tool that you can use to check the performance of your website. If you are expecting a flood of traffic for some particular reason, it is good to know in advance whether your server is capable of handling it.

The format of the command is very simple. Since most web servers running Apache use some flavor of Linux, I will give the Linux command line method. The application is called ApacheBench. It may already be installed; if it isn't, it is easy to install.

apt-get install apache2-utils

Once you have it installed, you can run a quick test to see how your server handles load. The format of the request is very simple: the first number is the total number of requests you want to send, and the second is the number of requests to send at the same time (concurrently). The concurrency value is the most important, as it is the one most likely to crash the server if too many requests arrive at once. You can play around with the values as you please.

The following command will send a total of 1000 requests to a single URL by grouping them in sets of 100 requests at a time. Make sure to include the “/” at the end of the website path.

ab -n 1000 -c 100 http://website.com/

If I run the above command against this website, ab prints a summary of the results, including requests per second, the mean time per request, and the number of failed requests.

Force a Website to Use SSL With PHP

There are various reasons you might want to force a website to use SSL. In general, if you have an SSL cert set up for your website, you should probably force all users onto https even if the page doesn't contain sensitive data. In an ideal world, you would do this on the server side by writing rules in the conf file that force all traffic over https. If you are in a position where you can't change the server config, it is also easy to force SSL with PHP. It is easy to do in pretty much any programming language, but for this example I will use PHP.
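The code sample did not survive in this copy of the post, but a minimal sketch of the idea looks like this. The helper name is my own; place it at the very top of the page, before any output is sent:

```php
<?php
// Decide whether the current request arrived over plain HTTP.
function needs_https_redirect(array $server)
{
    return empty($server['HTTPS']) || $server['HTTPS'] === 'off';
}

// Redirect to the same page over https. (The CLI check just keeps this
// snippet runnable outside a web server.)
if (PHP_SAPI !== 'cli' && needs_https_redirect($_SERVER)) {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}
```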

And there you have it; it is that simple. If you are using something like Cloudflare, it can get a little tricky depending on how you have Cloudflare configured to handle SSL. For a standard site, this is a simple way to force the use of SSL.

It is also worth mentioning that you must have an SSL cert configured on your server for this to work. If your site does not support HTTPS, you cannot make this work.

Download a File Using ASP MVC

You are looking to create some HTML element on your site that, when clicked, will trigger a file download. It seems like a pretty simple request, but there is a little more to it than first appears. By default, web browsers have a set way to handle certain file formats. For example, if you wanted a user to be able to download an image file, simply putting the path of the image in the href would open the image in a new tab/window rather than actually download the file. Here is what you need to download a file using ASP MVC.

To do this you will need to set up an ActionResult on one of your controllers, and set the href of the link to the URL of that controller action. For this example I created a controller called "Services" with a method called "DownloadFile" that accepts a value identifying the file. You could use some kind of ID that tells you where to find the file, or simply URL-encode the file path; the latter is probably easiest, but it is not secure. On a public site, you will want a DB table to manage the files so that each row has a unique ID you can use to identify the file. This is what I have done for this example.

I created a custom document object that works with the DB table, passing the ID of the row in the constructor (I know it's bad practice, but it's much quicker). This pulls all of the file information from the database, including the path of the file on disk.
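The controller code itself is not shown in what survives of this post, but based on the description it would look something like this sketch. The Document class and its FilePath/FileName properties stand in for the hypothetical DB-backed object described above:

```csharp
public class ServicesController : Controller
{
    public ActionResult DownloadFile(int id)
    {
        // hypothetical object that loads the file's path and name from the DB row
        var document = new Document(id);

        // File() returns a FileResult that triggers a download in the browser;
        // the generic octet-stream content type forces a save rather than display
        return File(document.FilePath, "application/octet-stream", document.FileName);
    }
}
```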

To gain access to this file using a download link, set up a link like the following.

<a href="/Services/DownloadFile/1">Download File</a>

And there you have it; this will trigger the file download in your browser. This can be used to download any file from a Windows machine: as long as your server has access to the drive, the file can be downloaded, including files outside the root of your web server's home directory.

Prevent SQL Injection With Classic ASP

Classic ASP might seem like a language that is dead and gone, but it is still alive, somehow. With a language this outdated, it can be difficult to fight against modern security risks, so knowing how to prevent SQL injection with classic ASP is a valuable bit of code to have at your disposal. With bots capable of hacking sites, you don't want to make things easy for them. Thankfully, there is a way to set up prepared statements using classic ASP.

If you are familiar with prepared statements, this shouldn't be too much trouble. I will admit this is a pretty ugly implementation, but ASP isn't exactly bleeding edge, so it is the best we have. The first and most awkward thing about prepared statements in classic ASP is that you need to declare the data type: if a field in the DB is of type int, you must declare this when creating the parameter. It seems odd, but this is how it goes. The following code shows a quick and easy way to pull a row from a database using an ID passed in the query string.
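The code block did not survive in this copy of the post, so here is a sketch of the technique described, using ADODB.Command. The connection string, table and column names are placeholders for your own:

```asp
<%
Dim conn, cmd, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open "your-connection-string-here"

Set cmd = Server.CreateObject("ADODB.Command")
Set cmd.ActiveConnection = conn
cmd.CommandText = "SELECT * FROM users WHERE id = ?"

' The data type must be declared explicitly: 3 = adInteger, 1 = adParamInput
cmd.Parameters.Append cmd.CreateParameter("@id", 3, 1, , CLng(Request.QueryString("id")))

Set rs = cmd.Execute
If Not rs.EOF Then
    Response.Write rs("name")
End If

rs.Close
conn.Close
%>
```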

The only things that need to change are the parameters you pass into the CreateParameter function. As mentioned previously, you need to declare the data type when adding a command parameter. A full list of the data type codes can be found here: http://www.w3schools.com/asp/met_comm_createparameter.asp

This is a pretty solid way to prevent SQL injection with classic ASP. Nothing is ever bulletproof, so always be on the lookout for ways to further improve security by validating data even further, to prevent any bad data making its way into a query.

Check If an Email Address Is Valid

When you have obtained a list of emails from a source that did little or no validation of whether each address was real, you will be stuck trying to determine which ones are valid. You don't want to risk sending email to these users without checking, as a high bounce rate is a quick way to get your email server blacklisted. There is a two-step method you can use to validate an email address. This assumes you have first filtered out values that are missing an @ symbol and a domain. This guide will show you how to check if an email is valid; for example, how can you tell if john@somesite.com is real or fake?

Step 1

The first thing to do is check whether the domain name is valid and has an active mail server/MX record associated with it. Sometimes an email was valid at one stage, but the website has since been shut down; sending an email to this address won't do anything. By checking that the domain name is real, you can filter out made-up domain names that never existed, as well as emails at valid websites that are not capable of receiving mail.

For the example I am going to use PHP to write the script. Many other languages have similar methods, so this should be fairly easy to do with other programming languages. PHP has a function called getmxrr(), which obtains the MX record for a domain. For those who do not know, an MX record is used in the DNS settings to point to the IP of a domain's email server. If one is missing, the domain is not capable of receiving email and the address is therefore invalid.

$emailparts = explode("@", $email);
getmxrr($emailparts[1], $mxinfo);
if (isset($mxinfo[0])) {
    // this means there was a valid MX record returned
}

Just because a domain has an MX record does not mean the email address is valid. In fact, sending bad emails to a server that really exists makes getting blacklisted even more likely.

Step 2

This step is the most difficult to test while also being the most important. If someone provides an email like asdasd@gmail.com, step 1 will report it as valid: gmail.com is a valid email domain, but asdasd is likely a non-existent user. This step lets you determine whether the inbox actually exists. Keep in mind that it requires you to contact the email server directly to essentially ask if the inbox exists, so I suggest running it from a test machine to avoid any risk of blacklisting your IP; many such requests in a short period might be considered suspicious.

If you have worked with mail servers in the past, you may be familiar with the HELO command. Together with MAIL FROM and RCPT TO, it can be used to check whether a mailbox exists: if you send the commands and get a positive response, the inbox exists; if not, it's fake. I have combined step 1 with step 2 in a complete script below that will check if an email is valid and filter out bad mailboxes.
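The combined script did not survive in this copy of the post, but a sketch of the two steps together might look like this. The HELO hostname and MAIL FROM address are placeholders you should replace with your own domain:

```php
<?php
// Sketch of the two-step email check described above.

function smtp_code($line)
{
    // SMTP replies start with a three digit status code, e.g. "250 OK"
    return (int) substr($line, 0, 3);
}

function email_is_valid($email)
{
    $parts = explode("@", $email);
    if (count($parts) !== 2) {
        return false;
    }

    // Step 1: the domain must have an MX record
    if (!getmxrr($parts[1], $mxhosts) || !isset($mxhosts[0])) {
        return false;
    }

    // Step 2: ask the mail server whether the mailbox exists
    $smtp = @fsockopen($mxhosts[0], 25, $errno, $errstr, 10);
    if (!$smtp) {
        return false;
    }

    fgets($smtp);                                   // read the 220 banner
    fputs($smtp, "HELO example.com\r\n");           // your own hostname here
    fgets($smtp);
    fputs($smtp, "MAIL FROM: <check@example.com>\r\n");
    fgets($smtp);
    fputs($smtp, "RCPT TO: <" . $email . ">\r\n");
    $valid = smtp_code(fgets($smtp)) === 250;       // 250 means the mailbox exists
    fputs($smtp, "QUIT\r\n");
    fclose($smtp);

    return $valid;
}
```

Be aware that some mail servers accept every RCPT TO to defeat exactly this kind of probing, so treat a positive answer as "probably valid" rather than certain.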

Rename an XML Node Using C#

I recently hit an issue where I needed to change the name of an XML node. It ended up being a lot more complicated than I had expected. node.Name is a read-only field, so you can't take the simple route and rename it directly. Instead, I had to create a new node and delete the old one. Not overly complex, but it is a little messy with the limitations of XmlDocument, which requires references back to the old document. I also wanted to make this solution reusable. Here is the solution I came up with to rename an XML node using the C# XmlDocument.

I will start with the method and explain why it seems overcomplicated for such a simple task. The first parameter is the doc that oldRoot belongs to. This needs to be passed because you can't create an XmlNode without an XmlDocument, and if you want to add a node to a document, it must have been created with that document. The loop takes all of the elements from the old node and adds them to the new node, so there are then two nodes with the exact same content, one of which has the new name you wanted.
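The method itself did not survive in this copy of the post; based on the description, it might be sketched like this (I have used ReplaceChild to swap the nodes, since appending a second root element to a document would throw):

```csharp
public static void RenameNode(XmlDocument doc, XmlNode oldRoot, string newName)
{
    // the new node must be created by the same document it will be added to
    XmlNode newRoot = doc.CreateElement(newName);

    // copy every child of the old node into the new one
    foreach (XmlNode child in oldRoot.ChildNodes)
    {
        newRoot.AppendChild(child.CloneNode(true));
    }

    // carry the attributes across as well
    foreach (XmlAttribute attr in oldRoot.Attributes)
    {
        ((XmlElement)newRoot).SetAttribute(attr.Name, attr.Value);
    }

    // swap the new node in where the old one was and discard the old one
    oldRoot.ParentNode.ReplaceChild(newRoot, oldRoot);
}
```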

Once the new node has all the data of the old node, you can swap the new node into the document in place of the old one. That is all you need to do. There is no real need for a return type here: XmlDocument is a reference type, so any changes are made directly to the doc you passed in, and when the method completes the change will already be visible in that document.

Disable Allow URL Include in PHP

This setting is up there as one of the most dangerous you can have enabled on a web server. It allows someone to inject a tiny piece of code into your system that could completely compromise your entire server. With some bad programming practices in place, it could even let someone compromise your system without injecting any code at all. If you are unsure whether you need this enabled, the answer is almost certainly NO. Disable it immediately.

What Does Allow URL Include Do?

When you are writing PHP scripts, it is possible to include another script by means of the include or require statements. A super simple example of this would be a crude web page.

PHP

<?php
require "database.php";
include "header.php";
?>
<h1>My web page</h1>
<?php
include "footer.php";
?>

This is a fairly common way to use include and require. When allow_url_include is enabled, you can use a URL as the string inside require or include, which makes PHP include a remote file directly into the executing script. If you have a script that does something incredibly stupid, such as building an include path from user input, you are opening the door to a world of pain. Even if you are careful, this can still be crazy dangerous, simply because it is not something most scanning tools would flag as dangerous.

Let's say someone hacks your WordPress website. They pick some random script in the WordPress core and add an include pointing at a remote script the hacker has placed elsewhere. On your server, it is a tiny piece of code that doesn't look scary at all; the script being included is where the damage is done.

Allow URL include is one of those things that has very few uses. When it's needed it's powerful, but 99% of the time you could easily work around the need for it. It is highly recommended you disable this directive on your web server.

How To Disable Allow URL Include

You can disable this directive from within the php.ini file on your web server. Open the file and search for a line that contains "allow_url_include". Create or edit this line to read as follows, and make sure there is no semicolon (;) in front of it, or the setting will not apply (semicolons mark comments in php.ini).
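The line itself did not survive in this copy of the post; the directive should read:

```ini
allow_url_include = Off
```

Restart your web server (or PHP-FPM) afterwards so the change takes effect.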

Is PHP allow_url_fopen Dangerous?

When it comes to risky PHP settings, allow_url_fopen is one that can be dangerous, but it is also very useful, and in most cases it will need to remain enabled if you have written some advanced scripts. A common use for this setting is with a REST-based API. For example, if you want to get item information from a REST URL, you could use something like the following.

PHP

file_get_contents("https://somesite.com/products/123");

Normally the file_get_contents function is used to read files from the local file system. When allow_url_fopen is enabled, you can pass a URL to this function to fetch a remote file as if it were stored on the local web server.

Why Is it Dangerous?

The general answer is that it isn't all that dangerous. Like any function, it can be dangerous if the code is written carelessly, but in general it shouldn't be a problem. The following example shows how it could become dangerous: say you have a form field that accepts a file path, and you read the contents of that file when the form is submitted. If a URL is entered instead of a file path, the URL will be queried, and this could open some dangerous doors.

PHP

file_get_contents($_POST['filepath']);

If you do not need this functionality, I would suggest you disable it. Otherwise, it isn't too much of a risk to keep it enabled; just be very careful how and where it is used, and always validate data before passing it to powerful functions.