David Meents: Software Engineering and DevOps Tutorials
https://www.davidmeents.com

How to Create a Secure 3 Tiered Highly Available Network in AWS: Part Two
Sat, 17 Jun 2017

Welcome back! In the previous tutorial, we started working to create our own secure and highly available network on AWS. We provisioned all of the infrastructure we’re going to need, including the VPC, a bunch of subnets for our different network layers, and the internet and NAT gateways. In this final segment of our two-part series, we’re going to configure all the security to lock these layers down and allow only the traffic we want into our network. Then we’ll deploy a simple WordPress site to make sure everything works. Let’s get started.

Configure Route Tables

As of right now, we have lots of different parts, but no way for them all to interact with each other. Let’s fix that first. Open up your AWS console and go back to the VPC Dashboard. On the left-hand side menu, you’ll see “Route Tables” – navigate there. Immediately you’ll notice there’s an unnamed route table already in existence. When we created our VPC, AWS created a default one for us that allows all of our subnets to communicate with each other. There is no internet access currently available to our network.

Start by renaming this route table so we can easily identify it later. I went with “foo-default”. After this is done, select the route table and you’ll notice a menu appears in the lower half of the screen. The summary tab is pretty self-explanatory, so let’s start with the “Routes” tab. Here we see two entries: one for our VPC CIDR, 10.0.0.0/16, and one for a crazy-looking IPv6 CIDR. Together, the destination and target columns say that any traffic addressed to our VPC stays “local.” These settings are perfect as they are right now, but we’ll come back to them shortly.

Move over to the “Subnet Associations” tab. You should see two sections. The bottom part is the available subnets; the top is what’s been assigned to this route table. For the default route table, we’re going to associate the NAT subnet we created, so click the edit button and check the box next to “foo-nat.” After a moment everything should finish loading, then return to the “Routes” tab.

Now we want to grant everything routing through this gateway access to the internet. Click the edit button and select “Add another route.” For the destination, we want any and all traffic, which in CIDR notation is 0.0.0.0/0. When you click in the “Target” field, you should see two available options, one of them being the Internet Gateway. Select it and click “Save”. What we’ve essentially done here is say, “Take all traffic coming from the associated subnet and route it through the IGW.” You’ve just given your NAT access to the internet!
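To make the behavior concrete, here’s a small sketch in plain Python of how this route table resolves a destination. This is standard routing logic, not any AWS API: the most specific matching prefix wins, so VPC-internal traffic stays local while everything else heads for the IGW.

```python
import ipaddress

# The NAT subnet's route table after our edit: VPC traffic stays "local",
# everything else goes out through the Internet Gateway.
routes = [
    ("10.0.0.0/16", "local"),
    ("0.0.0.0/0", "igw-foo"),  # hypothetical gateway ID
]

def resolve_target(ip, routes):
    """Pick the most specific (longest-prefix) matching route, as a router would."""
    addr = ipaddress.ip_address(ip)
    matches = []
    for cidr, target in routes:
        net = ipaddress.ip_network(cidr)
        if addr in net:
            matches.append((net.prefixlen, target))
    # Longest prefix wins, so 10.0.0.0/16 beats 0.0.0.0/0 for internal IPs.
    return max(matches)[1]
```

An internal address like 10.0.5.4 resolves to “local”, while a public address like 8.8.8.8 falls through to the Internet Gateway route.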

Create Route Tables for the Network Layers

Now it’s time to make three more to handle the traffic in our subnets. At the top of the screen click “Create Route Table.” Give it the name “foo-presentation” and select “foo-network” as the VPC. As mentioned before, our presentation layer should be available to the outside world (the internet), so in the “Routes” tab edit and allow all traffic through the IGW as we did in the previous section. Then on the “Subnet Associations” tab associate both presentation subnets.

Things will be a little different for the next two layers. The only way traffic should be able to reach our application and core layers is through the NAT or from the layer directly above. So create two new Route Tables called “foo-application” and “foo-core.” In each of them, associate their respective subnets. In the “Routes” tab, you might be able to guess what’s going to happen: we want to route all traffic through the NAT, so for the destination enter 0.0.0.0/0, and the target should be the NAT.

To continue our castle analogy, what we’ve just done is provide a map to our citizens on how to navigate around. Traffic can now enter and leave the castle however it pleases. That’s more freedom than we want, though; we want to directly control what can move where, so let’s go to the next section.

Configure Access Control Lists

We’re nearly there; all we have left to do is configure the access control lists and security groups for our network. Under the security section on the left, click “Network ACLs.” Again you should see a single unnamed entry that Amazon created by default. An ACL, or Access Control List, allows us to filter where traffic can go inside our network.

The first thing we want to do is repurpose this default ACL for our NAT. The NAT ACL configuration is very straightforward. The Network Address Translation gateway needs unrestricted access to the Internet Gateway so it can handle requests from our internal services. Rename the existing ACL to “foo-nat,” then click the item so that the menu appears at the bottom of the screen.

All of the settings under the “Inbound Rules” and “Outbound Rules” tabs we’re going to leave alone. They are set to allow all traffic in and out – exactly what we want. Move to the “Subnet Associations” tab. This ACL currently controls all of the subnets, and we’re unable to simply remove them. The reason for this is that every subnet must be associated with exactly one ACL at all times. So let’s start making some new ones.

Create more ACLs

Just like many of the other screens so far, click “Create,” supply a name tag like “foo-presentation,” and select our VPC. Do this for all three of the layers. By default, all of the inbound and outbound rules are open to everything; ignore that for now and associate the appropriate subnets to each ACL.

Now we need to define the rules we’re going to use for each access control list. Let’s start with the outermost layer and work our way down. Select the “foo-presentation” configuration and review the inbound and outbound rules. You can probably guess that the presentation layer is going to be pretty easy. This layer is publicly available and needs complete access to the internet. Click the “Edit” button and create your first rule. Set the rule number to 100, select “ALL Traffic” from the “Type” dropdown and supply the CIDR notation to allow all traffic (0.0.0.0/0).

Right off the bat, let’s assume for this tutorial that all outbound rules are going to be configured to allow all traffic. I’m not particularly concerned about my app sending malicious information out of my network. I’m more concerned about malicious interaction coming into my system. Anybody can leave the castle, but only those with the right permissions can come in. Set all outbound rules to allow all traffic.

In the “Inbound Rules” tab of the “foo-application” ACL, let’s start making some changes. We’re going to do some forward thinking and open up a few common ports and make some solid assumptions here. Foremost, we only want our other subnets to be able to talk to this layer, with one exception – we’ll get to that later. For now, keep in mind that all of our “Source” CIDRs could be coming from our entire network, 10.0.0.0/16. In general, anything below a layer should be able to talk “up,” but only those with access should be able to talk “down.”

You can read these rules like this: “Only TCP traffic on port 80 can come from 10.0.0.0/16.” Define the below rules in your application ACL.

You’ll notice that rule 400 is a little strange. We created a custom TCP rule to allow a range of ports from our NAT’s subnet (10.0.0.0/24). We do this because returned traffic often comes back through one of these ports. They’re called ephemeral ports, and they caused me lots of trouble when I was first starting out on this. We also added an SSH rule so we can access any server deployed into this subnet (from within the network, at least).
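The rules table itself didn’t survive in this copy of the post, so the entries below are reconstructed from the surrounding text (HTTP from the VPC, SSH from inside the network, ephemeral ports from the NAT’s subnet) and should be treated as assumptions. The evaluation logic, though, is exactly how network ACLs behave: rules are checked in ascending rule-number order, the first match wins, and an implicit deny sits at the bottom.

```python
import ipaddress

# Reconstructed inbound rules for the "foo-application" ACL -- the exact
# table isn't shown in this extract, so treat these entries as assumptions.
# Format: (rule_number, protocol, (port_from, port_to), source_cidr, action)
app_inbound = [
    (100, "tcp", (80, 80), "10.0.0.0/16", "allow"),      # HTTP from the VPC
    (200, "tcp", (443, 443), "10.0.0.0/16", "allow"),    # HTTPS from the VPC
    (300, "tcp", (22, 22), "10.0.0.0/16", "allow"),      # SSH from inside the network
    (400, "tcp", (1024, 65535), "10.0.0.0/24", "allow"), # ephemeral ports from the NAT subnet
]

def acl_decision(rules, protocol, port, source_ip):
    """Evaluate rules in ascending rule-number order; first match wins."""
    addr = ipaddress.ip_address(source_ip)
    for _, proto, (lo, hi), cidr, action in sorted(rules):
        if proto == protocol and lo <= port <= hi and addr in ipaddress.ip_network(cidr):
            return action
    return "deny"  # the implicit "*" rule at the bottom of every ACL
```

So HTTP from anywhere inside the VPC is allowed, ephemeral-port traffic is allowed only from the NAT’s /24, and anything else falls through to the implicit deny.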

Now, configure the core ACL.

You’ll notice here that we only allowed inbound traffic from its parent subnets (10.0.10.0/24 and 10.0.11.0/24).

What we’ve essentially done here is to create the “gates” into our castle and posted guards at each of them that interrogate anyone who tries to enter. Take a breather – we’re done with ACLs!

Configure Security Groups

Security groups allow an additional level of control over what traffic can enter a server within a subnet. We’ll use them pretty sparingly here, solely to limit incoming traffic.

Click the “Security Groups” option on the left and you’ll be greeted again by the default security group. Rename it like the others: “foo-default.” You’ll notice the structure of a security group is similar to an ACL. Ensure the inbound and outbound rules for this standard SG are set to allow all traffic. Now create a new security group for each of the three layers in our network.

In the presentation layer create a rule that allows all traffic from anywhere. In the application layer, create a rule that allows all traffic, but only from the presentation layer. For the core, repeat the process but only allow traffic from the application layer. All traffic should be allowed to leave.
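Expressed as data, the layered inbound policy looks like this. It’s just an illustration of the chain we configured, not an AWS API call, and the group names are simply this tutorial’s:

```python
# Each security group admits all traffic, but only from the layer directly
# above it; the presentation layer admits the internet itself.
inbound_sources = {
    "foo-presentation": ["0.0.0.0/0"],        # anywhere on the internet
    "foo-application": ["foo-presentation"],  # only the presentation SG
    "foo-core": ["foo-application"],          # only the application SG
}

def reachable_path():
    """Walk the chain from the internet down to the core, one layer at a time."""
    path = ["0.0.0.0/0"]
    while True:
        next_layers = [sg for sg, srcs in inbound_sources.items() if path[-1] in srcs]
        if not next_layers:
            return path
        path.append(next_layers[0])
```

Walking the chain shows why a request has to pass through every layer in order: the internet can only reach the presentation group, which alone can reach the application group, which alone can reach the core.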

Test out the network

We’re done! You’ve just made a private, secure and highly available network that you can use knowing you’re significantly safer from outside interference. But how do we test it? In this tutorial, we’re going to do so by deploying a Bitnami WordPress EC2 instance and hitting it from a browser.

Due to the nature of this tutorial, we’re going to move pretty quickly and not touch on the details of deploying an AMI with EC2. Look for future articles to help with that.

Navigate to the EC2 Dashboard in AWS and click “Launch Instance.” On the left side select “AWS Marketplace,” search for “WordPress,” then choose the most recent Bitnami build. In the configuration screen, select “T2.micro” and click “Next,” not “Review and Launch.” On the next screen, we get to configure where we’re going to put this instance. Select the VPC we just built from the dropdown. Then choose the subnet “foo-application” in zone 2a. Leave the rest of the network configurations alone.

Click next a few times until you get to the “Assign Security Groups” page, and choose the existing security group we made for the application layer. Finally, click “Create.”

We just built our application in the “Application” layer, which means we need a way for traffic to get to it. So let’s provision an Elastic Load Balancer, which we’ll put in the presentation layer. Adding the load balancer in front is a much more secure way of handling traffic, as visitors will connect to the ELB rather than directly to our website.

Navigate to “Load Balancers” in the EC2 dashboard and begin configuration of a “Classic Load Balancer.” On the next page, you’ll be asked to create the ELB inside a VPC. Do so in our network. You’ll then need to select the subnets where you want the ELB placed. Select both presentation subnets. Click next and assign the ELB the presentation security group. Ignore the warning about HTTPS for this tutorial. Change the health check to ping TCP on port 80. Then add the WordPress instance we just made in the previous section.

With a bit of luck, a few drawings, and maybe a couple of readthroughs, you’ll be able to view your new WordPress instance from the ELB public URL (found on the “Description” tab of the load balancer).

Summary

That’s it! You have successfully created your internal network on Amazon’s VPC. Your apps will be more secure, your customers happier, and you’ll get more sleep knowing your data is safe.

Thanks for reading and if you need any help be sure to leave a comment below! Happy coding!

How to Create a Secure 3 Tiered Highly Available Network in AWS: Part One
Sat, 17 Jun 2017

Recently I’ve been occupied with launching my application – I’m getting very close! I’ve been spending a lot of time thinking about what network infrastructure I wanted to put it on and what was the appropriate amount of security to use. I’m happy to say that I finally finished the bulk of the work on this solution. My app, on the other hand, needs a little more help. Look to see it soon!

It wasn’t a difficult decision to go with Amazon Web Services as my suite of tools. I’ve had a lot of experience with them on some other projects (getobservatory.com, for example) and it just felt like the right fit. Plus, I wanted to dig into some serious networking strategies to provide the most secure solution to handle my (soon-to-be) users’ data. AWS gave me the affordability I needed with all the features I could dream of – and then some. It’s for these reasons (and a pinch of favoritism) that I chose to use AWS’ VPC and subnetting solution to host and manage my applications securely and with high availability. This two-part series is going to cover how to create your own highly available and secure network using Amazon’s VPC.

Getting Started: An Overview of your Network

The concepts involved in creating your Virtual Private Cloud can be a little hard to grasp, so I want to start by getting an understanding of what exactly it is we’re going to be making. It took me a dozen or more tries to get this to work, referencing many other tutorials and ‘real life’ experts, and my biggest complaint was the ambiguity of what it was I was doing.

So to set us on the path to success, let’s look at what a VPC is. VPC stands for ‘Virtual Private Cloud.’ A VPC allows a user to create an internal network ‘virtually’ through Amazon’s services. This amount of control is important because by controlling our network we can pick and choose what traffic goes where and what it can access. For example, we don’t want just anybody to be able to send internet requests to our database, and we don’t want threatening individuals to be able to route through whatever port they want into our servers.

Then there are subnets, a way for us to split our network into different layers. In front of these layers, we can define rules grouped into Network Access Control Lists (ACLs). These rules tell our network what’s allowed into that subnet. Furthermore, we can assign Security Groups to the servers inside our subnets, another way to fine-tune their inbound and outbound traffic.

To ensure our application is always available, you want to make sure it’s split across two data centers, or Availability Zones. That way, if one goes down, you still have access to the other. And lastly, between all of this, we have various routing rules. Don’t worry; we’ll go over this in more detail later. So far, just think of it as a walled city, with guards at every gate.

The image below illustrates this as best as I could. It’s a spinoff of the diagram I use for my network, where I keep track of all of my projects. If you’re not already planning on doing so, I highly recommend that you map out your network. It will save you a lot of time and frustration. I use lucidchart.com, but feel free to use whatever makes you comfortable.

Now let’s get started!

Creating a VPC

The first thing we’re going to want to do is build the Virtual Private Cloud. If you haven’t already, create an AWS account. From the central console, navigate to VPC (under the Networking & Content Delivery section). On the left side of the screen click “Your VPCs.” At the top of the screen click the button “Create VPC”. You’ll need to provide a name tag. I used something like “foo-network.”

The next field is where we define the range of IP addresses that the network will have available to it. A range of addresses is written in CIDR notation and looks like this: 10.0.0.0/16. The details of CIDR notation delve deeply into networking science and other intricate technologies that we won’t address today, but in general, what this notation means is that our network will have 65,536 addresses (give or take a few reserved IPs) available for use. That’s quite a few, and should be plenty for just about anyone. The first set of digits, 10.0.0.0, is where the IPs start. They then increment like this: 10.0.0.0, 10.0.0.1, 10.0.0.2 … 10.0.0.255. From here it rolls into 10.0.1.0, and the process starts over again. Eventually, you reach 10.0.255.255, the uppermost limit of the network. We’ll talk a bit more about this later on.
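You can verify these numbers yourself with Python’s standard ipaddress module:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# A /16 leaves 32 - 16 = 16 bits for hosts: 2**16 = 65,536 addresses.
print(vpc.num_addresses)                   # 65536
print(vpc[0], vpc[1], vpc[255], vpc[256])  # 10.0.0.0 10.0.0.1 10.0.0.255 10.0.1.0
print(vpc[-1])                             # 10.0.255.255 -- the uppermost limit
```

Indexing the network walks the addresses in exactly the rollover order described above: .0.255 is followed by .1.0, all the way up to 10.0.255.255.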

For now, fill in 10.0.0.0/16 into the “IPv4 CIDR block*” field, and enable the Amazon provided IPv6 block option. Leave “Tenancy” set to default, and click “Yes, Create.”

Creating the Subnets

We have allocated a large block of IP addresses, but now we want to start splitting them into layers so that we can control access into each one. Going with the same castle analogy from earlier, imagine a large circular wall. Outside this wall is the internet; inside it is our network. Inside this wall build another one, and yet another one inside that. You’ll have three completely separate areas within your castle. The centermost section we call the “Core”; it’s where we keep our data and most critical information. Outside that is the “Application” layer, where our apps and APIs will reside. Lastly is the “Presentation” layer, where the internet is welcomed in with open arms.

So let’s go and create these different sections, or subnets, on our network now. In your AWS console click “Subnets” and select “Create Subnet” at the top.

You’ll see a pretty simple screen asking for a name tag. Let’s start with the outermost layer and call it “foo-presentation.” We then want to select our network from the “VPC” dropdown. The next section shows us our current CIDR configuration for this network, and below that we select our Availability Zones. It’s here we can begin introducing high-availability to our network. By creating a second subnet in a different zone for the same layer in our castle, we can ensure we always have one available should the other go down.

Finally, we get to defining our CIDR configuration for the subnet. Due to some fancy mathematics, the larger the value following the 10.0.0.0, the smaller the pool of IP addresses becomes. So here we want to carve out a single block of 256 IP addresses from our network, so we’ll put in 10.0.1.0/24. What this equates to is all the IP values from 10.0.1.0 through 10.0.1.255. We’re going to create six subnets with the values below:
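The table of values didn’t survive in this copy of the post. The plan below is reconstructed from CIDRs referenced elsewhere in the series (10.0.1.0/24 for presentation, 10.0.10.0/24 and 10.0.11.0/24 for application, 10.0.0.0/24 for the NAT); the presentation zone-b and core blocks are my assumptions, following the same spacing pattern:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# Reconstructed subnet plan -- the presentation, application, and NAT CIDRs
# appear elsewhere in this series; the core blocks are assumed to follow the
# same "leave a gap between layers" pattern.
subnets = {
    "foo-nat": ["10.0.0.0/24"],
    "foo-presentation": ["10.0.1.0/24", "10.0.2.0/24"],  # zones a and b
    "foo-application": ["10.0.10.0/24", "10.0.11.0/24"],
    "foo-core": ["10.0.20.0/24", "10.0.21.0/24"],        # assumed values
}

# Every block must fit inside the VPC, and no two blocks may overlap.
blocks = [ipaddress.ip_network(c) for cidrs in subnets.values() for c in cidrs]
assert all(b.subnet_of(vpc) for b in blocks)
assert not any(a.overlaps(b) for i, a in enumerate(blocks) for b in blocks[i + 1:])
```

The two assertions at the bottom are worth running against whatever plan you draw up: every subnet must sit inside the VPC’s CIDR, and AWS will reject overlapping blocks.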

We leave some space between our CIDR blocks in case we ever need to add new ones to an existing layer. You could create three more subnets for a third zone if you so chose, but we won’t be doing so for this tutorial. Tell Amazon to specify a custom IPv6 CIDR for you in case you need them later. Lastly, you’ll notice we created a seventh subnet for our NAT, which we’ll learn about in the following sections.

After they’re all built, we can move on!

Creating an Internet Gateway

We now have this castle nicely walled off from the rest of the web, but there’s no way to get in or out of the network. For that, we need to provision an Internet Gateway. An IGW is super easy to create, and just allows an internet connection to our network. Think of it as the gatekeeper of our castle, standing guard outside telling travelers where to go.

In AWS, click “Internet Gateway,” and provide a name tag. For this tutorial, I used “foo-gateway.” We still need to assign it to our VPC though. Click the IGW and at the top select “Attach to VPC.” Pick your network from the dropdown and attach it.

Creating a Network Address Translation Gateway

The presentation layer of our network should have access to the internet without any complications, as it’s the main entrance into our network. However, our internal subnets will still need to be able to use the internet to function. To do this securely, we will create a NAT (Network Address Translation) gateway. A NAT gateway simply remaps the IP addresses of all of the servers connecting through it to a single public IP.
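Conceptually, the NAT keeps a translation table: many private sources share one public Elastic IP, told apart by port. Here is a toy sketch of that idea (the EIP shown is hypothetical, and a real NAT gateway is far more involved):

```python
import itertools

# One public Elastic IP fronts every private server behind the NAT.
PUBLIC_IP = "198.51.100.7"      # hypothetical EIP
_ports = itertools.count(49152) # hand out ephemeral source ports
table = {}                      # public_port -> (private_ip, private_port)

def translate_outbound(private_ip, private_port):
    """Rewrite a private source address to the shared public IP plus a unique port."""
    public_port = next(_ports)
    table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def translate_inbound(public_port):
    """Route a reply arriving on the shared IP back to the private server that asked."""
    return table[public_port]
```

This is also why the ephemeral-port ACL rule matters later in the series: replies come back addressed to those high-numbered ports.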

From the same page on AWS, select NAT Gateway on the left side and select “Create NAT Gateway” at the top of that new page. You’ll be asked to supply a subnet where the NAT will reside; we’re going to use the dedicated “foo-nat” subnet we created earlier (10.0.0.0/24). We also need to choose the public IP address we’re going to translate to. Just click “Create New EIP” or choose an existing one from the drop-down list. Finally, click “Create.” You’ll be asked next to assign it to a route table; just exit out of this box, as we will do that in the next section.

In Review

So far we’ve created all the infrastructure we’re going to need for our network. You’ve defined the container where all of your layers will go (the VPC), and you’ve also created a three-tiered set of subnets across two data centers (zones). We’ve built all the walls of the castle, and we have the gatekeeper ready to send and receive internet connections. All we need now is to connect everything together and provide the map into our network.

In the next tutorial, we’ll create our Route Tables, define security protocols with ACLs and Security Groups, and make sure everything’s working great by deploying a sample project into our network.

Add SSL Certification to EC2 WordPress Instance to use HTTPS
Sun, 12 Feb 2017

Using HTTPS in your apps and websites is becoming increasingly important. Using it gets you in the good graces of Google, keeps your customer’s data safe and secure, and (with Amazon Web Services) opens the door to efficiently load balancing your traffic. I wanted to add SSL to this website to not only get the experience and learn how to tackle the task with AWS but to also rake in that sweet SEO bonus (of course). So I wrote up the steps I took (minus the 4 hours of banging my head on the keyboard) to get this successfully installed. It was great practice and a lot of fun despite the endless redirect loop that WordPress stumped me with, so I highly recommend it.

For this guide, we’re going to add an SSL certificate to a load balancer that we’re going to put in front of our WordPress instance. We’re going to tackle everything that entails, from updating our DNS and Search Console to configuring WordPress to properly handle HTTPS traffic all the way through to your admin panel.

What does it mean to add SSL

Simply put, SSL is a way to encrypt all data sent from one server to another. Doing so is useful for all sorts of services, including banks, online retailers, and all kinds of applications that handle sensitive and personal information. Websites that provide SSL encryption are served over ‘HTTPS’ (the secured version of ‘HTTP’). This increase in security is why Google now gives SEO weight to SSL-encrypted websites – and why I wanted it here! So let’s get started!

Provision your SSL Certificate with AWS Certificate Manager

Before beginning this step, make sure that the WHOIS information for your website is correct and you have access to the email account listed.

The easy steps first: log in to your console and select ‘Certificate Manager’ from the list of available services. At the top, you want to ‘Request a certificate’ and enter the different domain names of the website you want to encrypt with SSL. You can use wildcards as described in the examples on the page, so for this site, I set the domain names to www.davidmeents.com, davidmeents.com, *.davidmeents.com. Then click ‘Add another name to this certificate’ followed by ‘Review and request.’ Lastly, click ‘Confirm and request’ and on the next screen read the warning and click ‘Confirm.’

You’ll shortly receive an email sent to all the emails listed in the WHOIS information for your domain. Follow the directions in the email to confirm that you requested the SSL and you now have a certificate!

Add SSL with AWS Classic Load Balancer

Amazon Web Services makes the process to add SSL super easy. We’ll be putting a classic load balancer in front of the WordPress instance (created with EC2), and we’ll route traffic through it. We’ll offload the certificate at this presentation layer and then send the secured user down into our website. While provisioning the SSL through Amazon is free, creating the Load Balancer will incur typical charges based on usage at the rate of 2.5 cents an hour. This comes out to about $20 a month – which is steep for just providing SSL. However, the load balancer has lots of other tricks for making your application or website highly available. I would recommend looking into them to get the most value out of your ELB.

Provisioning the Load Balancer

First things first, go to your EC2 dashboard, select ‘Load Balancers,’ and click ‘Create Load Balancer.’ You’ll then have to make a decision between an ‘application load balancer’ and a ‘classic load balancer.’ Despite the message that the ‘application load balancer’ is preferred for HTTP/HTTPS, we’re going to use a classic load balancer for this tutorial. Using an application load balancer would require us to offload the SSL certificate inside our application, which would be WordPress; I don’t know about you, but I’d rather not deal with that. Select ‘Classic Load Balancer.’ The ELB will handle the certificate and then pass the encrypted user down into our WordPress app with minimal effort on our part.

Load Balancer Basic Configuration

On the next screen, set the initial configuration for our load balancer. Give your ELB a friendly name that ideally matches the convention you’ve been using for your different architecture. For example, my EC2 instance for this website is called dpm-wordpress, so I named my ELB the same thing. By adding similar tags to all related resources, we keep everything unified and make our lives easier. Next, leave the ‘Create LB Inside’ setting at the default ‘My Default VPC.’ Leave the other options the same.

Lastly, on this page, we want to add a ‘Load Balancer Protocol.’ Click Add, and in the protocol dropdown select ‘HTTPS (Secure HTTP).’ After selecting this, it should auto-populate the load balancer port to 443 (the secured port used for HTTPS instead of 80). Leave the ‘Instance Protocol’ and ‘Instance Port’ set to their default options (HTTP and 80) and click next.

Setting the Security Groups and SSL

On the next screen, you’ll select your security group that you assigned to your WordPress EC2 instance. For the sake of thoroughness, your security group should allow inbound traffic through the TCP Protocol on the Types and Ranges: HTTP / 80, SSH / 22, and HTTPS / 443.

After selecting your desired security group click next and on the next screen you’ll be setting the SSL certificate. In the ‘Certificate type’ section select ‘Choose an existing certificate from AWS Certificate Manager (ACM).’ Then (naturally) select the certificate you just created. In the last section leave the ‘Cipher’ settings to the defaults.

Add your WordPress instance and Create a health check

On the next page, we create the health check for our load balancer. The health check will ping our WordPress website on the port and protocol that we specify. Select ‘TCP’ from the ‘Ping Protocol’ dropdown and change the port to ‘443’. As we’re only load-balancing a single WordPress instance, I upped the ‘Interval’ to 60 seconds and left everything else at their default values. Next, select the EC2 instance where you have your WordPress website installed, and leave the ‘Availability Zone Distribution’ at its defaults. Give your load balancer a tag called ‘Name’ and pop in your naming convention one more time for good measure. This website’s name tag is of course ‘dpm-wordpress’. After a quick review of all the settings, click Create!

Note: your classic load balancer is going to fail its health check and mark your instance status ‘OutOfService’. It fails because we haven’t finished configuring WordPress or your DNS to accept encrypted traffic sent from the 443 port.

Configuring WordPress for HTTPS sent from a Load Balancer

The next step to add SSL is arguably the most difficult – but we’re not going to have any problems. You’re almost there! The first thing we want to do is FTP into our WordPress instance where we’ll be editing the .htaccess and wp-config files. If you’re using the Bitnami WordPress installation (which I highly recommend!), you can find these files at /opt/bitnami/apps/wordpress/htdocs.

Always take the proper precautions when editing these critical WordPress files. Create backups!

Add an Apache Rewrite Rule to WordPress

Save your progress: following this tutorial any further will cause your website to stop functioning properly until the remaining steps are completed. I recommend reading the rest thoroughly before making these changes. Should you run into any issues, you can just undo these steps, and everything should return to normal.

Open up your .htaccess file first and let’s add in the rewrite rule we’re going to use to direct all inbound traffic to ‘HTTPS’. Because our load balancer takes all traffic on port 443, offloads the encryption, and then sends it down to our WordPress instance on port 80, we want to ensure that WordPress continues to honor the ‘HTTPS’ protocol and exclusively use only that. At the very top of this file, before anything else, add the following rule. Be sure to replace ‘your-domain’ with your actual domain name.
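The rule itself is missing from this copy of the post. A typical version for a site behind an SSL-terminating ELB keys off the X-Forwarded-Proto header the load balancer adds; treat this as a sketch and substitute your own domain:

```apache
# Redirect any request that did not arrive at the ELB over HTTPS.
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://your-domain.com/$1 [R=301,L]
```

Checking the forwarded-protocol header (rather than %{HTTPS}) matters here, because by the time the request reaches Apache it is plain HTTP on port 80.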

Easy as that, you’ve just added an Apache rewrite rule that takes all traffic and directs it towards the HTTPS URL.

Force SSL on the admin pages

We also want to encrypt and use HTTPS on the WordPress admin pages. To do so open up your wp-config file and add in the following code underneath the ‘MySQL settings’ – but be sure to keep the code above the comment that says ‘stop editing’.
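The code block didn’t survive in this copy of the post. The widely used pair of statements that matches the description that follows (force SSL on the admin pages, then trust the load balancer’s X-Forwarded-Proto header) looks like this; treat it as a sketch:

```php
/* Force SSL on the WordPress admin pages. */
define('FORCE_SSL_ADMIN', true);

/* The ELB terminates SSL and forwards plain HTTP on port 80, so trust its
   X-Forwarded-Proto header to avoid an endless HTTPS redirect loop. */
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}
```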

Two different things are happening here. The first is that we’re telling WordPress to use SSL for the admin pages. The next command prevents us from getting stuck in a never-ending redirect loop. If you remember, we are taking all traffic to our website through 443, offloading our certificate, and then pushing it down into our WordPress application on port 80. Then, since we’re forcing SSL, WordPress tries to redirect us back to port 443. And so on, and so on. This second command checks the request headers to see whether the traffic already came in over HTTPS, which stops the redirect loop.

Lastly, a little bit further down we want to change the default WordPress site URLs. Sometimes this can be done from the admin control panel, but it’s just as easy to change it here in the wp-config file since we have it open. Scroll down to the appropriate section and just add in the ‘s’ in front of ‘HTTP,’ like so:
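The snippet is missing from this copy; the constants in question are the standard WordPress URL defines, now with the ‘s’ added (substitute your own domain):

```php
define('WP_HOME', 'https://your-domain.com');
define('WP_SITEURL', 'https://your-domain.com');
```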

Save all of your changes and let’s jump back into our Amazon console to make some DNS changes.

Updating your DNS to point to the Load Balancer

At this stage, if you try to access your website you’re going to get a nasty error saying that your site isn’t secure. This error is because we are redirecting all of our traffic to ‘https’ without sending along the SSL certificate. So what we need to do now is point our DNS records to our load balancer, which will then take all the traffic, offload its SSL certificate, and pass it along properly to our website. We’re going to assume you’re using Amazon’s Route 53 for this task as well. However, the process should be similar to any DNS service.

Go to Route 53 and select your WordPress site out of the ‘Hosted Zones’. You want to update your ‘A’ records to point to the load balancer we made, and luckily for us, Amazon makes this ridiculously easy. Select the A record and then change the Alias from ‘no’ to ‘yes’. You should be presented with a list of load balancers – just select the right one. Be sure to update all of the A records you have. The DNS changes should only take a few moments but could take up to an hour.

That should be it! Your load balancer should be passing its health check, and your website should be secure. If you’re still having issues with your connection not being private, try emptying your cache, retyping the domain name, or waiting a little longer. If that’s not working – leave some details in the comments below, and we can try to get it working!

Last steps and getting SEO ready

There were some small things I did after updating my domain to HTTPS. Since I was mostly concerned about any SEO benefit (at this time) I wanted to make sure that Google was aware of my now nicely encrypted website. What I found out was that in the Search Console you need to have all variations of your site listed. For example, ‘http://www.davidmeents.com’ and ‘http://davidmeents.com’. This requirement means that you need to also add the ‘HTTPS’ versions of these two. Your WordPress site should have four properties when complete.

You’ve successfully gone through all the steps required to add SSL! I spent a good amount of time compiling this and getting stuck with redirect loops at every turn when I did it the first time. So hopefully I saved you some time, or solved your frustrating problems! Either way, please let me know in the comments if you have questions.

How to set up the simplest scalable ‘hello world’ mocha-chai test ever

Hello, everyone! I’m sorry it’s been such a long time since the last post – while trying to move (again) I’ve been busy working on a number of projects that are beginning to take off. Today, though, we’re going to be spending a few minutes getting a really simple mocha chai test working on your project. The goal of this little tutorial is to keep it as simple as possible while giving it the ability to scale with your app. We’ll set up a test-helper file that will aggregate all of your tests, and then we’ll write our first test to start working out bugs!

I’ll be assuming that you have your own Node.js app ready to test. If you don’t, you can jump on into this tutorial and get one together in no time.

Install Mocha and Chai

As always, start by installing the dependencies we’re going to need and save them to your dev dependencies in your package.json file:

npm install --save-dev mocha chai

Create the test-helper

Next, you want to make a directory called ‘tests’ in your app’s root, and then create two files named test-helper.js and tests.js inside of it. Here’s where we’ll be creating our first mocha chai test. Your file structure should look like this:
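Roughly, the relevant part of the project should look like this (a sketch):

```
your-app/
├── package.json
└── tests/
    ├── test-helper.js
    └── tests.js
```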

Open up your test-helper.js file and we’ll start adding some pieces to it. This file is what’s going to aggregate all of our tests into one location so that our application can easily grow in complexity over time.

Start by setting the NODE_ENV='test' and then bringing in the required functions:

Here we see our first use of describe from Mocha. Describe allows us to define what the tests are going to be doing in this instance, then it executes the function that will call in our test. Inside our describe function we call the function we created, importTest, and pass in the name of our tests file and its location. Using this method we can import multiple files of tests to keep everything organized and broken into components.
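Putting those pieces together, a minimal test-helper.js might look like this (a sketch — mocha provides the global describe when it runs the file):

```javascript
// Let the application know it is running under test.
process.env.NODE_ENV = 'test';

// Wrap each required test file in its own describe block.
function importTest(name, path) {
  describe(name, () => {
    require(path);
  });
}

describe('All tests', () => {
  // Add one line per test file as the app grows.
  importTest('basic tests', './tests');
});
```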

Write a simple hello world mocha chai test

Now that we have our helper, let’s put together a really simple test. Open up the tests.js file we created earlier and let’s import the necessary items.

With each of our tests we want to start by describing the test, so in the first parameter, you can give your test an easy name to recognize. Then, we can write the test itself. Sorry to disappoint, but writing big and complex tests is a topic for another day.
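A minimal hello-world test with chai might look like this (a sketch):

```javascript
const { expect } = require('chai');

describe('A simple hello world test', () => {
  it("confirms that 'hello world' is a string", () => {
    // The simplest possible assertion to verify the setup works.
    expect('hello world').to.be.a('string');
  });
});
```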

Lastly, we do need to add a little quality of life script to our package.json file:
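The script just points npm at our helper file (assuming the tests directory above):

```json
"scripts": {
  "test": "mocha tests/test-helper.js"
}
```

You can now run the whole suite with `npm test`.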

The importance of testing cannot be overstated. In a lot of instances, single/independent developers, and sometimes entire teams, get tunnel vision looking only at features and the end of the sprint they’re on. Don’t let that happen to your project! Get in there and start your testing – because it’s only going to get more difficult as your app gets bigger! Luckily, it couldn’t be easier writing a mocha chai test.

Thanks for reading today’s rather short article. I’m looking forward to talking with you guys so please leave your comments below! Give me some tips if you have them, point out my bugs, or ask your own questions. Either way, we’ll both get better from the knowledge and experience of others!

Journey into React Part 6: Managing state and connecting to an API with Redux and Axios

Our React application is coming along nicely. We’ve got a basic front end all set up and we’re ready to make a call to the test API endpoint that we created in the last article. This tutorial is going to focus on setting up and using Redux as well as Axios so that you can make a call to that server. We’ll walk through a very simple reducer, touch on actions and action types, and finally get some information from our server and display it to our clients.

When you’re finished today, your application will have a link that triggers an action, which asks our server for a string and then uses the updated application state to display the message to the client. A very similar process will be repeated numerous times going forward as we build features into our project. The procedure will often vary in complexity, so getting a good understanding of the basics is very important.

You can follow and view the complete source code for this tutorial here!

Installing new dependencies and creating directories

We’re going to be using quite a few new libraries today so first things first let’s get them installed and talk a bit about what they do.

npm install --save axios react-redux redux redux-thunk

axios – allows us to make XMLHttpRequests to our server. It also supports promises, which are crucial to making the powerful and dynamic applications that we have in mind.

redux – a predictable and simple way to manage your application state. It has a small API and allows for some incredible and powerful features like time-travel debugging, hot reloading, and more.

react-redux – brings in the React bindings that we need to use Redux in our application. By default, Redux is not built for React, or any framework really.

redux-thunk – middleware that allows our action creators to return a function, delaying the dispatch of an action, or dispatching only if certain conditions are met.

Next, before we can get started let’s lay the groundwork for some new files we’ll be creating. In your src directory create both an actions and a reducers folder. We’ll leave them empty for now.

Pit stop: how Redux and React work together

I found this great diagram below that visualizes the way that Redux flows and interacts with our application. Wrapping your head around what we’ll be creating next can be tricky, so let’s start with understanding what it is we’re making.

As you can see the application’s front-end consists of 5 parts:

The UI – This is what the client sees and interacts with. When they do something in the application (like update their username) it triggers an action.

Actions – The action then does whatever it’s designed to do. Afterward, it sends the results to the reducers. In this example, it’d make an API request to update the username and then receive a confirmation message.

Reducers – Once the action has completed, it dispatches that information to the reducer by declaring an “action type” and attaching a payload. The action type tells the reducers what has happened, for example UPDATE_USERNAME, and the payload would be the confirmation message.

Store and State – The store contains the application state, which now includes the confirmation message. Since the state has changed, the message is rendered back to the UI, letting the user know it was successful.

Now that we’ve got this out of the way let’s go ahead and start wiring up our actions and reducers.

Putting together the reducer

Let’s start by creating the reducer we’re going to use for this example. Using a bit of forward thinking, we’re going to call this reducer our auth_reducer and in the next tutorial, we’ll use it to handle our login and authorization actions. For today it’s just going to work with a test action.

Create your first action-type by creating the file types.js in your actions directory. In this file we are simply going to export a constant that describes the action we’ll be performing:

export const TEST_ACTION = 'test_action';

After you’ve saved your types.js file, create another one titled auth_reducer.js in your reducers directory and open it up. Now we need to import our action-type that we just made:

import { TEST_ACTION } from '../actions/types';

Next, define the initial state that will be stored. In this case, the message will be empty:

const INITIAL_STATE = { message: '' };

We’re only going to be receiving the “hello world” message from our API today, so our initial state object will only consist of a single item. Lastly, we export a function that contains a switch statement and returns the state.

This function takes our initial state and the action and tests to see what action was performed. If it’s our currently uncreated TEST_ACTION it updates our state with the payload from our API.
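As a standalone sketch (written in plain Node style here so it can run on its own; in the app you’d import TEST_ACTION from '../actions/types' and use export default instead):

```javascript
const TEST_ACTION = 'test_action'; // normally imported from '../actions/types'

const INITIAL_STATE = { message: '' };

function authReducer(state = INITIAL_STATE, action) {
  switch (action.type) {
    case TEST_ACTION:
      // Merge the API payload into a brand new state object.
      return { ...state, message: action.payload };
    default:
      // Any action we don't recognize leaves the state untouched.
      return state;
  }
}

module.exports = authReducer;
```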

Combining reducers and exporting them

To finalize setting up our reducer, we need to import it into a new file titled index.js inside the reducers directory. This file collects all of our reducers into a single location, combines them with the combineReducers function, and exports them as the ‘root reducer’.
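A sketch of reducers/index.js (the auth key is the name we’ll reference from our components later):

```javascript
import { combineReducers } from 'redux';
import authReducer from './auth_reducer';

// Every reducer is combined here under its own key in the app state.
const rootReducer = combineReducers({
  auth: authReducer
});

export default rootReducer;
```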

Create the action that will send information to the reducer

Now we need to make the action itself and tell it to dispatch to the reducer. Create a new file titled index.js in the actions directory and import axios (to make our HTTP requests) and the action-type we made. Also, define a constant that will contain the root API URL we set in our last tutorial. This will make our axios calls smaller and easier as our API routes get more complex.

Axios is simple enough to set up, and we’ll be using promises to dispatch the action to the reducers after the get request has been completed. We’ll also include a quick catch to handle any errors we might get.

As you can see, we use axios.get to specify the request. This function takes, at a minimum, the URL parameter, which should point to the route we want to call on our server. .then takes the response and uses an arrow function to dispatch the type and payload to our reducers. .catch handles any errors we may run into.
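Assembled, actions/index.js might look like this (a sketch — the API_URL value is an assumption and should match wherever the server from the last part is listening):

```javascript
import axios from 'axios';
import { TEST_ACTION } from './types';

const API_URL = 'http://localhost:3090'; // adjust to your server's address

export function testAction() {
  // redux-thunk lets us return a function that receives dispatch.
  return function (dispatch) {
    axios.get(`${API_URL}/`)
      .then(response => {
        // Hand the server's message to the reducers.
        dispatch({ type: TEST_ACTION, payload: response.data.message });
      })
      .catch(error => {
        console.error(error);
      });
  };
}
```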

Creating a link to trigger the action

Open up the Dashboard file you created previously so that you can call the action we just made. Make sure you import your testAction at the top.

import { testAction } from '../actions/index.js';

Create a new method that will call the test action you just imported:

handleClickHello() {
this.props.testAction();
}

Now add a link to the render method that will call handleClickHello on click.
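Inside the component’s render method, something like this will do (the surrounding markup is illustrative):

```javascript
render() {
  return (
    <div>
      {/* Clicking the link fires testAction through handleClickHello */}
      <a onClick={this.handleClickHello.bind(this)}>Knock knock</a>
    </div>
  );
}
```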

Connect your component to the application state

It’s important to give our Dashboard access to our store and make calling data from it easy. So let’s go ahead and do that now. Import the connect function from react-redux into your dashboard file.

import { connect } from 'react-redux';

Now at the bottom, outside the scope of our component, we’re going to create a function that will map our application state to the component’s props.

function mapStateToProps(state) {
return {
auth: state.auth
};
}

We only need to map out the auth state (from our src/reducers/index.js file) because, well, that’s all we have. Lastly, we need to connect our component to our store:

export default connect(mapStateToProps, { testAction })(Dashboard);

Be sure to remove the export default from our component so that it reads:

class Dashboard extends Component {
//component
}

Lastly, let’s throw in a reference to the message itself inside the component so that we can see whatever message is returned to us from the server. You can now manipulate and call the message like any other variable in your component, referencing it through this.props.auth.message.

Supplying the provider and store to our application

The final thing we need to wire up now is our application itself. We need to create the store which will hold our reducers, and we also need to wrap our app in the Provider component from react-redux. We need to import all these new libraries, so let’s start there. In your src/index.js file update your dependencies to reflect this:
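A sketch of src/index.js wiring it all together (the component path and the mount-point selector are assumptions based on the earlier parts of the series):

```javascript
import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import { createStore, applyMiddleware } from 'redux';
import reduxThunk from 'redux-thunk';

import App from './components/app'; // your root component
import reducers from './reducers';  // the root reducer we exported

// Build the store from our combined reducers and wire in redux-thunk.
const store = createStore(reducers, {}, applyMiddleware(reduxThunk));

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.querySelector('.container') // match your index.html mount point
);
```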

Testing your project

We’ve done a lot of work here today. Let’s see if it has all come together. Start both your server and your client application, and navigate to http://localhost:8080/ (the dashboard). Now, as long as everything is set up correctly, when you click the “knock knock” link you should see the “hello world” message that our server returned appear below it. Congratulations, your client is talking to your very own API!

State of the tutorial series

This segment is a major turning point in the series. We’ve covered most of the fundamentals of creating your own project, including the basics of working in React, making your own RESTful API, and managing application state with stores and Redux. You could, for all intents and purposes, take the information you’ve learned in the last 6 articles and create an entire React-based website or app. So, as we move forward, the topics of discussion will be more advanced, covering things such as JWT authentication and MongoDB management. We’ll move quickly through creating new components, reducers, routes, and actions. But if you have any questions you can always ask in the comments or refer back to these posts.

I am really excited to see the outcome of our app, and I expect there to be another 4 or 5 parts before we see the final product come together. I hope to see you there, and thanks for reading!

How to Create a React.js Support Ticketing System Using MongoDB

In the last article I wrote we talked about how to create a Redux-Form inside a React/Redux application. Now I want to put that form to work for us and create a simple support ticketing system using a Node.js/Express server and Mongoose to talk to a MongoDB. This will get us some good working knowledge of how to use Redux-Form, but more importantly, we’ll work with Mongoose and APIs in general.

Warning: This tutorial only covers specific elements of the MERN stack process. As such it won’t go into much detail on actions, action types, reducers, or even Redux and React. What this tutorial will do is show you how to use Mongoose to save information into a MongoDB and use Axios to send information from your client to your API. If that’s alright with you, let’s get buckled in!

What do we need to get started?

You’re going to need both an API and a Client side application for this project. You can get started with creating your first API with this tutorial and you can also break into a simple “hello world” React app with this one.

You are also going to need a MongoDB to work with. You can set one up locally, but I would recommend heading over to mLab where you can create a free sandbox to play with.

Defining our Mongoose Schema

I personally like to start new features on the server side. This gives you a good overview of how you need/want your data structured and also tells you the API endpoints to send that data to. First things first, we need to define a schema for our tickets. A Mongoose schema defines the structure of the documents that we want to store in our collection. Open up a new file, models/tickets.js. Inside this file, we want to import some basic dependencies, define them, and then export the TicketSchema.

This should be pretty self-explanatory, but if not you can view the official docs here that do a good job of explaining it. Now we actually want to put in some information that we want to store in this document. At its simplest, you’ll define the “fields” in your document as an object and declare a type. We also want to make sure that we set a required property for things like the user’s email and their message. I’ve gone ahead and populated a schema for this tutorial:
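A populated schema along these lines is typical (a sketch — the field names here are assumptions matching the form we’ll build):

```javascript
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const TicketSchema = new Schema({
  name: { type: String },
  email: { type: String, required: true },
  message: { type: String, required: true }
}, {
  // Let MongoDB track createdAt / updatedAt automatically.
  timestamps: true
});

module.exports = mongoose.model('ticket', TicketSchema);
```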

The timestamps parameter I specified at the end is a nifty little tool that will store information like when the ticket was created, and when it was modified. It’s all handled directly inside MongoDB.

Creating a controller to add the ticket to the database

Our API endpoint is going to point to a function inside a controller file that will save the data received from our client into our MongoDB. Go ahead and create a controller for our tickets, controllers/_ticket-control.js. Inside here we will bring in our Tickets schema and start working with the data we plan to receive from our client. We’ll also set this controller to ‘use strict’ so that it can catch some common coding bloopers, as well as allow us to use let, a block-scoped alternative to var.

First thing inside our function, we simplify the req data that we received by saving the fields we anticipate being in it to variables for later use.

You’ll notice that we then incorporate some error handling to make sure that the client is sending all of the information that is going to be needed. Should these conditions fail to be met, the function simply stops, returns a 422 error, and describes what was missing.

Next, we define a new ticket and fill in the data. With this new ticket object we use the Mongoose method .save to save it into our database. Some more error handling follows in case something should go wrong, but if it’s a success we send a 201 and a success message.
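Putting the whole controller together, a sketch might look like this (the function name and response messages are illustrative):

```javascript
'use strict';

const Ticket = require('../models/tickets');

exports.createTicket = function (req, res) {
  // Pull the fields we expect out of the request body.
  const name = req.body.name;
  const email = req.body.email;
  const message = req.body.message;

  // Reject the request if required information is missing.
  if (!email) {
    return res.status(422).send({ error: 'You must provide an email.' });
  }
  if (!message) {
    return res.status(422).send({ error: 'You must provide a message.' });
  }

  // Build the ticket and save it into MongoDB via Mongoose.
  const ticket = new Ticket({ name, email, message });
  ticket.save(function (err) {
    if (err) {
      return res.status(500).send({ error: err });
    }
    res.status(201).send({ message: 'Ticket received!' });
  });
};
```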

Configuring Mongoose with our MongoDB

We’re going to take a side step here to set up mongoose so that it can actually communicate with our database. It’s pretty simple and only requires adding a couple of lines to your server/index.js file. On your mLab dashboard, you can select your database, and view the connection information. You may need to create a user for the collection. This connection information will include a username, password, and database location and name. Copy those and save them into your server/index.js.

Make sure to use strong security methods to secure your database for production, like using environment variables and storing the reference to them in a separate file. For the sake of this tutorial though I’ve included it all into our server/index.js file.
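The connection itself is just a couple of lines in server/index.js (the connection string below is a placeholder in the mLab format — substitute your own credentials, and prefer environment variables in production):

```javascript
const mongoose = require('mongoose');

// Placeholder mLab-style URI: mongodb://<user>:<password>@<host>:<port>/<db-name>
mongoose.connect('mongodb://username:password@ds012345.mlab.com:12345/your-db-name');
```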

And with that, our API is all set up to receive our new ticket from a troubled user!

Creating a form and sending the data to an action

Now it’s time to set up a simple Redux-Form on our client side. Create the file client/components/ticket_form.js and pass the form’s properties into the function that calls our action. We’ll start making this action, called submitTicket, next.

Creating an action to send our form data

Let’s create an action handler that will send the formProps to our API endpoint. Inside your client/actions/index.js file (or wherever you’re storing your action creators) let’s build out a post request with Axios. Make sure that your action types file has the appropriate actions available and import them so that your action can dispatch them.

We use Axios here to communicate with our server. By defining the API_URL and directing it to the route we created earlier we are able to pass data to the function that will save this data in our database. The Axios post request sends an object containing the name, email, and message. Then when it receives the 201 response from our server it dispatches the action type and passes the payload to our reducers to update the app state with the success message.
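The action creator might be sketched like this (API_URL and the route path are assumptions — point them at the endpoint you created on your server; CREATE_TICKET stands in for whatever action type you defined):

```javascript
import axios from 'axios';
import { CREATE_TICKET } from './types';

const API_URL = 'http://localhost:3000'; // adjust to your server

export function submitTicket({ name, email, message }) {
  return function (dispatch) {
    // Post the form properties to the tickets endpoint.
    axios.post(`${API_URL}/tickets`, { name, email, message })
      .then(response => {
        // On the 201 response, hand the success message to the reducers.
        dispatch({ type: CREATE_TICKET, payload: response.data.message });
      })
      .catch(error => {
        console.error(error);
      });
  };
}
```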

All done!

This tutorial covered a lot of topics pretty quickly. You created a schema that defined what your support tickets were going to look like, then you allowed an API call to save a new ticket into your Mongo database. You also created a connection between your client and your server with Axios that sends your Redux-Form properties. All in all a pretty good day!

This concept can be adapted in a number of ways to handle various situations, like saving just about anything to your database. Using this method to save user credentials is not recommended, though, as it does not hash or encrypt their passwords. It is nonetheless an efficient way to handle simple (non-real-time) messages, events, contacts, or support tickets.

Thanks for reading, if you have questions don’t be afraid to ask below. Your feedback is appreciated as well so that I can continue to improve these tutorials. If you have any suggestions for topics, please leave those as well!

How to Create a Redux-Form with Validation and Initialized Values

You want your React app to have an intelligent form that validates the data being input before sending it off to your API. You want it to be easy and straightforward to implement with the features you’d expect. Features like setting initial values of your fields. Luckily for you, Redux Form makes all that a breeze. Even better, with the latest release candidate (Redux Form 6.0.0-rc.3) and later, you can upgrade to the latest version of React and not have those pesky unknown props errors bring your project to a screeching halt.

This tutorial is going to help you set up a Redux Form that uses the latest syntax, and how to get that form set up with some simple validation and initial values. We’ll be pretending that we’re creating an “update user information” form for a hypothetical application. The form is going to have access to actions to make submissions, and we’ll work under the assumption that we’ve stored our user information in the user reducer.

Install the Redux Form 6.0.0 release candidate (or later)

We’re going to be using the latest Redux Form version (at the time of this writing), 6.0.0-rc.3, for this tutorial. This is because with the changes made in React 15.2 (and later), standard releases of Redux Form don’t work without throwing a plethora of errors. Furthermore, the syntax of Redux Form has been modified to streamline and simplify the process of creating and managing your forms. There’s no point in learning a syntax that’s going to be obsolete as soon as the release candidate is published, so let’s get a head start!

Open up your console and use NPM to install the Redux Form release candidate:

npm install --save redux-form@6.0.0-rc.3

Our tutorial will also be dependent on react and react-redux, so make sure you have the latest versions of those installed as well.

Setting up your component

Opening up a blank document we want to import the dependencies we’re going to need to create our form. The form we’re making is connected to our application state, and will have an awareness of the actions we’ve created elsewhere in the project. This allows our form to send values to an API or another service directly.

You’ll notice that we brought in our application state (user) and set it as props at the bottom. This is going to allow us to initialize our form with data that’s already defined in our state.

Defining your form

The first thing we want to do is define our form. So just underneath our dependencies, outside of the scope of the component, add the following:

const form = reduxForm({
form: 'ReduxFormTutorial'
});

Handling the validation of our form can get messy if we do it “inline” as part of the render function of our component. So to clean that up and make it reusable (plus easier to manage), create a const that returns the input and logic for any errors that our field receives:
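One common shape for that helper looks like this (a sketch using redux-form’s Field props; the markup is illustrative):

```javascript
const renderField = ({ input, label, type, meta: { touched, error } }) => (
  <div>
    <label>{label}</label>
    {/* Spread redux-form's input props onto the real input element. */}
    <input {...input} type={type} placeholder={label} />
    {/* Only show an error once the field has been touched. */}
    {touched && error && <span className="error">{error}</span>}
  </div>
);
```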

Now we need to define the redux form required property handleSubmit inside the render function of the component. Without this, the form simply will not work at all, and you’ll get a bunch of ugly errors.

const { handleSubmit } = this.props;

It’s time to start marking up our form! Inside the render() function, return the following example form. Make sure to use the <Field/> component imported from redux-form in place of <input />.
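A sketch of such a form (the field names and the submitFormAction action are illustrative and should match your own):

```javascript
render() {
  const { handleSubmit } = this.props;

  return (
    <form onSubmit={handleSubmit(this.props.submitFormAction)}>
      <Field name="firstName" type="text" component={renderField} label="First name" />
      <Field name="email" type="email" component={renderField} label="Email" />
      <Field name="phone" type="tel" component={renderField} label="Phone" />
      <button type="submit">Save changes</button>
    </form>
  );
}
```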

At this point, your form should function properly – that is, if this.props.submitFormAction were to point to an existing action that we had created. However, our form doesn’t have any sort of validation or initial data prefilled into the fields. And those are nice; we want those.

Initialize your form with data

For our example here we’re pretending that our form handles updates to our user’s information. We wouldn’t want them to have to type in all of their information every time they switch email accounts or phone numbers. So we want the form to initialize with their existing information which can then be altered in whatever way they wish.

Now, we can create our function and define the initial values. Afterward, we call the redux-form property initialize and pass it our data. The object’s keys must correlate with the name properties of our <Field />s above.
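A sketch of that function, run when the component mounts (the user object comes from the state we mapped to props earlier; the keys mirror the Field names above):

```javascript
componentWillMount() {
  const { user } = this.props;

  // Prefill the form with the user's existing information.
  this.props.initialize({
    firstName: user.firstName,
    email: user.email,
    phone: user.phone
  });
}
```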

It’s as easy as that. When the component mounts, it will define our values and push it to the form’s fields.

Adding form validation

The last thing we want to do now is add in some form validation to make sure all the fields have something in them and that it’s of the appropriate format. Outside of our component, we want to define a function called validate that will take our formProps and run them through various tests.
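A sketch of the validate function — the field names are examples and must match the name props of your <Field />s:

```javascript
function validate(formProps) {
  const errors = {};

  if (!formProps.firstName) {
    errors.firstName = 'Please enter a first name';
  }
  if (!formProps.email) {
    errors.email = 'Please enter an email';
  }
  if (!formProps.phone) {
    errors.phone = 'Please enter a phone number';
  }

  // An empty errors object means the form is valid.
  return errors;
}
```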

We’re going to make a quick jog back up to where we defined our form and add one more line to check our properties against our validation criteria. This process will also check our fields against the HTML validation that was defined when we set the <Field /> property type to “email” (for example).

const form = reduxForm({
form: 'ReduxFormTutorial',
validate
});

And there you have it! Your redux form is now set to initialize with values, validate itself before submitting, and then pass its approved values to your action. Redux Form is truly one of the most pivotal dependencies you will integrate into your application, so getting a strong understanding of it is important.

That’s all for now and thanks for reading! Once again, you can view the complete source code here. And if you have any questions, comments, or want to suggest a topic for me write about next, just leave it in the section below!

Journey into React Part 5: Creating a RESTful API with Express

We’re going to be changing gears a bit here with Part 5 of our React.js tutorial and focus now on the back-end of our application. This section will center on creating a restful node api that uses express, cors and body-parser to send JSON responses to our clients. It’ll do this when they send a “get” request to our hello world API. We’ll also create our first controller and first API route to handle these requests!

Part 5 is a little unique in that it does not rely on the previous 4 tutorials, and our server stands completely independent of what we’ve previously worked on. However, the API that we’ll be starting today will be the one that’ll connect with our client application in the future parts of this series. This restful node api will handle our server side authentication and database communication in the future.

If you’re just joining me for this part, then welcome aboard! But, if you want to start creating your own full-stack React applications then I recommend you head over to Part 1 of this Journey into React. Now, let’s get started on our restful node api!

Required global dependencies check-up

There are a few things that are required (and also very convenient to have) that we’ll be using in this tutorial, so let’s make sure you have them installed. If not, we’ll get that taken care of – no problem. We need Node of course (and most importantly), so do a quick check in your console:

node -v

This should return your version number, or tell you that the command isn’t recognized at all. If you get the latter response you can go download node here. Note that npm can’t update Node itself; to upgrade an existing Node build, re-run the installer from the Node website or use a version manager such as nvm.

After a moment that should be taken care of, test again to make sure it installed successfully. Next, repeat this process for nodemon, a handy tool we’ll use to automatically restart our server anytime we make a change to it:

nodemon -v
npm install -g nodemon # if not present

Restructure our directories

Before we get started with our restful node api we have some cleanup to do. If you’ve been following along through this tutorial from the beginning you will want to follow this section. If you only care about getting your express server to work, you can skip this part.

Since we’ll be creating a full-stack application that needs both a server and a client to work, we need to reorganize our project directory so that it’s a bit cleaner and can accommodate this. Open up your “journey-into-react” project directory (in your file explorer or your Atom editor) and create an empty folder called “client”.

Next, on your console, you want to uninstall the dependencies from the client-side application. It’s easier to do this from the console because the file paths inside node_modules are often too long for Windows to simply “delete” them. You could also try to move the node_modules directory into the “client” folder we just made, but I’ve had issues with that in the past. So, run npm uninstall followed by the dependency names from your package.json and hit enter.

NOTE: If you are running the newest version of Windows 10, Microsoft has finally removed the restriction on file path lengths that prevented us from simply “deleting” the node_modules directory. You can skip all this unnecessary npm uninstall stuff and simply right-click and delete your modules.

That’s the hardest part, I promise. Next, we want to move our src, index.html, package.json, and webpack.config.js into the “client” directory as well, and then we’re all set. You will need to cd into client and run npm install again to use your application in the future.

Create a new directory in your ‘journey-into-react’ project folder called “server” and let’s get coding!

Initialize our project and get the basics installed

Much like we did in the first part of this tutorial (ages ago!) we need to run npm init and follow the prompts on screen to get our package.json file created. Next, we install the dependencies that we need:

npm install --save body-parser cors express

Make sure you add the --save flag so that these dependencies get added to your package.json file. Open up your package.json file, and under “scripts” add a custom run script to save us some time and automatically start our server with nodemon:

I’ve added a console log there so you can verify that it’s working. Go ahead and start your server with npm run dev (the custom run script we wrote). With any luck, you’ll see a little message in your console! Press ctrl-c twice to shut off your server.

Creating our first controller and function

Now that we know the server is working, let’s start making it do stuff. Create a new folder in the server root directory named “controllers”. Here you will store all the functions that will be called when your client-side application makes a request to your API. We want to start with an easy one, so make a controller named _our-controller.js. We’ll likely change the name later, but for the sake of this tutorial let’s roll with this.

As of now, _our-controller.js doesn’t require any dependencies, so we get to jump right into making our hello world function. Every function that we write here is going to take three parameters: req, res, and next, which represent the request, the response, and the next middleware function in the chain. Since this simple function isn’t going to take any values, we only need to worry about returning a response when it’s called.

It may sound like a lot, but really it’s quite simple. We’re exporting a function called helloworld, taking an empty request, and responding with the message “Hello World!”.

You’ll remember we installed and included body-parser in our index.js file. Strictly speaking, express’s res.json handles sending our JSON message; body-parser is what will let our restful node api read JSON out of incoming request bodies. Both it and cors will become increasingly important as our restful node api exchanges entire objects of information with our clients.

Using a route to point to our hello world function

The last key part we need to add to our server is a router file. Create one titled router.js in the server root directory. We need express and our controller in here so import those:
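The import/setup snippet is missing from this copy; here is a sketch of what it likely contained (the /api prefix matches the URL we test with later, but treat the specifics as assumptions):

```javascript
// server/router.js
const express = require('express');
const _ourController = require('./controllers/_our-controller');

module.exports = function (app) {
  // Group every endpoint under the /api prefix
  const apiRoutes = express.Router();
  app.use('/api', apiRoutes);

  // route definitions go here
};
```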

Now let’s write up our route. It’s going to use the apiRoutes we defined above, and we want to use a get method. We’ll talk more about the different HTTP methods in a later tutorial. We need to define a path for our route, and then tell it what function to call when that path is accessed:

apiRoutes.get('/helloworld', _ourController.helloworld);

Once again, it sounds much more difficult than it is. Now that this is complete, we can save and move on to wiring it all together.

Wiring in the router and testing it

Let’s open up our index.js file again and add two more small things. We need to import the route we just made so add that near the top:

const router = require('./router');

And we want the router to take our App as a parameter, so just below app.use(cors()); add in:

router(app);

We should be all set now to take our restful node api out for a spin. We can do that with the help of Postman, a handy free tool. Get it installed and your free account set up, then open it up.

Once Postman is open and you’re logged in, you’ll see at the top the option to change your HTTP method and enter a URL. Set the method to “GET” and type in your route’s URL: http://localhost:3000/api/helloworld. You should see a neat little “Hello World!” pop up in the body box below.

You’ve done it! You have made your first restful node api and successfully made a call to it! This is going to open the door to all the amazing things we have planned for our application. Using very similar methods, you’ll be able to push information to a MongoDB database (via Mongoose) and send information back to your clients. You can even use this API structure to make calls to third-party APIs like Facebook and Twitter.

That’s all for today, if you have any questions or comments please leave them below! I value your feedback and look forward to talking with you. Until later, happy coding!

]]>https://www.davidmeents.com/react-node-tutorial-creating-a-restful-api-with-express/feed/1Creating Paginated Tables with React That You can Sort, Filter, and Customizehttps://www.davidmeents.com/creating-tables-with-react-that-you-can-sort-filter-and-customize/
https://www.davidmeents.com/creating-tables-with-react-that-you-can-sort-filter-and-customize/#respondThu, 21 Jul 2016 23:33:04 +0000http://35.167.82.218/?p=34

]]>I recently incorporated my first set of react tables into an app that I’m building. My goal was to display all of the clients that are using my app for the admins to manage. I wanted it to have all those nice features, like sorting by column, filtering by name/role/etc, and it had to be fast. Needless to say, I pretty quickly threw out the standard HTML <table> and started looking into React options.

What I settled on was Reactable. It is an incredible piece of work that makes all of those features easy to implement and customize. The hardest part became deciding how to organize my data, not making the table itself work. This tutorial’s goal is to show you how to set up your own React tables and implement sorting and filtering features.

You can find the complete source code for our table component at this gist.

Setting up your work environment

First things first, you need to get Reactable installed into your project:

npm install --save reactable

Then get it imported into your dedicated “table” component; let’s call it sg-teams.
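The import snippet isn’t shown in this copy of the post; it would look something like this (the sg-teams filename is just the name we chose above):

```javascript
// sg-teams.js -- pull in React and the pieces of Reactable we'll use
import React from 'react';
import { Table, Thead, Th } from 'reactable';
```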

Working with data in our Reactable

For our tutorial, we are going to build by hand the array of objects that we want to show in our table. In my application the table was populated from a database, but the concept is nearly identical. Our example objects are going to be filled with valuable information regarding Stargate teams, like: who is leading the team, what their assignment is, and how many people are on the team. We’ll want to be able to sort this table so that we can identify a prime candidate for a mission. Or, perhaps, we’ll want to just filter our data by “O’Neill” so that we can find SG-1. Come on, we all know who’s really saving the day here.
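The hand-built array itself is missing from this copy of the post, so here is a plausible stand-in; the field names and values are illustrative assumptions, not the original data:

```javascript
// Sample data: one object per Stargate team.
const sgTeams = [
  { name: 'SG-1', leader: "Col. Jack O'Neill", assignment: 'Exploration', members: 4 },
  { name: 'SG-2', leader: 'Maj. Charles Kawalsky', assignment: 'Search and rescue', members: 4 },
  { name: 'SG-3', leader: 'Col. Robert Makepeace', assignment: 'Combat support', members: 4 }
];
```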

The data in our objects (in our array) does not have to be ordered so that all the names come first, then the leaders, and so on. You can just throw in any of your information, or only part of it; Reactable will be able to decipher it and populate the appropriate data into the right columns.
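The simple-table snippet is missing here; rendering the array (called sgTeams in our example) can be as little as this sketch:

```jsx
render() {
  return <Table className="sg-teams" data={sgTeams} />;
}
```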

However, as it stands this table is about as simple as it gets, and our columns have the very untidy titles given to the keys in our array. Right now we have no ability to filter or sort this information, but we can solve this, and the ugly titles, by adding a few parameters to our Table.
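The parameterized Table snippet is also missing from this copy; based on the options described below, it would look roughly like this (the column names follow our example data and are assumptions):

```jsx
render() {
  return (
    <Table
      className="sg-teams"
      data={sgTeams}
      sortable={['name', 'leader', 'assignment', 'members']}
      filterable={['name', 'leader']}
      noDataText="No matching teams found."
      itemsPerPage={5}
      currentPage={0}
    >
      <Thead>
        <Th column="name">Team</Th>
        <Th column="leader">Team Leader</Th>
        <Th column="assignment">Assignment</Th>
        <Th column="members">Members</Th>
      </Thead>
    </Table>
  );
}
```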

This is pretty incredible stuff right here. By simply adding those parameters we get the instant benefit of sorting (numbers, strings, and dates), pagination, and filtering of our data. With the Thead element we can define the column titles to display, and it’s all neatly packaged up.

filterable takes an array listing the column names you want to be able to filter by.

noDataText displays the given string when no data matches the current filter.

itemsPerPage sets how many rows appear on each page of the paginated table.

currentPage is required for pagination; setting it to 0 starts the table on the first page.

sortable is pretty self-explanatory: pass true to sort on every column, or an array of column names to limit which ones are sortable.

<Thead> Short for ‘Table Header’, used to define the Th values (the column titles that get displayed).

With these options included, we get a much more functional React table that’s ready to actually do some work for Stargate Command.

]]>https://www.davidmeents.com/creating-tables-with-react-that-you-can-sort-filter-and-customize/feed/0Journey into React Part 4: Styling your App with Scss and Webpackhttps://www.davidmeents.com/journey-into-react-part-4-styling-with-scss-and-webpack/
https://www.davidmeents.com/journey-into-react-part-4-styling-with-scss-and-webpack/#respondMon, 20 Jun 2016 23:32:41 +0000http://35.167.82.218/?p=32

]]>I’m back! There was a bit of downtime there while I relocated my business (and myself!) to Minnesota, but after much ado – part 4 is here! If you missed the last few segments you can always catch up here. This time we are going to work on styling a react app with some scss, implementing a whole host of new webpack loaders in our webpack.config.js file that will let us preprocess that scss for some slick customization. We’ll be working with the Lemonade grid system to streamline our web design, and then we’ll put it all together to get our navigation bar looking pretty, all the while setting ourselves up for easy improvements down the road. I’m pretty excited to get started on this one with you, so let’s go!

Getting our new dependencies ready

The first thing we want to do in order to style a react app is get our new loaders installed so that webpack knows how to handle the css and scss file types that we’ll be using in our project. So of course, open up your command line and let’s use npm to install the following loaders, saving them to the dev dependencies in our package.json file:
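The install command is missing from this copy of the post; for a webpack 1-era setup like this one, the loader list would typically be something like the following (exact package set assumed):

```shell
npm install --save-dev style-loader css-loader sass-loader node-sass extract-text-webpack-plugin
```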

This shouldn’t take terribly long, and in no time they’ll be installed and ready to go. We now have enough of these loaders to pretty much future-proof our styling efforts on our app, so roll up your sleeves and let’s get down to it!

Update the Webpack configuration to handle css/scss

Next we need to actually equip our webpack.config.js file to test for css/scss file-types. Open up the configuration file and start by importing a new constant at the top of our file:

const ExtractTextPlugin = require('extract-text-webpack-plugin');

Now we need to add a new loader to our module.exports object. Inside the loaders: [ ] brackets, add the new scss test, like so:
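The snippet itself is missing here; in webpack 1 syntax, the new entry in the loaders array would look something like this sketch:

```javascript
{
  test: /\.scss$/,
  loader: ExtractTextPlugin.extract('css!sass')
}
```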

This is basically like saying, “test if a file ends with the extension .scss, then use the extract-text plugin loader with the parameters css!sass.” It’s actually a pretty simple addition, only two lines, and shouldn’t take too long.

Lastly, in our webpack.config.js file we need to add a new parameter to our module.exports object called “plugins”. Plugins are additions to webpack that give it increased capabilities, similar to loaders. Beneath our loaders object you should see an output object; it’s under this that we want to add the new plugins array, which uses the extract-text plugin we imported above to export our processed scss into a single css file. It will look like this:
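The snippet is missing from this copy; matching the app.css path mentioned later in the post, the addition would look roughly like:

```javascript
plugins: [
  new ExtractTextPlugin('assets/stylesheets/app.css')
]
```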

And that’s it for our webpack configuration file. Once again, it is one of the more vague and complex additions we’ll be adding to our application, but hopefully you’ve nailed it. If you have questions please leave them in the comments below! Now, let’s style a react app!

Creating the stylesheets

So our webpack can now read and understand scss, so let’s give it some to work with. We want to keep a nice clean workspace, so add a new directory in our src folder, and title it assets. Inside assets, add stylesheets, then create the files base.scss and navigation.scss and open them.

It’s here that we’ll be adding all of our styling for our application, and then our navigation bar as well. Use your creativity to create the base and navigation styles you’d like, or head to the github repo and copy the simple ones I used for this tutorial.

As this is a React tutorial, not a scss or css one, I’ll skip most of the styling part, and we can get back into how to implement these styles into our app!

Let’s actually style a React app

If you recall, we told webpack to look in our index.js file to find everything it needs to make our application work, then compile it into a single file. So with this in mind, we need to go to our index.js file and let it know to require our newly created base and navigation scss files so that they get included with the bundle.js file that is served to our clients.
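The require lines are missing from this copy; they would look like this (the paths match the stylesheets we just created):

```javascript
// index.js -- make webpack pull our stylesheets into the build
require('./assets/stylesheets/base.scss');
require('./assets/stylesheets/navigation.scss');
```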

We just recently told our webpack configuration file to export our processed scss into a file titled app.css that resides in the assets/stylesheets directory, so now we need to bring that into our index.html, much like you would on any normal website, by including it in the <head> tag:
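The snippet is missing here; the line to add inside the <head> tag would be something like:

```html
<link rel="stylesheet" href="assets/stylesheets/app.css">
```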

Finally, the last thing we want to do is make sure that webpack is reading our scss and exporting it properly, and that it is actually styling our application. So boot up your application with npm run dev and see if it works!

Adding Lemonade Grid to your application

Effectively implementing a grid in your application (or any website for that matter) is an amazing time saver and a great way to style a react app. The last thing I want to do today is bring the Lemonade grid into our project. It’s an incredibly simple and powerful grid system developed by “Life’s Good”. I use it on my business’s website, and now on this project as well. So first we need to install a new dependency:

npm install --save lemonade-grid

Next you’ll need to download the stylesheet, which you can get here. Add this stylesheet to your assets/stylesheets directory, and import it into your index.js file like we did before:

require('./assets/stylesheets/lemonade.scss');

Now that we have this, we can easily incorporate the grid system into our application by defining classnames that use the bit- system that is defined in the Lemonade docs.

In our navigation.js file, add className="frame" to the uppermost div. Inside this container (the navigation container), add a new header element with the title “Journey into React”. This should be above the unordered list we created last time. Give both your header tag and your unordered list the className="bit-2". This is essentially saying that you want the title to take up half of the navigation bar, and you want your links to take up the other half.
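Put together, the navigation markup described above would look roughly like this sketch (your list items from part three will differ):

```jsx
// navigation.js render output
<div className="frame">
  <header className="bit-2">Journey into React</header>
  <ul className="bit-2">
    {/* the navigation links we created last time */}
  </ul>
</div>
```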

Notice that you have to use className in place of the traditional class attribute you would use in html. That’s because technically we are writing javascript, and class is already a reserved word in javascript. The solution is to use className instead.

This will scale properly on mobile devices, allowing us to bypass implementing a collapsible mobile navigation. Furthermore, starting our grid system out now is going to save us a ton of time down the road because a good structure in our application will make troubleshooting, improving, and scaling it a breeze.

A collapsible navigation menu is still really cool though, and can be done completely in React. I have created a brief exercise on that already if you want to check it out.

How is our application coming?

So, to summarize our project so far: we are creating an application that is going to allow us to manage an online database of our contacts with all the basic controls (i.e., adding, deleting, viewing, calling, etc.). In part one we got our Windows-based work environment set up, and in part two we got a hello world application started from scratch. From there things started getting really interesting: in part three we used React Router to navigate between new pages, or locations, in our app, and today we finally got it to look pretty.

We’ve already got the fundamentals of how our application should look, and we even have some basic controls (navigation), but where do we go from here? Ultimately we want our program to store information in a mongo database, so we are going to need an API to securely communicate with it. Next time we’ll switch gears and start a node.js server API that will communicate and share information with the React application we’ve been working on.

Until next time, happy coding! Please leave your questions, feedback, and comments below!