
Hot Reload in jspm 0.17 (http://blog.tyrsius.com/hot-reload-in-jspm-0-17/, Tue, 15 Mar 2016)

I struggled quite a bit with this one, because understanding the components of hot reloading was very difficult. There are a lot of explanations out there that are light on details, or just downright wrong.

I should have taken Dan's advice from the very next paragraph, because using the hot reload API is much simpler.

The hot reload section of the new jspm beta guide shows you how to set up the incredibly simple __reload hook. This is great for simple React applications that use component state, but it doesn't work for Flux/Redux implementations that store state outside of the components.

To get this working with redux, you need two pieces.

First, you need to be able to hydrate your store. Dan provided the code for this in this GitHub issue.

I have these split into two different modules, since I like to keep my store configuration out of main.js. Organize your code however you like; the important part is that the main.js module exports store, so that it is available to the __reload hook. Remember, the m argument to the __reload hook is the previous module. The scope of the __reload hook is the current/new module.
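Put together, the idea can be sketched roughly like this. Everything below is illustrative, not the exact code from the post: the reducer, the HYDRATE action name, and the hand-rolled stand-in for Redux's createStore exist only so the sketch runs on its own.

```javascript
// A minimal stand-in for Redux's createStore, so this sketch is self-contained.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}

// An illustrative reducer with a hydration action for hot reloads.
function counter(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT': return { count: state.count + 1 };
    case 'HYDRATE': return action.state; // replace state wholesale on reload
    default: return state;
  }
}

// main.js must export the store so the reload hook can reach it.
const store = createStore(counter);

// The hot reloader calls __reload with the *previous* module as `m`;
// the surrounding scope here belongs to the freshly evaluated module.
function __reload(m) {
  if (m && m.store) {
    store.dispatch({ type: 'HYDRATE', state: m.store.getState() });
  }
}

module.exports = { store, __reload };
```

The key move is that the new module pulls the old module's state through m.store.getState() and feeds it into its own freshly created store.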


JavaScript Tests with Mocha and jspm (http://blog.tyrsius.com/javascript-tests-with-mocha-and-jspm/, Thu, 18 Feb 2016)

It's no secret that I love jspm. I think it does everything right. I think Webpack requires far too much configuration. jspm is also much more standards-oriented, so I expect the patterns I learn and develop to last much longer, which is something I sorely need in JavaScript development.

However, testing it is difficult bordering on silly. I just spent three days trying to get tests to work, and the solution I have for you isn't perfect. It requires a harness file, and it locks you into Babel 5 (for now). I tried several different guides, each getting me close-but-not-quite-there.

Let's get to it.

Setting up dependencies

You will need the following installed by npm, not jspm, since our harness file is run by node.

This will find all of my tests (I like to colocate them with my source files, you may prefer a tests directory), load them into mocha, and start the testing process.

For anything to work correctly though, you probably need to tinker with SystemJS to mock modules or remove any bundles you have configured. I have to mock out modules with browser dependencies, like axios.

This will remove the existing module and replace it with a completely empty one. You might opt to load up Sinon and replace the module with a spy here.
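The mocking looks roughly like this. The registry calls (normalizeSync, delete, set, newModule) follow the SystemJS 0.19 API; the tiny stand-in loader object below is not real SystemJS and exists only so the snippet can run on its own.

```javascript
// Stand-in for the real SystemJS loader, just so this snippet is self-contained.
const System = {
  registry: new Map(),
  normalizeSync: (name) => `app/jspm_packages/${name}.js`, // fake normalization
  delete(key) { this.registry.delete(key); },
  set(key, mod) { this.registry.set(key, mod); },
  newModule: (obj) => Object.freeze({ ...obj }),
};

// Replace a browser-dependent module (axios, in my case) with an empty one.
const key = System.normalizeSync('axios');
System.delete(key);
System.set(key, System.newModule({}));
```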

You can also remove any bundles you have with this line. I have to do this because my bundles and source code are stored differently than they are hosted, and SystemJS operates on the filesystem when run by node.

System.bundles = {}

Save this file somewhere, and setup a test script to call it in your package.json

"scripts": {
    "test": "node tests/harness.js"
}

Writing a test

Writing a test will still require you to load the module with SystemJS, unfortunately. You can't just import it like normal. Babel will transform imports into require calls, which will fail to find the modules since it looks in node_modules instead of jspm_packages.

You'll notice I don't use arrow functions for the mocha before hooks or test suites. Mocha discourages this in its documentation, because the lexical binding of this in arrow functions breaks their tests.


Center a DIV Vertically and Horizontally with Full Width and Height (http://blog.tyrsius.com/center-div-vertically-and-horizontally-with-full-width-and-height/, Fri, 29 Jan 2016)

There are way too many solutions to this online that just don't work. I want a full-page absolutely centered DIV. It needs to center in the browser, which means forcing the correct height and width.

This uses flexbox, so it doesn't work in Internet Explorer (yes, even IE11... I thought we were past this?)

There are a couple guides out there for doing this with Mailgun, but I found subtle problems with each of them.

The How

There are four pieces to this setup.

The Domain Setup in Mailgun

The SMTP Setup in Mailgun

The Email Forwarding in Mailgun

The SMTP Setup in Gmail

Domain Setup in Mailgun

First, make a Mailgun account. Then, create a domain from the domains tab.

I recommend not using a subdomain for this. If you use a subdomain you can still receive email addressed to the root domain, but you can't send from the root domain. If you are using this for personal email, you probably want to stick with the root domain.

You will be greeted with a page with a bunch of instructions for setting up the DNS records needed. A couple things you need to know.

The domain doesn't need to be in the hostname field (for DigitalOcean). Your DNS settings should look like this (for a root domain).

The TXT values need to be wrapped in quotes. You can see the start of this in the image above.

The CNAME record is completely optional. I don't plan on using this setup to track advertising campaigns, so I didn't add it. Adding it will cause Mailgun to intercept outgoing mail and replace links with tracking links. I do not recommend adding this entry to your DNS for personal email.

Once you have done this, go back to the Domains tab and click on the domain. Then click on the big Domain Verification & DNS header to expand it, and click the Check DNS Records Now button. You should see green checkmarks on the MX and TXT records indicating that setup was successful.

SMTP Setup in Mailgun

On the domain page (the one for your domain, not the "domains" tab in Mailgun), click on Manage SMTP credentials.

This is where you define the accounts that can send from your domain. You can set up multiple accounts here and give them passwords. It's pretty straightforward.

Email Forwarding in Mailgun

Forwarding is how the email will actually get to Gmail. It is done using Mailgun Routes.

A route is composed of a Filter and an Action. You want it to look like this

The match_recipient filter describes the receiving account, and the forward action tells Mailgun where to send it. The stop() action stops Mailgun from trying to match more routes, which is handy if you are doing this for multiple accounts. Each account gets its own route.
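As a concrete sketch (both addresses are hypothetical placeholders), a route forwarding a custom-domain account to Gmail looks like:

```
Filter Expression: match_recipient("you@yourdomain.com")
Actions:           forward("youraccount@gmail.com"), stop()
```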

SMTP Setup in Gmail

Gmail will now be receiving the emails sent to your custom domain accounts, but to send or reply from those accounts Gmail needs access to the SMTP server at that domain. In Gmail, go to Gear Icon > Settings > Accounts and Import.

Under Send mail as, click Add another email address you own.

Enter the name you want to use (probably your own) and the full email address (including domain). Hit next. Then, you need to give the server info

SMTP Server: smtp.mailgun.org

PORT: 587

Username: the full email address, including domain

Password: the password you put into the Mailgun SMTP account

Once you've entered that, you can optionally make this email address the default that Gmail uses when sending new email. By default, Gmail replies using whatever email address a message was sent to, though a dropdown is available on outgoing messages to change this.

Wrap up

That's it, really. You can now send and receive email at your custom domain all from inside Gmail with as many accounts as you want, for free!

When working with Nodejs, starting your applications and keeping them running is not always straightforward. There are tools to help you with this, like forever and pm2, but if you are doing any kind of deploy-and-build step you will need more of a plan than "run the application."

My application server management consists of five pieces

The application directory structure

Shell scripts to manage the individual application

A git repository with a post-receive hook

A shell script to manage all applications

Cron to make sure everything stays up and starts after a reboot

Application structure on the server

A consistent directory structure will help keep you sane as the number of projects on your server grows. Here is what I use

~/
|- git/
|  +- project.git
+- webapps/
   +- project/
      |- app/
      +- run/

The git repository holds the source code. The /webapps directory has a project folder for every project. The /project/app directory holds the actual application code. The /project/run directory holds several scripts that control the application.

Shell scripts for the application

I use five shell scripts to manage every application. If I ever take some time to learn bash better, I will condense them down to one script with flags, but I am not there yet. To simplify project setup, all five can be downloaded and Regexed (sed) into place with this command.

I have these scripts to encapsulate application control. They are called from various places, and if individual applications have special constraints, or all applications need to change, I only need to make changes to these scripts. Some of them, like the one-line stop, may seem like overkill, but if I ever switch from forever to pm2 it's going to save me a lot of headaches to only update a couple stop scripts.

These are the scripts.

./start

The start script is a safe-for-cron script. It can be run and re-run without any negative side-effects, because it checks to see if the app that it's about to start is running before trying to actually start it.
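A sketch of what such a start script can look like, wrapped in a function here so it is easy to exercise. The directory layout matches the structure above; the server.js entry point and the forever invocation are assumptions about how you run your app.

```shell
# Start an app only if forever isn't already running it (safe for cron).
start_app() {
    local app="$1"
    local app_dir="$HOME/webapps/$app/app"
    if forever list 2>/dev/null | grep -q "$app_dir/server.js"; then
        echo "$app already running"
    else
        (cd "$app_dir" && forever start --uid "$app" server.js)
    fi
}
```

The check against `forever list` is what makes re-running harmless.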

./restart-install

Since the app needs to install in between stop and start, the restart script can't be used (which is part of what really limits its utility). This is the script that is actually called from git's post-receive hook.
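restart-install is just stop, install, start in sequence. A sketch, again as a function with illustrative paths:

```shell
# Stop the app, install fresh dependencies, then start it again.
restart_install() {
    local run_dir="$HOME/webapps/$1/run"
    "$run_dir/stop"
    (cd "$HOME/webapps/$1/app" && npm install --production)
    "$run_dir/start"
}
```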

A shell script for all applications

Now some people skip this step and just load the start script for each application into their cron jobs. I don't like doing this, because I want to run cron every fifteen minutes and after reboot. This would mean writing the call to the start script twice, for each application. Instead, I wrap all of them into one script.

apps=(
    "blog"
    "portfolio"
    "home"
    # etc...
)

for i in "${apps[@]}"
do
    /home/tyrsius/webapps/$i/run/start
done

This has the added advantage of the very clean "one app per line" format that the apps array loop gives us. Also, because each start script is safe to run multiple times, so is this script.

Cron jobs

Last, but certainly not least, is the cron job that makes sure the apps are always running.

It's simple, and it never needs to grow. When new apps are added, they go into start-all-apps.

The PATH for cron jobs won't include node normally, which is why I have to manually set the path at the top; otherwise, the start script's calls to forever will fail.
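Put together, the crontab ends up looking something like this (the start-all-apps location and the node path are illustrative, not necessarily where yours live):

```
PATH=/usr/bin:/bin:/home/tyrsius/bin/node/bin
*/15 * * * * /home/tyrsius/webapps/start-all-apps
@reboot /home/tyrsius/webapps/start-all-apps
```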

Spinning up new apps

This can all be done manually, without too much effort. However, as a developer, I prefer to automate as much as possible. With that in mind, I have a script that will create a new project directory, a new git repo, and even add an entry to start-all-apps for me. It's long, and uses scripts I have stored on GitHub, but you might find it interesting.

I've written about this before, but my methods have changed somewhat since then. Digital Ocean also makes it quite a bit easier than WebFaction. Hopefully you have already gone through the DNS Management guide. If you have not installed git and set up a CNAME for git.DOMAIN.com, you should do that now.

Setting up the remote server

If you are just using SSH to connect to git, instead of HTTP, the setup is actually pretty simple. You just need a git workspace somewhere on your remote machine (a droplet, in this case).

I recommend keeping all of your git workspaces in ~/git.

mkdir ~/git

For this guide I am going to be using my portfolio as an example. This is how you create a workspace.

cd ~/git
git init --bare portfolio.git

The .git suffix is conventional, not necessary. The --bare flag just tells git not to stick things into the standard .git hidden directory. Since we aren't using this directory to do work, the extra nesting doesn't help. It's still a full repo, but getting to the very useful /hooks directory will be /portfolio.git/hooks instead of /portfolio.git/.git/hooks. Much cleaner.

That's... actually all you need to do on the server. This directory is ready to be pushed to over ssh.

Configuring the local machine

Since we are pushing over ssh, we need to tell git where the server is. This is done by configuring your remote branches. You usually only have one remote, origin, which is the default remote that receives your changes when you git push. If you are not using GitHub or some other public repository as your primary, you can alter this remote:

git remote set-url origin USER@git.DOMAIN.com:git/portfolio.git

However, you are probably using GitHub, and want to continue using it. In that case, you need to add a new remote. I am going to call this remote digi, for Digital Ocean:

git remote add digi USER@git.DOMAIN.com:git/portfolio.git

We can push to this remote with

git push digi master

You will be prompted for your SSH password, and then the branch will push. Your droplet now has all your source code in it.

Updating your server on push

Once your code has been pushed into the repository, you're probably going to want to do something with it. Like move it into the directory it is hosted from.

Unfortunately this guide is going to hit a chicken-and-egg problem, since we haven't talked about that yet. If you haven't set up your application yet, you might want to hop over to the Application Management guide first. Otherwise, continue.

We can do this very easily by setting up a post-receive hook in git. This is a script that runs when a repository receives a push. It is the perfect spot for copying files out of the repository and running any setup commands our application requires.

I use one of two post-receive hooks.

The single branch hook

This hook will fire after every push, which implies that you are really only working with one branch. For applications that I host on GitHub, which holds all my branches and tags, I only ever push the master branch to my hosting server. In this case, a hook that fires every time is good enough, since only production code will ever be received.

This script assumes you are using the structure I use in the Application Management guide, where PROJECT/app contains the application code, and PROJECT/run contains scripts for managing it.

This hook updates the /app directory with the latest code before running the restart-install script. This makes it easy to manage the actual process of stopping the app, building the new code, and restarting it from the app, instead of from git. Once git is set up, we shouldn't ever have to come back here.
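The working part of such a hook can be sketched like this. It is shown as a function so it is easy to exercise; the app name and directory layout are illustrative, and `git checkout -f` with an explicit work tree is the standard way to unpack a bare repo's code into another directory.

```shell
# Check the pushed code out into the app directory, then rebuild/restart.
deploy() {
    local app="$1"
    local app_dir="$HOME/webapps/$app"
    git --work-tree="$app_dir/app" --git-dir="$HOME/git/$app.git" checkout -f master
    # Let the app's own script handle stop/install/start.
    "$app_dir/run/restart-install"
}
```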

The multiple branch hook

Some of my projects are not open source. Actually, right now just one. It's the project that runs my house (Nest, my Hue lights, some other smart stuff). I don't have a solid grasp on how to secure these things yet, and I don't want anyone trolling my energy bill. You may also have the occasional need to keep your code off of GitHub. If you do, you probably still want your source code to live somewhere besides your home computer, and this means keeping the master and dev branches on your remote server.

The single branch hook is going to give you trouble in this situation. Luckily, we can make this script conditional on the branch name.
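The branch check can be sketched like this. A post-receive hook receives one "oldrev newrev refname" line per pushed ref on stdin; here the per-ref decision is a function, with the deploy step elided since it matches the single-branch hook above.

```shell
# Decide what to do with one pushed ref; only master gets deployed.
handle_ref() {
    local refname="$3"
    local branch="${refname#refs/heads/}"
    if [ "$branch" = "master" ]; then
        echo "deploying $branch"
        # checkout -f into the app directory and run restart-install here
    else
        echo "storing $branch without deploying"
    fi
}

# In the real hook:
#   while read oldrev newrev refname; do
#       handle_ref "$oldrev" "$newrev" "$refname"
#   done
```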

Where to get it from

Yum has a Nodejs package. I don't recommend using it. It updates slowly, and you are probably going to want to manage node and npm's versions yourself. I highly recommend installing and updating node yourself, without a package manager (other than npm). Later on, being able to update Nodejs through npm is going to make your life a lot easier.

Where to put it

There is some confusion over where it is best to install things on Linux. I think this answer gives a good overview of the purpose of the available options, but it still isn't clear to me whether the recommended location is /usr/bin or /usr/local/bin.

I don't use either.

I install Nodejs in ~/bin/node. This does produce the rather redundant PATH=$PATH:$HOME/bin/node/bin, but I still prefer to have it in a directory under $HOME. Nodejs, primarily because of npm, is kind of unique. It isn't a shareable binary, because its "global" environment shouldn't be shared with other users. It's mutable, and I want to keep it completely isolated. /usr/local/bin would do that, but I want to make it extra special. This might seem crazy, and it might be. If you want to install under /usr/local/bin, just make the manual adjustment.

Whatever you do, do not install Nodejs in a location that requires sudo access. Not only will it make it pointlessly difficult to run your application later, sudo changes your path in ways that will turn npm into a bipolar ax murderer.

Manual Installation

I use this script, modified from [here](https://gist.github.com/isaacs/579814). It puts everything in ~/bin/node and adds $HOME/bin/node/bin to your path. The make install step is going to take a few minutes.
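The gist boils down to something like the following. The version number and URL here are assumptions (pick a current release), and the build steps are shown as comments since they need a compiler and take a while to run.

```shell
# Build Node.js from source into ~/bin/node — a sketch of the approach:
#   mkdir -p ~/bin/node && cd /tmp
#   curl -LO https://nodejs.org/dist/v4.3.1/node-v4.3.1.tar.gz
#   tar xzf node-v4.3.1.tar.gz && cd node-v4.3.1
#   ./configure --prefix="$HOME/bin/node"
#   make && make install
# Afterward, make sure the install is on your PATH (e.g. in ~/.bashrc):
export PATH="$PATH:$HOME/bin/node/bin"
```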

Nginx is a highly-capable server, suitable for many use cases. The purpose of this guide is to show nginx's use as a reverse proxy, not as the application server itself. It is assumed you will be using an application server, like NodeJS, to perform the rest of the work.

Reverse Proxies

If you came here from the main guide, you may not know what a reverse proxy is, or what it is used for. A reverse proxy handles incoming connections by re-routing them to internal destinations, concealing the real target from the outside world. This is the reverse of a standard proxy, which handles outgoing connections, concealing the real source from the outside world.

The use case we are interested in is having a single server handle the requests for multiple applications, by using the http domain to figure out which internal port to use.

If you use Node.js in development you are probably used to going to http://localhost:3000 or http://localhost:9000 to see your server. That :3000 is a port number. You don't normally see it because when it is left off the browser assumes that it is port 80 for http and port 443 for https. You need admin/sudo/root access to bind to those ports, so development commonly picks a high number so that it can run as a normal user.

When we want multiple applications running on a single server, for example a blog and a portfolio, they cannot both listen on that server's port 80 (or 443 if they are using SSL). This is where nginx comes in. You can configure these applications to listen on other ports (like 3000 and 3001) and then have nginx take requests to blog.tyrsius.com (which arrive on port 80 by default) and tyrsius.com (still port 80) and route them to 3000 and 3001 internally.

This is what we are going to do.

Configuring Nginx

The default config file for nginx is located at /etc/nginx/nginx.conf. It contains a server block for the default server. We are going to ignore that, since it doesn't affect us, but if you want to change the default response nginx serves to visitors who browse to your server's IP address directly, this is where you would do it.

By default, nginx also loads all of the .conf files in /etc/nginx/conf.d/ with a wildcard include statement. We will take advantage of this by adding a .conf file for each application we are going to host on this server. I am going to stick with my blog and portfolio examples.

sudo nano /etc/nginx/conf.d/blog.conf

These conf files are written in nginx's own block-structured syntax (close to JSON, but not quite). Configuration is done in blocks, and the top-level block we need is the server block:

server {
    listen 80;
    server_name blog.tyrsius.com;
}

This tells nginx to create a server listening on port 80 (the default http port) for requests to blog.tyrsius.com.

The second level block we need to configure is the location block. You can have more than one of these in a server block, but don't worry about that for now.

The only important value in here is the top one, proxy_pass. It controls where nginx will route requests to blog.tyrsius.com:80. I picked 32100 as a base to increment ports from on all my node applications. You can safely pick any unused port in the range 1024–49151 (ports above that, up to 65535, are the dynamic/ephemeral range and are best avoided). It only has to be unique on your server.
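A location block along these lines does the job. This is a sketch: the proxy_set_header lines are the usual boilerplate for proxying to a Node backend, not necessarily the exact original file.

```nginx
location / {
    proxy_pass http://127.0.0.1:32100;
    # Boilerplate: forward the original host and client details to the app.
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```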

The rest of the values are boilerplate, and we will be using them everywhere. To simplify this, we can extract them into another file, and use an include to pull them in. Since we are going to be using them a lot, this is a good idea.

You might notice the redirect here is a little different. This is because going from HTTP to HTTPS doesn't require us to change the subdomain, so we can use the backreference $host instead of spelling it out.

You should also notice that the server block that handles the real request is listening on 443, the default SSL port, instead of 80. It also has ssl after the port, which tells nginx the kind of request it is.

The SSL values are listed below the location, and are pretty boilerplate. In fact, the bottom 3 values can (and should) be extracted into another include. My real conf looks like this.
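A reconstruction of what such a conf looks like. The include filenames and certificate paths are assumptions (Let's Encrypt's usual layout); the shape of the two server blocks follows the description above.

```nginx
server {
    listen 80;
    server_name blog.tyrsius.com;
    # Redirect plain HTTP to HTTPS; $host saves spelling out the subdomain.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name blog.tyrsius.com;

    location / {
        proxy_pass http://127.0.0.1:32100;
        include /etc/nginx/proxy-boilerplate.conf;   # assumed include file
    }

    ssl_certificate     /etc/letsencrypt/live/blog.tyrsius.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.tyrsius.com/privkey.pem;
    include /etc/nginx/ssl-boilerplate.conf;         # assumed include file
}
```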

You should also notice that the proxy_pass is not going to HTTPS. That's because the internal application is still listening on the same HTTP port it was before. Nginx is handling the SSL for the application. This is actually very handy, especially for node.js applications, since they don't even have to know they are running in https mode. You can develop with the same node server that you use in production!

DNS

The nginx server is only part of the story. To actually get requests made to blog.tyrsius.com or tyrsius.com the DNS for these domains needs to point them to the server running nginx. If you are using Digital Ocean, this is pretty easy. To see how to do that, check out this excellent guide from the Digital Ocean Community.

When you are done, it should look something like this (the blurry bit is your droplet IP address)

If you are going to be installing NodeJS and working with git on CentOS, there are a lot of things you are going to need installed. Like a c++ compiler. And Git.

You could go through all of these one by one, but there is an easier way. You will get nearly everything you need with

sudo yum group install "Development Tools"

In addition, a lot of packages that you might need later, like Fail2Ban, live in an "extended" repository for yum. You can install that repository with

sudo yum install epel-release

Digital Ocean for Beginners (http://blog.tyrsius.com/digital-ocean-for-beginners/, Thu, 07 Jan 2016)

If you want to get to the meat of the post, jump down to the guide.

Intro

I recently made the move from WebFaction, which offers a shared/managed host with SSH access, to Digital Ocean, which offers virtual private servers with SSH access. They are both billed as being "for developers", but WebFaction does more work for you. The tradeoff is you don't get root/sudo access.

This wasn't a problem for me until I wanted to automate SSL key installation with Let's Encrypt. Even without sudo access you can obtain keys, but WebFaction required that I open a support ticket to get the certs installed. Since Let's Encrypt's certs only last 90 days, this was going to be an issue. Hence the move.

Moving from a managed host to one that I had to fully manage meant learning a lot of sys admin stuff in a short period of time. To catalog this process for other developers who know how to build applications but not run servers, I am putting together a series on first-time setup. Since I came from WebFaction I chose to stick with CentOS, which has made things slightly harder on me since most guides seem to be written for Ubuntu/Debian. Hopefully this helps you out.

The Guide

I'll assume you have already created your droplet, since it's pretty simple. I was able to find most of this information from Digital Ocean Community posts, but I wanted to centralize it. Searching for this is hard, and you can find a lot of bad information before you find the good stuff. I will cite the original guides where appropriate, since they contain great additional information.

I am breaking this guide into 2 parts, because it's going to be very large. OS Configuration, and Hosting Configuration. Each section will get its own post, but they will all be linked to from here so that there is a canonical source.

Most of this information came from this guide on servermom, except the last line that ensures the service starts at boot. Weird.

The important bits are

cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

This copies the default config into a local file that can be edited without getting erased during updates. Standard fare.

systemctl restart fail2ban.service
systemctl enable fail2ban

These start the service and configure on-boot startup, respectively.

The config file

nano /etc/fail2ban/jail.local

The config file has some very reasonable defaults, but they are all commented out at the beginning. It confused me at first that the section at the top ("How to activate jails") didn't work if you just uncommented the bits. It doesn't have everything it needs.

This file works like an ini file. It has sections denoted by square brackets, like [DEFAULT] that correspond to a jail. A jail is basically a configuration for how to handle bans. Don't worry too much about this, you only need the [DEFAULT] one.

Scroll down until you see this section

# The DEFAULT allows a global definition of the options. They can be overridden
# in each jail afterwards.
[DEFAULT]
#
# MISCELLANEOUS OPTIONS
#

Yours probably has [DEFAULT] commented out (a line starting with # is a comment). Uncomment it by removing the #. Then find the lines starting with bantime, findtime, and maxretry below it, and make sure they are uncommented as well.

This is the ssh jail, which is our primary concern. Make sure yours is uncommented like the one above. If you have the default ssh port then port = ssh will work; otherwise you need to replace ssh with your ssh port number.
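Once uncommented, the relevant parts of jail.local look roughly like this. The ban values shown are typical stock defaults (tune to taste), and the jail is named [sshd] on newer fail2ban versions; older versions may name it differently.

```ini
[DEFAULT]
bantime  = 600
findtime = 600
maxretry = 3

[sshd]
enabled = true
port    = ssh
```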

Once you have your configuration file the way you want it, restart fail2ban with systemctl restart fail2ban.service.