This gives us a new array, combining the values of the first two. But we often only want the unique values. Rather than looping over every item and checking for dupes, we can take advantage of the new Set object. A `Set` lets you store unique values of any type, and automatically tosses duplicates. And since a `Set` is iterable, it's easy to convert it back into a true array.

And there you have it. Simple, unique value array. The `Set` object allows for any type, so you could use complex objects as well, but again you would have to provide a custom `compareFunction` for handling the `sort()`.

This entry was posted on May 29, 2019 at 11:12 PM and has received 669 views. There are currently 0 comments.

If you aren't compiling your JavaScript (well, EcmaScript...) code with Babel, you're probably missing out on some of the best features of the evolving language. On the other hand, if you're working in React or Angular every day, you've probably come across some of the most dynamic features. Destructuring is a prime example, but it's also important to understand the "gotchas".

Let's take a simple example to show you some of the power of destructuring.

So, what's it do? Let's break it down. I took the first node and assigned it to band. I then assigned the bandName and members variables from the band's label and members values, respectively. Then, I took the first four items from my members array, and assigned each of them to a variable as well. This offers you a lot of power, simplifies your code, and can save some CPU cycles as well.

But, what happens if something doesn't exist? Say you had a band with no members? (That's a trick question.) Or members, but no drummer? In those cases the members or drummer variables would be undefined.

Using the spread operator, with destructuring, we add a new key to the drummer object. But, wait...

We also replaced the drummer object. This is important. While using destructuring like this can be easy, and very effective, it can have consequences. If you needed to update drummer by reference, you just killed the reference assignment.

And, the above statement would error (as will the array example below). This is because we declared drummer (and members) using const. While we could adjust, add, or remove keys and values, we can't replace the variable. We would have to declare using let instead of const.

The same holds true when using a spread operator and destructuring when attempting to update an array.

While the members array now has a fifth item, the reference to band.members is no longer valid, as you replaced the variable.

But, this is no big deal, unless you needed to update the reference to the original variable. As long as you're aware of this limitation, it's easy to fall back on other methods to update those references. Let's change our variable declarations a little bit, and retool this code to work for us.

I mentioned in a previous post the three different methods for defining Environment variables for our Docker environment, but I hit a small bit I didn't immediately realize.

You cannot reference those variables directly in your Dockerfile during setup. You can create new Environment Variables in your Dockerfile (hey, method 4), but you can't access those externally defined variables in your Dockerfile process.

Here's the deal. When you run `docker-compose build`, Docker creates the layers of your stack, but doesn't fire off your entrypoints, which are where the meat of your processes lives, and the bits that do access the Environment Variables. So, what if, in your Dockerfile, you wanted to define your server timezone? We set a timezone environment variable in a previous post. How can we then pass that to the Dockerfile for the `build`?

Arguments. I can define a build argument in my Docker Compose file, and then reference that from Dockerfile during `build`. Improving on that further, I can dynamically set that Argument, in the Docker Compose file, using the Environment Variable I already set. Let's look at a small section of the Docker Compose file, where I define my Argument.
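Sketched out (the argument name TZ_ARG and the service name are my stand-ins, not necessarily the post's originals), the Compose side passes the already-defined Environment Variable in as a build argument:

```yaml
services:
  lucee:
    build:
      context: ./lucee
      args:
        # populated from the TZ variable already set in the .env file
        TZ_ARG: ${TZ}
```

and the Dockerfile picks it up with ARG, optionally re-exporting it as an ENV for runtime:

```dockerfile
ARG TZ_ARG
ENV TZ=${TZ_ARG}
```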

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

So, before we continue I think it's important to lay out some of the next steps in what it is I wanted/needed to accomplish. I'm using Docker Compose to define my infrastructure. I started with the database, as that will be used for multiple sites, so that was a no-brainer.

Yeah, I set some stretch goals in there too. But, it's what I wanted, so I got to work.

In my initial implementation on Digital Ocean I used the default lucee4-nginx container. Nginx is a nice, easily configurable web server. And, it worked great for a year, up until Digital Ocean restarted my Droplet while running some necessary security maintenance on their infrastructure. Suddenly, nothing worked.

Whoops.

OK, so this was the first thing I had to figure out. It turned out to be a relatively easy thing to track down. I was using the "latest" container. Lucee updated the lucee4-nginx container's version of Tomcat. There were changes to the container's internal pathing that no longer jibed with the various settings files I had, so I just had to resolve the pathing issues to get it all straight. I also took the opportunity to go ahead and switch to Lucee 5.2.

Now I was back up and running on my initial configuration, but (as you can see in the list above) I had some new goals I wanted to accomplish. So I sat down and started looking over my other requirements to figure out exactly what I needed. One of the first things I looked into was the SSL certs. I could buy expensive wildcard domain certs, but this is a blog. It creates no direct income. Luckily there's Let's Encrypt. Let's Encrypt is a great little project working to secure the internet, creating a free, automated, and open Certificate Authority to distribute, configure, and manage SSL certs.

Long story short, my investigation of all of my requirements made me realize that I needed to decouple Lucee from Nginx, putting each in its own separate container. I'm going to use Nginx as a reverse proxy to multiple containers/services, so decoupling makes the most sense. I'm still keeping things small, because this is all out of pocket, but one of the advantages of Docker Compose is I can define multiple small containers, each handling its own defined responsibility. In the end it comes down to this.

Everyone's configuration changes over time, and this is what I came up with after my latest analysis of my requirements. I've already gone through multiple rounds of attacking each different requirement, and probably haven't finalized yet, but next post we'll step in again and setup our Nginx container and start some configuration.

This entry was posted on April 27, 2018 at 5:18 PM and has received 10373 views. There are currently 0 comments.


Building on our last post, we're going to continue our step-by-step setup by talking more about the database setup. I had decided to use MariaDB for my database. For anyone unfamiliar, MariaDB is a fork of MySQL, created by many of MySQL's core developers when Oracle bought MySQL, to maintain an open source alternative. Since this blog was using a MySQL database on the shared hosting platform, I needed something I could now use in our DigitalOcean Droplet.

In that last post I showed you the beginnings of our Docker Compose configuration.

I explained the basics of this in the last post, but now let me go into some more depth on the finer points of the MariaDB container itself. First, most of the magic comes by using Environment variables. There are three different ways of handling setting environment variables with Docker Compose. First, you can define environment variables in a .env file at the root of your directory, with variables that would apply to all of your containers. Secondly, you can create specific environment variable files (in this case the mariadb.env file) that you can attach to containers using the env_file configuration attribute, like we did above. And a third way is to add environment variables to a specific container using the environment configuration attribute on a service.
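A quick sketch of methods two and three side by side (the values are placeholders):

```yaml
# docker-compose.yml
services:
  mydb:
    image: mariadb:latest
    env_file:
      - ./mariadb.env                  # method 2: a container-specific env file
    environment:
      - MYSQL_ROOT_PASSWORD=secret     # method 3: inline on this one service
# method 1 is a .env file next to docker-compose.yml; Compose reads it
# automatically and applies it across all of your containers
```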

Why so many different ways to do the same thing? Use cases. The .env method is for variables shared across all environments. The env_file method can take multiple files, where you may need to define variables for more than one container and share them to another, but not all, and the environment method is just on that one container. There may even be instances where you use all three methods.

In that vein, let's look at a possible use case for a "global" environment variable. I want to use the same timezone in all of my containers. In my .env file I put the following:

TIMEZONE=America/Chicago
TZ=America/Chicago

I applied the same value to two separate keys, because some prebuilt containers look for it one way while others look for it another, but this is a perfect example of a "global" environment variable.

Now we can look at environment variables that are specific to our MariaDB container. Here's where things can get tricky. Some prebuilt containers are fairly well documented, some have no documentation at all, and most lie somewhere in between. The MariaDB container documentation is pretty good, but sometimes you have to dig in to get everything you need. Let's step in.

First, I needed MariaDB to setup the service. To do this right, you have to define the password for the root user. This is something that can go in your container specific environment variables, or the container specific environment variable file.

MYSQL_ROOT_PASSWORD=mydbrootuserpw

While this will get the service up and running, it's not enough. I needed my blog database automatically set up by the build, as well as the user that my blog would use to access the database. Luckily, the prebuilt MariaDB container makes this pretty easy as well.

MYSQL_DATABASE=databaseiwantmade
MYSQL_USER=userofthatdb
MYSQL_PASSWORD=passwordofthatuser

Boom! Without any extra code I created my database and the user I needed. But...

This was just the first step. I now have the service, the database, and the user, but no data. How would I preseed my blog data without manual intervention? Turns out that was fairly simple as well. Though it's barely glossed over in the container documentation, you can provide scripts to fill your database, and more. Remember these lines from the Docker Compose service definition?
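They were the volume binding on the service (reconstructed from the directory names mentioned in this post):

```yaml
    volumes:
      - ./sqlscripts:/docker-entrypoint-initdb.d
```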

I was binding a local directory to a specific directory in the container. I can place any .sql or .sh file in that directory that I want, and the container will automatically run them in alphabetical order during the start up of the container.

OK. Back up. What? So, the container documentation says you can do this, but it doesn't really tell you how, or go into any kind of depth. So, I went and looked at that container's Dockerfile and found the following near the end:

ENTRYPOINT ["docker-entrypoint.sh"]

This is a Docker command that says "when you start up, and finish all the setup above me, go ahead and run this script." And, that script is in the GitHub repo for the MariaDB container as well. There's a lot of steps there as it sets up the service, and creates that base database and user for you, and then there's this bit of magic:

The secret sauce. Now, I don't do a ton of shell scripting, but I am a linguist who's been programming a long time, so I know this is a loop that runs files. It runs shell files, it runs the sql scripts, it'll even run sql scripts that have been zipped up gzip style. Hot Dog!
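A runnable paraphrase of that loop (this is my sketch of its shape, not the exact MariaDB script) against a scratch directory:

```shell
# Simulate the entrypoint's init loop: process files in alphabetical order
dir=$(mktemp -d)
touch "$dir/01-users.sh" "$dir/02-data.sql" "$dir/03-archive.sql.gz"

for f in "$dir"/*; do
  case "$f" in
    *.sh)     echo "source shell script: $(basename "$f")" ;;
    *.sql.gz) echo "gunzip and pipe to mysql: $(basename "$f")" ;;  # match .sql.gz before .sql
    *.sql)    echo "pipe to mysql: $(basename "$f")" ;;
  esac
done
```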

So, what it tells me is that the files it will automatically process need to be located in a directory /docker-entrypoint-initdb.d, which you see I mapped to a local directory in my Docker Compose service configuration. To try this out, I took my blogcfc.sql file, dropped it into my local sqlscripts mapped directory, and started things up. I was then able to use the command line to log into my container and mysqlshow to verify that not only was the database setup, but that it was loaded with data as well.

But, it gets better. I needed a database for my Examples domain as well. This required another database, another user, and data. Now, I like to keep the .sql script for data, and use a .sh file for setting up the db, user and permissions. I also wanted to put needed details in my mariadb.env file that I'll probably need in another (Lucee) container later.

...
EXAMPLES_DATABASE=dbname
EXAMPLES_USER=dbuser
EXAMPLES_PASSWORD=userpw
...

Then, I created my shell script for setting up the Examples database, and dropped it into that sqlscripts directory.
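Mine looked something along these lines (a hedged sketch, not the original file; the variable names come from the mariadb.env snippet above, and the MariaDB entrypoint provides a root connection for init scripts):

```shell
#!/bin/bash
# create-examples-db.sh -- runs from /docker-entrypoint-initdb.d on first startup
mysql -u root -p"${MYSQL_ROOT_PASSWORD}" <<EOSQL
CREATE DATABASE IF NOT EXISTS \`${EXAMPLES_DATABASE}\`;
CREATE USER IF NOT EXISTS '${EXAMPLES_USER}'@'%' IDENTIFIED BY '${EXAMPLES_PASSWORD}';
GRANT ALL PRIVILEGES ON \`${EXAMPLES_DATABASE}\`.* TO '${EXAMPLES_USER}'@'%';
FLUSH PRIVILEGES;
EOSQL
```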

Drop in an accompanying .sql script to the same directory, to populate the database (remember that all these scripts are run in alphabetical order), and now I have a database service to fulfill my needs. Multiple databases, multiple users, pre-seeded data, we have the whole shebang.


As I mentioned in the last post, it was time to change hosting and I decided to go with DigitalOcean. But first, I had to figure out how to get all of my infrastructure deployed easily. DigitalOcean supports Docker, and I knew I could setup multiple containers easily using Docker Compose. I just had to decide on infrastructure.

Docker Compose allows one to script the setup of multiple containers, tying in all the necessary resources. There are thousands of prebuilt containers available on Docker Hub to choose from, or you can create your own. I knew I was going to have to customize most of my containers, so I chose to create my own, extending some existing containers. To begin with, I knew that I had three core requirements.

Now, I could've used a combined Lucee/NGINX container (Lucee has one of those built already), but I knew that I would use NGINX for other things in the future as well, so thought it best to separate the two.

When setting up my environment, I stepped in piece by piece. I'm going to layout each container in separate posts (as each had it's own hurdles), but here I'll give you some basics. You define your environment in a docker-compose.yml file. Spacing is extremely important in these files, so if you have an issue bringing up your environment spacing will be one of the first things you want to check. Here I'll show a simple configuration for a database server.
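The configuration itself went missing from this page over the years; reconstructed from the description in this post, it looked roughly like this:

```yaml
version: '3.3'

services:
  mydb:
    container_name: mydb
    image: mariadb:latest
    volumes:
      - ./sqlscripts:/docker-entrypoint-initdb.d
    env_file:
      - ./mariadb.env
    networks:
      my-network:
        aliases:
          - mydb
          - mysql

networks:
  my-network:
```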

Here I've defined a network called my-network, and on that network I have a database service in a container called mydb. That container is aliased on the network as mydb and mysql. An alias is a name this container can be called when referenced by other containers. I bound a local folder (sqlscripts) to a folder in the container (docker-entrypoint-initdb.d). I also included a local file that contains the Environment Variables used by the container. This container used the actual mariadb image, but you could easily replace this line to point it to a directory with its own Dockerfile defining your container (i.e. change 'image: mariadb:latest' to 'build: ./myimagefolder').

Bringing up your containers is simple. First you build your work, then you bring it up. From a terminal prompt:

> docker-compose build
> docker-compose up

You can add '-d' to that last command to skip all of the terminal output and drop you back at a prompt, but sometimes it's good to see what's happening. To stop it all (when not running with '-d') just hit Ctrl-C; otherwise use 'docker-compose stop' or 'docker-compose down'. Going forward it will probably help to review the Docker Compose Command Line Reference.

The Docker Compose File Reference is very extensive, providing a ton of options to work with. Here I'm using the 3.3 version of the file, and it's important to know which one you're using when you look at examples on the web, as options change or become deprecated from version to version.

That's a start to a basic Docker Compose setup. Continuing in the series we'll go over each container individually, and see how our Compose config ties it all together. Until next time...

This entry was posted on April 18, 2018 at 5:46 PM and has received 5126 views. There are currently 0 comments.

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose.

For many years Full City Hosting hosted my blog for free. Emmet is a great guy, and they had shared CF servers, so it wasn't a big deal.

Fast forward a decade plus, two books, tons of traffic... Hey. And, FC phased out their shared CF servers, and moved to the cloud. Time to move. (For the record, Emmet is still a great guy.)

The first thing to decide was "Where do I host this?" There's a few moving parts here (CF, MySQL, DNS, multiple domains, etc). And there are costs to consider. And learning curve. Every enterprise app I'd supported had been on a Windows Server, and that wasn't going to happen with my blog and examples apps on a budget.

Emmet suggested DigitalOcean. I could host multiple small containers on a small Droplet for about $10 a month. This should be enough to give me exactly what I need to run my few, small domains.

Step 2: Figure out the required infrastructure. Deployment to DigitalOcean is simple with Docker. I could create containers for my web server, my app server, my database, etc. But Adobe ColdFusion costs money, and while I had a license for CF X, Adobe ColdFusion isn't really conducive to containerized deployment either.

So, I'm gonna cover how I did all of this, step by step. I found a lot of little traps along the way, but it's been a ride I'll share with you all here. Kick back, strap in, and let me know where I zigged when I should've zagged.

This entry was posted on April 18, 2018 at 4:57 PM and has received 2556 views. There are currently 2 comments.

If you've come to JavaScript after learning to program in other languages, one thing that's probably stuck in
your craw over the years has been the lack of any way to define default parameters in functions. You've probably
written something like this in the past:

// This is in a class
foo (bar=true) {
  if (bar) {
    console.log('First round is on Simon!');
  } else {
    console.log('No drinks today!');
  }
}

Simple. We're saying that if bar is true, then Simon is buying, otherwise we're out of luck,
and defaulting our argument to true. We can then call this method a few times to test it out:

You can see from my comments what the output of those methods would be. It's important to note here that even
the undefined value passed as an argument triggered the default argument value as well.

Hopefully default arguments will help you to significantly simplify your code in the future. As usual, if you
have any feedback/praise/complaints/war stories, feel free to comment below, or drop me a private message
through the "contact" link on the page.

This entry was posted on November 20, 2015 at 11:10 AM and has received 2182 views. There are currently 0 comments.

I've been using Promises for some time now. JQuery has acted as a shim for some time, and several other libraries
have been around as well. ES2015 includes Promises natively. If you're unfamiliar with Promises, I strongly
suggest you read this great post by
Dave Atchley.

Like I said though, I've been using Promises for a while now. So, when I started moving to ES2015 it was a bit
of a kick in the pants to find issues with implementing my Promises. Let me give you an example of how something
might've been written before:

Seems pretty straightforward, right? addOrder() gets called, which does some stuff and then retrieves
an order from the service. When the service returns the order, that's passed to the _updateOrders() method,
where it finds the correct item in the array and updates it (I know, it's flawed, but this is just an example to
show the real problem).

So, what's the problem? That works great. Has for months (or even years). Why am I writing this post? Fair
question. Let's take a look at refactoring this controller into an ES2015 class. Our first pass might look like
this:

When MyController.addOrder() gets called, with this code, the get() method is called on the
service, and... BOOM! Error. It says there is no _updateOrders() on this. What? What happened?

Well, it's not on your scope. Why? Because ES2015 has changed how scope is handled, especially within the context
of a class. "this" is not the same in the context of the Promise's then(), at this point. But then, how
are you supposed to reference other methods of your class?

An arrow function expression (also known as fat arrow function) has a shorter
syntax compared to function expressions and lexically bind the this value (does not bind its own this,
arguments, super, or new.target). Arrow functions are always anonymous.

If you aren't still confused at this point you are a rockstar. Basically, what it says is that this takes on the context from which you're using the arrow function. So, in terms of a Promise, if we change our
addOrder() method like this:

addOrder (order) {
  // ... do some stuff
  // get original
  this._dataService.get(order.id)
    .then((order) => this._updateOrders(order))
    .catch(function (error) {
      // do something with the error
    });
}

This then fixes our this scoping problem within our then. Now, I know this isn't much in the
way of an explanation into "How" it fixes it (other than setting the right this), and I know I'm not
explaining what an arrow function is either. Hopefully this is enough to stop you from banging your head against
the wall anymore, provides a solution, and gives you some clues on what to search for in additional information.

So, as always I welcome your feedback/suggestions/criticisms. Feel free to add a comment below, or drop me a
direct line through the "contact" links on this page.

This entry was posted on November 19, 2015 at 10:26 AM and has received 2099 views. There are currently 0 comments.

Today I want to talk about the value of ES2015's new let and const variable declarations, and
give you some use case scenarios. But first, let me tell you why I was really looking at all.

Ben Nadel is one of my favorite people. You will not, ever,
meet a nicer guy. Ben is the kind of guy that if the two of you were walking down the street in a blizzard, and you
were cold, he'd give you the shirt off of his own back and go topless so you wouldn't freeze. Yes, he really is
that nice of a guy.

I'd like to say that I've learned many things from Ben over the years. He blogs about everything as he learns
it, sharing what he finds along the way. And he's the first to tell you that he's not always right. Sometimes the
comments to his posts are even more informative than the posts themselves. And, sometimes, he gives his opinion
on a matter of programming and that opinion might not always follow best practice.

About a week ago, Ben posted an article titled Var For Life - Why Let And Const Don't Interest Me In JavaScript. He's very clear, in his post,
saying that his article is an opinion piece. His thoughts are clear, his examples make sense, and
it's easy to see where he's coming from. You'll also find some really thought provoking discussion in the comment
thread both for and against.

But I think it's important to truly explore these new constructs in JavaScript. They were introduced with one
true goal in mind: to help manage memory in our applications. With the proliferation of JavaScript based applications,
both client-side and server-side, the need to carefully analyze our architecture has increased a dozen fold. How
you manage your variable declarations will directly impact your overall memory utilization, as well as assist you
in preventing race conditions within your app. The let and const declarations really fine tune
that control.

The let declaration construct is fairly straightforward. It is a block level scoping mechanism,
supplanting var usage in most situations, and restricting access to those variables to the block in which
they're declared. The var declaration construct uses function level scope. What's the difference between block level
scoping and function level scoping? Consider the following:

for (var i = 0; i < 10; i++) {
  console.log('i = ', i);
}
console.log('now we are outside of our block. i = ', i); // i now equals 10

Function level scoping means that variables declared using the var construct are available only
within the confines of that function, but are not restricted to the block they are declared within. Running
the above example shows you that i still exists outside of the for loop block. What happens
though if we change that declaration to a block level declaration?

for (let i = 0; i < 10; i++) {
  console.log('i = ', i);
}
console.log('now we are outside of our block. i = ', i); // throws an error that i doesn't exist

In the case above, the variable i is now a block scoped variable and, as such, is only
available within the confines of the for loop. The variable is cleared from memory once execution is complete
(since there are no references created to those variables in the block), and their values are not available
outside of the block, reducing the opportunity for race conditions.

Probably the most misunderstood of these constructs is the const form of variable declaration. Most
still think of this as setting an immutable constant, but that's not entirely correct. Let me give
you an example:

const myVar = {};
myVar.foo = 'bar';
console.log(myVar.foo);
myVar = {}; // You were just fine til you got to this line

"Wait? What?" Yes, you can change a variable declared with const. Sorta.

When you set a variable with const, you are binding a name to a specific location in memory.
It is set to the type you initially assign. You can adjust properties of that variable, but you cannot replace
the variable, even with one of the same type. This is why examples with a simple type (string or numeric or
boolean) would throw an error, but you could create and remove and adjust object keys or array elements all day
long. The variable itself isn't constant; its location in memory is.

Which allows me to change an example from a
previous post. In that post, I talked about using implicit ES2015 getters and setters, and showed an example
of broadcasting a variable change in a service from within a custom setter method. I had a variable in my
Controller that was not passed in to the Service by reference, so any time I changed the Service variable it had
to broadcast that change to my Controller so I could update the controller level variable. In my original example,
the variable was originally assigned to the class' "this" scope. But with const I can assign that variable
and hold its location in memory, thereby passing the memory reference and changing how I can control workflow.

Is this wise? I'm sure if you aren't careful you can create issues. But, by passing that memory reference
around you also eliminate the need to duplicate variables and broadcast events unnecessarily, reducing your memory
footprint and cpu utilization.

Learning when to use let and when to use const will take some time for many who've worked
with JavaScript for any length of time. I'm sure this will be one of those new features that takes significant
time to gain true traction among developers. In the end, it will force us all to think ahead about the
architecture of our applications (always a good thing), and the impact of our code on performance.

Now, if I can just convince Ben ;)

This entry was posted on November 18, 2015 at 10:27 AM and has received 2191 views. There are currently 1 comments.