The new version of Phoenix introduces a concept
called contexts. Contexts are used to separate your app
into specific domains and to house the business logic
for your CRUDy interactions. I am a big fan of contexts,
but the way they are presented in the templates includes
a good amount of duplicated-ish code. For example, if I had
an Accounts context that controlled a User schema and
a Profile schema, I would need to create functions
get_user and get_profile, which both take in an id
and return a User or a Profile, respectively.

I was finding myself rewriting a lot of the same code
every time I added a new schema to a context, so I made
this little module that can be used with any Phoenix app
to make contexts a little more concise.

defmodule MyAppWeb.Context do
  @moduledoc """
  The context module provides a few functions used throughout
  all of the contexts via its `__using__` macro. You can then
  override those functions for certain schemas like so:

      defmodule MyApp.Users do
        use MyAppWeb.Context
        alias MyApp.Users.User

        def get(User, id), do: User |> super(id) |> Repo.preload([:url])

        # call `context_fallbacks/0` to keep the defaults for overridden fns
        context_fallbacks()
      end
  """

  defmacro __using__(_opts) do
    quote do
      import Ecto.{Query, Changeset}, warn: false
      import MyAppWeb.Context

      alias MyApp.Repo

      @spec list(Ecto.Queryable.t()) :: [Ecto.Schema.t()]
      def list(schema), do: Repo.all(schema)

      @spec get(Ecto.Queryable.t(), integer | binary) :: Ecto.Schema.t() | nil
      def get(schema, id), do: Repo.get(schema, id)

      @spec get_by(Ecto.Queryable.t(), keyword | map) :: Ecto.Schema.t() | nil
      def get_by(schema, clauses), do: Repo.get_by(schema, clauses)

      @spec create(Ecto.Queryable.t(), map) ::
              {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
      def create(schema, attrs \\ %{}) do
        schema
        |> struct()
        |> schema.changeset(attrs)
        |> Repo.insert()
      end

      @spec update(Ecto.Queryable.t(), Ecto.Schema.t(), map) ::
              {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
      def update(schema, %schema{} = entity, attrs) do
        entity
        |> schema.changeset(attrs)
        |> Repo.update()
      end

      @spec delete(Ecto.Schema.t()) :: {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
      def delete(entity), do: Repo.delete(entity)

      defoverridable list: 1, get: 2, get_by: 2, create: 2, update: 3, delete: 1
    end
  end

  @doc """
  Call this macro at the end of your context file to fall back to
  the defaults for any of the main CRUD functions you have overridden.
  """
  defmacro context_fallbacks() do
    quote do
      def list(other), do: super(other)
      def get(other, id), do: super(other, id)
      def get_by(other, clauses), do: super(other, clauses)
      def create(other, params), do: super(other, params)
      def update(other, schema, params), do: super(other, schema, params)
      def delete(other), do: super(other)
    end
  end
end

The module has comments to explain how to use it, but basically
you just put use MyAppWeb.Context at the top of your context
module. Then you get all the basic CRUD operations for your Ecto
schemas (this assumes you are using Ecto). For the example
Accounts module, you would just call Accounts.get(User, 1) and
Accounts.get(Profile, 1) to get a user or a profile, respectively.

If you need to add any functions, go for it, for example
Accounts.get_current_user. If you need to override any
of the basic functions for a schema (to add preloads,
or any extra functionality when creating or updating a schema,
for example), just make sure that you call context_fallbacks
at the end of your module file, so that your overrides
won't destroy the default behavior for the other schemas.


Phoenix 1.3 was just released. We have been using the rc version
for a while at my work and loving it. The addition of contexts has really
cleaned up the way we think about structuring code. Another addition is the
notion of a Fallback Controller. In case you haven't tried
out Phoenix 1.3 yet, the fallback controller allows you to code only the
happy path in your controllers; anything returned that is not a Plug.Conn
struct falls back to a different controller to be handled. Using this along
with dialyzer, we have been able to add a bit of type safety to our application.

Say we have a Phoenix controller that gets a user by id.

# lib/my_app_web/controllers/user_controller.ex
defmodule MyAppWeb.UserController do
  use MyAppWeb, :controller

  # new in Phoenix 1.3, this is our context for our Accounts entities
  alias MyApp.Accounts

  action_fallback MyAppWeb.FallbackController

  # notice this unhelpful spec, we'll fix this soon
  @spec show(Plug.Conn.t(), map) :: any
  def show(conn, %{"id" => id}) do
    with user when not is_nil(user) <- Accounts.get_user(id) do
      render(conn, "show.json", user: user)
    end
  end
end

This is a pretty standard controller in Phoenix 1.3. The two noticeable
changes from 1.2 are the alias of a context, which is basically just a
module that handles your business logic for a given domain, and the
action_fallback macro, which sets the fallback controller for this
controller. The fallback controller simply defines call/2 clauses for each
kind of non-Plug.Conn value an action can return.

So here, we know that Accounts.get_user/1 will return either a User
Ecto schema or nil. If we don't find the user, the nil passes
through to the fallback controller and hits the call/2 function
with nil as the second argument, rendering an ErrorView.
This is the basic idea of the fallback controller.

Now we want to add some type safety to this application, so that
we can make sure we handle all of the unhappy paths in our fallback
controller. We are going to edit the lib/my_app_web.ex file and
add a controller_error type to the controller using macro, so that the
type is accessible in all of our controllers.

You will need Dialyxir to be able to check your specs.
Describing the installation and configuration of that tool is
outside the scope of this post, but the docs are good and it is
not too difficult. Now, with this type, we will add accurate specs
to our user and fallback controllers.

Now we make sure that our controller actions return either
a Plug.Conn or a controller_error, and that the call/2 function in
our fallback controller is able to handle any controller error we
have.

We know that Accounts.create_user/1 will always return either {:ok, user}
with the User schema, or {:error, changeset}, where changeset is an Ecto
Changeset. If we run Dialyxir, we will get an error in the controller, because
it will see that it is possible to return a value that is neither a Plug.Conn.t
nor nil. All we need to do to fix this is update our controller_error type,
as well as handle this type of error in our fallback controller.

Nice, now we should get no errors from dialyzer. Basically just keep adding
error types to controller_error and handlers in your fallback controller
as you go and you can feel confident only coding the happiest paths in
your controllers.


I’ve been thinking (and perhaps overthinking) a bit about
my redux workflow. Specifically how to handle side effects,
such as async requests. I have used redux-thunk and
redux-saga in the past. While they solve the problems
of async redux well, something never felt quite right and I
couldn’t put my finger on it.

Last week I came across this article on Mark’s
Dev Blog that made me realize why I don’t like these solutions.
This, along with using Elm for the last month or so, made
me seek out a simpler solution. I got turned onto redux-loop
which was closer to what I wanted but was a bit bulky and also allows
batching actions, which I see as not so great (see this tweet).
So I started writing a blog post titled…

Redux Side Effects Middleware in 12 lines

I was so young at this point. So foolish and bright-eyed.
I posted an untested snippet into Slack in an attempt
to handle async actions like Commands in Elm.
It was, as it turned out, totally nonsense code.

The error was easy to spot: I had forgotten what next returns
in a middleware (the returned action, not the updated state),
and in any case a middleware's return value has no bearing
on state.

The middle of that function was where I was on
the right track. I wanted to be able to dispatch actions that
were one of three things:

state - the updated state, just like normal

[state, cmd] - the updated state and a command, which
will return an action to be dispatched, possibly async through
a promise

[state, [cmd, ...args]] - same as before, but with the cmd and the args
to pass to it in an array.
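To pin down that contract, here is a hypothetical helper (not from the original snippet) that normalizes the three shapes above into a [state, command] pair:

```javascript
// Normalize the three possible reducer return shapes into [state, cmd].
// Assumes plain state is never itself an array (a real implementation
// would need a more robust tag than Array.isArray).
function splitResult(result) {
  if (!Array.isArray(result)) return [result, null]; // plain state
  var state = result[0];
  var cmd = result[1];
  if (Array.isArray(cmd)) {
    // [state, [cmd, ...args]] form: bind the args now
    var fn = cmd[0];
    var args = cmd.slice(1);
    return [state, function () { return fn.apply(null, args); }];
  }
  return [state, cmd]; // [state, cmd] form
}
```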

I still needed to figure out how to intercept the actual reducer,
though, and not the dispatch function. With great hubris, I
titled a new blog post

Redux Side Effects Enhancer in 16 lines

Here I actually made an example application using create-react-app
and tried a few things, but then I found out about the store's
replaceReducer and got pretty close.
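The enhancer went something like this (a reconstruction under the post's assumptions, not the exact original): wrap createStore, then swap in a reducer that unwraps [state, cmd] pairs and dispatches whatever the command produces.

```javascript
// A store enhancer that understands the [state, cmd] return shape.
// Note the flaw discussed below: it assumes one top-level reducer.
var effectsEnhancer = function (createStore) {
  return function (reducer, preloadedState) {
    var store = createStore(reducer, preloadedState);
    store.replaceReducer(function (state, action) {
      var result = reducer(state, action);
      if (!Array.isArray(result)) return result;
      var nextState = result[0];
      var cmd = result[1];
      // Run the command (sync or async) and dispatch the resulting action.
      Promise.resolve(cmd()).then(store.dispatch);
      return nextState;
    });
    return store;
  };
};
```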

The keyword there is pretty close. I loaded this enhancer into my
simple application and it worked! I could return commands in
my reducer that would get fired off. Everything worked exactly
as expected. I had even begun to publish my blog post and enjoy
the rest of my weekend when I saw the error.

What happens when you use combineReducers or reduceReducers
or anything that a normal person using redux would use? This
enhancer assumes that you have a single reducer that returns one
of the three possible return types. I fiddled with the enhancer and
shut my laptop case. It was too complicated to do in any number
of lines worth bragging about. That is, until I changed the title
a second time.

Redux Side Effects in 14 Lines

I came back and discarded enhancers and middlewares. I realized
that I needed access to all of the user's reducers to make this
actually work, and the only place I could think of to do that was
in the reduceReducers function. And then I came up with this.
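This is a sketch of that command-aware reduceReducers (reconstructed, since the original snippet isn't reproduced here): it takes dispatch up front, threads state through each reducer, and unwraps any [state, cmd] pair along the way.

```javascript
// Command-aware reduceReducers: each reducer may return `state`
// or a [state, cmd] pair, where cmd yields an action (possibly async).
var reduceReducers = function (dispatch) {
  var reducers = Array.prototype.slice.call(arguments, 1);
  return function (state, action) {
    return reducers.reduce(function (acc, reducer) {
      var result = reducer(acc, action);
      if (!Array.isArray(result)) return result;
      var nextState = result[0];
      var cmd = result[1];
      // Fire the side effect; its action comes back through dispatch.
      Promise.resolve(cmd()).then(dispatch);
      return nextState;
    }, state);
  };
};
```

The store is created with a placeholder reducer first, and this combined reducer is swapped in once dispatch exists to hand to it.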

This works with multiple reducers. All the async actions dispatch
just as expected. You could take a similar approach with combineReducers
as well; I just wasn't interested in doing it. The strange part is that
you have to reduce your reducers after you create the store, and then
swap the combined reducer in with the store's replaceReducer function.

This makes sense, because you have to give your reducer access to
dispatch to let it produce more actions. That goes against
a lot of the main ideas of redux, but it is inherent to this
pattern.

All of this comes with the same caveats as redux-loop.
Is it a good idea? Maybe. Does it put side effects in your
reducers? Absotively. I just wanted to see if I could get a
reasonable approach to async actions in an afternoon and learn
a bit more about enhancers and the createStore function.

I have put up a repo that uses this function just to
show that it works for a simple use case. It is probably broken.
It probably doesn’t play well with other middlewares and reducers.
It most likely introduces some strange race conditions. I did
not test it and won’t. The reason is that I had already figured
out how to do all of this much more simply.

Redux Side Effects Middleware in 12 Lines: Redux

I forgot to mention that my very first attempt at this was a middleware that
put the commands in the action creators and not the reducer, which
was much simpler and did not break the core tenets of redux.
You would basically dispatch an [action, cmd] pair instead
of just an action to get the same effect.
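A sketch of that middleware (again a reconstruction, since the original twelve lines aren't shown): dispatch either a plain action or an [action, cmd] pair, and the middleware runs the command after passing the real action along.

```javascript
// Redux middleware: accepts `action` or an `[action, cmd]` pair,
// where cmd returns a follow-up action (possibly via a promise).
var commandMiddleware = function (store) {
  return function (next) {
    return function (action) {
      if (!Array.isArray(action)) return next(action);
      var realAction = action[0];
      var cmd = action[1];
      var result = next(realAction); // reduce state as normal
      // Then run the side effect and dispatch the action it produces.
      Promise.resolve(cmd()).then(store.dispatch);
      return result;
    };
  };
};
```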

This approach is probably better. You don’t have to put side
effects in reducers. Putting async bits in action creators isn’t
too far off from thunks/sagas that folks are already used to.
Also, it is 12 lines, which means I wouldn’t have had to change
my post title. Three times.


Overcoming Bookmarking Syndrome in the New Year - 01 Jan 2016

I save a lot of bookmarks for tech. Like, a lot. I went through all of my saved tutorial bookmarks, YouTube 'Watch Later' videos, Udemy courses, Instapaper feed, and unread tech books on my Kindle and calculated that I have about 90 hours of learning content in my queue.

This list has been getting out of control for a while now. A Hacker News article gets saved in Instapaper, a two-hour conference talk posted on Twitter gets added to 'Watch Later', a new technology someone mentions in Slack gets thrown in my haphazardly labeled TECH STUFF bookmark folder. It's really easy to do this, but the more I do it, the larger the queue gets, the more intimidating it gets and, sadly, the less likely I am to even try to whittle down this goliath.

My first action of the new year, this morning, was to catalog every nook and cranny of this mountain, filter out things that aren’t relevant or that I probably never cared about in the first place (Still not certain why I bookmarked a digital signal processing library in Haskell). After this, I set a goal: two months. Any longer and the amount of new tech I would want to learn would clutter up my bookmarks again and create the same problem, any briefer and the effort per day would be too unpalatable.

An hour and a half a day isn't easy. Some days you don't have that. Some days you don't feel like it. Sometimes you forget that you should only focus on this
iliadic Bloomberg article and not look up and fret over the mountain you have set out to climb. I think I can handle it, though. I keep my Sundays free and can make up some of the slack from fall-behind days through the week. I'm setting a recurring event in my calendar and plan on batching out what articles and tutorials I am going to get through each week. And I want to handle it, because it is important to me.

It is important because I really want to keep learning. I want to learn new languages. I want to learn how to build an AI that plays Street Fighter. I want to build my own digital synthesizer. I want to never again be bamboozled by what git command I want. I want to learn to make new things and how to distribute and deploy them. And I don’t want to get scared by the amount of stuff I want to learn.

I feel this 'bookmarking syndrome' puts too much confidence in a mythical 'one day' and is a poor coping mechanism for dealing with information overload. Maybe clearing out a bunch of bookmarks and a video playlist seems trivial, but for me, right now, it's important that I stop reinforcing a habit of being so overwhelmed by all that I don't know that I don't even try to learn, and instead start clearing off my feeds, tinkering around with new tools, and grokking in the new year.


I love vim. Most people who use vim feel the same. It feels pure and simple. The commands make sense (after you learn them) and everything is configurable through plaintext files. It’s not for some people but for me it’s everything I need. Well, almost everything I need.

I tend to get envious of an IDE's integrated debugger when I really need it, so I went searching for how to get the same functionality in my vim setup. I quickly found VDebug, which seems to be the only useful plugin for debugging in vim. I'm going to quickly walk through my setup for PHP debugging in vim. (You can also use it for Ruby, Node, Perl, and Python, though I have not tried these yet.)

First, you need to install the plugin and configure a few settings. I have recently switched to neovim and have replaced Vundle with Vim-Plug, so my setup looks like this.
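The original snippet isn't reproduced here, but based on the description that follows, it looks something like this (the plugin path and exact values are best guesses):

```vim
" in .nvimrc, with vim-plug
Plug 'joonty/vdebug'

" initialize the dictionary first, or assigning keys below fails
let g:vdebug_options = {}
let g:vdebug_options['port'] = 9000
let g:vdebug_options['break_on_open'] = 0
```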

I found that I had to initialize the options dictionary or I ran into problems assigning properties. I set the port to 9000 and turned off the break_on_open setting so that it doesn’t break on the first line. I use vagrant and a virtual machine to do my PHP development so I need to tell vim how to map from my home filesystem to the virtual machine’s. I have a line later on in my .nvimrc which sources a local config file so I can use project specific settings.
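The project-specific file then holds the path mapping (both paths here are hypothetical placeholders):

```vim
" local project config, sourced from .nvimrc
" VM path on the left, host path on the right
let g:vdebug_options['path_maps'] = {
  \ '/home/vagrant/myproject': '/Users/me/code/myproject',
  \ }
```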

You will need to change the path to your project, of course. Just to be clear, that is the location of the project on my virtual machine on the left and on my host machine on the right. Okay, cool, that's all you need on the vim side. Now you need to set some things up on the PHP side.

So ssh into your VM and install XDebug. This is the PHP module that allows remote debugging. On an Ubuntu box, simply running sudo apt-get install php5-xdebug should be good enough; go to the XDebug site for instructions for your particular distro. This should automatically create a file at /etc/php5/conf.d/xdebug.ini, to which you will need to add the following.
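The additions look roughly like this (the zend_extension path and the IP are placeholders; use the values for your own machine):

```ini
zend_extension=/usr/lib/php5/20121212/xdebug.so
xdebug.remote_enable=1
xdebug.remote_port=9000
; your host machine's IP address, from ifconfig
xdebug.remote_host=192.168.33.1
```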

The zend_extension line should be autopopulated; don't copy the one above, because the location of your .so file may differ depending on your version of PHP. You need to put your host machine's IP address in the remote_host parameter. You can get this just by running ifconfig (or ipconfig on Windows). Now you should be ready to debug!

You can press <F10> to toggle a breakpoint in your code and then press <F5> to start the debugger, which will wait for 20 seconds for a connection. You will need to send a special signal in your request to tell PHP to start debugging. You can download a Chrome XDebug Helper plugin to toggle this, or just send a query string parameter of XDEBUG_SESSION_START=1 in your request. After this, you should have a debugging window pop up in your editor and you can see the VDebug docs for instructions on how to run through the script and evaluate code. Happy debugging!


So you've got an idea for the next Ack, but you don't know
how to write console applications! No worries: you can
write console applications with JavaScript and publish them
to npm pretty easily. I recently did this with a project
called sfold, which allows you to quickly scaffold files
and folders for a project.

## Setup

First you need to make an empty directory and run npm init.
If you've never done this, it simply sets the directory up to
hold a node project and creates a package.json file.
For the rest of this tutorial, let's assume we want to make a
console application called salute which takes in a name and
then prints "Hello, your_name" to the console.

Let’s now make a main.js file which will hold our
application. This will be the main file for our console app.
These are the full contents of the file.
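The file isn't reproduced here, but per the description that follows, it can be as small as this sketch:

```javascript
#!/usr/bin/env node

// argv[0] is `node` and argv[1] is the path to this script,
// so the user's first argument is argv[2].
var name = process.argv[2];
console.log('Hello, ' + name);
```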

The first line is a shebang which says that we should use
the node program to run this script. Then we just print to
the console the string “Hello “ and the 3rd argument. The
reason we want the 3rd is because when you call this from the
command line using npm, the first argument will be ‘node’ and
the second will be the absolute path to your main.js file
so that when calling salute colby, the 3rd argument is
actually ‘colby’.

## Running it

Now we need to edit the package.json a little bit. Delete
the property called ‘main’ and add one called ‘bin’ which
should look like this.

{
  "name": "salute",
  ...
  "bin": {
    "salute": "main.js"
  },
  ...
}

The bin attribute contains key-value pairs where the key
is the name of the command called from the command line,
which we want to be salute. If you wanted to call your
application by typing ‘say_name’, you would change salute to
that here. And the value is the location of the script that
will be run, which is just main.js for us.

Now we need to hop back into the terminal. To test this,
we first need to link the package, which will allow you to
run it locally: just run npm link. Now your app should be
linked to your system, so you can run salute colby and
it will print "Hello, colby" back to you. Great! Now we
need to publish it.

## Publishing

If you haven't already, you need to go to the
npm website and register an account.
Then, from your terminal, you can log in with npm login using
your credentials. After that, all you have to do is run
npm publish and your application will be publicly
available. All people will have to do is run
npm install --global salute (or whatever you name your
app), and they can use your awesome command line application!


So I'm finally ready to announce that When v1.0 is ready! When was my
first capstone at Nashville Software School
and is a group activity planner. It's powered by Firebase with an Angular frontend, and you
can go ahead and log in and use it here.

Basically, the idea is that you log in and can create events for groups.
You pick a name and a time range when the event can possibly happen, and
the app generates a link. You send the link to whoever you want to attend.
They put in their name and email and then edit their availability on a calendar
widget. Then you, the creator of the event, can view the merged calendar of
everyone's availability. In the case where there is no possible way that every
participant can attend the event, the app sorts the participants by busyness
and then finds the time that works for the optimal number of participants.

Feel free to give it a spin and if you have any issues, you can submit an
issue on the GitHub repo or put a comment
below.


EDIT: I’ve redone my whole website since this post, so the game is no longer
on here, but you can check it out by looking at the code.

I've been getting into breaking functions down into
smaller chunks and writing more functional-style JavaScript.
This was prompted by wanting to learn and utilize lodash
better, as well as by teaching myself Python, which
highly values collection manipulation and more compressed,
functional methods. So, inspired by pythonic coding, I
wrote the classic game Snake in JavaScript with lodash.

You can see the game here.
I will be referencing it throughout
the rest of the post. Now, this post's title is a bit misleading: the code I wrote
isn't super functional, but some parts of it do show how utilizing set theory
can help write more concise code. Whatever, let's have a look.

First, a super simple example. When the user presses a key on the page with Snake,
I want to act on it if it is an arrow key and return early otherwise. This is
very easy in lodash.
_.contains is a lodash function that takes in an array and an item and
returns true if the array contains the item. So if the array
[37, 38, 39, 40] (the character codes of the arrow keys) does
not contain the keyCode of the event, we return early. This is
much simpler than checking each key for equality. Alright,
a more complicated/cooler example now.

My game of snake is based on a 16x16 grid. The snake and apple are just
collections of x,y coordinates. I also keep track of the head of the snake and
the direction in a dir variable, which is an x,y vector; if the snake were
moving up, dir would be [0,-1] (move 0 horizontally and -1 vertically, i.e.
up, since y grows downward).

Whenever the snake moves I have to see if the snake dies and restart the game. Here
is the code for that.

var head = snake[snake.length - 1],
    next = head.map(function (el, i) { return el + dir[i]; });

if (_.any(next, function (val) { return val < 0 || val > 15; }) ||
    _.any(snake, function (val) { return _.isEqual(next, val); })) {
  // Kill that snake
}

So first I get the next position the snake will be moving to by mapping the
position of the head of the snake with the dir vector (in the map function,
el is the coordinate and dir[i] is the corresponding vector component).

Next I check to see if the snake is about to go off the map. The map is 16x16
with coordinates from 0 to 15, inclusive, so I use lodash's any method to see if
either of the coordinates of next is greater than 15 or less than 0. _.any will
return true if any item in the collection satisfies the condition in the function,
which makes sense.

Then I have to find out if the snake has run into itself, which would end the game. This
is a bit more complicated because I have to make sure the next coordinate is not
equal to any of the snake’s body part’s coordinates, but lodash makes this easy.

_.any(snake, function (val) { return _.isEqual(next, val); })

lodash's isEqual gives us a deep equals so we can compare arrays,
which is just awesome. With that known, it almost reads like English:
if any item in snake isEqual to the next coordinate, return true.
Okay, one more example.

This function is for coloring each cell on the canvas. I pass in an x
and a y, and the function returns green if it is a snake cell, red if it
is the apple cell, or grey if it is empty. The first if statement
checks to see if any element e in the snake isEqual to the
[x,y] coordinate.
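Reconstructed, that coloring function looks something like this (the lodash stand-ins and the example `snake`/`apple` values are only so the sketch runs on its own; in the game they come from lodash and the game state):

```javascript
// Minimal lodash stand-ins for this sketch.
var _ = {
  any: function (coll, fn) { return coll.some(fn); },
  isEqual: function (a, b) { return JSON.stringify(a) === JSON.stringify(b); }
};

var snake = [[3, 4], [3, 5]]; // example game state
var apple = [7, 7];

function cellColor(x, y) {
  if (_.any(snake, function (e) { return _.isEqual(e, [x, y]); })) return 'green';
  if (_.isEqual(apple, [x, y])) return 'red';
  return 'grey';
}
```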

Hopefully these examples give you a few ideas on how you can use
lodash and JavaScript's built-in collection methods like map and
reduce. Set theory posits that, through simple functions like these,
any one collection or set of data can be transformed into any other
set of data. That makes them very useful, and I would encourage everyone
to make use of them. It will make your life much easier.


You can write simple scripts in your package.json for your projects that
will run simple commands like jshint *.js or karma start, but you can
also write your own JS scripts and run them with node, so that
npm run new_post will run node ./scripts/new_post.js for anything you
need.

I prompt the user (myself) for a title, then create a directory
named with the title in snake case. Then I create some YAML
front matter for the post and write it to a file in the new folder
called index.markdown. I keep this script in a folder in my
root named scripts, and then in my package.json I have:

"scripts": {
  "new_post": "node scripts/new_post.js"
}

Now whenever I want to make a new post, I just run npm run new_post,
get prompted for a title, and all of the directory making and front matter
generation is handled for me. This method is great for one-off tasks
that wouldn't necessarily make sense in an automated task runner like gulp.


Back again with pt. II of my Angular testing post. This time I
will show you how to create tests for controllers, generate a new
controller for each test, and test http requests. Alright,
let's get right into it.

So here is the behavior of the controller we are going to be testing:
it gets a name from the location and handles posting a new calendar,
then redirecting.

So let's see how to get our controller into our tests. What we actually
want is a function that creates a custom scope based on what we need to test,
and this isn't too hard in Angular.

Here we have injected $rootScope into the variable scope, which
lets us make new scope objects on the fly. We also made a function
that uses Angular's built-in $controller service so we can initialize
the controller with a different scope per test. Let's first make sure
we can test grabbing the name of the new cal out of the location.

it("Should pull the name from the location.", function () {
  loc.search({ name: "Hello" });
  calFn(scope);
  expect(scope.event.name).to.equal("Hello");
});

In the controller, we grab the name from the query string and set it as the
event object's name. This is a great example of why we needed a function
that makes our controller, instead of initializing it in the beforeEach: we
need to be able to initialize the location before the controller is instantiated.
Then we call calFn, which makes our controller and changes scope.event based
on the location we just set.

This is all good, but pretty simple. What if we want to test something more
complicated, like http requests? Here we use $httpBackend to spoof a server
and make sure our controller is sending out the POST request.

it("should post to server on scope.createEvent", function () {
  loc.search({ name: "Hello" });
  calFn(scope);
  http.expectPOST('http://some.url', { name: 'Hello', participants: {} });
  scope.createEvent();
  http.flush(); // fires the request and asserts the expectation
});

We initialize the location and controller in the same way again, but then
we use $httpBackend with expectPOST, which takes the URL we are expecting
and the data we expect to be posted. We then create the event, and
$httpBackend asserts the expectation for us.

This is a pretty simple example; $httpBackend can accept all kinds of requests, respond
with custom values or errors, and has pretty good documentation. From here you should be able to test all of your controllers' basic functionality and your http requests.