We can certainly set up a test DB in Firebase and test directly against it. Even if that is not "unit testing", there are many instances where that can be preferred.

Since Firebase is a third-party service, I prefer to mock it out in this case.

Creating a service to encapsulate the use of Firebase

We will extract all code related to Firebase and the interaction with the database into its own service, and inject that service into our storage.js module.
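A minimal sketch of what that service could look like (the save/getAll shape and the config argument are assumptions, not the exact code from the repository):

const firebase = require('firebase')

// db-service.js, a thin wrapper so the rest of the app never touches the SDK
module.exports = function dbService (config) {
  firebase.initializeApp(config)
  const db = firebase.database()

  return {
    save: function (show, done) {
      db.ref('shows/' + show.id).set(show, done)
    },
    getAll: function (done) {
      db.ref('shows').once('value')
        .then(function (snapshot) { done(null, snapshot.val()) })
        .catch(done)
    }
  }
}

storage.js then receives this service as an argument, which is what makes it easy to swap in a fake during the tests.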

Conclusions

In general I will always recommend using a good mocking library for these cases.
But sometimes it is easier to just go ahead and roll it like this.
These tests expose too much of the implementation, but it is very difficult to write better tests in this particular case.

Resources

https://dynamicprogrammer.com/2017/06/04/testing-the-use-of-the-firebase-db (Sun, 04 Jun 2017)

If you haven't used Firebase before, go to the Firebase console, log in with a Google account, and follow the steps to create your first project.

We are naming ours tv-series.

Once we have that out of the way, we need to install the Firebase SDK. Since we are using browserify to build our app, we will use the node package.

npm install firebase --save

Caveats (danger)

The structure we are using below is not exactly how you would use it for a multi-user application, so keep that in mind for now.

Notice that we delete the errors property of the show, since Firebase will complain otherwise, as it is not valid JSON.
We will store our data in a shows bucket, and we will use the data.id as the key for the show information.

This call will work to add new shows as well as to update existing ones. If you want to make your updates atomic at the property level, you should explore Firebase's .update() method, but in our case a complete object update works well.
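In code, the save looks roughly like this (a sketch; it assumes the database reference is already initialized):

function saveShow (db, show) {
  delete show.errors // Firebase rejects it since it is not valid JSON
  // .set() creates the show or completely overwrites the existing one
  db.ref('shows/' + show.id).set(show)
}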

Reading the shows.

Firebase has a few ways to read data; in many cases you may want to listen to change events to keep data synchronized between devices or between users.
In our case we only need to read the shows when the application starts, so we read a snapshot of the data using the .once() method.

We will also have to change the .get() method of our storage to be asynchronous and accept a callback, and update the consumers of the method accordingly.
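A sketch of the asynchronous get (the node-style callback is an assumption):

function get (done) {
  db.ref('shows').once('value', function (snapshot) {
    done(null, snapshot.val()) // a plain object keyed by show id
  }, done)
}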

Resources

https://dynamicprogrammer.com/2017/06/03/adding-firebase-database-to-our-choo-application (Sat, 03 Jun 2017)

Choo version 5 further simplifies the API, making it even easier to get up and running with Choo.

The upgrade took less than 3 hours, and the result is much simpler code that is easier to reason about.

The main changes are the removal of the "store" and "models" concepts (I left the models folder in here just as a reference), but in a real application I would probably rename it to services.

Gone are all the concepts of "reducers", "subscriptions" and "effects", in favour of an event-driven approach.

So, you pass the bus or the emit method around and you are off to the races.

You subscribe to the events in the "model/service", and the only "magic" is that to refresh the UI you need to dispatch the "render" event. (Not 100% sure if this is the case all the time, but it looks that way.)
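As a rough sketch of the pattern (the event and state names are illustrative, not from the repository):

app.use(function (state, emitter) {
  state.shows = []
  emitter.on('shows:add', function (show) {
    state.shows.push(show)
    emitter.emit('render') // the only "magic": ask choo to re-render the UI
  })
})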

The idea of immutability is gone from the framework. You can obviously still go that route, but it is up to you as the implementer of the application, and you will need to decide how to go about it.

I think the changes are great. It's even less prescriptive than before, feeling more like a library than a full-fledged framework, while still giving you most of what you need to build a single-page application.

The more I play with it, the more I think about introducing it into our tech stack and using it in a real product in production.

Resources

https://dynamicprogrammer.com/2017/03/26/choo-version-5 (Sun, 26 Mar 2017)

We have been working on our incredibly useful TV series tracker, which will help us in our mission to watch as much TV as we want, for a few days now. (Yes, that was ironic.)

If you have been following along, you may have noticed that it is very easy to add an empty show to the list.

We will add a "very" basic validation rule to the "title" field.

We can use HTML5 validation: just add the "required" attribute and we are pretty much done. But what's the fun in that?
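For example, the whole rule fits in the markup:

<input type="text" name="title" required>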

As a side note, I usually do prefer to use HTML5 validation attributes as much as I can, but I have found very inconsistent support in some browsers, especially when we need to provide custom error messages and error displays.

We will look at how to integrate validate.js into our application. We can install it using npm.

npm install --save validate.js

We will add the validation to the show and shows models. We need to validate when the Add button is clicked, and we need to make sure to clear any error message if the field changes and the value entered is valid.

Since we need to use the validation in two different places, we will create a module.
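A minimal version of that module could look like this (the constraints object is an assumption based on the rule above):

// validation.js, shared between the show and shows models
const validate = require('validate.js')

const constraints = {
  title: { presence: { allowEmpty: false } }
}

// returns undefined when valid, or an object with messages per field
module.exports = function validateShow (show) {
  return validate(show, constraints)
}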

We will add two icons beside the episode and season values: a minus sign to the left and a plus sign to the right.
Those icons will raise events when clicked, calling a handler with the values required to invoke the effect we just created, as shown in the sketch below.
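The element could look roughly like this (a bel sketch; the handler name is illustrative):

const html = require('bel')

module.exports = function counter (label, value, onChange) {
  return html`
    <div>
      <button onclick=${() => onChange(value - 1)}>-</button>
      <span>${label}: ${value}</span>
      <button onclick=${() => onChange(value + 1)}>+</button>
    </div>`
}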

Resources

https://dynamicprogrammer.com/2016/12/04/choo-editing-and-deleting-records (Sun, 04 Dec 2016)

We have been working on our CRUD application for a while now. It's time to persist that data somehow. We will introduce a storage service that will be responsible for saving and loading data.

We will use localStorage as the store for now. The storage service will isolate the application from the actual store. We could later on use something more powerful like PouchDB, RxDB or Firebase, or just consume an API directly.

For brevity I will not show the tests anymore unless there is something interesting, but you can check the repository below.

Storage service

We will start by defining the storage service API.
We know for certain that we need a way to save new shows and list all shows already saved.
We will use the store.js library to make sure our implementation works well across browsers.

This is a very, very naive implementation that will only work if you are storing just items of the same type in localStorage. That's why we can get away with calling store.getAll().
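A naive version of the service (assuming the store.js v1 API, which exposes getAll):

// storage.js
const store = require('store')

module.exports = {
  save: function (show) {
    store.set(show.id, show) // one localStorage entry per show, keyed by id
  },
  getAll: function () {
    return store.getAll() // only safe because we store nothing but shows
  }
}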

Saving shows in the storage service.

We need to inject the storage service into the models, so we will change the models to be functions that return the model object. (This is not strictly needed, but I prefer to do it for testability purposes.)

We will also use effects to interact with localStorage. To avoid changing too much of the application, we will rename our add reducer to refresh and add a new add effect.

The application will still call shows:add, but instead of calling the reducer it will now call the effect, which in turn will call the renamed reducer.

The end result for the application will be the same, but the data will be preserved in localStorage.
We installed uuid via npm and we are using it to create the id attribute for the shows. (We added the attribute to the show model as well).
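Roughly, the renamed reducer and the new effect fit together like this (a sketch against the choo v3 model API; the names follow the post, but the exact shapes are assumptions):

const uuid = require('uuid')

module.exports = function showsModel (storage) {
  return {
    namespace: 'shows',
    state: { list: [] },
    reducers: {
      refresh: (show, state) => ({ list: state.list.concat(show) })
    },
    effects: {
      add: (data, state, send, done) => {
        const show = Object.assign({}, data, { id: uuid.v4() })
        storage.save(show)
        send('shows:refresh', show, done) // the effect calls the renamed reducer
      }
    }
  }
}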

Loading the data from localStorage

We need to call this effect when the show-list element starts. We could take a few different approaches, but choo includes an onload event that is very well suited for this scenario.

So we will add the event to the show-list element and a handler for it in the home page. One thing to note here is that onload is not a DOM event but a choo event, so the handler will not receive an event as its first argument, but the element.
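In the template that looks something like this (a sketch; the handler wiring is illustrative):

const html = require('bel')

module.exports = function showList (shows, onLoad) {
  // onload fires with the element once show-list is added to the DOM
  return html`
    <ul onload=${(el) => onLoad(el)}>
      ${shows.map((show) => html`<li>${show.title}</li>`)}
    </ul>`
}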

Note on mocking and stubbing.

You may have noticed that we are doing a lot of manual mocking and stubbing in these tests. This is not a good idea, and in real, production-grade code you should try to use a mocking library to mock objects.
In the case of the send function, on the other hand, there is no real advantage to a mocking library over a manual mock, since the contract is just the arity of the function, which a mocking library will not be able to enforce (in JavaScript).

Resources

https://dynamicprogrammer.com/2016/12/03/choo-saving-data-in-localstorage (Sat, 03 Dec 2016)

We refactored the elements in a previous post to externalize the dependencies on the framework. We took the simplest possible approach, and in doing so we moved the definition of the handlers for the events raised by the elements into the page that consumes the element.
In this case that is the home page.

I would like to refactor things a bit more and clean up that code.

The first thing I would like to do is introduce the idea of a contract for each element. This contract will validate that the parameter object we pass when calling an element is valid.

There are many ways to do that, but since choo already uses the assert library internally, we will do the same.

Writing assertions to validate our parameters.

The assert library will check the given parameters and validate that they follow a given convention.
If an assert fails, it will raise an error.
This behaviour could be dangerous in production, so our build script is already configured to remove the assertions from the final code.
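A contract could look like this (the exact fields of the params object are assumptions):

const assert = require('assert')

// validates the params object an element receives before rendering
module.exports = function contract (params) {
  assert.equal(typeof params.onAddShow, 'function', 'onAddShow must be a function')
  assert.ok(Array.isArray(params.shows), 'shows must be an array')
}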

Using a function to generate the addShows parameters.

We were defining the handler and the state for the component inline when calling the method.
We will move that code into a function that returns the object.
We will also expose the function, so we can easily test it.
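Something along these lines (the state shape is illustrative):

// returns the params object for the addShows element
module.exports = function addShowsParams (state, send) {
  return {
    shows: state.shows.list,
    onAddShow: (show) => send('shows:add', show)
  }
}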

Having the templates as part of the project is very clever, since we may use different conventions in different projects, or we may want to modify the templates slightly between projects, and those changes will only affect the current project.

Budo will display errors in the browser and the console, clearly indicated, which helps to get things off the ground until it is time to start adding tests.

Let's add our first element

We will use the choo-cli to help us generate a new element.

The whole idea of choo is that elements and pages are just functions that return DOM elements.
By default choo uses tagged template strings to build those elements. In this case it is using the bel package.

It looks like JSX, but it is not; these are just strings.
You can write your elements as you would any other html in your site and use JavaScript expressions to interpolate or inject other elements.
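For example, a tiny element is just a function (a sketch, not the generated code):

const html = require('bel')

module.exports = function showItem (show) {
  return html`<li class="show">${show.title}</li>`
}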

We could add a span in the element as well to display the values and validate that the binding is correct, but that's not really what we want to do.

Let's add some tests.

Setting up tape

I usually use Mocha for testing my projects, but I recently read this article about tape and thought it could be a great idea to try it for this project. Let's install it first.

npm i tape tape-watch tap-diff --save-dev

We are adding tape-watch to keep the tests running and watching for file changes, and tap-diff for even nicer error messages.

In the package.json file we will change the default test: script line and add a second one.

"test": "tape tests/**/*.js",
"test-w": "tape-watch tests/**/*.js"

We create our folder structure /tests/models and add a file for the show model tests.
We will test that the reducers return the expected values.
In this case we add some tests to make sure we convert the episode and season values to integers, and default to zero if the text field is emptied.
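One of those tests could look roughly like this (the reducer name and state shape are assumptions):

const test = require('tape')
const show = require('../../models/show')

test('season and episode are converted to integers', function (t) {
  const state = show.reducers.update({ season: '2', episode: '' }, {})
  t.equal(state.season, 2)
  t.equal(state.episode, 0) // an emptied text field defaults to zero
  t.end()
})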

Conclusion and things to explore in future articles

Choo looks like a capable library with the basic building blocks for data-driven applications. I would like to explore more real-life scenarios with complex data models and richer requirements in future articles.

Some possible articles

Write tests for views.

Replace standard with eslint for linting.

Edit and remove shows.

Add source for the TV shows (Netflix, Hulu, Crave, CBC, etc)

Use AJAX and effects.

Explore how to clean up the code a bit more and reduce the duplication.

UI components early history (controls).

Of course, they ran on the server, generating a complete page for the browser. In those early days, and given the complexity of those applications, this was usually more than enough.

Some of the first approaches to UI controls (in the web) that I remember came up in the early 2000s.

ASP.NET WebForms and its approach to UI components, and some efforts on the Java side as well, are just some examples.

All these components did the trick one way or another, but they were proprietary and depended on the back-end technology or framework.

They weren't really UI components but server side rendered components with some enhanced UI capabilities.

That last statement can be challenged with some of the implementations of those early components, but I will stay with it just for the sake of this post.

Regardless of your thoughts on those attempts, it was quite obvious that there was something missing in web development.

Things started to change when applications started to make heavy use of Ajax.

We realized that we didn't need to refresh the browser for every single interaction.

Different parts of the UI could change to reflect the application state at any given time.

A series of UI libraries (YUI, jQuery.UI, MooTools, and many others) started to pop up during that time.

These libraries were focused on the control paradigm. We had buttons, calendars, grids, drag and drop, tabs, accordions, and a series of controls that we were able to use to build our UIs.

UI components are more than controls.

Controls were a good start. There are still lots and lots of applications today built on top of these libraries and these concepts.

But we needed more.

We needed to be able to "compose" UI from discrete components that are usually a group of controls that act together on some data or have some small interaction with the user.

These components should encapsulate the layout (html), behaviour (javascript) and in some cases the appearance (css).

I will touch on css encapsulation later on; let's focus on the first two properties of components for now.

We started to see some libraries that focused on this paradigm, offering templating and data bindings.

One of the very early players was Knockout.js (http://knockoutjs.com/). It was extremely popular with the .NET community at the time, and later on it expanded its horizons.

Knockout.js is still going strong, already at version 3.4.0.

The reason for Knockout to be popular in the .NET community is the way it thought about components.

It brought the MVVM pattern into web development, and MVVM was a known pattern for .NET developers, first introduced to the framework by XAML applications.

Today, lots of libraries implement the same pattern or a slight variation of it.

The rise of the Single Page Application frameworks

We were all trying to imitate the smooth experiences of Google Mail and Google Maps.

Developers started to complain that building these large applications with JS was not possible.

JS was unsuited for the task.

We needed frameworks to guide our hand and help us bring some order to the chaos.

And so, the likes of Backbone, Angular, Ember and Durandal, among others, came into play.

They all propose a variation of the MVC model (MVVM or MV* as we tend to call it today).

They cover multiple areas of the application. Most of them impose some form of code organization.

They also have a way (and sometimes more than one) to organize and create UI components. (With multiple forms of data binding: one-way, two-way, one-time, etc.)

On top of all that, they suggest different mechanisms to establish inter-component communication and some form of routing for the application.

Were we there yet?

No, we weren't.

These frameworks work particularly well for building exactly what they are intended for: single-page applications.

This means, more often than not, that if you already have something out there and want to introduce some of the benefits of these frameworks, you need to do a rewrite.

We wanted to bring order to our UI.

We wanted to provide a better user experience and build discrete components that could be easily integrated into full applications.

But you don't always want or need a full-fledged SPA.

Components

The W3C started to talk about a series of proposals that could finally bring the notion of components as a standard.

Among these standards we have the ideas of Shadow DOM and scoped CSS. These are important since, as we mentioned above, components sometimes need some form of styling that should not interfere with other elements of the page.

We started to see a new trend toward smaller libraries that can be plugged together to build applications. I see this as a normal evolution, branches going off other branches.

We had an explosion of binding libraries, templates, routers and even state management.

React.js

I think that React deserves its own section in this saga.

It's not because it is the best (or the worst). It's because suddenly we had a very popular library that (again and again) claimed to be just that: a UI library to build self-contained components.

Besides the early controversies around JSX, React took the world by storm.

React wasn't even the first to the party. Google's Polymer was earlier, and its proposed path was to embrace standards via polyfills to build Web Components that should be future-proof.

After React.js, a series of other libraries came up; to name a few, we have Riot.js, Vue.js, Mithril, Cycle.js and many more.

Organizing large applications with components (take 2)

React and the other component libraries are building their own ecosystems, with proposed architectures and best practices for building large applications.

We are even seeing the use of the original SPA frameworks, like Angular, with some of these component libraries (like React).

But the beauty of all this is that you don't have to.
You can leverage single-file Vue.js components inside a legacy application: start encapsulating the areas of the app where it makes the most sense, test around those, and reuse.

Web Components libraries.

These new libraries are trying to solve the original problems and they are focused on providing the component experience that we have been craving for years now.

We are in the very early days.

I hope to see a greater emphasis in these libraries on the testability of discrete components. In some cases it is non-existent, or the proposed paths and tools are less than appealing.

In the meantime, support for the set of Web Components-related standards is coming to some browsers, but there is still a lot of uncertainty about what the final implementations are going to look like.

We still don't know if all of the proposed apis are going to be implemented.

If you want to go down this path, you need to rely on some of the webcomponents polyfills, and your mileage will vary regarding browser support and performance.

Personal choice.

So far I have found myself liking Vue.js the most. It seems to solve all the problems mentioned before.

It's fairly easy to test. It's easy to use components in isolation in a legacy web page, or to scale up to a full-fledged SPA if you need to.

In a future article I will share some of my experiences working with Vue.js during the last few months.

https://dynamicprogrammer.com/2016/09/17/the-road-to-web-ui-components (Sat, 17 Sep 2016)

If you are like me, you spend a lot of time in the terminal.

As a developer my workflow is centered around running commands on it, launching new processes, interacting with text files, remoting to other machines, etc.

As such, we tend to customize our terminal emulator of choice.

For some time I have been running iTerm2 on my Mac and it serves me well. No complaints, really.

But I always like to try new tools to see if I can improve my setup.

My main reason for doing so is to try to be more productive. A tool that better aligns with your workflow, or helps you achieve things in a faster or easier way, will help you do that.

I mostly want tools that are lean, get out of my way, but can be customized or modified if I need to.

HyperTerm seems to be such a tool

It's built on top of Electron, which means it is built using just the technologies of the web: html, css and JavaScript.

The great thing about it is that it has an extensibility model, so you (or anybody) can write extensions (plugins, actually) and publish those as npm packages.

At the time of this writing, the bulk of the plugins seem to be themes, but there are a few packages I added to my setup that are very interesting.

So, think about it. As a developer, your terminal is probably the application where you spend a great amount of time. (Maybe your editor and the browser are the other two that are being used more.)

As such, a terminal that allows you not just to customize it via profile files, but to actually customize the whole behaviour of the emulator and add tools, commands, or just modify the UI to your heart's content, should be a very interesting proposition.

Getting hooked on it.

I have to admit that I originally installed HyperTerm when it was announced and for whatever reason I didn't pay much attention to it.

Today I decided to take another crack at it, and after going over it with a bit more patience I started to see the appeal.

So, upon installing it, the bare HyperTerm out of the box is just like any other terminal emulator that supports tabs.

It will launch your default shell, load your profile and it will look pretty much as the one you are using, but with a different theme.

The first thing I noticed was that it is fast; you will not notice any delay or drop in performance compared with iTerm2. (Well, I did have some issues a few times, but nothing serious; I will expand on it below.)

In my case, a few seconds after launching it, a notification showed up in my notification center indicating that it was going to update, and a few seconds later a second one appeared, telling me how to refresh the terminal to load the new updates.

This surprised me in a very good way. The update experience was smooth, the kind we should see in more and more applications (but we don't, not yet).

I also saw that all the customizations in my bash profile were working, and I was able to work as if nothing had changed.

Good enough to keep testing it for the rest of the day. Let's see how this baby does on a real day, doing real work.

But before all that... let's make some changes.

Installing some plugins

I opened the ~/.hyperterm.js file with my favorite editor at the moment, Atom, and started to change some of the default options.

Not sure if you noticed, but now two of the tools I use most on a daily basis are running on Electron, built using the same web technologies I use for most of my day.

My first change was to bump up the font size; 12px is a bit too small nowadays.

My second task was to dig into npm and search for HyperTerm. You will be presented with several pages of results containing some of the aforementioned plugins.

The first thing I wanted to change was the default tabs; sure enough, there are a few plugins to do so. After trying a few, I settled on hyperterm-tabby to add a bottom border to the active tab, and hyperterm-tab-numbers to add the shortcut number in the right corner of each tab.

Another thing I didn't care for was that it always opens in the home directory, but that was easily fixed with the hyperterm-working-directory plugin.

While I was at it, I found the hyperterm-cwd plugin, which copies another handy setting I had in iTerm2: opening the current directory when launching a new tab.
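After those changes, the relevant parts of my ~/.hyperterm.js looked something like this (a sketch from memory, not the complete file):

module.exports = {
  config: {
    fontSize: 14 // up from the 12px default
  },
  plugins: [
    'hyperterm-tabby',
    'hyperterm-tab-numbers',
    'hyperterm-working-directory',
    'hyperterm-cwd'
  ]
}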

So now, HyperTerm is almost a match for my iTerm2 setup; we are really in business.

Some more plugins that are starting to make HyperTerm a winner

I use BetterSnapTool to keep my applications in their place, but lately I run my editor full size and switch to the console using the Command+TAB combination, while keeping my terminal in the bottom third of the screen.

This works well unless I decide to open a third application, like my browser; then switching windows can take more than one key combination and things start to slow down.

HyperTerm has a very nice plugin, hyperterm-overlay. This is fantastic, since now I can bring my terminal window on top of any running application at any time with a single hotkey (I use Command+h).

I configured the overlay plugin to open at the bottom of the screen to my desired size and to autohide on blur (I'm still thinking about the autohide).

Finishing things up.

I added a few more plugins.

hyperline adds a nice status bar at the bottom of the terminal; the most interesting item for me is CPU usage, handy when I run long DB operations.

hyperlink-iterm opens links from the console using Command+click, as it works in iTerm2.

hyperterm-alternatescroll is an interesting one that will open a scrollable area that is not the main window of the terminal, but a window for the command run in the console.

Missing features, issues.

So far the only thing I miss is some search functionality, especially useful when looking at logs or output from a long process. Yes, you can always use grep, but I tend to rely on the search included in iTerm2; it is very handy.

On the issues side of things, I only noticed that a few times, while running git pull or git pull --rebase, the operation took a long time (probably connectivity issues on my end, unrelated to HyperTerm) and all the tabs in HyperTerm froze for a bit. This only happened twice during the day; most of the time it is very fast, and the experience has been really good so far.

The hyperterm-overlay plugin is very helpful. I'm still getting used to it, but I'm looking forward to committing this new workflow to muscle memory.

https://dynamicprogrammer.com/2016/08/12/taking-hyperterm-for-a-ride (Fri, 12 Aug 2016)

In the last week I have heard and read people talking about Dependency Injection when they usually mean either Service Locator or module loaders.

There is a lot of documentation that explains these three concepts, but I decided to put them here in one place just to (try to) make the differences clear.

Dependency injection

The idea is that an object does not call "new" internally. This means that it is not responsible for instantiating its collaborators.

This will force you to program to an interface or protocol instead of to a concrete implementation.

The incentives are many: testability, the ability to develop services (classes) in isolation, and promoting composability.

There are a few ways to "inject" dependencies into other objects/modules. You can use constructor injection or method injection.

Sometimes you can use an Inversion of Control container that will resolve the dependencies at runtime.
Good containers are usually (mostly) invisible to the developer and resolve dependencies based on a registry. They are usually instantiated once in the application entry point and are for the most part transparent.
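Constructor injection in JavaScript can be as simple as this (ShowService and the repository are hypothetical names):

// the repository is passed in; ShowService never calls "new" on its collaborators
function ShowService (repository) {
  this.repository = repository
}

ShowService.prototype.list = function () {
  return this.repository.getAll()
}

// composition happens once, at the application entry point
const service = new ShowService(new LocalStorageRepository())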

Service locator

Service locator is a different way to "resolve" dependencies. Your classes will have a dependency on an instance of the service locator, usually a singleton.

When they need a collaborator, they will use the service locator to resolve the dependency for the given type.
The type can be a class, interface or protocol. Preferably you should use interfaces or protocols to leverage some of the same advantages as DI, like testability and isolation.

The main difference is that your dependencies are not explicit, and the service locator is omnipresent throughout the application.
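In contrast, with a service locator the dependency is resolved from inside the class (locator here stands for the application's hypothetical singleton):

function ShowService () {
  // the dependency is hidden: nothing in the signature says a repository is needed
  this.repository = locator.resolve('repository')
}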

There are some implementations that are better than others, but I would recommend trying to use DI whenever possible.

Module loaders

The only reason I'm talking about module loaders here is that I recently heard a person refer to module loaders as a way to do Dependency Injection.

This is not the case. You may be able to use module loaders to load different modules in different environments, gaining some of the benefits of DI, but the main responsibility of module loaders is to load modules into a system.

In the context of JavaScript, it can mean loading files from the backend just in time and making those files available, polyfilling or providing a given module system (like ES6 modules or AMD), or a combination of those tasks.

In some cases, the module loader can even use plugins to extend those functionalities, applying just-in-time transcompilation and loading more than just modules: images, audio, etc.

https://dynamicprogrammer.com/2016/05/19/on-dependency-injection-service-locator-and-module-loaders (Thu, 19 May 2016)

I'm in the process of writing a client for an API. One of the requirements is that the client can run in both the browser and the back-end (node), and we need to authenticate using JSON Web Tokens.

I usually like to test against the real API unless there is either a big latency or a security concern on having credentials around.

In this particular case, security is a concern, so the risk to commit credentials is high.

There are several options to test the client, but I decided to try nock.

Nock is a library that will intercept HTTP calls and return pre-defined responses if the request matches a series of criteria.

It can be as simple as matching on domain and resource, or as complicated as including headers and payload as part of the match.

Nock also includes a very neat feature that I haven't used yet but plan to: recording real interactions with a back-end, and being able to reuse those responses in your tests from that moment on.

Testing auth headers for JWT

The JWT can be sent in multiple forms, but our API expects to receive it in the headers, under the auth header in the bearer key.

auth: {
  bearer: 'dfdsfHKLUgMBlkhgkjhsd09e1...'
}

My first step was to verify another header, a public key that is sent in the X-API-KEY header. That was easy to do.

Once I had to verify the bearer token, I started to get into trouble.

Nock didn't match; the problem seems to be that Nock does not expect anything other than Basic Auth data under the auth header.

Using functions to match headers.

Nock provides the option to match a header exactly, by a regexp, or by just passing a function for the header.

The function will receive the header value, and you can perform any validation inside it, returning true or false to indicate success or failure.
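With nock that looks like this (the domain, resource and key check are illustrative):

const nock = require('nock')

nock('https://api.example.com')
  .matchHeader('x-api-key', (value) => value === 'my-public-key')
  .get('/shows')
  .reply(200, [])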

That sounded promising, so I did pass a function as the value for auth while setting up the nock interceptor, but it didn't work.

The value is given to the function as a String, and since the actual value is an Object, I was receiving the infamous [object Object].

Using a nock event

Nock also emits a few events; the one I cared about was the "no match" event.

I decided to give it a try and put my expectation inside the event handler for that event.

It worked.

But I also had to clean up all the event listeners, since I run the tests in watch mode and subsequent runs would fail (and probably run into the "too many listeners" warning from node).
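The workaround looks roughly like this (the assertion body is illustrative):

nock.emitter.on('no match', function (req) {
  // assert on the auth options of the unmatched request here
})

// clean up so watch-mode re-runs don't stack listeners
nock.emitter.removeAllListeners('no match')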

It's not the most elegant solution but it works. I will dig deeper once I have some time to try to find a better way.

Conclusion

I like Nock, and even though I ran into these issues, the library is very flexible and there is a way around the problems.

With more time using it, I will probably find a better way. I'm particularly looking forward to using the Record/Playback features.

https://dynamicprogrammer.com/2016/05/17/testing-auth-bearer-headers-with-nock (Tue, 17 May 2016)

I have been blogging for close to 13 years now.
During that time I changed URLs twice and platforms five times (including this last change).

I started a long time ago under the domain latrompa.com, which was the name of my first company back when I was living in Argentina.

At the time I was using a concoction of scripts to run the blogging part of the site.

The first migration was moving the blog to WordPress in 2007. I ran WordPress for a year, until I started running into issues updating to newer versions.

I decided to switch platforms altogether and move all the content into dasBlog, an ASP.NET blogging platform.

More blogging in 2016

Let's hope this new setup gives me the energy to blog more this year than in my last two, and to start getting back into a rhythm.

https://dynamicprogrammer.com/2016/05/14/switching-from-jekyll-to-metalsmith (Sat, 14 May 2016)

I added the test.watch task for mix to my Phoenix project, since I love having the tests running all the time while developing new features.

It's especially good for me since I'm diving into Phoenix seriously now.

I created some migrations with mix ecto.gen.migration, and I had the tests running in the background.
I ran mix ecto.migrate and started to modify the code to reflect the new structure.

The problem was that the tests (and the code) started to complain that the new columns didn't exist.

This confused me for a while; looking at the table in postgres revealed that the columns were indeed missing. I was looking (correctly) at the _test database.

Environments

I then looked into the _dev database and noticed that the new columns were there. That was the moment I realized my mistake.

I had completely forgotten that the tests run in their own environment and that the migrations for your tests run only when the tests start. Since running the tests sets the environment to test, the migrations modify the _test database only at that moment.

When you run the migrations manually, those migrations run for your dev environment (during development on your localhost) or the environment you set (e.g. production) when deploying.
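If you ever need to migrate the test database by hand, you can set the environment explicitly:

MIX_ENV=test mix ecto.migrate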

This is the same behaviour as in most new frameworks, popularized by Rails several years ago.

So I stopped mix test.watch and re-ran it once the table was migrated, and I was able to proceed with the rest of the fixes and write the new tests while using mix test.watch.

Test helpers

The "magic" happens in the test_helpers.exs file inside the test folder.

So remember: if you are using test.watch, restart your tests to run any new migrations.

https://dynamicprogrammer.com/2016/04/28/mix-test-watch-doesn-t-run-migrations (Thu, 28 Apr 2016)

I started building a new API for a side project of mine.
I created my first resource using the mix phoenix.gen.json task.
I started to play around with it directly in Postman, just to make sure I had everything set up properly, and when I looked at the console I got a nice surprise.

Notice that it is creating both inserted_at and updated_at fields in the DB.
Those fields are not exposed in the model, since I didn't declare them when running my generator, but they are already there, ready for me to consume.
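That is because the generated migration includes a timestamps() call; a sketch of what the generator emits (the module and table names are placeholders):

defmodule MyApp.Repo.Migrations.CreateShow do
  use Ecto.Migration

  def change do
    create table(:shows) do
      timestamps() # adds the inserted_at and updated_at columns
    end
  end
end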

Pin operator

Pinning can only be done after the first match (what it makes sense)

https://dynamicprogrammer.com/2016/02/09/notes-while-playing-with-elixir (Tue, 09 Feb 2016)

This is very simple, but for whatever reason I struggled with it for a bit. So I'm posting it here in case somebody else finds it helpful.

I wanted to add an icon as the logo in the navigation bar while using ReactBootstrap.

The examples on the ReactBootstrap site use a string as the brand attribute.

It took me a bit of trying different things until I decided to take a look at the code of the library.

After a quick look, the solution was obvious. The brand attribute is of type React.PropTypes.node.
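So instead of a string, you can pass an element (a sketch against the react-bootstrap API of the time; the logo path is a placeholder):

var React = require('react');
var Navbar = require('react-bootstrap').Navbar;

// any renderable node works as the brand, not just a string
var brand = <a href="/"><img src="logo.png" alt="logo" /></a>;
var navbar = <Navbar brand={brand} />;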

Not sure if this is idiomatic React, so if you know of a better way, please leave a comment below.

https://dynamicprogrammer.com/2015/03/10/adding-a-logo-to-reactbootstrap-navbar (Tue, 10 Mar 2015)

It all starts with the undeniable need for, and benefits of, bringing the different practices and areas of software development together. Agile practices allow us to move faster and adapt to change.

XP practices promote code quality and better (or just more reliable) products for the end users.

The idea of cross-functional teams helps us identify problems in our approach to solving issues and pinpoint bad architectural decisions.

DevOps is one of the newest practices in the Agile arsenal. In reality it is not that new, but we have been hearing about it more and more in the last three to four years, and now even the enterprise is starting to notice.

According to Wikipedia DevOps is:

DevOps (a portmanteau of "development" and "operations") is a concept dealing, among other things with software development, operations and services. It emphasizes communication, collaboration and integration between software developers and information technology (IT) operations personnel. DevOps is a response to the interdependence of software development and IT operations.

It's all about communication and collaboration. In time, this usually means that some or all of the developers on the team will have access to, and help with, whatever scripts, recipes, modules or method of deployment and configuration is used.

The focus is on transparency and predictability. Automation and the introduction of better management tools to reduce the amount of manual, repetitive and (let's face it) boring work is a big focus of the practice.

This collaboration will naturally bring some of the developers most interested in the operations, network and hardware side of things into a tighter collaboration with the "operations team". This results in a "good" vicious circle and big gains for the companies that adopt the practice.

The trap

As with all social practices, the trap is trying to force DevOps onto people. This can take many forms, but the one I keep seeing pop up is the idea of having developers double up as operations people, and thus not having a proper operations team.

Don't get me wrong. I think the goal should be not having "teams", and instead having poly-skilled people that can perform in different areas as needed, or as they (as individuals) want to grow into.

This is the same trap we got into with other Agile practices, like "architecture", or rather "no architecture". Since we don't have glorified architects on an agile team, sometimes this is portrayed as "we don't need no stinking architecture".

We should all embrace DevOps

We should certainly embrace DevOps and QaDev and DevProd and whatever new practice comes along that helps us remove the barriers to collaboration; practices that encourage open and high-bandwidth communication between all the members of the team and between chickens and pigs (http://en.wikipedia.org/wiki/The_Chicken_and_the_Pig).

Especially between chickens and pigs!

But we should also recognize the need for strong operations people as part of the team, taking charge and helping the team move forward faster and safer, enabling many deployments a day and achieving the somewhat elusive holy grail of Continuous Delivery.