import pandas as pd
import matplotlib.pyplot as plt

pSerie = pd.Series(annualTemperature[:], index=idxTimes)
fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(111)
pSerie.plot(ax=ax, title='%s at %s' % (dataset.variables['GANNTMP_30MIN'].long_name, ciudad))
ax.set_ylabel(dataset.variables['GANNTMP_30MIN'].units)

I've been using different VCSs since 2002 (CVS, Subversion, Plastic, Git and a proprietary one), and I'm always looking for chances to improve our team's productivity. I'm now convinced that we can take a significant step forward when using Git as a VCS if we switch from a "Text Diff" tool to a "Semantic Diff" tool.

In an agile development team, refactoring must be considered a normal day-to-day operation, and a "Semantic Diff" tool can save us a lot of time.

In the video below I'd simply like to give you three examples of using GMASTER and its semantic capabilities.

1) Semantic visual diff.

In the first one, a couple of methods called startServer and stopServer have been moved a little.
With a "Text Diff" tool you perceive the change as a remove-and-add operation. With GMASTER and its semantic capabilities, on the other hand, it's easy to track the change.

2) Criss-cross diff.

In the second one, we're refactoring the code, so we move a method down and another one up. Additionally, we add a comment block to both methods.
We can see that with a "Text diff" tool you perceive the change as a succession of remove and add operations. But a different situation occurs with GMASTER and its semantic capabilities. With its "Semantic Diff" tool you can easily visualize the relationship between the blocks side by side.

3) Semantic merge.

In the last one, we compare a normal day-to-day operation, working on two branches (a main branch and a child task branch), using two different tools:
a) An old version of Plastic SCM (3.0?), without semantic capabilities
b) Our brand-new tool GMASTER, with semantic capabilities.

Three steps are taken:

1) On the main branch, we move the run method up.

2) On the child (task) branch, we change the same method.

3) Finally, we switch back to the main branch and try a merge from the child (task) branch into it. The merge ends with a conflict.

The difference is that while with Plastic SCM the conflict has to be resolved manually, with GMASTER (thanks to its semantic capabilities) the merge tool resolves the conflict automatically.

We are supposed to be familiar with Node.js, and particularly with its asynchronous nature.

In the above post, Carlos shares three approaches to managing asynchronous calls. We assume we already know how to invoke a function that executes an asynchronous operation and then run some code when the function finishes. But what happens when there are several asynchronous functions to call, and some code must be executed only when all these functions have finished?

We've prepared some examples with different approaches, and we're comparing them.

Note: Requester.getResponseCode is a helper we've coded that relies on the request module to make HTTP calls.

It works properly: the second function invocation only occurs after the first one has completed, and of course, after all of them, the final message appears.
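The series flow described above can be sketched with promise chaining. Note that getResponseCode below is a hypothetical stand-in for the Requester helper mentioned earlier: it resolves a fake status code after a short delay instead of making a real HTTP call.

```javascript
// A minimal sketch of the series approach with promise chaining.
// getResponseCode is a hypothetical stand-in for the Requester helper:
// it resolves a fake status code after a short delay instead of
// making a real HTTP call.
function getResponseCode(url) {
  return new Promise(resolve => setTimeout(() => resolve(200), 10));
}

const completed = []; // records the order in which the calls finish

function fetchInSeries() {
  return getResponseCode('http://bing.es')
    .then(code => {
      completed.push('first: ' + code);
      // the second call only starts after the first has resolved
      return getResponseCode('http://google.es');
    })
    .then(code => {
      completed.push('second: ' + code);
      console.log('All requests finished:', completed);
      return completed;
    });
}
```

Each .then only runs after the previous promise resolves, which is exactly the "one after the other" behaviour described above.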

The fancy way: Async/Await (state-of-the-art technology)

Async functions were accepted into the ES2017 (ES8) standard. We're not going to cover them in depth here; if you want to know more, take a look at Carlos's introduction, which seems to be a very good one.
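The same series flow reads almost like synchronous code with async/await. As before, getResponseCode is a hypothetical stand-in that resolves a fake status code after a short delay:

```javascript
// A sketch of the same series flow written with async/await.
// getResponseCode is a hypothetical stand-in for the Requester helper.
function getResponseCode(url) {
  return new Promise(resolve => setTimeout(() => resolve(200), 10));
}

async function fetchInSeries() {
  const first = await getResponseCode('http://bing.es');    // suspends here
  const second = await getResponseCode('http://google.es'); // and then here
  return [first, second]; // both calls ran strictly one after the other
}
```

Each await suspends the function until the promise resolves, so the calls still run strictly in series.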

Parallel operations: if you need to run multiple tasks that don't depend on each other, and do something else when they all finish, you should run them in parallel.

This applies to operations such as file I/O, database queries, or networking. If your tasks contain this kind of calls, they'll appear to have been processed in parallel; if not, they will actually be executed in series.

How can this be possible? I/O tasks spend most of their processing time waiting for the result of the I/O call. Node.js processes the first task until it pauses to perform an I/O call; at that moment, Node.js sets it aside and grants its main thread to another task.

Remember that in single-threaded event loops you can never do more than one thing at once. But you can wait for many things at once just fine.

There are several ways to code parallel operations in JavaScript; we've prepared a couple of examples.

Parallel operations with Callbacks using async

Async is a utility module which provides straightforward, powerful functions for working with asynchronous JavaScript. Although originally designed for use with Node.js and installable via npm install async, it can also be used directly in the browser.

Async provides around 20 functions that include the usual 'functional' suspects, as well as some common patterns for asynchronous control flow (parallel, series, waterfall…). In our example we're using one of these functions (async.parallel).

parallel(tasks, [callback])

Run the tasks array of functions in parallel, without waiting until the previous function has completed. Note: parallel starts I/O tasks in parallel; it doesn't make the dream of truly parallel code execution real.

Execution doesn't wait for the first function to finish to start the second.
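The pattern behind async.parallel can be illustrated without the npm module. The parallel function below is a minimal hand-rolled sketch of the same idea, not the real library:

```javascript
// Minimal hand-rolled illustration of the async.parallel pattern
// (an educational sketch, not the real async library):
// start every task immediately, collect the results, and invoke the
// final callback once all tasks have finished (or on the first error).
function parallel(tasks, callback) {
  const results = new Array(tasks.length);
  let pending = tasks.length;
  let failed = false;
  tasks.forEach((task, i) => {
    task((err, result) => {
      if (failed) return;
      if (err) { failed = true; return callback(err); }
      results[i] = result; // results keep task order, not completion order
      if (--pending === 0) callback(null, results);
    });
  });
}
```

Even though the second task below finishes first, the results array preserves the order of the tasks array, which is also how async.parallel behaves.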

Parallel operations with Promise.all

Promise.all allows you to run multiple asynchronous operations at once and continue on your way once all of them have completed. The Promise.all method takes an array of promises and fires one callback once they are all resolved.
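A minimal sketch of this approach, where delay stands in for a real I/O call:

```javascript
// Fire several asynchronous operations at once and continue
// once all of them have resolved. delay stands in for real I/O.
function delay(value, ms) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

const allDone = Promise.all([
  delay('bing', 10),
  delay('google', 20),
  delay('nasa', 5),
]).then(results => {
  // results keep the order of the input array, not the completion order
  console.log(results);
  return results;
});
```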

Benchmarking

We've prepared a rough time-consumption comparison between the different approaches. Each piece of code requests four popular URLs (bing.es, google.es, nasa.gov and loc.gov) and, after finishing, logs the total time in milliseconds to complete the four requests and retrieve the four responses. Each test was run ten times; we took note of the milliseconds logged and calculated the minimum, maximum and average execution times. If you are interested in the results, take a look at the table below:

(times in milliseconds)

        Promises chaining   Parallel with async   Parallel with Promise.all
Min.    1141                359                   345
Max.    3037                807                   866
Avg.    1730                553                   477
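Measurements like these can be taken with a simple Date.now() harness. The sketch below uses artificial delays instead of real HTTP requests, so the absolute numbers differ, but the series/parallel gap shows up the same way:

```javascript
// Tiny timing harness: fakeRequest stands in for an HTTP call.
function fakeRequest(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function timeIt(label, fn) {
  const start = Date.now();
  await fn();
  const elapsed = Date.now() - start;
  console.log(label + ':', elapsed, 'ms');
  return elapsed;
}

async function compare() {
  // series: roughly the sum of the delays
  const series = await timeIt('series', async () => {
    await fakeRequest(50);
    await fakeRequest(50);
  });
  // parallel: roughly the longest single delay
  const parallel = await timeIt('parallel', () =>
    Promise.all([fakeRequest(50), fakeRequest(50)]));
  return { series, parallel };
}

compare();
```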

Conclusion:

We would typically not use the series approach, because it is demonstrably slower than a parallel approach. Promise.all (just like async.parallel) fires off multiple asynchronous operations at once to reduce the total execution time. Again, this post isn't just about Node.js or JavaScript; it is about asynchronous programming in general and its different approaches.

About one month ago I started this course: javascript30. After the first day's lesson, I followed the advice of Wes Bos (creator of the javascript30 challenge): build things. What things?

In this post, I'm going to share my "homework" with you: we're going to make a JavaScript Pair Match game.

It is a memory game where you need to match pairs of tiles. Playing is very simple: you turn over one tile and then try to find a matching tile. When you click on the first tile of a turn, you should be able to make a match (if you have exposed a matching tile in a previous turn, and you have perfect recall, or obviously, by serendipity).

Following the javascript30 course approach, no frameworks, no compilers, no libraries, no boilerplate will be used, only vanilla JavaScript coding (a base of html and css is assumed).

The html structure is very simple: a bunch of divs, each one with an image element inside.

<div tileID="4" class="tile"> <img picID="4" class="pic" /></div>

Every tile (div) has a non-standard attribute ("tileID") with a number, and each inner image has another one ("picID"). Tiles are numbered from 1 to 12, and images are numbered from 1 to 6 (each image appears twice, obviously).

sourcePictures is the image path array. nextRandomFromList is a random number generator (from 1 to 12), used to position images into tiles randomly. For each image, the initializeImage function is called twice (remember, each image is placed into two positions).

It initially sets the src attribute of the image to 'images/questionMark.jpg' and sets a non-standard attribute (source) with the fruit image path. After that, the function adds a click event listener to the tile (div), ensuring that the selectTile function will be called when the player clicks on a tile.
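The random placement helper mentioned above can be sketched like this. This is a hypothetical reconstruction (the names come from the post, but the body is an assumption): it hands out the tile positions 1..12 in random order, each exactly once, so every image ends up on two distinct tiles.

```javascript
// Hypothetical sketch of nextRandomFromList: draws each of the tile
// positions 1..12 exactly once, in random order.
const remaining = Array.from({ length: 12 }, (_, i) => i + 1);

function nextRandomFromList() {
  const idx = Math.floor(Math.random() * remaining.length);
  return remaining.splice(idx, 1)[0]; // remove and return one random position
}
```

Drawing from a shrinking list (instead of retrying on collisions) guarantees termination and a uniform shuffle.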

Finally, some lines of code to find out if a pair is matched or not, etc...

A lot of improvements could be made, such as levels of difficulty, each progressively harder, or a scoring system, for example. Technically, a more object-oriented approach is called for: you will notice the absence of a board object, a tile object, and so on.

The most important things we learned from this exercise are:

1) The massive use of the HTML DOM querySelector() method, combined with the use of non-standard (data) attributes. The way you set custom attributes on DOM elements matters when you use querySelector to select those elements.

We've discovered a fundamental difference between the two methods of setting custom DOM attributes in JavaScript: attributes set with setAttribute() are visible to attribute selectors in querySelector(), while values assigned directly as object properties (e.g. element.picID = 4) create no HTML attribute and are not.

This is not about Java, nor is it about split or strings. This entry is about

The need to check the API reference always.

Let me illustrate this need with a short tale.

A frequently asked question among novice Java programmers is "How to split a string on | (pipe)?".

It would be naive to think that this is going to work:

tokensArray = content.split("|");

Well, you are (supposed to be) a neat programmer, and you (think you) do your work. You analyze the first few responses after googling the question: this one, this one, just another one, or the last one, for example.

And after a brief "research", you (proud of yourself because you) have the "right" answer:

tokensArray = content.split("\\|");

Yes, the argument is a regular expression, and in regex | is a metacharacter representing the OR operator. You need to escape that character using \, bla bla bla bla.

Let's try it on a sample. We are parsing a set of lines (records), each one with a bunch of strings (fields) delimited by | (pipe). The structure is:

You get a gorgeous exception (ArrayIndexOutOfBoundsException). Do you know why? Because you haven't read the API reference, and the default behaviour of the split method is NOT what you expected: the one-argument split(regex) is equivalent to split(regex, 0), which removes trailing empty strings from the result, so a record ending in empty fields yields a shorter array than you counted on. Use split("\\|", -1) to keep them. Simple.

And we will code and sing and dance and we will live happily ever after.

1) Modifying config.js (the Ghost configuration file) to tell Ghost where it should live.
2) Making a configuration change to the Nginx conf file, to set /blog as the location and proxy traffic to the Ghost port (2368).
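Step 2 can be sketched roughly like this (the port comes from the post; the exact directives depend on your setup and are an assumption):

```nginx
# Sketch of step 2: serve /blog by proxying to the Ghost process.
location /blog {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://127.0.0.1:2368;
}
```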

But the albertomorales.eu site is not only a blog; it also serves static content. To accomplish this, we have followed the third approach described here by Ghost for Beginners too.

But it doesn't include the static content, because Ghost doesn't know about it; remember, the static content is served by Nginx, outside Ghost's control. As a first attempt, we tried to fix it by placing our custom sitemap.xml file in the root folder of the static content:

But when we submitted this file to Google, we got a bad surprise: a nesting error. The reason is that two entries point to http://albertomorales.eu/blog (both of them marked with <==).

How have we fixed it?

Now there are just two steps to take:

1) We saved the content of /blog/sitemap.xml (the sitemap.xml Ghost generates for the blog) into a file. This file was placed in the root of the site's static content as:

2) Then we need albertomorales.eu/blog/sitemap.xml to point to this file (located on the server at /var/www/albertomorales.eu/sitemap-blog.xml). In our Nginx conf, we added this near the end:

location /blog/sitemap.xml {
return 301 /sitemap-blog.xml;
}

Finally try requesting the sitemap.xml file. You should see something like the following:

With this configuration, requests to /blog/sitemap.xml are answered with the saved file, so you don't have to regenerate the XML yourself.
The key is that the sitemap entry http://albertomorales.eu/blog/sitemap-posts.xml (published posts)
remains, and it still points to content generated by Ghost. For this reason, some (changing) parts of your sitemap.xml are still dynamic.

That's all, folks! It's quick, it's dirty, and maybe it's not the most 'correct' way to do it, but it's also the most maintainable method I've found (until someone sends us a better one).

As I said, I'm fairly new to the Ghost app, so if you've got a cleaner, simpler way to deliver a dynamic sitemap.xml file when using Nginx to proxy requests to Ghost while your site's static content is served by Nginx, feel free to contact us :)