
Thursday, 21 November 2013

Previously we discussed the benefits of the ‘Comb’ library. We will add to that knowledge here by discussing ‘Promises’. We are not speaking of promises that are too good to be true! Rather, the promise of a client system to fulfill or resolve an operation asynchronously, hence a ‘Promise’. We want to use Promises to wrap our code sections so that error handling does not block, and you will find that your code is cleaner too.

In these examples we will be using CoffeeScript, so we can also get a flavor of that emerging language that compiles into JavaScript. You will also need to install require.js through npm. One tricky thing to remember about CoffeeScript is that even a stray tab or space can make the compiler resolve something differently.
Here ‘Promise’ has three supported methods that we will cover: errback(), callback(), and resolve().

Now let’s make this even easier: if we use the ‘resolve’ method, we do not have to handle the success and failure code paths separately; the ‘callback’ and ‘errback’ methods are wrapped in a single ‘resolve’ method.

This version of the ‘readFile’ function does exactly the same work as the above, but with much cleaner code! Now let’s build on this by adding a ‘listener’. Its purpose is to listen for the resolution/fulfillment of a promise: either the task completes, or it fails. Using the ‘readFile’ function from above, the following listens for a successful... you guessed it, a successful file read.

Monday, 4 November 2013

So, when working with JavaScript, you may not have realised that it is single threaded... So Brian, what's the big deal?

To explain a little, for those new to threads: you can think of a thread as the normal flow of execution you have been working with. You start, call a function, do some work, and so on. It's all very step by step.
Threads are the same basic idea, only there can be several of them and they can talk to each other, allowing more work to be done in parallel.

The program starts execution as normal and then starts sub-tasks in threads that go off and do their work, then give back the results of their labor. This is how people leverage the strength of multi-core computers.
It does, however, take a little shift in how you think about problems. ; )

That's it. Very easy. Just put the code you want to run into another js file and off you go.
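As a sketch of that setup (the file name webworker1.js and the doubling "work" are illustrative; the page-side code is wrapped in a function that takes the Worker constructor, so in a browser you would call startWorker(Worker, console.log)):

```javascript
/* webworker1.js -- the code we want to run on another thread:
   self.onmessage = function (e) {
     self.postMessage(e.data * 2);  // do some "work" and reply
   };
*/

// Page-side script: spawn the worker and listen for its answer.
function startWorker(WorkerImpl, onResult) {
  var worker = new WorkerImpl("webworker1.js");
  worker.onmessage = function (e) { onResult(e.data); }; // reply arrives here
  worker.postMessage(21); // send the worker something to chew on
  return worker;
}
```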

You should be aware that, due to their multi-threaded behavior, web workers only have access to a subset of JavaScript's features:

- The navigator object
- The location object (read-only)
- XMLHttpRequest
- setTimeout()/clearTimeout() and setInterval()/clearInterval()
- The Application Cache
- Importing external scripts using the importScripts() method
- Spawning other web workers

Workers do NOT have access to:

- The DOM (it's not thread-safe)
- The window object
- The document object
- The parent object

2. Inlining your web worker into one file

Let's take the above and combine it all into one file.

index.html

The first thing we need to do is move webworker1.js inside a script element on our page, but we must add an id to the element so we can reference it. I used the same name as I used when it was in a file: "webworker1".

Now we want to swap out the include with the contents of source.js

becomes

You should note the first two lines.

var blob = new Blob([document.querySelector('#webworker1').textContent]);
This creates a reference to our code block that we want our web worker to work on.

var worker = new Worker(window.URL.createObjectURL(blob));
and here we create our web worker based on that code block.

3. Web workers in older browsers

For this you can use web-workers-fallback. This library provides basic compatibility for the HTML5 web worker API in browsers that don't support it. To use it, you only need to include Worker.js, and everything should work out of the box.

*As usual you should read the Limitations section and test in a browser that doesn't support web workers ;)

For more information on web workers see the great Mozilla Developer Network resource: Using web workers

Friday, 25 October 2013

Continuing with my research into WebGL libraries, this week I was looking at ThreeJs, recreating the same sample as in my previous example with BabylonJs.

With Threejs there are 2 ways to create your viewport.

The first is to dynamically create a canvas element, as above. This is how you can add your rendered scene into a webpage as normal. *The size of the camera's view is given in pixels in the JavaScript source.

The second is to fill the entire page.

In this example I will be using the first approach, as the majority of examples you will find online reference the fullscreen mode.
To place the viewport on the page, we first need a div where we want our viewport placed.

To help keep our code clean you will now create a "myCode.js". I hope you noticed that this file is the one referenced in the above section.

Inside "myCode.js" we will add the following:
As we are rendering to an element on the page, we need to define the width and height. These sizes are used in the example to match the camera perspective with the rendered viewport.

Sunday, 13 October 2013

Welcome to the new world of 3D in your browser! Today I'm going to show you just how easy it can be with a little help from BabylonJs. BabylonJs is a high-level wrapper on top of WebGL. With libraries like this, it is actually very easy to make things like the above scene.

The first thing we will need is some css for where we want the rendered image to be drawn.

To help keep our code clean you will now create a "myCode.js". I hope you noticed that this file is the one referenced in the above section.

Inside "myCode.js" we will add the following:

function babylon(){
//get a reference to the canvas element on the page
var canvas = document.getElementById("canvasView");
//create an instance of the rendering engine
var engine = new BABYLON.Engine(canvas, true);
//create an instance of a Scene.
//This is used to house our camera, lights and shapes
var scene = new BABYLON.Scene(engine);

Now that we have a Scene, we need to specify a camera.

//a Camera, so the renderer knows what to show us.
var camera = new BABYLON.FreeCamera("Camera", new BABYLON.Vector3(0, 0, -10), scene);

Now we will get the ball rolling by creating a render function to be looped over.

// Render loop
var renderLoop = function () {
// Start new frame
engine.beginFrame();
// process scene
//NOTE: at this point the "beforeRender" will be called
scene.render();
// draw
engine.endFrame();
// Need this to render the next frame
BABYLON.Tools.QueueNewFrame(renderLoop);
};
//Need this to call the renderLoop for the 1st time
BABYLON.Tools.QueueNewFrame(renderLoop);

One final thing we should do is check if the browser supports WebGL.

}// END OF babylon function
//Check if the browser is supported
if (BABYLON.Engine.isSupported()) {
babylon();
}
else{
alert("Sorry! WebGL is too cool for your Browser")
}

The Level class is used to describe logging levels. The levels determine what types of events are logged to the appenders. For example, if Level.ALL is used then all events will be logged; however, if Level.INFO is used then ONLY INFO, WARN, ERROR, and FATAL events will be logged. To turn off logging for a logger use Level.OFF.

comb.logger.configure();
//the loggers you create now will have a ConsoleAppender

OR

comb.logger.configure(comb.logger.appender("FileAppender", {file : '/var/log/my.log'}));
//loggers will have a FileAppender

The cool part (nerd time ;). Let's create a configuration file.
We configure by passing a block of JSON to the "configure" function.

console.log(">> let's see what the default levels look like");
print();
console.log(">> let's set sys and its children to 'DEBUG'");
logger_sys.level = 'DEBUG';
print();
console.log(">> let's set user and its children to 'INFO'");
logger_user.level = 'INFO';
print();
console.log(">> Now we will ONLY set sys.logger to 'WARN'");
logger_sys_logger.level = 'WARN';
print();

Examples of inheritance within logging:

console.log('>> We will create a sub logger');
console.log('>> It will inherit the level from its parent');
console.log(comb.logger('sys.logger.log').level.name);

So what is the point of this?

Ok, let's say you have "INFO" and "ERROR" levels (for a full list of predefined logging levels see comb.logging.Level). We can call one logging instance something inspired like "mypack.myclass.note" and set its level to INFO, and set another with "mypack.myclass.problem" to "ERROR".

Something important to note is that if you use the same "namespace" name in the same or a different file, it will return the same global instance regardless.
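To see the inheritance and shared-instance ideas concretely, here is a tiny stand-in (not comb's code) where a child logger walks up its dot-separated name until it finds a level, and the registry hands back the same instance for the same name:

```javascript
// Level ranks, most to least verbose (mirrors the ordering comb describes)
var LEVELS = { ALL: 0, DEBUG: 1, INFO: 2, WARN: 3, ERROR: 4, FATAL: 5, OFF: 6 };

var registry = {}; // same name => same logger instance
function logger(name) {
  if (!registry[name]) registry[name] = { name: name, level: null };
  return registry[name];
}

// A logger without its own level inherits from the nearest named ancestor
function effectiveLevel(name) {
  var parts = name.split(".");
  while (parts.length) {
    var l = registry[parts.join(".")];
    if (l && l.level) return l.level;
    parts.pop();
  }
  return "INFO"; // assumed root default, for illustration
}

// An event is logged only if it ranks at or above the logger's level
function isLoggable(name, eventLevel) {
  return LEVELS[eventLevel] >= LEVELS[effectiveLevel(name)];
}
```

With this, setting `logger("sys").level = "DEBUG"` makes `effectiveLevel("sys.logger.log")` report "DEBUG", until a closer ancestor such as "sys.logger" overrides it, which matches the output of the example above.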

Sunday, 15 September 2013

The comb library is a very useful set of utilities that will help in your JavaScript projects, especially with Node applications.

In this set of quick overviews I am going to give a brief run down of the different areas covered in the library:

Object Oriented

Logging

Utilities

Flow control

*But before going on: I am not connected to this project, I only found it helpful. On with the show!

Object Oriented: As JavaScript does not support the classical object-oriented paradigm, Comb provides a function define that takes an object with an attribute named instance or static. This attribute is your class definition.
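To make the shape of that API concrete, here is a tiny stand-in for define (not comb's implementation): members under instance land on the prototype, members under static on the constructor itself.

```javascript
// Illustrative stand-in for a define({instance: ..., static: ...}) helper
function define(proto) {
  function Klass() {
    // run the user-supplied constructor, if one was given
    if (this.constructor_) this.constructor_.apply(this, arguments);
  }
  var inst = proto.instance || {};
  for (var k in inst) {
    // "constructor" is stashed under another name so Klass itself can call it
    Klass.prototype[k === "constructor" ? "constructor_" : k] = inst[k];
  }
  var stat = proto["static"] || {};
  for (var s in stat) Klass[s] = stat[s]; // class-level members
  return Klass;
}
```

Usage would look like `var Person = define({instance: {constructor: function (name) { this.name = name; }, greet: function () { return "hi " + this.name; }}})`, after which `new Person("bob").greet()` works like a normal class method.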

Tuesday, 20 August 2013

at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (c:\node\web\auto\test.js:1:63)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)

I was trying to use a package called autoloader. It installed fine, but when I tried to run my node.js code: bang, I would get the above error.

This turned out to be a noob mistake on my part.
Fix: you can install a node package from any directory, but to have it seen by node.js you need to install it while in the node directory.

Navigate to where your node executable is.

Install your package as normal. Done!

While I'm here, I might as well talk a little bit more about installing packages (more commonly known as libraries). There are 3 things to know.

What: So node.js has a very minimalist philosophy, where anything additional that you need can just be installed. To this end there is the node package manager (npm). This is the best source to find and install packages for pretty much everything you could imagine doing with node.js/javascript.

Where: Now, as with my problem above: if you run something like npm install autoloader, it will create a directory called node_modules (if it does not exist already), download the library autoloader, and install it into a subdirectory under node_modules. This is great, but remember you need to be in the node.js directory so your node_modules are all in the same place and the node.js executable can find them. (This is also referred to as installing locally.)

Global: There is an additional parameter, -g, that allows you to use the libraries from anywhere via your terminal. It does this by adding a path to the package in your environment variables. The above autoloader example would then look like npm install autoloader -g.

Thursday, 15 August 2013

Node.js + Coffee + mongoDB

Good morning boys and girls, today I would like to share with you a little something I've been working on.

So I set out to build a web service that would

Read in a POST request on a Node.js server and save it to a mongo database

When a GET request comes in, return all the posted data (this is the normal type of message you receive from a browser, i.e. get me this page/image/thing...).

and for good measure let's make sure we're using coffeescript's class ability.

To get started you will need to install the mongoDB server.

There are very good step-by-step tutorials for all major platforms on the mongoDB site, so once you install MongoDB, fire it up to make sure everything is working fine.
Navigate to where the mongoDB executable is.

Here we create the save function that is used for the POST messages. It's split into two functions, so "save" initiates the write to the database and "_saveCallBack" runs after the values have been stored. *Note: the '_saveCallBack' function starts with an underscore; this is to denote that the function is private.

Here is a similar setup to "save", in that it has two functions, but of course here we are reading out the information that has been stored by the POST messages. You should know that line 13 is where the magic happens, as it loops through the returned values, outputting each on a new line ("\n").
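The overall shape of those two pairs of functions, sketched in plain JavaScript with the collection passed in (insert/find/toArray follow the classic mongo driver; the "body" field and helper names are illustrative, not the post's exact source):

```javascript
// save kicks off the write; _saveCallBack runs once mongo has answered
function save(collection, doc, done) {
  collection.insert(doc, function (err, result) {
    _saveCallBack(err, result, done);
  });
}
// the leading underscore marks this as private by convention
function _saveCallBack(err, result, done) {
  if (err) return done(err);
  done(null, result);
}

// read pulls everything back and joins each document on a new line ("\n")
function read(collection, done) {
  collection.find({}).toArray(function (err, docs) {
    if (err) return done(err);
    // "body" is an illustrative field name for the stored POST data
    done(null, docs.map(function (d) { return d.body; }).join("\n"));
  });
}
```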

Tuesday, 6 August 2013

Node.js + Coffee + Amazon

Amazon's Elastic Beanstalk is a deployment service that allows you to encapsulate different amazon services to provide a specific server and resource configuration based on your requirements. + There is no extra cost for this service. To find out more read Amazon's AWS Elastic Beanstalk Components

*Note: Beanstalk refers to each service collection as an "Application".

I am going to use a directory called "aws", and I will use my Git basics as my server code. This is important, as we will be using git to upload our code to beanstalk! In your command-line, go to this "aws" directory. We will also need a "package.json" file to tell our node server about our coffee source. File: package.json

Sunday, 4 August 2013

Here I'm going to run through the very basics of getting started with Git. Simply put, Git is used to store our server code. It is a LOT more powerful than that, but everyone needs to start with baby steps. The first step is to download/install the latest version of Git on your machine.

Now I am going to build on my node.js/coffee example. Once you have the source running, I want you to point your terminal to the directory where you have the coffee source saved.
Terminal

git init

Your path should now have "(master)" at the end, but we now need to add our server code into the newly created repository.

Terminal ~ This will stage all the files.

git add .

* You can think of staging like adding to a list of files that you are ready to commit.

So far so good! But there is one small thing bugging me... that console message when the server starts. Let's make 2 small tweaks. We are going to print out the port number, and make the port selection more dynamic by adding an optional argument when starting the script to specify the port, else checking whether there is a predefined port to start the server on.

First we will read in the port number from the command line. For this we will need process.argv, which is an array containing the command-line arguments. The first element will be 'coffee', the second element will be the path to our file, and the last element will be the port number. The second part is process.env.PORT; this will try to pull a port number from the global environment variable.
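In plain JavaScript (the CoffeeScript version compiles to much the same thing), the selection logic might look like this; the 3000 fallback is an assumption for the sketch, not from the post:

```javascript
// Pick the port from, in order: a command-line argument,
// the PORT environment variable, then a default.
function pickPort(argv, env) {
  // argv looks like ['coffee', 'path/to/file', maybePort];
  // the last (third) element, if present and numeric, wins
  var fromArgs = parseInt(argv[2], 10);
  if (!isNaN(fromArgs)) return fromArgs;
  var fromEnv = parseInt(env.PORT, 10);
  if (!isNaN(fromEnv)) return fromEnv;
  return 3000; // fallback default (an assumption, not from the post)
}
```

In the server you would call it as `pickPort(process.argv, process.env)` and print the result when the server starts listening.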

Thursday, 1 August 2013

Here's a quick 101 on getting Node.js/CoffeeScript up on Ubuntu Server.
I'm using Ubuntu 13.10, which contains Node.js in its default repositories. *Note that this will not be the newest version. However, it's the simplest way of getting started.

We just have to use the apt package manager. We should refresh our local package list before installing:

sudo apt-get update

sudo apt-get install nodejs

This is all that you need to do to get set up with Node.js. You will also need to install npm (the Node.js Package Manager).

sudo apt-get install npm

This will allow you to easily install modules and packages to use with Node.js.

Because of a conflict with another package, the executable from the Ubuntu repositories is called nodejs instead of node. It's just good to keep this in mind.

The last step is to install the CoffeeScript interpreter:

npm install coffee-script

Now that we are set up, here is a basic node HelloWorld server written in coffee.
File: index.coffee