Each team is the soul of a company, and each member of the team brings added value to the project they're involved in through their own personal approach. Bogdan Ursache, one of our Node.js developers, gave us some tips for working better with Node.js. They include

- building RESTful APIs and why Node.js is suitable for this

- a few examples of famous apps that function on the Node.js framework

A JavaScript library is a library of pre-written JavaScript which allows for easier development of JavaScript-based applications, especially for AJAX and other web-centric technologies. The primary use of JavaScript is to write functions that are embedded in or included from HTML pages and interact with the Document Object Model (DOM) of the page.

Every month we bring new, useful and handy JavaScript libraries to our audience. Today, we have collected 15 of the latest of these useful JavaScript libraries for January 2014 to help you enhance your website-related tasks and keep your website a step ahead of the competition. We hope you will find a few of the JavaScript libraries below beneficial to your web-related needs.

Many times on SitePoint I’ve mentioned how achieving good performance is a main concern today and how you should strive for fast web pages. In some articles of mine about JavaScript APIs, such as Introduction to the Resource Timing API and Discovering the User Timing API, I gave you all the power you need to know what’s slowing down your project. Earlier this year, Craig Buckler covered this topic too, with his article The Complete Guide to Reducing Page Weight.

If you aren’t stuck in the past, you’re likely using a task runner like Grunt or Gulp to improve your workflow by automating a lot of operations we used to perform manually. In this article I’ll describe five Grunt tasks that help us enhance the performance of our web pages.

JavaScript is used for many different kinds of applications today. Most often, it's partnered with HTML5 and CSS to build Web front ends, but it's also used for mobile applications, and it's even finding a place on the back end in the form of Node.js servers. Fortunately, JavaScript development tools -- at least some of them -- are rising to meet the new challenges.

In this roundup, I look at 10 different editors and IDEs (integrated development environments) of interest to JavaScript programmers. Six of these -- ActiveState's Komodo IDE, Eclipse with JSDT (JavaScript Development Tools), Microsoft's Visual Studio 2013, NetBeans, Sublime Text, and JetBrains' WebStorm -- could serve as the primary JavaScript tool for serious developers. I've given these six products full, scored evaluations.

[ JetBrains' WebStorm and Sublime Text are InfoWorld 2014 Technology of the Year Award winners. Read about the other winning products in our slideshow, "InfoWorld's 2014 Technology of the Year Award winners." ]

The other four tools -- Alpha Anywhere, Komodo Edit, Notepad++, and TextMate -- don't rank with the above group, and I didn't give them full evaluations. Still, they're worth knowing about, so I've included them in the discussion.

AngularJS has come a long way since its introduction. It is a comprehensive JavaScript framework for Single Page Application (SPA) development. It has some awesome features like two-way binding, directives, etc. This tip focuses on Angular routing security, i.e., client-side security which you can implement with Angular. I have tried and tested this. In addition to client-side route security, you need to secure access on the server side as well. Client-side security helps avoid an extra round trip to the server. However, if someone tricks the browser, server-side security should be able to reject unauthorized access. In this tip, I restrict my discussion to client-side security.

What I would like to show you is a simple technique that can be effectively used against modern web applications, such as those written on top of NodeJS and MongoDB. In essence, this technique is very similar to SQL Injection (SQLI), although much simpler because we do not have to construct any weird and complicated strings.

Before I move on, I would like to state that although I talk about NodeJS, in the examples below I use ExpressJS, which is the most popular web framework for node and a de facto standard in the NodeJS community.

The SQL Injection Primer

The first thing you learn when studying SQL Injection is how to create true statements. Let's consider the following example SQL statement that is used to authenticate the user when the username and the password are submitted to the application:
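The statement itself is missing from this excerpt; it presumably looks like the following, a hypothetical reconstruction in Node inferred from the injected version shown below (the function name and naive string concatenation are assumptions, not the article's exact code):

```javascript
// Naive, vulnerable construction of the authentication statement:
function buildLoginStatement(username, password) {
  return "SELECT * FROM users WHERE username = '" + username +
         "' AND password = '" + password + "'";
}

console.log(buildLoginStatement("' or 1=1--", ""));
// -> SELECT * FROM users WHERE username = '' or 1=1--' AND password = ''
```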

If this statement is not prepared or properly handled when constructed, an attacker may be able to place ' or 1=1-- in the username field in order to construct a statement that looks more or less like the one below, which is known as the classic login bypass via SQLI:

SELECT * FROM users WHERE username = '' or 1=1--' AND password = ''

Even today, this classic attack and its variations are widely used to detect the presence of improper handling of SQL statements.

The MongoDB Injection Primer

Now, even though SQL Injection is still a popular attack vector, it is no longer as widespread as it used to be. Many modern web applications opt to use a much simpler storage mechanism such as the one provided by NoSQL databases like MongoDB. NoSQL databases not only promise simplified development but also improved security, by eliminating the SQL language entirely and relying on a much simpler, structured query mechanism typically expressed as JSON and JavaScript.

The SQL statement that we used above to query the user login details will be written like this in MongoDB:

db.users.find({username: username, password: password});

As you can see, we no longer deal with a query language in the form of a string, so one might think that injection is no longer possible. Of course, as is always the case with security, that assumption would be wrong, because there are many factors at play.
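One such factor can be sketched like this (an illustrative Express-style example; the function and parameter names are assumptions, not the article's code):

```javascript
// If the request body is parsed as JSON (as body-parser does in Express),
// an attacker can submit objects instead of strings:
//   {"username": {"$gt": ""}, "password": {"$gt": ""}}
// The query built below then matches any user whose fields compare greater
// than the empty string -- the NoSQL equivalent of the classic login bypass.
function buildLoginQuery(body) {
  return { username: body.username, password: body.password };
}

var q = buildLoginQuery({ username: { $gt: '' }, password: { $gt: '' } });
console.log(q); // the $gt operators pass straight through into the query
```

Validating that the submitted values are actually strings before building the query closes this particular hole.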

You might think that web crawling and scraping is only for search engines like Google and Bing. But a lot of companies use it for different purposes: price comparison, financial risk information and portals all need a way to get the data, and at least sometimes the way is to retrieve it through some public website. Besides these cases where the data is not in your hands, scraping can also make sense when the data is already aggregated: for intranet and portal search engines it can be easier to just scrape the frontend instead of building data import facilities for different, sometimes even old, systems.

The Example

In this post we are looking at a rather artificial example: Crawling the meetup.com page for recent meetups to make them available for search. Why artificial? Because meetup.com has an API that provides all the data in a more convenient way. But imagine there is no other way and we would like to build a custom search on this information, probably by adding other event sites as well.

This is a part of the Search Meetup Karlsruhe page that displays the recent meetups.

The Accord.NET Framework is a complete framework for building machine learning, computer vision, computer audition, signal processing and statistical applications. Sample applications provide a fast start to get up and running quickly, and extensive documentation helps fill in the details. Accord.NET is an extension to AForge.NET, a popular C# framework for computer vision and machine learning. Currently, Accord.NET provides many statistical analysis and processing functions, as well as audio, video and image processing routines.

This guide is intended to show new users how to install the Accord.NET and AForge.NET Frameworks and how to create a simple application making use of both frameworks. There are two ways to quickly get up and running with the framework. The fastest way is through NuGet. However, this is somewhat experimental, and source code and sample applications will not be included. The second fastest way is to use the executable installer. This guide will deal with the latter.

ASP.NET Boilerplate [1] is an open source project that combines all these frameworks/libraries to help you start developing your application easily. It also provides an infrastructure for developing applications according to best practices. It naturally supports Dependency Injection, Domain Driven Design and Layered Architecture.

When running a node application in production, you need to keep stability, performance, security, and maintainability in mind. In this article, I’ll outline what I think are the best practices for putting Node.js into production.

By the end of this guide, this setup will include 3 servers: a load balancer (lb) and 2 app servers (app1 and app2). The load balancer will health check and balance traffic between the servers. The app servers will be using a combination of systemd and node cluster to load balance and route traffic around multiple node processes on the server. Deploys will be a one-line command from the developer’s laptop and cause zero downtime or request failures.

It will look roughly like this:

Photo credit: Digital Ocean

How this article is written

This article is targeted at those with beginning operations experience. You should, however, at least be basically familiar with what a process is, what upstart/systemd/init are, and with process signals. To get the most out of it, I suggest you follow along with your own servers (but still using my demo Node app for parity). Beyond that, there are some useful configuration settings and scripts that should make for good reference in running Node.js in production.

Apache Cassandra is a leading NoSQL database platform for online applications. By offering benefits of continuous availability, high scalability & performance, strong security, and operational simplicity — while lowering overall cost of ownership — Cassandra has become a proven choice for both technical and business stakeholders.

When compared to other database platforms such as HBase, MongoDB, Redis, MySQL and many others, Cassandra delivers higher performance under heavy workloads.

The following benchmark tests provide a graphical, ‘at a glance’ view of how these platforms compare under different scenarios.

End Point Benchmark Configuration and Results

University of Toronto NoSQL Database Performance

Netflix Benchmarking Cassandra Scalability on AWS

End Point Benchmark Configuration and Results Summary

End Point, a database and open source consulting company, benchmarked the top NoSQL databases — Apache Cassandra, Apache HBase, and MongoDB — using a variety of different workloads on Amazon Web Services EC2 instances. This is an industry-standard platform for hosting horizontally scalable services such as the three NoSQL databases that were tested. In order to minimize the effect of AWS CPU and I/O variability, End Point performed each test 3 times on 3 different days. New EC2 instances were used for each test run to further reduce the impact of any “lame instance” or “noisy neighbor” effect on any one test.

A summary of the workload analysis is available below. For a review of the entire testing process with testing environment configuration details, the benchmarking NoSQL databases white paper by End Point is available.

This is a republished blog post by Brandon Cannaday. Brandon is the CTO of Modulus, a Node.js application hosting platform. Brandon organizes the Indianapolis Node.js meetup and enjoys speaking at conferences about Node’s horizontal scalability. Prior to Modulus, Brandon worked in the chemical detection and telecommunications industries.

Modulus is the first company in the industry to offer a dedicated enterprise solution called Curvature. Curvature allows you to take advantage of rapid deployments, easy scaling, and real-time analytics in the environment of your choosing, on-premises, in the cloud, or a hybrid of the two.

There’s no shortage of Node.js tutorials out there, but most of them cover specific use cases or topics that only apply when you’ve already got Node up and running. I see comments every once in a while that sound something like, “I’ve downloaded Node, now what?” This tutorial answers that question and explains how to get started from the very beginning.

There is no denying that since its inception in the mid-’90s, JavaScript has become one of the most popular web development languages. In September 2012, industry analyst firm RedMonk showed it as the top language. Much of this is due to its ability to deliver rich, dynamic web content, its relatively light weight, and its ease of use.

Although initially developed as a browser-agnostic scripting language, in recent years, we’ve seen its continued evolution beyond the desktop to areas such as mobility and server-side web applications. Over the next few years, JavaScript is poised to become the dominant language of the enterprise for IT - ultimately displacing the all-encompassing and highly pervasive C, C++ and Java languages.

Fast and scalable. High performing and robust. Node.js was every web application developer’s dream come true. But like every new technology, coding in Node.js from scratch is a little time-consuming and requires a lot of repetitive programming. The solution to this problem was simple and time-tested: build a pre-fabricated framework. And thus Express.js, Koa, Sails.js and many others were conceptualized and materialized.

The role of frameworks was simple: they save a lot of unnecessary work for developers and allow faster and easier development of web apps. And since every developer believes in the principle of less effort for the same output, these frameworks became quite popular.

No single framework overwhelmingly dominates, but Express.js is still used more often by far. There are, however, some other contenders in the race.

JavaScript may feel like an old language – and it is, being 19 years old – but it has a myriad of uses and is popping up in places you wouldn’t expect it to. Now is definitely the time to start learning this versatile and exciting language.

JavaScript began its life in the browser, allowing you to interface with a number of Web APIs such as the Document Object Model (DOM), to manipulate your web pages and add richer desktop-like user experiences.

Summary: If you are wondering how to become a Data Scientist or what that title really means, try these insights.

I got started in data science way back. I’ve been a commercial predictive modeler since 2001 and as naming trends have changed I now identify myself as a Data Scientist. No one gave me this title. But by observing the literature, the job listings, and my peers in the field it was clear that Data Scientist communicated most clearly what my knowledge and experience have led me to become.

These days you can get a degree in data science so you can show your diploma that certifies your credentials. But these are relatively new so, with all due respect, if you only recently got your degree you are still a beginner. Those of us who use this title today most likely came from combination backgrounds of business, hard science, computer science, operations research, and statistics.

What you call yourself is one thing but what your employer or client is looking for can be quite a different kettle of fish. A lot has been written about data scientists being as elusive as unicorns. Not being a unicorn I’d say this sets the bar pretty high. Additionally, as I’ve perused the job listings it is equally true that the title is used so loosely and with such little understanding that an ad for data scientist may actually describe an entry level analyst and some ads for analysts are looking for polymath data scientists.

All of this confusion over what we’re called and what we actually do can make you downright schizophrenic. It makes it all the more complicated to answer the frequent inquiries I get from folks still in school or early in their careers about how to become a data scientist.

Imagine my surprise and delight when in the space of a week two publications came across my desk that not only cast new light and understanding on this question but also have helped me understand that there is not just one definition of data scientist, but a reasoned argument (based on statistical analysis) that there are in fact four types.

Four Types of Data Scientists

The information here comes from the O’Reilly paper “Analyzing the Analyzers” by Harris, Murphy, and Vaisman, 2013. My hat’s off to these folks for their insightful survey and conclusions drawn by statistical analysis of those results. This is a must read. I was able to download this at no charge from http://www.oreilly.com/data/free/analyzing-the-analyzers.csp.

There are 40 pages of good analysis here so this will be only the highest level summary. In short, they conclude there are four types of Data Scientists differentiated not so much by the breadth of knowledge, which is similar, but their depth in specific areas and how each type prefers to interact with data science problems.

Data Businesspeople

Data Creatives

Data Developers

Data Researchers

By evaluating 22 specific skills and multi-part self-identification statements, the authors cluster and generalize respondents according to these descriptions. I am betting you will recognize yourself in one of these categories.

Data Businesspeople are those that are most focused on the organization and how data projects yield profit. They were most likely to rate themselves highly as leaders and entrepreneurs, and the most likely to have reported managing an employee. They were also quite likely to have done contract or consulting work, and a substantial proportion have started a business. Although they were the least likely to have an advanced degree among respondents, they were the most likely to have an MBA. But Data Businesspeople definitely have technical skills and were particularly likely to have undergraduate Engineering degrees. And they work with real data — about 90% report at least occasionally working on gigabyte-scale problems.

Data Creatives. Data scientists can often tackle the entire soup-to-nuts analytics process on their own: from extracting data, to integrating and layering it, to performing statistical or other advanced analyses, to creating compelling visualizations and interpretations, to building tools to make the analysis scalable and broadly applicable. We think of Data Creatives as the broadest of data scientists, those who excel at applying a wide range of tools and technologies to a problem, or creating innovative prototypes at hackathons — the quintessential Jack of All Trades. They have substantial academic experience with about three-quarters having taught classes and presented papers. Common undergraduate degrees were in areas like Economics and Statistics. Relatively few Data Creatives have a PhD. As the group most likely to identify as a Hacker they also had the deepest Open Source experience with about half contributing to OSS projects and about half working on Open Data projects.

Data Developers. We think of Data Developers as people focused on the technical problem of managing data — how to get it, store it, and learn from it. Our Data Developers tended to rate themselves fairly highly as Scientists, although not as highly as Data Researchers did. This makes sense particularly for those closely integrated with the Machine Learning and related academic communities. Data Developers are clearly writing code in their day-to-day work. About half have Computer Science or Computer Engineering degrees. More Data Developers land in the Machine Learning/Big Data skills group than other types of data scientist.

Data Researchers. One of the interesting career paths that leads to a title like “data scientist” starts with academic research in the physical or social sciences, or in statistics. Many organizations have realized the value of deep academic training in the use of data to understand complex processes, even if their business domains may be quite different from classic scientific fields. The majority of respondents whose top Skills Group was Statistics ended up in this category. Nearly 75% of Data Researchers have published in peer-reviewed journals and over half have a PhD.

What Does this Mean for Someone Seeking to Enter the Field?

So if I am a young person seeking to enter Data Science, how are these descriptions useful? It’s possible that you could train and develop an emphasis that would lead you into the Researcher, Developer, or Creative roles. It is less likely that education alone will put you on the Businesspeople track, which implies experience in business, not just education. But here’s what’s interesting. According to Harris, Murphy, and Vaisman it’s not the skills that are different but the way we choose to emphasize them in our approach to Data Science problems. Here’s their chart.

This article aims to introduce you to some of the currently most popular tools for developing modern web applications with JavaScript. These are not new at all and have been around for a couple of years now. Still, I found many devs still don’t use or know about them (as you might not), which is why this article tries to give you a quick, concise intro to get you started.

Node and NPM

Node.js brings JavaScript to the server and the desktop. While initially JavaScript was mainly used as a browser based language, with Node you can now also create your server-side backend or even a desktop application with node-webkit (for the crazy ones among you).

Node.js® is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. (nodejs.org)

One of the great things about Node is its enormous community, which creates and publishes so-called node modules in the NPM registry, Node’s package manager. Currently there are about 90,000 modules, and there were around 390,000 downloads last month.

So you’ve got an AngularJS UI built out, but you’ll need a fleshed-out backend before being able to really take it for a test drive, right? Actually, it turns out, with the magic of Angular and its mocked $httpBackend, we don’t need no stinking backend!

If you’ve heard of $httpBackend, you’ve probably heard of it in terms of writing unit tests. Tests that look like:

// tell http backend what to do when it gets a GET request at a specific URL
$httpBackend.when("GET", "/users/4").respond({userName: "Doug", userId: 4});

// code we're testing
userService.getUserById(4);

// flush all pending requests (pretend the server just responded)
$httpBackend.flush();

expect(userService.getUser(4)).toEqual({userName: "Doug", userId: 4});

While unit testing like this can help test and play with bits of code, it can’t drive an overall vision the way that seeing, clicking, and typing into a prototype can. Therefore, I’ve taken $httpBackend and implemented a complete mock-up of my backend to allow me to use my UI and iterate quickly. Basically, I can now simply open file://path/to/my/project/index.html in my browser and use my application as if it were backed by a database on a server (as long as I don’t fully reload the page).

The jQuery developer community has to be one of the most generous and hardworking groups of people on the web. They’re constantly churning out amazingly useful and completely free tools that they share with anyone and everyone who wants to use them.

The quantity and quality of free jQuery plugins simply never ceases to amaze me. I’ve been keeping a list of some great ones that I’ve found lately and I thought I’d share it with you. Here are 40 awesome and free jQuery plugins that just about every web developer should check out.

JavaScript can seem like a very easy language to learn at first. Perhaps it’s because of its flexible syntax. Or perhaps it’s because of its similarity to other well known languages like Java. Or perhaps it’s because it has so few data types in comparison to languages like Java, Ruby, or .NET.

But in truth, JavaScript is much less simplistic and more nuanced than most developers initially realize. Even for developers with more experience, some of JavaScript’s most salient features continue to be misunderstood and lead to confusion. One such feature is the way that data (property and variable) lookups are performed and the performance ramifications to be aware of.

In JavaScript, data lookups are governed by two things: prototypal inheritance and scope chain. As a developer, clearly understanding these two mechanisms is essential, since doing so can improve the structure, and often the performance, of your code.

Property lookups through the prototype chain

When accessing a property in a prototype-based language like JavaScript, a dynamic lookup takes place that involves different layers within the object’s prototypal tree.

In JavaScript, every function is an object. When a function is invoked with the new operator, a new object is created. For example:
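The example itself is missing from this excerpt; a minimal sketch of what invoking a function with new does (the names are illustrative, not the article's):

```javascript
function Person(name) {
  this.name = name; // 'this' refers to the newly created object
}

var alice = new Person('Alice'); // 'new' creates a fresh object
console.log(alice.name);              // 'Alice'
console.log(alice instanceof Person); // true
```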

Since JavaScript functions are objects, they can have properties. A particularly important property that each function has is called prototype.

prototype, which is itself an object, inherits from its parent’s prototype, which inherits from its parent’s prototype, and so on. This is often referred to as the prototype chain. Object.prototype, which is always at the end of the prototype chain (i.e., at the top of the prototypal inheritance tree), contains methods like toString(), hasOwnProperty(), isPrototypeOf(), and so on.
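A short sketch of a lookup walking that chain (illustrative names, not from the article):

```javascript
function Animal() {}
Animal.prototype.speak = function () { return '...'; };

var a = new Animal();

// 'speak' is not an own property of 'a'; the lookup finds it one level up,
// on Animal.prototype, via the prototype chain:
console.log(a.hasOwnProperty('speak')); // false
console.log(a.speak());                 // '...'

// Object.prototype sits at the end of the chain:
console.log(Object.prototype.isPrototypeOf(a)); // true
```

The deeper in the chain a property lives, the more layers each lookup has to traverse, which is where the performance ramifications come from.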

In any given week I can expect to write at least a few hundred lines of code in around four different languages. I can also expect to edit, review, and collaborate on code written by the other developers I work with.

Simply put, there’s a lot of code flying around all over the place, and things can get very complicated when it’s not organized, managed, and most importantly, written well. Let’s look at a few different ways to improve the overall quality of our code.

1. Start Building Modules

One of the best ways to keep code consistent, reusable, and organized, is to group functionality together. For example, rather than dumping all your JavaScript into one main.js file, consider grouping it into separate files based on functionality, then concatenating them once you reach your build step. Of course, there’s a lot more to writing modular code, and you can write modular code for more than just JavaScript.
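As a sketch, the common pre-ES6 way to group related functionality is an immediately invoked function expression that exposes only a public interface (the module and names are hypothetical):

```javascript
// cart.js -- one self-contained module, later concatenated at build time
var cart = (function () {
  var items = []; // private state, invisible outside the module

  function add(item) { items.push(item); }
  function count() { return items.length; }

  // expose only the public interface
  return { add: add, count: count };
})();

cart.add('book');
console.log(cart.count()); // 1
console.log(cart.items);   // undefined -- internals stay private
```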

CSS preprocessors like Sass (our introduction here) allow you to write individual CSS files, and then include them into one main file when you compile them. This lets you write individual CSS files for different components like buttons, lists, and fonts. At the end, they’re all included into one main file, but maintaining these files becomes a whole lot easier.

New technologies such as Polymer allow you to write custom HTML elements, so that your HTML, CSS, and JavaScript can be grouped into individual components based on functionality. Be sure to look into Browserify (our introduction here), which allows you to use Node.js-style modules in the browser.

Brad Frost also gives a great overview on the ideas and methodologies for writing modular code here.

A team of us in the Porto Alegre office has been using AngularJS for a while now, and it has been a learning curve. We want to share some practices that we have learned along the way that can help you start your AngularJS application on the right foot. Learn from our mistakes!

This article has no intention of setting rules or having the absolute truth but only to give some useful tips and lessons learned when writing AngularJS applications.

Here are some good practices for AngularJS applications, separated into five categories:

#1 Structure:

When we start to build an AngularJS application sometimes we don’t know exactly how to organise our files or even know what files we need. For this, the AngularJS team recommends two solutions:

1) Use the angular-seed (https://github.com/angular/angular-seed) project, which is basically a skeleton of a typical AngularJS application. You just need to clone the repository and you are good to go!

2) The other recommendation is to use Yeoman (http://yeoman.io/), a tool that will create the skeleton and add other tools such as Bower and Grunt, which are widely used in the development of JavaScript applications, according to your preferences.

You need to be careful with these tools, which seem very useful in the beginning. Why is that? Because you need to think first about what your project’s needs are. For example, angular-seed will create a folder named ‘app’ where all the static deployable files are, and inside we will have a folder named ‘js’ with all our JavaScript files like ‘controllers.js’, ‘services.js’, etc…

It depends a lot on the nature of the application we are building, but there will be cases in which it is better to separate files by what they mean for the business rather than by what kind of component they are inside the framework we are using. In this case, for example, if we are building a sales module we could have files like ‘product-controller.js’ or ‘product-service.js’ and have folders inside our ‘js’ folder for the modules of the business.
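A business-oriented layout of the kind described might look like this (hypothetical module and file names):

```
app/
  js/
    sales/
      product-controller.js
      product-service.js
    inventory/
      stock-controller.js
      stock-service.js
```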

Let’s talk about logging, shall we? Arnold over here carrying a giant log feels like an appropriate intro to this article, in which we are going to talk about popular Node.js logging frameworks.

If you are writing any kind of long-living application, detailed logging is paramount to spotting problems and debugging. Without logs you would have few ways of telling how your application is behaving: are there errors, what’s the performance like, is it doing anything at all, or is it just falling over on every other request when you aren’t looking at it?

Requirements

Let’s identify a few requirements which we can use to pit the frameworks against each other. Some of these requirements are pretty trivial, others not so much.

- Time stamp each log line. This one is pretty self-explanatory – you should be able to tell when each log entry occurred.

- Logging format should be easily digestible by humans as well as machines.

- Allow for multiple configurable destination streams. For example, you might be writing trace logs to one file, but when an error is encountered, write to the same file, then into an error file, and send an email at the same time.

Based on these requirements (and popularity) there are two logging frameworks for Node.js worth checking out, in particular:

- Bunyan, by Trent Mick.

- Winston, part of the Flatiron framework and sponsored by Nodejitsu.
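Before reaching for either framework, the first two requirements above can be sketched in a few lines of plain Node (an illustration only, not either library's API):

```javascript
// One timestamped, JSON-formatted log line: readable by humans,
// trivially parseable by machines.
function logLine(level, msg) {
  return JSON.stringify({
    time: new Date().toISOString(),
    level: level,
    msg: msg
  });
}

console.log(logLine('info', 'server started'));
// e.g. {"time":"2014-06-01T12:00:00.000Z","level":"info","msg":"server started"}
```

Bunyan emits JSON lines much like this by default; the frameworks' real value lies in the configurable destination streams from the third requirement.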

In his blog post The Great Works of Software, Paul Ford enumerates five applications that excel in longevity, popularity and usefulness. Pac-Man made the list, so you know it’s a good one. Ford got me thinking – less so about software itself and more about the technologies that shaped the way it’s made.

In my view, four technologies have revolutionized software development in the last decade. The technologies themselves are less important than the indelible impact they continue to have on the IT industry. Understanding their effect will help developers discover new ways to attack their problem spaces and build better software faster.

Without further ado, here is my list in no particular order. Though I warn you now: Hadoop didn’t make the list.
