Sometimes, when you want to perform some action in your page using JavaScript, it’s very convenient to know whether the user is actually paying attention to the page at that moment because, as you probably know, the JS in your page can keep executing even when the page is in the background. For example, the user may be reading a different browser tab, or even using a different application while the browser doesn’t have focus.

At Tuenti we have had to deal with this kind of problem in several situations. One clear example is the Tuenti chat: we want to notify the user about new messages by playing a notification sound, but we don’t want to play it when the user is already paying attention to the conversation; we only want to play it when the page is in the background, to catch the user’s attention.

To achieve this we listen to different browser events to determine whether the user is actually using Tuenti at that moment or not. These are some of the events we listen to:

The focus and blur events on window:

When the user enters your page, the browser fires a focus event on the window object. In the same way, when the user switches to a different tab or application, the browser fires a blur event.

Taking this into account, it’s pretty simple to determine whether the user is in your tab:
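A minimal sketch of the idea (the `inTab` flag and the `trackFocus` helper are our own illustrative names, not part of any library):

```javascript
// Keep a flag in sync with the window's focus and blur events.
// Pass the real `window` in a browser; any window-like object works.
function trackFocus(win) {
  var state = { inTab: true }; // assume we start focused

  win.addEventListener('focus', function () { state.inTab = true; });
  win.addEventListener('blur', function () { state.inTab = false; });

  return state;
}

// In a browser: var state = trackFocus(window);
// state.inTab tells you whether your tab currently has focus.
```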

Well… it depends on what you really need. With this simple code you can know whether the user is in your tab, but what happens if the user is in your tab but has left the computer and gone to the bathroom?

So we not only need to know whether the user is in our tab, but also whether he is actually using it. To solve this problem we could turn on the laptop webcam and spy on the user, but our users probably wouldn’t like that. So we can use a simpler (and more privacy-respectful) solution:

Mouse and keyboard events

We can listen to DOM user-interaction events to know whether the user is actually using the page, and when we haven’t heard any event for a given time span (for example, 30 seconds) we can assume the user is not active. Some events we can listen to are click, mousemove, keydown, etc. (for desktop devices), or touchstart, touchmove, etc. (for mobile devices).

So we can attach some event listeners to the document and toggle the inTab variable from the previous example to true. But when do we change it to false? When 30 seconds have elapsed since the last event. This behavior can be implemented with a setTimeout, but we also need to reset that timeout (with a clearTimeout) whenever we hear a new event.
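A sketch of that timeout dance (the 30-second value and the function names are illustrative choices):

```javascript
var IDLE_TIME = 30000; // 30 seconds without events => inactive

var active = true;
var idleTimer = null;

function goIdle() {
  active = false;
}

// Call this from every user-interaction event: it marks the user as
// active and restarts the 30-second countdown.
function onUserEvent() {
  active = true;
  clearTimeout(idleTimer);
  idleTimer = setTimeout(goIdle, IDLE_TIME);
}

// In a browser, wire it up to the DOM events mentioned above:
// ['click', 'mousemove', 'keydown', 'touchstart', 'touchmove']
//   .forEach(function (ev) { document.addEventListener(ev, onUserEvent); });
```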

We can express this solution with a simple state machine:

Ok, beautiful, let’s implement it. Actually, you don’t need to! We have already done it and open-sourced a library that you can use: activity-detector.

How to use

First, install it.

We distribute the lib on npm, so you can install it with a single command:
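Assuming the package is published under the same name:

```shell
npm install --save activity-detector
```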

How does this work? The library implements the algorithm presented in the previous state-machine diagram. When you create an activity-detector instance you can subscribe to the events active and idle. Those events are triggered when the user becomes active or idle, respectively.
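Usage looks roughly like this (based on the library’s README at the time of writing; double-check the current docs):

```javascript
import createActivityDetector from 'activity-detector';

const activityDetector = createActivityDetector();

activityDetector.on('idle', () => {
  console.log('The user is not interacting with the page');
});

activityDetector.on('active', () => {
  console.log('The user is using the page again');
});
```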

And that’s all. Now you can detect when your user is using your page or not.

Ok, but I use neither npm nor ES6. Can I still use the activity-detector lib?

Of course. We have a production-ready build (UMD format) that you can include in your page with a simple <script> tag, or require with any module loader (like require.js). You can find it here.

For example:

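Something along these lines (the file name and the global exposed by the UMD build are assumptions here; check the repository for the real ones):

```html
<!DOCTYPE html>
<html>
<head>
  <title>test activity detector</title>
</head>
<body>
  <script src="activity-detector.min.js"></script>
  <script>
    // Assuming the UMD build exposes a global factory function:
    var activityDetector = createActivityDetector();
    activityDetector.on('idle', function () { console.log('idle'); });
    activityDetector.on('active', function () { console.log('active'); });
  </script>
</body>
</html>
```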

Advanced options

activity-detector supports different config options to customize its behavior if you have different needs. For example, you can decide which user events should be considered, or defer the activity-detector initialization:
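A couple of those options, as described in the project’s README (option names like `timeToIdle` and `autoInit` come from there, but verify them against the current docs):

```javascript
import createActivityDetector from 'activity-detector';

const activityDetector = createActivityDetector({
  timeToIdle: 20000,                    // consider the user idle after 20s without events
  activityEvents: ['click', 'keydown'], // only these events mark the user as active
  autoInit: false                       // don't start listening immediately...
});

activityDetector.on('idle', () => console.log('idle'));

// ...start it later, whenever it suits you:
activityDetector.init();
```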

Yesterday we had the people from the MadridJS user group in our Madrid offices talking about "JavaScript and WebSockets". This time the speakers were David and Rubén Chavarri, who worked in the USA for five years and are now back to try to run great projects like PartyRocking. They talked about async communication using WebSockets, good practices, and the current gaming scene, where performance is no longer a problem thanks to platforms like CocoonJS and Famo.us.

You can find more upcoming events related to this group on their meetup page. Thanks everyone for coming, we had lots of fun! :)

Here at Tuenti, we believe that Continuous Integration is the way to go in order to release in a fast, reliable fashion, and we apply it to every single level of our stack. Talking about CI is the same as talking about testing, which is critical for the health of any serious project. The bigger the project is, the more important automatic testing becomes. There are three requirements for testing something automatically:

You should be able to run your tests in a single step.

Results must be collected in a way that is computer-understandable (JSON, XML, YAML, it doesn’t matter which).

And of course, you need to have tests. Tests are written by programmers, and it is very important to give them an easy way to write them and a functioning environment in which to execute them.
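As an example of the second requirement, a JUnit-style XML report is something most CI servers (Jenkins included) can consume directly; this fragment is only illustrative:

```xml
<testsuite name="chat-module" tests="3" failures="1" time="0.42">
  <testcase name="sends a message" time="0.10"/>
  <testcase name="escapes HTML in messages" time="0.08"/>
  <testcase name="plays a sound when in background" time="0.24">
    <failure message="expected sound to be played once"/>
  </testcase>
</testsuite>
```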

This is how we tackled the problem.

Our Environment

We use YUI as the main JS library in our frontend, so our JS CI needs to be YUI compliant. There are very few tools that enable JS CI; we went with JSTestDriver at the beginning but quickly found two main problems:

There were a lot of conflicts between the YUI event system and the way JSTestDriver retrieves results.

The way tests are executed is not very human-friendly. We use a browser to work (and several to test our code in), so it would be nice to use the same interface to run JS tests. This ruled out other tools like PhantomJS and other headless runners.

Therefore, we had to build our own framework to execute tests, keeping all of the requirements in mind. Our efforts were focused on keeping it as simple as possible, and we decided to use the YUI testing libraries (with Sinon as the mocking framework) because they allowed us to keep each test properly isolated, as well as report results in XML or JSON and in human-readable formats at the same time.
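To give an idea of the shape of these tests, here is a minimal YUI Test case using a Sinon stub (the module and its functions are made up for this example):

```javascript
YUI().use('test', function (Y) {
    // Hypothetical module under test.
    var notifier = {
        playSound: function () { /* plays the notification sound */ },
        notify: function (pageInBackground) {
            if (pageInBackground) {
                this.playSound();
            }
        }
    };

    var testCase = new Y.Test.Case({
        name: 'notifier tests',

        'should only play a sound when the page is in background': function () {
            var stub = sinon.stub(notifier, 'playSound');

            notifier.notify(false);
            Y.Assert.isFalse(stub.called);

            notifier.notify(true);
            Y.Assert.isTrue(stub.calledOnce);

            stub.restore();
        }
    });

    Y.Test.Runner.add(testCase);
    Y.Test.Runner.run();
});
```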

The Approach

This consisted of solving the simplest use case first (executing one test in a browser) and then iterating from there. A very simple PHP framework was put together to walk the JS module directories, identify the modules with tests (thanks to a basic naming convention), and execute them. The results were shown both in a human-readable way and as XML.
PHP is required because we want to reuse a lot of the tooling we had already implemented in PHP around YUI (server-side dependency-map generation, etc.).

After that, we wrote a client using Selenium/WebDriver so that the computer-readable results could be easily gathered, combined, and shown in a report.

At this point in the project:

Developers could use the browser to run the tests just by going to a predictable URL,

CI could use a command-line program to execute them automatically. The result format is natively supported by Jenkins, so we only needed to configure the jobs properly, telling our test framework to put the results in the correct folder and store them in the Jenkins job.

That was enough to allow us to execute them hundreds of times a day.

Coverage

Another important aspect of testing is knowing the coverage of your codebase. Due to the nature of our codebase and how we divide everything into modules that work together, each module may have dependencies on others, even though our tests are meant to be designed per module.

Most of the dependencies are mocked with SinonJS, but there are also tests that don’t mock them. Since a test in one module can exercise another, we had to decide whether to compute coverage “as integration tests” or with a per-module approach.

The first alternative would instrument all of the code (with YUI Test) before running the tests, so if a module is exercised, it may be due to a test that is meant to exercise another module. Such an approach may encourage developers to create acceptance tests instead of unit tests.

Since we wanted to encourage developers to create unit tests for every module, the decision was finally taken to compute the coverage with a per-module approach. This way, we instrument only the code of the module being tested and ignore the others.

To make this happen, we moved the run operation from the PHP framework to a JS runner, because that allowed for some of the operations required to get coverage, such as instrumenting and de-instrumenting the code before and after running the tests.

The final coverage report is generated as soon as the tests finish, so there was no need to change our current Jenkins infrastructure or set up extra jobs to generate it. Coverage generation takes about 50% more time than regular test execution, but it’s absolutely worth it.

To be Done

There is more work that could be done at this point, such as improving performance by executing tests in parallel or enforcing a minimum coverage for each module, but we haven't done this yet because we think coverage is only a measure of untested code and doesn't ensure the quality of the tests.

It is much more important to have a testing culture than to just collect code metrics.

On November 30th, some of us attended the first dotJS, the largest JavaScript conference in France. Although it wasn’t planned, I had the chance to talk about solutions for synchronizing events in JS, and about not giving up on trying.

We have recently released the #1 most requested feature at Tuenti: group chat.
It has been a titanic effort: months of developing the server code, the client-side code, and new systems infrastructure to support this highly anticipated feature. But was it really so big that it had to take so much time?

Scope

Since 2010, improvements have been made to the chat server code (Ejabberd, written in Erlang), achieving important performance gains and lowering server resource consumption.
We had approximately 3x better performance than a vanilla Ejabberd setup, which, considering that we currently handle more than 400M daily chat messages, is not bad at all.

We also had 20 chat server machines, each running on average 6 Ejabberd instances, and all of them performing well below their capacity, so resharding the machines and setting up a load balancer was appealing.

Chat history was almost done, but we had to add support for group chat. It is one of the first projects we have done with HBase instead of MySQL as the storage layer.

The message delivery system (aka message receipts) was also quite advanced in its development, but not yet finished. It uses a simple flow of Sent -> Delivered -> Read states.

Multi-presence means being able to open multiple browser windows and/or multiple mobile devices and not losing the chat connection in any of them (up to a maximum). In order to achieve this, the server-side logic needs to handle not only jabber ids but also resources, so that the same JID can be connected from multiple sources at the same time.
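In XMPP terms, the resource is the part of the JID after the slash; with multi-presence the same bare JID can be connected with several resources at once, for example (the identifiers here are illustrative):

```
user12345@tuenti.com/web.1a2b3c     (a browser window)
user12345@tuenti.com/web.4d5e6f     (a second browser window)
user12345@tuenti.com/mobile.7a8b9c  (a phone)
```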

The “new Tuenti”: this new version of the main website required a great part of the company’s technical resources. The team in charge of the chat has other responsibilities too, so we had to dedicate engineers to building parts of the new website.
As it implied a completely new visual look, the chat had to change its appearance too.

And of course, the group chat.

Being able to chat with multiple people at once

Roles of room owner (the “administrator” of that chat group/room), members and banned members

Keeping rooms open even if you close the window (until you explicitly close them)

Supporting both default group room avatars (a pretty mosaic) or custom ones (choose any of your photos or upload a new one)

Supporting custom room titles

Room mute

The Old Chat Web Client

The web chat is a full JavaScript client that only uses Flash for video chat. We use a modified open-source JavaScript XMPP library, JSJaC, tailored to our needs.
A rough schematic of the chat client architecture:

An HTML receiver file that performs long-polling connections to the chat servers to simulate a regular socket.

A request controller that processes incoming XML chat messages (stanzas, iqs and the like) using the JSJaC library and converts them to JavaScript objects.

Buddylist, User and other classes, each of them twice: one with a UI prefix and the other with a Data prefix. We separate UI behaviours from data handling, and all components communicate with each other (think of linked widgets rather than a traditional desktop chat client application).

The User class performs two tasks: it represents a buddy-list contact, but it also represents a conversation room (it stores the conversation, etc.).
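The receiver’s long-polling loop looks conceptually like this (the URL and function names are illustrative; the real client builds on JSJaC’s connection handling):

```javascript
// Keep one HTTP request open against the chat server; when it returns
// (with data, or empty after the server-side timeout), process the
// payload and immediately open the next one, simulating a persistent socket.
function longPoll(url, onStanza) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) { return; }
    if (xhr.status === 200) {
      if (xhr.responseText) {
        onStanza(xhr.responseText); // raw XML, handed to the request controller
      }
      longPoll(url, onStanza);      // reconnect right away
    } else {
      // On errors, back off a little before reconnecting.
      setTimeout(function () { longPoll(url, onStanza); }, 5000);
    }
  };
  xhr.send(null);
}
```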

The code had been working perfectly, with almost no client-side maintenance since it launched in 2009; we just added new features and visual style changes.

What went right

New cluster: it works really well. Now we not only have load balancing, but we can also perform upgrades on one leaf while keeping the chat up on the other leaf’s nodes.
Each node now has 10 machines running up to 4 instances per machine, so we actually do more with less hardware.

Cleaner, up-to-date code: now we have inheritance in the chat client code, which lets us avoid repeated code by having a base chat room and then one-to-one and group rooms. Data-related classes are now better separated from UI-related ones, a lot of the code is thoroughly commented, and we have private and public fields (by convention, not enforced by any JavaScript framework).
Many events are now handled by YUI, and we have dozens of JavaScript files that are still bundled into one when we deploy the code live, which eases development a lot.
Overall, the client will now support future enhancements and additions much faster.
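The room hierarchy can be sketched with plain prototypal inheritance (the class and field names here are illustrative, not the actual Tuenti ones):

```javascript
// Base chat room: everything common to one-to-one and group rooms.
function ChatRoom(id) {
  this.id = id;
  this.messages = []; // the stored conversation
}
ChatRoom.prototype.addMessage = function (message) {
  this.messages.push(message);
};

// Group room: inherits from the base room and adds group-only state.
function GroupChatRoom(id, members) {
  ChatRoom.call(this, id); // reuse the base constructor
  this.members = members;
}
GroupChatRoom.prototype = Object.create(ChatRoom.prototype);
GroupChatRoom.prototype.constructor = GroupChatRoom;

GroupChatRoom.prototype.addMember = function (user) {
  this.members.push(user);
};
```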

Fast, very fast: the server-side code is even faster. More optimized, more adapted to our needs, and able to handle up to 13 times more messages at once! Custom XMPP stanzas have been built to allow fast but correct delivery.

Everything works as expected: we didn’t have to make any tradeoffs due to technical limitations. We have kept the same browser support (including IE7), and all features work as the original requirements defined.

Two UIs co-exist happily: both versions of www.tuenti.com, each with its own distinct UI, share all inner code and are easy to extend.

What went wrong

Running two projects in parallel: along with housekeeping tasks, the team had other high-priority projects to work on, which took resources and time away from the group chat. Bad timing meant half of the client-side team was dedicated to building the “new Tuenti”, so the group chat didn’t get the team’s full power until the last stages of development.

One step at a time: Tuenti has migrated almost all client-side code to Yahoo’s YUI library. We had to migrate the chat client, plus do a huge code refactor to add support for group chats, plus make the visual changes for the new website, plus build new features (chat history, receipts...). This generated a lot of overhead and a first phase of code instability in which we couldn’t quickly tell whether a bug was due to the refactor, to YUI, or to a new feature not yet finished.
It would probably have been much better to first migrate to the new framework, then refactor, and then apply the visual changes and implement or finish the new features.

Single responsibility principle: a class should have only one responsibility. By far the biggest and hardest part of the refactor was separating the original User class into ChatUser and ChatRoom. We couldn’t have foreseen group chat back in 2009, but we were too optimistic when estimating the impact of this change while planning the group chat.

Lack of client-side tests: the old chat client had no tests, so QA had to test everything manually, and this too generated a lot of overhead.
We are now preparing a client-side testing environment and framework to keep the new chat codebase bug-free.

CSS3 selector performance: with multi-presence and the new social application, all users now have many more friends online or reachable via mobile device at once. Rendering hundreds of chat friends, plus some performance-wise dangerous CSS3 selectors, hit us in the late stages of development.
We hurried to make some fixes, and we are still improving performance, as some browsers still suffer a bit from the number of DOM nodes plus CSS matching rules.