"In this episode, Alex Castillo and I discuss the world of JavaScript and neurotech. This is a mind-blowing episode, no pun intended, and if you were looking to get your feet wet in this area of front-end technology. Give it a listen!"

The journey

As a software engineer looking to learn more about this incredible organ, I’ve found myself using the tools I know (code) to try to understand more of what’s going on inside our minds.

My early experiments involved visualizing brainwaves (EEG) in the browser. My thinking was, if I could visualize it, I could better understand it. From there, I could start exploring behavioral experiments. And that is how my journey to connecting the brain to the browser began.

This journey has allowed me to try many different brain-computer interfaces including OpenBCI, NeuroSky, Muse — you name it. I explored many data acquisition and transmission standards such as Node and serial port, Node and Bluetooth, Web Sockets, WiFi, MQTT, and Web Bluetooth. And I built many prototypes utilizing UI technologies such as Angular, React, RxJS, vanilla JavaScript, and dozens of data visualization libraries for SVG, canvas, WebGL, and even pure CSS. It has been a roller coaster, but every time I tried a new combination, it steered me closer to better results and, most importantly, it made me a better engineer.

The experiment

Of all the crazy ideas about the potential uses of brainwaves, I kept going back to the thought of “mind-controlling” stuff. By this, I mean attempting to steer the brain's frequencies and use these changes to detect intent. For example, it is known that during meditation, the alpha waves produced by your brain increase. Similarly, beta waves are associated with active thinking and concentration.

During the first experiment, I was able to sharpen a blurred image based on concentration levels. The more you focus, the clearer the image gets.

But, what if we could use meditation levels to control a sequence of images? If given a video of a flower blooming (starting with a bud), could we make a flower bloom if we reach a deep state of mindfulness while meditating? That got me thinking. If we could map some mind states to certain UI controls on the web, like a video player, that would be a fun experiment.

Let’s go through how we can capture brainwaves, get meditation and attention levels, send the data to the browser, and map it to the playback of a video element. In other words, let’s build a mind-controlled HTML5 video!

Brain-Computer Interface: NeuroSky MindWave Mobile

Data Acquisition & Transmission: Bluetooth / Node / Web Sockets

User Interface: Angular / RxJS / HTML5 Video

The brain-computer interface

NeuroSky MindWave Mobile

The MindWave is a single-channel Bluetooth headset. One of the reasons I like experimenting with the MindWave is that, besides being very affordable, it features hardware-embedded algorithms called eSense, which many other headsets lack. These algorithms rate attention and meditation levels from 0 to 100 by computing time and frequency domains, including alpha and beta waves. And these are exactly the metrics we’ll be using to control UI elements such as the HTML5 video element.

The data acquisition & transmission

I always like to start projects that interact with brain-computer interfaces by getting the hardware set up and ready to stream. So in order to do that, let’s download the ThinkGear Connector. The ThinkGear Connector runs continuously in the background. It keeps an open socket on the local user’s computer, allowing applications to connect to it and receive information from the MindWave headset.

You can find the connector on the NeuroSky website. Make sure to download v4.1.8. It only works with that version (don’t ask me why).

Next, we’ll use Node. Luckily, there’s already a Node library for accessing the connector’s open socket. Other than that, we’ll be using RxJS and Socket.io.
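
The original code for this step isn't reproduced here, so here is a minimal stand-in sketch using Node's core net module instead of the library mentioned above (the port and the JSON handshake follow the ThinkGear Socket Protocol as I understand it, and the RxJS wiring is omitted for brevity):

var net = require('net');

// Connect to the ThinkGear Connector's local socket (13854 is its usual default port)
var client = net.connect({ port: 13854, host: '127.0.0.1' }, function () {
  // Ask the connector for JSON-formatted output (a documented ThinkGear handshake)
  client.write(JSON.stringify({ enableRawOutput: false, format: 'Json' }));
});

client.on('data', function (buffer) {
  // The connector streams one JSON sample per line
  buffer.toString().trim().split('\n').forEach(function (line) {
    try {
      console.log(JSON.parse(line));
    } catch (e) { /* ignore partial chunks */ }
  });
});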

Before running the code above, make sure to turn on the headset! You should see a blue light indicating the headset is ready to be paired. Now by running the code, the connector should automatically pair with the headset and Node will be able to connect to it. The output stream on your terminal should display many samples like this one:
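
An illustrative sample (the field names follow the ThinkGear JSON format; the values here are made up):

{
  "eSense": { "attention": 47, "meditation": 61 },
  "eegPower": { "delta": 416474, "theta": 55004, "lowAlpha": 3584, "highAlpha": 3256, "lowBeta": 1530, "highBeta": 2843, "lowGamma": 1459, "highGamma": 950 },
  "poorSignalLevel": 0
}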

It is important to note that new data will be received every second. Now we just need to add Socket.io and start emitting the data to the browser as we receive it from the headset, as sketched below. Two important things to notice are the socket port (4501) and the socket event name (metric/eeg). We’ll need these later.
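
A sketch of that wiring (the port and event name match the ones above; the rest is my assumption):

var io = require('socket.io')(4501); // the socket port the browser will connect to

// Called for every parsed sample from the headset (once per second)
function onSample (sample) {
  io.emit('metric/eeg', sample); // the event name the browser will subscribe to
}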

That’s all the code we’ll need in order to get the data and send it to the browser.

The user interface

Let’s start by creating a new project. If you are using the Angular CLI, which I highly recommend, just enter the following commands into the terminal. These commands will create a new Angular project, serve the app locally, and add a component shell called MindVideoPlayerComponent.
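
A sketch of those commands (the project name is a placeholder):

$ ng new mind-video-player
$ cd mind-video-player
$ ng serve
$ ng generate component mind-video-player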

Let’s start by adding some properties to the MindVideoPlayerComponent class. We’ll need the name of the metric we’ll be using to control the video player (in this case meditation). We’ll also need video metadata including url, type, length in seconds and fps (frames per second). We’ll use these values later in our template and for some of the business logic.
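
A sketch of those properties (the URL and numbers are placeholders):

export class MindVideoPlayerComponent {
  // The eSense metric that will drive the player
  metric = 'meditation';
  // Video metadata used by the template and the playback math
  video = {
    url: 'assets/flower.mp4', // placeholder path
    type: 'video/mp4',
    seconds: 10, // length of the clip in seconds
    fps: 60      // frames per second
  };
}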

The socket client uses the event pattern. Let’s create an observable from its events by passing the Socket.io client as the event target and metric/eeg as the event name. We’ll call the observable stream$. The dollar sign suffix is just for semantics and indicates that we are dealing with an observable type.

stream$ = fromEvent(io('http://localhost:4501'), 'metric/eeg');

Then we import the map operator and pipe it into the stream observable in order to pick the metric we want to work with (in this case, meditation).
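
A sketch of that transformation (the sample's payload shape is an assumption):

import { map } from 'rxjs/operators';

// Pick the metric we care about (meditation) out of each sample
metricValue$ = this.stream$.pipe(
  map((sample: any) => sample.eSense[this.metric])
);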

Now we have a stream of meditation values ranging from 0 to 100, which are received every second. The next step is to manually animate the video playback. There’s nothing out there that can animate the playback between two different times because this is not the way people traditionally interact with video on the web. So, we’ll need to get a little bit creative in order to tackle this next challenge.

The idea is to create a range between the latest metric value and the previous metric value, and slowly update the video playback to every value in between over a period of one second. This is the same length of time between new values from the server, so by the time the animation is completed, we get new data and do it all over again.

For this, we’ll need some observable types and operators, as well as the linspace library. A full sketch follows the steps below.

1) We create a new class property (the currentTime$ observable we’ll bind later) and assign it the metricValue$ observable with some transformations applied via lettable (pipeable) operators. Let’s say we get the values 0, 20, 80 and 50 with one second in between each value.

2) We pipe the scan operator in order to access the previous metricValue as we get a new value. Then we return an array with the previous value at index 0 and the next value at index 1. The result would be [0, 20], [20, 80], and finally [80, 50].

3) We then pipe switchMap to allow the metric to always switch to its latest value even if the inner observable emits at a later time. By destructuring the array mentioned previously, we can name its values based on index position.

4) We return a range observable of 60 values from the previous metric value to the latest metric value. For example, if the meditation level goes from 20 to 80, the range will roughly be: [20, 21, 22, 23, … , 77, 78, 79, 80]. The length of the range is 60 so the transition runs at 60 frames per second (fps). For the range operation, we’ll use a library called linspace that does exactly what we are looking for.

5) We spread out the range observable and emit its values over a period of one second.

6) We transform metricValue to its relative value in seconds since we plan to bind this observable to the currentTime property of the video.
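
Putting the six steps together, a sketch might look like the following (the operator choices and shapes are my assumptions, not the original code):

import { from, of } from 'rxjs';
import { scan, switchMap, concatMap, delay, map } from 'rxjs/operators';
import * as linspace from 'linspace';

currentTime$ = this.metricValue$.pipe(
  // (2) Pair the previous metric value with the latest one: [previous, latest]
  scan((acc: number[], next: number) => [acc[1], next], [0, 0]),
  // (3) Always switch to the latest pair, cancelling any in-flight animation
  switchMap(([previous, latest]) =>
    // (4) Build a 60-value range between the two metric values
    from(linspace(previous, latest, 60)).pipe(
      // (5) Spread the range out over one second (~16ms between values)
      concatMap(value => of(value).pipe(delay(1000 / 60)))
    )
  ),
  // (6) Convert the 0-100 metric to its relative time in seconds
  map((value: number) => (value / 100) * this.video.seconds)
);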

At this point we can start binding the component’s class properties to the DOM properties of the elements in our template. This is where Angular really shines.

Once again, let’s go through it line by line.

1) We bind the currentTime DOM property of the native HTML5 video player to the currentTime$ observable and pipe it as async so Angular can handle the observable subscription for us. It works like magic.

2) We bind the video src and type properties of the video value in our component class to the DOM.
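
A sketch of that template (the bindings follow the description above; treat the exact markup as an assumption):

<video [currentTime]="currentTime$ | async" muted>
  <source [src]="video.url" [type]="video.type">
</video>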

That’s all the UI code we’ll need for the experiment. You can find the complete project on GitHub.

The outcome

Now that we’ve put all the pieces together, let’s see how this works in practice. The following video shows real meditation levels with eyes closed in a quiet environment. This setting helps achieve better meditation results.

I’ve demoed this experiment at some meetups and conferences around the world. I’m always impressed by how easy it is for some people, and how difficult for others, to reach high meditation and concentration levels. The human mind never ceases to amaze me.

I’ve been on a journey to connect the brain to the web. And as this journey continues, I’m excited about what’s to come. I look forward to seeing the amazing things we can do together with our minds and a little bit of JavaScript.

OpenBCI Observable

In an effort to add reactive programming capabilities to the OpenBCI Node SDKs, we’ve created a layer of abstraction that features the same API for working with the Cyton, Ganglion, and WiFi Shield in Node.

EEG Pipes

When working with EEG, we usually do the same data processing over and over again. That’s why we’ve created EEG Pipes. This project features a set of RxJS operators that make it easy to apply transforms to the data streams from all the projects mentioned above.
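
A hedged sketch of what that usage looks like (the operator names below are illustrative of the style; check the project's README for the exact API):

import { epoch, fft, alphaPower } from 'eeg-pipes';

// Turn a raw EEG stream into a stream of alpha-band power values
eeg$.pipe(
  epoch({ duration: 256, interval: 100 }), // buffer samples into windows
  fft({ bins: 256 }),                      // move to the frequency domain
  alphaPower()                             // average power in the alpha band
).subscribe(power => console.log(power));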

My friend Teon recently got me a Muse 2016 headband. To my surprise, this version of the headband is not compatible with Muse IO, which means accessing the data is not possible via the Muse desktop apps; you cannot connect a Muse 2016 headband directly to a Mac at the moment.

However, we are talking about a Bluetooth-enabled device, which means scanning for BLE devices and connecting to them is definitely an option in modern browsers, such as recent versions of Chrome with Web Bluetooth support. And that is exactly what my friend Uri Shaked did during our time at ng-cruise. Let’s see how we can get the Muse up and running on the web with a little bit of JavaScript.
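
A sketch of the connect step using the muse-js library (MuseClient and its connect/start methods come from that library; the surrounding wiring is illustrative). Web Bluetooth requires a user gesture, so this should run from a click handler:

import { MuseClient } from 'muse-js';

const muse = new MuseClient();

// Call this from a "connect" button's click handler
async function connect() {
  await muse.connect(); // opens the browser's Bluetooth device chooser
  await muse.start();   // tells the headband to start streaming
}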

At this point, after clicking the connect button, we can select our Muse device from the list, and click on “Pair”. This may take up to a few minutes, so make sure your device is turned on so it can be discovered.

You should be able to see the Bluetooth icon on the right side of the browser tab. We can safely say Web Bluetooth is now paired and enabled, which means we are ready to start streaming some data!
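
A sketch of a stream function (eegReadings comes from muse-js; the logging is illustrative):

function stream() {
  muse.eegReadings.subscribe(reading => {
    console.log(reading); // one EEG reading per electrode batch
  });
}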

It is important to note that the value of muse.eegReadings is an Observable. That is why we can call its subscribe method. For more information about Observables and how to work with them, check out RxJS.

Similarly, we now also have access to telemetryData (battery info) and accelerometerData (orientation info) from our muse instance. We can subscribe to them as well just like we did with eegReadings.
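
For example (again a sketch):

muse.telemetryData.subscribe(telemetry => console.log(telemetry));           // battery info
muse.accelerometerData.subscribe(acceleration => console.log(acceleration)); // orientation info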

Remember our connect function? Let’s make sure we call our stream function in our connect function so we can start streaming data as soon as we connect.
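
Something like this:

async function connect() {
  await muse.connect();
  await muse.start();
  stream(); // start streaming as soon as we're connected
}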

I'm very excited to share my first podcast appearance. In this episode of "Tech People at Work" by Gistia Labs, Carlos Taborda and I talk about my experience relocating from New York to the Bay Area for my new journey working for Netflix. Other topics include my involvement in the Angular community and my side project: NeuroJavaScript.

Now, in his own words, Carlos from Gistia Labs:

Hello everybody,

today we have Alex Castillo on the show, and we’re going to be chatting about his transition coming from New York City all the way to California in chase of a dream job. He’s now working at Netflix, and we’re going to be chatting about some of the day-to-day things that he’s doing at Netflix, but also some of the aspects of his transition and I think a few key areas that had made it better for him, or easier, that Netflix has been a part of, which I think is really important for everybody who’s interested in basically moving to another part of the country or employers who are interested in hiring people from other parts of the country.

The Notifications API allows web pages to control the display of system notifications to the end user — these are outside the top-level browsing context viewport, so therefore can be displayed even when the user has switched tabs or moved to a different app. The API is designed to be compatible with existing notification systems across different platforms.

— Mozilla

The Notifications API has been available in some browsers for a while now, and with Angular 2’s recent promotion to Release Candidate (yay!), I thought bringing this powerful API to the Angular world in the form of a library would make it more accessible and reusable for developers. Enter ng2-notifications.

Demo

In the following demo, notifications can be customized and displayed using ng2-notifications. Give it a try!

Now the notification directive can be used inside the component’s template, like this:

<push-notificationtitle="Getting Started"body="A simple npm install can get you there"icon="https://goo.gl/3eqeiE"></push-notification>

The Syntax

One of Angular 2’s most powerful features is its declarative markup. With ng2-notifications, a push notification can be written in Angular 2 with the use of literals or, my personal favorite, data binding.
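
The literal form is shown above; the data-bound form might look like this (the property and variable names are illustrative):

<push-notification
  [title]="notification.title"
  [body]="notification.body"
  [icon]="notification.icon">
</push-notification>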

Building the Library

The ng2-notifications library is just a wrapper for the native Web Notifications API. It abstracts and simplifies the process of requesting the user’s permission for notifications and exposes a predictable and easy-to-use API in the form of an Angular 2 directive.

@Directive({
selector: 'push-notification'
})

You may wonder, why not use a component instead? Well, a component is just a directive + a view, and in this case a view is not required since the UI is completely handled by the browser. That’s one of the reasons why notifications will look slightly different in every browser.

Additionally, the library adds two useful properties: [closeDelay] and [when]. The close delay does exactly what you are thinking: it closes the notification after a given number of milliseconds. The when property is used to activate the notification given a boolean expression. Think of it as an "ng-show".
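
For example (the values are illustrative):

<push-notification
  title="Build complete"
  [closeDelay]="3000"
  [when]="buildFinished">
</push-notification>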

Understanding Angular 2’s directive lifecycle is crucial for showing and closing notifications at the right time: when the directive compiles, when data properties change, and when the directive is removed from its parent component.

By looking at Angular 2’s new Directive API, it was obvious the @Input and @Output API could be leveraged for bidirectional communication. Some of the inputs and outputs can be defined in the following manner:
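
A sketch of those definitions (the exact input and output names are my assumptions, not necessarily the library's):

import { Directive, Input, Output, EventEmitter } from '@angular/core';

@Directive({
  selector: 'push-notification'
})
export class PushNotification {
  // Inputs: notification options bound in from the template
  @Input() title: string;
  @Input() body: string;
  @Input() icon: string;
  @Input() closeDelay: number;
  @Input() when: boolean;
  // Outputs: lifecycle events emitted back to the parent
  @Output() show = new EventEmitter();
  @Output() close = new EventEmitter();
}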

Brainwaves. Electrical impulses in the brain. That's what I've been working with after I got my hands on the UltraCortex MK III, a Kickstarter-funded brain-computer interface from a Brooklyn-based startup called OpenBCI.

Not too long ago we couldn't interact with the machine's serial port in JavaScript. After the V8 revolution, that and many other things became possible very quickly. Now we have access to open-source brain-computer interfaces and JavaScript is all you need to "read people's minds". Let's see how.

Getting Started

In order to get started with the OpenBCI SDK, we'll start on the server side and use Node.js to get the data streaming from the BCI via serial port.

Let's create a file called "brainwaves.js" and add the SDK and yargs to our project via command line:

$ npm install openbci-sdk yargs --save

We'll be using yargs to easily access command line arguments. That way we can add a 'simulate' argument for testing with simulated data and without an actual BCI. Then we can require our dependencies within our Node file and start using the SDK like this:

var argv = require('yargs').argv; // Yargs be a node.js library fer hearties tryin' ter parse optstrings.
var OpenBCIBoard = require('openbci-sdk');

var board = new OpenBCIBoard.OpenBCIBoard({
  verbose: true // This is great for debugging
});

board.autoFindOpenBCIBoard()
  .then(onBoardFind)
  .catch(function () {
    // If a board is not found, look for a command line argument called 'simulate'.
    // This is especially helpful if you don't have a BCI and want to get some simulated data.
    if (!!(argv._[0] && argv._[0] === 'simulate')) {
      board.connect(OpenBCIBoard.OpenBCIConstants.OBCISimulatorPortName)
        .then(onBoardConnect);
    }
  });

// This function will be called when a board is found
function onBoardFind (portName) {
  // portName is the serial port's name
  if (portName) {
    console.log('board found', portName);
    board.connect(portName)
      .then(onBoardConnect);
  }
}

// This function will be called when the board successfully connects
function onBoardConnect () {
  board.on('ready', onBoardReady);
}

// This function will be called when the board is ready to stream data
function onBoardReady () {
  board.streamStart();
  board.on('sample', onSample);
}

// This function will be called every time a "sample" is received from the board
function onSample (sample) {
  // In here we can access 'channelData' from the sample object.
  // 'channelData' is an array with 8 values, one for each channel from the BCI; see example below.
  console.log(sample);
}

So ultimately, we are getting data "samples" from the BCI. A sample is received every 4 milliseconds (holy batman!) and it looks like this:
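
An illustrative sample (field names as the SDK logs them, to the best of my recollection; the values are made up):

{
  startByte: 160,
  sampleNumber: 27,
  channelData: [ 0.00001882, 0.00002194, -0.00001650, 0.00000912, 0.00002337, -0.00000831, 0.00001675, 0.00002425 ],
  auxData: [ 0, 0, 0 ],
  stopByte: 192
}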

On your Node console you should be able to see the samples being logged.

The actual data stream speed is way faster, oh GIFs...

Now the next challenge is to access the data from the browser. For simplicity's sake, I've used socket.io to emit events and socket.io-client to subscribe to those events from the browser. So, let's add both dependencies to our project.

$ npm install socket.io socket.io-client --save-dev

Using socket.io is very straightforward. After opening a connection, we can start emitting our custom 'brainwave' event.
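
On the Node side, that could look like this (a sketch; the port matches the client below):

var io = require('socket.io')(8080); // listen for browser connections on port 8080

// Then, inside onSample from earlier, forward each sample to the browser:
function onSample (sample) {
  io.emit('brainwave', sample);
}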

<!doctype html>
<html>
<head>
  <title>Brainwaves Quickstart</title>
  <script src="node_modules/socket.io-client/socket.io.js"></script>
</head>
<body>
  <!-- This is the container element for the data that will be displayed -->
  <main id="lines"></main>
  <script>
    // We establish the socket connection
    var socket = io('http://localhost:8080');
    var lines = document.querySelector('#lines');
    // Subscribe to the 'brainwave' event emitted from the server
    socket.on('brainwave', function (brainwave) {
      var line = document.createElement('pre');
      line.innerHTML = 'Channel data: ' + brainwave.channelData;
      // And finally add the data to the DOM
      lines.insertBefore(line, lines.firstChild);
    });
  </script>
</body>
</html>

Open the HTML file on your browser and you should get:

The actual data stream speed is way faster, oh GIFs...

At this point, we have successfully gotten our brainwaves to the browser with JavaScript!

This approach is a very simplified version of receiving and sending EEG data to the front-end. We are currently emitting socket events every 4 milliseconds, which is not the best of ideas from a performance standpoint.

Let's also consider the following features and optimizations on the Node side before sending over to the browser:

Converting the data from volts to microvolts

Converting the data to FFT if a frequency plot is desired

Modeling the data to match the data model expected by a chart library

Applying a notch filter to the time series data

Windowing the data for smoother updates
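
As a quick taste of the first item on that list, converting volts to microvolts is a one-liner per channel (a sketch, reusing the onSample handler from earlier):

function onSample (sample) {
  // 1 volt = 1,000,000 microvolts
  var microvolts = sample.channelData.map(function (volts) {
    return volts * 1000000;
  });
  console.log(microvolts);
}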

There's so much left to do, but hopefully this should be enough to get you excited about NeuroJavaScript. Stay tuned for the next part of this article, titled "Visualizing Brainwaves with Angular", where I'll cover all the features and optimizations described above, and more.

This week I presented at the AngularJS NYC Meetup, where I talked about "Preparing Angular for Production". I covered some of the key considerations to keep in mind before going into production with Angular, whether for the first time or for a new release. Some of the points I covered include:

Recently, my coworker Rick was working on some wireframes that used a UI control to decrease/increase text size. He asked me about the technical risks of using a slider on mobile/desktop, as opposed to the usual click/tap on plus/minus to increase or decrease text size, and this is what I came up with using AngularJS, HTML5, and CSS3.

The implementation of this directive is simple. We define an Angular directive and specify a template containing an input of type range (HTML5's native slider). We add an ngModel to the slider, watch it for changes, and set the body's font size to the slider's current value.
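
A minimal sketch of that directive (the names and the min/max values are illustrative):

angular.module('app').directive('fontSizeSlider', function () {
  return {
    restrict: 'E',
    // HTML5's native slider, bound to an ngModel
    template: '<input type="range" min="12" max="24" ng-model="fontSize">',
    link: function (scope) {
      // Watch the slider's model and apply it as the body's font size
      scope.$watch('fontSize', function (size) {
        if (size) {
          document.body.style.fontSize = size + 'px';
        }
      });
    }
  };
});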

Styling the slider with CSS

http://www.hongkiat.com/blog/html5-range-slider-style/

A native input type range HTML element will have a different default style depending on the browser used. Luckily, we can change the default appearance of sliders with CSS3. The code below targets WebKit browsers (Chrome, Opera and Safari) specifically. For Firefox and IE, please see the tutorial linked above.
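
A sketch of that WebKit-specific styling (the colors and sizes are placeholders):

input[type="range"] {
  -webkit-appearance: none; /* remove the default WebKit styling */
  height: 4px;
  border-radius: 2px;
  background: #ddd;
}

input[type="range"]::-webkit-slider-thumb {
  -webkit-appearance: none;
  width: 16px;
  height: 16px;
  border-radius: 50%;
  background: #e91e63;
}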

It's been 7 years since I graduated from the Altos de Chavón School of Design in the Dominican Republic. Being a student at Chavón has truly been one of the best experiences of my life, and this summer I'll be back as a teacher for the App Design intensive course.

Every year during the months of July and August, The School of Design holds its International Summer Program. This two-month period offers short intensive courses in the fields of Design, Photography, Fashion Design, Editorial Design, Illustration, Interior Design, Art Therapy, Film Production, Scriptwriting, Painting, Sculpture, among others.

The courses offered by the International Summer Program give students the opportunity to create and enhance their portfolios for the purpose of admission to the School, and in turn open the door to a wider audience – from young people in the process of going to college, to students pursuing careers in art and design, to emerging and established professionals and hobbyists. During the International Summer Program, a diverse audience of students converges at The School of Design, making it a dynamic environment for its national and international participants.

This week I presented at the AngularJS NYC Meetup, where I talked about "CSS Architecture for Large-scale Angular Apps". I covered how the presentation layer has evolved and presented an architectural approach that can help solve some of the challenges developers face when working on the presentation layer of enterprise apps with AngularJS. Some of the points I covered include:

A lot has changed in how we develop web applications these days. The languages have evolved, the tools have improved, and now we have technologies like AngularJS. All these things create interesting presentation layer challenges in modern web application development, and most of these challenges are related to how we reference stylesheets in single-page apps. One way to overcome these challenges is to use the AngularCSS library. It optimizes the presentation layer of your apps by dynamically injecting stylesheets as needed.

So how do you implement AngularCSS into your apps?

The Early Days of the Web

Before we dive into AngularCSS, let's take a step back to the early days of the web. The presentation layer is not what it used to be. It has changed, and while the overall evolution has been positive, modern presentation layer architecture has some weaknesses. Historically, while creating a website, there would be an index.html file and other HTML files such as about.html, contact.html, etc. With that approach you could organize your CSS in the same fashion by referencing different stylesheets in the head section of each document accordingly: home.css, about.css, contact.css, etc. There are some great benefits that come with this, one of them being that stylesheets are loaded as they are needed, and removed when they are not, on a page-by-page basis.

These days we have responsive web design and media queries in CSS. This makes for larger files or more files dedicated to different breakpoints. We also have the concept of single-page apps. They usually feature a single master template file with a head tag, and multiple headless views or partials, leading us to reference all CSS files from the master template. This is not ideal.

The whole presentation layer of the app is being front-loaded, with potentially thousands of lines depending on the size of the app. This seems rather unnecessary due to the fact that the user may never access some parts of the app or breakpoints for that matter.

So how do we overcome this? By using AngularCSS to meet these challenges head on.

Introducing AngularCSS

Throughout my career I have developed some very robust cross-platform single-page apps, some of them spanning hand-held devices, kiosks, and even TVs, all with a single codebase. At times, with so much CSS, load time and performance become a huge concern on the presentation layer. This is the reason I created AngularCSS.

AngularCSS is a JavaScript library for Angular that optimizes the presentation layer of your single-page apps by dynamically injecting stylesheets as needed. This approach is also referred to as “on-demand” or “lazy loading”. The main benefit of lazy loading is performance, of course, but there are other great things we can leverage from this approach, like encapsulation and SMQ (Smart Media Queries).

Getting Started

Before we get into encapsulation and SMQ, let's get started with the library. Using AngularCSS is very simple. There are two basic steps for setting it up.

1. Reference the JavaScript library in your index.html after angular.js.

<scriptsrc="libs/angular-css/angular-css.js"></script>

You can download AngularCSS from GitHub. Also available via Bower and CDN.
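
2. Add the module as a dependency to your app. A minimal sketch (the module name door3.css is from my reading of the project's README; verify it against the version you install):

angular.module('myApp', ['door3.css']);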

Web Component Encapsulation

I believe in progressive enhancement. I also believe that AngularJS and Web Components are the future of the web. In Angular we can attach templates (structure) and controllers (behavior) to pages and components. AngularCSS enables the last missing piece: attaching stylesheets (presentation). For me, being able to have these three things in a single unit is vital, given the way the web is moving toward component-based development.

Smart Media Queries

AngularCSS supports Smart Media Queries via the matchMedia API. This means that stylesheets with media queries will be only added if the breakpoint matches. Consider the example below. If a user accesses your responsive app from a mobile device, the browser will only load the mobile stylesheets because AngularCSS will first evaluate the media query and check if it matches before adding the stylesheet to the page. This can significantly optimize the load time of your apps.
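
For instance, a directive (or route) can declare breakpoint-specific stylesheets like this (a sketch based on the css property described in this post; the file names and breakpoints are placeholders):

myApp.directive('myGallery', function () {
  return {
    templateUrl: 'my-gallery.html',
    css: [
      { href: 'css/my-gallery.mobile.css', media: 'screen and (max-width: 480px)' },
      { href: 'css/my-gallery.desktop.css', media: 'screen and (min-width: 481px)' }
    ]
  };
});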

Application Architecture

As you can already tell, this way of referencing stylesheets requires separate files for each breakpoint, page and component, as opposed to having page-specific and component-specific CSS in the same files. This approach encourages an organized and scalable CSS architecture by separating stylesheets by sections, pages, components and devices.

AngularCSS reintroduces the concept of CSS scope on single-page apps. Stylesheets only live for as long as the current route or directives are active. This means that the chances of unwanted CSS overrides are slim to none. And because there could be use cases for persisting stylesheets, I have added a feature to persist stylesheets if desired.

CSS and Cache

It is possible to preload stylesheets before the route or directive is active. This is accomplished internally via an HTTP request. That way, when the stylesheets are added, they are loaded from the browser’s cache, making them much faster to resolve.

Browser cache is awesome because it speeds things up. But busting the cache can be very helpful when publishing CSS updates to your app, since it forces the browser to download the latest version of the stylesheet. This can be done similarly by setting bustCache to true.
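
Both options live on the same css object (a sketch; the option names follow the project's documentation as I understand it):

$routeProvider.when('/dashboard', {
  templateUrl: 'dashboard.html',
  css: {
    href: 'css/dashboard.css',
    preload: true,   // fetch ahead of time so it resolves from cache
    bustCache: true  // force the latest version on every add
  }
});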

How AngularCSS works

In order to provide a seamless API integration with AngularJS, I had to figure out ways to intercept routes and directives. For routes, the AngularCSS service listens for route and state change events. These events expose the current and previous route objects. Since we are extending the route object with a custom “css” property, AngularCSS adds the CSS defined on the current route and removes it from the previous route via its service: $css. The same concept applies when using states with UI Router.

For directives, well, it is a little bit more complicated. Since Angular doesn’t expose events for directives, I was forced to add them myself by hacking or “monkey-patching” Angular core. First, I had to get inside angular.module and angular.directive in order to get a list of all custom directives. Then I proceeded to decorate all directives and get inside their compile and link functions. Finally, I added a custom event via $rootScope.$broadcast that triggers as each directive is being compiled. The DDO (Directive Definition Object) and the scope containing our custom “css” property are passed to the custom event. The scope is passed in order to remove the directive’s stylesheets on the scope’s destroy event. After all this nonsense, AngularCSS is able to extend the AngularJS API in a native-like fashion. It is my hope that the AngularJS team and community embrace AngularCSS, and that with their help and collaboration a solution will arise.

Because Angular is totally modular, you can easily replace any of its parts.

— Brian Ford, Angular Team Member

With AngularCSS we can now encapsulate presentation in Angular pages and components in an organized and scalable fashion. There’s no need to reference stylesheets via HTML with the <link> tag anymore (yay!). Stylesheets are requested on-demand. Responsive design can be optimized at the breakpoint/device level. And we also get some useful features like cache busting.

With the explosive growth of web applications, it’s more important than ever for developers and organizations to fully leverage the web’s power by investing in the right framework. There are almost too many options to choose from, but one framework stands above the rest… AngularJS.

Let’s take a look at some important qualities of this tool and see how it could benefit you.

So, why Angular?

Flexibility - If you are planning on creating an ambitious web-app or just a simple prototype, Angular has got you covered. A lot of people compare Angular to MVW (Model-View-Whatever) JavaScript frameworks, when in reality Angular has more to offer than the MV* aspect. Angular describes itself as a toolset for building the framework most suited to your application development. That statement alone takes Angular out of the “just-another-framework” category.

Community - Another important aspect to consider is the AngularJS community. Aside from the excellent API documentation, Angular has very impressive community support, from Stack Overflow to IRC, and even from the creators themselves. In 2013, AngularJS was ranked #4 among the most contributed-to open source projects in the world. Impressive, huh? The constant meetups and conferences show how passionate the creators, maintainers, supporters, sponsors and contributors are about this amazing technology.

How does it help in building apps?

Philosophy - Angular’s greatness starts with its philosophy. It was built with testability in mind, and with an eye toward leveraging the power of today’s browsers to extend HTML, the web's native client-side technology, through JavaScript. For developers, Angular helps by eliminating a lot of boilerplate with features like two-way data binding, directives, filters, routing and animation. It provides the necessary tools to build a solid application architecture layer with features like dependency injection, RESTful services, built-in utilities and testing, not to mention all the other available contributed modules.

Efficiency - All these benefits translate accordingly on the business side of things. Less boilerplate code equals less development time. More testability equals fewer bugs in your application, which equals less QA and UAT time and fewer resources. Less time and fewer resources equal shorter timelines and smaller budgets. Sounds like a win-win, right?

Who else is using AngularJS?

Everyone Who Is Anyone - Google is and has been using Angular for many internal and public-facing projects. Some of Google’s most ambitious projects have been built with Angular, like their DoubleClick platform, one of the biggest AngularJS apps ever pushed to production. Other Angular projects include YouTube for Sony's PlayStation 3, Udacity, Lynda.com and many more.