Error handling is an essential part of RxJs, as we will need it in just about any reactive program that we write.


Error handling in RxJS is likely not as well understood as other parts of the library, but it's actually quite simple to understand if we focus first on understanding the Observable contract in general.

In this post, we are going to provide a complete guide containing the most common error handling strategies that you will need in order to cover most practical scenarios, starting with the basics (the Observable contract).

Table Of Contents

In this post, we will cover the following topics:

The Observable contract and Error Handling

RxJs subscribe and error callbacks

The catchError Operator

The Catch and Replace Strategy

throwError and the Catch and Rethrow Strategy

Using catchError multiple times in an Observable chain

The finalize Operator

The Retry Strategy

The retryWhen Operator

Creating a Notification Observable

Immediate Retry Strategy

Delayed Retry Strategy

The delayWhen Operator

The timer Observable creation function

Running Github repository (with code samples)

Conclusions

So without further ado, let's get started with our RxJs Error Handling deep dive!

The Observable Contract and Error Handling

In order to understand error handling in RxJs, we need to first understand that any given stream can only error out once. This is defined by the Observable contract, which says that a stream can emit zero or more values and can then optionally terminate, either with a completion or with an error, but not both.

The contract works this way because it mirrors how the streams that we observe at runtime actually behave. Network requests, for example, can fail.

A stream can also complete, which means that:

the stream has ended its lifecycle without any error

after completion, the stream will not emit any further values

As an alternative to completion, a stream can also error out, which means that:

the stream has ended its lifecycle with an error

after the error is thrown, the stream will not emit any other values

Notice that completion or error are mutually exclusive:

if the stream completes, it cannot error out afterwards

if the stream errors out, it cannot complete afterwards

Notice also that there is no obligation for the stream to complete or error out; both of those possibilities are optional. But only one of the two can occur, not both.

This means that when one particular stream errors out, we cannot use it anymore, according to the Observable contract. You must be thinking at this point, how can we recover from an error then?

RxJs subscribe and error callbacks

To see the RxJs error handling behavior in action, let's create a stream and subscribe to it. Let's remember that the subscribe call takes three optional arguments:

a success handler function, which is called each time the stream emits a value

an error handler function, which is called only if an error occurs, and receives the error itself

a completion handler function, which is called only if the stream completes

Completion Behavior Example

If the stream does not error out, then this is what we would see in the console:

HTTP response {payload: Array(9)}
HTTP request completed.

As we can see, this HTTP stream emits only one value, and then it completes, which means that no errors occurred.

But what happens if the stream throws an error instead? In that case, we will see the following in the console instead:

As we can see, the stream emitted no value and it immediately errored out. After the error, no completion occurred.

Limitations of the subscribe error handler

Handling errors using the subscribe call is sometimes all that we need, but this error handling approach is limited. Using this approach, we cannot, for example, recover from the error or emit an alternative fallback value that replaces the value that we were expecting from the backend.

Let's then learn a few operators that will allow us to implement some more advanced error handling strategies.

The catchError Operator

In synchronous programming, we have the option to wrap a block of code in a try clause, catch any error that it might throw with a catch block and then handle the error.

Here is what the synchronous catch syntax looks like:
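A minimal sketch of this mechanism, using JSON.parse as a stand-in for code that can throw:

```typescript
function parsePayload(json: string): unknown {
  try {
    // synchronous code that might throw
    return JSON.parse(json);
  } catch (err) {
    // any error thrown inside the try block is handled here, in one place
    console.log('caught error:', err);
    return null;
  }
}
```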

This mechanism is very powerful because we can handle in one place any error that happens inside the try/catch block.

The problem is, in Javascript many operations are asynchronous, and an HTTP call is one such example where things happen asynchronously.

RxJs provides us with something close to this functionality, via the RxJs catchError Operator.

How does catchError work?

As usual and like with any RxJs Operator, catchError is simply a function that takes an input Observable and returns an output Observable.

With each call to catchError, we need to pass it a function which we will call the error handling function.

The catchError operator takes as input an Observable that might error out, and starts emitting the values of the input Observable in its output Observable.

If no error occurs, the output Observable produced by catchError works exactly the same way as the input Observable.

What happens when an error is thrown?

However, if an error occurs, then the catchError logic is going to kick in. The catchError operator is going to take the error and pass it to the error handling function.

That function is expected to return an Observable which is going to be a replacement Observable for the stream that just errored out.

Let's remember that the input stream of catchError has errored out, so according to the Observable contract we cannot use it anymore.

This replacement Observable is then going to be subscribed to and its values are going to be used in place of the errored out input Observable.

The Catch and Replace Strategy

Let's give an example of how catchError can be used to provide a replacement Observable that emits fallback values:

Let's break down the implementation of the catch and replace strategy:

we are passing to the catchError operator a function, which is the error handling function

the error handling function is not called immediately; in general, it's usually not called at all

only when an error occurs in the input Observable of catchError, will the error handling function be called

if an error happens in the input stream, this function is then returning an Observable built using the of([]) function

the of() function builds an Observable that emits only one value ([]) and then it completes

the error handling function returns the recovery Observable (of([])), which gets subscribed to by the catchError operator

the values of the recovery Observable are then emitted as replacement values in the output Observable returned by catchError

As the end result, the http$ Observable will not error out anymore! Here is the result that we get in the console:

HTTP response []
HTTP request completed.

As we can see, the error handling callback in subscribe() is not invoked anymore. Instead, here is what happens:

the empty array value [] is emitted

the http$ Observable is then completed

As we can see, the replacement Observable was used to provide a default fallback value ([]) to the subscribers of http$, despite the fact that the original Observable did error out.

Notice that we could have also added some local error handling, before returning the replacement Observable!

And this covers the Catch and Replace Strategy; let's now see how we can also use catchError to rethrow the error, instead of providing fallback values.

The Catch and Rethrow Strategy

Let's start by noticing that the replacement Observable provided via catchError can itself also error out, just like any other Observable.

And if that happens, the error will be propagated to the subscribers of the output Observable of catchError.

This error propagation behavior gives us a mechanism to rethrow the error caught by catchError, after handling the error locally. We can do so in the following way:

Catch and Rethrow breakdown

Let's break down step-by-step the implementation of the Catch and Rethrow Strategy:

just like before, we are catching the error, and returning a replacement Observable

but this time around, instead of providing a replacement output value like [], we are now handling the error locally in the catchError function

in this case, we are simply logging the error to the console, but we could instead add any local error handling logic that we want, such as for example showing an error message to the user

We are then returning a replacement Observable that this time was created using throwError

throwError creates an Observable that never emits any value. Instead, it errors out immediately using the same error caught by catchError

this means that the output Observable of catchError will also error out with the exact same error thrown by the input of catchError

this means that we have managed to successfully rethrow the error initially thrown by the input Observable of catchError to its output Observable

the error can now be further handled by the rest of the Observable chain, if needed

If we now run the code above, here is the result that we get in the console:

As we can see, the same error was logged both in the catchError block and in the subscription error handler function, as expected.

Using catchError multiple times in an Observable chain

Notice that we can use catchError multiple times at different points in the Observable chain if needed, and adopt different error strategies at each point in the chain.

We can, for example, catch an error up in the Observable chain, handle it locally and rethrow it, and then further down in the Observable chain we can catch the same error again and this time provide a fallback value (instead of rethrowing):

If we run the code above, this is the output that we get in the console:

As we can see, the error was indeed rethrown initially, but it never reached the subscribe error handler function. Instead, the fallback [] value was emitted, as expected.

The Finalize Operator

Besides a catch block for handling errors, the synchronous Javascript syntax also provides a finally block that can be used to run code that we always want executed.

The finally block is typically used for releasing expensive resources, such as for example closing down network connections or releasing memory.

Unlike the code in the catch block, the code in the finally block will get executed independently if an error is thrown or not:
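A minimal sketch of the finally block, with JSON.parse again standing in for code that can throw:

```typescript
function fetchConfig(json: string): unknown {
  try {
    return JSON.parse(json);
  } catch (err) {
    console.log('error handled in catch block');
    return null;
  } finally {
    // runs whether or not an error was thrown, e.g. to release resources
    console.log('finally block executed');
  }
}

fetchConfig('{"retries": 3}'); // finally runs after the successful parse
fetchConfig('not json');       // finally also runs after the error
```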

RxJs provides us with an operator that has a similar behavior to the finally functionality, called the finalize Operator.

Note: the operator cannot be called finally, as finally is a reserved keyword in Javascript

Finalize Operator Example

Just like the catchError operator, we can add multiple finalize calls at different places in the Observable chain if needed, in order to make sure that the multiple resources are correctly released:

Let's now run this code, and see how the multiple finalize blocks are being executed:

Notice that the last finalize block is executed after the subscribe value handler and completion handler functions.

The Retry Strategy

As an alternative to rethrowing the error or providing fallback values, we can also simply retry subscribing to the errored out Observable.

Let's remember, once the stream errors out we cannot recover it, but nothing prevents us from subscribing again to the Observable from which the stream was derived, and creating another stream.

Here is how this works:

we are going to take the input Observable, and subscribe to it, which creates a new stream

if that stream does not error out, we are going to let its values show up in the output

but if the stream does error out, we are then going to subscribe again to the input Observable, and create a brand new stream

When to retry?

The big question here is, when are we going to subscribe again to the input Observable, and retry to execute the input stream?

are we going to retry that immediately?

are we going to wait for a small delay, hoping that the problem is solved and then try again?

are we going to retry only a limited number of times, and then error out the output stream?

In order to answer these questions, we are going to need a second auxiliary Observable, which we are going to call the Notifier Observable. It's the Notifier Observable that is going to determine when the retry attempt occurs.

The Notifier Observable is going to be used by the retryWhen Operator, which is the heart of the Retry Strategy.

RxJs retryWhen Operator Marble Diagram

To understand how the retryWhen Operator works, let's have a look at its marble diagram:

Notice that the Observable that is being re-tried is the 1-2 Observable in the second line from the top, and not the Observable in the first line.

The Observable on the first line with values r-r is the Notification Observable, that is going to determine when a retry attempt should occur.

Breaking down how retryWhen works

Let's break down what is going on in this diagram:

The Observable 1-2 gets subscribed to, and its values are reflected immediately in the output Observable returned by retryWhen

even after the Observable 1-2 is completed, it can still be re-tried

the notification Observable then emits a value r, way after the Observable 1-2 has completed

The value emitted by the notification Observable (in this case r) could be anything

what matters is the moment when the value r got emitted, because that is what is going to trigger the 1-2 Observable to be retried

the Observable 1-2 gets subscribed to again by retryWhen, and its values are again reflected in the output Observable of retryWhen

The notification Observable is then going to emit again another r value, and the same thing occurs: the values of a newly subscribed 1-2 stream are going to start to get reflected in the output of retryWhen

but then, the notification Observable eventually completes

at that moment, the ongoing retry attempt of the 1-2 Observable is completed early as well, meaning that only the value 1 got emitted, but not 2

As we can see, retryWhen simply retries the input Observable each time that the Notification Observable emits a value!

Now that we understand how retryWhen works, let's see how we can create a Notification Observable.

Creating a Notification Observable

We need to create the Notification Observable directly in the function passed to the retryWhen operator. This function takes as input argument an Errors Observable, that emits as values the errors of the input Observable.

So by subscribing to this Errors Observable, we know exactly when an error occurs. Let's now see how we could implement an immediate retry strategy using the Errors Observable.

Immediate Retry Strategy

In order to retry the failed Observable immediately after the error occurs, all we have to do is return the Errors Observable without any further changes.

In this case, we are just piping the tap operator for logging purposes, so the Errors Observable remains unchanged:

Let's remember, the Observable that we are returning from the retryWhen function call is the Notification Observable!

The value that it emits is not important; what matters is when the value gets emitted, because that is what triggers a retry attempt.

Immediate Retry Console Output

If we now execute this program, we are going to find the following output in the console:

As we can see, the HTTP request failed initially, but then a retry was attempted and the second time the request went through successfully.

Let's now have a look at the delay between the two attempts, by inspecting the network log:

As we can see, the second attempt was issued immediately after the error occurred, as expected.

Delayed Retry Strategy

Let's now implement an alternative error recovery strategy, where we wait for example for 2 seconds after the error occurs, before retrying.

This strategy is useful for trying to recover from certain errors such as for example failed network requests caused by high server traffic.

In those cases where the error is intermittent, we can simply retry the same request after a short delay, and the request might go through the second time without any problem.

The timer Observable creation function

To implement the Delayed Retry Strategy, we will need to create a Notification Observable whose values are emitted two seconds after each error occurrence.

Let's then try to create a Notification Observable by using the timer creation function. This timer function is going to take a couple of arguments:

an initial delay, before which no values will be emitted

a periodic interval, in case we want to emit new values periodically

Let's then have a look at the marble diagram for the timer function:

As we can see, the first value 0 will be emitted only after 3 seconds, and then we have a new value each second.

Notice that the second argument is optional, meaning that if we leave it out our Observable is going to emit only one value (0) after 3 seconds and then complete.

This Observable looks like it's a good start for being able to delay our retry attempts, so let's see how we can combine it with the retryWhen and delayWhen operators.

The delayWhen Operator

One important thing to bear in mind about the retryWhen Operator, is that the function that defines the Notification Observable is only called once.

So we only get one chance to define our Notification Observable, that signals when the retry attempts should be done.

We are going to define the Notification Observable by taking the Errors Observable and applying the delayWhen Operator to it.

Imagine that in this marble diagram, the source Observable a-b-c is the Errors Observable, that is emitting failed HTTP errors over time:

delayWhen Operator breakdown

Let's follow the diagram, and learn how the delayWhen Operator works:

each value in the input Errors Observable is going to be delayed before showing up in the output Observable

the delay per each value can be different, and is going to be created in a completely flexible way

in order to determine the delay, we are going to call the function passed to delayWhen (called the duration selector function) per each value of the input Errors Observable

that function is going to return an Observable that determines when the delay of each input value has elapsed

each of the values a-b-c has its own duration selector Observable, that will eventually emit one value (that could be anything) and then complete

when each of these duration selector Observables emits values, then the corresponding input value a-b-c is going to show up in the output of delayWhen

notice that the value b shows up in the output after the value c; this is normal

this is because the b duration selector Observable (the third horizontal line from the top) only emitted its value after the duration selector Observable of c, and that explains why c shows up in the output before b

Delayed Retry Strategy implementation

Let's now put all this together and see how we can retry consecutively a failing HTTP request 2 seconds after each error occurs:

Let's break down what is going on here:

let's remember that the function passed to retryWhen is only going to be called once

we are returning in that function an Observable that will emit values whenever a retry is needed

each time that there is an error, the delayWhen operator is going to create a duration selector Observable, by calling the timer function

this duration selector Observable is going to emit the value 0 after 2 seconds, and then complete

once that happens, the delayWhen operator knows that the delay of a given input error has elapsed

only once that delay elapses (2 seconds after the error occurred), the error shows up in the output of the notification Observable

only once a value gets emitted in the notification Observable will the retryWhen operator execute a retry attempt

Retry Strategy Console Output

Let's now see what this looks like in the console! Here is an example of an HTTP request that was retried 5 times, as the first 4 times were in error:

And here is the network log for the same retry sequence:

As we can see, the retries only happened 2 seconds after the error occurred, as expected!

And with this, we have completed our guided tour of some of the most commonly used RxJs error handling strategies available, let's now wrap things up and provide some running sample code.

Running Github repository (with code samples)

In order to try these multiple error handling strategies, it's important to have a working playground where you can try handling failing HTTP requests.

This playground contains a small running application with a backend that can be used to simulate HTTP errors either randomly or systematically. Here is what the application looks like:

Conclusions

As we have seen, understanding RxJs error handling is all about understanding the fundamentals of the Observable contract first.

We need to keep in mind that any given stream can only error out once, and that is exclusive with stream completion; only one of the two things can happen.

In order to recover from an error, the only way is to somehow generate a replacement stream as an alternative to the errored out stream, like it happens in the case of the catchError or retryWhen Operators.

I hope that you have enjoyed this post; if you are looking to learn more about RxJs, you might want to check out the other posts in the RxJs Series.

Also, if you have some questions or comments please let me know in the comments below and I will get back to you.

To get notified of upcoming posts on RxJs and other Angular topics, I invite you to subscribe to our newsletter:

Video Lessons Available on YouTube

Have a look at the Angular University Youtube channel, we publish about 25% to a third of our video tutorials there, new videos are published all the time.

Some of the most commonly used RxJs operators that we find on a daily basis are the RxJs higher-order mapping operators: switchMap, mergeMap, concatMap and exhaustMap.

For example, most of the network calls in our program are going to be done using one of these operators, so getting familiar with them is essential in order to write almost any reactive program.

Knowing which operator to use in a given situation (and why) can be a bit confusing, and we often wonder how these operators really work and why they are named the way they are.

These operators might seem unrelated, but we really want to learn them all in one go, as choosing the wrong operator might accidentally lead to subtle issues in our programs.

Why are the mapping operators a bit confusing?

There is a reason for that: in order to understand these operators, we need to first understand the Observable combination strategy that each one uses internally.

Instead of trying to understand switchMap on its own, we need to first understand what Observable switching is; instead of diving straight into concatMap, we need to first learn Observable concatenation, etc.

So that is what we will be doing in this post, we are going to learn in a logical order the concat, merge, switch and exhaust strategies and their corresponding mapping operators: concatMap, mergeMap, switchMap and exhaustMap.

We will explain the concepts using a combination of marble diagrams and some practical examples (including running code).

In the end, you will know exactly how each of these mapping operators work, when to use each and why, and the reason for their names.

Table of Contents

In this post, we will cover the following topics:

The RxJs Map Operator

What is higher-order Observable Mapping

Observable Concatenation

The RxJs concatMap Operator

Observable Merging

The RxJs mergeMap Operator

Observable Switching

The RxJs switchMap Operator

The Exhaust strategy

The RxJs exhaustMap Operator

How to choose the right mapping Operator?

Running GitHub repo (with code samples)

Conclusions

Note that this post is part of our ongoing RxJs Series. So without further ado, let's get started with our RxJs mapping operators deep dive!

The RxJs Map Operator

Let's start at the beginning, by covering what these mapping operators are doing in general.

As the names of the operators imply, they are doing some sort of mapping: but what is exactly getting mapped? Let's have a look at the marble diagram of the RxJs Map operator first:

How the base Map Operator works

With the map operator, we can take an input stream (with values 1, 2, 3), and from it, we can create a derived mapped output stream (with values 10, 20, 30).

The values of the output stream at the bottom are obtained by taking the values of the input stream and applying a function to them: in this case, the function simply multiplies each value by 10.

So the map operator is all about mapping the values of the input observable. Here is an example of how we would use it to handle an HTTP request:

In this example, we are creating one HTTP observable that makes a backend call and we are subscribing to it. The observable is going to emit the value of the backend HTTP response, which is a JSON object.

In this case, the HTTP response is wrapping the data in a payload property, so in order to get to the data, we apply the RxJs map operator. The mapping function will then map the JSON response payload and extract the value of the payload property.

Now that we have reviewed how base mapping works, let's now talk about higher-order mapping.

What is Higher-Order Observable Mapping?

In higher-order mapping, instead of mapping a plain value like 1 to another value like 10, we are going to map a value into an Observable!

The result is a higher-order Observable. It's just an Observable like any other, but its values are themselves Observables as well, that we can subscribe to separately.

This might sound far-fetched, but in reality, this type of mapping happens all the time. Let's give a practical example of this type of mapping. Let's say that for example, we have an Angular Reactive Form that is emitting valid form values over time via an Observable:

The Reactive Form provides an Observable this.form.valueChanges that emits the latest form values as the user interacts with the form. This is going to be our source Observable.

What we want to do is to save at least some of these values as they get emitted over time, to implement a form draft pre-save feature. This way the data gets progressively saved as the user fills in the form, which avoids losing the whole form data due to an accidental reload.

Why Higher-Order Observables?

In order to implement the form draft save functionality, we need to take the form value, and then create a second HTTP observable that performs a backend save, and then subscribe to it.

We could try to do all of this manually, but then we would fall into the nested subscribes anti-pattern:

As we can see, this would cause our code to nest at multiple levels quite quickly, which was one of the problems that we were trying to avoid while using RxJs in the first place.

Let's call this new httpPost$ Observable the inner Observable, as it was created in an inner nested code block.

Avoiding nested subscriptions

We would like to do all this process in a much more convenient way: we would like to take the form value, and map it into a save Observable. And this would effectively create a higher-order Observable, where each value corresponds to a save request.

We want to then transparently subscribe to each of these network Observables, and directly receive the network response all in one go, to avoid any nesting.

And we could do all of this if we had some sort of higher-order RxJs mapping operator available! Why, then, do we need four different operators?

To understand that, imagine what happens if multiple form values are emitted by the valueChanges observable in quick succession and the save operation takes some time to complete:

should we wait for one save request to complete before doing another save?

should we do multiple saves in parallel?

should we cancel an ongoing save and start a new one?

should we ignore new save attempts while one is already ongoing?

Before exploring each one of these use cases, let's go back to the nested subscribes code above.

In the nested subscribes example, we are actually triggering the save operations in parallel, which is not what we want because there is no strong guarantee that the backend will handle the saves sequentially and that the last valid form value is indeed the one stored on the backend.

Let's see what it would take to ensure that a save request is done only after the previous save is completed.

Understanding Observable Concatenation

In order to implement sequential saves, we are going to introduce the new notion of Observable concatenation. In this code example, we are concatenating two example observables using the concat() RxJs function:

After creating two Observables series1$ and series2$ using the of creation function, we have then created a third result$ Observable, which is the result of concatenating series1$ and series2$.

Here is the console output of this program, showing the values emitted by the result Observable:

a
b
x
y

As we can see, the values are the result of concatenating the values of series1$ with series2$. But here is the catch: this only works because these Observables complete!

The of() function creates an Observable that emits the values passed to it, and then completes once all the values have been emitted.

Observable Concatenation Marble Diagram

To really understand what is going on, we need to look at the Observable concatenation marble diagram:

Do you notice the vertical bar after the value b on the first Observable? That marks the point in time when the first Observable with values a and b (series1$) is completed.

Let's break down what is going on here by following step-by-step the timeline:

the two Observables series1$ and series2$ are passed to the concat() function

concat() will then subscribe to the first Observable series1$, but not to the second Observable series2$ (this is critical to understand concatenation)

note that the series2$ Observable is not yet emitting values, because it has not yet been subscribed to

series1$ will then emit its a and b values, which get reflected in the output

series1$ will then complete, and only after that will concat() subscribe to series2$

the series2$ values will then start getting reflected in the output, until series2$ completes

when series2$ completes, the result$ Observable will also complete

note that we can pass to concat() as many Observables as we want, and not only two like in this example

The key point about Observable Concatenation

As we can see, Observable concatenation is all about Observable completion! We take the first Observable and use its values, wait for it to complete and then we use the next Observable, etc. until all Observables complete.

Going back to our higher-order Observable mapping example, let's see how the notion of concatenation can help us.

Using Observable Concatenation to implement sequential saves

As we have seen, in order to make sure that our form values are saved sequentially, we need to take each form value and map it to an httpPost$ Observable.

We then need to subscribe to it, but we want the save to complete before subscribing to the next httpPost$ Observable.

In order to ensure sequentiality, we need to concatenate the multiple httpPost$ Observables together!

We will then subscribe to each httpPost$ and handle the result of each request sequentially. In the end, what we need is an operator that does a mixture of:

a higher-order mapping operation (taking the form value and turning it into an httpPost$ Observable)

with a concat() operation, concatenating the multiple httpPost$ Observables together to ensure that an HTTP save is not made before the previous ongoing save completes first

What we need is the aptly named RxJs concatMap Operator, which does this mixture of higher order mapping with Observable concatenation.

The RxJs concatMap Operator

Here is what our code looks like if we now use the concatMap Operator:

As we can see, the first benefit of using a higher-order mapping operator like concatMap is that now we no longer have nested subscribes.

By using concatMap, now all form values are going to be sent to the backend sequentially, as shown here in the Chrome DevTools Network tab:

Breaking down the concatMap network log diagram

As we can see, one save HTTP request starts only after the previous save has completed. Here is how the concatMap operator is ensuring that the requests always happen in sequence:

concatMap is taking each form value and transforming it into a save HTTP Observable, called an inner Observable

concatMap then subscribes to the inner Observable and sends its output to the result Observable

a second form value might come faster than what it takes to save the previous form value in the backend

If that happens, that new form value will not be immediately mapped to an HTTP request

instead, concatMap will wait for the previous HTTP Observable to complete before mapping the new value to an HTTP Observable, subscribing to it and thereby triggering the next save

Notice that the code here is just the basis of an implementation for saving draft form values. You can combine this with other operators to, for example, save only valid form values, and throttle the saves to make sure they don't occur too frequently.

Observable Merging

Applying Observable concatenation to a series of HTTP save operations seems like a good way to ensure that the saves happen in the intended order.

But there are other situations where we would like to instead run things in parallel, without waiting for the previous inner Observable to complete.

And for that, we have the merge Observable combination strategy! Merge, unlike concat, will not wait for an Observable to complete before subscribing to the next Observable.

Instead, merge subscribes to every merged Observable at the same time, and then it outputs the values of each source Observable to the combined result Observable as the multiple values arrive over time.

Practical Merge Example

To make it clear that merging does not rely on completion, let's merge two interval Observables, which never complete:

The Observables created with interval() emit the values 0, 1, 2, etc. at a one-second interval and never complete.

Notice that we are applying a couple of map operators to these interval Observables, just to make it easier to distinguish their values in the console output.

Here are the first few values visible in the console:

0
0
10
100
20
200
30
300

Merging and Observable Completion

As we can see, the values of the merged source Observables show up in the result Observable immediately as they are emitted. If one of the merged Observables completes, merge will continue to emit the values of the other Observables as they arrive over time.

Notice that if the source Observables do complete, merge will still work in the same way.

The Merge Marble Diagram

Let's take another merge example, depicted in the following marble diagram:

As we can see, the values of the merged source Observables show up immediately in the output. The result Observable will not be completed until all the merged Observables are completed.

Now that we understand the merge strategy, let's see how it can be used in the context of higher-order Observable mapping.

The RxJs mergeMap Operator

If we combine the merge strategy with the notion of higher-order Observable mapping, we get the RxJs mergeMap Operator. Let's have a look at the marble diagram for this operator:

Here is how the mergeMap operator works:

each value of the source Observable is still being mapped into an inner Observable, just like the case of concatMap

Like concatMap, that inner Observable is also subscribed to by mergeMap

as the inner Observables emit new values, they are immediately reflected in the output Observable

but unlike concatMap, in the case of mergeMap we don't have to wait for the previous inner Observable to complete before triggering the next inner Observable

this means that with mergeMap (unlike concatMap) we can have multiple inner Observables overlapping over time, emitting values in parallel like we see highlighted in red in the picture

Checking the mergeMap Network Log

Going back to our previous form draft save example, it's clear that what we need in that case is concatMap and not mergeMap, because we don't want the saves to happen in parallel.

Let's see what happens if we accidentally choose mergeMap instead:

Let's now say that the user interacts with the form and starts inputting data rather quickly. In that case, we would now see multiple save requests running in parallel in the network log:

As we can see, the requests are happening in parallel, which in this case is an error! Under heavy load, it's possible that these requests would be processed out of order.

Observable Switching

Let's now talk about another Observable combination strategy: switching. The notion of switching is closer to merging than to concatenation, in the sense that we don't wait for any Observable to terminate.

But in switching, unlike merging, if a new Observable starts emitting values we are then going to unsubscribe from the previous Observable, before subscribing to the new Observable.

Observable switching is all about ensuring that the unsubscription logic of unused Observables gets triggered, so that resources can be released!

Switch Marble Diagram

Let's have a look at the marble diagram for switching:

Notice the diagonal lines, these are not accidental! In the case of the switch strategy, it was important to represent the higher-order Observable in the diagram, which is the top line of the image.

This higher-order Observable emits values which are themselves Observables.

The moment that a diagonal line forks from the higher-order Observable top line, is the moment when a value Observable was emitted and subscribed to by switch.

Breaking down the switch Marble Diagram

Here is what is going on in this diagram:

the higher-order Observable emits its first inner Observable (a-b-c-d), that gets subscribed to (by the switch strategy implementation)

the first inner Observable (a-b-c-d) emits values a and b, that get immediately reflected in the output

but then the second inner Observable (e-f-g) gets emitted, which triggers the unsubscription from the first inner Observable (a-b-c-d), and this is the key part of switching

the second inner Observable (e-f-g) then starts emitting new values, that get reflected in the output

but notice that the first inner Observable (a-b-c-d) is meanwhile still emitting the new values c and d

these later values, however, are not reflected in the output, and that is because we had meanwhile unsubscribed from the first inner Observable (a-b-c-d)

We can now understand why the diagram had to be drawn in this unusual way, with diagonal lines: it's because we need to represent visually when each inner Observable gets subscribed to (or unsubscribed from), which happens at the points where the diagonal lines fork from the source higher-order Observable.

The RxJs switchMap Operator

Let's then take the switch strategy and apply it to higher order mapping. Let's say that we have a plain input stream that is emitting the values 1, 3 and 5.

We are then going to map each value to an Observable, just like we did in the cases of concatMap and mergeMap and obtain a higher-order Observable.

If we now switch between the emitted inner Observables, instead of concatenating them or merging them, we end up with the switchMap Operator:

Breaking down the switchMap Marble Diagram

Here is how this operator works:

the source observable emits values 1, 3 and 5

these values are then turned into Observables by applying a mapping function

the mapped inner Observables get subscribed to by switchMap

when the inner Observables emit a value, the value gets immediately reflected in the output

but if a new value like 5 gets emitted before the previous Observable got a chance to complete, the previous inner Observable (30-30-30) will be unsubscribed from, and its values will no longer be reflected in the output

notice the 30-30-30 inner Observable in red in the diagram above: the last 30 value was not emitted because the 30-30-30 inner Observable got unsubscribed from

So as we can see, Observable switching is all about making sure that we trigger that unsubscription logic from unused Observables. Let's now see switchMap in action!

Search TypeAhead - switchMap Operator Example

A very common use case for switchMap is a search Typeahead. First let's define our source Observable, whose values are themselves going to trigger search requests.

This source Observable is going to emit values which are the search text that the user types in an input:

This source Observable is linked to an input text field where the user types their search. As the user types the words "Hello World", these are the values emitted by searchText$:

Debouncing and removing duplicates from a Typeahead

Notice the duplicate values, either caused by the use of the space between the two words, or the use of the Shift key for capitalizing the letters H and W.

In order to avoid sending all these values as separate search requests to the backend, let's wait for the user input to stabilize by using the debounceTime operator:

With the use of this operator, if the user types at a normal speed, we now have only one value in the output of searchText$:

Hello World

This is already much better than what we had before: now a value will only be emitted if it's stable for at least 400ms!

But if the user types slowly as he is thinking about the search, to the point that it takes more than 400 ms between two values, then the search stream could look like this:

He
Hell
Hello World

Also, the user could type a value, hit backspace and type it again, which might lead to duplicate search values. We can prevent the occurrence of duplicate searches by adding the distinctUntilChanged operator.

Cancelling obsolete searches in a Typeahead

But more than that, we need a way to cancel previous searches as a new search gets started.

What we want to do here is to transform each search string into a backend search request and subscribe to it, and apply the switch strategy between two consecutive search requests, causing the previous search to be canceled if a new search gets triggered.

And that is exactly what the switchMap operator will do! Here is the final implementation of our Typeahead logic that uses it:

switchMap Demo with a Typeahead

Let's now see the switchMap operator in action! If the user types on the search bar, and then hesitates and types something else, here is what we can typically see in the network log:

As we can see, several of the previous searches were canceled while they were still ongoing, which is great because it releases server resources that can then be used for other things.

The Exhaust Strategy

The switchMap operator is ideal for the typeahead scenario, but there are other situations where what we want to do is to ignore new values in the source Observable until the previous value is completely processed.

For example, let's say that we are triggering a backend save request in response to a click in a save button. We might try first to implement this using the concatMap operator, in order to ensure that the save operations happen in sequence:

This ensures the saves are done in sequence, but what happens now if the user clicks the save button multiple times? Here is what we will see in the network log:

As we can see, each click triggers its own save: if we click 20 times, we get 20 saves! In this case, we would like something more than just ensuring that the saves happen in sequence.

We also want to be able to ignore a click, but only if a save is already ongoing. The exhaust Observable combination strategy allows us to do just that.

Exhaust Marble Diagram

To understand how exhaust works, let's have a look at this marble diagram:

Just like before, we have here a higher-order Observable on the first line, whose values are themselves Observables, forking from that top line. Here is what is going on in this diagram:

Just like in the case of switch, exhaust is subscribing to the first inner Observable (a-b-c)

The values a, b and c get immediately reflected in the output, as usual

then a second inner Observable (d-e-f) is emitted, while the first Observable (a-b-c) is still ongoing

This second Observable gets discarded by the exhaust strategy, and it will not be subscribed to (this is the key part of exhaust)

only after the first Observable (a-b-c) completes, will the exhaust strategy subscribe to new Observables

when the third Observable (g-h-i) is emitted, the first Observable (a-b-c) has already completed, and so this third Observable will not be discarded and will be subscribed to

the values g-h-i of the third Observable will then show up in the output of the result Observable, unlike the values d-e-f, which are not present in the output

Just like the case of concat, merge and switch, we can now apply the exhaust strategy in the context of higher-order mapping.

The RxJs exhaustMap Operator

Let's now have a look at the marble diagram of the exhaustMap operator. Let's remember, unlike the top line of the previous diagram, the source Observable 1-3-5 is emitting values that are not Observables.

Instead, these values could for example be mouse clicks:

So here is what is going on in the case of the exhaustMap diagram:

the value 1 gets emitted, and an inner Observable 10-10-10 is created

the Observable 10-10-10 emits all its values and completes before the value 3 gets emitted in the source Observable, so all the 10-10-10 values were emitted in the output

a new value 3 gets emitted in the input, that triggers a new 30-30-30 inner Observable

but now, while 30-30-30 is still running, we get a new value 5 emitted in the source Observable

this value 5 is discarded by the exhaust strategy, meaning that a 50-50-50 Observable was never created, and so the 50-50-50 values never showed up in the output

A Practical Example for exhaustMap

Let's now apply this new exhaustMap Operator to our save button scenario:

If we now click save let's say 5 times in a row, we are going to get the following network log:

As we can see, the clicks that we made while a save request was still ongoing were ignored, as expected!

Notice that if we would keep clicking for example 20 times in a row, eventually the ongoing save request would finish and a second save request would then start.

How to choose the right mapping Operator?

The behavior of concatMap, mergeMap, switchMap and exhaustMap is similar in the sense they are all higher order mapping operators.

But they also differ in so many subtle ways that there isn't really one operator that can safely be pointed to as a default.

Instead, we can simply choose the appropriate operator based on the use case:

if we need to do things in sequence while waiting for completion, then concatMap is the right choice

for doing things in parallel, mergeMap is the best option

in case we need cancellation logic, switchMap is the way to go

for ignoring new Observables while the current one is still ongoing, exhaustMap does just that

Running GitHub repo (with code samples)

If you want to try out the examples in this post, here is a playground repository containing the running code for this post.

This repository includes a small HTTP backend that will help to try out the RxJs mapping operators in a more realistic scenario, and includes running examples like the draft form pre-save, a typeahead, subjects and examples of components written in Reactive style:

Conclusions

As we have seen, the RxJs higher-order mapping operators are essential for doing some very common operations in reactive programming, like network calls.

In order to really understand these mapping operators and their names, we need to first focus on understanding the underlying Observable combination strategies concat, merge, switch and exhaust.

We also need to realize that there is a higher-order mapping operation taking place, where values are being transformed into separate Observables, and those Observables get subscribed to in a hidden way by the mapping operator itself.

Choosing the right operator is all about choosing the right inner Observable combination strategy. Choosing the wrong operator often does not result in an immediately broken program, but it might lead to some hard-to-troubleshoot issues over time.

I hope that you have enjoyed this post, if you are looking to learn more about RxJs, you might want to check out our other RxJs posts in the RxJs Series.

Also, if you have some questions or comments please let me know in the comments below and I will get back to you.

To get notified of upcoming posts on RxJs and other Angular topics, I invite you to subscribe to our newsletter:

Video Lessons Available on YouTube

Have a look at the Angular University YouTube channel; we publish about 25% to a third of our video tutorials there, and new videos are published all the time.

When using NgRx to build our application, one of the first things that we have to do is to decide what is the best possible format for storing data inside the store.

Handling the business data in our centralized store is something that we will need to do in any NgRx application, but the process can be repetitive and time-consuming if we have to come up with our own ad-hoc solution.

We often find ourselves handwriting the exact same reducer logic and selectors for different types of data, which is error prone and slows down the development process.

In this post, we are going to learn how NgRx Entity really helps us to handle the business data in our store.

We are going to understand in detail what is the value proposition of NgRx Entity and of the Entity State format that it uses, we will learn exactly what problem NgRx Entity solves and know when to use it and why.

Table of Contents

In this post, we will cover the following topics:

What is an Entity?

How to store collections of entities in a store?

designing the entity store state: Arrays or Maps?

What is NgRx Entity, when to use it?

The NgRx Entity Adapter

Defining the default entity sort order

Defining the Entity initial state

Write simpler reducers with NgRx Entity

Using NgRx Entity Selectors

What NgRx Entity is not designed to do

Configuring a custom unique ID field

Scaffolding an Entity using NgRx Schematics

The NgRx Entity Update<T> type

Github repo with running example

Conclusions

Note that this post builds on other store concepts such as actions, reducers and selectors. If you are looking for an introduction to NgRx Store and to the store architecture in general, have a look at this post:

So without further ado, let's get started in our NgRx Entity deep dive! Let's start at the beginning and start by understanding first what is an entity.

What is an Entity?

In NgRx, we store different types of state in the store, and this typically includes:

business data, such as for example Courses or Lessons, in the case of an online course platform

some UI state, such as for example UI user preferences

An Entity represents some sort of business data, so Course and Lesson are examples of entity types.

In our code, an entity is defined as a Typescript type definition. For example, in an online course system, the most important entities would be Course and Lesson, defined with these two custom object types:
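Here is a sketch of what these two type definitions could look like (the exact field names are assumptions based on the examples used later in the post):

```typescript
interface Course {
  id: number;          // unique technical identifier
  seqNo: number;       // position of the course in the collection
  description: string;
}

interface Lesson {
  id: number;
  courseId: number;    // links the lesson back to its course
  seqNo: number;       // sequential number of the lesson inside the course
  description: string;
}

// example instances
const course: Course = { id: 1, seqNo: 1, description: 'Angular For Beginners' };
const lesson: Lesson = { id: 1, courseId: 1, seqNo: 1, description: 'Introduction' };
```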

The Entity unique identifier

As we can see, both entities have a unique identifier field called id, which can be either a string or a number. This is a technical identifier that is unique to a given instance of the entity: for example, no two courses have the same id.

Most of the data that we store in the store are entities!

How to store collections of entities in a store?

Let's say that for example, we would like to store a collection of courses in the in-memory store: how would we do that? One way would be to store the courses in an array, under a courses property.

The complete store state would then look something like this:
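For example (the course data here is made up for illustration):

```typescript
// hypothetical store state, with the courses stored in an array
const storeState = {
  courses: [
    { id: 1, seqNo: 1, description: 'Angular For Beginners' },
    { id: 2, seqNo: 2, description: 'Angular Core Deep Dive' }
  ]
};

// looking up a course by id means scanning the whole array
const course = storeState.courses.find(c => c.id === 2);
```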

Why not store related entities in an Array?

Storing entities in the store in the form of an array is the first thing that comes to mind, but that approach can cause several potential problems:

if we want to look up a course based on its known id, we would have to loop through the whole collection, which could be inefficient for very large collections

more than that, by using an array we could accidentally store different versions of the same course (with the same id) in the array

if we store all our entities as arrays, our reducers will look almost the same for every entity

For example, take the simple case of adding a new entity to the collection. We would be reimplementing several times the exact same logic for adding a new entity to the collection and reordering the array in order to obtain a certain custom sort order

As we can see, the format under which we store our entities in the store has a big impact on our program.

Let's then try to find out what would be the ideal format for storing entities in the store.

Designing the entity store state: Arrays or Maps?

One of the roles of the store is to act as an in-memory client-side database that contains a slice of the whole database, from which we derive our view models on the client side via selectors.

This is the opposite of the more traditional design, which consists of bringing the view model from the server via API calls. Because the store is an in-memory database, it makes sense to store the business entities in their own in-memory database "table", and give them a unique identifier similar to a primary key.

The data can then be flattened out, and linked together using the entity unique identifiers, just like in a database.

A good way of modeling that is to store the entity collection under the form of a Javascript object, which works just like a Map. In this setup, the key of the entity would be the unique id, and the value would be the whole object.

In that new format, this is what the whole store state would look like:
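For example, with the same hypothetical courses as before:

```typescript
// hypothetical store state, with the courses stored in a map keyed by id
const storeState = {
  courses: {
    1: { id: 1, seqNo: 1, description: 'Angular For Beginners' },
    2: { id: 2, seqNo: 2, description: 'Angular Core Deep Dive' }
  }
};

// id lookup is now a simple key access
console.log(storeState.courses[1].description);
```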

Designing the state for id lookups

As we can see, this format makes it really simple to lookup entities by id, which is a very common operation. For example, in order to lookup the course with an id of 1, we would simply have to write:

state.courses[1]

It also flattens out the state, making it simpler to combine the multiple entities and 'join' them via a selector query. But there is only one problem: we have lost the information about the order of the collection!

This is because the properties of a Javascript object have no order associated with them, unlike arrays. Is there a way to still store our data by id in a map, and still preserve the information about the order?

Designing the state for preserving entity order

Yes there is, we just have to use both a Map and an Array! We store the objects in a map (called entities), and we store the order information in an array (called ids):
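Sketched out, the combined state could look like this:

```typescript
interface Course { id: number; seqNo: number; description: string; }

// Entity State format: a map of entities plus an array of ids holding the order
const coursesState: { ids: number[]; entities: { [id: number]: Course } } = {
  ids: [1, 2],
  entities: {
    1: { id: 1, seqNo: 1, description: 'Angular For Beginners' },
    2: { id: 2, seqNo: 2, description: 'Angular Core Deep Dive' }
  }
};

// the ordered collection can be rebuilt at any time from ids + entities
const orderedCourses = coursesState.ids.map(id => coursesState.entities[id]);
```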

The Entity State format

This state format, which combines a map of entities with an array of ids is known as the Entity State format.

This is the ideal format for storing business entities in a centralized store, but maintaining this state would represent an extra burden while writing our reducers and selectors, if we would have to write them manually from scratch.

For example, if we would have to write some type definitions to represent the complete store state, they would look something like this:
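A sketch of those type definitions (field types are assumptions):

```typescript
interface Course { id: number; seqNo: number; description: string; }
interface Lesson { id: number; courseId: number; seqNo: number; description: string; }

// note how the two state shapes are almost identical — this is the
// repetition that NgRx Entity eliminates
interface CoursesState {
  ids: number[];
  entities: { [id: number]: Course };
}

interface LessonsState {
  ids: number[];
  entities: { [id: number]: Lesson };
}

interface AppState {
  courses: CoursesState;
  lessons: LessonsState;
}
```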

As we can see, we already have here some repetition going on, as the types CoursesState and LessonsState are almost identical. More than that, all the reducer and selector code for these two entities would be very similar as well.

Writing reducers that support the Entity State format

Take for example a reducer for a LoadCourse action, that takes the current CoursesState and adds a new course to it and reorders the collection based on the seqNo field.

This is what the reducer logic for the LoadCourse action would look like:
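Sketched as a plain function (a real reducer would switch on an NgRx action type), the logic could look like this:

```typescript
interface Course { id: number; seqNo: number; description: string; }

interface CoursesState {
  ids: number[];
  entities: { [id: number]: Course };
}

// add the new course to the entities map and rebuild the ids array,
// sorted by the seqNo field — all without mutating the previous state
function loadCourse(state: CoursesState, course: Course): CoursesState {
  const entities = { ...state.entities, [course.id]: course };
  const ids = Object.values(entities)
    .sort((c1, c2) => c1.seqNo - c2.seqNo)
    .map(c => c.id);
  return { ids, entities };
}
```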

As we can see, it's quite some code for simply adding a course to the store. The problem is that we would have to write similar code for other common operations such as updating a course in the store or deleting it.

Avoiding repeated reducer logic

But a bigger problem than that is that the code for an equivalent LoadLesson action, that loads one single Lesson into LessonsState would be nearly identical:

Except for using the type Lesson instead of Course, this code is practically identical to the reducer logic that we wrote before!

As we can see, keeping our entities in this dual array and map scenario gives rise to a lot of repetitive code.

Avoiding repeated Selector logic

More than the repeated type definitions, the repeated initial state, and almost identical reducer logic, we would also have a lot of nearly identical selector logic.

For example, here is a commonly needed selector for the Course entity, which selects all courses available in the store:
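Stripped of the createSelector and createFeatureSelector memoization wrappers, the selector logic amounts to something like this plain-function sketch:

```typescript
interface Course { id: number; seqNo: number; description: string; }
interface CoursesState { ids: number[]; entities: { [id: number]: Course }; }
interface AppState { courses: CoursesState; }

// feature selector: picks the courses slice off the whole store state
const selectCoursesState = (state: AppState) => state.courses;

// puts all courses in an array, sorted by the seqNo field
const selectAllCourses = (state: AppState) =>
  Object.values(selectCoursesState(state).entities)
    .sort((c1, c2) => c1.seqNo - c2.seqNo);
```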

Quick explanation on feature selectors

Notice the selectCoursesState feature selector, this is an auxiliary selector that simply takes the property courses of the whole store state, like this:

storeState["courses"]

The advantage of using this utility is that this is type safe, and makes it simple to define lazy loaded selectors, that don't have access to the type definition of the root store state.

The selector selectAllCourses gets all the courses in the store and puts them in an array, and sorts the array according to the seqNo field.

The problem is that we would need some nearly identical logic for the Lesson entity:

As we can see, this code is almost identical to the selector that we wrote before for the Course entity.

It's a lot of repeated code

Let's summarize what type of code we have seen so far that is almost identical:

This is a lot of repeated code just to keep the data in our database in this optimized Entity State format. The problem is, that this is the ideal format for storing related entities in the store, and if we don't use it we will likely end up running into other issues.

The good news is that we can avoid almost all this repeated code by leveraging NgRx Entity!

What is NgRx Entity, when to use it?

NgRx Entity is a small library that helps us to keep our entities in this ideal Entity state format (array of ids plus map of entities).

This library is designed to be used in conjunction with NgRx Store and is really a key part of the NgRx ecosystem. It's just so much better to use NgRx Entity from the start of our project instead of trying to come up with our own ad hoc in-memory database format.

Let's now learn the many ways that NgRx Entity helps us to write our NgRx application.

Defining the Entity State

Going back to our Course entity, let's now redefine the entity state using NgRx Entity:

This is identical to the type definition we wrote before, but we now don't have to define the ids and entities property for each separate entity. Instead, we can simply inherit from EntityState and have the same result with the same type safety and much less code.

The NgRx Entity Adapter

In order to be able to use the other features of NgRx Entity, we need to first create an entity adapter. The adapter is a utility class that provides a series of utility functions that are designed to make it really simple to manipulate the entity state.

The adapter is what is going to allow us to write all our initial entity state, reducers and selectors in a much simpler way, while still keeping our entity in the standard EntityState format.

Here is the adapter for the Course entity, configured to sort our entities using the seqNo field:

Defining the default entity sort order

Notice here that we have used the optional sortComparer property, that is used to set the sorting order of the Course entity, which is what is going to determine the order of the ids array for this entity.

If we don't use this optional property, then the id field is going to be used to sort the courses.

Write simpler reducers with the NgRx Entity Adapter

Let's now take the adapter and use it to define the initial state that we will need for our reducers.

We will then implement the same reducer logic as before:

Notice how much easier it is now to write our reducer logic using the adapter. The adapter is going to help us manipulate the existing CoursesState, by doing in the addOne call everything that we were doing before manually:

addOne will create a copy of the existing state object, instead of mutating the existing state

then addOne is going to create a copy of the ids array and it will add the new course in the correct sort position

a copy of the entities object is going to be created, that points to all previous courses objects, without recreating those objects via a deep copy

the new entities object will have the new course added

Benefits of using the entity adapter

As we can see, by using the adapter to write our reducers, we can spare a lot of work and avoid common reducer logic bugs, as this type of logic is easy to get wrong.

It's not uncommon to accidentally mutate the store state, which might cause problems especially if we are using OnPush change detection in our application.

Using the adapter prevents all those problems, while greatly reducing the amount of code needed to write our reducers.

Operations supported by the NgRx Entity Adapter

Besides addOne, the NgRx Entity Adapter supports a whole series of common collection modification operations, that we would otherwise have to implement ourselves by hand.

Here is a complete set of examples for all the supported operations:

The adapter methods behave in the following way:

addOne: add one entity to the collection

addMany: add several entities

addAll: replaces the whole collection with a new one

removeOne: remove one entity

removeMany: removes several entities

removeAll: clear the whole collection

updateOne: Update one existing entity

updateMany: Update multiple existing entities

upsertOne: Update or Insert one entity

upsertMany: Update or Insert multiple entities

Now imagine what it would be like if we had to implement all this reducer logic ourselves!

Using NgRx Entity Selectors

Another thing that NgRx Entity helps us with is commonly needed selectors, such as selectAllCourses and selectAllLessons.

By calling the adapter's getSelectors() method, we get a whole series of commonly needed selectors, generated on the fly:

These selectors are all ready to be used directly in our components or as the starting point for building other selectors.

Notice that these selectors are all named the same way independently of the entity, so if you need several in the same file, it's recommended to import them in the following way:

These selectors are ready to be used and are just as type safe as the ones that we wrote manually ourselves.

What NgRx Entity is not meant to do

Notice that although NgRx Entity made it much easier to write the state, reducer and selector logic of the Course entity, we still had to write the reducer function itself, although this time around in a simpler way using the adapter.

Using NgRx Entity does not avoid having to write reducer logic for each entity, although it makes it much simpler.

This means that for the Lesson entity we would have to do something very similar. The convention is to put all this closely related code that uses the adapter directly in the same file where our entity reducer function is defined.

In the case of the Lesson entity, this is what the complete lesson.reducers.ts file would look like:

In practice, each entity has slightly different reducer logic so there will be no code repetition between reducer functions.

If you are looking for a solution that takes this one step further and removes the need to write entity specific reducer logic, have a look at ngrx-data.

Configuring a custom Unique ID field

As we have mentioned, the entities in our program should all have a technical identifier field called id. But if for some reason this field is either:

not available in a given entity

or it has a different name

or we simply would prefer to use another property which happens to be a natural key

We can still do that by providing a custom id selector function to the adapter. Here is an example:

This function will be called by the adapter to extract a unique key from a given entity.

In this example, we are creating a unique identifier for the Lesson entity by concatenating the courseId property with the lesson sequential number, which is unique for a given course.

Handling custom state properties

So far we have been defining our entity state only by extending the EntityState type. But it's possible that our entity state also has custom properties other than the standard ids and entities.

Let's say that for the Course entity, we also need an extra flag which indicates if the courses have already been loaded or not. We could define that extra state property in CoursesState, and then update that property in our reducer logic using the adapter.

Here is a complete example of the CoursesState reducer file courses.reducers.ts, now including the extra state property:

Here is what we had to do to include this extra property:

first we have added the allCoursesLoaded property to the type definition of CoursesState

next, we need to define the initial value of this property in initialCoursesState, by passing an optional object to the call to getInitialState()

we now can set this property in our reducer logic, like we are doing here in the ALL_COURSES_LOADED reducer.

in order to do so, we simply need to make a copy of the CoursesState using the spread operator (...), then we modify the property and pass this new state object to the adapter call

Scaffolding an Entity using NgRx Schematics

If you would like to quickly generate a reducer file like the ones we have shown in this post, you can get a very good starting point by using NgRx Schematics.

The first thing we need to do in order to use entity schematics is to set this CLI property:

ng config cli.defaultCollection @ngrx/schematics

After this, we can now generate a completely new Lesson reducer file by running the following command:

ng generate entity --name Lesson --module courses/courses.module.ts

What does NgRx Entity Schematics generate?

Let's now inspect the output generated by the command above. First, we have an empty Entity model file:

This schematic command will also generate a complete action file, with each action corresponding to one state modification method in the entity adapter:

Reviewing the content of the Actions file

This file follows the normal recommended structure of an action file:

one enum LessonActionTypes with one entry per Lesson action

one class per action, with the data passed to the action via a payload property

one union type LessonActions at the bottom, with all the action classes of this file

This last union type is especially helpful for writing reducer logic. Thanks to it, we can have full type inference and IDE auto-completion inside the case blocks of our reducers.

The NgRx Entity Update type

Notice also that in the definition of some actions we are using the type Update<Lesson>. This is an auxiliary type provided by NgRx Entity to help model partial entity updates.

This type has a property id that identifies the updated entity, and another property called changes that specifies what modifications are being made to the entity.

Here is an example of a valid update object for the Course type:

Reviewing the content of the reducers file

The NgRx Entity schematic command will also generate the Entity reducer file plus a test, as expected. Here is the content of the reducer file:

How to best use the schematics output?

Notice that the generated schematics files (like any other file generated by the CLI) are not meant to remain unchanged.

In fact, you might not even want to use the actions file for example, but instead write your own actions with a given set of conventions like the ones recommended in this talk:

Also, probably not all actions are going to be needed in the application, so it's important to keep only the ones that we need and adapt them. As usual, the files that are generated by schematics are simply a helping starting point that needs to be adapted on a case by case basis.

Github repo with running example

For a complete running example of a small application that shows how to use NgRx Entity with the two entities that we have used in the examples (Course and Lesson), have a look at this repository.

Here are the NgRx DevTools showing the store content with the two entities:

Conclusions

NgRx Entity is an extremely useful package, but in order to understand it, it's essential to first be familiar with the base store concepts like Actions, Reducers and Selectors, and with the store architecture in general.

If we are already familiar with these concepts, we probably already tried to find the best way to structure the data inside our store.

NgRx Entity provides an answer for that by proposing the Entity State format for our business entities, which is optimized for lookups by id while still keeping the entity order information.

The NgRx Entity Adapter together with NgRx Schematics makes it very simple to get started using NgRx Entity to store our data.

But notice that not all store state needs to use NgRx Entity!

NgRx Entity is specifically designed to handle only the business entities in our store, making it simple to store them in memory in a convenient way.

Learn more about the NgRx Ecosystem

I hope that this post helps with getting started with Ngrx Entity, and that you enjoyed it!

If you are looking to learn how to get started with the NgRx ecosystem, you might want to check the previous blog posts of this series:

Video Lessons Available on YouTube

https://blog.angular-university.io/angular-ngrx-devtools/ (Wed, 16 May 2018)

This post is a step-by-step guide for setting up your Ngrx development environment, namely the Ngrx DevTools. But not only that: we will also talk about some best practices for developing Ngrx applications in general.

These practical tips will likely make a huge difference in your Ngrx development experience (if you haven't implemented them already).

Any Ngrx project would benefit from having these tips in place in order to make the most out of the DevTools, and of the store architecture in general.

Action conventions and best practices, to help to make the most out of the DevTools and of the store architecture in general

Conclusions

So without further ado, let's get started with our Ngrx DevTools deep dive!

What are the Ngrx DevTools?

The Ngrx DevTools are a Chrome / Firefox browser extension that includes a UI for inspecting and interacting with a Ngrx-based application.

As an example, here is a screenshot of the Ngrx DevTools in action:

The main features of the Ngrx DevTools

As we can see, inside the Ngrx DevTools we have:

an Action Log, that gives us a great understanding of how the application works, and what parts of the application are triggering which Actions

A State inspector, that allows us to easily inspect the in-memory store state

a Time-travelling debugger (the Play button and timeline at the bottom), that allows us to replay any Action at any given point of the debugging session, and even replay the whole session while navigating through multiple screens

What are the benefits of the Ngrx DevTools?

Here are some of the benefits of the Ngrx DevTools:

we can visually see the content of the store at any moment, which is essential for debugging

we can have a new developer on the team inspect the application with the DevTools, and help them get a good initial understanding of how the application works

if we manage to get the client state of users in production, we can use the DevTools to reproduce bugs locally, just by importing the user production state

The key benefit of the DevTools is that it gives us some immediate visual feedback about what the application is doing at all times, making it much easier to understand what is going on.

Installing the Ngrx DevTools with Ngrx Schematics

It's best to set up the Ngrx DevTools from the very beginning of the project. We can set up an Ngrx Store and configure the DevTools all in one go with the following Angular CLI command:

ng generate store AppState --root --module app.module.ts

In order for this command to work, we will first need to enable Ngrx Schematics, by adding this Angular CLI configuration:

ng config cli.defaultCollection @ngrx/schematics

After running these commands, we will see that the Ngrx DevTools are enabled in the root application module, but only if the application is running in development mode:
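The relevant part of the generated root module typically looks something like this (the reducer and environment import paths are assumptions):

```typescript
import { NgModule } from '@angular/core';
import { StoreModule } from '@ngrx/store';
import { StoreDevtoolsModule } from '@ngrx/store-devtools';
import { environment } from '../environments/environment';
import { reducers } from './reducers';

@NgModule({
  imports: [
    StoreModule.forRoot(reducers),
    // instrument the store for the DevTools, but only in development mode
    !environment.production ? StoreDevtoolsModule.instrument() : [],
  ],
})
export class AppModule {}
```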

And with this, our application now supports the Ngrx DevTools! Now we just have to install the DevTools extension in our browser, by following these instructions.

After installing the extension, you now should have the Ngrx DevTools available under the "Redux" menu option of your browser development tools (open them with Ctrl+Shift+I in Chrome).

After opening the Ngrx DevTools, you will have to reload the application in order to start debugging the application.

Setting up the Router integration (ngrx-router) from the beginning

With the Ngrx DevTools, having the browser extension up and running is only half the story. As soon as we start using the DevTools, we will run into scenarios where the time-traveling feature comes in handy.

But the time-traveling debugger by default cannot navigate through multiple application screens, so we can't use it to effectively replay the complete user UI session from the beginning.

In order to enable full time-traveling debugging, we need to somehow integrate the DevTools with the Angular Router, so that going back in the Action timeline also means navigating to previous screens.

The Ngrx Router Store module allows us to do exactly that, so let's go ahead and enable it in our root application module.

Installing the Ngrx Router Store module

In order to enable the Ngrx Store router integration, we need to first declare the following in the root module imports section:
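A sketch of that declaration, as a minimal module (assuming @ngrx/router-store is installed):

```typescript
import { NgModule } from '@angular/core';
import { StoreRouterConnectingModule } from '@ngrx/router-store';

@NgModule({
  imports: [
    // save the router state in the store, under the 'router' state property
    StoreRouterConnectingModule.forRoot({ stateKey: 'router' }),
  ],
})
export class AppModule {}
```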

This configuration means that the Router Store module will save its state inside the store under an application state property named router (configured via the stateKey).

Setting up the Router Store reducer

In order to populate the store with this new router state, we will need a new reducer for handling all state under the router property. We can configure this reducer in our root reducer map in the following way:
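A sketch of that reducer map entry, using the routerReducer shipped with @ngrx/router-store:

```typescript
import { ActionReducerMap } from '@ngrx/store';
import { routerReducer, RouterReducerState } from '@ngrx/router-store';

export interface AppState {
  router: RouterReducerState;
}

export const reducers: ActionReducerMap<AppState> = {
  // routerReducer handles all the state under the 'router' property
  router: routerReducer,
};
```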

And with this, we have finished the setup of the Ngrx Router Store module! Let's now have a look at how the Router Store integration works in practice.

The Ngrx Router Store in Action

Now as we restart our DevTools, we will see that router navigation occurrences now show up as actions in the Action Log:

If we inspect the content of the action logged as ROUTER_NAVIGATION using the Ngrx DevTools, we see that it contains all the information necessary for performing a router transition:

Also, if we inspect the state of the store, we will see that there is now some new state under the router property:

As we can see, now with each router navigation the Ngrx Router Store module is capturing the router state and saving it inside the store, under the router property!

This will allow us to replay all the actions of the user from the beginning of the debugging session, including router navigations.

How important is the Ngrx Router Store module, is it optional?

It might look like the Ngrx Router Store module is optional.

But if you really want to be able to use the Ngrx DevTools in a lot of practical scenarios where often router navigation is involved, then this module ends up becoming essential.

Note that although this module would potentially also allow us to trigger router navigation by dispatching store actions (see here to learn how), that is not the main goal of the module.

The main goal of the Router Store module is not to replace the Angular Router navigation API.

Instead, its main goal is to enable the DevTools and time-traveling debugging to work well in a much larger number of scenarios where routing is involved, although having the router state in the store might also come in handy in other situations.

Using a custom router serializer in order to avoid freezing the DevTools

If you are using an Ngrx version earlier than this one, you will have likely noticed that the Ngrx DevTools might crash due to unresponsiveness problems.

This was caused due to problems in attempting to serialize the Angular Router state, which by default contains cycles in its object graph.

This means that in order to solve this issue and have fully functioning Ngrx DevTools, you might have to install a custom Router state serializer, that stores the router state in a format without graph cycles.

This unresponsiveness/crash problem is solved in newer releases, but if you still have it in your application then it's better to quickly install a custom router serializer, in just a few steps.

Here are the instructions on how to install the custom router serializer.

Note: for earlier Ngrx releases that had this DevTools unresponsiveness issue, using a custom router state serializer was usually not optional in practice

With this last problem solved, you should now have fully functional Ngrx Development tools, with full time traveling debugging capability!

Prevent several types of bugs by using Ngrx Store Freeze

Let's now cover another very useful development tool that we have available in the Ngrx ecosystem: Ngrx Store Freeze.

As you know, the store reducer functions need to be written in a very precise way, and not following that well-known set of rules can be a recipe for some very hard to troubleshoot bugs.

One of those rules is that reducer functions should be pure functions, that take as input the current state and the action, and return the new state.

The reducer function is not expected to mutate in any way either the existing state or the dispatched action. Instead, it's expected that the reducer function returns a new version of the state without mutating any of its inputs.

Not doing so might potentially cause some very hard to troubleshoot issues in our application, especially if we are using OnPush change detection in large parts of our component tree.

Other than that, it also breaks the time-traveling debugger feature.

Mutating the store state at the component level

Another common source of hard-to-debug errors while building a store application is the possibility that some component in our component tree accidentally gets a direct reference to the store state and mutates it.

Having a direct reference to mutable store state would allow a component to mutate the store state directly and therefore break the store pattern, instead of having to dispatch an action in order to change it.

Mutating the store state directly either via the component tree or inside reducers also breaks the time-traveling feature of the DevTools.

This, combined with the possibility of introducing time-consuming bugs and architecture errors makes this problem something that we want to tackle from the very beginning.

As we will see, Ngrx Store Freeze provides us with a really simple solution for this very common set of potential problems.

How does Ngrx Store Freeze work?

Ngrx Store Freeze is easy to install and provides an effective solution to all the previously mentioned problems related to store state mutability.

The Ngrx Store Freeze module automatically deep freezes our full store state object as well as dispatched actions. It does so by going to each object property of the store state and setting it to read-only.

Nested properties are also recursively frozen, meaning that the whole store state object is made effectively immutable.

With this setup, it's now impossible to mutate the store state or the dispatched actions, both in our reducers and in our component tree.

Installing Ngrx Store Freeze

Ngrx Store Freeze is a meta-reducer, meaning that it's just a normal reducer function. The difference from a normal reducer is that a meta-reducer is applied on top of the output of another reducer function.

Meta-reducers can then be combined in an ordered chain, with each meta-reducer building on top of the output of the previous meta-reducer.

The Ngrx Store Freeze meta-reducer will be applied after all the normal reducers have been triggered for a given action, and it will freeze the whole store state before the state can even be sent back to the component tree.

Here is how we install the Ngrx Store Freeze meta-reducer:
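A sketch of that installation (import paths assumed), using a meta-reducers array that is passed to StoreModule.forRoot():

```typescript
import { MetaReducer, StoreModule } from '@ngrx/store';
import { storeFreeze } from 'ngrx-store-freeze';
import { environment } from '../environments/environment';
import { reducers } from './reducers';

// apply the freeze meta-reducer in development mode only
export const metaReducers: MetaReducer<any>[] = !environment.production
  ? [storeFreeze]
  : [];

// in the root module imports:
// StoreModule.forRoot(reducers, { metaReducers })
```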

Notice that Ngrx Store Freeze is only active in development mode. In production, we won't get the potential performance penalty linked to deep freezing the store state with each action dispatch.

Notice that this performance hit is only noticeable for a very large amount of store state.

Ngrx Store Freeze in Action

Let's then go ahead and see how Ngrx Store Freeze works! Here is an example of an incorrectly written reducer, that accidentally mutates the original store state:
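A minimal sketch of such an incorrectly written reducer (the state shape and action type are assumptions, written in plain TypeScript so the mutation is easy to see):

```typescript
interface AuthState {
  loggedIn: boolean;
  userName: string | null;
}

const initialAuthState: AuthState = { loggedIn: false, userName: null };

// BROKEN: this reducer mutates the existing state object in place,
// instead of returning a new state object
function authReducer(
  state: AuthState = initialAuthState,
  action: { type: string; payload?: any }
): AuthState {
  switch (action.type) {
    case 'LOGIN':
      state.loggedIn = true;           // mutation!
      state.userName = action.payload; // mutation!
      return state;
    default:
      return state;
  }
}
```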

As we can see, the reducer for the login action is directly mutating the original authentication feature state.

Without Ngrx Store Freeze, this might not cause any immediately visible issue, other than breaking the time-traveling debugger feature of the Ngrx DevTools.

But if our application is using OnPush, this would likely already start causing view synchronization issues, where the view and the store data are no longer in sync.

Catching mutability issues from the start

In order to avoid this problem altogether, we just have to activate Ngrx Store Freeze. This time around, we would get the following error in the console:

This error is caused by Ngrx Store Freeze, and it's thrown by the JavaScript runtime whenever we try to mutate a read-only property of a frozen object.

The error might look a bit intimidating at first, but it's actually a very useful error message: notice that on the stack trace we even have the exact line of the reducer function that is causing the issue.

By refactoring the reducer in order to return a new state object instead of mutating the existing one, the problem is now fixed:
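A sketch of the corrected version of the same hypothetical reducer, returning a new state object via the spread operator:

```typescript
interface AuthState {
  loggedIn: boolean;
  userName: string | null;
}

const initialAuthState: AuthState = { loggedIn: false, userName: null };

// FIXED: build a brand new state object with the spread operator,
// leaving the previous state untouched
function authReducer(
  state: AuthState = initialAuthState,
  action: { type: string; payload?: any }
): AuthState {
  switch (action.type) {
    case 'LOGIN':
      return { ...state, loggedIn: true, userName: action.payload };
    default:
      return state;
  }
}
```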

Preventing store state mutation at the component level

Note that Ngrx Store Freeze also prevents issues caused by attempting to mutate the store state directly at the component level, which would break the store architecture in general as well as the DevTools time-traveling functionality.

If you don't have Ngrx Store Freeze in your Ngrx application, you might want to give it a go and see if you can preventively catch a couple of potential bugs in your application!

It's best to use it from the beginning and avoid all of these potential issues altogether. Besides, ensuring store state immutability makes it really simple to adopt OnPush change detection everywhere in the application (if needed), which depending on the application can make for a nice UI performance boost.

Action Conventions: make the most out of the DevTools and the Store architecture

To make the most out of the Ngrx DevTools and the store architecture, it's important to name our actions and choose their action type description string according to certain useful conventions.

Going back to our action log, notice how we can already tell something about the application just by reading the log:

Without looking at any source code, we can already tell that:

the user logged in

then the user navigated to the Home Page

There, a list of courses was requested

Eventually, the list was loaded from the backend using an API request

This log is readable because the action types of this application are written in the following way:
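For illustration, action type strings following this style could be declared like so (the exact names are assumptions, not the original post's code):

```typescript
// [Source] Event: the source screen/effect first, then the application event
export enum AuthActionTypes {
  Login = '[Login Page] User Login',
  Logout = '[Top Menu] User Logout',
}

export enum CourseActionTypes {
  CourseRequested = '[Course Home Page] Course Requested',
  CourseLoaded = '[Courses API] Course Loaded',
}
```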

Action conventions

These action types follow the following convention:

[Source] Event

The convention works like this:

The Source is the part of the application that triggered the Action. For example, the screen that dispatched the Action

The Event is the application event linked to the Action

You can learn a lot more about Store architecture best practices in general and about the Source/Event convention in particular by watching the following awesome talk by Mike Ryan "Good Action Hygiene with Ngrx":

Important takeaways from the Source/Event Action convention

This convention implies a couple of things about the way that we look at Actions.

The first thing is that Actions are meant to be specific to a given screen or Effect, and not generic. For Example, the Action Source "Course Home Page" is very specific to a given component of the application.

This also means that in order to have readable action logs, we should avoid reusing Actions between screens.

Instead, if the Actions share the same reducer logic, we can use the switch block fall-through feature in our reducers to apply the same reducing logic to multiple actions.

Action Design: Events instead of Commands

We should avoid designing our Actions as Commands: we should make them Events instead. The difference is subtle but critical in terms of long-term application maintainability.

Making an Action an Event means that an Action reports something that has already happened in the near past, and that is well known in the scope of a given component or effect.

This last point is especially important because this way the component dispatching the Action cannot fall into the pitfall of becoming aware of other parts of the Application.

The dispatching component simply dispatches an Event under the form of an Action, and the store (which includes the Effects) will decide what to do in response to that: call a backend, run the reducers, both, etc.

Why not make an action a Command?

If we would make the action a Command, we would be making the component decide what the store or even what other components should do in response to the Action, indirectly via the store.

One of the main goals of the store pattern is to remove from the components the ability to directly modify application state, instead only the store can do that. The components simply subscribe to the state without modifying it, project it into the view, and report events back to the store.

The components that get data from the store are meant to be kept well isolated and unaware of each other, and Command-like actions like IncrementTopMenuCounter dispatched in multiple parts of the application would not allow that.

Conclusions

One of the best ways to make the most out of the Ngrx store architecture is to have the Ngrx DevTools fully up and running from the very beginning of the project.

And to have the complete time-traveling functionality up and running, we really need the Router Store integration. Depending on the Ngrx release, a custom Router state serializer is going to be essential as well.

With this, we will have solid Ngrx DevTools for the whole duration of the project, so taking a moment to get them running at the beginning is well worth it.

Also, in order to avoid common mutability-related bugs and allow a simplified switch to OnPush, it's also important to ensure store state immutability.

The simplest way to do that without adopting something like ImmutableJs is to simply use Ngrx Store Freeze in development mode. This helps a lot in ensuring that our reducers are correctly written, as well as preventing our components from accidentally mutating the store state.

Finally, but probably the most important, to make the most out of the Store architecture and the Ngrx DevTools it's important to learn about Action design best practices.

More on Ngrx

To learn more about Ngrx in general, have a look at this great batch of ng-conf talks. I hope that this post helps with getting started with the Ngrx DevTools, and that you enjoyed it!

If you have some questions or comments please let me know in the comments below and I will get back to you.

To get notified of upcoming posts on Ngrx and other Angular topics, I invite you to subscribe to our newsletter:

Video Lessons Available on YouTube

Have a look at the Angular University Youtube channel, we publish about 25% to a third of our video tutorials there, new videos are published all the time.

https://blog.angular-university.io/angular-viewchild/ (Thu, 26 Apr 2018)

The Angular @ViewChild decorator is one of the first decorators that you will run into while learning Angular, as it's also one of the most commonly used decorators.

This decorator has a lot of features: some of them might not be very well known but they are extremely useful.

In this post, we are going to quickly cover all the features that we have available in this decorator while giving practical examples for each use case along the way.

Table of Contents:

In this post, we will cover the following topics:

When do we need the @ViewChild decorator?

The AfterViewInit Lifecycle Hook

What is the scope of @ViewChild template queries?

using @ViewChild to inject a component

how to use @ViewChild to inject a plain HTML element

using @ViewChild to inject the plain HTML element of a component

how to use @ViewChild to inject one of the multiple directives applied to a single element or component

Code Samples (GitHub Repository)

Conclusion

So without further ado, let's get started with our @ViewChild decorator deep dive!

When do we need the @ViewChild decorator?

As we know, in Angular we define a component template by combining plain HTML elements with other Angular components.

As an example, we have here an Angular AppComponent template that mixes both HTML and custom components in its template:

As we can see, this template includes several different elements types, such as:

We are going to base all our examples on this initial template. The <color-sample> component is the little blue palette square, and next to it we have an input that is linked to a color picker popup.

When to use the @ViewChild decorator?

Many times we can coordinate these multiple components and HTML elements directly in the template by using template references like #primaryInput or #primaryColorSample, without using the AppComponent class.

But this is not always the case! Sometimes, the AppComponent might need references to the multiple elements that it contains inside its template, in order to mediate their interaction.

If that's the case, then we can obtain references to those template elements and have them injected into the AppComponent class by querying the template: that's what @ViewChild is for.

Using @ViewChild to inject a reference to a component

Let's say that AppComponent needs a reference to the <color-sample> component that it uses inside its template, in order to call a method directly on it.

In that case, we can inject a reference to the <color-sample> instance named #primaryColorSample by using @ViewChild:
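A sketch of that injection (the component import path is an assumption):

```typescript
import { Component, ViewChild } from '@angular/core';
import { ColorSampleComponent } from './color-sample/color-sample.component';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
})
export class AppComponent {
  // injects the ColorSampleComponent instance linked to #primaryColorSample
  @ViewChild('primaryColorSample')
  primarySampleComponent: ColorSampleComponent;
}
```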

By using @ViewChild, the primarySampleComponent member variable is going to be filled in by Angular with a ColorSampleComponent instance.

This injected ColorSampleComponent instance is the same one linked to the <color-sample> custom element present in the template.

When are variables injected via @ViewChild available?

The value of this injected member variable is not immediately available at component construction time!

Angular will fill in this property automatically, but only later in the component lifecycle, after the view initialization is completed.

The AfterViewInit Lifecycle Hook

If we want to write component initialization code that uses the references injected by @ViewChild, we need to do it inside the AfterViewInit lifecycle hook.

Here is an example of how to use this lifecycle hook:
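A sketch of using the hook (the logged message is an assumption):

```typescript
import { AfterViewInit, Component, ViewChild } from '@angular/core';
import { ColorSampleComponent } from './color-sample/color-sample.component';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
})
export class AppComponent implements AfterViewInit {

  @ViewChild('primaryColorSample')
  primaryColorSample: ColorSampleComponent;

  ngAfterViewInit() {
    // the injected reference is only guaranteed to be available from here on
    console.log('primaryColorSample:', this.primaryColorSample);
  }
}
```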

If we now run this program, here is the output that we get in the console:

As we can see, Angular has automatically filled our member variable primaryColorSample with a reference to a component!

Can we use ngOnInit() instead of ngAfterViewInit()?

If we want to make sure that the references injected by @ViewChild are present, we should always write our initialization code using ngAfterViewInit().

Depending on the situation, the template references might already be present on ngOnInit(), but we shouldn't count on it.

What is the scope of the @ViewChild template queries?

With @ViewChild, we can inject any component or directive (or HTML element) present on the template of a given component onto the component itself.

But how far can we query components down the component tree? Let's try to use @ViewChild to query a component that is deeper in the component tree.

As an example, let's have a look at the <color-sample> component:

As we can see, this component internally uses the <mat-icon> component, to display the small palette icon.

Let's now go ahead and see if we can query that <mat-icon> component and inject it directly into AppComponent:

If we try to run this, this is what we get in the console:

As we can see in the console results:

The @ViewChild decorator cannot see across component boundaries!

Visibility scope of @ViewChild template queries

This means that queries done using @ViewChild can only see elements inside the template of the component itself. It's important to realize that @ViewChild cannot be used to inject:

Anything inside the templates of its child components

nor anything in the template of parent components

To summarize: the @ViewChild decorator is a template querying mechanism that is local to the component.

With this, we have covered the most common use case of @ViewChild, but there is still a lot more to it: let's see some more use cases!

Using @ViewChild to inject a reference to a DOM element

Instead of injecting a direct child component, we might want to interact directly with a plain HTML element of the template, such as for example the h2 title tag inside AppComponent.

In order to do that, we need to first assign a template reference to the HTML tag that we want to inject:

As we can see, we have assigned the #title template reference to the h2 tag. We can now have the h2 element injected directly into our component class in the following way:
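A sketch of both the template reference and the injection (the heading text is an assumption):

```typescript
import { AfterViewInit, Component, ElementRef, ViewChild } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    <h2 #title>Choose a color</h2>
  `,
})
export class AppComponent implements AfterViewInit {

  // 'title' matches the #title template reference applied to the h2 tag
  @ViewChild('title')
  titleRef: ElementRef;

  ngAfterViewInit() {
    console.log(this.titleRef.nativeElement);
  }
}
```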

As we can see, we are passing the string 'title' to the @ViewChild decorator, which corresponds to the name of the template reference applied to the h2 tag.

Because h2 is a plain HTML element and not an Angular component, what we get injected this time is a wrapped reference to the native DOM element of the h2 tag:

ElementRef simply wraps the native DOM element, and we can retrieve it by accessing the nativeElement property.

Using the nativeElement property, we can now apply any native DOM operation to the h2 title tag, such as for example addEventListener().

And this is how we can use @ViewChild to interact with plain HTML elements in the template, but this leads us to the question:

What to do if we need the DOM element that is associated with an Angular component instead?

After all, the <color-sample> HTML tag is still a DOM element, even though it has an instance of ColorSampleComponent attached to it.

Using @ViewChild to inject a reference to the DOM element of a component

Let's give an example for this new use case. Take for example the <color-sample> component inside AppComponent:

The <color-sample> component has a template reference #primaryColorSample assigned to it.

Let's see what happens if we now try to use this template reference to inject the <color-sample> DOM element like we did with the h2 tag:
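The attempt might look like this fragment of the AppComponent class:

```typescript
// Attempting to inject the element via its template reference,
// just like we did with the h2 tag:
@ViewChild('primaryColorSample') sample: any;

ngAfterViewInit() {
  console.log(this.sample);
}
```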

If we run this program, we might be surprised to find out that this time we are not getting back the native DOM element:

Default behaviour of @ViewChild injection by template reference

Instead, we are getting back again the ColorSampleComponent instance! And this is indeed the default behavior of @ViewChild when using injection by template reference name:

when injecting a reference applied to a component, we get back the component instance

when injecting a reference to a plain HTML element, we get back the corresponding wrapped DOM element

The @ViewChild options argument

But in the case of our <color-sample> component, we would like to get the DOM element that is linked to the component! This is still possible, by using the second argument of the @ViewChild decorator:
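Under these assumptions, the query with the read option would look like this:

```typescript
import { Component, ViewChild, ElementRef, AfterViewInit } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent implements AfterViewInit {

  // read: ElementRef asks for the DOM element wrapper,
  // instead of the default (the component instance)
  @ViewChild('primaryColorSample', { read: ElementRef }) sample: ElementRef;

  ngAfterViewInit() {
    console.log(this.sample);
  }
}
```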

As we can see, we are passing a second argument containing a configuration object with the read property set to ElementRef.

This read property specifies exactly what we are trying to inject, in case there are multiple possible injectables available.

In this case, we are using the read property to specify that we want to get the DOM element (wrapped by ElementRef) of the matched template reference, and not the component.

If we now run our program, this is indeed what we get in the console:

Let's now see another common use case when the @ViewChild read property might come in handy.

Using @ViewChild to inject a reference to one of several directives

As our application uses more and more directives and libraries, we will likely need the read property more and more.

For example, going back to our color picker example, let's now try to do something simple like opening the color picker when the color sample gets clicked:
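The template-only attempt might look like this (the colorPicker directive and its open() method are assumptions based on the library used in the article; as discussed next, this version fails):

```html
<!-- Sketch: try to wire everything using template references only -->
<color-sample [color]="primary" (click)="primaryInput.open()"></color-sample>

<input #primaryInput colorPicker>
```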

In this example, we are trying to integrate our components by using template references only.

We are detecting the click in <color-sample>, and when that occurs we are trying to use the reference #primaryInput to access the colorPicker directive, and open the dialog.

Using template references is a good approach that will work in many cases, but not here!

In this case, the template reference #primaryInput points to the DOM element <input>, and not to the colorPicker directive applied to that same element.

If we run this version of the program, we will get the following error:

This error occurs because, by default, the template reference primaryInput points to the input box DOM element, and not to the colorPicker directive.

Using @ViewChild to inject Directives

As we can see, this is not the way to get a reference to a directive, especially when multiple directives are applied to the same plain HTML element or Angular component.

In order to solve this, we are going to first rewrite our template so that the handling of the click event is now delegated to the AppComponent class:
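The rewritten template might look like this (the onSampleClick() method name is an assumption):

```html
<!-- The click is now delegated to a method on the AppComponent class -->
<color-sample [color]="primary" (click)="onSampleClick()"></color-sample>

<input #primaryInput colorPicker>
```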

Then in the AppComponent class, we are going to have the color picker directive injected in the following way:
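A sketch of the injection, assuming the directive class is named ColorPickerDirective and exposes an open() method (both names are assumptions based on the article):

```typescript
export class AppComponent {

  // read: ColorPickerDirective selects the directive instance applied
  // to the input, instead of the default (the input's DOM element)
  @ViewChild('primaryInput', { read: ColorPickerDirective })
  colorPicker: ColorPickerDirective;

  onSampleClick() {
    this.colorPicker.open();
  }
}
```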

And with this, we now have a correct reference to the colorPicker directive!

If we now click on the small palette icon, the color picker will now get opened as expected:

And with this last example, we have now covered all the features of the @ViewChild decorator, and some of its intended use cases. Let's now summarize what we have learned.

Code Samples (GitHub Repository)

All the running code for these examples can be found in the following GitHub repository.

Conclusion

The @ViewChild decorator allows us to inject into a component class references to elements used inside its template, that's what we should use it for.

Using @ViewChild we can easily inject components, directives or plain DOM elements. We can even override the defaults of @ViewChild and specify exactly what we need to inject, in case multiple options are available.

@ViewChild is a local component template querying mechanism that cannot see the internals of its child components.

By injecting references directly into our component class, we can easily write any coordination logic that involves multiple elements of the template.

Why we don't always need @ViewChild

Let's also remember that there are many use cases when this decorator might not actually be needed, because many simple interactions can be coded directly in the template using template references, without the need to use the component class.

I hope that this post helps with better understanding @ViewChild, and that you enjoyed it!

If you have some questions or comments please let me know in the comments below and I will get back to you.

To get notified of upcoming posts on Angular Universal and other Angular topics, I invite you to subscribe to our newsletter:

Video Lessons Available on YouTube

Have a look at the Angular University YouTube channel, where we publish about a quarter to a third of our video tutorials. New videos are published all the time.

So without further ado, let's get started with our Angular Universal deep dive!

What is Angular Universal?

In a nutshell, Angular Universal is a Pre-Rendering solution for Angular.

To understand what this means, let's remember that in a normal single page application, we usually bring the data to the client and only then, at the last moment, build the HTML that represents that data on the client side.

But in certain situations, and for good reasons, we might want to instead do that rendering ahead of time, for example on the server or at application build time: and that is exactly what Angular Universal allows us to do.

How does Angular Universal work?

When we use Angular Universal, we will render the initial HTML and CSS shown to the user ahead of time. We can do it for example at build time, or on-the-fly on the server when the user requests the page.

This HTML and CSS will be served initially to the user, so the user can see something on the screen quickly. But server-side rendering is only half the story!

This is because together with the server-side rendered HTML, we will also ship to the browser a normal client-side Angular Application.

This Angular client application will then take over the page, and from there on everything is working like a normal single page application, meaning that all the runtime rendering will occur directly on the client as usual.

This will likely lead you to the question: when would we want to use Angular Universal, and why?

Why Angular Universal? - Performance

There are a couple of reasons for using Angular Universal in our project, but maybe the most common one is to improve the startup performance of our application.

As we know, a single page application, when initially loaded, is essentially an empty index.html file with almost no HTML. This means that when this HTML file gets initially rendered by the browser, all that the user will see is a completely blank screen!

Let's have a look at an example using the Chrome Dev Tools performance tab. This is what we typically see in an Angular application running locally:

As we can see in the screenshot popup of the performance tab timeline, the application starts off as a completely blank page.

Depending on the application, this could go on for several seconds. But the problem is that:

53% of all users abandon an application if it takes longer than 3 seconds to load!

So clearly this initial delay in showing something to the user makes a huge difference in terms of user experience.

If you want a more detailed explanation about how to use the Chrome Dev tools to measure how long your application stays in this blank state, have a look at this video:

Server-side Rendering and User Experience

With Angular Universal, instead of a blank index.html page we will be able to quickly show something to the user, by rendering the HTML on the server and sending that on the first request.

The user can then see some initial content much faster, which improves a lot the user experience (especially on mobile), giving us an important use case for server-side rendering.

But there are also other good reasons for using server-side rendering other than performance.

Why Angular Universal? Search Engine Optimization

Another reason for doing server-side rendering is to make our application more search engine friendly.

Today, most search engines take the title and description shown in search results from metadata tags present in the header section of the page.

For example, here are the search results for an "Angular Universal" Google search:

Where do these page titles come from?

All the blue link titles that we see in these search results are being filled in based on metadata tags present on the pages they link to.

For example, the title for our third search result (highlighted in red) can be found in a title HTML tag, present inside the head section of the result page:

As we can see, the content of the title HTML tag highlighted in blue corresponds to the title of the highlighted entry in our search results.

What do search engines expect to find in a page?

Most search engine crawlers expect these important SEO meta tags to be present in the HTML returned by the server, and not to be modified at runtime by Javascript.

And the same goes for the rest of the page content: most search engines will index only the content that comes back from the server directly, and not what gets loaded using Javascript.

So having those metadata tags rendered on the server is essential for ranking correctly in a lot of search engines.

However, that is not the case for the Google search engine!

Does the Google Search Engine index well single page applications?

Currently, the Google search engine indexes correctly most Javascript pages, and a good proof of that is the Angular Docs site, which is itself an SPA built with Angular.

The Angular Docs site indexes perfectly for long queries that target content that is loaded dynamically via Javascript. The website even populates the title and description meta tags dynamically using Javascript (we will do that too), and those get shown in the search results with no problem.

If you want to try this out and see for yourself, simply take a long string from any page in the Angular Docs website and search for it: here is an example:

As we can see, the Angular Docs SPA ranks perfectly for this very long search query on Google, even though all its content was loaded dynamically by Javascript.

Do all search engines index Javascript?

But what happens if we do the same search on other search engines? Have a look here at the Bing search results for the same query.

As we can see, the same Angular Docs result that is in third place on Google does not even rank on Bing, as Bing currently does not index dynamic Javascript content, and the same is true for many other search engines, including DuckDuckGo.

Is SEO still a key reason for using Angular Universal?

If we are targeting only the Google search engine, then as we have shown there is no need to server-side render our content in order to have it ranked correctly, as Google can today correctly index most Javascript-based content.

On the other hand, if we want to target all search engines, then server-side rendering is a must as we can see in Bing search results.

Let's now cover one last reason for using Angular Universal: Social Media crawlers.

Why Angular Universal? Social Media Crawlers

The same way that search engines will crawl our pages looking for titles and descriptions, social media crawlers from platforms like Twitter will also do something very similar.

Whenever we post a link to social media, the social media platform will crawl the content of the page, and it might try to extract some information about the page to make the post look better.

Social Media benefits of server-side rendering

The tweet originally contained only the text and the link, but the Twitter crawler managed to extract a picture, a title, some text and it built a summary card based on the page content.

In order for the Twitter crawler to be able to build this card, we currently need the application to be server-side rendered and fill in some special meta tags (we will show how to do that).

So that is another reason for using Angular Universal: improving the social media presence of our application. And with this, we now have a good understanding of when to use Angular Universal and why!

Let's then start using Angular Universal in an existing Angular Application.

How to add Universal rendering with the Angular CLI

Our starting point will be an existing Angular application that already uses the Angular CLI (available in this Github branch).

Let's then start by adapting the application to enable it to build an Angular Universal bundle. We can quickly add a Universal bundle to our application by running the following Angular CLI command:

ng generate universal --client-project <name of your client project>

What is the Universal bundle, how does it work?

When built, this new application will produce a main.bundle.js file: that is the Universal bundle.

This bundle contains essentially the same application as the client application, but the Angular rendering layer has been swapped out via dependency injection.

Instead of using the same rendering layer as on the client (which produces DOM directly), we will use a server-side rendering layer. The server-side rendering layer will output plain HTML in text form, and not DOM.

To understand what this bundle contains, let's have a look at what this command did in the file system, and review the changes one by one.

Configuration for the new Universal Bundle

One of the main things that the previous CLI command did was to add a new build target to our angular.json build configuration file:
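The generated target looks approximately like this (a sketch; the exact paths and file names may vary per project and CLI version):

```json
"server": {
  "builder": "@angular-devkit/build-angular:server",
  "options": {
    "outputPath": "dist-server",
    "main": "src/main.server.ts",
    "tsConfig": "src/tsconfig.server.json"
  }
}
```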

The differences compared to the client-side app are just a couple of properties, namely:

the build output is going to the dist-server folder, instead of the dist folder

this application is going to use the rendering layer available in @angular/platform-server

The build output of this new CLI application will be the Universal bundle itself

How does this new Universal Application Work?

To understand how this new application works, let's have a look at the application entry point main.server.ts:
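The generated entry point is typically just a re-export (a sketch of the file the CLI produces):

```typescript
// src/main.server.ts - the server application entry point
export { AppServerModule } from './app/app.server.module';
```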

As we can see, this is very similar to the entry point of the client side application. But instead of exporting the client application root module AppModule, we are instead exporting the newly generated AppServerModule.

Let's then have a look at this new AppServerModule:
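The generated module looks approximately like this:

```typescript
import { NgModule } from '@angular/core';
import { ServerModule } from '@angular/platform-server';

import { AppModule } from './app.module';
import { AppComponent } from './app.component';

@NgModule({
  imports: [
    AppModule,     // the client root module, reused as-is
    ServerModule,  // swaps in the server-side rendering layer
  ],
  bootstrap: [AppComponent],
})
export class AppServerModule {}
```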

As we can see, this new server root module is importing the AppModule client root module, plus ServerModule from platform-server.

This means that this new server-side application will have the exact same set of application-level components and services as the client application, but with the difference that several Angular internal services are being swapped out via dependency injection, namely the renderer.

Other than that, the two applications are exactly the same, they just have different rendering layers and use different implementations of certain services.

How to Build the Angular Universal Bundle?

Let's then go ahead and build the Universal bundle, and see how we can use it to quickly pre-render the main route of our application.

We will need the production version of the Universal bundle, as the development bundle will not work. So let's go ahead and generate it using the following command:

ng run your-project-name:server

This command will generate a bundle.js in the dist-server folder: this is our Universal bundle, that we will be using to pre-render our application in a moment.

Pre-Rendering our Application using the Universal Bundle

The best way to understand how Angular Universal works is to simply take the universal bundle and use it to output, for example, the main root route of our application.

Let's then build a small command line tool to pre-render the HTML of our main route, and output it to a text file. From there, we will easily build the Express server.

Our program will be a file named prerender.ts, located in the root folder of our application.

In order to pre-render the application main route, our program simply calls the function renderModuleFactory(), which is the heart of the Angular Universal pre-rendering solution.

Here is what our program will look like:
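A sketch of the program, based on the renderModuleFactory() API from @angular/platform-server (the bundle path follows the build output described above):

```typescript
// prerender.ts - renders the root route of the application to a text file
import 'zone.js/dist/zone-node';

import { renderModuleFactory } from '@angular/platform-server';
import * as fs from 'fs';

// the root module factory, one of the main outputs of the Universal build
import { AppServerModuleNgFactory } from './dist-server/main.bundle';

renderModuleFactory(AppServerModuleNgFactory, {
  document: '<app-root></app-root>',  // the template to render
  url: '/'                            // the route to render
})
.then(html => {
  fs.writeFileSync('prerender.html', html);
  console.log('prerender.html written to disk');
});
```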

How to call renderModuleFactory()

Let's then break down what is going on here. Among a couple of imports needed to make Universal work in a Node environment, we are also importing the application root module factory from the universal bundle:

The application root module factory object (named AppServerModuleNgFactory)
is one of the main outputs of the build process, and contains all the information necessary for rendering the application on the server.

We are then going to take this module factory, and use it to render the application by calling renderModuleFactory(). Besides it, we will also pass to the function a couple of extra properties:

Choosing what document to render using the Universal Bundle

The document property is a string containing the template that we want to render. In this case, we want to render the application root component, so our template simply contains the root component and nothing more:

<app-root></app-root>

This component will internally contain many different components depending on the state of the router, so we also need to pass as an argument which route we want to render, via the url property.

In this case, we will be rendering the root route (/). And believe it or not, using Angular pre-rendering is that simple!

If we now run this program, a file named prerender.html is going to be generated that contains the result of this rendering process.

To see a video version of what we have done so far, have a look at this YouTube video:

Running our Pre-Rendering command line utility

If you would like to run this program and see what the output looks like, you can find a running version in this Github branch.

In order to run our command line utility, let's go ahead and use ts-node:

ts-node ./prerender.ts

After a moment, this will generate a new prerender.html output file, which contains the output of calling renderModuleFactory().

Here is what the content of the file will look like:

As we can see, there is a lot of HTML and CSS here (254 lines), all of this generated from the initial template <app-root></app-root>!

During rendering, the Universal bundle queried the server, retrieved the data and rendered everything out to plain HTML as expected.

If we now open the prerender.html file in a browser, here is what it will look like at this stage:

As we can see, the HTML for the main route of the application is there with all the data queried from the server, but we still have a lot of styles missing.

Also, this page is a static HTML page, there is no Angular application up and running after opening this file!

This is because we have only rendered the HTML for the <app-root> element, but our universal application is more than only the HTML of the root component. Server-side rendering is really only half the story.

Why we also need the frontend build

In order to have a running application, we need to serve to the browser not only the HTML of <app-root> but also the whole CSS and the client-side application that is going to take over the page after all resources are loaded.

We want to use more than just the <app-root> component as the basis for our rendering: we also want to add all the script and link tags that will load the application styles plus the client-side app.

And we have all those tags plus the <app-root> component in one single place: the client side index.html file!

This means that we will also need the output of the client-side build as well, so let's go ahead and generate that:

ng build --prod

Let's have a look at all the build assets that we have at this point. We have two folders, one for the client build and the other for the universal build:

Let's take a look at the one specific resource, the production index.html which is inside the dist folder:

This file contains everything that we will need to produce a server-side rendered page:

we have the <app-root> tag, that contains the whole application

the complete CSS of the application is getting loaded

the client-side application is getting loaded via a couple of script tags

So let's go ahead and see how using these assets we can piece together an Angular Universal Express Server.

Angular Universal Express Server (from scratch)

At this point, we have already written the most important part of the server! The server will resemble closely the small command line utility that we just wrote.

If you are curious to see what the server looks like, here is the complete implementation in one go:
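A sketch of the whole server, consistent with the pieces broken down below (paths and the port follow the article; the exact file layout is an assumption):

```typescript
// server.ts - minimal Angular Universal Express server
import 'zone.js/dist/zone-node';

import { enableProdMode } from '@angular/core';
import { renderModuleFactory } from '@angular/platform-server';
import * as express from 'express';
import * as fs from 'fs';

// the universal bundle, output of the server build
import { AppServerModuleNgFactory } from './dist-server/main.bundle';

enableProdMode();

const app = express();

// the client-side index.html, used only as the rendering template
const indexHtml = fs.readFileSync(__dirname + '/dist/index.html', 'utf-8');

// serve the static JS and CSS bundles produced by the client build
app.get('*.*', express.static(__dirname + '/dist'));

// catch-all middleware: server-side render the requested route
app.get('*', (req, res) => {
  renderModuleFactory(AppServerModuleNgFactory, {
    document: indexHtml,
    url: req.url
  })
  .then(html => res.status(200).send(html))
  .catch(err => {
    console.error(err);
    res.sendStatus(500);
  });
});

app.listen(9000, () => {
  console.log('Angular Universal server listening on http://localhost:9000');
});
```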

But there is a lot going on in this code snippet. So let's break this down step-by-step, starting with the script initial part:

As we can see, this is very similar to the beginning of our pre-rendering command line utility:

we are also importing here the production universal bundle main.bundle.js from the dist-server folder

we are then enabling the Angular production mode, to avoid having the application go twice through its change detection phase (see here for more details on the Angular Production Mode)

we then initialize our Express Server

the last preparation step is to read the production index.html present in the dist folder, and load it into a string

After these initialization steps, let's go straight into the main part of the implementation itself.

Express Middleware for Universal Rendering

We are going to start by defining an Express middleware that intercepts all HTTP requests that reach it.

Let's say that for example the user types in the url bar the address http://yourdomain.com/courses/03: this request would hit our server and it would end up reaching this middleware.

This middleware will then define what response should be sent back to the browser:
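The catch-all rendering middleware might look like this (a sketch; indexHtml is the string read from the production dist/index.html during initialization):

```typescript
app.get('*', (req, res) => {
  renderModuleFactory(AppServerModuleNgFactory, {
    document: indexHtml,  // the client-side index.html, loaded into a string
    url: req.url          // the route the user requested
  })
  .then(html => res.status(200).send(html))
  .catch(() => res.sendStatus(500));
});
```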

How does the Universal Middleware work?

As we can see, the wildcard * means that this is indeed a catch-all middleware.

Inside this middleware, we are checking which route we need to render by checking the req.url property.

With that, we are going to server-side render our application by calling renderModuleFactory() just like before. But unlike before, we are now using our production client-side index.html as our rendering template!

Notice that this file will never be served as a static file by our Express server; it's only used as the base template for server-side rendering.

Also unlike before, instead of writing the output to a text file, we are going to instead send it back to the browser, by passing it as the request response body:

res.status(200).send(html);

We are also doing some error handling, sending back a 500 Internal Server Error in case of error:

res.sendStatus(500);

Trying out our Universal middleware

And with this, we have built the HTML response to the initial browser request http://yourdomain.com/courses/03. And here is what the response looks like:

This file contains a lot of HTML and CSS, plus several CSS and Javascript bundles.

Serving the static CSS and Javascript client bundles

This response will then reach the browser. The browser will then parse the HTML and it will eventually find the multiple link and script tags.

For each of those, the browser is going to request the linked files from the server.
For example, this tag will trigger a request to fetch the file mentioned in the src property:

This file needs to be served as a plain static resource by the Express server, so for that, we need to add a middleware to serve these files, before the request hits our catch-all (*) middleware.

All the requests for static bundles end with an extension (*.js, *.css), so let's add a middleware targeting requests containing one dot in the url using *.* :
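The static-files middleware might look like this one-liner (a sketch; the dist folder path is an assumption):

```typescript
// must be registered before the catch-all (*) rendering middleware
app.get('*.*', express.static(__dirname + '/dist'));
```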

This middleware is defined before the server-side rendering middleware, and it will serve the static bundles from the dist client-side build output folder, in case there is a file in the dist folder that matches the request.

Otherwise, if the request does not match a static resource, the catch-all middleware (*) is still going to be triggered.

But if a static bundle is found that matches the request, then the middleware chain is interrupted and the server-side rendering middleware (*) will no longer be triggered.
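This ordering can be illustrated with a small framework-free sketch (this is not Express itself, just a simulation of the dispatch logic described above): a URL with a file extension is handled by the static handler, and everything else falls through to the catch-all rendering handler.

```typescript
type Handler = (url: string) => string | null;

// matches only urls ending with a dot plus an extension, like *.js or *.css
const staticHandler: Handler = url =>
  /\.[a-z0-9]+$/i.test(url) ? 'static' : null;

// catch-all: handles every request that reaches it
const ssrHandler: Handler = () => 'ssr';

function dispatch(url: string,
                  chain: Handler[] = [staticHandler, ssrHandler]): string {
  for (const handler of chain) {
    const result = handler(url);
    if (result !== null) {
      return result; // the chain is interrupted by the first matching handler
    }
  }
  return 'not-found';
}
```

For example, `/main.bundle.js` is served statically, while `/courses/03` reaches the server-side rendering handler.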

Finishing up our Universal Express Server

The last part of the puzzle is to launch the Express server, by listening for HTTP requests on port 9000:
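For example:

```typescript
app.listen(9000, () => {
  console.log('Angular Universal Express server listening on port 9000');
});
```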

And with this we now have completed our server! Let's now have a look at the end result. Let's start by running our server locally using ts-node:

If you want to run the server locally and see it in action, you can find the running application in this Github branch.

And here is a screenshot of the running application:

Angular Universal SEO - Search Engine Optimization

Let's now go ahead and start optimizing our application for SEO. We will go ahead and set the title tag and description for example for the Course page.

And notice that we can adapt the SEO metadata depending on the page that we are currently on.

In this case, we would like to set the title tag of the page to the title of the course and populate also the description meta tag. We can do that using the Title and Meta services:
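A sketch of how the course page component might use these services (the component name, the resolver data key and the course properties are assumptions; Title.setTitle() and Meta.addTag() are the real service APIs):

```typescript
import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { Title, Meta } from '@angular/platform-browser';

@Component({
  selector: 'course',
  templateUrl: './course.component.html'
})
export class CourseComponent implements OnInit {

  constructor(
    private route: ActivatedRoute,
    private title: Title,
    private meta: Meta) {}

  ngOnInit() {
    // the course was fetched by a router resolver
    const course = this.route.snapshot.data['course'];

    this.title.setTitle(course.description);
    this.meta.addTag({ name: 'description', content: course.longDescription });
  }
}
```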

How do the Title and Meta services work?

What these services do is set the title and description tags depending on the environment that they are running in.

On the server, these services will simply render the tags as plain strings, while on the client they will update the title and meta tags directly in the DOM at component startup time.

Let's remember, the Google Search Engine will be able to use client-side rendered title and description meta tags, but this is not true for all search engines.

For other search engines, it's essential for SEO to render these tags directly on the server. Using these two services, this is what the title and description will look like at runtime:

If you would like to see the same application with these meta tags populated, have a look at this branch of the sample repository.

Integration with Social Media Crawlers

In a similar way to what we did with the SEO meta tags, we can also add other tags that will be used by social media crawlers to help configure what the page will look like on social media.

For example, let's make the course page look better on Twitter, by configuring a Twitter summary card:
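A sketch of the extra tags (the twitter:* names are the standard Twitter card meta tags; the course properties are assumptions continuing the previous example):

```typescript
this.meta.addTag({ name: 'twitter:card', content: 'summary' });
this.meta.addTag({ name: 'twitter:title', content: course.description });
this.meta.addTag({ name: 'twitter:description', content: course.longDescription });
this.meta.addTag({ name: 'twitter:image', content: course.iconUrl });
```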

The twitter crawler now has all the necessary information for creating a summary card for this page, producing a tweet for a course that looks like this:

Why a Fine-Grained Universal Application Shell?

With our application up and running, we are now going to implement a couple of performance optimizations that are usually used together with server-side rendering.

As it stands, our application will initially render all the content on the server, without exception. Depending on the situation, this might even be counterproductive from a performance standpoint.

Imagine a page where there is a lot of data that is scrollable below the fold: it might be better to conditionally render only some of the data that the user sees above the fold on the server, and then take care of the rest on the client after the application bootstraps.

What is an Application Shell?

Let's remember that we are doing server-side rendering to make sure that we show something to the user as soon as possible, so sending them a huge amount of HTML might not be the best way to do that, depending on the page.

What we want to do is to send some HTML instead of a blank page, but maybe not all the HTML initially.

This initial HTML that we send to the user is known as an Application Shell, and it might be as simple as a top menu bar and a loading indicator, but it might be much more depending on the page.

How to choose what gets rendered or not?

In order to produce the optimal amount of HTML on the server for the best experience, what we need in this scenario is some fine-grained control over what gets rendered or not on the server.

We are going to implement that using a couple of custom structural directives: appShellRender and appShellNoRender.

First of all, let's see how these directives will be used. For example, on the main component, we might want to conditionally render a loading indicator using appShellRender:
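The usage might look like this template fragment (the markup around the directive is an assumption):

```html
<router-outlet></router-outlet>

<!-- rendered on the server only -->
<div class="loading-indicator" *appShellRender>
  Loading ...
</div>
```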

This means that a loading indicator will be added to the bottom of each page, but only if the rendering occurs on the server.

Choosing what gets rendered per container component

Then, inside each top-level component, we are going to specify what gets rendered on the server or not. For example, on the course page we might only want to server-render the course title and the thumbnail, but not the lessons list.

We can configure that by applying the appShellNoRender directive to the element that we want to skip while rendering on the server:
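A sketch of the course page template under these assumptions (the course and lesson property names are hypothetical):

```html
<!-- title and thumbnail are server-rendered as usual -->
<h2>{{course?.description}}</h2>
<img [src]="course?.iconUrl">

<!-- the lessons list is skipped during server-side rendering -->
<table *appShellNoRender>
  <tr *ngFor="let lesson of lessons">
    <td>{{lesson.description}}</td>
  </tr>
</table>
```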

Notice that appShellRender and appShellNoRender have no effect on the client! In the browser, the whole template will be rendered each time as we navigate through the single page application.

Implementing a fine-grained App Shell

Now that we know how these two App Shell directives will work, let's see how they are implemented. Let's start by having a look at appShellRender:
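A sketch of the directive, using the real isPlatformServer() helper from @angular/common and the PLATFORM_ID injection token:

```typescript
import {
  Directive, Inject, OnInit, PLATFORM_ID,
  TemplateRef, ViewContainerRef
} from '@angular/core';
import { isPlatformServer } from '@angular/common';

@Directive({
  selector: '[appShellRender]'
})
export class AppShellRenderDirective implements OnInit {

  constructor(
    private viewContainer: ViewContainerRef,
    private templateRef: TemplateRef<any>,
    @Inject(PLATFORM_ID) private platformId: Object) {}

  ngOnInit() {
    // render the target template only on the server
    if (isPlatformServer(this.platformId)) {
      this.viewContainer.createEmbeddedView(this.templateRef);
    } else {
      this.viewContainer.clear();
    }
  }
}
```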

As we can see, this is a typical custom structural directive. We can tell this because we are injecting viewContainer and templateRef like we always do for structural directives.

The template reference (templateRef) points to the template snippet onto which the directive has been applied. For example, in the case of the loading indicator where we applied appShellRender, this is the template getting injected:

Another thing that we are injecting in our structural directive is platformId, which we can use to determine whether the directive is running on the server or on the client.

Then the conditional rendering logic is implemented in this snippet:
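Under the stated assumptions, the logic looks like this (isPlatformServer comes from @angular/common):

```typescript
if (isPlatformServer(this.platformId)) {
  this.viewContainer.createEmbeddedView(this.templateRef);
} else {
  this.viewContainer.clear();
}
```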

This logic essentially means: "render the target template but only if we are on the server. If we are on the client, then don't render this element".

The companion directive appShellNoRender works in a very similar way. Here is its complete code for reference:
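A sketch of the companion directive, which simply inverts the condition:

```typescript
import {
  Directive, Inject, OnInit, PLATFORM_ID,
  TemplateRef, ViewContainerRef
} from '@angular/core';
import { isPlatformServer } from '@angular/common';

@Directive({
  selector: '[appShellNoRender]'
})
export class AppShellNoRenderDirective implements OnInit {

  constructor(
    private viewContainer: ViewContainerRef,
    private templateRef: TemplateRef<any>,
    @Inject(PLATFORM_ID) private platformId: Object) {}

  ngOnInit() {
    // the inverse of appShellRender: skip the template on the server
    if (isPlatformServer(this.platformId)) {
      this.viewContainer.clear();
    } else {
      this.viewContainer.createEmbeddedView(this.templateRef);
    }
  }
}
```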

With this couple of simple directives, we can already do a lot in terms of selectively rendering on the server only certain parts of the application.

These two directives are a good starting point for flexible server rendering, from which we could build other similar directives that give us even more flexibility.

The key concept of the fine-grained Application Shell is that we want to be able to choose per page what parts of it are not rendered on the server, without compromising client functionality.

Understanding The State Transfer API

With the App Shell in place, let's now talk about another common server-side rendering optimization: server-to-client state transfer at client startup time.

Let's talk first about the problem that the State Transfer API solves. When our Angular Universal application starts, a large part of the page was already rendered and is visible to the user from the beginning.

But let's remember that this server-side rendered application is going to pull from the server a normal client-side application, which is then going to take over the page.

This Angular client-side application is then going to start up, and what is the first thing that it will do? It's going to contact the server and fetch all the data again!

The client-side application will even turn on loading indicators as the data gets loaded. This is strange to the user because the page that came from the server already had data in it, so why is the application loading it again?

The client side will then re-render all the data and pass it again to the page and show it to the user.

There is one problem with all this: the server just retrieved the data and rendered it, so why repeat the same process again on the client? This is redundant: it queries the server twice and hurts the user experience, which is the main reason why we are using Universal in the first place.

How does the Transfer API work?

In order to solve this problem of duplicate data fetching, what we need is a way for the universal application to store its data somewhere on the page, and then make it available to the client application, without calling the server again.

And this is exactly what the State Transfer API allows us to do! The State Transfer API provides us with a storage container for easily transferring data between the server and the client application, so the client application does not need to contact the server again to get the data.

To see this in action, let's give an example of a server call made in a Router Resolver that could cause the initial problem:

This Router Resolver is taking a course identifier from the current url, and it's using it to get some data from the server. The data will then be available via the router to all components.

Let's now say that we are navigating to a route that triggers this resolver. The problem with this implementation is that, in the case of a universal application, the data fetch call findCourseById() will be triggered twice: once during server rendering, and once on the client when the client application starts and the router kicks in.

In order to avoid this, we are going to refactor this Router Resolver and implement it using the State Transfer API:

There is a lot going on here, so let's break down this implementation step by step. First, let's look at the injected services:

we are injecting platformId, in order to know if the resolver is being executed either on the client or on the server

we are also injecting a new TransferState service

The State Transfer API In Action

Here is how this works: first we are going to define a key that uniquely identifies a piece of state that we want to transfer between the client and the server:

Next, we are going to check if the course data that we need to emit is already in the state transfer container:

Let's start with the case where the data is not in the container: in that case, we want to fetch the data first, regardless of whether we are on the server or on the client.

But if we are on the server, after fetching the data we also want to store it so that it gets transferred back to the client.

We can then populate the state transfer container by using the tap operator:

Now the data is stored in the state transfer container.

Where does the TransferState service store the data?

If you are curious to know where the state transfer service stores the data, it's very simple: the data gets stored in the page itself!

If you inspect the page source that we get from the server, it contains a script tag at the bottom where the transferred data gets stored:
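As an illustration, the script tag looks roughly like this (the id is derived from the application id, and the payload shown here is made up):

```html
<script id="serverApp-state" type="application/json">
  {"course:1":{"id":1,"description":"Angular Core Deep Dive"}}
</script>
```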

Fetching the Data from the State Transfer service

The next thing that we want to do in our Router Resolver implementation is to cover the client-side case. We want to retrieve the data from the state transfer service in case it's available, and bypass the server call.

Only the client application will find data inside the state transfer container:

If the data is present in the state container, we are going to fetch it and emit it directly using the of operator. Next, we are going to clear the data from the state container, therefore completing the state transfer process.
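The resolver flow just described can be condensed into a runnable sketch. To keep it standalone, TransferState is modeled as a plain Map and the backend call as a synchronous function; the key name is an assumption, and the real resolver returns an Observable and injects TransferState from '@angular/platform-browser':

```typescript
type Course = { id: number; description: string };

// Sketch of the resolver's decision logic: fetch once, transfer the result,
// and on the client consume the transferred value instead of calling the server.
function resolveCourse(
  transferState: Map<string, Course>,
  key: string,
  isServer: boolean,
  findCourseById: (id: number) => Course,
  courseId: number
): Course {
  // Client-side with transferred data: emit it and clear the container,
  // completing the state transfer and avoiding a second server round trip
  if (transferState.has(key)) {
    const course = transferState.get(key)!;
    transferState.delete(key);
    return course;
  }
  // No transferred data: fetch it; on the server, also store it for transfer
  const course = findCourseById(courseId);
  if (isServer) {
    transferState.set(key, course);
  }
  return course;
}
```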

And with this, we now know what the State Transfer API is and what problem it solves.

We have also finished turning this plain Angular application into a Universal application! Let's now summarize all that we have learned and highlight the key points.

Code Samples - Github Repo

All the completed code used in this post is available here in this Github repository, in case you would like to run the application and see everything in action.

The code is deployable to Firebase Hosting (which will serve only the static bundles) and the server is deployable as a Firebase Cloud Function.

Conclusions & Summary

As we have seen, the main reason for using server-side rendering today is to improve the startup performance of our application, by sending at least some HTML to the browser when the application starts up.

The page will then load a client-side Angular application that will eventually take over the page as a normal SPA.

The added SEO benefits of server-side rendering are less than they once were, because the Google search engine now indexes JavaScript-rendered pages well.

But large parts of the world use different search engines, so if we want to cover those as well then we need to use server-side rendering.

The main argument for using server-side rendering is performance and user experience, which also brings some indirect SEO benefits: faster pages are given a ranking boost.

Also, we have learned that depending on the application, rendering on the server might be only half the story! We also might need a way to control what gets rendered on the server or not (the App Shell), and to get that optimal user experience we will likely also need the State Transfer API.

I hope that this post helps with getting started with Angular Universal and that you enjoyed it!

If you have some questions or comments please let me know in the comments below and I will get back to you.

To get notified of upcoming posts on Angular Universal and other Angular topics, I invite you to subscribe to our newsletter:

Video Lessons Available on YouTube

Have a look at the Angular University Youtube channel: we publish about 25% to a third of our video tutorials there, and new videos are published all the time.

https://blog.angular-university.io/angular-push-notifications/ (Thu, 08 Mar 2018)

In this post, we are going to go through a complete example of how to implement Web Push Notifications in an Angular Application using the Angular Service Worker.

Note that these are the exact same native notifications that we receive for example on our mobile phones home screen or desktop, but they are triggered via a web application instead of a native app.

These notifications can even be displayed to the user if all application tabs are closed, thanks to Service Workers! When well used, Push notifications are a great way of having our users re-engage with our application.

This is a step-by-step tutorial, so I invite you to code along as we implement Push Notifications in an existing application.

We will also learn along the way how Push Notifications work in general, as we follow and explain the complete flow that a given notification follows.

The Angular PWA series

Note that this post is part of the Angular PWA Series, here is the complete series:

The Push API is what allows the message to be pushed from a server to the browser, and the Notifications API is what allows the message to be displayed, once it gets to the browser.

But notice that we can't push notifications from our server directly to the user's browser. Instead, only specific servers chosen by the browser vendors (Google, Mozilla, etc.) are able to push notifications to a given browser.

These servers are known as a Browser Push Service. Note that, for example, the Push Service used by Chrome is different from the one used by Firefox, and each Push Service is under the control of the corresponding browser vendor.

Browser Push Service Providers

As we sometimes see online, Push Notifications can be very disruptive to the user, and browser implementers want to ensure that browser users have a good online experience at all times.

This means that browser providers want to be able to block out certain notifications from being displayed to the user, for example if the notifications are too frequent.

The way that browsers like Chrome or Firefox ensure that Push messages don't cause user experience issues is to funnel all push messages over servers under their control.

For example, in the case of the Chrome browser, all Push messages come to the browser via the Firebase Cloud Messaging service and NOT directly from our application server.

Firebase Cloud Messaging acts in this case as the Chrome Browser Push Service. The Push Service that each browser uses cannot be changed, and is determined by the browser provider.

In order to be able to deliver a message to a given user and only to that user, the Push Service identifies the user in an anonymous way, ensuring the user's privacy. Also, the Push Service does not know the content of the messages, as they are encrypted.

Let's then go through the whole lifecycle of a particular message, to understand in detail how everything works. We will start by uniquely identifying our server, and learn why that's important.

Why identify our server as a Push source?

The first thing that we should do is to uniquely identify our server to the several Browser Push Services available.

Each Push Service will analyze the behavioral patterns of the messages being sent in order to avoid disruptive experiences, so identifying our server and using the push messages correctly over time will increase our odds that the Push Service will deliver our messages in a timely manner.

We will then start by uniquely identifying our Application server using a VAPID key pair.

What is a VAPID key pair?

VAPID stands for Voluntary Application Server Identification for Web Push protocol. A VAPID key pair is a cryptographic public/private key pair that is used in the following way:

the public key is used as a unique server identifier for subscribing the user to notifications sent by that server

the private key needs to be kept secret (unlike the public key), and it's used by the application server to sign messages before sending them to the Push Service for delivery

Generating a VAPID key pair using node web-push

Let's start by generating a VAPID key pair using the node web-push library. We will first install web-push globally, as a command-line tool:

npm install web-push -g

We can then generate a VAPID key pair with the following command:

web-push generate-vapid-keys --json

Using this command, here is what a VAPID key pair looks like:
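The generated values are random, so the output below is just an illustration of the shape (the keys are shortened placeholders, not real values):

```json
{
  "publicKey": "BLBz4Tk2...an-87-character-url-safe-base64-string",
  "privateKey": "mHSKS3uw...a-43-character-url-safe-base64-string"
}
```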

We can now use the VAPID Public Key to subscribe to Push Notifications using the Angular Service Worker.

Subscribing to Push Notifications

The first thing that we will need is the Angular Service Worker, and for that here is a guide for how to add it to an existing Angular application.

Once we have the Angular Service Worker installed, we can now request the user permission for sending Push Notifications:
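Here is a sketch of what that could look like (SwPush is stubbed with a minimal structural interface so the example runs standalone; in the real application it is injected from '@angular/service-worker', and the addPushSubscriber method name on NewsletterService is an assumption):

```typescript
// Assumed constant holding the VAPID public key generated earlier
const VAPID_PUBLIC_KEY = '<our VAPID public key>';

// Structural stand-in for the SwPush service from '@angular/service-worker'
interface SwPushLike {
  requestSubscription(options: { serverPublicKey: string }): Promise<object>;
}

class AppComponent {
  constructor(
    private swPush: SwPushLike,
    private newsletterService: { addPushSubscriber(sub: object): void }) {}

  // Called when the user clicks the Subscribe button
  subscribeToNotifications() {
    return this.swPush
      .requestSubscription({ serverPublicKey: VAPID_PUBLIC_KEY })
      // the push subscription object is sent to our backend for storage
      .then(sub => this.newsletterService.addPushSubscriber(sub))
      .catch(err => console.error('Could not subscribe to notifications', err));
  }
}
```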

Let's break down what is going on in this code sample:

the user clicks on the Subscribe button and the subscribeToNotifications() method gets executed

using the swPush service, we are going to ask the user if he allows our server (identified by the VAPID public key) to send him Web Push messages

the requestSubscription() method returns a Promise which emits the push subscription object, in case the user allows notifications

The user is then going to see a browser popup asking him to allow or deny the request:

if the user accepts the request, the Promise returned by requestSubscription() is going to be evaluated successfully, and a push subscription object is going to be passed to .then()

Showing again the Allow/Deny Notifications Popup

While testing this on localhost, you might accidentally hit the wrong button in the popup. The next time that you click subscribe, the popup will not be displayed.

Instead, the Promise is going to be rejected and the catch block in our code sample above is going to be triggered.

Here is what we need to do in the browser's notification settings in order to have the popup displayed again:

open the browser's site settings for notifications and scroll down to the Block list, which contains all the websites that are blocked from emitting push notifications

delete localhost from the Block list

click the Subscribe button again

The popup should now appear again, and if we click on the Allow option, a Push Subscription object will be generated.

The PushSubscription object

Here is what the push subscription object looks like, as we receive it in the then() clause:
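The exact values are unique per user and browser, but the object has roughly this shape (the values shown here are shortened, made-up placeholders):

```json
{
  "endpoint": "https://fcm.googleapis.com/fcm/send/c8xcCr3...unique-subscription-id",
  "expirationTime": null,
  "keys": {
    "p256dh": "BOXYnlKnMkzl...public-encryption-key",
    "auth": "bW9ja0F1dGg...authentication-secret"
  }
}
```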

Let's now break down the content of the subscription object, as that will help us to understand better how Push Notifications work in general:

endpoint: This contains a unique URL to a Firebase Cloud Messaging endpoint. This url is a public but unguessable endpoint to the Browser Push Service used by the application server to send push notifications to this subscription

expirationTime: some messages are time sensitive and don't need to be sent if a certain time interval has passed. This is useful in certain cases, for example, if a message might contain an authentication code that expires after 1 minute

p256dh: this is an encryption key that our server will use to encrypt the message, before sending it to the Push Service

auth: this is an authentication secret, which is one of the inputs of the message content encryption process

All the information present in the subscription object is necessary to be able to send push notifications to this user.

How to use the Push Subscription object?

Once we get a push subscription, we need to store it somewhere where we can use it later, once we decide to send a message to this user.

In our example, we are sending the whole subscription object to the backend via an HTTP request done by the NewsletterService, and the complete subscription object will be stored in the database for later use.

Sending Push Notifications from a Node Backend

Now that we have the subscription objects stored in the database, we can use them to send push messages using the web-push library.

Let's then create a REST endpoint using Express that, when triggered, will send a notification to all subscribers.

In this example, we are sending a Notification informing that a newsletter is now available:

Let's now break down this example. We are again using the web-push module, but this time to encrypt, sign and send a push notification to all subscribers:

we start by initializing the webpush module, by passing it the VAPID key pair

in this simple example the keys are in the code, which is NOT recommended. Instead, we can pass a reference to a JSON file containing the key pair via a command line argument

we are then initializing an Express application, and we have created an HTTP POST endpoint for the /api/newsletter url

In order to trigger this endpoint, our application can send an HTTP POST request to /api/newsletter.

Notice that this endpoint would have to be protected by both authentication and authorization middleware, in order to ensure that only an administrator can trigger the sending of a newsletter.

Building the Push Message body

In order for the Angular Service Worker to correctly display the message, we need to use the format shown in the code.

Namely, the payload needs to be an object with a single root property named notification; otherwise, the messages will not be displayed to the user.

Besides the text and image of the notification, we can also specify a mobile vibration pattern via the vibrate property.
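As an illustration, such a payload could look like the following (the title, body, icon path and vibration values are made up for this sketch):

```typescript
// Payload in the shape the Angular Service Worker expects:
// a single root "notification" property wrapping the notification options
const notificationPayload = {
  notification: {
    title: 'Angular News',
    body: 'Newsletter available!',
    icon: 'assets/icons/icon-96x96.png',
    vibrate: [100, 50, 100]  // mobile vibration pattern in milliseconds
  }
};
```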

Sending the Push Notification using webpush

Once we have the message payload ready, we can send it to a given subscriber via a call to webpush.sendNotification():

the first argument is the push subscription object, just like we received it in the browser after the user has clicked the Allow Notifications option

the second argument is the notification payload itself, in JSON format

The webpush library will then do the following steps:

the payload of the message is going to be encrypted using the p256dh public key and the auth authentication secret

the encrypted payload is then going to be signed using the VAPID private key

the message is then going to be sent to the Firebase Cloud Messaging endpoint specified in the endpoint property of the subscription object

What happens at the level of the Push Service?

When the Push Service receives the message (in this case Firebase Cloud Messaging), it will know which browser instance to forward the message to, based on the unique URL of the endpoint.

Note that the Push Service does not know the content of the message because it's encrypted, so it cannot inspect it to decide whether the message should go through or not.

But the Push Service might decide to block the message, if for example this subscriber has already been receiving too many messages.

In most cases, what will happen is that the Push Service is going to forward the message payload to the user's browser.

Push Notification Demo

Once the Push Service pushes the message to the user browser, the message is then going to be decrypted and passed to the Angular Service Worker.

The Angular Service Worker will then use the browser Notifications API to display a Notification to the user.

The end result is a system notification similar to the one that we often receive from mobile apps:

Source Code + Github Running Example (complete PWA)

A running example of the complete code is available here on this branch on Github. The PWA features demonstrated are:

Application Download & Installation

Application Version Management

One-Click Install with App Manifest

Application Data Caching

Application Shell

Push Notifications

sample Express Server using webpush

I hope that this post helps with getting started with sending Push Notifications and that you enjoyed it! If you have some questions or comments please let me know in the comments below and I will get back to you.

To get notified of upcoming posts on Progressive Web Applications and other Angular topics, I invite you to subscribe to our newsletter:

https://blog.angular-university.io/angular-material-dialog/ (Thu, 22 Feb 2018)

In this post, we are going to go through a complete example of how to build a custom dialog using the Angular Material Dialog component.

We are going to cover many of the most common use cases that revolve around the Angular Material Dialog, such as: common dialog configuration options, passing data into the dialog, receiving data back, and dialog layout options.

This is a step-by-step tutorial, so I invite you to code along as we are going to start with a simple initial scenario. We will then progressively add features one by one and explain everything along the way.

Table of Contents

In this post, we will cover the following topics:

Declaring a Material Dialog body component

Creating and opening an Angular Material Dialog

Angular Material Dialog Configuration Options

Building the Material Dialog body

Passing Input Data to the Material Dialog

Closing The Dialog + Passing Output Data

Source Code + Github Running Example

Step 1 of 5 - Declaring a Material Dialog body component

In order to use the Angular Material Dialog, we will first need to import MatDialogModule:

Notice also CourseDialogComponent: this component will be the body of our custom dialog.

In order for the component to be usable as a dialog body, we need to declare it as an entryComponent as well; otherwise, we will get the following error while opening the dialog:

Error: No component factory found for CourseDialogComponent. Did you add it to @NgModule.entryComponents?

Step 2 of 5 - Creating and opening an Angular Material Dialog

With this in place, we are now ready to start building our dialog. Let's start by seeing how we can open a dialog from one of our components:

Let's break down this code, to see what is going on here:

in order to create Material Dialog instances, we are injecting the MatDialog service via the constructor

we are then creating an instance of MatDialogConfig, which will configure the dialog with a set of default behaviors

we are overriding a couple of those default behaviors. For example, we are setting disableClose to true, which means that the user will not be able to close the dialog just by clicking outside of it

we are also setting autoFocus to true, meaning that the focus will be set automatically on the first form field of the dialog

Angular Material Dialog Configuration Options

The class MatDialogConfig allows us to define a lot of configuration options. Besides the two that we have overridden, here are some other commonly used Material Dialog options:

hasBackdrop: defines if the dialog should have a shadow backdrop, that blocks the user from clicking on the rest of the UI while the dialog is opened (default is true)

panelClass: adds a list of custom CSS classes to the Dialog panel

backdropClass: adds a list of custom CSS classes to the dialog backdrop

position: defines a starting absolute position for the dialog. For example, this would show the dialog in the top left corner of the page, instead of in the center:
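For instance (plain-object sketch; in the real component these properties are set on a MatDialogConfig instance from Angular Material):

```typescript
const dialogConfig = {
  disableClose: true,                 // clicking outside the dialog will not close it
  autoFocus: true,                    // focus the first focusable element automatically
  position: { top: '0', left: '0' }   // pin the dialog to the top left corner
};
```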

direction: this defines if the elements inside the dialog are right or left justified. The default is left-to-right (ltr), but we can also specify right-to-left (rtl). Here is what a right-to-left dialog looks like:

closeOnNavigation: this property defines if the dialog should automatically close itself when we navigate to another route in our single page application, which defaults to true.

An example of when we would like to set this to false is the Draft Email dialog of an email application like Gmail, where the email draft remains open as we search through older emails.

The MatDialogConfig class also provides the properties width, height, minWidth, minHeight, maxWidth and maxHeight.

Step 3 of 5 - Building the Material Dialog body

Let's now have a look at CourseDialogComponent. This is just a regular Angular component, as it does not have to inherit from any particular class or implement a dialog-specific interface.

The content of this component could also be anything, and there is no need to use any of the auxiliary Angular Material directives. We could build the body of the dialog out of plain HTML and CSS if needed.

But if we want the dialog to have the typical Material Design look and feel, we can build the template using the following directives:

Using these directives, our dialog will look something like this:

Here are the 3 main directives that we used to build this dialog:

mat-dialog-title: This identifies the title of the dialog, in this case the "Angular For Beginners" title on top

mat-dialog-content: this container will contain the body of this dialog, in this case, a reactive form

mat-dialog-actions: this container will contain the action buttons at the bottom of the dialog

Step 4 of 5 - Passing Input Data to the Material Dialog

Dialogs are often used to edit existing data. We can pass data to the dialog component by using the data property of the dialog configuration object.

Going back to our AppComponent, here is how we can pass some input data to the dialog:

We can then get a reference to this data object in CourseDialogComponent by using the MAT_DIALOG_DATA injectable:

As we can see, the whole data object initially passed as part of the dialog configuration object can now be directly injected into the constructor.

We have also injected something else, a reference to the dialog instance named dialogRef. We will use it to close the dialog and pass output data back to the parent component.

Step 5 of 5 - Closing The Dialog + Passing Output Data

Now that we have an editable form inside a dialog, we need a way to pass the modified (or new) data back to the parent component.

We can do this via the close() method. We can call it without any arguments if we simply want to close the dialog:

But we can also pass the modified form data back to AppComponent in the following way:

In the component that created the dialog, we can now receive the dialog data in the following way:

As we can see, the call to dialog.open() returns a dialog reference, which is the same object injected in the constructor of CourseDialogComponent.

We can then use the dialog reference to subscribe to the afterClosed() observable, which will emit a value containing the output data passed to dialogRef.close().
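The round trip can be sketched with a minimal stand-in for the dialog reference (the real MatDialogRef comes from Angular Material and afterClosed() returns an Observable; a Promise is used here so the sketch runs standalone):

```typescript
// Tiny stand-in with the same close()/afterClosed() shape as MatDialogRef
class DialogRefStub<R> {
  private resolveClosed!: (value: R) => void;
  private closed = new Promise<R>(resolve => (this.resolveClosed = resolve));

  close(result: R) { this.resolveClosed(result); }   // called inside the dialog component
  afterClosed(): Promise<R> { return this.closed; }  // observed by the opener component
}

const dialogRef = new DialogRefStub<{ description: string }>();

// the component that opened the dialog listens for the output data
dialogRef.afterClosed().then(data => console.log('Dialog output:', data));

// the dialog body component closes itself, passing the edited form data back
dialogRef.close({ description: 'Angular For Beginners' });
```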

Source Code + Github Running Example

A running example of the complete code is available here on this branch on Github.

I hope that this post helps with getting started with the Angular Material Dialog and that you enjoyed it! If you have some questions or comments please let me know in the comments below and I will get back to you.

To get notified of upcoming posts on Angular Material and other Angular topics, I invite you to subscribe to our newsletter:

https://blog.angular-university.io/angular-debugging/ (Mon, 29 Jan 2018)

In this post, we will cover in detail an error message that you will occasionally come across while building Angular applications: "Expression has changed after it was checked" - ExpressionChangedAfterItHasBeenCheckedError.

We are going to give a complete explanation about this error. We will learn why it occurs, how to debug it consistently and how to fix it.

Most of all we are going to explain why this error is useful, and how it ties back to the Angular Development Mode.

Table Of Contents

In this post, we will cover the following topics:

Understanding the "Expression has changed" error, why does it occur?

The Angular Development mode

Debugging Techniques for finding the template expression that is causing the error

How to fix the "Expression has changed" error

Conclusions

We will first start by quickly debugging the error (a video is available for this), and then we will explain the cause and apply the fix. So without further ado, let's get started!

A common scenario where the error occurs

This type of error usually shows up beyond the initial development stages, when we start to have some more expressions in our templates, and we have typically started to use some of the lifecycle hooks like AfterViewInit.

Here is a simple example of a component that is already throwing this error, taken from this previous post:

This is a simple component that displays an Angular Material Data Table with a paginator, plus a loading indicator that gets displayed while we wait for the data to load.

Here is what this component looks like when the data is loaded:

And here is what the component looks like when the data is loading:

To find out why the "Expression has changed" error is being thrown in this situation, let's have a look at the simplified version of this component:

As we can see, we are using ngAfterViewInit(), because we want to get a reference to the Paginator page Observable, and the paginator is obtained using the @ViewChild() view query.

Whenever the user hits the paginator navigation buttons, an event is going to be emitted that triggers the loading of a new data page, by calling dataSource.loadLessons().

Note that the tap operator is the new pipeable version of the RxJs do operator!

Because the page Observable does not initially emit a value, we are emitting an initial value using startWith(). This causes the first page of data to be loaded, otherwise, data would only be loaded if the user clicks the paginator.

And here is the Data Source (simplified version):

As we can see, loadLessons() is emitting a new value for the loading$ Observable (it's setting the loading flag to true), and is doing so synchronously, before the asynchronous call to the backend.

Notice that this loading$ Observable is the one that is getting used in the ngIf expression that shows or hides the loading indicator.

This error tells us that there is a problem with an expression in the template, but the question is, which expression? And why does it cause an error?

Debugging "Expression has changed after it was checked"

The debugging process that we will go over below is also done here step by step in this video, where we also explain the cause of the error:

Here is how we can identify the problematic expression. In the Chrome Dev Tools console, we have a call stack that identifies where exactly the error occurred.

Let's click on the link available in the first line of the call stack:

at viewDebugError (core.js:9515)

This will open the DevTools Javascript Debugger in the line where the error occurred: let's then add a manual breakpoint on that line.

If we now reload the component and trigger the error again, the breakpoint will hit and we will get the following:

The program is now frozen at this point, and we can hover over the variables and go up and down the call stack, to see what is going on.

Notice the line 9515: that is where the error occurs, and the line number with the blue triangle is where we clicked to create the breakpoint.

We also have a call stack. If we start clicking on the functions up the call stack, we will see the function call to viewDebugError.

Identifying the previous value of the Expression

By highlighting the oldValue variable, we can see that the old value of the problematic expression was false, and according to the message it's now true:

Identifying the Problematic Expression

But what template expression is causing this error? If we keep clicking up the call stack, we are going to see that a template expression will appear:

As we can see, this is the ngIf expression that shows or hides the loading indicator: so this is the problematic template expression!

As we can see, the source maps generated by the Angular CLI are very useful to troubleshoot this kind of problem.

Understanding the "Expression has changed after it was checked" Error

This ngIf expression, at first sight, does not seem problematic. So why does this throw an error? Here is what happens:

the ngIf expression above is initially false because the data source is not loading any data, so loading$ emits false

if the loading$ Observable last emitted value is false, then the loading indicator should be hidden as no data is being loaded

while Angular is preparing to update the View based on the latest data changes, during that process it calls ngAfterViewInit, which triggers the loading of the first data page from the backend

loading the data would still take a while and it's an asynchronous operation, so the data will not arrive instantly

Here is the problem: as a synchronous call to dataSource.loadLessons() is made, a new true value of the loading$ flag is emitted immediately.

And it's this new value of the loading flag that accidentally triggers the error!

Let's learn why updating this flag during the view construction process is problematic.

The "View Updates Itself" Scenario

The problem here is that we have a situation where the view generation process (which ngAfterViewInit is a part of) is itself further modifying the data that we are trying to display in the first place:

the loading flag starts with false

we try to display that on the screen, by hiding the loading indicator

due to the way ngAfterViewInit is written, the act of displaying the data itself further modifies the data

after the view is built, the loading flag is now true

So which value does the loading flag have then, true or false? There is no way for Angular to decide, so it preventively throws this error, which only happens in Development Mode.

To learn more about the Angular Development Mode, have a look at this post. Right now, let's then see how we can fix this issue.

Understanding the Solution

So here is the solution: we can't use the paginator.page reference in ngAfterViewInit() and immediately call the Data Source, because that would trigger a further modification of the data before Angular could display it on the screen, making it unclear whether the value of the loading flag should be true or false.

In order to solve this issue, we need to let Angular first display the data with the loading flag set to false.

Then, at some future time, in a separate Javascript VM turn, we are going to call the Data Source loadLessons() method, which will cause the loading flag to be set to true and the loading indicator to get displayed.

Initial implementation of the solution

In order to defer the code inside ngAfterViewInit to another Javascript turn, here is one initial implementation that will help us to understand the solution better:
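The snippet itself is not reproduced here, so the following is a minimal stand-in sketch with the Angular wiring stripped down to a plain class (loadLessonsPage() is a stand-in for the call to dataSource.loadLessons() described in the article):

```typescript
// Minimal sketch: defer the data load with setTimeout() so it runs in a
// later Javascript VM turn, after Angular finishes the current rendering
// pass. (Plain class stand-in; the real component uses Angular lifecycle
// hooks and a real Data Source.)
class CourseComponentSketch {
  loadCalls = 0;

  // stand-in for dataSource.loadLessons(...)
  loadLessonsPage(): void {
    this.loadCalls++;
  }

  ngAfterViewInit(): void {
    // no timeout value given: the callback runs as soon as the current
    // VM turn completes
    setTimeout(() => this.loadLessonsPage());
  }
}
```

The key point is that nothing is loaded synchronously inside ngAfterViewInit(), so the view can first be rendered with the loading flag still false.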

This already solves the problem: we don't have an error anymore!

As we can see, we are using setTimeout() to defer this code to another Javascript Virtual Machine turn, and notice that we are not even specifying a value for the timeout.

Let's now have a look at an alternative implementation with less code nesting, and then we will explain why this fixes the issue.

An alternative using RxJs

This is an alternative version that looks better due to less code nesting, and uses the RxJs pipeable operator delay:
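Since the snippet is not reproduced here, this is a hedged sketch of what such an implementation could look like, assuming the paginator.page observable, a loadLessonsPage() helper method, and the RxJS startWith, delay and tap operators:

```typescript
// Sketch only: delay(0) (which internally also uses setTimeout) defers the
// initial load past the current change detection pass.
ngAfterViewInit() {
  this.paginator.page
    .pipe(
      startWith(null),   // trigger an initial load, not only on page events
      delay(0),          // defer to the next Javascript VM turn
      tap(() => this.loadLessonsPage())
    )
    .subscribe();
}
```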

How does setTimeout or delay(0) fix this problem?

Here is why the code above fixes the issue:

The initial value of the flag is false, and so the loading indicator will NOT be displayed initially

ngAfterViewInit() gets called, but the data source is not immediately called, so no modifications of the loading indicator will be made synchronously via ngAfterViewInit()

Angular then finishes rendering the view and reflects the latest data changes on the screen, and the Javascript VM turn completes

One moment later, the setTimeout() call (also used inside delay(0)) is triggered, and only then the data source loads its data

the loading flag is set to true, and the loading indicator will now be displayed

Angular finishes rendering the view, and reflects the latest changes on the screen, which causes the loading indicator to get displayed

No error occurs this time around, and so this fixes the error message.

Moving the initialization of the data to ngOnInit()

But in this case, an even better solution exists! The core of the problem is that we are modifying the data being displayed (the loading flag) inside ngAfterViewInit().

So let's remove the call to startWith(null) that loads the initial page, and instead, let's trigger the loading of the initial data in ngOnInit():
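As before, here is a plain-class sketch of this version (the real component uses Angular lifecycle hooks; loadLessonsPage() stands in for the Data Source call):

```typescript
// Sketch of the final fix: the initial page load moves to ngOnInit(), which
// runs BEFORE the first change detection pass, so no template data is
// modified during view generation.
class CourseComponentFixSketch {
  loadCalls = 0;

  // stand-in for dataSource.loadLessons(...)
  loadLessonsPage(): void {
    this.loadCalls++;
  }

  ngOnInit(): void {
    this.loadLessonsPage();   // initial data load happens here now
  }

  ngAfterViewInit(): void {
    // only subscribe to paginator events here; no synchronous initial load
  }
}
```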

This also solves the issue: the error no longer occurs.

With this new version, there is no modification of the template data in the ngAfterViewInit() lifecycle hook, and so the problem does not occur.

Let's now wrap things up by talking about what would happen if this error were NOT thrown.

Conclusions

In summary, Angular protects us from building programs that are hard to maintain in the long-term and reason about, by throwing the error "Expression has changed after it was checked" (only in development mode).

Although a bit surprising at first sight, this error is very helpful!

Why "Expression has changed after it was checked" is useful

What would happen if the view generation process could itself modify the rendered data? This could be very problematic: to start with, we could even create an infinite loop!

More commonly, here is what would happen: imagine having a UI that behaves in an erratic way, where sometimes the user cannot see the data in our component, and randomly sees some previous version of the data.

Then the user clicks or hovers some unrelated UI elements which happen to trigger an event handler, and now another unrelated component is affected. This kind of error can be very hard to reproduce, troubleshoot and reason about.

One of the main goals of using a web framework like Angular is the guarantee that the data in our components will always get reflected correctly in the view, and that we don't have to do that synchronization ourselves.

The Angular Development Mode helps us to avoid building UIs that are hard to troubleshoot and reason about, by issuing this error message during development that helps us to fix the issue early in the development process.

I hope that this post helps with this type of error and that you enjoyed it! If you have some questions or comments please let me know in the comments below and I will get back to you.

To get notified of upcoming posts on multiple Angular topics, I invite you to subscribe to our newsletter:

Video Lessons Available on YouTube

Have a look at the Angular University YouTube channel, where we publish about a quarter to a third of our video tutorials; new videos are published all the time.

In this post, we are going to go through a complete example of how to use the Angular Material Data Table.

We are going to cover many of the most common use cases that revolve around the Angular Material Data Table component, such as: server-side pagination, sorting, and filtering.

This is a step-by-step tutorial, so I invite you to code along as we are going to start with a simple initial scenario. We will then progressively add features one by one and explain everything along the way (including gotchas).

We will learn in detail all about the reactive design principles involved in the design of the Angular Material Data Table and an Angular CDK Data Source.

The end result of this post will be:

a complete example of how to implement an Angular Material Data Table with server-side pagination, sorting and filtering using a custom CDK Data Source

a running example available on Github, which includes a small backend Express server that serves the paginated data

Table Of Contents

In this post, we will cover the following topics:

The Angular Material Data Table - not only for Material Design

The Material Data Table Reactive Design

The Material Paginator and Server-side Pagination

Sortable Headers and Server-side Sorting

Server-side Filtering with Material Input Box

A Loading Indicator

A Custom Angular Material CDK Data Source

Source Code (on Github) with the complete example

Conclusions

So without further ado, let's get started with our Material Data Table Guided Tour!

Importing Angular Material modules

In order to run our example, let's first import all the Angular Material modules that we will need:
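The import statements themselves are not shown above; a module configuration along these lines would cover everything used in this post (the module names are real Angular Material modules; the wrapping module name is hypothetical, and depending on the Angular Material version the imports may come from '@angular/material' rather than the per-component entry points):

```typescript
import {NgModule} from '@angular/core';
import {MatInputModule} from '@angular/material/input';
import {MatTableModule} from '@angular/material/table';
import {MatPaginatorModule} from '@angular/material/paginator';
import {MatSortModule} from '@angular/material/sort';
import {MatProgressSpinnerModule} from '@angular/material/progress-spinner';

@NgModule({
  imports: [
    MatInputModule,
    MatTableModule,
    MatPaginatorModule,
    MatSortModule,
    MatProgressSpinnerModule,
  ],
  exports: [
    MatInputModule,
    MatTableModule,
    MatPaginatorModule,
    MatSortModule,
    MatProgressSpinnerModule,
  ],
})
export class CourseMaterialModule {}
```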

Here is a breakdown of the contents of each Material module:

MatInputModule: this contains the components and directives for adding Material design Input Boxes to our application (needed for the search input box)

MatTableModule: this is the core data table module, which includes the mat-table component and many related components and directives

MatPaginatorModule: this is a generic pagination module, that can be used to paginate data in general. This module can also be used separately from the Data table, for example for implementing Detail pagination logic in a Master-Detail setup

MatSortModule: this is an optional module that allows adding sortable headers to a data table

MatProgressSpinnerModule: this module includes the progress indicator component that we will be using to indicate that data is being loaded from the backend

Introduction to the Angular Material Data Table

The Material Data Table component is a generic component for displaying tabulated data. Although we can easily give it a Material Design look and feel, this is actually not mandatory.

In fact, we can give the Angular Material Data table an alternative UI design if needed. To see that this is so, let's start by creating a Data Table where the table cells are just plain divs with no custom CSS applied.

This data table will display a list of course lessons, and has 3 columns (sequence number, description and duration):
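The template is not reproduced here; a sketch of it, with plain divs as cells and no Material styling yet, could look like this (the row definitions at the bottom are covered in detail further below):

```html
<mat-table [dataSource]="dataSource">

  <ng-container matColumnDef="seqNo">
    <div *matHeaderCellDef>#</div>
    <div *matCellDef="let lesson">{{lesson.seqNo}}</div>
  </ng-container>

  <ng-container matColumnDef="description">
    <div *matHeaderCellDef>Description</div>
    <div *matCellDef="let lesson">{{lesson.description}}</div>
  </ng-container>

  <ng-container matColumnDef="duration">
    <div *matHeaderCellDef>Duration</div>
    <div *matCellDef="let lesson">{{lesson.duration}}</div>
  </ng-container>

  <mat-header-row *matHeaderRowDef="displayedColumns"></mat-header-row>
  <mat-row *matRowDef="let row; columns: displayedColumns"></mat-row>

</mat-table>
```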

Material Data Table Column Definitions

As we can see, this table defines 3 columns, each inside its own ng-container element. The ng-container element will NOT be rendered to the screen (see this post for more details), but it will provide an element for applying the matColumnDef directive.

The matColumnDef directive uniquely identifies a given column with a key: seqNo, description or duration. Inside the ng-container element, we will have all the configuration for a given column.

Notice that the order of the ng-container elements does NOT determine the column visual order

The Material Data Table Auxiliary Definition Directives

The Material Data Table has a series of auxiliary structural directives (applied using the *directiveName syntax) that allow us to mark certain template sections as having a certain role in the overall data table design.

These directives always end with the Def postfix, and they are used to assign a role to a template section. The first two directives that we will cover are matHeaderCellDef and matCellDef.

The matHeaderCellDef and matCellDef Directives

Inside of each ng-container with a given column definition, there are a couple of configuration elements:

we have the template that defines how to display the header of a given column, identified via the matHeaderCellDef structural directive

we also have another template that defines how to display the data cells of a given column, using the matCellDef structural directive

These two structural directives only identify which template elements have a given role (cell template, header template), but they do not attach any styling to those elements.

For example, in this case, matCellDef and matHeaderCellDef are being applied to plain divs with no styling, so this is why this table does not have a Material design yet.

Applying a Material Design to the Data Table

Let's now see what it would take to give this Data Table a Material Look and Feel. For that, we will use a couple of built-in components in our header and cell template definitions:

This template is almost the same as the one we saw before, but now we are using the mat-header-cell and mat-cell components inside our column definition instead of plain divs.

Using these components, let's now have a look at what the Data Table looks like with this new Material Design:

Notice that the table already has some data! We will get to the data source in a moment, right now let's continue exploring the rest of the template.

The matCellDef Directive

The data cell template has access to the data that is being displayed. In this case, our data table is displaying a list of lessons, so the lesson object in each row is accessible via the let lesson syntax, and can be used in the template just like any component variable.

The mat-header-row component and the matHeaderRowDef directive

This combination of related component / directive works in the following way:

the matHeaderRowDef identifies a configuration element for the table header row, but it does not apply any styling to the element

The mat-header-row on the other hand applies some minimal Material styling

The matHeaderRowDef directive also defines in which order the columns should be displayed. In our case, the directive expression is pointing to a component variable named displayedColumns.

Here is what the displayedColumns component variable will look like:
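Based on the three columns defined above, the array could look like this (shown as a standalone constant for illustration; in the component it is a class property):

```typescript
// Column keys, in the visual order we want them displayed.
// These must match the matColumnDef values in the template.
const displayedColumns = ['seqNo', 'description', 'duration'];
```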

The values of this array are the column keys, which need to be identical to the names of the ng-container column sections (specified via the matColumnDef directive).

Note: It's this array that determines the visual order of the columns!

The mat-row component and the matRowDef directive

This component / directive pair also works in a similar way to what we have seen in previous cases:

matRowDef identifies which element inside mat-table provides configuration for how a data row should look, without providing any specific styling

on the other hand, mat-row will provide some Material styling to the data row

With mat-row, we also have a variable exported that we have named row, containing the data of a given data row, and we have to specify the columns property, which contains the order in which the data cells should be defined.

Interacting with a given table data row

We can even use the element identified by the matRowDef directive to interact with a given data row. For example, this is how we can detect if a given data row was clicked:

When a row is clicked, we will call the onRowClicked() component method, that will then log the row data to the console:

If we now click on the first row of our data table, here is what the result will look like on the console:

As we can see the data for the first row is being printed to the console, as expected! But where is this data coming from?

To answer that, let's then talk about the data source that is linked to this data table, and go over the Material Data Table reactive design.

Data Sources and the Data Table Reactive Design

The data table that we have been presenting receives the data that it displays from a Data Source that implements an Observable-based API and follows common reactive design principles.

This means for example that the data table component does not know where the data is coming from. The data could be coming for example from the backend, or from a client-side cache, but that is transparent to the Data table.

The Data table simply subscribes to an Observable provided by the Data Source. When that Observable emits a new value, it will contain a list of lessons that then gets displayed in the data table.

Data Table core design principles

With this Observable-based API, not only does the Data Table not know where the data is coming from, it also does not know what triggered the arrival of new data.

Here are some possible causes for the emission of new data:

the data table is initially displayed

the user clicks on a paginator button

the user sorts the data by clicking on a sortable header

the user types a search using an input box

Again, the Data Table has no information about exactly which event caused new data to arrive, which allows the Data Table components and directives to focus only on displaying the data, and not fetching it.

Let's then see how we can implement such a reactive data source.

Why not use MatTableDataSource?

In this example, we will not be using the built-in MatTableDataSource because it's designed for filtering, sorting and pagination of a client-side data array.

In our case, all the filtering, sorting and pagination will be happening on the server, so we will be building our own Angular CDK reactive data source from first principles.

Fetching Data from the backend

In order to fetch data from the backend, our custom Data Source is going to be using the LessonsService. This is a standard Observable-based stateless singleton service that is built internally using the Angular HTTP Client.

Let's have a look at this service, and break down how it's implemented:

Breaking down the LessonsService implementation

As we can see, this service is completely stateless, and every method forwards calls to the backend using the HTTP client, and returns an Observable to the caller.

Our REST API is available in URLs under the /api directory, and multiple services are available (here is the complete implementation).

In this snippet, we are just showing the findLessons() method, which allows us to obtain one filtered and sorted page of lessons data for a given course.

Here are the arguments that we can pass to this function:

courseId: This identifies a given course, for which we want to retrieve a page of lessons

filter: This is a search string that will help us filter the results. If we pass the empty string '' it means that no filtering is done on the server

sortOrder: our backend allows us to sort based on the seqNo column, and with this parameter, we can specify whether the sort order is ascending (the default value, asc) or descending, by passing the value desc

pageNumber: With the results filtered and sorted, we are going to specify which page of that full list of results we need. The default is to return the first page (with index 0)

pageSize: this specifies the page size, which defaults to a maximum of 3 elements

With these arguments, the findLessons() method will then build an HTTP GET call to the backend endpoint available at /api/lessons.

Here is what an HTTP GET call that fetches the lessons for the first page looks like:

As we can see, we are appending a series of HTTP query parameters to the GET URL using the HTTPParams fluent API.
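As an illustration of the resulting request (the real service uses Angular's HttpClient with HttpParams; this standalone sketch just assembles the equivalent URL, with a hypothetical course id):

```typescript
// Builds the GET URL for one page of lessons, mirroring the query parameters
// described above (defaults: no filter, ascending sort, first page, size 3).
function lessonsUrl(
  courseId: number,
  filter = '',
  sortOrder = 'asc',
  pageNumber = 0,
  pageSize = 3
): string {
  const params = new URLSearchParams({
    courseId: String(courseId),
    filter,
    sortOrder,
    pageNumber: String(pageNumber),
    pageSize: String(pageSize),
  });
  return `/api/lessons?${params}`;
}

// e.g. lessonsUrl(5) =>
//   "/api/lessons?courseId=5&filter=&sortOrder=asc&pageNumber=0&pageSize=3"
```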

This findLessons() method will be the basis of our Data Source, as it will allow us to cover the server pagination, sorting and filtering use cases.

Implementing a Custom Angular CDK Data Source

Using the LessonsService, let's now implement a custom Observable-based Angular CDK Data Source. Here is some initial code, so that we can discuss its Reactive design (the full version is shown in a moment):
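The snippet is not reproduced here; an initial skeleton consistent with the description below could look like this (assuming a Lesson model and the LessonsService from the previous section; import paths shown for recent RxJS versions):

```typescript
import {CollectionViewer, DataSource} from '@angular/cdk/collections';
import {BehaviorSubject, Observable} from 'rxjs';

export class LessonsDataSource implements DataSource<Lesson> {

  // internal subject emitting each page of lessons fetched from the backend
  private lessonsSubject = new BehaviorSubject<Lesson[]>([]);

  constructor(private lessonsService: LessonsService) {}

  connect(collectionViewer: CollectionViewer): Observable<Lesson[]> {
    return this.lessonsSubject.asObservable();
  }

  disconnect(collectionViewer: CollectionViewer): void {
    this.lessonsSubject.complete();
  }
}
```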

Breaking down the design of an Angular CDK Data Source

As we can see, in order to create a Data Source we need to create a class that implements DataSource. This means that this class needs to implement a couple of methods: connect() and disconnect().

Note that these methods take a CollectionViewer argument, which provides an Observable that emits information about what data is being displayed (the start index and the end index).

We recommend not focusing too much on the CollectionViewer for now, but rather on something much more important for understanding the whole design: the return value of the connect() method.

How to implement the DataSource connect() method

This method will be called once by the Data Table at table bootstrap time. The Data Table expects this method to return an Observable, and the values of that observable contain the data that the Data Table needs to display.

In this case, this observable will emit a list of Lessons. As the user clicks on the paginator and changes to a new page, this observable will emit a new value with the new lessons page.

We will implement this method by using a subject that is going to be invisible outside this class. That subject (the lessonsSubject) is going to be emitting the values retrieved from the backend.

The lessonsSubject is a BehaviorSubject, which means its subscribers will always get its latest emitted value (or an initial value), even if they subscribed late (after the value was emitted).

Why use BehaviorSubject?

Using BehaviorSubject is a great way of writing code that works independently of the order that we use to perform asynchronous operations such as: calling the backend, binding the data table to the data source, etc.

For example, in this design, the Data Source is not aware of the data table or at which moment the Data Table will require the data. Because the data table subscribed to the connect() observable, it will eventually get the data, even if:

the data is still in transit coming from the HTTP backend

or if the data was already loaded
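This "late subscriber still gets the data" behavior can be demonstrated with a tiny dependency-free stand-in for BehaviorSubject (the real class comes from RxJS):

```typescript
// Minimal BehaviorSubject-like sketch: it remembers the latest value and
// replays it to anyone who subscribes late.
class MiniBehaviorSubject<T> {
  private listeners: Array<(value: T) => void> = [];

  constructor(private current: T) {}

  next(value: T): void {
    this.current = value;
    this.listeners.forEach(fn => fn(value));
  }

  subscribe(fn: (value: T) => void): void {
    fn(this.current);          // replay the latest value on subscription
    this.listeners.push(fn);
  }
}
```

A late subscriber (like the data table binding to connect()) still receives the lessons page that was emitted before it subscribed.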

Custom Material CDK Data Source - Full Implementation Review

Now that we understand the reactive design of the data source, let's have a look at the complete final implementation and review it step-by-step.

Notice that in this final implementation, we have also included the notion of a loading flag, that we will use to display a spinning loading indicator to the user later on:

Data Source Loading Indicator Implementation Breakdown

Let's break down this code, starting with the implementation of the loading indicator. Because this Data Source class has a reactive design, we implement the loading flag by exposing a boolean observable called loading$.

This observable will emit as first value false (which is defined in the BehaviorSubject constructor), meaning that no data is loading initially.

The loading$ observable is derived using asObservable() from a subject that is kept private to the data source class. The idea is that only this class knows when data is loading, so only this class can access the subject and emit new values for the loading flag.
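In code, this fragment of the data source class could look like this (sketch):

```typescript
// Private: only the data source itself can emit loading state changes.
private loadingSubject = new BehaviorSubject<boolean>(false);

// Public, read-only view of the loading state for the template to consume.
public loading$ = this.loadingSubject.asObservable();
```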

The connect() method implementation

Let's now focus on the implementation of the connect method:

This method will need to return an Observable that emits the lessons data, but we don't want to expose the internal subject lessonsSubject directly.

Exposing the subject would mean yielding control of when and what data gets emitted by the data source, and we want to avoid that. We want to ensure that only this class can emit values for the lessons data.

So we are also going to return an Observable derived from lessonsSubject using the asObservable() method. This gives the data table (or any other subscriber) the ability to subscribe to the lessons data observable, without being able to emit values for that same observable.
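Putting the points above together, the method could look like this (sketch, using the names from the skeleton):

```typescript
connect(collectionViewer: CollectionViewer): Observable<Lesson[]> {
  // a derived, read-only observable: subscribers receive lessons pages,
  // but cannot emit values into the internal subject
  return this.lessonsSubject.asObservable();
}
```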

The disconnect() method implementation

Let's now break down the implementation of the disconnect method:

This method is called once by the data table at component destruction time. In this method, we are going to complete any observables that we have created internally in this class, in order to avoid memory leaks.

We are going to complete both the lessonsSubject and the loadingSubject, which are then going to trigger the completion of any derived observables.
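A sketch of this cleanup logic:

```typescript
disconnect(collectionViewer: CollectionViewer): void {
  // complete the internal subjects to release subscribers and avoid leaks
  this.lessonsSubject.complete();
  this.loadingSubject.complete();
}
```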

The loadLessons() method implementation

Finally, let's now focus on the implementation of the loadLessons method:

The Data Source exposes this public method named loadLessons(). This method is going to be called in response to multiple user actions (pagination, sorting, filtering) to load a given data page.

Here is how this method works:

the first thing that we will do is to report that some data is being loaded, by emitting true to the loadingSubject, which will cause loading$ to also emit true

the LessonsService is going to be used to get a data page from the REST backend

a call to findLessons() is made, that returns an Observable

by subscribing to that observable, we trigger an HTTP request

if the data arrives successfully from the backend, we are going to emit it back to the data table, via the connect() Observable

for that, we will call next() on the lessonsSubject with the lessons data

the derived lessons observable returned by connect() will then emit the lessons data to the data table
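The steps above, together with the error handling covered next, could be sketched like this (assuming RxJS's catchError, finalize and of, and the findLessons() service method from earlier):

```typescript
loadLessons(courseId: number, filter = '', sortOrder = 'asc',
            pageNumber = 0, pageSize = 3): void {

  // report that a load is in progress (loading$ emits true)
  this.loadingSubject.next(true);

  this.lessonsService
    .findLessons(courseId, filter, sortOrder, pageNumber, pageSize)
    .pipe(
      // on backend error, fall back to an empty page instead of erroring out
      catchError(() => of([])),
      // success or failure, the load is over: loading$ emits false
      finalize(() => this.loadingSubject.next(false))
    )
    // subscribing triggers the HTTP request; emit the page to the data table
    .subscribe(lessons => this.lessonsSubject.next(lessons));
}
```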

Handling Backend Errors

Let's now see, still in the loadLessons() method, how the Data Source handles backend errors, and how the loading indicator is managed:

if an error in the HTTP request occurs, the Observable returned by findLessons() will error out

If that occurs, we are going to catch that error using catchError() and we are going to return an Observable that emits the empty array using of

we could also use a complementary MessagesService to show a closable error popup to the user

whether the call to the backend succeeds or fails, we will in both cases have the loading$ Observable emit false by using finalize() (which works like finally in plain Javascript try/catch/finally)

And with this last bit, we have completed the review of our custom Data Source!

This version of the data source will support all our use cases: pagination, sorting and filtering. As we can see, the design is all about providing data transparently to the Data Table using an Observable-based API.

Let's now see how we can take this Data Source and plug it into the Data Table.

Linking a Data Source with the Data Table

The Data Table will be displayed as part of the template of a component. Let's write an initial version of that component, that displays the first page of lessons:
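The component code is not reproduced here; a sketch consistent with the breakdown below could look like this (selector and the way the course id is obtained are illustrative):

```typescript
@Component({
  selector: 'course',
  templateUrl: './course.component.html'
})
export class CourseComponent implements OnInit {

  // visual order of the columns, matching the matColumnDef keys
  displayedColumns = ['seqNo', 'description', 'duration'];

  // passed to mat-table via [dataSource] in the template
  dataSource: LessonsDataSource;

  courseId: number;   // obtained e.g. from the route (omitted here)

  constructor(private lessonsService: LessonsService) {}

  ngOnInit() {
    this.dataSource = new LessonsDataSource(this.lessonsService);
    // load the first page: no filter, ascending order, page 0, size 3
    this.dataSource.loadLessons(this.courseId, '', 'asc', 0, 3);
  }
}
```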

This component contains a couple of properties:

the displayedColumns array defines the visual order of the columns

The dataSource property defines an instance of LessonsDataSource, and that is being passed to mat-table via the template

Breaking down the ngOnInit method

In the ngOnInit method, we are calling the Data Source loadLessons() method to trigger the loading of the first lessons page. Let's detail what happens as a result of that call:

The Data Source calls the LessonsService, which triggers an HTTP request to fetch the data

The Data Source then emits the data via the lessonsSubject, which causes the Observable returned by connect() to emit the lessons page

The mat-table Data Table component has subscribed to the connect() observable and retrieves the new lessons page

The Data Table then displays the new lessons page, without knowing where the data came from or what triggered its arrival

And with this "glue" component in place, we now have a working Data Table that displays server data!

The problem is that this initial example is always loading only the first page of data, with a page size of 3 and with no search criteria.

Let's use this example as a starting point, and start adding: a loading indicator, pagination, sorting, and filtering.

Displaying a Material Loading Indicator

In order to display the loading indicator, we are going to be using the loading$ observable of the Data Source. We will be using the mat-spinner Material component:
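A sketch of the template fragment, using the data source's loading$ observable:

```html
<!-- shown only while loading$ emits true -->
<mat-spinner *ngIf="dataSource.loading$ | async"></mat-spinner>
```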

As we can see, we are using the async pipe and ngIf to show or hide the material loading indicator. Here is what the table looks like while the data is loading:

We will also be using the loading indicator when transitioning between two data pages using pagination, sorting or filtering.

Adding a Data Table Material Paginator

The Material Paginator component that we will be using is a generic paginator that comes with an Observable-based API. This paginator could be used to paginate anything, and it's not specifically linked to the Data Table.

For example, on a Master-Detail component setup, we could use this paginator to navigate between two detail elements.

This is how the mat-paginator component can be used in a template:
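A sketch of the paginator in the template (the page size values are illustrative):

```html
<mat-paginator [length]="course.lessonsCount"
               [pageSize]="3"
               [pageSizeOptions]="[3, 5, 10]">
</mat-paginator>
```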

As we can see, there is nothing in the template linking the paginator with either the Data Source or the Data Table - that connection will be done at the level of the CourseComponent.

The paginator only needs to know how many total items are being paginated (via the length property), in order to know how many total pages there are!

It's based on that information (plus the current page index) that the paginator will enable or disable the navigation buttons.

In order to pass that information to the paginator, we are using the lessonsCount property of a new course object.

How to Link the Material Paginator to the Data Source

Let's now have a look at the CourseComponent, to see where course is coming from and how the paginator is linked to the Data Source:
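The component code is not reproduced here; a sketch consistent with the breakdown below could look like this (assuming RxJS's tap operator and the router data resolver described next):

```typescript
export class CourseComponent implements OnInit, AfterViewInit {

  course: Course;
  dataSource: LessonsDataSource;

  @ViewChild(MatPaginator) paginator: MatPaginator;

  constructor(private route: ActivatedRoute,
              private lessonsService: LessonsService) {}

  ngOnInit() {
    // the course object was pre-fetched by a router data resolver
    this.course = this.route.snapshot.data['course'];
    this.dataSource = new LessonsDataSource(this.lessonsService);
    // load the first page of data directly
    this.dataSource.loadLessons(this.course.id, '', 'asc', 0, 3);
  }

  ngAfterViewInit() {
    // reload a page whenever the user interacts with the paginator
    this.paginator.page
      .pipe(tap(() => this.loadLessonsPage()))
      .subscribe();
  }

  loadLessonsPage() {
    this.dataSource.loadLessons(
      this.course.id, '', 'asc',
      this.paginator.pageIndex, this.paginator.pageSize);
  }
}
```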

Breaking down the ngOnInit() method

Let's start with the course object: as we can see this object is available at component construction time via the router.

This data object was retrieved from the backend at router navigation time using a router Data Resolver (see an example here).

This is a very common design that ensures that the target navigation screen already has some pre-fetched data ready to display.

We are also loading the first page of data directly in this method.

How is the Paginator linked to the Data Source?

We can see in the code above that the link between the paginator and the Data Source is done in the ngAfterViewInit() method, so let's break it down:

We are using the AfterViewInit lifecycle hook because we need to make sure that the paginator component queried via @ViewChild is already available.

The paginator also has an Observable-based API, and it exposes a page Observable. This observable will emit a new value every time that the user clicks on the paginator navigation buttons or the page size dropdown.

So in order to load new pages in response to a pagination event, all we have to do is to subscribe to this observable, and in response to a pagination event, we are going to make a call to the Data Source loadLessons() method, by calling loadLessonsPage().

In that call to loadLessons(), we are going to pass to the Data Source what page index we would like to load, and what page size, and that information is taken directly from the paginator.

Why have we used the tap() operator?

We could also have done the call to the data source from inside a subscribe() handler, but in this case, we have implemented that call using the pipeable version of the RxJs do operator called tap.

View the Paginator in Action

And with this in place, we now have a working Material Paginator! Here is what the Material Paginator looks like on the screen, while displaying page 2 of the lessons list:

Let's now continue to add more features to our example, let's add another very commonly needed feature: sortable table headers.

Adding Sortable Material Headers

In order to add sortable headers to our Data Table, we will need to annotate it with the matSort directive. In this case, we will make only one column in the table sortable, the seqNo column.

Here is what the template with all the multiple sort-related directives looks like:
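A sketch of the relevant template fragment (non-sortable columns and row definitions are unchanged from before):

```html
<mat-table [dataSource]="dataSource"
           matSort
           matSortActive="seqNo"
           matSortDirection="asc"
           matSortDisableClear>

  <ng-container matColumnDef="seqNo">
    <mat-header-cell *matHeaderCellDef mat-sort-header>#</mat-header-cell>
    <mat-cell *matCellDef="let lesson">{{lesson.seqNo}}</mat-cell>
  </ng-container>

  <!-- remaining (non-sortable) columns and row definitions as before -->

</mat-table>
```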

Besides the matSort directive, we are also adding a couple of extra auxiliary sort-related directives to the mat-table component:

matSortActive: When the data is passed to the Data Table, it's usually already sorted. This directive allows us to inform the Data Table that the data is already initially sorted by the seqNo column, so the seqNo column sorting icon will be displayed as an upwards arrow

matSortDirection: This is a companion directive to matSortActive, it specifies the direction of the initial sort. In this case, the data is initially sorted by the seqNo column in ascending order, and so the column header will adapt the sorting icon accordingly (screenshot below)

matSortDisableClear: Sometimes, besides ascending and descending order we might want a third "unsorted" state for the sortable column header, where we can clear the sorting order. In this case, we want to disable that, to make sure the seqNo column always shows either the ascending or the descending state

This is the sort configuration for the whole data table, but we also need to identify exactly what table headers are sortable!

In our case, only the seqNo column is sortable, so we are annotating the column header cell with the mat-sort-header directive.

And this covers the template changes, let's now have a look at the changes we made to the CourseComponent in order to enable table header sorting.

Linking the Sortable column header to the Data Source

Just like the case of pagination, the sortable header will expose an Observable that emits values whenever the user clicks on the sortable column header.

The MatSort directive then exposes a sortChange Observable that can trigger a new page load in the following way:
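A sketch of the updated ngAfterViewInit() wiring (assuming RxJS's merge and tap, a MatSort injected via @ViewChild as this.sort, and the loadLessonsPage() helper):

```typescript
ngAfterViewInit() {

  // after a sort change, go back to the first page of the newly sorted data
  this.sort.sortChange.subscribe(() => this.paginator.pageIndex = 0);

  // a new page load is triggered by either a sort event or a page event
  merge(this.sort.sortChange, this.paginator.page)
    .pipe(tap(() => this.loadLessonsPage()))
    .subscribe();
}
```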

As we can see, the sort Observable is now being merged with the page observable! Now a new page load will be triggered in two cases:

when a pagination event occurs

when a sort event occurs

The sort direction of the seqNo column is now taken from the sort directive (injected via @ViewChild()) and passed to the backend.

Notice that after each sort we are also resetting the paginator, by forcing the first page of the sorted data to be displayed.

The Material Sort Header In Action

Here is what the Data Table with sortable headers looks like, after loading the data and clicking the sortable header (triggering a descending sort by seqNo):

Notice the sort icon on the seqNo column

At this point, we have server pagination and sorting in place. We are now ready to add the final major feature: server-side filtering.

Adding Server-Side Filtering

In order to implement server-side filtering, the first thing that we need to do is to add a search box to our template.

And because this is the final version, let's then display the complete template with all its features: pagination, sorting and also server-side filtering:

Breaking down the Search Box implementation

As we can see, the only new part in this final template version is the mat-input-container, containing the Material Input box where the user types the search query.

This input box follows a common pattern found in the Material library: The mat-input-container is wrapping a plain HTML input and projecting it.

This gives us full access to all standard input properties including for example all the Accessibility-related properties. This also gives us compatibility with Angular Forms, as we can apply Form directives directly in the input HTML element.

Notice that there is not even an event handler attached to this input box! Let's then have a look at the component and see how this works.
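The template for the search box is not included in this extract; a hedged sketch of its likely structure, where the placeholder text and the `#input` template reference name are assumptions (the reference matches the @ViewChild('input') injection discussed in the component):

```html
<!-- Sketch: a plain HTML input projected into a Material input container;
     note that there is no event binding on the input itself -->
<mat-input-container>
  <input matInput placeholder="Search lessons" #input>
</mat-input-container>
```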

Final Component with Server Pagination, Sorting and Filtering

This is the final version of CourseComponent with all features included:

Let's then focus on breaking down the server filtering part.

Getting a reference to the Search Input Box

We can see that we have injected a DOM reference to the <input> element using @ViewChild('input'). Notice that this time around, the injection mechanism gave us a reference to a DOM element and not to a component.

With that DOM reference, here is the part that triggers a server-side search when the user types in a new query:

What we are doing in this snippet is the following: we take the search input box and create an Observable from it using fromEvent.

This Observable will emit a value every time a new keyup event occurs. We then apply a couple of operators to this Observable:

debounceTime(150): The user can type quite quickly in the input box, and that could trigger a lot of server requests. With this operator, we limit the number of search requests to at most one every 150ms.

distinctUntilChanged(): This operator eliminates consecutive duplicate values, so we don't trigger a new search for the same query twice in a row

And with these two operators in place, we can now trigger a page load by passing the query string, the page size and page index to the Data Source via the tap() operator.
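To make the contribution of distinctUntilChanged concrete, here is a hand-rolled, framework-free sketch of the filtering it performs on the (already debounced) stream of search values. The function name and sample values are illustrative; the real operator comes from RxJS and works on Observables, not arrays:

```typescript
// distinctUntilChanged, conceptually: drop a value if it equals
// the previously emitted value.
function distinctUntilChanged<T>(values: T[]): T[] {
  const out: T[] = [];
  for (const v of values) {
    if (out.length === 0 || out[out.length - 1] !== v) {
      out.push(v);
    }
  }
  return out;
}

// Simulated search-box values after debouncing: the user typed "hello",
// deleted a character, then retyped it (producing a repeated "hello").
const debounced = ['he', 'hello', 'hello', 'hell', 'hello'];
console.log(distinctUntilChanged(debounced));
// -> [ 'he', 'hello', 'hell', 'hello' ]
```

Only the consecutive duplicate is dropped: searching again for "hello" after an intermediate "hell" is still a new search.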

Let's now have a look at what the screen would look like if the user types the search term "hello":

And with this in place, we have completed our example! We now have a complete solution for how to implement an Angular Material Data Table with server-side pagination, sorting and filtering.

Let's now quickly summarize what we have learned.

Conclusions

The Data Table, the Data Source and related components are a good example of a reactive design that uses an Observable-based API. Let's highlight the key points of the design:

the Material Data Table expects to receive the data from the Data Source via an Observable

The Data Source main role is to build and provide an Observable that emits new versions of the tabular data to the Data Table

A component class like CourseComponent will then "glue" everything together

This reactive design helps to ensure the loose coupling of the multiple elements involved, and provides a strong separation of concerns.

Source Code + Github Running Example

A running example of the complete code is available here on this branch on Github, and it includes a small backend Express server that serves the data and does the server-side sorting/pagination/filtering.

I hope that this post helps with getting started with the Angular Material Data Table and that you enjoyed it!

If you have some questions or comments please let me know in the comments below and I will get back to you.

To get notified of upcoming posts on Angular Material and other Angular topics, I invite you to subscribe to our newsletter:

Video Lessons Available on YouTube

]]>https://blog.angular-university.io/angular-service-worker/7b5dff9a-b7cb-46ec-b42a-fa7888b79ddfMon, 18 Dec 2017 08:00:00 GMTWith the Angular Service Worker and the Angular CLI built-in PWA support, it's now simpler than ever to make our web application downloadable and installable, just like a native mobile application.

In this post, we will cover how we can configure the Angular CLI build pipeline to generate applications that in production mode are downloadable and installable, just like native apps.

We will also add an App Manifest to our PWA, and make the application one-click installable.

I invite you to code along, as we will scaffold an application from scratch using the Angular CLI and we will configure it step-by-step to enable this feature that so far has been exclusive to native apps.

We will also see in detail what the CLI is doing so that you can also add the Service Worker to an already existing application if needed.

Along the way, we will learn about the Angular Service Worker design and how it works under the hood, and see how it works in a way that is quite different than other build-time generated service workers.

Better than current Native Mobile Installation

This service worker download & installation experience that you are about to see in action all happens in the background, without disturbing the user experience, and is actually much better than the current native mobile mechanism that we have for version upgrades.

This PWA-based mechanism even has implicit support for incremental version upgrades - if for example, we change only the CSS, then only the new CSS needs to be reinstalled, instead of having to install the whole application again!

Also, version upgrades can be handled transparently in the background and in a user-friendly way. The user will always see only one version of the application across the multiple tabs that they have opened, but we can also prompt the user and ask if they want an immediate version upgrade.

Performance Benefits and Offline Support

The performance benefits of installing all our Javascript and CSS bundles on the user browser make application bootstrapping much faster. How much faster? This could go from several times faster to an order of magnitude faster, depending on the application.

Any application, in general, will benefit from the performance boost enabled by this PWA download and installation feature, this is not exclusive to mobile applications.

Having the complete web application downloaded and installed on the user browser is also the first step for enabling application offline mode, but note that a complete offline experience requires more than just the download and install feature.

As we can see, the multiple advantages of this new PWA-based application installation feature are huge! So let's go through this awesome feature in detail.

With a couple of commands, the CLI will give us a working application that has Download & Installation enabled. The first step to create an Angular PWA is to upgrade the Angular CLI to the latest version:

npm install -g @angular/cli@latest

If you want to try the latest features, it's also possible to get the next upcoming version:

npm install -g @angular/cli@next

And with this in place, we can now scaffold an Angular application and add Angular Service Worker support to it:

ng new angular-pwa-app --service-worker

We can also add the Angular Service Worker to an existing application using the following command:

ng add @angular/pwa --project <name of project as in angular.json>

Step 2 of 7 - Understanding How To Add Angular PWA Support Manually

The scaffolded application is almost identical to an application without PWA support. Let's see what this flag adds, in case you need to upgrade an application manually.

We can see that the @angular/service-worker package was added to package.json. Also, there is a new flag serviceWorker set to true in the CLI configuration file angular.json:
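The configuration sample is not shown in this extract; a minimal sketch of where the flag sits in angular.json (the project name is an assumption):

```json
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "options": {
            "serviceWorker": true
          }
        }
      }
    }
  }
}
```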

What does the serviceWorker flag do?

This flag will cause the production build to include a couple of extra files in the output dist folder:

The Angular Service Worker file ngsw-worker.js

The runtime configuration of the Angular Service Worker ngsw.json

Note that ngsw stands for Angular Service Worker

We will cover these two files in detail, right now let's see what else the CLI has added for PWA support.

What does the ServiceWorkerModule do?

The CLI has also included in our application root module the Service Worker module:
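The embedded snippet is not included in this extract; a sketch of the registration the CLI typically adds to the root module, where the service worker is only enabled in production builds:

```typescript
import { NgModule } from '@angular/core';
import { ServiceWorkerModule } from '@angular/service-worker';
import { environment } from '../environments/environment';

@NgModule({
  imports: [
    // registers ngsw-worker.js with the browser, production builds only
    ServiceWorkerModule.register('ngsw-worker.js', {
      enabled: environment.production
    })
  ]
})
export class AppModule {}
```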

This module provides a couple of injectable services:

SwUpdate for managing application version updates

SwPush for handling server Web Push notifications

More than that, this module registers the Angular Service Worker in the browser (if Service Worker support is available), by loading the ngsw-worker.js script in the user's browser via a call to navigator.serviceWorker.register().

The call to register() causes the ngsw-worker.js file to be loaded in a separate HTTP request. And with this in place, there is only one thing missing to turn our Angular application into a PWA.

The build configuration file ngsw-config.json

The CLI has also added a new configuration file called ngsw-config.json, which configures the Angular Service Worker runtime behavior, and the generated file comes with some intelligent defaults.

Depending on your application, you might not even have to edit this file!

Here is what the file looks like:
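The generated file is not shown in this extract; the CLI default looks approximately like this:

```json
{
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "resources": {
        "files": ["/favicon.ico", "/index.html", "/*.css", "/*.js"]
      }
    },
    {
      "name": "assets",
      "installMode": "lazy",
      "updateMode": "prefetch",
      "resources": {
        "files": ["/assets/**"]
      }
    }
  ]
}
```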

There is a lot going on here, so let's break it down step-by-step. This file contains the default caching behavior of the Angular Service Worker, which targets the application static asset files: the index.html, the CSS and Javascript bundles.

The Angular Service Worker can cache all sorts of content in the browser Cache Storage.

This is a Javascript-based key/value caching mechanism that is not related to the standard browser Cache-Control mechanism, and the two mechanisms can be used separately.

The goal of the assetGroups section of the configuration file is to configure exactly what HTTP requests get cached in Cache Storage by the Angular Service Worker, and there are two cache configuration entries:

one entry named app, for all single page application files (all the application index.html, CSS and Javascript bundles plus the favicon)

another entry named assets, for any other assets that are also shipped in the dist folder, such as images, for example, that are not strictly necessary to run every page

Caching static files that are the application itself

The files under the app section are the application: a single page is made of the combination of its index.html plus its CSS and Js bundles. These files are needed for every single page of the application and cannot be lazy loaded.

In the case of these files, we want to cache them as early and permanently as possible, and this is what the app caching configuration does.

The app files are going to be proactively downloaded and installed in the background by the Service Worker, and that is what the install mode prefetch means.

The Service worker will not wait for these files to be requested by the application, instead, it will download them ahead of time and cache them so that it can serve them the next time that they are requested.

This is a good strategy to adopt for the files that together make the application itself (the index.html, CSS and Javascript bundles) because we already know that we will need them all the time.

Caching other auxiliary static assets

On the other hand, the assets files are cached only if they are requested (meaning the install mode is lazy), but if they were ever requested once, and if a new version is available then they will be downloaded ahead of time (which is what update mode prefetch means).

Again this is a great strategy for any assets that get downloaded in a separate HTTP request such as images because they might not always be needed depending on the pages the user visits.

But if they were needed once, then it's likely that we will need the updated version as well, so we might as well download the new version ahead of time.

Again, these are the defaults, but we can adapt this to suit our own application. In the specific case of the app files, though, it's unlikely that we would want to use another strategy.

After all, the app caching configuration is the download and installation feature itself that we are looking for. If we use other files outside the bundles produced by the CLI, then we would want to adapt the configuration accordingly.

It's important to keep in mind that with these defaults, we already have a downloadable and installable application ready to go, so let's try it out!

Step 4 of 7 - Running and Understanding the PWA Production Build

Let's first add something visual to the application that clearly identifies a given version running in the user browser. For example, we can replace the contents of the app.component.html file with the following:
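The template shown in the original post is not included in this extract; any minimal template that makes the running version obvious will do, for example:

```html
<!-- Sketch: replace the default app.component.html content with a
     visible version marker -->
<h1>Hello Angular PWA - V1</h1>
```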

Let's now build this Hello world PWA app. The Angular Service Worker will only be available in production mode, so let's first do a production build of our application:

ng build --prod

This will take a moment, but after a while, we will have our application build available inside the dist folder.

The Production Build Folder

Let's have a look at what we have in our build folder; here are most of the files generated:

As we can see, the serviceWorker flag in the angular.json build configuration file has caused the Angular CLI to include a couple of extra files (highlighted in blue).

What is the ngsw-worker.js file?

This file is the Angular Service Worker itself. Like all Service Workers, it gets delivered via its own separate HTTP request, so that the browser can track if it has changed and apply the Service Worker lifecycle to it (covered in detail in this post).

It's the ServiceWorkerModule that will trigger the loading of this file indirectly, by calling navigator.serviceWorker.register().

Note that the Angular Service Worker file ngsw-worker.js will always be the same with each build, as it gets copied by the CLI straight from node_modules.

This file will then remain the same until you upgrade to a new Angular version that contains a new version of the Angular Service Worker.

What is the ngsw.json file?

This is the runtime configuration file that the Angular Service Worker will use. It is built from the ngsw-config.json file, and contains all the information needed by the Angular Service Worker to know at runtime which files it needs to cache, and when.

Here is what the ngsw.json runtime configuration file looks like:
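The file content is not included in this extract; a hedged, abbreviated sketch of its shape (the URLs and the truncated hash values are illustrative):

```json
{
  "configVersion": 1,
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "updateMode": "prefetch",
      "urls": ["/index.html", "/main.js", "/styles.css", "/favicon.ico"],
      "patterns": []
    }
  ],
  "hashTable": {
    "/index.html": "82b9...",
    "/main.js": "0a1b...",
    "/styles.css": "9f3c..."
  }
}
```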

As we can see, this file is an expanded version of the ngsw-config.json file, where all the wildcard URLs have been applied and replaced with the paths of any files that matched them.

How does the Angular Service Worker use the ngsw.json file?

The Angular Service Worker is going to load these files either proactively in the case of install mode prefetch, or as needed in the case of install mode lazy, and it will also store the files in Cache Storage.

This loading is going to happen in the background, as the user first loads the application. The next time that the user refreshes the page, then the Angular Service Worker is going to intercept the HTTP requests, and will serve the cached files instead of getting them from the network.

Note that each asset will have a hash entry in the hash table. If we make any modification to any of the files listed there (even if it's only one character), we will have a completely different hash in the following Angular CLI build.

The Angular Service Worker will then know that this file has a new version available on the server that needs to be loaded at the appropriate time.
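The hash-table comparison can be illustrated with a small, framework-free sketch (the function and the sample hashes are illustrative, not the Angular Service Worker's actual implementation):

```typescript
// Sketch: any entry whose hash differs between the old and new manifest
// needs to be re-downloaded; identical hashes can keep being served from
// Cache Storage.
type HashTable = Record<string, string>;

function changedFiles(oldTable: HashTable, newTable: HashTable): string[] {
  return Object.keys(newTable).filter((file) => oldTable[file] !== newTable[file]);
}

const v1: HashTable = {
  '/index.html': 'aaa111',
  '/styles.css': 'bbb222',
  '/main.js': 'ccc333',
};
const v2: HashTable = {
  '/index.html': 'aaa999', // changed: it links the new CSS bundle
  '/styles.css': 'ddd444', // changed: the CSS was edited
  '/main.js': 'ccc333',    // unchanged: served from cache
};

console.log(changedFiles(v1, v2)); // -> [ '/index.html', '/styles.css' ]
```

Only the changed files need to cross the network, which is what makes incremental version upgrades cheap.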

Now that we have a good overview of everything that is going on, let's see this in action!

Step 5 of 7 - Launching an Angular PWA in Production Mode

Let's then start the application in production mode, and in order to do that, we are going to need a small web server. A great choice is http-server, so let's install it:

npm install -g http-server

Let's then go into the dist folder, and start the application in production mode:

cd dist
http-server -c-1 .

The -c-1 option disables server caching. The server will normally run on port 8080, serving the production version of the application.

Note that if port 8080 is already taken, the application might be running on 8081, 8082, etc. The port used is logged to the console at startup time.

If you have a REST API running locally on another server for example in port 9000, you can also proxy any REST API calls to it with the following command:

http-server -c-1 --proxy http://localhost:9000 .

With the server running, let's then head over to http://localhost:8080, and see what we have running using the Chrome Dev Tools:

As we can see, we have now version V1 running and we have installed a Service Worker with source file ngsw-worker.js, as expected!

Where are the Javascript and CSS bundles stored?

All the Javascript and CSS files, plus even the index.html have all been downloaded in the background and installed in the browser for later use.

These files can all be found in Cache Storage, using the Chrome Dev Tools:

The Angular Service Worker will start serving the application files the next time you load the page. Try to hit refresh, you will likely notice that the application starts much faster.

Note that the performance improvement will be much more noticeable in production than on localhost

Taking the Application Offline

To confirm that the application is indeed downloaded and installed into the user browser, let's do one conclusive test: let's bring the server down by hitting Ctrl+C.

Let's now hit refresh after shutting down the http-server process: you might be surprised that the application is still running, we get the exact same screen!

It looks like all the Javascript and CSS bundle files that make up the application were fetched from somewhere other than the network, because the application is still running.

The only file that the browser still attempted to fetch from the network was the Service Worker file itself, which is expected (more on this in a moment).

Step 6 of 7 - Deploying a new Application Version, Understanding Version Management

This is a great feature, but isn't it a bit dangerous to cache everything? What if there is a bug and we want to ship a new version of the code?

Let's say that we made a small change to the application, like for example editing a global style in the styles.css file. Before running the production build again, let's keep the previous version of ngsw.json, so that we can see what changed.

Let's now run the production build again, and compare the generated ngsw.json file:

As we can see, the only thing that changed in the build output was the CSS bundle, all the remaining files are unchanged except for index.html (where the new bundle is being loaded).

How does the Angular Service Worker install new application versions?

Every time that the user reloads the application, the Angular Service Worker will check to see if there is a new ngsw.json file available on the server.

This is for consistency with the standard Service Worker behavior, and to avoid having stale versions of the application running for a long time. Stale versions could potentially contain bugs or even be completely broken, so it's essential to check frequently whether a new application version is available on the server.

In our case, the previous and the new versions of the ngsw.json file will be compared, and the new CSS bundle will be downloaded and installed in the background.

The next time the user reloads the page, the new application version will be shown!

Informing the user that a new version is available

For long-running SPA applications that the user might have opened for hours, we might want to check periodically to see if there is a new version of the application on the server and install it in the background.

In order to check if a new version is available, we can use the SwUpdate service and its checkForUpdate() method.

But in general, calling checkForUpdate() manually is not necessary because the Angular Service Worker will look for a new version of ngsw.json on each full application reload, for consistency with the standard Service Worker Lifecycle (see details here).

What we can do is ask to be notified when a new version is available, by using the available Observable of SwUpdate, and then ask the user via a dialog whether they want the new version:
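The embedded snippet is not included in this extract; a hedged sketch of the pattern, using the available Observable as it existed in the Angular version of this post (the service class name is an assumption):

```typescript
import { Injectable } from '@angular/core';
import { SwUpdate } from '@angular/service-worker';

@Injectable()
export class UpdateNotificationService {
  constructor(updates: SwUpdate) {
    // emits once the new version's files are fully downloaded and installed
    updates.available.subscribe(() => {
      if (confirm('A new version of the application is available. Load it?')) {
        // full reload: the Service Worker will now serve the new version
        document.location.reload();
      }
    });
  }
}
```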

Let's break down what happens with this code when a new application version is deployed on the server:

new files are now available on the server, for example new CSS or Js bundles

there is a new ngsw.json file on the server, that contains the information about the new application version: which files to load, when to load them, etc.

But when the user reloads the application, the user will still see the old application version!

This is normal because the user still has a Service Worker running in the browser, that is still serving all files from Cache Storage, and bypassing the network altogether.

However, the Angular Service Worker will also do a call to the server to see if there is a new ngsw.json, and trigger the loading of any new files mentioned in the ngsw.json file in the background.

Once all the files for the new application version are loaded, the Angular Service Worker will emit the available event, meaning that a new version of the application is available. The user will then see the following:

If the user clicks OK, the full application will then be reloaded and the new version will be shown. Note that if we had not shown this dialog to the user, the user would still see the new version on the next reload anyway.

Angular Service Worker Version Management Summary

To summarize, here is how new application versions are managed by the Angular Service Worker:

a new build occurs, and a new ngsw.json is available

first application reload after the new version is deployed - the Angular Service Worker detects the new ngsw.json and loads any new files in the background

second reload after new version deployment - the user sees the new version

this behavior will work consistently, independently of how many tabs the user has opened (unlike what happens with the standard Service Worker Lifecycle)

And with this in place, we have a downloadable and installable Angular PWA Application, with built-in version management!

The last piece that we are now missing for a complete one-click installation experience is to ask the user to install the application to their Home Screen.

Step 7 of 7 - One-Click Install with the App Manifest

Let's now make our application one-click installable, and note that this is optional, meaning that we can use the Angular Service Worker without an App Manifest.

On the other hand, in order for the App Manifest to work, we need a Service Worker running on the page! By providing a standard manifest.json file, the user will be asked to install the application to the Home Screen.

When will the Install to Home Screen button be shown to the user?

There are a couple of conditions for this to work, one of them being that the application needs to run over HTTPS and have a Service Worker.

Also, the option for installing to Home Screen will only be shown if certain extra conditions are met.

There is a constantly evolving heuristic that determines whether the "Install To Home Screen" button will be shown to the user; it typically takes into account the number of times the user has visited the site, how often, etc.

A Sample App Manifest file

In order to make this functionality work, we first need to create a manifest.json file, and we are going to place it in the root of our application, next to our index.html:

This file defines what the icon installed on the Home screen will look like, and it also defines a couple of other native UI parameters.
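The manifest shown in the original post is not included in this extract; a hedged sample with the standard fields (names, colors and icon paths are illustrative):

```json
{
  "name": "Angular PWA App",
  "short_name": "PWA App",
  "start_url": "index.html",
  "display": "standalone",
  "theme_color": "#1976d2",
  "background_color": "#ffffff",
  "icons": [
    { "src": "assets/icons/icon-192x192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "assets/icons/icon-512x512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```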

Linking to the App Manifest on page load

Once we have a manifest.json, we need to link to it in the index.html of our application using a link tag in the page header:
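The link tag itself is not shown in this extract; it is the standard one:

```html
<link rel="manifest" href="manifest.json">
```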

Setting up the CLI for including the App Manifest

In order to have the App Manifest file in our production build, we are going to configure the CLI to copy this file to the dist folder, together with the complete assets folder.

We can configure that in file angular.json:
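The configuration sample is not included in this extract; a sketch of the relevant assets entry (the exact paths depend on the CLI version and project layout):

```json
"assets": [
  "src/favicon.ico",
  "src/assets",
  "src/manifest.json"
]
```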

With this in place, we now have a manifest.json file in production. But if we now reload the application, most likely nothing will happen!

Triggering Install to Home Screen

What we mean by that is that most likely no "Install To Home Screen" button will be shown to the user, because the heuristic for displaying this button has not been met yet.

But we can trigger the button with the Chrome Dev Tools, in the Manifest tab using the Add To Home Screen option:

As we can see, on a Mac the look and feel of the button is still in early stages, but this is what the button should look like on mobile:

And with this in place, we now have a complete one-click download and installation experience for our application.

Summary

Native-like Application Download and Installation is now simpler than ever with the Angular Service Worker and the Angular CLI!

The performance benefits that this feature brings to any application (desktop included) are huge and can be added incrementally to a standard application in a progressive way.

Any application (mobile or not) can benefit from a much faster startup time, and this feature can be made to work out of the box with some intelligent defaults provided by the Angular CLI.

I hope that this post helps with getting started with the Angular Service Worker and that you enjoyed it! If you want to learn more about other Angular PWA features, have a look at the other posts of the complete series:

]]>https://blog.angular-university.io/angular-app-shell/933d0269-5545-4db8-b8cf-99776b74ae51Thu, 07 Dec 2017 10:29:18 GMTOne of the things that most impacts User Experience (especially on mobile) is the application startup experience, and perceived performance. In fact, studies have shown that 53% of mobile users abandon sites that take longer than 3 seconds to load!

And this is true for all applications in general, not only mobile applications. Any application can benefit from a much better startup experience, especially if we can get that working out of the box.

One of the things that we can do to improve the user experience is to show something to the user as quickly as possible, reducing the time to first paint.

And the best way to get that much-improved user experience and show something quickly to the user is to use an App Shell!

What is an App Shell?

To boost perceived startup performance, we want to show to the user the above the fold content as quickly as possible, and this usually means showing a menu navigation bar, the overall skeleton of the page, a loading indicator and other page-specific elements.

To do that, we will include the HTML and CSS for those elements directly in the initial HTTP response that we get back from the server when we are loading the index.html of our Single Page Application.

That combination of a limited number of above the fold plain HTML and styles which is displayed to the user as fast as possible is known as the Application Shell.

And in this post, we will learn all about how to add an App Shell to an Angular Application, using the Angular CLI!

Note: The App Shell functionality is available independently of the use of a Service Worker, and we don't need to use a server-side rendered Angular Universal application in production to benefit from the App Shell

Table Of Contents

Here is what we will do in this post: we are going to scaffold an Angular application from scratch from an empty folder, and we will add it an Application Shell that will be automatically generated at build time using the Angular CLI.

We will understand what is going on and how the whole App Shell solution works under the hood. We are going to do this in the following steps:

Step 1 - Scaffolding an Angular PWA Application with the Angular CLI

Step 2 - Checking the index.html before including an App Shell

Step 3 - Profiling Application Startup Before an App Shell

Step 4 - Scaffolding an Angular Universal Application

Step 5 - Adding the App Shell using the Angular CLI

Step 6 - Generating the App Shell in Production Mode

Step 7 - Measuring the App Shell performance improvements

This post is part of the ongoing Angular PWA Series, here is the complete series:

With a couple of commands, the CLI will give us a working application with an App Shell. The first step to create an Angular application is to upgrade the Angular CLI to the latest version:

npm install -g @angular/cli@latest

If you want to try the latest features, it's also possible to get the next upcoming version, not yet released:

npm install -g @angular/cli@next

And with this in place, we can now scaffold an Angular application. It's essential for the App Shell to work to have the Angular Router set up, and we will understand why in a moment.

We can include the router in the new application using the following command:

ng new my-app-shell --routing

And this will create a new folder named my-app-shell with a new Angular application which includes the Router already set up.

Step 2 of 7 - Checking the index.html before including an App Shell

In order to understand what problem the App Shell is solving, let's have a look at how the application works before including an App Shell.

Let's start by building this initial application in production mode:

ng build --prod

Now we have the production application in the dist folder. If we have a look at the index.html file, here is what we have:
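The generated index.html is not included in this extract; a sketch of what it approximately looks like (the hashed bundle names are illustrative):

```html
<!DOCTYPE html>
<html>
<head>
  <base href="/">
  <!-- the application styles -->
  <link rel="stylesheet" href="styles.3b1a9e.css">
</head>
<body>
  <!-- empty until Angular bootstraps: no static content at all -->
  <app-root></app-root>
  <!-- the Javascript bundles -->
  <script src="runtime.a66f82.js"></script>
  <script src="polyfills.c90d1f.js"></script>
  <script src="main.8e2b44.js"></script>
</body>
</html>
```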

As we can see, this page is a blank page that contains only the following:

the application styles

The Javascript bundles

This means that when this page is first loaded, for a few seconds the user will not see anything. There will be an initial browser paint, but it's not a meaningful paint: the page is empty!

All the content is going to be added to the page via Javascript, everything is dynamic content and there is no static content. Let's confirm this by starting the application and seeing what is going on using the Chrome Dev Tools.

Step 3 of 7 - Profiling Application Startup Before using an App Shell

Let's start the application in production mode:

ng serve --prod

Then we will go to localhost:4200 and measure the page startup performance:

let's open the Dev Tools and select the Performance tab

let's leave the "Screenshots" checkbox checked

in the Performance tab, let's hit the "Start Profiling and Reload Page" button

We stop the recording as soon as we see something on the page

Now let's have a look at the profiling results:

As we can see, the browser is rendering the page at about 1000ms (rendering is shown in purple). There was a first paint attempt at about 600ms, but the problem is that there was no content to be displayed yet, so the page remained blank.

And this is the best case scenario of a Hello World application, as a typical SPA will render the first results later than that!

Let's then see how we can improve this.

How to improve page startup time?

The only way to improve things is to serve some more HTML and CSS in the body of the index.html. This is because in this very early stage of page load Angular is not yet running and in fact, the Angular bundles are still being downloaded!

To do that, we would like to take at least a good part of the HTML and CSS output of the main application component (app.component.ts) and move it to index.html. This should include the main skeleton of the page, including the navigation system.

But if we look into the template of the component, we will see that it has a router outlet in it:
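The template itself is not shown in this extract; a minimal sketch of such a root component template (with illustrative navigation markup) could look like this:

```html
<!-- app.component.html (sketch) -->
<header class="navigation-bar">
  <!-- navigation menu -->
</header>

<router-outlet></router-outlet>
```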

So what we need to do is pre-render this component and get the HTML and CSS output for the App Shell, but we also need to specify what to put in place of the router outlet.

What is the relation between the App Shell and Angular Universal?

We are going to pre-render the main component at build time using Angular Universal, and use the pre-rendering output in our index.html.

But in place of the router outlet, we probably want to put something lighter than the full content of the / home route, because that might include too much HTML and CSS.

Instead, we probably only want to show a loading indicator or a simplified version of the page.

The simplest way to do that is to create an auxiliary route in our application, for example in the path /app-shell-path. Then we need to pre-render the complete content of that route and include it in our index.html, and we have our App Shell!

In order to do pre-rendering in Angular, we will need Angular Universal. Let's then scaffold an Angular Universal application, that contains the same components as our client-side single page application.

Step 4 of 7 - Scaffolding an Angular Universal Application

We can add pre-rendering capabilities to our application, by running the following Angular CLI command:

ng generate universal ngu-app-shell --client-project <project name>

We can find the client project name inside the angular.json CLI configuration file. Let's remember that now a CLI application can contain multiple client projects, so we need to identify the correct one.

As we can see, we have just created a new component called app-shell! This component was then linked to the /app-shell-path route, but only in the Angular Universal application.

This /app-shell-path special route is just an internal Angular CLI mechanism for generating the App Shell, the application users will not be able to navigate to this route. In this case, this route is a build time auxiliary construct only.

Here is the routing configuration that was added only in the app.server.module.ts file (and not on the main app.module.ts):
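The original listing is missing here; below is a sketch of what the server-side routing configuration added by the schematic typically looks like (the exact file layout and import paths depend on the CLI version):

```typescript
// app.server.module.ts (sketch): the app-shell-path route exists only on the server
import { NgModule } from '@angular/core';
import { ServerModule } from '@angular/platform-server';
import { RouterModule } from '@angular/router';

import { AppModule } from './app.module';
import { AppComponent } from './app.component';
import { AppShellComponent } from './app-shell/app-shell.component';

@NgModule({
  imports: [
    AppModule,
    ServerModule,
    // pre-render only route: replaces the router outlet content at build time
    RouterModule.forRoot([
      { path: 'app-shell-path', component: AppShellComponent }
    ]),
  ],
  declarations: [AppShellComponent],
  bootstrap: [AppComponent],
})
export class AppServerModule {}
```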

As we can see, the /app-shell-path route is linked to AppShellComponent, which will be added in place of the router-outlet tag. The AppShellComponent is a normal scaffolded Angular component, just like any component that we obtain using ng generate.

We can edit it to include the content that we would like to display in the body of the App Shell. Here is an example that uses a loading indicator:
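The example template is not included in this extract; a simple version could be (the image path is hypothetical):

```html
<!-- app-shell.component.html (sketch): rendered in place of the router outlet -->
<div class="loading-indicator">
  <img src="assets/loading.gif" alt="Loading...">
</div>
```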

Besides configuring the App shell route and component, we also have some new configuration in the angular.json file:
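The configuration listing is missing here; the generated architect target in angular.json typically looks like this (project and target names are illustrative, following the app-shell-test name used below):

```json
"app-shell": {
  "builder": "@angular-devkit/build-angular:app-shell",
  "options": {
    "browserTarget": "app-shell-test:build",
    "serverTarget": "app-shell-test:server",
    "route": "app-shell-path"
  }
}
```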

As we can see, we have added some configuration to the production build of our Angular application, which says:

Pre-render the route app-shell-path using the Angular Universal application named ngu-app-shell, and use that as the App Shell

So everything is set up and ready to go. Let's then build our application, see it in action and measure the performance improvements.

Step 6 of 7 - Generating the App Shell in Production Mode

Let's now run the App Shell build! Let's say that your project is named app-shell-test, which is the value specified at the top of your angular.json file.

We can now build the App Shell by running the following command:

ng run app-shell-test:app-shell

This time around, the content of index.html generated in the dist folder looks a lot different. Let's have a look:
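The generated file is not shown in this extract; a sketch of the pre-rendered index.html body could look like this (the styles of the App Shell component are inlined in the head as a style tag):

```html
<body>
  <app-root _nghost-c0="">
    <header _ngcontent-c0="" class="navigation-bar">...</header>
    <div _ngcontent-c0="" class="loading-indicator">Loading...</div>
  </app-root>
  <script src="runtime.js"></script>
  <script src="polyfills.js"></script>
  <script src="main.js"></script>
</body>
```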

As we can see, this is no longer a blank page. The styles for the AppShellComponent were added inline in the page (as usual), and the HTML for the navigation menu and the loading indicator is also present on the page.

So what happened here? The Angular CLI took the output of pre-rendering the App Shell route, and added that HTML output inside the index.html file.

So it looks like everything worked, and we have an App Shell ready to use!

Step 7 of 7 - Measure the performance improvements gained by using the App Shell

Let's now run our application in production mode and see the results. We can run a build that is as close to production as possible by running:

ng serve --prod

Alternatively, we can cd into the dist directory and serve the application using a simple HTTP server:

npm install -g http-server
cd dist
http-server -c-1 .

App Shell Performance Results

With the server running, let's head over to localhost:8080 and do some profiling. Let's see how soon the App Shell is visible to the user:

A much improved time to first paint

As we can see, in this particular case the App Shell is visible at around 660ms, which represents a huge improvement to the typical time to first paint of a full SPA, which could be a couple of seconds!

Even in the case of this Hello World example we have a time to first paint that is almost half the initial time, so imagine the gains in a full-blown SPA.

This can be even further improved in several ways:

by using an inlined Base 64 image for the loading indicator instead of an external image, avoiding an extra HTTP request needed to load the image

by moving or even duplicating certain styles from external stylesheets to the App Shell, etc.

Each application needs to be optimized separately depending on how much content we need to show to the user, and the App Shell mechanism gives us the foundation for achieving that super fast perceived startup time that we are looking for.

Summary

The built-in App Shell mechanism in the Angular CLI is a hugely beneficial performance improvement for any application (not only mobile), and it works right out of the box.

From the user perspective, a time to first paint of about half a second just feels almost instantaneous, even though in reality the application is still loading and fetching data from the backend.

The exact time to first paint will depend on each application, and the App Shell feature gives us all the tools needed to get it as low as possible.

Although this App shell mechanism is usually tied to PWAs, a PWA is not necessary to benefit from the App Shell Angular CLI features, as these two progressive improvements are configurable separately.

I hope that this post helps with getting started with the Angular App Shell and that you enjoyed it! If you want to learn more about other Angular PWA features, have a look at the other posts of the complete series:

]]>https://blog.angular-university.io/angular-reactive-templates/537be862-811c-4d67-8ae3-afb2b73c584aWed, 06 Dec 2017 08:00:00 GMTThe ngIf template syntax is very useful in many common use cases, like for example using an else clause when we want to display a loading indicator.

But it turns out that there is more to ngIf than it might seem at first sight: The "ngIf as" syntax combined with the async pipe is also a key part for being able to write our templates in a more reactive style.

There are a lot of benefits of writing our templates this way:

memory leaks are less likely (depending on the type of Observables we use)

more readable code

less potential issues with multiple subscriptions at the level of the service layer

less state in our components

Overall, this is a great way to write more readable templates while avoiding a number of common problems by design.

Table Of Contents

In this post, we will cover the following topics:

we will start by covering the ngIf Else syntax

we will then cover the "ngIf as" syntax

we will try to write a template in a non-reactive style and discuss potential problems we might run into

we will then refactor the template to a more reactive style, using the ngIf else and ngIf as syntaxes and discuss the benefits of this approach

So without further ado, let's get started with our design discussion on Reactive Angular templates!

NgIf Else Example

Let's first have a look at the ngIf else syntax in isolation. Here is an example:
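The original listing is missing in this extract; a minimal example of the syntax being described could be:

```html
<div *ngIf="condition; else loading">
  Condition is true
</div>

<ng-template #loading>
  Loading...
</ng-template>
```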

As we can see, we can specify in ngIf an else clause with the name of a template, that will replace the element where ngIf is applied in case the condition is false.

This would either print "Condition is true" or the content of the loading template to the element annotated with ngIf, depending on the truthiness of the condition.

The ngIf as Syntax

With ngIf, it's also possible to evaluate the truthiness of an expression, and assign the result of the expression (which might not be a boolean) to a variable.

Let's have a look at an example:
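The sample is not included in this extract; based on the surrounding description, it could look like this (course being a plain object property of the component, with a description of "Angular For Beginners"):

```html
<div *ngIf="course as result">
  {{ result.description }}
</div>
```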

The template above would print "Angular For Beginners" because the expression course is truthy. As we can see, course corresponds to a component property which is a plain Javascript object.

But the result of the expression is not a boolean: it's an object, and it gets assigned to a local template variable named result, so the description property gets printed to the screen as expected.

What might not be apparent when first encountering this syntax, is that it makes it much simpler to write our templates in a more Reactive Style.

What is the relation between the ngIf as syntax and Reactive Programming?

In order to understand how the ngIf as syntax is an important part of the Angular built-in reactive programming support, we will need a more concrete example where we will load some data from the backend.

We will then try to display the data on the screen using the async pipe. Imagine that we have a CourseService, that brings some data asynchronously from the backend, using, for example, the Angular HTTP module:
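The service listing is not included here; a sketch of such a service, assuming Angular's HttpClient and a hypothetical backend endpoint, could be:

```typescript
// course.service.ts (sketch)
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';

@Injectable()
export class CourseService {

  constructor(private http: HttpClient) {}

  // loads a single course asynchronously from the backend
  findCourseById(courseId: number): Observable<Course> {
    return this.http.get<Course>(`/api/courses/${courseId}`);
  }
}
```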

Let's also have a look at what the Course custom type looks like:
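The type definition is missing in this extract; matching the description below (two mandatory properties, three optional ones marked with a question mark), it could be sketched as follows. The optional property names are illustrative:

```typescript
// course.ts (sketch)
export interface Course {
  id: number;
  shortDescription: string;
  description?: string;
  longDescription?: string;
  iconUrl?: string;
}
```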

As we can see, it's made up of two mandatory properties, id and shortDescription, and three more optional properties that are annotated with a question mark.

Let's say that now our application loads this data from the backend and we want to display it on the screen (all the course properties). Here is what our component would look like:
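The component listing is missing here; based on the breakdown that follows, it could be sketched like this (selector and course id are illustrative):

```typescript
// course.component.ts (sketch)
import { Component, OnInit } from '@angular/core';
import { Observable } from 'rxjs/Observable';

@Component({
  selector: 'course',
  templateUrl: './course.component.html'
})
export class CourseComponent implements OnInit {

  // initially undefined, assigned on initialization
  courseObs: Observable<Course>;

  constructor(private courseService: CourseService) {}

  ngOnInit() {
    this.courseObs = this.courseService.findCourseById(1);
  }
}
```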

So what is happening here? Let's break it down:

we are defining an Observable member variable named courseObs, that is initially undefined

we are initializing courseObs with an Observable that is returned by the service layer

we don't know when the Observable will return or what it will return, other than it should be a Course instance

So here is how we can display this data on the screen: we will use the Angular async pipe.

Why use the async pipe ?

Because it automatically subscribes and unsubscribes from Observables as the component gets instantiated or destroyed, which is a great feature.

This is especially important in the case of long-lived observables like for example certain Observables returned by the router or by AngularFire.

Also because it makes our programs easier to read and more declarative, with fewer state variables in our component classes.

For example, notice that the component above only has an observable member variable and that we don't have direct access to the data at the level of the component itself - only the template accesses the data directly.

So let's then use the async pipe to print the course details to the screen:
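The template is not included in this extract; a sketch of the inconvenient version being described, with one async pipe per property, could be:

```html
<!-- each async pipe creates a separate subscription -->
<div class="course">
  <span>{{ (courseObs | async)?.id }}</span>
  <span>{{ (courseObs | async)?.shortDescription }}</span>
  <span>{{ (courseObs | async)?.description }}</span>
</div>
```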

As we can see, this ends up not being very convenient, because we need to use the async pipe multiple times. Actually, this would cause multiple subscriptions (meaning multiple HTTP requests), which could lead to other problems, and it's also not as readable.

So in practice this is what we often ended up doing instead to avoid these issues:
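The listing is missing here; the manual-subscription version described below could be sketched as:

```typescript
// sketch: subscribing manually and storing the result in a local variable
export class CourseComponent implements OnInit {

  course: Course;

  constructor(private courseService: CourseService) {}

  ngOnInit() {
    this.courseService.findCourseById(1)
      .subscribe(course => this.course = course);
  }
}
```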

As we can see, we have subscribed to the observable returned by the service layer and defined a local variable that contains the result of the backend call. Now our template is a lot simpler: we simply have a variable named course that we can use to access the data.

But there could be a couple of potential problems with this, especially in a larger application.

What are some potential problems with this approach?

This approach works great, but one potential issue is that now we have to manually manage the subscriptions of this Observable.

In the case of HTTP observables this does not have an impact because those Observables emit only once and then they complete, but in the case of long-lived Observables this could potentially cause an issue.

Another potential issue with this approach

Also, we have defined a local variable here to pass data to the template, which ends up making our program more imperative compared to the more reactive approach of simply defining an Observable, passing it to the template and declaring on the template how to use it.

Because we now have state stored in this variable at the level of the component, we might be tempted to write further code that mutates that state.

Here in this small example, it would not cause an issue, but in a larger application, this could potentially cause some maintainability problems.

By using the async pipe we were looking to write a program in a more reactive style, where those couple of potential problems are avoided by design.

But instead, we ended up using manual subscriptions as we could not find a practical way of using the async pipe.

Another alternative - refactor into a smaller component

Let's see if there is a better way: can we avoid using the async pipe multiple times?

We could, for example, move the course detail implementation to a new component. This would allow us to keep using the async pipe, and avoid the manual subscription at the level of the component:

And this is what the course-detail component would look like:
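The component listing is missing in this extract; a sketch of such a child component, receiving the course synchronously as an input, could be:

```typescript
// course-detail.component.ts (sketch)
import { Component, Input } from '@angular/core';

@Component({
  selector: 'course-detail',
  templateUrl: './course-detail.component.html'
})
export class CourseDetailComponent {
  @Input() course: Course;
}
```

The parent could then use the async pipe only once, for example:

```html
<course-detail [course]="courseObs | async"></course-detail>
```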

This is a good alternative, because we might end up using this component in other places of the application.

But we ended up needing to create a separate component mostly to avoid having to use the async pipe multiple times in the parent template.

It turns out that there is a much better solution, where we will leverage some of the features of ngIf! Let's have a look.

Reactive Style Templates - The ngIf as syntax and the async pipe

If we combine all the available features of ngIf with the async pipe, we can now come up with the following solution:
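The listing is missing in this extract; following the breakdown below, the combined solution could be sketched as:

```html
<ng-container *ngIf="courseObs | async as course; else loading">
  <div class="course">
    <span>{{ course.id }}</span>
    <span>{{ course.shortDescription }}</span>
  </div>
</ng-container>

<ng-template #loading>
  Loading...
</ng-template>
```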

So what is going on in this example? Let's break it down:

the async pipe is being used to subscribe only once to courseObs

the else clause is defining what to display while the data is not available (the ng-template named loading)

the 'as' syntax is aliasing the result of the expression courseObs | async as a template variable named course, which corresponds to the course object emitted by the course observable

now there is a local course variable available inside the ngIf section, that corresponds to the value emitted by the backend call

This course variable is ready to be used, just as if the course object had been passed synchronously as an @Input() to this component

Advantages of this more reactive approach

Here are the advantages of writing our templates in this more reactive style:

There are no manual subscriptions at the component level for observables coming out of the service layer

we don't have to create smaller components to be able to use the async pipe only once and prevent multiple subscriptions

no local data state variables are defined at the level of the component, so it's less likely that we run into issues caused by mutating local component state

we now have a more declarative code: both the component and the template are very declarative, we are simply plugging in together streams of data instead of storing local variables and passing them to the template

We end up with a very readable template, less code and fewer things that can go wrong, like memory leaks on long-running streams, because the async pipe will take care of unsubscribing transparently!

Also, there was no need to repeat the async pipe multiple times in the template, or to create an intermediate component just to avoid that.

Conclusions

As we could see in this example, the ngIf / else features, although useful in many other cases, are especially useful when combined with the async pipe for simplifying the development of Angular applications in a more reactive style.

I hope you enjoyed the post, I invite you to have a look at the list below for other similar posts and resources on Angular.

I invite you to subscribe to our newsletter to get notified when more posts like this come out:

Video Lessons Available on YouTube

Have a look at the Angular University Youtube channel, we publish about 25% to a third of our video tutorials there, new videos are published all the time.

]]>https://blog.angular-university.io/angular-host-context/7df63ea3-9e50-4f99-b716-b2a921124891Wed, 29 Nov 2017 09:17:56 GMTIn this post, we will learn how the default Angular styling mechanism (Emulated Encapsulation) works under the hood, and we will also cover the Sass support of the Angular CLI, and some best practices for how to leverage the many Sass features available.

We will talk about when to use each feature and why, talk about the benefits of component style isolation and also cover how to debug our styles if something is not working.

This is the second post of a two-part series in Angular Component Styling, if you are looking to learn about ngClass and ngStyle, have a look at part one:

In order to cover each feature, we will be adding multiple examples to this small Angular CLI sample application, which uses a default Bootstrap theme as its external styles.

Why Style Isolation?

So without further ado, let's get started with our Angular Style Isolation deep dive. The first question that comes to mind is, why would we want to isolate the styles of our components? There are a couple of reasons for that, and one key reason is CSS maintainability.

As we develop a component style suite for an application, we tend to run into situations where the styles from one feature start interfering with the styles of another feature.

This is because browsers do not yet have widespread support for style isolation, where we can constrain one style to be applied only to one particular part of the page.

If we do not carefully and systematically organize our styles to prevent this issue (for example using a methodology like SMACSS), we will quickly run into CSS maintenance issues.

Wouldn't it be great to be able to style our components with just short, simple and easy to read selectors, without having to worry about all the scenarios where those styles could be accidentally overridden?

Another benefit of style isolation

Here is another scenario: how many times did we try to use a third-party component, add it to our application just to find out that the component is completely broken due to styling issues?

Style isolation would allow us to ship our components knowing that the styles of the component will (most likely) not be overridden by other styles in target applications.

This makes the component effectively much more reusable, because in most cases the component will now simply work, styles included.

Angular View Encapsulation brings us all of these advantages, so let's learn how it works!

A Demo of Angular Emulated Encapsulation

In this section, we will see how Angular component styling works under the hood, as this is the best way to understand it. This will also allow us to debug the mechanism if needed.

This is a video demonstration of the default mechanism in action:

In order to benefit from the default view encapsulation mechanism of Angular, all we need to do is to add our CSS classes to an external CSS file:
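The stylesheet is not shown in this extract; a sketch of such an external CSS file could be (the class name is illustrative):

```css
/* app.component.css (sketch) */
.red-button {
  background: red;
}
```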

But then, instead of adding this file to our index.html as a link tag, we will instead associate it with our component using the styleUrls property:
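A sketch of the component declaration using styleUrls could look like this:

```typescript
@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {}
```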

The color red would then be applied to this button, as expected. But what if now we have another button, for example directly at the level of our index.html?

If you didn't know that there was some sort of style isolation mechanism in action, you might be surprised to find out that this button does NOT get a red background!

So what is going on here? Let's see how this mechanism works, because knowing that is what is going to allow us to debug it if needed.

How does Angular Style Isolation work? Emulated View Encapsulation

To better understand how default view encapsulation works, let's see what the app-root custom HTML element will look like at runtime:
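The runtime HTML is not included in this extract; based on the description below, it could look something like this (the element content is illustrative):

```html
<!-- sketch of the runtime DOM -->
<app-root _nghost-c0="">
  <h2 _ngcontent-c0="">Styling demo</h2>
  <button _ngcontent-c0="" class="red-button">Click me</button>
</app-root>
```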

Several things are going on at the level of the runtime HTML:

a strange looking property was added to the app-root custom element: the _nghost-c0 property

Each of the HTML elements inside the application root component got another strange looking but different property: _ngcontent-c0

What are these properties?

So how do these properties work? To better understand these properties and how they enable style isolation, we are going to create a second separate component, that just contains a button with the blue color.

For simplicity, we will define the styles for this component inline next to the template:
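The component listing is missing here; a sketch of the blue-button component with inline styles could be:

```typescript
@Component({
  selector: 'blue-button',
  template: `
    <h2>Blue button component</h2>
    <button class="blue-button">Click me</button>
  `,
  styles: [`
    .blue-button {
      background: blue;
    }
  `]
})
export class BlueButtonComponent {}
```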

And using this newly defined component, we are going to add it to the template of the application root component:

Try to guess at this stage what the HTML at runtime would look like, and what happened to those strangely named properties!

The host element and template element style isolation properties

With this second component in place, let's have a second look at the HTML. The way that these two properties work will now become much more apparent:
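The runtime HTML is not shown in this extract; a sketch matching the description below could be:

```html
<app-root _nghost-c0="">
  <button _ngcontent-c0="" class="red-button">Click me</button>
  <blue-button _nghost-c1="" _ngcontent-c0="">
    <h2 _ngcontent-c1="">Blue button component</h2>
    <button _ngcontent-c1="" class="blue-button">Click me</button>
  </blue-button>
</app-root>
```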

Notice the blue-button element: we now have a new host property called _nghost-c1.

The blue-button element is still tagged with the _ngcontent-c0 property which is applied to all template elements on the application root component.

But the elements inside the blue-button template get applied the _ngcontent-c1 property instead!

Summary of how the host and template element properties work

Let's then summarize how these special HTML properties work, and then see how they enable style isolation:

upon application startup (or at build-time with AOT), each component will have a unique property attached to the host element, depending on the order in which the components are processed: _nghost-c0, _nghost-c1, etc.

together with that, each element inside each component template will also get applied a property that is unique to that particular component: _ngcontent-c0, _ngcontent-c1, etc.

This is all transparent and done under the hood for us.

How do these properties enable view encapsulation?

The presence of these properties could allow us to manually write CSS styles that are much more targeted than the simple styles that we have in our template.

For example, if we want to scope the blue color to the blue-button component only, we could manually write the following style:
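The listing is missing in this extract; based on the comparison below, the two styles could be sketched as:

```css
/* style 1: applies to any element with the class, anywhere on the page */
.blue-button {
  background: blue;
}

/* style 2: applies only to elements carrying the blue-button template property */
.blue-button[_ngcontent-c1] {
  background: blue;
}
```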

While style 1 was applicable to any element with the blue-button class anywhere on the page, style 2 will only work for elements that have that strangely named property!

So this means that style 2 is effectively scoped to only elements of the blue-button component template, and will not affect any other elements of the page.

So we now can see how those two special properties do enable some sort of style isolation, but it would be cumbersome to have to use those properties manually in our CSS (and in fact, we should not).

But luckily, we don't have to. Angular will do that automatically for us.

How does Angular encapsulate styles?

At startup time (or at build time if using AOT), Angular will see what styles are associated with which components, via the styles or styleUrls component properties.

Angular will then take those styles, transparently apply the corresponding isolating properties to them, and inject the styles directly into the page head as a style tag:
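A sketch of the style tag that gets injected could be:

```html
<style>
  .blue-button[_ngcontent-c1] {
    background: blue;
  }
</style>
```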

The _ngcontent-c1 property is unique to elements of the blue-button template, so the style will be scoped to those elements only.

And that is how the Angular default view encapsulation mechanism works!

This mechanism is not 100% bullet-proof as it does not guarantee perfect isolation, but in practice, it will nearly always work.

The mechanism is not based on the Shadow DOM but instead on these special HTML properties, so if we really wanted to we could still override these styles.

But given that the native Shadow DOM isolation mechanism is currently available only in Chrome and Opera, we cannot yet rely on it.

This mechanism is very useful because it enables us to write simple styles that will not break easily, but we might want to break this isolation selectively from time to time.

Let's learn a couple of ways of doing that, and why we would want to do that.

The :host pseudo-class selector

Sometimes we want to style the component custom HTML element itself, and not something inside its template.

Let's say for example that we want to style the app-root component itself, by adding, for example, an extra border to it.

We cannot do that using styles inside its app.component.css associated file, right?

This is because all styles inside that file will be scoped to elements of the template, and not the outer app-root element itself.

If we want to style the host element of the component itself, we need the special :host pseudo-class selector. This is the new version of our app.component.css that uses it:
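The listing is missing here; a sketch of such a style, with an illustrative border value, could be:

```css
:host {
  border: 2px solid dimgray;
}
```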

This selector will ensure those styles are only targeting the app-root element. Remember that _nghost-c0 property that we talked about before? This is how it's used to implement the :host selector at runtime:
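The compiled style is not shown in this extract; at runtime it could look like this:

```css
[_nghost-c0] {
  border: 2px solid dimgray;
}
```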

The use of the special _nghost-c0 property will ensure that those styles are scoped only to the app-root element, because app-root gets that property added at runtime:

If you would like to see a visual demonstration of the :host pseudo-class selector in action, have a look at this video:

Combining the :host selector with other selectors

Notice that we can combine this selector with other selectors, which is something that we have not yet talked about.

This is not specific to this selector, but have a look for example at this selector, where we are styling h2 elements inside the host element:
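The selector being discussed is missing from this extract; it could be sketched as:

```css
:host h2 {
  color: red;
}
```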

You could be surprised to find out that this style only applies to the h2 elements inside the app-root template, but not to the h2 inside the blue-button component.

To see why, let's have a look at the styles that were generated at runtime:
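The generated style is not shown in this extract; at runtime it could look like this, with the scoping property applied to the nested selector as well:

```css
[_nghost-c0] h2[_ngcontent-c0] {
  color: red;
}
```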

So we can see that the special scoping property gets applied also to nested selectors, to ensure the style is always scoped to that particular template.

But if we did want to override the styles of all the h2 elements, there is still a way.

The ::ng-deep pseudo-class selector

If we want our component styles to cascade to all child elements of a component, but not to any other element on the page, we can currently do so by combining the :host and ::ng-deep selectors:
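The listing is missing in this extract; the combined selector could be sketched as:

```css
:host ::ng-deep h2 {
  color: red;
}
```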

This will generate at runtime a style that looks like this:
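The generated style is not shown here; it could look like this, with the descendant part left unscoped:

```css
[_nghost-c0] h2 {
  color: red;
}
```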

So this style will be applied to all h2 elements inside app-root, but not outside of it as expected.

This combination of selectors is useful for example for applying styles to elements that were passed to the template using ng-content, have a look at this post for more details.

::ng-deep, /deep/ and >>> deprecation

The ::ng-deep pseudo-class selector also has a couple of aliases: >>> and /deep/, and all three are soon to be removed.

The main reason for that is that this mechanism for piercing the style isolation sandbox around a component can potentially encourage bad styling practices.

The situation is still evolving, but right now, ::ng-deep can be used if needed for certain use cases.

The :host-context pseudo-class selector

Sometimes, we also want to have a component apply a style to some element outside of it. This does not happen often, but one possible common use case is for theme enabling classes.

For example, let's say that we would like to ship a component with multiple alternative themes. Each theme can be enabled via adding a CSS class to a parent element of the component.

Here is how we could implement this use case using the :host-context selector:
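The listing is missing in this extract; a sketch of such themed styles, with hypothetical theme class names, could be:

```css
/* active only when a parent element has the blue-theme class */
:host-context(.blue-theme) .btn-theme {
  background: blue;
}

:host-context(.red-theme) .btn-theme {
  background: red;
}
```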

These themed styles are deactivated by default. In order to activate a theme, we need to add one of the theme-activating classes to a parent element of this component.

For example, this is how we would activate the blue theme:
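The snippet is not included in this extract; with a hypothetical themed-button component, it could look like this:

```html
<div class="blue-theme">
  <themed-button></themed-button>
</div>
```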

Have a look at this video to see a visual demo of the host-context selector in action:

All of this functionality that we saw so far was using plain CSS.

But especially in the case of themes, it would be great to be able to extend the CSS language and for example define the primary color of a theme in a variable, to avoid repetition like we would do in Javascript.

That is one of the many use cases that we can support using a CSS preprocessor.

Angular CLI - Sass, Less and Stylus support

A CSS pre-processor is a program that takes an extended version of CSS, and compiles it down to plain CSS.

The Angular CLI supports all major pre-processors, but the one that seems most commonly used in Angular related projects (such as for example Angular Material) is Sass.

In order to use a Sass file instead of a CSS file, we just need to pass such file to the styleUrls property of a component:

The CLI will then take this Sass file and convert it to plain CSS on the fly. We can also scaffold a new project that uses Sass files by default with this command:

ng new cli-test-project --style=sass

We can also set a global property, so that Sass files are used by default:

ng set defaults.styleExt scss

Demo of some of the things we can do with Sass

A pre-processor is a great thing to add to our project, to help us write more maintainable styles. Let's have a look at some of the things that we can do with Sass:
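The original sample is missing from this extract; below is a sketch arranged so that the line references in the explanation (lines 2, 6, 9, 10 and 11) line up. The selectors and color values are illustrative:

```scss
// sketch of a Sass stylesheet
$primary-color: #2196f3;

.course {
  border: 1px solid gray;
  color: $primary-color;

  .course-title {
    &:hover {
      color: $primary-color;
      border-color: $primary-color;
    }
  }
}
```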

If you have never seen this syntax before, it could look a bit surprising! But here is what is going on, line by line:

on line 2, we have actually defined a CSS variable! This is a huge feature missing from CSS

we can define not only colors, but also numbers or even shorthand combined properties such as: $my-border: 1px solid red

on lines 6, 10 and 11 we are using the variable that we just created

on line 9 we are using a nested style, and making a reference to the parent style using the & syntax

And this is just a small sample of what we can do with Sass, covering a few very commonly used features. The Angular CLI also has support for global styles, which we can combine with our component-specific view encapsulated styles.

We can add global styles not only for the supported pre-processors, but for plain CSS as well.

Summary

There are a ton of options to style our components, so it's important to know which one to use when and why. Here is a short summary:

sometimes we want global styles, that are applied to all elements of a page - we can add those to our angular-cli.json config

the Angular View encapsulation mechanism allows us to write simpler styles, that are simpler to read and won't interfere with other styles

The default view encapsulation mechanism will bring the long-term benefit of having much less CSS-related bugs. Using it, we will rarely fall into the situation when we add some CSS that fixes one screen but accidentally break something else - this is a huge plus!

If we are writing a lot of CSS in our project, we probably want to adopt a methodology for structuring our styles from the beginning, such as for example SMACSS

At a given point we could consider introducing a pre-processor and use some of its features, for example for defining CSS variables

I hope that this post helps in choosing how to style your components, if you have some questions please let me know in the comments below and I will get back to you.

To get notified when more posts like this come out, I invite you to subscribe to our newsletter:

https://blog.angular-university.io/service-workers/ (Mon, 20 Nov 2017 08:10:00 GMT)

In this post, we are going to do a practical guided Tour of Service Workers, by focusing on one of its most important use cases: Application Download and Installation (including application versioning).

As a learning exercise, I invite you to code along, and turn your application into a PWA by making it downloadable and installable! We will be doing the same to a sample application, available in this repository.

If you have tried to learn Service Workers before, you might have noticed that many of the features of Service Workers and the Service Worker Lifecycle can, at first sight, seem a bit surprising.

Why would we need a separate daemon instance to intercept the HTTP requests of our own application, one in which we can't do long-running calculations or access the DOM?

And yet Service workers are the cornerstone of a Progressive Web App, they are the key component that binds all other PWA APIs together and enable the support of native-like capabilities such as:

Let's then get started with our Service Worker Fundamentals deep dive!

What is a Service Worker?

A Service Worker is like a background daemon process that sits between our web application and the network, intercepting all HTTP requests made by the application.

The Service Worker does not have direct access to the DOM. In fact, the same Service Worker instance is shared across multiple tabs of the same application and can intercept the requests from all those tabs.

Note that for security reasons the Service Worker cannot see requests made by other web applications running in the same browser, and only works over HTTPS (except on localhost, for development purposes).

In summary: a Service Worker is a network proxy, running inside the browser itself!

Service Workers Overview

The code for the Service Worker is periodically downloaded from our website and there is a whole lifecycle management process in place.

It's the browser that decides at any given moment whether the Service Worker should be running; this is done to spare resources, especially on mobile.

So if we are not doing any HTTP requests for a while or not getting any notifications, it's possible that the browser will shut down the Service Worker.

If we do trigger an HTTP request that should be handled by the Service Worker, the browser will activate it again, in case it was not yet running. So seeing the Service Worker stopped in the Dev Tools does not necessarily mean that something is broken.

The Service Worker can intercept HTTP requests made by all the browser tabs that we have opened for a given domain and URL path (that path is called the Service Worker scope).

On the other hand, it cannot access the DOM of any of those browser tabs, but it can access browser APIs such as for example the Cache Storage API.

Service Worker Use Case: Application Download, Installation, and Versioning

You might be thinking at this point, what does network proxying have to do with application download and installation, and offline support?

The Service Worker is a network proxy with an installation lifecycle, but it's up to us to use it to implement native-like PWA capabilities: the Service Worker by itself does not provide those features.

So let's see how we can design a solution based on the Service Worker that implements the background download and install use case.

Download and Installation Design Breakdown

Here is a summary of the design that we are about to implement:

we are going to download the Service Worker script from the server

we are going to make sure that the browser installs and activates the service worker in the background as late as possible in the application bootstrap time, in order not to disrupt the initial user experience

in the background, the service worker is going to download the whole web application (meaning the HTML, CSS and Javascript), version it and keep it for later

only the next time the user comes to the site, the service worker is going to kick in (more on this later)

this second time the user visits the site, the application will NOT be downloading the HTML, CSS and Javascript from the network - the Service Worker will serve the cached files that it had kept for later

This second time, the application startup will be much faster

The user will at least have a working application, even if the network is down

And this is how having a network proxy in the browser allows us to have installable web applications! This is all 100% compatible with the back and refresh buttons.

Let's then start implementing this design: first we need a sample application.

Step 1 - Service Worker Registration

Our starting point is a plain HTML, CSS and Javascript Bootstrap page that uses some very common CSS and Javascript bundles.

We will turn this simple page into a background downloadable and installable PWA, and the same design applies to a single page application: after all, it's just HTML, CSS and Javascript!

Reminder: The code for the sample application is available here in Github

The first step to turn this standard website into a downloadable PWA is to add a Service Worker via a registration script:

Notice the script sw-register.js, which is going to trigger the installation of our network proxy, the Service Worker. Let's then have a look at this registration script:
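The original registration snippet is not reproduced here, so here is a minimal sketch of what sw-register.js could look like. Passing the navigator and window globals in as the nav and win parameters is a choice of this sketch (made so the logic can be exercised outside a browser), and the log messages are illustrative:

```javascript
// sw-register.js -- hypothetical registration script (names are assumptions)
function registerServiceWorker(nav, win) {
  // 1. feature-detect Service Worker support in this browser
  if (!nav || !('serviceWorker' in nav)) {
    // no support: fall back to a normal web application, everything still works
    return false;
  }
  // 2. delay registration until the page load event, so the Service Worker
  //    download does not compete with the requests that render the page
  win.addEventListener('load', () => {
    nav.serviceWorker
      .register('/sw.js', { scope: '/' })
      .then(reg => console.log('Service Worker registration completed, scope:', reg.scope))
      .catch(err => console.error('Service Worker registration failed', err));
  });
  return true;
}

// In the browser we would simply call: registerServiceWorker(navigator, window);
```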

Let's break the registration process down line by line, and see what it means:

first, we are checking if the browser supports Service Workers, by looking for the serviceWorker property in the global navigator object

if the browser does not support SWs, then everything will still work, it's just that no installation will happen in the background, so we fall back to a normal web application scenario

When should a Service Worker be registered?

Even if we detect that the browser does support Service Workers, we are still not going to register a SW immediately! In this case, we are waiting for the page load event.

The load event is only triggered when the whole page is loaded, including its linked resources like images, CSS and Javascript and that can take a long time.

Why delay the registration of the Service Worker?

There are a couple of reasons why we want to delay the registration of the Service Worker: we want to avoid causing disruption of the initial user experience, as the application loads for the first time.

Browsers only do a limited amount of HTTP requests at the same time, and there is only so much network capacity. The Service Worker might or might not do separate network requests that can interfere with the ones needed to show initial content to the user.

This means that delaying the Service Worker registration prevents the Service Worker from degrading the initial user experience. Instead, the Service Worker will wait for the application to start up and then it will be installed in the background.

Note that in the case of a single page application, we might want to delay the registration even further, and wait beyond the load event.

The key is to understand that in the case of a Service Worker that does download and installation, we want to register it as late as possible, to avoid degrading user experience.

Service Workers and Consistency by Default

Another reason for delaying the registration of this type of Service Worker is to have consistent application behavior. Let's remember that the Service Worker will often serve the whole application itself!

So we want to avoid a situation where:

some of the page CSS and JS resources were served by the Service worker

while others came from the network

If some of the initial requests for a page came from the network, we want to make sure that all the remaining bundles were also loaded from the network as well, for consistency.

Avoiding inconsistent application scenarios

In the case of application download and installation, we want to avoid falling into a situation where we activate a Service Worker in the middle of a page startup.

This is because depending on timing conditions, we might accidentally fall into some hard to reproduce situation where the page is broken due to an unpredictable combination of HTML/CSS/JS artifacts, some coming from the network and the others from some sort of cache that the Service Worker is using.

The next time that we visit this page, the Service Worker will be active, and we will load all resources from the Service Worker instead of the network.

This means that again we will have a consistent set of bundles, all coming from a cache and corresponding to a given version of the application.

What happens at registration time?

In the example above, when the load event triggers we are going to call register() and identify the file sw.js as being a Service Worker script.

The browser is then going to download the sw.js file, and version it by creating a snapshot of all the bytes contained in this file. In the future, even if a single character changes, the browser will consider that there is a completely new version of the Service Worker.

What is the service worker scope property?

The scope property determines what set of HTTP requests can be intercepted by the Service Worker, or not. In this case, the scope is '/', meaning that our Service Worker will be able to intercept all HTTP requests made by this application.

If the scope were instead /api, then the Service Worker would not be able to intercept a request like for example /bundles/app.css, but it would still be able to intercept a REST API request such as /api/courses.
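To make that rule concrete: scope matching is essentially a path-prefix check. The isInScope helper below is hypothetical (it is not part of the Service Worker API), purely to illustrate which requests a given scope covers:

```javascript
// Hypothetical helper illustrating the scope rule: a Service Worker may only
// intercept requests whose path begins with its scope path.
function isInScope(scope, requestPath) {
  return requestPath.startsWith(scope);
}
```

With scope '/', every request of the application is interceptable; with scope '/api', only the REST API calls are.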

Multiple Service Workers in the same page? Service Worker ID

This means that it's possible to have multiple Service Workers running on the same page, but on different scopes!

If a Service Worker had a unique identifier, it would be the combination of the origin domain plus the scope path.

And this is how the browser determines whether two different scripts correspond to two different versions of the same Service Worker (rather than by keeping the same sw.js file name).

If two Service Worker scripts have the same scope path but even a byte of difference, the browser is going to consider them two versions of the same SW and install the latest version in the background.

Can I place the Service Worker in any folder?

The location of the sw.js file is important: if this file were placed in a folder /service-worker/sw.js, it would not be able to intercept requests like /bundles/app.css or /api/courses.

Instead, the maximum scope of HTTP requests that the Service Worker could intercept would be requests starting with /service-worker, the folder where the script lives!

Given this, we could, for example, register different service workers for different scopes: one service worker for all /bundles requests and another for all /api requests.

As we can see, there is a ton of flexibility! Right now, for implementing Download and Installation, we are going to use the root / scope and use only one Service Worker.

Step 2 - Service Worker Hello World

When the browser identifies a new version of the Service Worker for a given scope, it will trigger the install phase, which results in the emission of the install Lifecycle event.

Note that the Service Worker spec does not define what happens exactly in the install phase. That is up to us to implement that, by listening to the install event in sw.js.

After installation comes activation, and then network interception is ready to be used! Let's understand exactly how the installation and activation phases work, based on this Hello World Service Worker sw.js example:

An HTTP Logging Interceptor

This code is actually the implementation of a simple logging HTTP interceptor, and we will evolve it to implement Application Download & Installation!

Right now, let's break down this initial Hello World example, and see what is going on here:

we are using a reference to self: this is the current global context where the code runs, which would, for example, be the window if this code ran at the level of the application

However, in this case, self points to the Service Worker global context

we are subscribing to the install and activate events, and logging their occurrence to the console

each logging statement is prepended with the version of the Service Worker, this will help us understand how multiple versions work

the install and activate steps both pass a Promise to waitUntil(), right now this is just to show how we would do async operations in these phases

if the promise passed to waitUntil() resolves successfully, then the installation/activation phase is completed successfully

if on the other hand, the promise is rejected then the installation/activation phase fails, and the next phase won't be triggered

we have also subscribed to the fetch event. Using it, we are intercepting all the HTTP requests made by the application

The fetch event has a method called respondWith(), which also takes a promise as an argument

The promise we pass it needs to return (when resolved) the response to the HTTP request

Async operations in the install and activate phases

As we can see, like almost all PWA-related APIs, the Service Worker API for these lifecycle phases is Promise-based. During these phases, we can do asynchronous operations like for example fetch resources from the network.

In order to mark a phase as completed, we return a Promise that when resolved will successfully mark the phase as completed. In this case, both the install and activate phases return a Promise that gets successfully resolved, and so the application is now ready to start intercepting network calls.

Using the fetch event to intercept HTTP requests

Let's now have a closer look at the callback of the fetch event, which contains the HTTP logging functionality.

As we can see, this fetch callback is going to return the actual response of the HTTP call using respondWith(), and the response can be calculated asynchronously by passing a Promise to respondWith().

Note: the application code will be unaware of where this response came from: if from the network or from the Service Worker

We can take the response passed to respondWith() from anywhere, for example:

we can forward the call to the network and send back the network response

or we can retrieve the response from Cache Storage

we can even build a Response() object manually

In this case, here is what we are doing:

we are logging the URL of the intercepted request

then we forward the HTTP request to the network using the Fetch API

fetch() will return a Promise, that if resolved will deliver the network response, or fail in case of a fatal network error

note that fetch() will only throw an error if the network is down or some other fatal condition occurs like a DNS error. For example, an HTTP status code of 500 Internal Server Error would not cause the fetch promise to error

then we pass the fetch() promise that will emit the network response to respondWith()

Viewing the Hello World Service Worker in action

This response passed to respondWith() is then going to be passed to the application! As we can see, this Service Worker acts as a logging proxy.

From the point of view of the application, this response served by the Service Worker is indistinguishable from a call made if the Service Worker was not present, the only side effect is the logging in the console.

And here is our Service Worker running in the Chrome Dev Tools (Application Tab):

While coding along with this post, it's better to leave the "Update On Reload" option set to off, in order to better understand the Service Worker Lifecycle

This initial logging example is everything that we will need to understand in detail the Service Worker Lifecycle.

Why isn't the Service Worker immediately active?

You might have noticed one thing: although we are logging the installation and activation events, no HTTP request was logged to the console, which means that the fetch event does not seem to be working!

It's like the fetch logging interceptor is not working, even though the Service Worker is active.

But if we open another tab, or refresh the same tab, here is what we have:

So it looks like the Service Worker started intercepting HTTP requests only after we reloaded the page. That's a bit surprising the first time we see it, but this happens by default to ensure consistency.

The Service Worker Lifecycle, and Consistency by Default

The Service Worker behavior we see here, although surprising at first, is actually a great feature that is very well thought out.

In all these scenarios: initial page load and Service Worker activation, opening a new tab or refreshing the original tab, there is something going on that is common to all scenarios:

Either all the HTTP requests of the page were served by the Service Worker, or none at all! This is what happened here:

the first time we loaded the page, none of the requests were served by the Service Worker

but when the first refresh occurred, or we opened a new tab, all of the requests were served by the Service Worker

And this ensures consistency: one version of the page, one version of the service worker. This avoids a whole class of very hard-to-troubleshoot error scenarios.

How do Service workers interact with Browser tabs?

Let's now simulate some normal user behavior. What happens if we open other browser tabs of the same application?

v1 HTTP call intercepted - getbootstrap.com/dist/css/bootstrap.min.css
... the same HTTP requests, all served by version 1
Service Worker registration completed ...

We are going to see that this page is being served by the exact same SW Version 1! Note that the console logging is shared across tabs, which can be rather surprising.

If you refresh the application a couple of times and then switch back to another tab you are going to see logged HTTP requests that were made in the other tab.

This is actually expected, because we have the same Service Worker intercepting the requests from all tabs.

Service Worker Versioning In Action

To further understand the Service worker Lifecycle, let's now see what happens if we modify something in the Service Worker code. Let's, for example, modify the version number to v2.

Notice that we don't need to change the name of the file sw.js to notify the browser that a new version of the Service worker is available.

The browser will see that both versions are linked to the scope /, and if there is even one character of difference between both versions, the browser will install the new version.

Let's then try to install v2, still with multiple tabs opened. If we change the version number of the SW script to v2 and open another tab, here is what we see in the Dev Tools:

As we can see, the new version of the Service Worker is not immediately applied, it's in some sort of Waiting state!

it looks like version v1 remained active during the whole refresh process, because it kept intercepting HTTP requests

all requests are still being intercepted by v1

Version v2 was Installed in the background, but not Activated!

Version v2 is now in the Waiting state

A couple of important questions come to mind here:

Why is the new version v2 Installed but not Activated?

One reason is that we have multiple tabs opened, and we want to show to the user a consistent experience. It would be confusing for the user to have two tabs opened running different versions of the same application.

And because Service workers intercept and modify HTTP requests, two different versions of the service worker might mean two different versions of the application itself!

So how will the browser handle this new version of the Service worker running on the / scope?

The browser is going to go ahead and perform any Installation operations like download bundles or an offline page in the install phase of v2, but the browser will not Activate v2 as long as there are multiple tabs opened still running v1.

This consistency by default is a key design goal of the Service Worker Lifecycle!

Now, before continuing to explore the Lifecycle, a quick note about browser Hard Refresh and Service Workers.

Service Workers and Hard Refresh

If something is unclear while trying out Service Workers, doing a hard refresh (Ctrl+Shift+R) will not help in the learning process.

This is because if you hit hard-refresh, the whole Service Worker is going to be bypassed, and it won't control the page - This is the standard browser behavior which is unlikely to change.

Ctrl+Shift+R is meant to bypass all network caches, and because the Service Worker is often used for caching, it bypasses it too.

With this important note out of the way, let's continue to dig deeper into how the Service Workers Lifecycle works, and how it enables Application Download and Installation.

Let's understand why, at this stage, with v2 already Installed, v1 is still running and v2 is not yet Active.

Why even with only one tab opened the new SW version will not become active?

We did refresh our single tab running v1, but still, v2 was not activated: v2 was Installed in the background, but not Activated.

This is because, from the point of view of the browser, the current page remains active until the refresh completes, and the page only gets swapped out once we have at least received the response headers from the server.

And because the page was kept during part of the refresh process, the only way to ensure consistency is to keep it active all the way through the whole process.

And because we kept Service Worker v1 active during the refresh, we want by default to keep it running after the refresh completes as well, which explains why v1 is still active after the page refresh is completed.

How to activate the new Service Worker version V2 then?

One way would be to use the skipWaiting option in the DevTools, but let's not do that! Let's instead reproduce the normal user experience: let's close all tabs running service worker v1, and open a new tab.

As we can see, this time the browser activated Service Worker v2 that it had previously installed in the background, and v2 intercepted all the network requests from this page, meaning V2 is now Active!

And with this, we have now a good understanding of the Service Worker lifecycle so let's summarize.

Service Worker Lifecycle Summary

We can see that although a bit tricky at first sight, the way that Service Worker Lifecycle works makes a lot of sense. The Lifecycle is all about:

showing only one version of the application to the user

not disrupting the user experience

not delaying application startup

by default, avoiding version mismatches between the page and the Service Worker

This last point is especially important for the Download & Installation Use Case that we are about to review.

Let's remember, one of the common use cases of Service Workers is to cache the whole application, meaning literally all the HTML, CSS and Javascript!

Where does the Service Worker store those files then?

The Cache Storage API

At installation time, the Service Worker is going to fetch from the network all the bundles that together make a given application version, and then it's going to store them in a browser cache known as Cache Storage.

Like the Service Worker API, Cache Storage is also Promise-based and very easy to use. Let's then take this API and use it to implement the installation phase of the Download and Install use case.

Step 3 - Implementing Background Application Download

The first thing we are going to do is download all the Javascript and CSS files in the background during the Install phase, and add those files directly to Cache Storage:

Again a lot is going on here in this example, so let's break it down step-by-step:

the first thing that we are doing is getting a reference to an open cache, using caches.open(), which returns a Promise

we are appending a version number to the cache name, meaning that as new versions are released, new caches will be created

Then we are doing a series of HTTP requests to fetch all the files that make a given version of the application

We are then adding all those files directly to cache storage

the key of the cache is the Request object used to make the HTTP request

the values stored in the cache are the HTTP Response objects themselves, that we can serve straight to the application

the addAll() call returns a Promise, that will resolve successfully if all the HTTP requests made to load each file work

Inspecting the contents of Cache Storage

In our case, the download of all the files worked, meaning the Install phase ended successfully! So let's now see what we have stored in Cache Storage, using the Chrome Dev Tools:

This panel is available on the same Application tab in the Dev Tools, under the collapsible menu named Cache Storage.

Note: If you open the menu and cannot find the new cache content, then right-click on the Cache Storage node and click Refresh

As we can see, all the application bundles have been downloaded in the background, and the application is ready to be served from the cache!

But before doing so, let's go ahead and first clear all previous versions of the application from Cache Storage.

Step 4 - Purging Previous Application Versions

The best moment to purge previous versions of the application is at Service Worker Activation time, because this is the only moment when we can be sure that the user is no longer using the previous application version in any of the browser tabs.

This is how we can purge previous application versions at Activation time:

As we can see, we are looping through all the cache names available in Cache Storage, and deleting all caches that don't correspond to the current application version (which is V3).

A note on the async/await syntax

Notice that caches.keys() returns a Promise, as is generally the case with Cache Storage API calls.

We want to wait for that Promise to resolve and then use that value in the rest of the code below, and so we are applying the await syntax that will wait for the Promise to resolve before continuing.

As we can see, this is a great way to make asynchronous Promise-based code look much more readable and closer to synchronous code, but it only works inside a function marked with the async keyword.

This async/await syntax is already available in a lot of browsers (see here for support), for example in Chrome you can try these examples without any transpilation needed.

Step 5 - Serving the Application From Cache With a Cache Then Network Strategy

The last step needed for implementing Application Download and Installation is to serve the application bundles from Cache Storage directly, and fall back to the network if necessary:

Let's then break down this example, to see how the Cache Then Network strategy is being applied:

we are intercepting all HTTP calls made by the application, inside an async function

the async function will always return a Promise to respondWith(), either explicitly as a return value, or by transparently wrapping the returned value in a Promise

inside the async function, we start by opening the cache that corresponds to the current application version

we are then going to query the cache, to see if there is an HTTP Response that would match the HTTP Request made by the application

the call to match() also returns a Promise, so we await the result before continuing

if a match was found, this means that the request made by the application was found in the cache, so we return that HTTP Response straight to respondWith()

note that there is no need to return a Promise from the async method, if we return a value it will implicitly get wrapped in a Promise by the async/await mechanism

if no match was found, we are going to let the request go through to the network by awaiting the result of a fetch() call

then we are going to log the request that got forwarded to the network, and return the result of the fetch() call to the application

With this in place, any request that the application makes to load the cached bundles will be served from Cache Storage, while other requests such as for example a REST API call to /api/courses will still go through to the network.

And with this last step in place, we have a complete solution for downloading and background installing our web application! So let's try this out.

Deploying a new Version of the application

To see the Download and Install mechanism in action, let's open a new tab in our sample application, and see that it's now running version V3 of the Service Worker, which implements the Download and Install feature.

As we can see, V3 of the Service Worker (and of the application) is still up and running, as expected. This means that the application version was served by Service Worker v3, which means the bundles all came from the Cache named app-cache-v3.

But we can see also that version V4 was Installed in the background. Let's have a look at what we have on the Service Worker tab:

As we can see, version V4 is waiting to be Activated. But the bundles of V4, which could correspond to a completely different version of the whole web application are now ready to be used.

To confirm this, let's have a look at the contents of Cache Storage:

As we can see, Cache Storage contains two versions of the application at this stage:

version v3, which is still being served to the user

version v4, which was downloaded in the background and is ready to be used as soon as all version v3 tabs are closed

In order to activate version v4, let's simulate some normal user interaction. The user would eventually close all the browser tabs running version v3, and then come back later to the application.

At that moment, the browser will activate version V4 and serve the corresponding files from the cache:

And with this, the whole lifecycle is completed and the user now has a freshly updated version of the application downloaded and installed in the browser.

The new version of the application was downloaded and installed in the background, without interfering with the normal user experience. This is actually even better than native mobile installations!

Customizing the Service Worker Lifecycle Behavior

What we have described so far is the default behavior of the Service Worker Lifecycle, which makes a lot of sense in the context of the Download and Installation use case.

Let's now see how we can customize the Lifecycle if needed, to better suit other PWA use cases.

Notice that modifying the behavior of the Service Worker Lifecycle, although tempting, is not really recommended, as we will see.

Skipping the Wait Phase (and potential issues it might cause)

For example, we could skip the Waiting phase of the Service Worker Lifecycle altogether, by calling the skipWaiting() API at the end of the Install phase:

In this example, we are waiting for the files to be downloaded and installed, and then we are going to call self.skipWaiting(), which returns a Promise.

This will cause the Waiting Phase of the Lifecycle to be skipped, and for the new version of the Service Worker to become immediately active.

This means that if the user opens a new tab, the new version would be active which might lead to inter-tab inconsistencies. In most cases, it's better to not skip the Waiting phase and avoid those inconsistent scenarios by design.

This does not mean, however, that by using skipWaiting() the new version of the Service Worker can immediately intercept requests from the running tab.

Taking over the current page with clients.claim()

We have seen, for example, that the very first time a page with a Service Worker is loaded, the Service Worker will be Installed and Activated, but it will still not be able to intercept the network requests made by the page.

We would have to refresh the page in order for the new Service Worker to start intercepting requests.

Again, this is for consistency: if the initial requests of a page were not served by a Service Worker, then by default none of the HTTP requests made by that page after startup will be served by the Service Worker either.

But we can change this, by having the Service Worker claim all the active application tabs at Activation time:
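A sketch of what such an activate handler could look like (the handler body is again extracted into a plain function for readability):

```javascript
// sw.js -- minimal sketch of claiming open pages at Activation time
function onActivate(event, scope) {
  // claim() makes this worker the controller of all open pages in its
  // scope, so it can start intercepting their requests without a reload
  event.waitUntil(scope.clients.claim());
}

// Inside a real Service Worker, `self` is the ServiceWorkerGlobalScope
if (typeof self !== 'undefined' && self.clients) {
  self.addEventListener('activate', event => onActivate(event, self));
}
```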

Calling claim() will allow the Activated Service Worker to immediately start intercepting requests (Ajax included) from the running page (as well as other open tabs), without having to wait for a reload.

This early activation of the Service Worker brings the potential for an inconsistency: a page served by version v4 might end up having its runtime HTTP requests intercepted by Service Worker v5.

But for some use cases, this early activation is exactly what we need: imagine a second Service Worker running on scope /api that caches application data in IndexedDB. We might want to activate it as early as possible, so that it can start caching the application data right away.

Updating a Service Worker Manually

By default, the browser will check upon user navigation if there is a new version of the Service Worker on the server ready to be installed.

If for some reason we have an application that is going to remain open for a long period of time (like a PWA installed on the user's home screen), we can manually check if there is a new version of the Service Worker by using the registration object like this:
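A page-side sketch of such a manual check (the one-hour interval and the helper name are arbitrary choices for illustration):

```javascript
// Page script sketch: periodically ask the browser to re-check the
// server for a new sw.js; the interval is an arbitrary choice
function schedulePeriodicUpdate(registration, intervalMs = 60 * 60 * 1000) {
  // registration.update() triggers a background check for a new worker
  return setInterval(() => registration.update(), intervalMs);
}

// Browser usage (not runnable outside a page):
// navigator.serviceWorker.register('sw.js')
//   .then(registration => schedulePeriodicUpdate(registration));
```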

If a new version of the Service Worker is available on the server, the call to update() will trigger a new background installation.

This periodic check is usually not necessary, as the browser already performs it very frequently: with each user navigation, or with other events such as the reception of a Push notification.

One good scenario for manually checking for a new version is when the version we are running has a bug. Let's then talk about what happens if something goes wrong with the application.

Built-in Browser protection against broken Service Workers

As you might imagine, caching the application on the user's computer and bypassing the network is a bit dangerous: what if the version the user downloaded accidentally had an error?

There are a couple of built-in browser protections against this.

For example, the Service Worker will never intercept itself!

This means that the file sw.js that we pass to serviceWorker.register('sw.js') will never be intercepted by a fetch event handler.

However, this does not apply to the Service Worker registration script sw-register.js, so we need to make sure that we never cache that.
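One way to enforce this on the worker side is to filter the registration script out before anything is added to Cache Storage; a sketch, with a hypothetical helper name:

```javascript
// Sketch: never let the worker cache its own registration script
function isCacheableInWorker(url) {
  return !url.endsWith('/sw-register.js');
}

// Hypothetical usage when building the list of files to precache:
// const toCache = ASSET_URLS.filter(isCacheableInWorker);
```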

Service Workers and normal Browser caching

The standard browser cache mechanism based on the Cache-Control header is very easy to misuse, due to the confusing nature of its configuration options.

To avoid those issues, it's recommended to get familiar with some common Caching Best Practices, as this will help with any application in general, not only with PWAs.

Errors made in setting up Cache-Control headers for our application will be troublesome in production even if we don't run a PWA, but the use of a Service Worker will make those problems much worse.

We might run into a situation where we have cached the Service Worker sw.js file in the standard browser cache, because it was served with a Cache-Control header that gives the file a long lifetime.

Let's say that the sw.js was served with a cache lifetime of one month:

Cache-Control: max-age=2592000

The browser will indeed cache the file but, because the file is a Service Worker, it will cache it only for a maximum of 24 hours instead of one month!

This is a great precaution, but still, the website could remain broken for a full day before a patch can be installed. The simplest and safest solution is to never cache the Service Worker file or its registration script.

Avoid caching the Service Worker file

This can be ensured by the server by marking these files explicitly as being immediately expired:

Cache-Control: max-age=0

And speaking of the normal browser cache, what about the caching headers for the CSS and JS bundles?

Precautions concerning the use of the Browser Cache and Service Workers

The CSS / JS bundles stored in Cache Storage will be loaded from the network, and those bundles may or may not be served with a Cache-Control header, meaning that we potentially have two caches in action that might interfere with each other.

This could lead to troublesome scenarios: for example, a new version of the Service Worker gets installed and tries to load a new version of a JS bundle whose file name did not change.

But the file is cached in the normal browser cache, so the stale version accidentally still gets served to the Service Worker.

This means that the installation of the Service Worker completes successfully, but Cache Storage now contains the wrong version of one of the bundles: the application installation is corrupted.

So how do we avoid running into these scenarios? The simplest approach is to apply the same caching policies that we would apply to a non-PWA application: different types of files need different caching strategies.

Cache-Control for CSS/JS bundles

For CSS and JS bundles, the simplest approach is to append to the file name a hash of the file content, or a version number, like for example: bootstrap.v4.min.css.

Then for these files, we can choose a very long max age, essentially declaring them immutable and caching them forever:

Cache-Control: max-age=31536000

If a new version of the file is available, the file name will change (this could be enforced by the build system) and the new version will be downloaded and cached.

This will avoid many common caching issues, both for browsers that support Service Workers and for those that don't.

Loading Resource Bundles from Third Party Domains

In this example, we have downloaded all bundles from our own domain. But what if we would like to load CSS and JS bundles from a third-party domain inside the Service Worker, such as a CDN?

This is possible, but the third-party domain has to allow that cross-origin request to be executed, just like any other CORS request.

This can be done by serving the bundle file with this header:

access-control-allow-origin: https://yourdomain.com

If we are serving these bundle files from a CDN like Amazon CloudFront, and we want the files to be loadable via a cross-origin request coming from any domain and not just https://yourdomain.com, we can instead use this header:

access-control-allow-origin: *

Conclusions

As we can see, all the multiple PWA features and the related PWA APIs make the most sense if we look at them together and in the context of a specific use case, instead of in isolation.

We can do much more than the download and installation use case that we covered; this was just an example that happens to be the best starting point for understanding why the Service Worker Lifecycle was designed the way it was.

The core philosophy of the Service Worker spec is to put these network proxying capabilities in the hands of developers, so that we can implement many different PWA use cases and patterns, as opposed to providing only a set of predefined offline patterns (as was the case with Application Cache).

I hope that this post helps with getting started with Service Workers and that you enjoyed it!

If you want to learn more about the Angular PWA features that are built on top of Service Workers, have a look at the other posts of the complete Angular PWA series: