Figure 3. Using the Request Blocking tab to block the problematic scripts.

And then auditing the page again:

Figure 4. The Performance score improved to 97 after blocking the problematic
scripts.

You could alternatively use Local Overrides to add async attributes to each
of the script tags, but "we'll leave that as an exercise for the reader." Go to
Multi-client demo to try it out. Or check out this tweet for a video demonstration.

Lighthouse 5.2 in the Audits panel

The Audits panel is now running Lighthouse 5.2. The new Third-Party Usage
diagnostic audit tells you how much third-party code was requested and how long that third-party
code blocked the main thread while the page loaded. See Optimize your third-party resources
to learn more about how third-party code can degrade load performance.

File DevTools issues from the Main Menu

If you ever encounter a bug in DevTools and want to file an issue, or if you ever get an idea
on how to improve DevTools and want to request a new feature, go to Main Menu > Help >
Report a DevTools issue to create an issue in the DevTools engineering team's tracker. Providing a
minimal, reproducible example on Glitch dramatically increases
the team's ability to fix your bug or implement your feature request!

Figure 10. Main Menu > Help > Report a DevTools issue.

Feedback

To discuss the new features and changes in this post, or anything else related to DevTools:

Consider Canary

If you're on Mac or Windows, consider using Chrome Canary as your default
development browser. Canary gives you access to the latest DevTools features.

Note: Canary is released as soon as it's built, without testing. This means that Canary
breaks about once a month. It's usually fixed within a day. You can go back to using Chrome
Stable while Canary is broken.

An Android phone or emulator connected and set up for development
(Enable USB debugging if
you’re using a physical phone).

A browser that supports Trusted Web Activities on your development phone.
Chrome 72 or later will work. Support in other browsers is on its way.

A website you'd like to view in the Trusted Web Activity.

A Trusted Web Activity lets your Android App launch a full screen Browser Tab without
any browser UI.
This capability is restricted to websites that you own, and you prove this by setting
up Digital Asset Links.
Digital Asset Links consist essentially of a file on your website that points to your app and some
metadata in your app that points to your website.
We'll talk more about them later.

When you launch a Trusted Web Activity, the browser will check that the Digital Asset Links check
out; this is called verification.
If verification fails, the browser will fall back to displaying your website as a
Custom Tab.

Clone and customize the example repo

The svgomg-twa repo contains an example TWA that
you can customize to launch your website:

Import the Project into Android Studio, using File > New > Import Project, and select
the folder to which the project was cloned.

Open the app's build.gradle and modify the values in twaManifest.
There are two build.gradle files.
You want the module one at app/build.gradle.

Change hostName to point to your website.
Your website must be served over HTTPS, though you omit the protocol from the hostName field.

Change name to whatever you want.

Change applicationId to something specific to your project.
This translates into the app’s package name and is
how the app is identified on the
Play Store: no two apps can share an applicationId, and if you change it you’ll need to
create a new Play Store listing.

Build and run

In Android Studio, hit Run > Run ‘app’ (where ‘app’ is your module name, if you’ve changed it)
and the TWA will be built and run on your device!
You’ll notice that your website is launched as a Custom Tab, not a Trusted Web Activity. This is
because we haven’t set up our Digital Asset Links yet. But first...

A note on signing keys

Digital Asset Links take into account the key that an APK has been signed with, and a common
cause of verification failure is using the wrong signature.
(Remember, failing verification means you'll launch your website as a Custom Tab with
browser UI at the top of the page.)
When you hit Run or Build APK in Android Studio, the APK will be created with your developer
debug key, which Android Studio automatically generated for you.

If you deploy your app to the Play Store you’ll hit Build > Generate Signed APK, which will
use a different signature, one that you’ll have created yourself (and protected with a password).
That means that if your Digital Asset Links file specifies your production key, verification
will fail when you build with your debug key.
This also can happen the other way around - if the Digital Asset Links file has your debug key
your TWA will work fine locally, then when you download the signed version from the Play Store,
verification will fail.

You can put both your debug key and production key in your asset link file
(see Adding More Keys below),
but your debug key is less secure.
Anyone who gets a copy of the file can use it.
Finally, if you have your app installed on your device with one key, you can’t install the version
with the other key. You must uninstall the previous version first.

Building your app

To build with debug keys:

Click Run 'app' where 'app' is the name of your module if you changed it.

To build with release keys:

Click Build then Generate Signed APK.

Choose APK.

If you're doing this for the first time, on the next page press Create New
to create a new key and follow the
Android documentation.
Otherwise select your previously created key.

Press Next and pick the release build variant.

Make sure you check both the V1 and the V2 signatures (the Play Store won’t let you upload
the APK otherwise).

Click Finish.

If you built with debug keys, your app will be automatically deployed to your device.
If you built with release keys, after a few seconds a popup will appear in the
bottom-right corner giving you the option to locate or analyze the APK.
(If you miss it, you can press Event Log in the bottom right.)
You’ll need to use adb manually to
install the signed APK with adb install app-release.apk.

This table shows which key is used based on how you create your APK.

Debug key: created automatically by Android Studio. Used by Run 'app',
Debug 'app', and Build APK.

Release key: created manually by you. Used by Generate Signed APK, and when
the app is downloaded from the Play Store.

Creating your asset link file

Now that your app is installed (with either the debug or release key) you can generate the Digital
Asset Link file.
I’ve created the
Asset Link Tool to help you
do this.
If you'd prefer not to download the Asset Link Tool, you can
determine your app's signature manually.

When the app launches, you’ll be given a list of all applications installed on your device by
applicationId.
Filter the list by the applicationId you chose earlier and click on that entry.

You’ll see a page listing your app’s signature and with a generated Digital Asset Link.
Click the Copy or Share buttons at the bottom to export it however you like (e.g. save it to
Google Keep, or email it to yourself).

Put the Digital Asset Link in a file called assetlinks.json and upload it to your website at
.well-known/assetlinks.json (relative to the root).

Ensuring your asset link file is accessible

Now that you’ve uploaded it, make sure you can access your asset link file in a browser.
Check that https://example.com/.well-known/assetlinks.json resolves to the file you just uploaded.

Jekyll based websites

If your website is generated by Jekyll (such as GitHub Pages), you’ll need to add a line of
configuration so that the .well-known directory is included in the output.
GitHub help has more information on this topic.
Create a file called _config.yml at the root of your site (or add to it if it already exists) and
enter:

# Folders with dotfiles are ignored by default.
include: [.well-known]

Adding more keys

A Digital Asset Link file can contain more than one app, and for each app, it can contain more than
one key.
For example, to add a second key just use the
Asset Link Tool to
determine the key and add it as a second entry to the sha256_cert_fingerprints field.
The code in Chrome that parses this JSON is quite strict, so make sure you don’t accidentally add an
extra comma at the end of the list.
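For reference, an assetlinks.json containing one app with two fingerprints might look like the sketch below. The package name and the (truncated) fingerprint values are placeholders for illustration, not real values:

```json
[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.twa",
    "sha256_cert_fingerprints": [
      "FA:2A:...:F1",
      "BB:75:...:C2"
    ]
  }
}]
```

Note the lack of a trailing comma after the second fingerprint entry.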

Troubleshooting

Viewing relevant logs

Chrome logs the reason that Digital Asset Links verification fails, and you can view the logs on an
Android device with adb logcat.
If you’re developing on Linux or Mac, you can read the relevant logs from a connected device
with:

> adb logcat -v brief | grep -e OriginVerifier -e digital_asset_links

For example, if you see the message Statement failure matching fingerprint, use the
Asset Link Tool to see your app’s signature and make sure it matches the one in your assetlinks.json
file. (Be wary of confusing your debug and release keys; see the
A note on signing keys section.)

Checking your browser

A Trusted Web Activity will try to adhere to the user’s default choice of browser.
If the user’s default browser supports TWAs, it will be launched.
Failing that, if any other installed browser supports TWAs, it will be chosen.
Finally, the default behavior is to fall back to Custom Tabs mode.

This means that if you’re debugging something to do with Trusted Web Activities you should
make sure you’re using the browser you think that you are.
You can use the following command to check which browser is being used:

Next Steps

Hopefully if you’ve followed this guide, you'll have a working Trusted Web Activity and have enough
knowledge to debug what's going on when verification fails.
If not, please have a look at the Troubleshooting section or file a GitHub issue against
these docs.

For your next steps, I’d recommend you start off by
creating an icon for your app.
With that done you can consider deploying your app to the Play Store.

Get started with GPU Compute on the Web

This article is about me playing with the experimental WebGPU API and sharing
my journey with web developers interested in performing data-parallel
computations using the GPU.

Background

As you may already know, the Graphics Processing Unit (GPU) is an electronic
subsystem within a computer that was originally specialized for processing
graphics. However, in the past 10 years, it has evolved towards a more flexible
architecture allowing developers to implement many types of algorithms, not just
render 3D graphics, while taking advantage of the unique architecture of the
GPU. These capabilities are referred to as GPU Compute, and using a GPU as a
coprocessor for general-purpose scientific computing is called general-purpose
GPU (GPGPU) programming.

GPU Compute has contributed significantly to the recent machine learning boom,
as convolution neural networks and other models can take advantage of the
architecture to run more efficiently on GPUs. With the current Web Platform
lacking in GPU Compute capabilities, the W3C’s “GPU for the Web” Community Group
is designing an API to expose the modern GPU APIs that are available on most
current devices. This API is called WebGPU.

WebGPU is a low-level API, like WebGL. It is very powerful and quite verbose, as
you’ll see. But that’s OK. What we’re looking for is performance.

In this article, I’m going to focus on the GPU Compute part of WebGPU and, to be
honest, I'm just scratching the surface, so that you can start playing on your
own. I will be diving deeper and covering WebGPU rendering (canvas, texture,
etc.) in forthcoming articles.

Dogfood: WebGPU is available for now in Chrome 78 for macOS behind an
experimental flag. You can enable it at chrome://flags/#enable-unsafe-webgpu. The
API is constantly changing and currently unsafe. As GPU sandboxing isn't
implemented yet for the WebGPU API, it is possible to read GPU data belonging to other
processes! Don’t browse the web with it enabled.

Access the GPU

Accessing the GPU is easy in WebGPU. Calling navigator.gpu.requestAdapter()
returns a JavaScript promise that will asynchronously resolve with a GPU
adapter. Think of this adapter as the graphics card. It can either be integrated
(on the same chip as the CPU) or discrete (usually a PCIe card that is more
performant but uses more power).

Once you have the GPU adapter, call adapter.requestDevice() to get a promise
that will resolve with a GPU device you’ll use to do some GPU computation.

Both functions take options that allow you to be specific about the kind of
adapter (power preference) and device (extensions, limits) you want. For the
sake of simplicity, we’ll use the default options in this article.
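A minimal sketch of those two calls, assuming the experimental API shape described above (navigator.gpu only exists behind the Chrome flag; the `gpu` parameter makes the dependency explicit):

```javascript
// Sketch: acquire a GPU device via the experimental WebGPU API.
// `gpu` defaults to navigator.gpu (browser-only, behind a flag).
async function initWebGPU(gpu = navigator.gpu) {
  if (!gpu) {
    throw new Error('WebGPU is not available in this browser.');
  }
  const adapter = await gpu.requestAdapter();   // think: the graphics card
  const device = await adapter.requestDevice(); // used for all GPU work below
  return device;
}
```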

Write buffer memory

Let’s see how to use JavaScript to write data to memory for the GPU. This
process isn’t straightforward because of the sandboxing model used in modern web
browsers.

The example below shows you how to write four bytes to buffer memory accessible
from the GPU. It calls device.createBufferMappedAsync() which takes the size of
the buffer and its usage. Even though the usage flag GPUBufferUsage.MAP_WRITE
is not required for this specific call, let's be explicit that we want to write
to this buffer. The resulting promise resolves with a GPU buffer object and its
associated raw binary data buffer.

Writing bytes is familiar if you’ve already played with ArrayBuffer; use a
TypedArray and copy the values into it.

At this point, the GPU buffer is mapped, meaning it is owned by the CPU, and
it’s accessible in read/write from JavaScript. So that the GPU can access it, it
has to be unmapped which is as simple as calling gpuBuffer.unmap().

The concept of mapped/unmapped is needed to prevent race conditions where GPU
and CPU access memory at the same time.

Read buffer memory

Now let’s see how to copy a GPU buffer to another GPU buffer and read it back.

Since we’re writing in the first GPU buffer and we want to copy it to a second
GPU buffer, a new usage flag GPUBufferUsage.COPY_SRC is required. The second
GPU buffer is created in an unmapped state with the synchronous
device.createBuffer(). Its usage flag is GPUBufferUsage.COPY_DST |
GPUBufferUsage.MAP_READ as it will be used as the destination of the first GPU
buffer and read in JavaScript once GPU copy commands have been executed.

// Get a GPU buffer and an arrayBuffer for writing.
// Upon success the GPU buffer is returned in the mapped state.
const [gpuWriteBuffer, arrayBuffer] = await device.createBufferMappedAsync({
  size: 4,
  usage: GPUBufferUsage.MAP_WRITE | GPUBufferUsage.COPY_SRC
});

// Write bytes to buffer.
new Uint8Array(arrayBuffer).set([0, 1, 2, 3]);

// Unmap buffer so that it can be used later for copy.
gpuWriteBuffer.unmap();

// Get a GPU buffer for reading in an unmapped state.
const gpuReadBuffer = device.createBuffer({
  size: 4,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ
});

Because the GPU is an independent coprocessor, all GPU commands are executed
asynchronously. This is why there is a list of GPU commands built up and sent in
batches when needed. In WebGPU, the GPU command encoder returned by
device.createCommandEncoder() is the JavaScript object that builds a batch of
“buffered” commands that will be sent to the GPU at some point. The methods on
GPUBuffer, on the other hand, are “unbuffered”, meaning they execute atomically
at the time they are called.

Once you have the GPU command encoder, call copyEncoder.copyBufferToBuffer()
as shown below to add this command to the command queue for later execution.
Finally, finish encoding commands by calling copyEncoder.finish() and submit
those to the GPU device command queue. The queue is responsible for handling
submissions done via device.getQueue().submit() with the GPU commands as
arguments. This will atomically execute all the commands stored in the array in
order.

At this point, GPU queue commands have been sent, but not necessarily executed.
To read the second GPU buffer, call gpuReadBuffer.mapReadAsync(). It returns a
promise that will resolve with an ArrayBuffer containing the same values as
the first GPU buffer once all queued GPU commands have been executed.
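A sketch of the copy-and-read-back flow just described (experimental API shape; exact names may change). The parameters make the moving parts explicit rather than relying on surrounding state:

```javascript
// Sketch: encode a buffer-to-buffer copy, submit it to the queue, and
// read the destination buffer back once the GPU has executed the commands.
async function copyAndRead(device, gpuWriteBuffer, gpuReadBuffer, byteLength) {
  // Encode the command for copying the first buffer to the second one.
  const copyEncoder = device.createCommandEncoder();
  copyEncoder.copyBufferToBuffer(
    gpuWriteBuffer, 0, // source buffer and offset
    gpuReadBuffer, 0,  // destination buffer and offset
    byteLength);
  // Nothing executes until the batch is submitted to the queue.
  device.getQueue().submit([copyEncoder.finish()]);
  // Resolves once all queued GPU commands have been executed.
  return await gpuReadBuffer.mapReadAsync();
}
```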

In short, here’s what you need to remember regarding buffer memory operations:

GPU buffers have to be unmapped to be used in device queue submission.

When mapped, GPU buffers can be read and written in JavaScript.

GPU buffers are mapped when mapReadAsync(), mapWriteAsync(),
createBufferMappedAsync() and createBufferMapped() are called.

Shader programming

Programs running on the GPU that only perform computations (and don't draw
triangles) are called compute shaders. They are executed in parallel by hundreds
of GPU cores (which are smaller than CPU cores) that operate together to crunch
data. Their input and output are buffers in WebGPU.

To illustrate the use of compute shaders in WebGPU, we’ll play with matrix
multiplication, a common algorithm in machine learning illustrated below.

Figure 1.
Matrix multiplication diagram

In short, here’s what we’re going to do:

Create three GPU buffers (two for the matrices to multiply and one for the
result matrix)

Describe input and output for the compute shader

Compile the compute shader code

Set up a compute pipeline

Submit in batch the encoded commands to the GPU

Read the result matrix GPU buffer

GPU Buffers creation

For the sake of simplicity, matrices will be represented as a list of floating
point numbers. The first element is the number of rows, the second element the
number of columns, and the rest is the actual numbers of the matrix.

Figure 2.
Simple representation of a matrix in JavaScript and its equivalent in mathematical notation
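A sketch of that representation in JavaScript (the numbers here are illustrative, and the accessor is just a hypothetical helper to show the indexing):

```javascript
// Matrix encoding used in this article: [rows, columns, ...values].
// An illustrative 2x4 matrix of consecutive numbers:
const firstMatrix = new Float32Array([
  2, 4,        // 2 rows, 4 columns
  1, 2, 3, 4,  // row 0
  5, 6, 7, 8   // row 1
]);

// Element (row, col) lives at index 2 + row * columns + col.
function matrixElement(m, row, col) {
  const columns = m[1];
  return m[2 + row * columns + col];
}
```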

The three GPU buffers are storage buffers as we need to store and retrieve data in
the compute shader. This explains why the GPU buffer usage flags include
GPUBufferUsage.STORAGE for all of them. The result matrix usage flag also has
GPUBufferUsage.COPY_SRC because it will be copied to another buffer for
reading once all GPU queue commands have been executed.

Bind group layout and bind group

Concepts of bind group layout and bind group are specific to WebGPU. A bind
group layout defines the input/output interface expected by a shader, while a
bind group represents the actual input/output data for a shader.

In the example below, the bind group layout expects some storage buffers at
numbered bindings 0, 1, and 2 for the compute shader. The bind group on
the other hand, defined for this bind group layout, associates GPU buffers to
the bindings: gpuBufferFirstMatrix to the binding 0, gpuBufferSecondMatrix
to the binding 1, and resultMatrixBuffer to the binding 2.

Compute shader code

The compute shader code for multiplying matrices is written in GLSL, a
high-level shading language used in WebGL, which has a syntax based on the C
programming language. Without going into detail, you should find below the three
storage buffers marked with the keyword buffer. The program will use
firstMatrix and secondMatrix as inputs and resultMatrix as its output.
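An illustrative version of such a shader, following the structure the text describes and stored in the computeShaderCode string used later (treat it as a sketch, not the article's exact shader):

```javascript
// Illustrative GLSL compute shader for the matrix multiply described
// above, kept in a JS string so it can be compiled with glslang later.
const computeShaderCode = `#version 450
layout(std430, set = 0, binding = 0) readonly buffer FirstMatrix {
    vec2 size;
    float numbers[];
} firstMatrix;

layout(std430, set = 0, binding = 1) readonly buffer SecondMatrix {
    vec2 size;
    float numbers[];
} secondMatrix;

layout(std430, set = 0, binding = 2) buffer ResultMatrix {
    vec2 size;
    float numbers[];
} resultMatrix;

void main() {
  resultMatrix.size = vec2(firstMatrix.size.x, secondMatrix.size.y);

  // gl_GlobalInvocationID tells each invocation which result cell it owns.
  ivec2 resultCell = ivec2(gl_GlobalInvocationID.x, gl_GlobalInvocationID.y);
  float result = 0.0;
  for (int i = 0; i < int(firstMatrix.size.y); i++) {
    int a = i + resultCell.x * int(firstMatrix.size.y);
    int b = resultCell.y + i * int(secondMatrix.size.y);
    result += firstMatrix.numbers[a] * secondMatrix.numbers[b];
  }

  int index = resultCell.y + resultCell.x * int(secondMatrix.size.y);
  resultMatrix.numbers[index] = result;
}`;
```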

Note that each storage buffer has a binding qualifier that corresponds to
the same index defined in the bind group layouts and bind groups declared above.

Pipeline setup

WebGPU in Chrome currently uses bytecode instead of raw GLSL code. This means we
have to compile computeShaderCode before running the compute shader. Luckily
for us, the @webgpu/glslang package allows us to compile computeShaderCode
in a format that WebGPU in Chrome accepts. This bytecode format is based on a
safe subset of SPIR-V.

Note that, at the time of writing, the “GPU for the Web” W3C Community Group has
not yet decided on the shading language for WebGPU.

The compute pipeline is the object that actually describes the compute operation
we're going to perform. Create it by calling device.createComputePipeline().
It takes two arguments: the bind group layout we created earlier, and a compute
stage defining the entry point of our compute shader (the main GLSL function)
and the actual compute shader module compiled with glslang.compileGLSL().
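A sketch of that call, wrapped in a function so the inputs are explicit (experimental API shape; `glslang` is the compiler instance from the @webgpu/glslang package mentioned above):

```javascript
// Sketch: create a compute pipeline from a bind group layout and a
// GLSL shader string compiled to bytecode with glslang.
function createPipeline(device, glslang, bindGroupLayout, computeShaderCode) {
  return device.createComputePipeline({
    layout: device.createPipelineLayout({ bindGroupLayouts: [bindGroupLayout] }),
    computeStage: {
      module: device.createShaderModule({
        code: glslang.compileGLSL(computeShaderCode, 'compute')
      }),
      entryPoint: 'main' // the GLSL main() function
    }
  });
}
```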

Commands submission

After instantiating a bind group with our three GPU buffers and a compute
pipeline with a bind group layout, it is time to use them.

Let’s start a programmable compute pass encoder with
commandEncoder.beginComputePass(). We'll use this to encode GPU commands
that will perform the matrix multiplication. Set its pipeline with
passEncoder.setPipeline(computePipeline) and its bind group at index 0 with
passEncoder.setBindGroup(0, bindGroup). The index 0 corresponds to the set =
0 qualifier in the GLSL code.

Now, let’s talk about how this compute shader is going to run on the GPU. Our
goal is to execute this program in parallel for each cell of the result matrix,
step by step. For a result matrix of size 2 by 4 for instance, we’d call
passEncoder.dispatch(2, 4) to encode the command of execution. The first
argument “x” is the first dimension, the second one “y” is the second dimension,
and the last one “z” is the third dimension, which defaults to 1 as we don’t
need it here. In the GPU compute world, encoding a command to execute a kernel
function on a set of data is called dispatching.

Figure 3.
Execution in parallel for each result matrix cell

In our code, “x” and “y” will be respectively the number of rows of the first
matrix and the number of columns of the second matrix. With that, we can now
dispatch a compute call with passEncoder.dispatch(firstMatrix[0],
secondMatrix[1]).

As seen in the drawing above, each shader will have access to a unique
gl_GlobalInvocationID object that will be used to know which result matrix
cell to compute.

To end the compute pass encoder, call passEncoder.endPass(). Then, create a
GPU buffer to use as a destination to copy the result matrix buffer with
copyBufferToBuffer. Finally, finish encoding commands with
copyEncoder.finish() and submit those to the GPU device queue by calling
device.getQueue().submit() with the GPU commands.
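The submission sequence above can be sketched as follows (experimental API shape; the result-buffer copy via copyBufferToBuffer is elided here for brevity):

```javascript
// Sketch: encode a compute pass for the matrix multiply and submit it.
function encodeAndSubmit(device, computePipeline, bindGroup, rows, columns) {
  const commandEncoder = device.createCommandEncoder();
  const passEncoder = commandEncoder.beginComputePass();
  passEncoder.setPipeline(computePipeline);
  passEncoder.setBindGroup(0, bindGroup); // index 0 matches `set = 0` in GLSL
  passEncoder.dispatch(rows, columns);    // one invocation per result cell
  passEncoder.endPass();
  // Submit the finished batch to the device queue for execution.
  device.getQueue().submit([commandEncoder.finish()]);
}
```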

Performance findings

So how does running matrix multiplication on a GPU compare to running it on a
CPU? To find out, I wrote the program just described for a CPU. As you can
see in the graph below, using the full power of the GPU seems like an obvious
choice once the matrices are larger than 256 by 256.

Figure 5.
GPU vs CPU benchmark

This article was just the beginning of my journey exploring WebGPU. Expect more
articles soon featuring more deep dives in GPU Compute and on how rendering
(canvas, texture, sampler) works in WebGPU.

The Chromium Chronicle: Coding Outside the Sandbox

Episode 5: August 2019

by Ade in Mountain View

Chrome is split into processes. Some of them are sandboxed, which means that
they have reduced access to the system and to users' accounts. In a sandboxed
process, bugs that allow malicious code to run are much less severe.

The browser process has no sandbox, so a bug could give malicious code full
access to the whole device. What should you do differently? And what's the
situation with other processes?

In other processes, OS access is limited via platform-specific restrictions.
For more information, see Chrome's sandbox implementation guide.

Make sure to avoid the following common mistakes:

Don’t parse or interpret untrustworthy data using C++ in the
browser process.

Don’t trust the origin a renderer claims to represent. The browser’s
RenderProcessHost can be used to get the current origin securely.

Instead, use the following best practices:

Be extra paranoid if your code is in the browser process.

Validate all IPC from other processes. Assume all other processes are already
compromised and out to trick you.

Do your processing in a renderer, utility process, or some other sandboxed
process. Ideally, also use a memory-safe language such as JavaScript
(memory-safety bugs account for >50% of security bugs).

For years, we ran network stacks (e.g. HTTP, DNS, QUIC) in the browser process,
which led to some critical vulnerabilities. On
some platforms, networking now has its own process, with a sandbox coming.

The Native File System API: Simplifying access to local files

The Native File System API (formerly known as the Writeable Files API), is
available behind a flag in Chrome 77 and later, and should begin an origin
trial in Chrome 78 (stable in October). It is part of our capabilities
project, and this post will be updated as the implementation progresses.

What is the Native File System API?

The new Native File System API enables developers to build powerful web apps
that interact with files on the user's local device, like IDEs, photo and video
editors, text editors, and more. After a user grants a web app access, this
API allows web apps to read or save changes directly to files and folders
on the user's device.

We've put a lot of thought into the design and implementation of the Native
File System API to ensure that people can easily manage their files. See the
Security and permissions
section of this post.

Using the Native File System API

To show off the true power and usefulness of the Native File System API,
I wrote a single-file text editor. It lets you open a text
file, edit it, save the changes back to disk, or start a new file and save
the changes to disk. It's nothing fancy, but provides enough to help you
understand the concepts.

Enabling via chrome://flags

If you want to experiment with the Native File System API locally, enable
the #native-file-system-api flag in chrome://flags.

Read a file from the local file system

The first use case I wanted to tackle was to ask the user to choose a file,
then open and read that file from disk.

Ask the user to pick a file to read

The entry point to the Native File System API is
window.chooseFileSystemEntries(). When called, it shows
a file picker dialog, and prompts the user to select a file. After selecting
a file, the API returns a handle to the file. An optional options parameter
lets you influence the behavior of the file picker, for example, allowing the
user to select multiple files, or directories, or different file types.
Without any options specified, the file picker allows the user to select a
single file, perfect for our text editor.

Like many other powerful APIs, calling chooseFileSystemEntries() must be
done in a secure context, and must be called from within
a user gesture.

Once the user selects a file, chooseFileSystemEntries() returns a handle,
in this case a FileSystemFileHandle that contains the
properties and methods needed to interact with the file.

It’s helpful to keep a reference to the file handle around so that it
can be used later. It’ll be needed to save changes back to the file, or
to perform any other file operations. In the next few milestones, installed
Progressive Web Apps will also be able to save the handle to IndexedDB
and persist access to the file across page reloads.

Read a file from the file system

Now that you have a handle to a file, you can get the properties of the file,
or access the file itself. For now, let’s simply read the contents of the
file. Calling handle.getFile() returns a File object,
which contains binary data as a blob. To get the data from the blob, call
one of the reader methods (slice(), stream(), text(), arrayBuffer()).
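A sketch of that read flow, given a handle such as one returned by window.chooseFileSystemEntries() (experimental API shape as described above):

```javascript
// Sketch: read a file's full text given a FileSystemFileHandle,
// e.g. one returned by window.chooseFileSystemEntries().
async function readTextFile(fileHandle) {
  const file = await fileHandle.getFile(); // a regular File (Blob) object
  return await file.text();                // one of the Blob reader methods
}
```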

Write the file to the local file system

In the text editor, there are two ways to save a file: Save, and Save As.
Save simply writes the changes back to the original file using the file
handle we got earlier. But Save As creates a new file, and thus requires a
new file handle.

Create a new file

The chooseFileSystemEntries() API with {type: 'saveFile'} will show the
file picker in “save” mode, allowing the user to pick a new file they want
to use for saving. For the text editor, I also wanted it to automatically
add a .txt extension, so I provided some additional parameters.
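A sketch of those additional parameters (experimental API shape; the options here are illustrative, and the `chooser` parameter defaults to the browser entry point so the call is explicit):

```javascript
// Sketch: show the file picker in "save" mode with a suggested .txt type.
function getNewFileHandle(chooser = window.chooseFileSystemEntries) {
  const opts = {
    type: 'saveFile',
    accepts: [{
      description: 'Text file',
      extensions: ['txt'],
      mimeTypes: ['text/plain'],
    }],
  };
  return chooser(opts);
}
```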

Save changes to the original file

To write data to disk, I needed to create a FileSystemWriter
by calling createWriter() on the file handle, then call write() to do
the write. If permission to write hasn’t been granted already, the browser
will prompt the user for permission to write to the file first (during
createWriter()). The write() method takes a string, which is what we
want for a text editor, but it can also take a BufferSource,
or a Blob.
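A sketch of that save flow (experimental API; the exact write() signature has varied across drafts, this follows the description above):

```javascript
// Sketch: save `contents` to disk through a FileSystemWriter.
async function writeFile(fileHandle, contents) {
  const writer = await fileHandle.createWriter(); // may prompt for permission
  await writer.truncate(0); // start from an empty file
  await writer.write(contents);
  await writer.close();     // contents are only guaranteed on disk after this
}
```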

Note: There's no guarantee that the contents are written to disk until
the close() method is called.

The keepExistingData option, when calling createWriter(), isn’t supported
yet, so once I get the writer, I immediately call truncate(0) to ensure I
start with an empty file. Otherwise, if the length of the new content is
shorter than the existing content, the existing content after the new
content would remain. keepExistingData will be added in a future milestone.

When createWriter() is called, Chrome checks if the user has granted
write permission. If not, it requests permission. If the user grants
permission, the app can write the contents to the file. But, if the user
does not grant permission, createWriter() will throw a DOMException, and
the app will not be able to write to the file. In the text editor, these
DOMExceptions are handled in the saveFile()
method.

What else is possible?

Beyond reading and writing files, the Native File System API provides
several other new capabilities.

Open a directory and enumerate its contents

To enumerate all files in a directory, call chooseFileSystemEntries()
with the type option set to 'openDirectory'. The user selects a directory
in a picker, after which a FileSystemDirectoryHandle
is returned, which lets you enumerate and access the directory’s files.
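A sketch of enumerating a directory handle (experimental API shape; getEntries() is the async iterator the draft exposes for this, and its name may change as the spec evolves):

```javascript
// Sketch: collect the names of all entries in a directory, given a
// FileSystemDirectoryHandle from chooseFileSystemEntries().
async function listDirectory(dirHandle) {
  const names = [];
  for await (const entry of dirHandle.getEntries()) {
    names.push(entry.name);
  }
  return names;
}
```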

Opening a file or saving a new file

File picker used to open an existing file for reading.

When opening a file, the user provides permission to read a file or
directory via the file picker. The open file picker can only be shown via
a user gesture when served from a secure context. If
the user changes their mind, they can cancel the selection in the file
picker and the site does not get access to anything. This is the same
behavior as that of the <input type="file"> element.

File picker used to save a file to disk.

Similarly, when a web app wants to save a new file, the browser will show
the save file picker, allowing the user to specify the name and location
of the new file. Since they are saving a new file to the device (versus
overwriting an existing file), the file picker grants the app permission
to write to the file.

Restricted folders

To help protect users and their data, the browser may limit the user’s
ability to save to certain folders, for example, core operating system
folders like Windows, the macOS Library folders, etc. When this happens,
the browser will show a modal prompt and ask the user to choose a
different folder.

Modifying an existing file or directory

A web app cannot modify a file on disk without getting explicit permission
from the user.

Permission prompt

Prompt shown to users before the browser is granted write
permission on an existing file.

If a person wants to save changes to a file that they previously granted
read access, the browser will show a modal permission prompt, requesting
permission for the site to write changes to disk. The permission request
can only be triggered by a user gesture, for example, clicking a “Save”
button.

Alternatively, a web app that edits multiple files, like an IDE, can
also ask for permission to save changes at the time of opening.

If the user chooses Cancel, and does not grant write access, the web
app cannot save changes to the local file. It should provide an alternative
method to allow the user to save their data, for example providing a way to
“download” the file, saving data to the cloud, etc.

Transparency

Omnibox icon indicating the user has granted the website permission to
save to a local file.

Once a user has granted permission to a web app to save a local file,
Chrome will show an icon in the omnibox. Clicking on the omnibox icon
opens a popover showing the list of files the user has given access to.
The user can easily revoke that access if they choose.

Permission persistence

The web app can continue to save changes to the file without prompting as
long as the tab is open. Once a tab is closed, the site loses all access.
The next time the user uses the web app, they will be re-prompted for access
to the files. In the next few milestones, installed Progressive Web Apps
(only) will also be able to save the handle to IndexedDB and persist access
to handles across page reloads. In this case, an icon will be shown in the
omnibox as long as the app has write access to local files.

Feedback

We want to hear about your experiences with the Native File System API.

Tell us about the API design

Is there something about the API that doesn’t work like you expected? Or
are there missing methods or properties that you need to implement your
idea? Have a question or comment on the security model?

Problem with the implementation?

Did you find a bug with Chrome's implementation? Or is the implementation
different from the spec?

File a bug at https://new.crbug.com. Be sure to include as
much detail as you can, simple instructions for reproducing, and set
Components to Blink>Storage>FileSystem. Glitch
works great for sharing quick and easy repros.

Planning to use the API?

Planning to use the Native File System API on your site? Your public support
helps us to prioritize features, and shows other browser vendors how
critical it is to support them.

Experimenting with Periodic Background Sync

What's periodic background sync?

Have you ever been in any of the following situations? Riding a fast train or
the subway with flaky or no connectivity, being throttled by your carrier after
watching too many videos on the go, or living in a country where bandwidth is
struggling to keep up with the demand? If you have, then you’ve surely
experienced the frustration of getting certain things done on the web, and
wondered why native apps tend to do better in these scenarios.

Native apps can fetch fresh content, such as timely news articles or up-to-date
weather information, ahead of time. Even if there’s no network in the subway,
you can still read the news. Periodic background sync (PBS) is an
experimental feature
that brings the same capability to the web. You can enjoy instant page loads
with the latest news from your favorite newspaper, have enough music or videos
to entertain yourself during an otherwise boring no-connectivity commute, and
more.

Why add periodic background sync to your web app?

Consider a web app that uses a service worker to offer a rich offline experience:

When a person launches the app, it may only have stale content loaded.

Without periodic background sync, the app can only refresh itself when
launched. As a result, people will see a flash of old content being slowly
replaced by new content, or just a loading spinner.

With PBS, the app can update itself in the background, giving people a
smoother and reliably fresh experience.

Now people can read the latest news, even in the subway!

Let’s now look at two types of updates that would be beneficial if done ahead of time.

Updating an application

This is the data required for your web app to work correctly.

Examples:

Updated search index for a search app.

A critical application update.

Updated icons or user interface.

Updating content

If your web app regularly publishes updates, you can fetch the newest content to
give folks using your site a better experience.

Examples:

Fresh articles from news sites.

New songs from a favorite artist.

Badges and achievements in a fitness app.

Non-goals

Triggering events at a specific time is outside the scope of this API. PBS
can't be used for time-based "alarm clock" scenarios.

There is no guaranteed cadence of the periodic sync tasks. When registering for
PBS, you provide a minInterval value that acts as a lower bound for the sync
interval, but there is no way to guarantee an upper bound. The browser decides
this cadence for each web app.

A web app can register multiple periodic tasks, and the frequency determined by
the browser for the tasks may or may not end up being the same.

Getting this right

We are putting periodic background sync through a trial period
so that you can help us make sure that we got it right. This section explains
some of the design decisions we took to make this feature as helpful as
possible.

The first design decision we made is that a web app can only use PBS once a
person has installed it on their
device, and has launched it as a distinct application. PBS is not available in
the context of a regular tab in Chrome.

Furthermore, since we don’t want unused or seldom used web apps to gratuitously
consume battery or data, we designed PBS such that developers will have to earn
it by providing value to their users. Concretely, we are using a
site engagement score
to determine if and how often periodic background syncs can happen for a given
web app. In other words, a periodicsync event won't be fired at all unless
the engagement score is greater than zero, and its value will affect the
frequency at which the periodicsync event will fire. This ensures that the
only apps syncing in the background are the ones you are actively using.

PBS shares some similarities with existing APIs and practices on popular
platforms. For instance, one-off background sync
as well as push notifications allow a web app's logic to live a little longer
(via its service worker) after a person has closed the page. On most platforms,
it’s common for people to have installed apps that periodically access the
network in the background to provide a better user experience—for critical
updates, prefetching content, syncing data, etc. Similarly, periodic background
sync also extends the lifetime of a web app's logic to run at regular periods,
for what might be a few minutes at a time.

If the browser allowed this to occur frequently and without restrictions, it
could result in some privacy concerns. Here's how Chrome has addressed this risk
for PBS:

The background sync activity only occurs on a network that the device has
previously connected to. We recommend only connecting to networks operated
by trustworthy parties.

As with all internet communications, PBS reveals the IP addresses of the
client and the server it's talking to, and the name of the server. To reduce
this exposure to roughly what it would be if the app only synced when it was
in the foreground, the browser limits the frequency of an app's background
syncs to align with how often the person uses that app. If the person stops
frequently interacting with the app, PBS will stop triggering. This is a net
improvement over the status quo in native apps.

Alternatives

Before PBS, web apps had to jump through hoops to keep content fresh—like
triggering a push notification to wake
up their service worker and update
content as a side effect. But the timing of those notifications is decided by
the developer. PBS leaves it to the browser to work with the operating system to
figure out when an update should happen, allowing it to optimize for things like
power and connectivity state, and prevent resource abuse in the background.

Using PBS instead of push notifications also means that these updates will
happen without the fear of interrupting users, which might be the case with a
regular notification. Developers still have the option of using push
notifications for truly important updates, such as significant breaking news.
Users can uninstall the web app, or disable the "Background Sync" site setting
for specific web apps if needed.

Note: Periodic background sync should not be confused with a different web
platform feature: "one-off" background sync.
While their names are similar, their use cases are different. One-off background
sync allows your web app's service worker to respond to network availability on
a non-repeated basis. It's most commonly used to automatically retry sending a
request that failed because the network was temporarily unavailable.

Origin trial

The current experimental implementation of periodic background sync is available
in Chrome 77 and higher. It's implemented as an "origin trial," and you must
join the origin trial
before it can be enabled for your web app's users.

Note: Origin trials allow you to try new features and give feedback on their
usability, practicality, and effectiveness to the web standards community. For
more information, see the
Origin Trials Guide for Web Developers.

We anticipate that the trial will end around March 2020, at which point the web
platform community can use the feedback collected during the trial to inform a
decision about the future of the feature.

During the origin trial, PBS can be tested on all platforms on which Chrome
supports installing web apps, including macOS, Windows, Linux, Chrome OS, and
Android. On macOS, Windows, and Linux, PBS events will only be fired if an
instance of Chrome is actively running. This restriction is similar to how
push notifications work on those platforms.
If Chrome is quit and then re-launched after multiple background sync intervals
have elapsed, a single periodicsync event will be fired soon after Chrome
starts up, assuming all other conditions are met.

Note: For local testing purposes, developers can also try out PBS functionality
by visiting chrome://flags/#periodic-background-sync in Chrome 77 and above,
and enabling the feature there. This setting only applies to your local copy of
Chrome, and is not a scalable substitute for the origin trial.

As part of the origin trial process, the Chrome team welcomes your input.
Feedback on the experimental specification can be provided via GitHub,
and comments or bug reports on Chrome's implementation can be provided by filing a bug
with the Component field set to
"Blink>BackgroundSync".

Example code

The following snippets cover common scenarios for interacting with periodic
background sync. Some of them are meant to run within the context of your web
app, possibly in response to someone clicking a UI element that opts in to
periodic background sync. Other snippets are meant to be run in your service
worker's code.

You can see these snippets in context by reading the source code for the live demo.

Checking whether periodic sync can be used

The
Permissions API
tells you whether PBS can be enabled. You can query for
'periodic-background-sync' permission from either your web app's window
context, or from within a service worker.
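A minimal sketch of that query (the `permissions` parameter is injectable here so the helper can be exercised outside a browser; in a page you would call it with no argument, and the helper name is an illustrative choice):

```javascript
// Sketch: returns true when periodic background sync may be registered.
// In a page, canUsePeriodicSync() uses the default navigator.permissions;
// a stand-in object can be passed for testing.
async function canUsePeriodicSync(permissions = navigator.permissions) {
  const status = await permissions.query({
    name: 'periodic-background-sync',
  });
  return status.state === 'granted';
}
```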

If the status is 'granted', then your web app meets the requirements to register for PBS.

If the status is anything other than 'granted' (most likely 'denied'), then
your web app can't use PBS. This might be because the current browser doesn't
support it, or because one of the other requirements outlined above hasn't been
met.

Registering a periodic sync

You can register for PBS within your web app's window context, but it must be
after the service worker is registered. Both a tag ('content-sync' in the
below example) and a minimum sync interval (in milliseconds) are required. You
can use whatever string you'd like for the tag, and it will be passed in as a
parameter to the corresponding periodicsync event in your service worker. This
allows you to distinguish between multiple types of sync activity that you might
register.
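A sketch of such a registration (the helper name and the one-day interval are illustrative; `registration` is the service worker registration, e.g. the result of `await navigator.serviceWorker.ready`):

```javascript
// Sketch: registers a periodic sync tagged 'content-sync' with a minimum
// interval of one day.
async function registerContentSync(registration) {
  if (!('periodicSync' in registration)) {
    return false; // periodic background sync is not supported
  }
  try {
    await registration.periodicSync.register('content-sync', {
      minInterval: 24 * 60 * 60 * 1000, // one day, in milliseconds
    });
    return true;
  } catch (e) {
    return false; // e.g. permission not granted
  }
}
```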

If you attempt to register when PBS is not supported, the call will throw an exception.

Responding to a periodic sync event

To respond to PBS syncs, add a periodicsync event listener to your service
worker. The callback parameter contains the tag matching the string you used
during registration. This allows you to customize the callback's behavior—like
updating one set of cached data as opposed to another—based on different tag
values.

self.addEventListener('periodicsync', (event) => {
  if (event.tag === 'content-sync') {
    // See the "Think before you sync" section for
    // checks you could perform before syncing.
    event.waitUntil(syncContent());
  }
  // Other logic for different tags as needed.
});

Checking if a sync with a given tag is registered

You can use the getTags() method to retrieve an array of tag strings,
corresponding to active PBS registrations.

One use case is to check whether or not a PBS registration used to update cached
data is already active, and if it is, avoid updating the cached data again.
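A minimal sketch of that check (the helper name and the 'content-sync' tag are illustrative; `registration` is a service worker registration):

```javascript
// Sketch: check whether a 'content-sync' registration is already active
// before registering again or refreshing the cached data.
async function hasContentSync(registration) {
  if (!('periodicSync' in registration)) return false;
  const tags = await registration.periodicSync.getTags();
  return tags.includes('content-sync');
}
```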

You might also use this method to show a list of active registrations in your
web app's settings page, and allow people to enable or disable specific types of
syncs based on their preferences.

Think before you sync

When your service worker wakes up to handle a periodicsync event, you have the
opportunity to request data, but not the obligation to do so. While handling
the event, you may want to take the current network, data saver status, and
available storage quota into account before refreshing cached data. You also
might structure your code so that there are "lightweight" and "heavyweight"
network payloads, depending on those criteria.

Inside a service worker, you can consult signals such as the current
connection, Data Saver status, and available storage quota to help decide
how much (if anything) to refresh inside your periodicsync handler.
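A hedged sketch of such checks, using the Network Information and StorageManager APIs (the 90% quota threshold and the particular signals consulted are illustrative choices, not requirements; `nav` stands in for the service worker's navigator so the logic can run anywhere):

```javascript
// Sketch: decide whether a "heavyweight" refresh is appropriate.
async function shouldDoHeavySync(nav) {
  // Respect Data Saver when the Network Information API is available.
  if (nav.connection && nav.connection.saveData) {
    return false;
  }
  // Skip heavy refreshes when storage is nearly full (StorageManager API).
  if (nav.storage && nav.storage.estimate) {
    const { usage, quota } = await nav.storage.estimate();
    if (quota && usage / quota > 0.9) {
      return false;
    }
  }
  return true;
}
```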

Debugging

It can be a challenge to get the "big picture" view of periodic background sync
while testing things locally. Information about active registrations,
approximate sync intervals, and logs of past sync events can provide valuable
context while debugging your web app's behavior. Fortunately, all of that
information can be found as an experimental feature in Chrome's DevTools.

Note: PBS debugging is currently disabled by default. Please read "Enabling the DevTools interface"
for the steps needed to enable it during the origin trial.

Recording local activity

The "Periodic Background Sync" panel's interface is organized around key events
in the PBS lifecycle: registering for sync, performing a background sync, and
unregistering. In order to obtain information about these events, you need to
"start recording" from within DevTools first.

While recording, entries will appear in DevTools corresponding to events, with
context and metadata logged for each.

After enabling recording once, it will stay enabled for up to three days,
allowing DevTools to capture local debugging information about background syncs
that might take place, e.g., hours in the future.

Simulating events

While recording background activity can be helpful, there are times when you'd
want to test your periodicsync handler immediately, without waiting for the
event to fire on its normal cadence.

You can do this via the "Service Workers" pane within the Application panel
in Chrome DevTools. The "Periodic Sync" field allows you to provide a tag for
the event to use, and trigger it as many times as you'd like.

Manually triggering a periodicsync event did not make it into Chrome 77, so
the best way to test it out is to use Chrome 78 (currently in Canary)
or later. You'll need to follow the same "Enabling the DevTools interface"
steps to turn it on.

Live demo

You can try out this
live demo app
that uses periodic background sync. Make sure that:

You're using Chrome 77 or later.

You "install" the web app before
trying to enable periodic background sync.

(The demo app's author already took the step of signing up for the origin trial.)

References and acknowledgements

This article is adapted from Mugdha Lakhani & Peter Beverloo's original
write-up,
with contributions from Chris Palmer. Mugdha also wrote the code samples, live
demo, and the code for the Chrome implementation of this feature.

Enabling the DevTools interface

The following steps are required while periodic background sync remains an
origin trial. If and when it progresses out of the origin trial phase, the
DevTools interface will be enabled by default.

Card issuer networks as payment method names

Deprecate Web MIDI use on insecure origins

Web MIDI use is classified into two groups: non-privileged use, and privileged
use with sysex permission. Until Chrome 77, only the latter prompted users for
permission. To reduce security concerns, permission will now always be
requested, regardless of sysex use. This means that using Web MIDI on insecure
origins will no longer be allowed.
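A hedged sketch of handling the rejection this change can introduce (the wrapper name is an illustrative choice; `request` is injectable so the logic can run anywhere, and in a page you would pass `navigator.requestMIDIAccess.bind(navigator)`):

```javascript
// Sketch: wrap the MIDI access request so callers get null instead of a
// rejection when permission is denied or the origin is insecure.
async function tryMIDIAccess(request) {
  try {
    return await request({ sysex: false });
  } catch (err) {
    return null; // denied, insecure origin, or Web MIDI unavailable
  }
}
```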

Deprecations

Deprecate WebVR 1.1 API

This API is now deprecated in Chrome, being replaced
by the WebXR Device API,
which is expected to ship in Chrome 78. The WebVR Origin Trial ended on July 24,
2018.

WebVR was never enabled by default in Chrome, and was never ratified as a web
standard. The WebXR Device API is the
replacement API for WebVR. Removing WebVR from Chrome allows us to focus on the
future of WebXR and remove the maintenance burden of WebVR, as well as reaffirm
that Chrome is committed to WebXR as the future for building immersive web-based
experiences. Removal is expected in Chrome 79.

A Contact Picker for the Web

The Contact Picker API begins an origin trial in Chrome 77
(stable in September) as part of our capabilities project. We’ll keep this
post updated as the implementation progresses. Last updated: August 7, 2019.

What is the Contact Picker API?

Access to the user’s contacts has been a feature of native apps since
(almost) the dawn of time. It’s one of the most common feature requests
I hear from web developers, and is often the key reason they build a native
app.

The Contact Picker API is a new, on-demand picker that allows users to
select entries from their contact list and share limited details of the
selected entries with a website. It allows users to share only what they
want, when they want, and makes it easier for users to reach and connect
with their friends and family.

For example, a web-based email client could use the Contact Picker API to
select the recipient(s) of an email. A voice-over-IP app could look up
which phone number to call. Or a social network could help a user discover
which friends have already joined.

We've put a lot of thought into the design and implementation of the
Contact Picker API to ensure that the browser will only share exactly
what people choose. See the
Security and Privacy section below.

Enabling via chrome://flags

To experiment with the Contact Picker API locally, without an origin
trial token, enable the #enable-experimental-web-platform-features flag
in chrome://flags.

Enabling support during the origin trial phase

Starting in Chrome 77, the Contact Picker API will be available as an origin
trial on Chrome for Android. Origin trials allow you to try new features
and give feedback on their usability, practicality, and effectiveness, both
to us, and to the web standards community. For more information, see the
Origin Trials Guide for Web Developers.

Feature detection

On Android, the Contact Picker additionally requires Android M or later.
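A minimal sketch of feature detection (the `nav` and `win` parameters stand in for the browser's `navigator` and `window` so the check can be exercised anywhere; the function name is an illustrative choice):

```javascript
// Sketch: true when both the contacts entry point and the
// ContactsManager interface are present.
function contactPickerSupported(nav, win) {
  return 'contacts' in nav && 'ContactsManager' in win;
}
```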

Opening the Contact Picker

The entry point to the Contact Picker API is navigator.contacts.select().
When called, it returns a Promise and shows the Contact Picker, allowing the
user to select the contact(s) they want to share with the site. After
selecting what to share and clicking Done, the promise resolves with an
array of contacts selected by the user.

You must provide an array of properties you’d like returned as the first
parameter, and optionally whether multiple contacts can be selected as a
second parameter.
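A sketch of such a call (the helper name is an illustrative choice; `contactsApi` is `navigator.contacts` in a page, injectable here for testing, and the call must come from a user gesture such as a button's click handler):

```javascript
// Sketch: asks for name, email, and tel, allowing multiple selections.
async function pickContacts(contactsApi) {
  const props = ['name', 'email', 'tel'];
  try {
    return await contactsApi.select(props, { multiple: true });
  } catch (e) {
    return []; // the user canceled, or the picker is unavailable
  }
}
```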

The Contact Picker API can only be called from a secure,
top-level browsing context, and like other powerful APIs, it requires a
user gesture.

Handling the results

The Contact Picker API returns an array of contacts, and each contact
includes an array for each of the requested properties. If a contact doesn’t
have data for a requested property, or the user opts out of sharing a
particular property, the API returns an empty array for it.

For example, if a site requests name, email, and tel, and a user
selects a single contact that has data in the name field, provides two
phone numbers, but does not have an email address, the response returned will be:
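A hedged sketch of that resolved value (the name and numbers are invented for illustration; actual values come from the user's contact list):

```javascript
// Illustrative only: one selected contact with a name, no email address,
// and two phone numbers.
const contacts = [{
  name: ["Queen O'Hearts"],
  email: [],
  tel: ['+1-206-555-1000', '+1-206-555-1001'],
}];
```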

Security and permissions

User control

Access to the user’s contacts is only available through the picker, which can
only be invoked with a user gesture from a secure, top-level browsing context.
This ensures that a site can’t show the picker on page load, or show
it at random without any context.

Users can choose not to share some properties. In this screenshot, the
user has unchecked the 'Phone numbers' button; even though the site
asked for phone numbers, they will not be shared with the site.

There’s no option to bulk-select all contacts, which encourages users
to select only the contacts they need to share with that particular
website. Users can also control which properties are shared with the site
by toggling the property buttons at the top of the picker.

Transparency

To clarify which contact details are being shared, the picker will always
show the contact's name and icon, plus any properties that the site has
requested. For example, if a site requests name, email, and tel,
all three properties will be shown in the picker. Alternatively,
if a site only requests tel, the picker will show only the name, and
telephone numbers.

Picker, site requesting name, email, and
tel, one contact selected.
Picker, site requesting only tel, one contact selected.

A long press on a contact will show all of the information that will be
shared if the contact is selected (image right).

No permission persistence

Access to contacts is on-demand, and not persisted. Each time a site wants
access, it must call navigator.contacts.select() with a user gesture,
and the user must individually choose the contact(s) they want to share
with the site.

Feedback

We want to hear about your experiences with the Contact Picker API.

Tell us about the API design

Is there something about the API that doesn’t work like you expected? Or
are there missing methods or properties that you need to implement your idea?

Problem with the implementation?

Did you find a bug with Chrome's implementation? Or is the implementation
different from the spec?

File a bug at https://new.crbug.com. Be sure to include as much
detail as you can, simple instructions for reproducing, and set
Components to Blink>Contacts. Glitch works great
for sharing quick and easy repros.

Planning to use the API?

Planning to use the Contact Picker API? Your public support helps us to
prioritize features, and shows other browser vendors how critical it is to
support them.

I’m Pete LePage. Let’s dive in and see
what’s new for developers in Chrome 76!

PWA Omnibox Install Button

In Chrome 76, we're making it easier for users to install Progressive Web Apps
on the desktop, by adding an install button to the address bar, sometimes
called the omnibox.

If your site meets the
Progressive Web App installability criteria, Chrome
will show an install button in the omnibox indicating to the user that your
PWA can be installed. If the user clicks the install button, it’s essentially
the same as calling prompt() on the beforeinstallprompt event;
it shows the install dialog, making it easy for the user to install your PWA.

More control over the PWA mini-infobar

Example of the Add to Home screen mini-infobar for AirHorner

On mobile, Chrome shows the mini-infobar the first time a user visits your
site if it meets the Progressive Web App installability criteria.
We heard from you that you want to be able to prevent the mini-infobar from
appearing, and provide your own install promotion instead.

Starting in Chrome 76, calling preventDefault() on the beforeinstallprompt
event will stop the mini-infobar from appearing.
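A sketch of that pattern (the function name is an illustrative choice; `target` stands in for `window` so the logic can be exercised anywhere, and in a page you would pass `window`):

```javascript
// Sketch: suppress the mini-infobar and stash the event so a custom
// install button can call prompt() on it later.
function setupInstallPromotion(target) {
  let deferredPrompt = null;
  target.addEventListener('beforeinstallprompt', (event) => {
    event.preventDefault(); // stop the mini-infobar from appearing
    deferredPrompt = event; // save it for your own install UI
  });
  return () => deferredPrompt; // accessor for the saved event
}
```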

Be sure to update your UI to let users know your PWA can be installed.
Check out Patterns for Promoting PWA Installation for
our recommended best practices for promoting the installation of your
Progressive Web Apps.

Faster updates to WebAPKs

When a Progressive Web App is installed on Android, Chrome automatically
requests and installs a WebAPK. After it’s been installed,
Chrome periodically checks whether the web app manifest has changed
(maybe you’ve updated the icons, colors, or app name) to see if
a new WebAPK is required.

Starting in Chrome 76, Chrome will check the manifest more frequently;
checking every day, instead of every three days. If any of the key properties
have changed, Chrome will request and install a new WebAPK, ensuring the
title, icons, and other properties are up to date.

The Chromium Chronicle: Test your Web Platform Features with WPT

Episode 4: July 2019

by Robert in Waterloo

If you work on Blink, you might know of web_tests (formerly LayoutTests).
web-platform-tests (WPT) lives inside web_test/external/wpt. WPT is the
preferred way to test web-exposed features as it is shared with other
browsers via GitHub. It has two main types of tests: reftests and
testharness.js tests.

reftests take and compare screenshots of two pages. By default, screenshots
are taken after the load event is fired; if you add a reftest-wait class
to the <html> element, the screenshot will be taken when the class is removed.
Disabled tests mean diminishing test coverage. Be aware of font-related
flakiness; use the Ahem font when possible.

testharness.js is a JavaScript framework for testing anything
except rendering. When writing testharness.js tests, pay attention to timing,
and remember to clean up global state.

Use testdriver.js if you need automation otherwise unavailable on the web.
You can get a user gesture from test_driver.bless, generate complex,
trusted inputs with test_driver.action_sequence, etc.

WPT also provides some useful server-side features through file names.
Multi-global tests (.any.js and its friends) run the same tests in different
scopes (window, worker, etc.); .https.sub.html asks for the test to be loaded
over HTTPS with server-side substitution support.
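A sketch of what such substitution looks like in a .sub.html test (the tokens shown are standard WPT substitution placeholders; the variable name is an illustrative choice):

```html
<!-- The WPT server replaces {{host}} and {{ports[https][0]}}
     before serving the test. -->
<script>
  const origin = 'https://{{host}}:{{ports[https][0]}}';
</script>
```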

Some features can also be enabled in query strings. For example,
baz.html?pipe=sub|header(X-Key,val)|trickle(d1) enables substitution, adds
X-Key: val to the headers of the response, and delays 1 second before
responding. Search for "pipes" on web-platform-tests.org for more.

WPT can also test behaviors that are not included in specs yet; just
name the test as .tentative. If you need Blink internal APIs (e.g.
testRunner, internals), put your tests in web_tests/wpt_internal.

Changes made to WPT are automatically exported to GitHub. You will see
comments from a bot in your CL. GitHub changes from other vendors are also
continuously imported. To receive automatically filed bugs when new failures
are imported, create an OWNERS file in a subdirectory in WPT: