Little confession here: when I first glanced at Netlify CMS, I thought: cool, maybe I’ll try that someday when I’m exploring CMSs for a new project. Then I looked at it with fresh eyes: I can already use this! It’s a true CMS in that it adds a content management UI on top of any static site generator that works from flat files! Think of how you might build a site from Markdown files with Gatsby, Jekyll, Hugo, Middleman, etc. You create and edit Markdown files, the site’s build process runs, and the site is created.
Netlify CMS gives you (or anyone you set it up for) a way to create/edit those Markdown files without having to use a code editor or know about Pull Requests on GitHub or anything. It’s a little in-browser app that gives you a UI and does the file manipulation and Git stuff behind the scenes.
Here’s an example.
Our conferences website is a perfect site to build with a static site generator. It’s on GitHub, so it’s open to Pull Requests, and each conference is a Markdown file. That’s pretty cool already. The community has already contributed 77 Pull Requests, really fleshing out the content of the site, as well as the design, accessibility, and features!
I used 11ty to build the site, which works great for building those Markdown files into a site using Nunjucks templates. A very satisfying combo, I found, after a slight (mostly configuration-related) learning curve.
Enter Netlify CMS.
But as comfortable as you or I might be with a quick code edit and Pull Request, not everybody is. And even I have to agree that going to a URL quick, editing some copy in input fields, and clicking a save button is the easiest possible way to manage content.
That CMS UI is exactly what Netlify CMS gives you. Wanna see the entire commit for adding Netlify CMS? It’s two files! That still kinda blows my mind. It’s a little React SPA that’s entirely configurable with one file.
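To give a sense of that one file, here’s a hedged sketch of what a Netlify CMS config.yml can look like. This is not the conferences site’s actual config; the folder, collection, and field names are made up for illustration:

```yaml
backend:
  name: git-gateway   # commits flow through Netlify's Git Gateway
  branch: master

media_folder: "images/uploads"   # where uploaded images land in the repo

collections:
  - name: "conferences"          # hypothetical collection
    label: "Conferences"
    folder: "conferences"        # one Markdown file per conference
    create: true                 # allow creating new entries from the UI
    fields:
      - { label: "Title", name: "title", widget: "string" }
      - { label: "Date", name: "date", widget: "datetime" }
      - { label: "Body", name: "body", widget: "markdown" }
```

Each field maps to front matter (or the body) of a generated Markdown file, which is what keeps the editing UI lined up with the files the static site generator already consumes.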
Cutting to the chase here: once it’s installed, I have a totally customized UI for editing the conferences on the site, available right on the production site. Netlify CMS doesn’t do anything forceful or weird, like attempt to edit the HTML on the production site directly. It fits right into the same workflow you’d use if you were editing files in a code editor and committing in Git.
Auth & Git
You use Netlify CMS on your production site, which means you need authentication so that just you (and the people you want) have access to it. Netlify Identity makes that a snap. You just flip it on from your Netlify settings and it works.
I activated GitHub Auth so I could make logging in one-click for me.
The Git magic happens through a technology called Git Gateway. You don’t have to understand it (I don’t really); you just enable it in Netlify as part of Netlify Identity, and it forms the connection between your site and the Git repository. Now when you create or edit content, actual Markdown files are created and edited (and whatever else is involved, like images!) and the change happens right in the Git repository. I made this the footer of the site cause heck yeah.

This is part four of a five-part series discussing the Web Components specifications. In part one, we took a 10,000-foot view of the specifications and what they do. In part two, we set out to build a custom modal dialog and created the HTML template for what would evolve into our very own custom HTML element in part three.

Article Series:
An Introduction to Web Components
Crafting Reusable HTML Templates
Creating a Custom Element from Scratch
Encapsulating Style and Structure with Shadow DOM (This post)
Advanced Tooling for Web Components (Coming soon!)

If you haven’t read those articles, you’d be well advised to do so now before proceeding, as this article continues to build upon the work we’ve done there.
When we last looked at our dialog component, it had a specific shape, structure and behaviors; however, it relied heavily on the outside DOM and required that consumers of our element understand its general shape and structure, not to mention author all of their own styles (which would eventually modify the document’s global styles). And because our dialog relied on the contents of a template element with an id of “one-dialog”, each document could only have one instance of our modal.
The current limitations of our dialog component aren’t necessarily bad. Consumers who have an intimate knowledge of the dialog’s inner workings can easily consume and use the dialog by creating their own <template> element and defining the content and styles they wish to use (even relying on global styles defined elsewhere). However, we might want to provide more specific design and structural constraints on our element to accommodate best practices, so in this article, we will be incorporating the shadow DOM to our element.
What is the shadow DOM?
In our introduction article, we said that the shadow DOM was “capable of isolating CSS and JavaScript, almost like an <iframe>.” Like an <iframe>, selectors and styles inside of a shadow DOM node don’t leak outside of the shadow root and styles from outside the shadow root don’t leak in. There are a few exceptions that inherit from the parent document, like font family and document font sizes (e.g. rem) that can be overridden internally.
Unlike an <iframe>, however, all shadow roots still exist in the same document so that all code can be written inside a given context but not worry about conflicts with other styles or selectors.
Adding the shadow DOM to our dialog
To add a shadow root (the base node/document fragment of the shadow tree), we need to call our element’s attachShadow method:

class OneDialog extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
this.close = this.close.bind(this);
}
}

By calling attachShadow with mode: 'open', we are telling our element to save a reference to the shadow root on the element.shadowRoot property. attachShadow always returns a reference to the shadow root, but here we don’t need to do anything with it.
If we had called the method with mode: 'closed', no reference would have been stored on the element and we would have to create our own means of storage and retrieval using a WeakMap or Object, setting the node itself as the key and the shadow root as the value.

const shadowRoots = new WeakMap();

class ClosedRoot extends HTMLElement {
constructor() {
super();
const shadowRoot = this.attachShadow({ mode: 'closed' });
shadowRoots.set(this, shadowRoot);
}

connectedCallback() {
const shadowRoot = shadowRoots.get(this);
shadowRoot.innerHTML = `<h1>Hello from a closed shadow root!</h1>`;
}
}

We could also save a reference to the shadow root on our element itself, using a Symbol or other key to try to make the shadow root private.
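To make that storage pattern concrete outside of the DOM, here’s a minimal sketch of the same WeakMap technique using a plain class (the Counter class and its state shape are invented for illustration; a closed shadow root would be stored the same way):

```javascript
// Module-level WeakMap: instances are keys, hidden state is the value.
// Entries are garbage-collected along with their instance.
const privates = new WeakMap();

class Counter {
  constructor() {
    privates.set(this, { count: 0 }); // keep state off the instance
  }

  increment() {
    const state = privates.get(this);
    state.count += 1;
    return state.count;
  }

  get count() {
    return privates.get(this).count;
  }
}

const counter = new Counter();
counter.increment();
counter.increment();
console.log(counter.count);        // 2
console.log(Object.keys(counter)); // no own properties: state never leaks onto the instance
```

Because the WeakMap lives in module scope, nothing outside the module can reach the stored value, which is exactly the kind of privacy a closed shadow root gives native elements.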
In general, the closed mode for shadow roots exists for native elements that use the shadow DOM in their implementation (like <audio> or <video>). Further, when unit testing our elements, we might not have access to the shadowRoots object, making us unable to target changes inside our element, depending on how our library is architected.
There might be some legitimate use cases for user-land closed shadow roots, but they are few and far between, so we’ll stick with the open shadow root for our dialog.
After implementing the new open shadow root, you might notice now that our element is completely broken when we try to run it:
See the Pen Dialog example using template with shadow root by Caleb Williams (@calebdwilliams) on CodePen.
This is because all of the content we had before was added to and manipulated in the traditional DOM (what we’ll call the light DOM). Now that our element has a shadow DOM attached, there is no outlet for the light DOM to render. Let’s start fixing these issues by moving our content into the shadow DOM:

class OneDialog extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
this.close = this.close.bind(this);
}
connectedCallback() {
const { shadowRoot } = this;
const template = document.getElementById('one-dialog');
const node = document.importNode(template.content, true);
shadowRoot.appendChild(node);
shadowRoot.querySelector('button').addEventListener('click', this.close);
shadowRoot.querySelector('.overlay').addEventListener('click', this.close);
this.open = this.open;
}

disconnectedCallback() {
this.shadowRoot.querySelector('button').removeEventListener('click', this.close);
this.shadowRoot.querySelector('.overlay').removeEventListener('click', this.close);
}
set open(isOpen) {
const { shadowRoot } = this;
shadowRoot.querySelector('.wrapper').classList.toggle('open', isOpen);
shadowRoot.querySelector('.wrapper').setAttribute('aria-hidden', !isOpen);
if (isOpen) {
this._wasFocused = document.activeElement;
this.setAttribute('open', '');
document.addEventListener('keydown', this._watchEscape);
this.focus();
shadowRoot.querySelector('button').focus();
} else {
this._wasFocused && this._wasFocused.focus && this._wasFocused.focus();
this.removeAttribute('open');
document.removeEventListener('keydown', this._watchEscape);
}
}
close() {
this.open = false;
}
_watchEscape(event) {
if (event.key === 'Escape') {
this.close();
}
}
}

customElements.define('one-dialog', OneDialog);

The major changes to our dialog so far are actually relatively minimal, but they carry a lot of impact. For starters, all of our selectors (including our style definitions) are internally scoped. For example, our dialog template only has one button internally, so our CSS only targets button { ... }, and those styles don’t bleed out to the light DOM.
We are, however, still reliant on the template that is external to our element. Let’s change that by removing the markup from our template and dropping it into our shadow root’s innerHTML.
See the Pen Dialog example using only shadow root by Caleb Williams (@calebdwilliams) on CodePen.
Including content from the light DOM
The shadow DOM specification includes a means for allowing content from outside the shadow root to be rendered inside of our custom element. For those of you who remember AngularJS, this is a similar concept to ng-transclude or using props.children in React. With Web Components, this is done using the <slot> element.
A simple example would look like this:

<div>
<span>world <!-- this would be inserted into the slot element below --></span>
<#shadow-root><!-- pseudo code -->
<p>Hello <slot></slot></p>
</#shadow-root>
</div>

A given shadow root can have any number of slot elements, which can be distinguished with a name attribute. The first slot inside the shadow root without a name will be the default slot, and all content not otherwise assigned will flow inside that node. Our dialog really needs two slots: a heading and some content (which we’ll make the default).
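As a sketch, consuming markup for a dialog with a named heading slot and a default slot could look like this (the slot name “heading” is an assumption about the demo’s template):

```html
<one-dialog>
  <span slot="heading">Hello world</span>
  <!-- anything without a slot attribute flows into the default slot -->
  <p>This paragraph ends up in the unnamed, default slot.</p>
</one-dialog>
```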
See the Pen Dialog example using shadow root and slots by Caleb Williams (@calebdwilliams) on CodePen.
Go ahead and change the HTML portion of our dialog and see the result. Any content inside of the light DOM is inserted into the slot to which it is assigned. Slotted content remains inside the light DOM although it is rendered as if it were inside the shadow DOM. This means that these elements are still fully style-able by a consumer who might want to control the look and feel of their content.
A shadow root’s author can style content inside the light DOM to a limited extent using the CSS ::slotted() pseudo-selector; however, the slotted DOM tree is flattened, so only simple selectors will work. In other words, we wouldn’t be able to style a <strong> element inside a <p> element within the flattened DOM tree in our previous example.
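A quick sketch of that limitation (selectors shown against a hypothetical shadow root with a default slot):

```css
/* Works: ::slotted() accepts a simple/compound selector */
::slotted(p) {
  margin: 0;
}

/* Invalid: descendant combinators aren't allowed inside ::slotted(),
   so a <strong> inside a slotted <p> can't be reached from the shadow root */
::slotted(p strong) {
  color: tomato;
}
```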
The best of both worlds
Our dialog is in a good state now: it has encapsulated, semantic markup, styles and behavior; however, some consumers of our dialog might still want to define their own template. Fortunately, by combining two techniques we’ve already learned, we can allow authors to optionally define an external template.
To do this, we will allow each instance of our component to reference an optional template ID. To start, we need to define a getter and setter for our component’s template.

get template() {
return this.getAttribute('template');
}

set template(template) {
if (template) {
this.setAttribute('template', template);
} else {
this.removeAttribute('template');
}
this.render();
}

Here we’re doing much the same thing that we did with our open property by tying it directly to its corresponding attribute. But at the bottom, we’re introducing a new method to our component: render. We are going to use our render method to insert our shadow DOM’s content and remove that behavior from the connectedCallback; instead, we will call render when our element is connected:

connectedCallback() {
this.render();
}

render() {
const { shadowRoot, template } = this;
const templateNode = document.getElementById(template);
shadowRoot.innerHTML = '';
if (templateNode) {
const content = document.importNode(templateNode.content, true);
shadowRoot.appendChild(content);
} else {
shadowRoot.innerHTML = `<!-- template text -->`;
}
shadowRoot.querySelector('button').addEventListener('click', this.close);
shadowRoot.querySelector('.overlay').addEventListener('click', this.close);
this.open = this.open;
}

Our dialog now has some really basic default stylings, but also gives consumers the ability to define a new template for each instance. If we wanted, we could even use attributeChangedCallback to make this component update based on the template it’s currently pointing to:

static get observedAttributes() { return ['open', 'template']; }

attributeChangedCallback(attrName, oldValue, newValue) {
if (newValue !== oldValue) {
switch (attrName) {
/** Boolean attributes */
case 'open':
this[attrName] = this.hasAttribute(attrName);
break;
/** Value attributes */
case 'template':
this[attrName] = newValue;
break;
}
}
}

See the Pen Dialog example using shadow root, slots and template by Caleb Williams (@calebdwilliams) on CodePen.
In the demo above, changing the template attribute on our <one-dialog> element will alter which design is being used when the element is rendered.
Strategies for styling the shadow DOM
Currently, the only reliable way to style a shadow DOM node is by adding a <style> element to the shadow root’s inner HTML. This works fine in almost every case as browsers will de-duplicate stylesheets across these components, where possible. This does tend to add a bit of memory overhead, but generally not enough to notice.
Inside of these style tags, we can use CSS custom properties to provide an API for styling our components. Custom properties can pierce the shadow boundary and affect content inside a shadow node.
“Can we use a <link> element inside of a shadow root?” you might ask. And, in fact, we can. The trouble comes when trying to reuse this component across multiple applications as the CSS file might not be saved in a consistent location throughout all apps. However, if we are certain as to the element’s stylesheet location, then using <link> is an option. The same holds true for including an @import rule in a style tag.
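For instance, something like this inside the shadow root’s markup would work, assuming the stylesheet actually resolves at that path in every app (the href here is hypothetical):

```html
<!-- inside the shadow root's innerHTML -->
<link rel="stylesheet" href="/styles/dialog.css">
<div class="wrapper"><!-- dialog content --></div>
```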
CSS custom properties
One of the benefits of using CSS custom properties — also called CSS variables — is that they bleed through the shadow DOM. This is by design, giving component authors a surface for allowing theming and styling of their components from the outside. It is important to note, however, that since CSS cascades, changes to custom properties made inside a shadow root do not bleed back up.
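As a sketch of that theming surface (the custom property name below is invented for illustration, not one our dialog actually defines):

```css
/* Light DOM: a consumer themes the component from outside */
one-dialog {
  --dialog-background: papayawhip;
}

/* Shadow DOM: inside the component's <style> tag, read the property,
   with a fallback for when the consumer doesn't set it */
.wrapper {
  background: var(--dialog-background, white);
}
```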
See the Pen CSS custom properties and shadow DOM by Caleb Williams (@calebdwilliams) on CodePen.
Go ahead and comment out or remove the variables set in the CSS panel of the demo above and see how this impacts the rendered content. Afterward, take a look at the styles in the shadow DOM’s innerHTML, and you’ll see how the shadow DOM can define its own property that won’t affect the light DOM.
Constructible stylesheets
At the time of this writing, there is a proposed web feature that will allow for more modular styling of shadow DOM and light DOM elements using constructible stylesheets that has already landed in Chrome 73 and received positive signaling from Mozilla.
This feature would allow authors to define stylesheets in their JavaScript files, similar to how they would write normal CSS, and share those styles across multiple nodes. So, a single stylesheet could be appended to multiple shadow roots and potentially the document as well.

const everythingTomato = new CSSStyleSheet();
everythingTomato.replace('* { color: tomato; }');

document.adoptedStyleSheets = [everythingTomato];

class SomeComponent extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
this.shadowRoot.adoptedStyleSheets = [everythingTomato];
}
connectedCallback() {
this.shadowRoot.innerHTML = `<h1>CSS colors are fun</h1>`;
}
}

In the above example, the everythingTomato stylesheet would be simultaneously applied to the shadow root and to the document’s body. This feature would be very useful for teams creating design systems and components that are intended to be shared across multiple applications and frameworks.
In the next demo, we can see a really basic example of how this can be utilized and the power that constructible stylesheets offer.
See the Pen Construct style sheets demo by Caleb Williams (@calebdwilliams) on CodePen.
In this demo, we construct two stylesheets and append them to the document and to the custom element. After three seconds, we remove one stylesheet from our shadow root. For those three seconds, however, the document and the shadow DOM share the same stylesheet. Using the polyfill included in that demo, there are actually two style elements present, but Chrome Canary runs this natively.
That demo also includes a form showing how a sheet’s rules can be easily and effectively changed asynchronously as needed. This addition to the web platform can be a powerful ally for those creating design systems that span multiple frameworks, or site authors who want to provide themes for their websites.
There is also a proposal for CSS Modules that could eventually be used with the adoptedStyleSheets feature. If implemented in its current form, this proposal would allow importing CSS as a module, much like ECMAScript modules:

import styles from './styles.css';

class SomeComponent extends HTMLElement {
constructor() {
super();
this.adoptedStyleSheets = [styles];
}
}

Part and theme
Another feature in the works for styling Web Components is the ::part() and ::theme() pseudo-selectors. The ::part() specification will allow authors to define parts of their custom elements that have a surface for styling:

class SomeOtherComponent extends HTMLElement {
connectedCallback() {
this.attachShadow({ mode: 'open' });
this.shadowRoot.innerHTML = `
<style>h1 { color: rebeccapurple; }</style>
<h1>Web components are <span part="description">AWESOME</span></h1>
`;
}
}
customElements.define('other-component', SomeOtherComponent);

In our global CSS, we could target any element that has a part called description by invoking the CSS ::part() selector:

other-component::part(description) {
color: tomato;
}

In the above example, the primary message of the <h1> tag would be in a different color than the description part, giving custom element authors the ability to expose styling APIs for their components while keeping control over the pieces they want to keep control over.
The difference between ::part() and ::theme() is that ::part() must be specifically selected, whereas ::theme() can be nested at any level. The following would have the same effect as the above CSS, but would also work for any other element that includes part="description" anywhere in the document tree:

:root::theme(description) {
color: tomato;
}

Like constructible stylesheets, ::part() has landed in Chrome 73.
Wrapping up
Our dialog component is now complete, more or less. It includes its own markup, styles (without any outside dependencies) and behaviors. This component can now be included in projects that use any current or future framework because it is built against the browser specifications instead of third-party APIs.
Some of the core controls are a little verbose and do rely on at least a moderate knowledge of how the DOM works. In our final article, we will discuss higher-level tooling and how to integrate with popular frameworks.

Article Series:
An Introduction to Web Components
Crafting Reusable HTML Templates
Creating a Custom Element from Scratch
Encapsulating Style and Structure with Shadow DOM (This post)
Advanced Tooling for Web Components (Coming soon!)

Say we want to target an element and just visually blur its border. There is no simple, single built-in web platform feature we can reach for, but we can get it done with a little CSS trickery.

Here’s what we’re after:
The desired result.
Let’s see how we can code this effect, how we can enhance it with rounded corners, extend support so it works cross-browser, what the future will bring in this department and what other interesting results we can get starting from the same idea!
Coding the basic blurred border
We start with an element on which we set some dummy dimensions, a partially transparent (just slightly visible) border and a background whose size is relative to the border-box, but whose visibility we restrict to the padding-box:

$b: 1.5em; // border-width

div {
border: solid $b rgba(#000, .2);
height: 50vmin;
max-width: 13em;
max-height: 7em;
background: url(oranges.jpg) 50%/ cover
border-box /* background-origin */
padding-box /* background-clip */;
}

The box specified by background-origin is the box whose top left corner is the 0 0 point for background-position and also the box that background-size (set to cover in our case) is relative to. The box specified by background-clip is the box within whose limits the background is visible.
The initial values are padding-box for background-origin and border-box for background-clip, so we need to specify them both in this case.
If you need a more in-depth refresher on background-origin and background-clip, you can check out this detailed article on the topic.
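For reference, the background shorthand above expands to roughly this longhand (same values, spelled out):

```css
div {
  background-image: url(oranges.jpg);
  background-position: 50%;        /* measured from background-origin's box */
  background-size: cover;          /* also relative to background-origin's box */
  background-origin: border-box;
  background-clip: padding-box;    /* background only visible up to the padding-box */
}
```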
The code above gives us the following result:
See the Pen by thebabydino (@thebabydino) on CodePen.
Next, we add an absolutely positioned pseudo-element that covers its entire parent’s border-box and is positioned behind it (z-index: -1). We also make this pseudo-element inherit its parent’s border and background, then we change the border-color to transparent and the background-clip to border-box:

$b: 1.5em; // border-width

div {
position: relative;
/* same styles as before */
&:before {
position: absolute;
z-index: -1;
/* go outside padding-box by
* a border-width ($b) in every direction */
top: -$b; right: -$b; bottom: -$b; left: -$b;
border: inherit;
border-color: transparent;
background: inherit;
background-clip: border-box;
content: ''
}
}

Now we can also see the background behind the barely visible border:
See the Pen by thebabydino (@thebabydino) on CodePen.
Alright, you may already be seeing where this is going! The next step is to blur() the pseudo-element. Since this pseudo-element is only visible underneath the partially transparent border (the rest is covered by its parent’s padding-box-restricted background), the border area ends up being the only area of the image we see blurred.
See the Pen by thebabydino (@thebabydino) on CodePen.
We’ve also brought the alpha of the element’s border-color down to .03 because we want the blurriness to be doing most of the job of highlighting where the border is.
This may look done, but there’s something I still don’t like: the edges of the pseudo-element are now blurred as well. So let’s fix that!
One convenient thing when it comes to the order browsers apply properties in is that filters are applied before clipping. While this is not what we want and makes us resort to inconvenient workarounds in a lot of other cases… right here, it proves to be really useful!
It means that, after blurring the pseudo-element, we can clip it to its border-box!
My preferred way of doing this is by setting clip-path to inset(0) because… it’s the simplest way of doing it, really! polygon(0 0, 100% 0, 100% 100%, 0 100%) would be overkill.
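Concretely, the pseudo-element ends up carrying both declarations, and because filters are applied before clipping, the clip trims off the blurred edges:

```css
div::before {
  /* blur first... */
  filter: blur(9px);
  /* ...then clip the filtered result to the border-box */
  clip-path: inset(0);
}
```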
See the Pen by thebabydino (@thebabydino) on CodePen.
In case you’re wondering why not set the clip-path on the actual element instead of setting it on the :before pseudo-element, this is because setting clip-path on the element would make it a stacking context. This would force all its child elements (and consequently, its blurred :before pseudo-element as well) to be contained within it and, therefore, in front of its background. And then no nuclear z-index or !important could change that.
We can prettify this by adding some text with a nicer font, a box-shadow and some layout properties.
What if we have rounded corners?
The best thing about using inset() instead of polygon() for the clip-path is that inset() can also accommodate for any border-radius we may want!
And when I say any border-radius, I mean it! Check this out!

div {
--r: 15% 75px 35vh 13vw/ 3em 5rem 29vmin 12.5vmax;
border-radius: var(--r);
/* same styles as before */
&:before {
/* same styles as before */
border-radius: inherit;
clip-path: inset(0 round var(--r));
}
}

It works like a charm!
See the Pen by thebabydino (@thebabydino) on CodePen.
Extending support
Some mobile browsers still need the -webkit- prefix for both filter and clip-path, so be sure to include those versions too. Note that they are included in the CodePen demos embedded here, even though I chose to skip them in the code presented in the body of this article.
Alright, but what if we need to support Edge? clip-path doesn’t work in Edge, but filter does, which means we do get the blurred border, but no sharp cut limits.
Well, if we don’t need corner rounding, we can use the deprecated clip property as a fallback. This means adding the following line right before the clip-path ones:

clip: rect(0 100% 100% 0);

And our demo now works in Edge… sort of! The right, bottom and left edges are cut sharply, but the top one remains blurred (only in the Debug mode of the Pen; all seems fine for the iframe in the Editor View). And opening DevTools, right-clicking in the Edge window, or clicking anywhere outside the window makes the effect of this property vanish. Bug of the month right there!
Alright, since this is so unreliable and it doesn’t even help us if we want rounded corners, let’s try another approach!
This is a bit like scratching behind the left ear with the right foot (or the other way around, depending on which side is your more flexible one), but it’s the only way I can think of to make it work in Edge.
Some of you may have already been screaming at the screen something like “but Ana… overflow: hidden!” and yes, that’s what we’re going for now. I initially avoided it because of the way it works: it cuts out all descendant content outside the padding-box. Not outside the border-box, as we’ve done by clipping!
This means we need to ditch the real border and emulate it with padding, which I’m not exactly delighted about because it can lead to more complications, but let’s take it one step at a time!
As far as code changes are concerned, the first thing we do is remove all border-related properties and set the border-width value as the padding. We then set overflow: hidden and restrict the background of the actual element to the content-box. Finally, we reset the pseudo-element’s background-clip to the padding-box value and zero its offsets.

$fake-b: 1.5em; // fake border-width

div {
/* same styles as before */
overflow: hidden;
padding: $fake-b;
background: url(oranges.jpg) 50%/ cover
padding-box /* background-origin */
content-box /* background-clip */;
&:before {
/* same styles as before */
top: 0; right: 0; bottom: 0; left: 0;
background: inherit;
background-clip: padding-box;
}
}

See the Pen by thebabydino (@thebabydino) on CodePen.
If we want that barely visible “border” overlay, we need another background layer on the actual element:

$fake-b: 1.5em; // fake border-width
$c: rgba(#000, .03);

div {
/* same styles as before */
overflow: hidden;
padding: $fake-b;
--img: url(oranges.jpg) 50%/ cover;
background: var(--img)
padding-box /* background-origin */
content-box /* background-clip */,
linear-gradient($c, $c);
&:before {
/* same styles as before */
top: 0; right: 0; bottom: 0; left: 0;
background: var(--img);
}
}

See the Pen by thebabydino (@thebabydino) on CodePen.
We can also add rounded corners with no hassle:
See the Pen by thebabydino (@thebabydino) on CodePen.
So why didn’t we do this from the very beginning?!
Remember when I said a bit earlier that not using an actual border can complicate things later on?
Well, let’s say we want to have some text. With the first method, using an actual border and clip-path, all it takes to prevent the text content from touching the blurred border is adding a padding (of let’s say 1em) on our element.
See the Pen by thebabydino (@thebabydino) on CodePen.
But with the overflow: hidden method, we’ve already used the padding property to create the blurred “border”. Increasing its value doesn’t help because it only increases the fake border’s width.
We could add the text into a child element. Or we could also use the :after pseudo-element!
The way this works is pretty similar to the first method, with the :after replacing the actual element. The difference is that we clip the blurred edges with overflow: hidden instead of clip-path: inset(0), and the padding on the actual element is the pseudos’ border-width ($b) plus whatever padding value we want:

$b: 1.5em; // border-width

div {
overflow: hidden;
position: relative;
padding: calc(1em + #{$b});
/* prettifying styles */
&:before, &:after {
position: absolute;
z-index: -1; /* put them *behind* parent */
/* zero all offsets */
top: 0; right: 0; bottom: 0; left: 0;
border: solid $b rgba(#000, .03);
background: url(oranges.jpg) 50%/ cover
border-box /* background-origin */
padding-box /* background-clip */;
content: ''
}
&:before {
border-color: transparent;
background-clip: border-box;
filter: blur(9px);
}
}

See the Pen by thebabydino (@thebabydino) on CodePen.
What about having both text and some pretty extreme rounded corners? Well, that’s something we’ll discuss in another article – stay tuned!
What about backdrop-filter?
Some of you may be wondering (as I was when I started toying with various ideas in order to try to achieve this effect) whether backdrop-filter isn’t an option.
Well, yes and no!
Technically, it is possible to get the same effect, but since Firefox doesn’t yet implement it, we’re cutting out Firefox support if we choose to take this route. Not to mention this approach also forces us to use both pseudo-elements if we want the best support possible for the case when our element has some text content (which means we need the pseudos and their padding-box area background to show underneath this text).
Update: due to a regression, the backdrop-filter technique doesn’t work in Chrome anymore, so support is now limited to Safari and Edge at best.
For those who don’t yet know what backdrop-filter does: it filters out what can be seen through the (partially) transparent parts of the element we apply it on.
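A minimal standalone illustration (class name invented): whatever is rendered behind this overlay gets blurred, while the overlay’s own content stays sharp:

```css
.frosted-overlay {
  /* partial transparency, so there's something to see through */
  background: rgba(255, 255, 255, .2);
  backdrop-filter: blur(9px);   /* blurs what shows through from behind */
}
```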
The way we need to go about this is the following: both pseudo-elements have a transparent border and a background positioned and sized relative to the padding-box. We restrict the background of the pseudo-element on top (the :after) to the padding-box.
Now the :after doesn’t have a background in the border area anymore and we can see through to the :before pseudo-element behind it. We set a backdrop-filter on the :after and maybe even change that border-color from transparent to slightly visible. The bottom (:before) pseudo-element’s background, still visible through the partially transparent, barely distinguishable border of the :after above, gets blurred as a result of applying the backdrop-filter.

$b: 1.5em; // border-width

div {
overflow: hidden;
position: relative;
padding: calc(1em + #{$b});
/* prettifying styles */
&:before, &:after {
position: absolute;
z-index: -1; /* put them *behind* parent */
/* zero all offsets */
top: 0; right: 0; bottom: 0; left: 0;
border: solid $b transparent;
background: $url 50%/ cover
/* background-origin & -clip */
border-box;
content: ''
}
&:after {
border-color: rgba(#000, .03);
background-clip: padding-box;
backdrop-filter: blur(9px); /* no Firefox support */
}
}

Remember that the live demo for this doesn't currently work in Firefox and needs the Experimental Web Platform features flag enabled in chrome://flags in order to work in Chrome.
Eliminating one pseudo-element
This is something I wouldn’t recommend doing in the wild because it cuts out Edge support as well, but we do have a way of achieving the result we want with just one pseudo-element.
We start by setting the image background on the element (we don't really need to explicitly set a border as long as we include its width in the padding) and then a partially transparent, barely visible background on the absolutely positioned pseudo-element that's covering its entire parent. We also set the backdrop-filter on this pseudo-element.

$b: 1.5em; // border-width

div {
position: relative;
padding: calc(1em + #{$b});
background: url(oranges.jpg) 50%/ cover;
/* prettifying styles */
&:before {
position: absolute;
/* zero all offsets */
top: 0; right: 0; bottom: 0; left: 0;
background: rgba(#000, .03);
backdrop-filter: blur(9px); /* no Firefox support */
content: ''
}
}

Alright, but this blurs out the entire element behind the almost transparent pseudo-element, including its text. And it's no bug; this is what backdrop-filter is supposed to do.
The problem at hand.
In order to fix this, we need to get rid of (not make transparent, that’s completely useless in this case) the inner rectangle (whose edges are a distance $b away from the border-box edges) of the pseudo-element.
We have two ways of doing this.
The first way (live demo) is with clip-path and the zero-width tunnel technique:

$b: 1.5em; // border-width
$o: calc(100% - #{$b});

div {
/* same styles as before */
&:before {
/* same styles as before */
/* doesn't work in Edge */
clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%,
0 0,
#{$b $b}, #{$b $o}, #{$o $o}, #{$o $b},
#{$b $b});
}
}

The second way (live demo) is with two composited mask layers (note that, in this case, we need to explicitly set a border on our pseudo):

$b: 1.5em; // border-width

div {
/* same styles as before */
&:before {
/* same styles as before */
border: solid $b transparent;
/* doesn't work in Edge */
--fill: linear-gradient(red, red);
-webkit-mask: var(--fill) padding-box,
var(--fill);
-webkit-mask-composite: xor;
mask: var(--fill) padding-box exclude,
var(--fill);
}
}

Since neither of these two properties works in Edge, this means support is now limited to WebKit browsers (and we still need to enable the Experimental Web Platform features flag for backdrop-filter to work in Chrome).
Future (and better!) solution
The filter() function allows us to apply filters on individual background layers. This eliminates the need for a pseudo-element and reduces the code needed to achieve this effect to two CSS declarations!

border: solid 1.5em rgba(#000, .03);
background: $url
border-box /* background-origin */
padding-box /* background-clip */,
filter($url, blur(9px))
/* background-origin & background-clip */
border-box;

As you may have guessed, the issue here is support. Safari is the only browser to implement it at this point, but if you think filter() is something that could help you, you can add your use cases and track implementation progress for both Chrome and Firefox.
More border filter options
I’ve only talked about blurring the border up to now, but this technique works for pretty much any CSS filter (save for drop-shadow() which wouldn’t make much sense in this context). You can play with switching between them and tweaking values in the interactive demo below:
See the Pen by thebabydino (@thebabydino) on CodePen.
And all we’ve done so far has used just one filter function, but we can also chain them and then the possibilities are endless – what cool effects can you come up with this way?
See the Pen by thebabydino (@thebabydino) on CodePen.

Earlier this month Eric Bailey wrote about the current state of accessibility on the web and why it felt like fighting an uphill battle:

As someone with a good deal of interest in the digital accessibility space, I follow WebAIM's work closely. Their survey results are priceless insights into how disabled people actually use the web, so when the organization speaks with authority on a subject, I listen.
WebAIM's accessibility analysis of the top 1,000,000 homepages was released to the public on February 27, 2019. I've had a few days to process it, and frankly, it's left me feeling pretty depressed. In a sea of already demoralizing findings, probably the most notable one is that pages containing ARIA—a specialized language intended to aid accessibility—are actually more likely to have accessibility issues.

Following up from that post, Ethan Marcotte jotted down his thoughts on the matter and about who has the responsibility to fix these issues in the long run:

Organizations like WebAIM have, alongside countless other non-profits and accessibility advocates, been showing us how we could make the web live up to its promise as a truly universal medium, one that could be accessed by anyone, anywhere, regardless of ability or need. And we failed.

I say we quite deliberately. This is on us: on you, and on me. And, look, I realize it may sting to read that. Hell, my work is constantly done under deadline, the way I work seems to change every month, and it can feel hard to find the time to learn more about accessibility. And maybe you feel the same way. But the fact remains that we've created a web that's actively excluding people, and at a vast, terrible scale. We need to meditate on that.

I suppose the lesson I'm taking from this is, well, we need to do much, much more than meditating. I agree with Marcy Sutton: accessibility is a civil right, full stop. Improving the state of accessibility on the web is work we have to support. The alternative isn't an option. Leaving the web in its current state isn't fair. It isn't just.

I entirely agree with Ethan here – we all have a responsibility to make the web a better place for everyone, especially when it comes to accessibility, where the bar is so very low for us now. This isn't to say that I know best, because there have been plenty of times when I've dropped the ball when designing something for the web.
What can we do to tackle the widespread issue surrounding web accessibility?
Well, as Eric mentions in his post, it's first and foremost a problem of education, and he points to Firefox and its great accessibility inspector as a tool to help us see and understand accessibility principles in action.

Marco Zehe, who is on the Firefox accessibility team, wrote about what the inspector is and how to use it:

This inspector is not meant as an evaluation tool. It is an inspection tool. So it will not give you hints about low contrast ratios, or other things that would tell you whether your site is WCAG compliant. It helps you inspect your code, helps you understand how your web site is translated into objects for assistive technologies.

Chris also wrote up some of his thoughts a short while ago, including other accessibility testing tools and checklists that can help us get started making more accessible experiences. The important thing to note here is that these tools need to be embedded within our process for web design if they're going to solve these issues.
We can’t simply blame our tools.
I know the current state of web accessibility is pretty bad and that there's an enormous amount of work for us all to do, but to be honest, I can't help but feel a little optimistic. For the first time in my career, I've had designers and engineers alike approach me excitedly about accessibility. Each year, there are tons of workshops, articles, meetups, and talks on the matter (and I particularly like this talk by Laura Carvajal), meaning there's a growing source of referential content that can teach us to be better.
And I can't help but think that all of these conversations are a good sign – but now it's up to us to do the work.

In the last article, we got our hands dirty with Web Components by creating an HTML template that is in the document but not rendered until we need it.
Next up, we’re going to continue our quest to create a custom element version of the dialog component below which currently only uses HTMLTemplateElement:
See the Pen Dialog with template with script by Caleb Williams (@calebdwilliams) on CodePen.
So let's push ahead by creating a custom element that consumes our template#dialog-template element in real-time.

Article Series:
An Introduction to Web Components
Crafting Reusable HTML Templates
Creating a Custom Element from Scratch (This post)
Encapsulating Style and Structure with Shadow DOM (Coming soon!)
Advanced Tooling for Web Components (Coming soon!)

Creating a custom element
The bread and butter of Web Components are custom elements. The customElements API gives us a path to define custom HTML tags that can be used in any document that contains the defining class.
Think of it like a React or Angular component (e.g. <MyCard />), but without the React or Angular dependency. Native custom elements look like this: <my-card></my-card>. More importantly, think of it as a standard element that can be used in your React, Angular, Vue, [insert-framework-you’re-interested-in-this-week] applications without much fuss.
Essentially, a custom element consists of two pieces: a tag name and a class that extends the built-in HTMLElement class. The most basic version of our custom element would look like this:

class OneDialog extends HTMLElement {
connectedCallback() {
this.innerHTML = `<h1>Hello, World!</h1>`;
}
}

customElements.define('one-dialog', OneDialog);

Note: throughout a custom element, the this value is a reference to the custom element instance.
In the example above, we defined a new standards-compliant HTML element, <one-dialog></one-dialog>. It doesn’t do much… yet. For now, using the <one-dialog> tag in any HTML document will create a new element with an <h1> tag reading “Hello, World!”
We are definitely going to want something more robust, and we're in luck. In the last article, we looked at creating a template for our dialog and, since we will have access to that template, let's utilize it in our custom element. We added a script tag in that example to do some dialog magic. Let's remove that for now, since we'll be moving our logic from the HTML template to inside the custom element class.

class OneDialog extends HTMLElement {
connectedCallback() {
const template = document.getElementById('one-dialog');
const node = document.importNode(template.content, true);
this.appendChild(node);
}
}

Now, our custom element (<one-dialog>) is defined and the browser is instructed to render the content contained in the HTML template where the custom element is called.
Our next step is to move our logic into our component class.
Custom element lifecycle methods
Like React or Angular, custom elements have lifecycle methods. You’ve already been passively introduced to connectedCallback, which is called when our element gets added to the DOM.
The connectedCallback is separate from the element’s constructor. Whereas the constructor is used to set up the bare bones of the element, the connectedCallback is typically used for adding content to the element, setting up event listeners or otherwise initializing the component.
In fact, the constructor can’t be used to modify or manipulate the element’s attributes by design. If we were to create a new instance of our dialog using document.createElement, the constructor would be called. A consumer of the element would expect a simple node with no attributes or content inserted.
The createElement function has no options for configuring the element that will be returned. It stands to reason, then, that the constructor shouldn’t have the ability to modify the element that it creates. That leaves us with the connectedCallback as the place to modify our element.
With standard built-in elements, the element’s state is typically reflected by what attributes are present on the element and the values of those attributes. For our example, we’re going to look at exactly one attribute: [open]. In order to do this, we’ll need to watch for changes to that attribute and we’ll need attributeChangedCallback to do that. This second lifecycle method is called whenever one of the element constructor’s observedAttributes are updated.
That might sound intimidating, but the syntax is pretty simple:

class OneDialog extends HTMLElement {
static get observedAttributes() {
return ['open'];
}
attributeChangedCallback(attrName, oldValue, newValue) {
if (newValue !== oldValue) {
this[attrName] = this.hasAttribute(attrName);
}
}
connectedCallback() {
const template = document.getElementById('one-dialog');
const node = document.importNode(template.content, true);
this.appendChild(node);
}
}

In our case above, we only care whether the attribute is set or not; we don't care about its value (this is similar to the HTML5 required attribute on inputs). When this attribute is updated, we update the element's open property. A property exists on a JavaScript object, whereas an attribute exists on an HTMLElement; this lifecycle method helps us keep the two in sync.

We wrap the update inside the attributeChangedCallback in a conditional that checks whether the new value and old value are equal. We do this to prevent an infinite loop inside our program, because later we are going to create a property getter and setter that will keep the property and attribute in sync by setting the element's attribute when the element's property gets updated. The attributeChangedCallback does the inverse: it updates the property when the attribute changes.
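That guard is doing real work. Here's a minimal simulation of the cycle using a plain object as a stand-in for an element (the object shape, the Map-based attribute storage, and the counter are invented for illustration, not real DOM code), showing how the oldValue/newValue check stops the property setter and the attribute callback from bouncing off each other forever:

```javascript
// Simulation of the attribute/property sync cycle with the guard in place.
let callbackRuns = 0;

const el = {
  attrs: new Map(),
  setAttribute(name, value) {
    const old = this.attrs.get(name);
    this.attrs.set(name, value);
    // Browsers invoke attributeChangedCallback for observed attributes:
    this.attributeChangedCallback(name, old, value);
  },
  attributeChangedCallback(name, oldValue, newValue) {
    callbackRuns++;
    if (newValue !== oldValue) { // the guard
      this.open = true;          // property setter re-sets the attribute
    }
  },
  // The setter always writes the attribute; the value itself is ignored
  // here because presence is all we track in this sketch.
  set open(isOpen) { this.setAttribute('open', ''); },
  get open() { return this.attrs.has('open'); }
};

el.open = true;
console.log(callbackRuns); // 2 — without the guard, this would recurse until a stack overflow
```

The second callback invocation sees oldValue === newValue ('' === '') and bails out, which is exactly the loop-breaking behavior described above.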
Now, an author can consume our component and the presence of the open attribute will dictate whether or not the dialog will be open by default. To make that a bit more dynamic, we can add custom getters and setters to our element's open property:

class OneDialog extends HTMLElement {
static get observedAttributes() {
return ['open'];
}
attributeChangedCallback(attrName, oldValue, newValue) {
if (newValue !== oldValue) {
this[attrName] = this.hasAttribute(attrName);
}
}
connectedCallback() {
const template = document.getElementById('one-dialog');
const node = document.importNode(template.content, true);
this.appendChild(node);
}
get open() {
return this.hasAttribute('open');
}
set open(isOpen) {
if (isOpen) {
this.setAttribute('open', true);
} else {
this.removeAttribute('open');
}
}
}

Our getter and setter will keep the open attribute (on the HTML element) and property (on the DOM object) values in sync. Adding the open attribute will set element.open to true, and setting element.open to true will add the open attribute. We do this to make sure that our element's state is reflected by its properties. This isn't technically required, but is considered a best practice for authoring custom elements.

This does inevitably lead to a bit of boilerplate, but creating an abstract class that keeps the two in sync is a fairly trivial task: loop over the observed attribute list and use Object.defineProperty.
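As a rough sketch of that idea (not the article's code — the helper name, the plain-object stand-in, and the Set-based attribute storage are all invented so the loop can run outside a browser), defining one boolean property per observed attribute might look like this:

```javascript
// One loop gives every "observed attribute" a matching boolean property.
const observedAttributes = ['open', 'disabled'];

function reflectAttributes(el) {
  observedAttributes.forEach(name => {
    Object.defineProperty(el, name, {
      get() { return el.attributes.has(name); },
      set(value) {
        if (value) {
          el.attributes.add(name);    // like setAttribute(name, '')
        } else {
          el.attributes.delete(name); // like removeAttribute(name)
        }
      }
    });
  });
  return el;
}

// A minimal stand-in for an element's attribute list:
const fakeElement = reflectAttributes({ attributes: new Set() });
fakeElement.open = true;
console.log(fakeElement.attributes.has('open')); // true
fakeElement.attributes.delete('open');
console.log(fakeElement.open); // false
```

In a real base class, the same loop would run in the constructor over this.constructor.observedAttributes, with setAttribute/removeAttribute in place of the Set operations.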
Now that we know whether or not our dialog is open, let's add some logic to actually do the showing and hiding:

class OneDialog extends HTMLElement {
/** Omitted */
constructor() {
super();
this.close = this.close.bind(this);
this._watchEscape = this._watchEscape.bind(this);
}
set open(isOpen) {
this.querySelector('.wrapper').classList.toggle('open', isOpen);
this.querySelector('.wrapper').setAttribute('aria-hidden', !isOpen);
if (isOpen) {
this._wasFocused = document.activeElement;
this.setAttribute('open', '');
document.addEventListener('keydown', this._watchEscape);
this.focus();
this.querySelector('button').focus();
} else {
this._wasFocused && this._wasFocused.focus && this._wasFocused.focus();
this.removeAttribute('open');
document.removeEventListener('keydown', this._watchEscape);
this.close();
}
}
close() {
if (this.open !== false) {
this.open = false;
}
const closeEvent = new CustomEvent('dialog-closed');
this.dispatchEvent(closeEvent);
}
_watchEscape(event) {
if (event.key === 'Escape') {
this.close();
}
}
}

There's a lot going on here, but let's walk through it. The first thing we do is grab our wrapper and toggle the .open class based on isOpen. To keep our element accessible, we need to toggle the aria-hidden attribute as well.
If the dialog is open, then we want to save a reference to the previously-focused element. This is to account for accessibility standards. We also add a keydown listener to the document called _watchEscape that we have bound to the element's this in the constructor, in a pattern similar to how React handles method calls in class components.
We do this not only to ensure the proper binding for this.close, but also because Function.prototype.bind returns an instance of the function with the bound call site. By saving a reference to the newly-bound method in the constructor, we're able to remove the event listener when the dialog is disconnected (more on that in a moment). We finish up by focusing our element and then setting the focus on the proper inner element.
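A quick way to see why the saved reference matters — the object, the Set-based listener registry, and the names here are invented stand-ins for the element and addEventListener/removeEventListener:

```javascript
const obj = {
  close() { /* hide the dialog */ }
};

// Every call to bind() returns a brand-new function object:
console.log(obj.close.bind(obj) === obj.close.bind(obj)); // false

// So "removing" a freshly-bound copy is a no-op:
const listeners = new Set();
listeners.add(obj.close.bind(obj));    // bound copy #1 attached
listeners.delete(obj.close.bind(obj)); // bound copy #2 — not in the set
console.log(listeners.size); // 1 — the original listener never detached

// Binding once and reusing the reference does detach cleanly:
const saved = obj.close.bind(obj);
listeners.add(saved);
listeners.delete(saved);
console.log(listeners.size); // 1 — only the stuck copy from before remains
```

This is exactly the leak the constructor-binding pattern avoids: add and remove must receive the same function reference.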
We also create a nice little utility method for closing our dialog that dispatches a custom event alerting some listener that the dialog has been closed.
If the element is closed (i.e. !open), we check to make sure the this._wasFocused property is defined and has a focus method and call that to return the user’s focus back to the regular DOM. Then we remove our event listener to avoid any memory leaks.
Speaking of cleaning up after ourselves, that takes us to yet another lifecycle method: disconnectedCallback. The disconnectedCallback is the inverse of the connectedCallback in that the method is called once the element is removed from the DOM and allows us to clean up any event listeners or MutationObservers attached to our element.
It just so happens we have a few more event listeners to wire up:

class OneDialog extends HTMLElement {
/** Omitted */
connectedCallback() {
this.querySelector('button').addEventListener('click', this.close);
this.querySelector('.overlay').addEventListener('click', this.close);
}
disconnectedCallback() {
this.querySelector('button').removeEventListener('click', this.close);
this.querySelector('.overlay').removeEventListener('click', this.close);
}
}

Now we have a well-functioning, mostly accessible dialog element. There are a few bits of polish we can do, like capturing focus on the element, but that's outside the scope of what we're trying to learn here.
There is one more lifecycle method that doesn’t apply to our element, the adoptedCallback, which fires when the element is adopted into another part of the DOM.
In the following example, you will now see that our template element is being consumed by a standard <one-dialog> element.
See the Pen Dialog example using template by Caleb Williams (@calebdwilliams) on CodePen.
Another thing: non-presentational components
The <one-dialog> element we have created so far is a typical custom element in that it includes markup and behavior that gets inserted into the document when the element is included. However, not all elements need to render visually. In the React ecosystem, components are often used to manage application state or some other major functionality, like <Provider /> in react-redux.
Let's imagine for a moment that our component is part of a series of dialogs in a workflow. As one dialog is closed, the next one should open. We could make a wrapper component that listens for our dialog-closed event and progresses through the workflow.

class DialogWorkflow extends HTMLElement {
connectedCallback() {
this._onDialogClosed = this._onDialogClosed.bind(this);
this.addEventListener('dialog-closed', this._onDialogClosed);
}
get dialogs() {
return Array.from(this.querySelectorAll('one-dialog'));
}
_onDialogClosed(event) {
const dialogClosed = event.target;
const nextIndex = this.dialogs.indexOf(dialogClosed) + 1;
if (nextIndex < this.dialogs.length) {
this.dialogs[nextIndex].open = true;
}
}
}

This element doesn't have any presentational logic, but serves as a controller for application state. With a little effort, we could recreate a Redux-like state management system using nothing but a custom element, managing an entire application's state the same way React's Redux wrapper does.
That’s a deeper look at custom elements
Now we have a pretty good understanding of custom elements and our dialog is starting to come together. But it still has some problems.
Notice that we’ve had to add some CSS to restyle the dialog button because our element’s styles are interfering with the rest of the page. While we could utilize naming strategies (like BEM) to ensure our styles won’t create conflicts with other components, there is a more friendly way of isolating styles. Spoiler! It’s shadow DOM and that’s what we’re going to look at in the next part of this series on Web Components.
Another thing we need to do is define a new template for every component or find some way to switch templates for our dialog. As it stands, there can only be one dialog type per page because the template that it uses must always be present. So either we need some way to inject dynamic content or a way to swap templates.
In the next article, we will look at ways to increase the usability of the <one-dialog> element we just created by incorporating style and content encapsulation using the shadow DOM.

Article Series:
An Introduction to Web Components
Crafting Reusable HTML Templates
Creating a Custom Element from Scratch (This post)
Encapsulating Style and Structure with Shadow DOM (Coming soon!)
Advanced Tooling for Web Components (Coming soon!)

The Chrome team announced a new feature called Lite Pages that can be activated by flipping on the Data Saver option on an Android device:

Chrome on Android's Data Saver feature helps by automatically optimizing web pages to make them load faster. When users are facing network or data constraints, Data Saver may reduce data use by up to 90% and load pages two times faster, and by making pages load faster, a larger fraction of pages actually finish loading on slow networks. Now, we are securely extending performance improvements beyond HTTP pages to HTTPS pages and providing direct feedback to the developers who want it.
To show users when a page has been optimized, Chrome now shows in the URL bar that a Lite version of the page is being displayed.

All of this is pretty neat, but I think the name Lite Pages is a little confusing as it's in no way related to AMP, and Tim Kadlec makes that clear in his notes about the new feature:

Lite pages are also in no way related to AMP. AMP is a framework you have to build your site in to reap any benefit from. Lite pages are optimizations and interventions that get applied to your current site. Google's servers are still involved, but as a proxy service forwarding the initial request along. Your URLs aren't tampered with in any way.

A quick glance at this seems great! We don't have to give up ownership of our URLs, like with AMP, and we don't have to develop with a proprietary technology — we can let Chrome be Chrome and do any performance things it wants to do without turning anything on or off or adding JavaScript.
But wait! What kind of optimizations does a Lite Page make and how do they affect our sites? So far, it can disable scripts, replace images with placeholders and stop the loading of certain resources, although this is all subject to change in the future, I guess.
The optimizations only take effect when the loading experience for users is particularly bad, as the announcement blog post states:

…they are applied when the network's effective connection type is "2G" or "slow-2G," or when Chrome estimates the page load will take more than 5 seconds to reach first contentful paint given current network conditions and device capabilities.

It's probably important to remember that the reason why Google is doing this isn't to break our designs or mess with our websites — they're doing this because there are serious performance concerns with the web, and those concerns aren't limited to developing nations.

Have you seen Local by Flywheel? It’s a native app for helping set up local WordPress developer environments. I absolutely love it and use it to do all my local WordPress development work. It brings a lovingly designed GUI to highly technical tasks in a way that I think works very well. Plus it just works, which wins all the awards with me. Need to spin up a new site locally? Click a few buttons. Working on your site? All your sites are right there and you can flip them on with the flick of a toggle.
Local by Flywheel is useful no matter where your WordPress production site is hosted. But it really shines when paired with Flywheel itself, which is fabulous WordPress hosting that has all the same graceful combination of power and ease as Local does.
Just recently, we moved ShopTalkShow.com over to Local and it couldn't have been easier.

Running locally.
Setting up a new local site (which you would do even if it’s a long-standing site and you’re just getting it set up on Flywheel) is just a few clicks. That’s one of the most satisfying parts. You know all kinds of complex things are happening behind the scenes, like containers being spun up, proper software being installed, etc, but you don’t have to worry about any of it.
(Local is free, by the way.)
The Cross-platform-ness is nice.
I work on ShopTalk with Dave Rupert, who's on Windows. Not a problem. Local works on Windows also, so Dave can spin up sites in the exact same way I can.
Setting up Flywheel hosting is just as clean and easy as Local is.
If you’ve used Local, you’ll recognize the clean font, colors, and design when using the Flywheel website to get your hosting set up. Just a few clicks and I had that going:
Things that are known to be a pain in the butt are painless on Local, like making sure SSL (HTTPS) is active and a CDN is helping with assets.
You get a subdomain to start, so you can make sure your site is working perfectly before pointing a production domain at it.
I didn't just have to put files into place on the new hosting, move the database, and cross my fingers that I did it all right when re-pointing the DNS. I could get the site up and running at the subdomain first, make sure everything was working, and then do the DNS part.
But the moving of files and all that… it’s trivial because of Local!
The best part is that shooting a site up to Flywheel from Local is also just a click away.
All the files and the database head right up after you’ve connected Local to Flywheel.
All I did was make sure my local site was a 100% perfect copy of production. All the theme and plugins and stuff were already that way because I was already doing local development, and I pulled the entire database down easily with WP DB Migrate Pro.
I think I went from "I should get around to setting up this site on Flywheel" to "Well, that's done" in less than an hour. Now Dave and I both have a local development environment and a path to production.

A little while back, I was in the process of adding focus styles to An Event Apart's web site. Part of that was applying different focus effects in different areas of the design, like white rings in the header and footer and orange rings in the main text. But in one place, I wanted rings that were more obvious—something like stacking two borders on top of each other, in order to create unusual shapes that would catch the eye.

I toyed with the idea of nesting elements with borders and some negative margins to pull one border on top of another, or nesting a border inside an outline and then using negative margins to keep from throwing off the layout. But none of that felt satisfying.
It turns out there are a number of tricks to create the effect of stacking one border atop another by combining a border with some other CSS effects, or even without actually requiring the use of any borders at all. Let’s explore, shall we?
Outline and box-shadow
If the thing to be multi-bordered is a rectangle—you know, like pretty much all block elements—then mixing an outline and a spread-out hard box shadow may be just the thing.
Let's start with the box shadow. You're probably used to box shadows like this:

.drop-me {
background: #AEA;
box-shadow: 10px 12px 0.5rem rgba(0,0,0,0.5);
}

That gets you a blurred shadow below and to the right of the element. Drop shadows, so last millennium! But there's room, and support, for a fourth length value in box-shadow that defines a spread distance. This increases the size of the shadow's shape in all directions by the given length, and then it's blurred. Assuming there's a blur, that is.
So if we give a box shadow no offset, no blur, and a bit of spread, it will draw itself all around the element, looking like a solid border without actually being a border.

.boxborder-me {
box-shadow: 0 0 0 5px firebrick;
}

This box-shadow "border" is being drawn just outside the outer border edge of the element. That's the same place outlines get drawn around block boxes, so all we have to do now is draw an outline over the shadow. Something like this:

.boxborder-me {
box-shadow: 0 0 0 5px firebrick;
outline: dashed 5px darkturquoise;
}

Bingo. A multicolor "border" that, in this case, doesn't even throw off layout size, because shadows and outlines are drawn after element size is computed. The outline, which sits on top, can use pretty much any outline style, which is the same as the list of border styles. Thus, dotted and double outlines are possibilities. (So are all the other styles, but they don't have any transparent parts, so the solid shadow could only be seen through translucent colors.)
If you want a three-tone effect in the border, multiple box shadows can be created using a comma-separated list, and then an outline put over top of that. For example:

.boxborder-me {
box-shadow: 0 0 0 1px darkturquoise,
0 0 0 3px firebrick,
0 0 0 5px orange,
0 0 0 6px darkturquoise;
outline: dashed 6px darkturquoise;
}

Taking it back to simpler effects, combining a dashed outline over a spread box shadow with a solid border of the same color as the box shadow creates yet another effect:

.boxborder-me {
box-shadow: 0 0 0 5px firebrick;
outline: dashed 5px darkturquoise;
border: solid 5px darkturquoise;
}

The extra bonus here is that even though a box shadow is being used, it doesn't fill in the element's background, so you can see the backdrop through it. This is how box shadows always behave: they are only drawn outside the outer border edge. The "rest of the shadow," the part you may assume is always behind the element, doesn't exist. It's never drawn. So you get results like this:
This is the result of explicit language in the CSS Background and Borders Module, Level 3, section 7.1.1:

An outer box-shadow casts a shadow as if the border-box of the element were opaque. Assuming a spread distance of zero, its perimeter has the exact same size and shape as the border box. The shadow is drawn outside the border edge only: it is clipped inside the border-box of the element.

(Emphasis added.)

Border and box-shadow
Speaking of borders, maybe there's a way to combine borders and box shadows. After all, box shadows can be more than just drop shadows. They can also be inset. So what if we turned the previous shadow inward, and dropped a border over top of it?

.boxborder-me {
box-shadow: 0 0 0 5px firebrick inset;
border: dashed 5px darkturquoise;
}

That's… not what we were after. But this is how inset shadows work: they are drawn inside the outer padding edge (also known as the inner border edge), and clipped beyond that:

An inner box-shadow casts a shadow as if everything outside the padding edge were opaque. Assuming a spread distance of zero, its perimeter has the exact same size and shape as the padding box. The shadow is drawn inside the padding edge only: it is clipped outside the padding box of the element.

(Ibid; emphasis added.)

So we can't stack a border on top of an inset box-shadow. Maybe we could stack a border on top of something else…?
Border and multiple backgrounds
Inset shadows may be restricted to the outer padding edge, but backgrounds are not. An element’s background will, by default, fill the area out to the outer border edge. Fill an element background with solid color, give it a thick dashed border, and you’ll see the background color between the visible pieces of the border.
So what if we stack some backgrounds on top of each other, and thus draw the solid color we want behind the border? Here’s step one:

.multibg-me {
  border: 5px dashed firebrick;
  background:
    linear-gradient(to right, darkturquoise, 5px, transparent 5px);
  background-origin: border-box;
}

We can see, there on the left side, the blue background visible through the transparent parts of the dashed red border. Add three more like that, one for each edge of the element box, and:

.multibg-me {
  border: 5px dashed firebrick;
  background:
    linear-gradient(to top, darkturquoise, 5px, transparent 5px),
    linear-gradient(to right, darkturquoise, 5px, transparent 5px),
    linear-gradient(to bottom, darkturquoise, 5px, transparent 5px),
    linear-gradient(to left, darkturquoise, 5px, transparent 5px);
  background-origin: border-box;
}

In each case, the background gradient runs for five pixels as a solid dark turquoise background, and then has a color stop which transitions instantly to transparent. This lets the “backdrop” show through the element while still giving us a “stacked border.”
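If you find yourself repeating those four gradients, they’re easy to generate. Here’s a small sketch; the helper name and signature are my own, purely illustrative, not from any library:

```javascript
// Build the four edge gradients for a stacked-border background,
// one per side, each a solid stripe of `width` pixels that then
// snaps to transparent so the backdrop shows through.
function stackedBorderBackground(color, width) {
  return ['top', 'right', 'bottom', 'left']
    .map(side => `linear-gradient(to ${side}, ${color}, ${width}px, transparent ${width}px)`)
    .join(',\n');
}

console.log(stackedBorderBackground('darkturquoise', 5));
```

Drop the output into the `background` property (plus `background-origin: border-box;`) and you get the same stacked-border effect as the handwritten version.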
One major advantage here is that we aren’t limited to solid linear gradients—we can use any gradient of any complexity, just to spice things up a bit. Take this example, where the dashed border has been made mostly transparent so we can see the four different gradients in their entirety:

.multibg-me {
  border: 15px dashed rgba(128,0,0,0.1);
  background:
    linear-gradient(to top, darkturquoise, red 15px, transparent 15px),
    linear-gradient(to right, darkturquoise, red 15px, transparent 15px),
    linear-gradient(to bottom, darkturquoise, red 15px, transparent 15px),
    linear-gradient(to left, darkturquoise, red 15px, transparent 15px);
  background-origin: border-box;
}

If you look at the corners, you’ll see that the background gradients are rectangular, and overlap each other. They don’t meet up neatly, the way border corners do. This can be a problem if your border has transparent parts in the corners, as would be the case with border-style: double.

Also, if you just want a solid color behind the border, this is a fairly clumsy way to stitch together that effect. Surely there must be a better approach?
Border and background clipping
Yes, there is! It involves changing the clipping boxes for two different layers of the element’s background. The first thing that might spring to mind is something like this:

.multibg-me {
  border: 5px dashed firebrick;
  background: #EEE, darkturquoise;
  background-clip: padding-box, border-box;
}

But that does not work, because CSS requires that only the last (and thus lowest) background be set to a <color> value. Any other background layer must be an image.
So we replace that very-light-gray background color with a gradient from that color to that color: this works because gradients are images. In other words:

.multibg-me {
  border: 5px dashed firebrick;
  background: linear-gradient(to top, #EEE, #EEE), darkturquoise;
  background-clip: padding-box, border-box;
}

The light gray “gradient” fills the entire background area, but is clipped to the padding box using background-clip. The dark turquoise fills the entire area and is clipped to the border box, as backgrounds always have been by default. We can alter the gradient colors and direction to anything we like, creating an actual visible gradient or shifting it to all-white or whatever other linear effect we would like.
The downside here is that there’s no way to make that padding-area background transparent such that the element’s backdrop can be seen through the element. If the linear gradient is made transparent, then the whole element background will be filled with dark turquoise. Or, more precisely, we’ll be able to see the dark turquoise that was always there.
In a lot of cases, it won’t matter that the element background isn’t see-through, but it’s still a frustrating limitation. Isn’t there any way to get the effect of stacked borders without wacky hacks and lost capabilities?
Border images
In fact, what if we could take an image of the stacked border we want to see in the world, slice it up, and use that as the border? Like, say, this image becomes this border?
Here’s the code to do exactly that:

.borderimage-me {
  border: solid 5px;
  border-image: url(triple-stack-border.gif) 15 / 15px round;
}

First, we set a solid border with some width. We could also set a color for fallback purposes, but it’s not really necessary. Then we point to an image URL, define the slice inset(s) at 15 and width of the border to be 15px, and finally the repeat pattern of round.
There are more options for border images, which are a little too complex to get into here, but the upshot is that you can take an image, define nine slices of it using offset values, and have those images used to synthesize a complete border around an image. That’s done by defining offsets from the edges of the image itself, which in this case is 15. Since the image is a GIF and thus pixel-based, the offsets are in pixels, so the “slice lines” are set 15 pixels inward from the edges of the image. (In the case of an SVG, the offsets are measured in terms of the SVG’s coordinate system.) It looks like this:
Each slice is assigned to the corner or side of the element box that corresponds to itself; i.e., the bottom right corner slice is placed in the bottom right corner of the element, the top (center) slice is used along the top edge of the element, and so on.
If one of the edge slices is smaller than the edge of the element is long—which almost always happens, and is certainly true here—then the slice is repeated in one of a number of ways. I chose round, which fills in as many repeats as it can and then scales them all up just enough to fill out the edge. So with a 70-pixel-long slice, if the edge is 1,337 pixels long, there will be 19 repetitions of the slice, each of which is scaled to be 70.3 pixels wide. Or, more likely, the browser generates a single image containing 19 repetitions that’s 1,330 pixels wide, and then stretches that image the extra 7 pixels.
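That fit-then-scale arithmetic can be sketched in a few lines. This toy function is my own, purely illustrative, but it mirrors the round behavior just described:

```javascript
// Sketch of border-image-repeat: round sizing — fit as many whole
// repeats of the slice as possible (at least one), then stretch or
// squish each tile so the repeats exactly fill the edge.
function roundRepeat(edgeLength, sliceLength) {
  const repeats = Math.max(1, Math.round(edgeLength / sliceLength));
  return { repeats, tileWidth: edgeLength / repeats };
}

const { repeats, tileWidth } = roundRepeat(1337, 70);
console.log(repeats);   // 19
console.log(tileWidth); // each tile just over 70px wide
```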
You might think the drawback here is browser support, but that turns out not to be the case.

Desktop:
Chrome 56, Opera 43, Firefox 50, IE 11, Edge 12, Safari 9.1

Mobile / Tablet:
iOS Safari 9.3, Opera Mobile 46, Opera Mini all*, Android 67, Android Chrome 71, Android Firefox 64

Just watch out for the few bugs (really, implementation limits) that linger around a couple of implementations, and you’ll be fine.
Conclusion
While it might be a rare circumstance where you want to combine multiple “border” effects, or stack them atop each other, it’s good to know that CSS provides a number of ways to get the job done, and that most of them are already widely supported. And who knows? Maybe one day there will be a simple way to achieve these kinds of effects through a single property, instead of by mixing several together. Until then, happy border stacking!
In our last article, we discussed the Web Components specifications (custom elements, shadow DOM, and HTML templates) at a high level. In this article, and the three to follow, we will put these technologies to the test and examine them in greater detail to see how we can use them in production today. To do this, we will be building a custom modal dialog from the ground up to see how the various technologies fit together.

Article Series:

An Introduction to Web Components
Crafting Reusable HTML Templates (This post)
Creating a Custom Element from Scratch (Coming soon!)
Encapsulating Style and Structure with Shadow DOM (Coming soon!)
Advanced Tooling for Web Components (Coming soon!)

HTML templates
One of the least recognized, but most powerful features of the Web Components specification is the <template> element. In the first article of this series, we defined the template element as, “user-defined templates in HTML that aren’t rendered until called upon.” In other words, a template is HTML that the browser ignores until told to do otherwise.
These templates then can be passed around and reused in a lot of interesting ways. For the purposes of this article, we will look at creating a template for a dialog that will eventually be used in a custom element.
Defining our template
As simple as it might sound, a <template> is an HTML element, so the most basic form of a template with content would be:

<template>
  <h1>Hello world</h1>
</template>

Running this in a browser would result in an empty screen as the browser doesn’t render the template element’s contents. This becomes incredibly powerful because it allows us to define content (or a content structure) and save it for later — instead of writing HTML in JavaScript.

In order to use the template, we will need JavaScript:

const template = document.querySelector('template');
const node = document.importNode(template.content, true);
document.body.appendChild(node);

The real magic happens in the document.importNode method. This function will create a copy of the template’s content and prepare it to be inserted into another document (or document fragment). The first argument to the function grabs the template’s content and the second argument tells the browser to do a deep copy of the element’s DOM subtree (i.e. all of its children).
We could have used the template.content directly, but in so doing we would have removed the content from the element and appended to the document’s body later. Any DOM node can only be connected in one location, so subsequent uses of the template’s content would result in an empty document fragment (essentially a null value) because the content had previously been moved. Using document.importNode allows us to reuse instances of the same template content in multiple locations.
That node is then appended into the document.body and rendered for the user. This ultimately allows us to do interesting things, like providing our users (or consumers of our programs) templates for creating content, similar to the following demo, which we covered in the first article:
See the Pen Template example by Caleb Williams (@calebdwilliams) on CodePen.
In this example, we have provided two templates to render the same content — authors and books they’ve written. As the form changes, we choose to render the template associated with that value. Using that same technique will allow us to eventually create a custom element that will consume a template to be defined at a later time.
The versatility of template
One of the interesting things about templates is that they can contain any HTML. That includes script and style elements. A very simple example would be a template that appends a button that alerts us when it is clicked.

<button id="click-me">Log click event</button>

Let’s style it up:

button {
  all: unset;
  background: tomato;
  border: 0;
  border-radius: 4px;
  color: white;
  font-family: Helvetica;
  font-size: 1.5rem;
  padding: .5rem 1rem;
}

…and call it with a really simple script:

const button = document.getElementById('click-me');
button.addEventListener('click', event => alert(event));

Of course, we can put all of this together using HTML’s <style> and <script> tags directly in the template rather than in separate files:

<template id="template">
<script>
const button = document.getElementById('click-me');
button.addEventListener('click', event => alert(event));
</script>
<style>
#click-me {
all: unset;
background: tomato;
border: 0;
border-radius: 4px;
color: white;
font-family: Helvetica;
font-size: 1.5rem;
padding: .5rem 1rem;
}
</style>
<button id="click-me">Log click event</button>
</template>

Once this element is appended to the DOM, we will have a new button with ID #click-me, a global CSS selector targeted to the button’s ID, and a simple event listener that will alert the element’s click event.

For our script, we simply append the content using document.importNode and we have a mostly-contained template of HTML that can be moved around from page to page.
See the Pen Template with script and styles demo by Caleb Williams (@calebdwilliams) on CodePen.
Creating the template for our dialog
Getting back to our task of making a dialog element, we want to define our template’s content and styles.

<template id="one-dialog">
<script>
document.getElementById('launch-dialog').addEventListener('click', () => {
const wrapper = document.querySelector('.wrapper');
const closeButton = document.querySelector('button.close');
const wasFocused = document.activeElement;
wrapper.classList.add('open');
closeButton.focus();
closeButton.addEventListener('click', () => {
wrapper.classList.remove('open');
wasFocused.focus();
});
});
</script>
<style>
.wrapper {
opacity: 0;
transition: visibility 0s, opacity 0.25s ease-in;
}
.wrapper:not(.open) {
visibility: hidden;
}
.wrapper.open {
align-items: center;
display: flex;
justify-content: center;
height: 100vh;
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
opacity: 1;
visibility: visible;
}
.overlay {
background: rgba(0, 0, 0, 0.8);
height: 100%;
position: fixed;
top: 0;
right: 0;
bottom: 0;
left: 0;
width: 100%;
}
.dialog {
background: #ffffff;
max-width: 600px;
padding: 1rem;
position: fixed;
}
button {
all: unset;
cursor: pointer;
font-size: 1.25rem;
position: absolute;
top: 1rem;
right: 1rem;
}
button:focus {
border: 2px solid blue;
}
</style>
<div class="wrapper">
<div class="overlay"></div>
<div class="dialog" role="dialog" aria-labelledby="title" aria-describedby="content">
<button class="close" aria-label="Close">&#x2716;&#xfe0f;</button>
<h1 id="title">Hello world</h1>
<div id="content" class="content">
<p>This is content in the body of our modal</p>
</div>
</div>
</div>
</template>

This code will serve as the foundation for our dialog. Breaking it down briefly, we have a global close button, a heading and some content. We have also added in a bit of behavior to visually toggle our dialog (although it isn’t yet accessible). In our next article, we will put custom elements to use and create one of our own that consumes this template in real time.
See the Pen Dialog with template with script by Caleb Williams (@calebdwilliams) on CodePen.

Article Series:

An Introduction to Web Components
Crafting Reusable HTML Templates (This post)
Creating a Custom Element from Scratch (Coming soon!)
Encapsulating Style and Structure with Shadow DOM (Coming soon!)
Advanced Tooling for Web Components (Coming soon!)
A spreadsheet has always been a strong (if fairly literal) analogy for a database. A database has tables, which is like a single spreadsheet. Imagine a spreadsheet for tracking RSVPs for a wedding. Across the top, column titles like First Name, Last Name, Address, and Attending?. Those titles are also columns in a database table. Then each person in that spreadsheet is literally a row, and that’s also a row in a database table (or an entry, item, or even tuple if you’re really a nerd).
It’s been getting more and more common that this doesn’t have to be an analogy. We can quite literally use a spreadsheet UI as our actual database. That’s meaningful in that it’s not just viewing database data as a spreadsheet, but making spreadsheet-like features first-class citizens of the app right alongside database-like features.

With a spreadsheet, the point might be viewing the thing as a whole and understanding things that way. Browsing, sorting, entering and editing data directly in the UI, and making visual output that is useful.

With a database, you don’t really look right at it — you query it and use the results. Entering and editing data is done through code and APIs.
That’s not to say you can’t look directly at a database. Database tools like Sequel Pro (and many others!) offer an interface for looking at tables in a spreadsheet-like format:
What’s nice is that the idea of spreadsheets and databases can co-exist, offering the best of both worlds at once. At least, on a certain scale.
We’ve talked about Airtable before here on CSS-Tricks and it’s a shining example of this.
Airtable calls them bases, and while you can view the data inside them in all sorts of useful ways (a calendar! a gallery! a kanban!), perhaps the primary view is that of a spreadsheet:
If all you ever do with Airtable is use it as a spreadsheet, it’s still very nice. The UI is super well done. Things like filtering and sorting feel like true first-class citizens in a way that it’s almost weird that other spreadsheet technology doesn’t. Even the types of fields feel practical and modern.
Plus with all the different views in a base, and even cooler, all the “blocks” they offer to make the views more dashboard-like, it’s a powerful tool.
[embedded content]
But the point I’m trying to make here is that you can use your Airtable base like a database as well, since you automatically have read/write API access to your base.
So cool that these API docs use data from your own base to demonstrate the API.
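Reading from that API boils down to one authenticated GET against Airtable’s REST endpoint. Here’s a hedged sketch of building the request; the base ID, table name, and key below are placeholders, not real credentials:

```javascript
// Airtable's REST API lives at /v0/{baseId}/{tableName}, authenticated
// with a bearer token. This helper just assembles the request pieces.
const AIRTABLE_API = 'https://api.airtable.com/v0';

function airtableRequest(baseId, tableName, apiKey) {
  return {
    url: `${AIRTABLE_API}/${baseId}/${encodeURIComponent(tableName)}`,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}

// Placeholder IDs — substitute your own base, table, and key.
const { url, headers } = airtableRequest('appXXXXXXXXXXXXXX', 'Conferences', 'keyXXXXXXXXXXXXXX');
console.log(url); // https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Conferences
```

Actually fetching the records would then be `fetch(url, { headers }).then(r => r.json())`, which returns a `records` array for the table.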
I talked about this more in my article How To Use Airtable as a Front End Developer. This API access is awesome from a read data perspective, to do things like use it as a data source for a blog. Robin yanked in data to build his own React-powered interface. I dig that there is a GraphQL interface, even if it is third-party.
The write access is arguably even more useful. We use it at CodePen to do CRM-ish stuff by sending data into an Airtable base with all the information we need, then use Airtable directly to visualize things and do the things we want.
Airtable alternatives?
There used to be Fieldbook, but that shut down.
RowShare looks weirdly similar (although a bit lighter on features) but it doesn’t look like it has an API, so it doesn’t quite fit the bill for that database/spreadsheet gap spanning.
Zoho Creator does have an API and interesting visualization stuff built in, which actually looks pretty darn cool. It looks like some of their marketing is based around the idea that if you need to build a CRUD app, you can do that with this with zero coding — and I think they are right that it’s a compelling sell.
Actiondesk looks interesting in that it’s in the category of a modern take on the power of spreadsheets.
While it’s connected to a database in that it looks like it can yank in data from something like MySQL or PostgreSQL, it doesn’t look like it has database-like read/write APIs.
Can we just use Google Sheets?
The biggest spreadsheet tool in the sky is, of course, the Google one, as it’s pretty good, free, and familiar. It’s more like a port of Excel to the browser, so I might argue it’s more tied to the legacy of number-nerds than it is any sort of fresh take on a spreadsheet or data storage tool.
Google Sheets has an API. They take it fairly seriously as it’s in v4 and has a bunch of docs and guides. Check out a practical little tutorial about writing to it from Slack. The problem, as I understand it, is that the API is weird and complicated and hard, like Sheets itself. Call me a wimp, but this quick start is a little eye-glazing.
What looks like the most compelling route here, assuming you want to keep all your data in Google Sheets and use it like a database, is Sheetsu. It deals with the connection/auth to the sheet on its end, then gives you API endpoints to the data that are clean and palatable.
Plus there are some interesting features, like giving you a form UI for possibly easier (or more public) data entry than dealing with the spreadsheet itself.
There is also Sheetrock.js, an open source library helping out with that API access to a sheet, but it hasn’t been touched in a few years so I’m unsure the status there.
I ain’t trying to tell you this idea entirely replaces traditional databases.
For one thing, the relational part of databases, like MySQL, is a super important aspect that I don’t think spreadsheets always handle particularly well.
Say you have an employee table in your database, and for each row in that table, it lists the department they work for.

ID  Name                 Department
--  -------------------  -------------------
1   Chris Coyier         Front-End Developer
2   Barney Butterscotch  Human Resources

In a spreadsheet, perhaps those department names are just strings. But in a database, at a certain scale, that’s probably not smart. Instead, you’d have another table of departments, and relate the two tables with a foreign key. That’s exactly what is described in this classic explainer doc:

To find the name of a particular employee’s department, there is no need to put the name of the employee’s department into the employee table. Instead, the employee table contains a column holding the department ID of the employee’s department. This is called a foreign key to the department table. A foreign key references a particular row in the table containing the corresponding primary key.

ID  Name                 Department
--  -------------------  ----------
1   Chris Coyier         1
2   Barney Butterscotch  2

ID  Department            Manager
--  --------------------  ----------------
1   Front-End Developers  Akanya Borbio
2   Human Resources       Susan Snowrinkle

To be fair, spreadsheets can have relational features too (Airtable does), but perhaps it isn’t a fundamental first-class citizen like some databases treat it.
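In application code, resolving a foreign key is just a lookup. A minimal JavaScript sketch, using toy data mirroring those tables, shows the join a database would otherwise do for us:

```javascript
// Toy "tables": departments keyed by id, employees holding a foreign key.
const departments = [
  { id: 1, name: 'Front-End Developers', manager: 'Akanya Borbio' },
  { id: 2, name: 'Human Resources', manager: 'Susan Snowrinkle' },
];
const employees = [
  { id: 1, name: 'Chris Coyier', departmentId: 1 },
  { id: 2, name: 'Barney Butterscotch', departmentId: 2 },
];

// Resolve each employee's departmentId to the full department row —
// the same work a SQL JOIN on the foreign key performs.
const joined = employees.map(employee => ({
  ...employee,
  department: departments.find(d => d.id === employee.departmentId),
}));

console.log(joined[0].department.name); // "Front-End Developers"
```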
Perhaps more importantly, databases, largely being open source technology, are supported by a huge ecosystem of technology. You can host your PostgreSQL or MySQL database (or whatever all the big database players are) on all sorts of different hosting platforms and hardware. There are all sorts of tools for monitoring it, securing it, optimizing it, and backing it up. Plus, if you’re anywhere near breaking into the tens of thousands of rows point of scale, I’d think a spreadsheet has been outscaled.
Choosing a proprietary host of data is largely for convenience and fancy UX at a somewhat small scale. I kinda love it though.
Front-end development moves at a break-neck pace. This is made evident by the myriad articles, tutorials, and Twitter threads bemoaning the state of what once was a fairly simple tech stack. In this article, I’ll discuss why Web Components are a great tool to deliver high-quality user experiences without complicated frameworks or build steps and that don’t run the risk of becoming obsolete. In subsequent articles of this five-part series, we will dive deeper into each of the specifications.
This series assumes a basic understanding of HTML, CSS, and JavaScript. If you feel weak in one of those areas, don’t worry; building a custom element actually simplifies many complexities in front-end development.

Article Series:

An Introduction to Web Components (This post)
Crafting Reusable HTML Templates (Coming soon!)
Creating a Custom Element from Scratch (Coming soon!)
Encapsulating Style and Structure with Shadow DOM (Coming soon!)
Advanced Tooling for Web Components (Coming soon!)

What are Web Components, anyway?
Web Components consist of three separate technologies that are used together:
Custom Elements. Quite simply, these are fully-valid HTML elements with custom templates, behaviors and tag names (e.g. <one-dialog>) made with a set of JavaScript APIs. Custom Elements are defined in the HTML Living Standard specification.
Shadow DOM. Capable of isolating CSS and JavaScript, almost like an <iframe>. This is defined in the Living Standard DOM specification.
HTML templates. User-defined templates in HTML that aren’t rendered until called upon. The <template> tag is defined in the HTML Living Standard specification.
These are what make up the Web Components specification.
HTML Imports is likely to be the fourth technology in the stack, but it has yet to be implemented in any of the big four browsers. The Chrome team has announced an intent to implement them in a future release.
Web Components are generally available in all of the major browsers with the exception of Microsoft Edge and Internet Explorer 11, but polyfills exist to fill in those gaps.
Referring to any of these as Web Components is technically accurate because the term itself is a bit overloaded. As a result, each of the technologies can be used independently or combined with any of the others. In other words, they are not mutually exclusive.
Let’s take a quick look at each of those first three. We’ll dive deeper into them in other articles in this series.
Custom elements
As the name implies, custom elements are HTML elements, like <div>, <section> or <article>, but something we can name ourselves that are defined via a browser API. Custom elements are just like those standard HTML elements — names in angle brackets — except they always have a dash in them, like <news-slider> or <bacon-cheeseburger>. Going forward, browser vendors have committed not to create new built-in elements containing a dash in their names to prevent conflicts.
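That naming rule is easy to check. Here’s a simplified validator; the HTML spec’s actual grammar is more permissive (it allows many Unicode characters), so treat this as a sketch rather than a spec-complete check:

```javascript
// Simplified take on the custom element naming rule: lowercase,
// starts with a letter, and must contain at least one hyphen.
function isValidCustomElementName(name) {
  return /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name);
}

console.log(isValidCustomElementName('news-slider'));        // true
console.log(isValidCustomElementName('bacon-cheeseburger')); // true
console.log(isValidCustomElementName('div'));                // false — no hyphen
```

Browsers enforce the real rule for you: `customElements.define('div', …)` throws a `SyntaxError`.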
Custom elements contain their own semantics, behaviors, markup and can be shared across frameworks and browsers.

class MyComponent extends HTMLElement {
  connectedCallback() {
    this.innerHTML = `<h1>Hello world</h1>`;
  }
}

customElements.define('my-component', MyComponent);

See the Pen Custom elements demo by Caleb Williams (@calebdwilliams) on CodePen.
In this example, we define <my-component>, our very own HTML element. Admittedly, it doesn’t do much, however this is the basic building block of a custom element. All custom elements must in some way extend an HTMLElement in order to be registered with the browser.
Custom elements exist without third-party frameworks and the browser vendors are dedicated to the continued backward compatibility of the spec, all but guaranteeing that components written according to the specifications will not suffer from breaking API changes. What’s more, these components can generally be used out-of-the-box with today’s most popular frameworks, including Angular, React, Vue, and others with minimal effort.
Shadow DOM
The shadow DOM is an encapsulated version of the DOM. This allows authors to effectively isolate DOM fragments from one another, including anything that could be used as a CSS selector and the styles associated with them. Generally, any content inside of the document’s scope is referred to as the light DOM, and anything inside a shadow root is referred to as the shadow DOM.
When using the light DOM, an element can be selected by using document.querySelector('selector') or by targeting any element’s children by using element.querySelector('selector'); in the same way, a shadow root’s children can be targeted by calling shadowRoot.querySelector where shadowRoot is a reference to the document fragment — the difference being that the shadow root’s children will not be select-able from the light DOM. For example, if we have a shadow root with a <button> inside of it, calling shadowRoot.querySelector('button') would return our button, but no invocation of the document’s query selector will return that element because it belongs to a different DocumentOrShadowRoot instance. Style selectors work in the same way.
In this respect, the shadow DOM works sort of like an <iframe> where the content is cut off from the rest of the document; however, when we create a shadow root, we still have total control over that part of our page, but scoped to a context. This is what we call encapsulation.
If you’ve ever written a component that reuses the same id or relies on either CSS-in-JS tools or CSS naming strategies (like BEM), shadow DOM has the potential to improve your developer experience.
Imagine the following scenario:

<div>
  <div id="example">
    <!-- Pseudo-code used to designate a shadow root -->
    <#shadow-root>
      <style>
        button {
          background: tomato;
          color: white;
        }
      </style>
      <button id="button">This will use the CSS background tomato</button>
    </#shadow-root>
  </div>
  <button id="button">Not tomato</button>
</div>

Aside from the pseudo-code of <#shadow-root> (which is used here to demarcate the shadow boundary and has no corresponding HTML element), the HTML is fully valid. To attach a shadow root to the node above, we would run something like:

const shadowRoot = document.getElementById('example').attachShadow({ mode: 'open' });
shadowRoot.innerHTML = `<style>
button {
color: tomato;
}
</style>
<button id="button">This will use the CSS color tomato <slot></slot></button>`;

A shadow root can also include content from its containing document by using the <slot> element. Using a slot will drop user content from the outer document at a designated spot in your shadow root.
See the Pen Shadow DOM style encapsulation demo by Caleb Williams (@calebdwilliams) on CodePen.
HTML templates
The aptly-named HTML <template> element allows us to stamp out re-usable templates of code inside a normal HTML flow that won’t be immediately rendered, but can be used at a later time.

<template id="book-template">
  <li><span class="title"></span> &mdash; <span class="author"></span></li>
</template>

<ul id="books"></ul>

The example above wouldn’t render any content until a script has consumed the template, instantiated the code and told the browser what to do with it.

const fragment = document.getElementById('book-template');
const books = [
{ title: 'The Great Gatsby', author: 'F. Scott Fitzgerald' },
{ title: 'A Farewell to Arms', author: 'Ernest Hemingway' },
{ title: 'Catch 22', author: 'Joseph Heller' }
];

books.forEach(book => {
// Create an instance of the template content
const instance = document.importNode(fragment.content, true);
// Add relevant content to the template
instance.querySelector('.title').innerHTML = book.title;
instance.querySelector('.author').innerHTML = book.author;
// Append the instance to the DOM
document.getElementById('books').appendChild(instance);
});

Notice that this example creates a template (<template id="book-template">) without any other Web Components technology, illustrating again that the three technologies in the stack can be used independently or collectively.

Ostensibly, the consumer of a service that utilizes the template API could write a template of any shape or structure that could be created at a later time. Another page on a site might use the same service, but structure the template this way:

<template id="book-template">
  <li><span class="author"></span>'s classic novel <span class="title"></span></li>
</template>

<ul id="books"></ul>

See the Pen Template example by Caleb Williams (@calebdwilliams) on CodePen.
That wraps up our introduction to Web Components
As web development continues to become more and more complicated, it will begin to make sense for developers like us to begin deferring more and more development to the web platform itself which has continued to mature. The Web Components specifications are a set of low-level APIs that will continue to grow and evolve as our needs as developers evolve.
In the next article, we will take a deeper look at the HTML templates part of this. Then, we’ll follow that up with a discussion of custom elements and shadow DOM. Finally, we’ll wrap it all up by looking at higher-level tooling and incorporation with today’s popular libraries and frameworks.

Article Series:

An Introduction to Web Components (This post)
Crafting Reusable HTML Templates (Coming soon!)
Creating a Custom Element from Scratch (Coming soon!)
Encapsulating Style and Structure with Shadow DOM (Coming soon!)
Advanced Tooling for Web Components (Coming soon!)
Jen Simmons has been coining the term intrinsic design, referring to a new era in web layout where the sizing of content has gone beyond fluid columns and media query breakpoints and into, I dunno, something a bit more exotic. For example, columns that are sized more by content and guidelines than percentages. And not always columns, but more like appropriate placement, however that needs to be done.

One thing is for sure, people are playing with the possibilities a lot right now. In the span of 10 days I’ve gathered these links:
In my experience working with design systems, I’ve found that I have to sacrifice my portfolio to do it well. Unlike a lot of other design work where it’s relatively easy to present Dribbble-worthy interfaces and designs, I fear that systems are quite a bit trickier than that.
You could make things beautiful, but the best work that happens on a design systems team often isn’t beautiful. In fact, a lot of the best work isn’t even visible.

For example, most days I’m pairing up with folks on my team to help them understand how our system works: from the CSS architecture, to the font stack, to the UI Kit, to how a component can be manipulated to solve a specific problem, to many things in between. I’m trying as best as I can to help other designers understand what would be hard to build and what would be easy, as well as when to change their designs based on technical or other design constraints.
Further, there’s a lot of hard and diligent work that goes into projects that have no visible impact on the system at all. Last week, I noticed a weird thing with our checkboxes. Our Checkbox React component would output HTML like this:

<div class="checkbox">
<label for="ch-1">
<input id="ch-1" type="checkbox" class="checkbox" />
</label>
</div>

We needed to wrap the checkbox with a <div> for styling purposes and, from a quick glance, there’s nothing wrong with this markup. However, the <div> and the <input> both have a class of .checkbox and there were confusing styles in the CSS file that styled the <div> first and then un-did those styles to fix the <input> itself.
The fix for this is a pretty simple one: all we need to do is make sure that the class names are specific so that we can safely refactor any confusing CSS:

<div class="checkbox-wrapper">
<label for="ch-1">
<input id="ch-1" type="checkbox" class="checkbox" />
</label>
</div>

The thing is that this work took more than a week to ship because we had to refactor a ton of checkboxes in our app to behave in the same way and make sure that they were all using the same component. These checkboxes are one of those things that are now significantly better and less confusing, but it’s difficult to make it look sexy in a portfolio. I can’t simply drop them into a big iPhone mockup and rotate it as part of a fancy portfolio post if I wanted to write about my work or show it to someone else.
Take another example: I spent an entire day making an audit of our illustrations to help our team get an understanding of how we use them in our application. I opened up Figma and took dozens of screenshots:
It’s sort of hard to take credit for this work because the heavy lifting is really moderating a discussion and helping the team plan. It’s important work! But I feel like it’s hard to show that this work is valuable and to show the effects of it in a large org. “Things are now less confusing,” isn’t exactly a great accomplishment – but it really should be. These boring, methodical changes are vital for the health of a good design system.
Also… it’s kind of weird to put “I wrote documentation” in a portfolio, just as it is to say, “I paired with designers and engineers for three years.” It’s certainly less satisfying than a big, glossy JPEG of a cool interface you designed. And I’m not sure if this is the same everywhere, but only about 10% of the work I do is visual and worthy of showing off.
My point is that building new components like this RadioCard I designed a while back is extraordinarily rare and accounts for a tiny amount of the useful work that I do:
See the Pen Gusto App – RadioCard Prototype by Robin Rendle (@robinrendle) on CodePen.
I’d love to see how you’re dealing with this problem though. How do you show off your front-end and design systems work? How do you make it visible and valuable in your organization? Let me know in the comments!

When I first started learning web development I thought hiding content was simple: slap display: none; onto your hidden element and call it a day. Since then I’ve learned about screen readers, ARIA attributes, the HTML5 hidden attribute, and more!
It’s important to ensure our websites are accessible to everyone, regardless of whether or not they use a screen reader, but with this myriad of options, how do we know when to use what?
There are four main scenarios where you may wish to hide content:

1. Hiding content for everyone, regardless of whether they use a screen reader
2. Hiding content for screen readers while showing it to other users
3. Showing additional content for screen readers while hiding it from other users
4. Hiding content at specific screen sizes
Let’s dive deeper into each of those scenarios to learn how to handle them.

Hiding Content for Everyone

When hiding content for all users we can take advantage of HTML5’s hidden attribute. The hidden attribute signals that content should not be rendered, regardless of medium or screen reader use. In supported browsers it also hides the content from view, similar to display: none;.
It may feel odd to be handling display in your HTML instead of your CSS, but there’s a good reason for it! All devices should respect the hidden attribute, including browsers, screen readers, and printers, even if they don’t load your stylesheets.
This technique is most often used when a site is dynamically showing and hiding content, like a popup or accordion. You may need to combine the hidden attribute with a CSS class to allow for transitions. In that case, just make sure you update the hidden attribute whenever you change visibility by another means.
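When pairing the hidden attribute with a class for transitions, the synchronization can live in one small helper so the two never drift apart. A minimal sketch, where the is-open class name is purely illustrative:

```javascript
// Keep the `hidden` attribute in sync with a CSS class used for transitions.
// `el` is any DOM element; the 'is-open' class name is illustrative.
function setVisible(el, visible) {
  if (visible) {
    el.removeAttribute('hidden'); // render it again everywhere
    el.classList.add('is-open');  // let CSS transition it into view
  } else {
    el.classList.remove('is-open');
    el.setAttribute('hidden', ''); // hide from browsers, screen readers, printers
  }
}
```

A real implementation would likely wait for the hide transition to finish before setting hidden, so the element isn’t yanked away mid-animation.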
There’s one extra wrinkle when using hidden. It’s not supported in Internet Explorer 10 and below, so if you do use hidden you should also set display: none !important; in CSS to ensure the content is hidden in all browsers.

<div class="example" hidden></div>

<style>
.example[hidden] {
  display: none !important;
}
</style>

This can also be set as a global style using attribute selectors.

[hidden] {
display: none !important;
}

Hiding Content for Screen Readers

Some content is not important for understanding a web page, but is added to make the design more visually appealing. For example, icons and glyphs can provide a nice visual polish, but tend to be unhelpful — and sometimes downright distracting — for screen reader users. In this scenario we’ll want to hide the content from screen readers while showing it to everyone else.
In this case we’ll use the aria-hidden attribute. aria-hidden is a boolean attribute so it can be set to true or false. Setting the attribute to false is the same as not including it at all, so you’ll generally want to set it to true and use it like this:

<div class="my-glyph" aria-hidden="true"></div>

aria-hidden="true" should not be confused with role="presentation", which strips the semantic meaning of an element from the accessibility tree. Here’s a helpful article outlining the difference between the two.

Showing Additional Content for Screen Readers

A good web page design often uses visual clues to convey information to the viewer. It’s important to structure your page so that screen reader users get these same clues from your text. For example, pagination may be obvious when laid out visually, but might read as a meaningless list of numbers over a screen reader. In these scenarios it’s helpful to include extra information for screen readers without cluttering up your visual design.
Setting display: none; hides the content but also removes it from the accessibility tree so screen readers won’t read it. Because of that it’s best to fall back to other CSS tools to hide the content while keeping it in the accessibility tree.
The 18F site has a great solution to hide content visually while keeping it in the accessibility tree for screen readers:

.sr-only {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
}

In the comments of this post, Edward Martin pointed out that the CSS clip property is deprecated and that we should be using clip-path. clip-path isn’t fully supported yet, so for now it’s best to include both clip and clip-path.
In addition, Kimblim pointed out that this technique can cause screen readers to skip the spaces in between words and suggested adding white-space: nowrap; to avoid this.
Following this advice leaves us with this more robust class:

.sr-only {
border: 0;
clip: rect(0 0 0 0);
clip-path: polygon(0px 0px, 0px 0px, 0px 0px);
-webkit-clip-path: polygon(0px 0px, 0px 0px, 0px 0px);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px;
white-space: nowrap;
}

The aria-label attribute can also be used to provide additional information to screen readers, though generally the CSS solution is preferable. It’s worth learning both techniques and knowing when to use one or the other.

Hiding Content at Specific Screen Sizes

When building responsive web pages, designers often choose to display content at certain screen sizes but not others using media queries. In these scenarios the content should generally be included in the accessibility tree for screen readers, so hidden and aria-hidden are not necessary.

Bringing it All Together

Now we’ve got a lot of tools in our toolbox. We can hide content for all users, for screen readers, for users not using screen readers, and for specific screen sizes. Learning how to properly hide content in an accessible way is a valuable skill for anyone touching the front-end, and your users will appreciate it!

The popularity of CSS-in-JS has mostly come from the React community, and indeed many CSS-in-JS libraries are React-specific. However, Emotion, the most popular library in terms of npm downloads, is framework agnostic.
Using the shadow DOM is common when creating custom elements, but there’s no requirement to do so. Not all use cases require that level of encapsulation. While it’s also possible to style custom elements with CSS in a regular stylesheet, we’re going to look at using Emotion.

We start with an install:

npm i emotion

Emotion offers the css function:

import {css} from 'emotion';

css is a tagged template literal. It accepts standard CSS syntax but adds support for Sass-style nesting.

const buttonStyles = css`
color: white;
font-size: 16px;
background-color: blue;

&:hover {
background-color: purple;
}
`

Once some styles have been defined, they need to be applied. Working with custom elements can be somewhat cumbersome. Libraries — like Stencil and LitElement — compile to web components, but offer a friendlier API than what we’d get right out of the box.
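To demystify the tagged-template part: a tag is just a function that receives the literal’s string pieces and any interpolated values. A toy, illustrative version of a css tag (not Emotion’s actual implementation) could concatenate its input and hash it into a generated class name:

```javascript
// Toy css tag: concatenates the template's parts and hashes them into a
// deterministic, generated class name. Illustrative only - Emotion also
// injects the actual rules into the document, which this sketch skips.
function css(strings, ...values) {
  const source = strings.reduce((out, s, i) => out + s + (values[i] ?? ''), '');
  // Tiny non-cryptographic hash, just to show the generated-class idea
  let hash = 0;
  for (const ch of source) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return 'css-' + hash.toString(36);
}

const buttonStyles = css`
  color: white;
  font-size: 16px;
`;
```

The same input always yields the same class, which is why Emotion can safely dedupe styles across components.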
So, we’re going to define styles with Emotion and take advantage of both Stencil and LitElement to make working with web components a little easier.
Applying styles for Stencil
Stencil makes use of the bleeding-edge JavaScript decorators feature. An @Component decorator is used to provide metadata about the component. By default, Stencil won’t use shadow DOM, but I like to be explicit by setting shadow: false inside the @Component decorator:

@Component({
tag: 'fancy-button',
shadow: false
})

Stencil uses JSX, so the styles are applied with a curly bracket ({}) syntax:

export class Button {
render() {
return <div><button class={buttonStyles}><slot/></button></div>
}
}

Here’s how a simple example component would look in Stencil:

import { css, injectGlobal } from 'emotion';
import {Component} from '@stencil/core';

const buttonStyles = css`
color: white;
font-size: 16px;
background-color: blue;
&:hover {
background-color: purple;
}
`
@Component({
tag: 'fancy-button',
shadow: false
})
export class Button {
render() {
return <div><button class={buttonStyles}><slot/></button></div>
}
}

Applying styles for LitElement
LitElement, on the other hand, uses shadow DOM by default. When creating a custom element with LitElement, the LitElement class is extended. LitElement has a createRenderRoot() method, which creates and opens a shadow DOM:

createRenderRoot() {
return this.attachShadow({mode: 'open'});
}

Don’t want to make use of shadow DOM? That requires re-implementing this method inside the component class:

class Button extends LitElement {
createRenderRoot() {
return this;
}
}

Inside the render function, we can reference the styles we defined using a template literal:

render() {
return html`<button class=${buttonStyles}>hello world!</button>`
}It’s worth noting that when using LitElement, we can only use a slot element when also using shadow DOM (Stencil does not have this problem).
Put together, we end up with:

import {LitElement, html} from 'lit-element';
import {css, injectGlobal} from 'emotion';
const buttonStyles = css`
color: white;
font-size: 16px;
background-color: blue;
&:hover {
background-color: purple;
}
`

class Button extends LitElement {
createRenderRoot() {
return this;
}
render() {
return html`<button class=${buttonStyles}>hello world!</button>`
}
customElements.define('fancy-button', Button);

Understanding Emotion
We don’t have to stress over naming our button — a random class name will be generated by Emotion.
We could make use of CSS nesting and attach a class only to a parent element. Alternatively, we can define styles as separate tagged template literals:

const styles = {
heading: css`
font-size: 24px;
`,
para: css`
color: pink;
`
}

And then apply them separately to different HTML elements (this example uses JSX):

render() {
return <div>
<h2 class={styles.heading}>lorem ipsum</h2>
<p class={styles.para}>lorem ipsum</p>
</div>
}

Styling the container
So far, we’ve styled the inner contents of the custom element. To style the container itself, we need another import from Emotion.

import {css, injectGlobal} from 'emotion';

injectGlobal injects styles into the “global scope” (like writing regular CSS in a traditional stylesheet — rather than generating a random class name). Custom elements are display: inline by default (a somewhat odd decision from the spec authors). In almost all cases, I change this default with a style applied to all instances of the component. Here’s how we can change that up for our fancy-button, making use of injectGlobal:

injectGlobal`
fancy-button {
display: block;
}
`

Why not just use shadow DOM?
If a component could end up in any codebase, then shadow DOM may well be a good option. It’s ideal for third-party widgets — any CSS that’s applied to the page simply won’t break the component, thanks to the isolated nature of shadow DOM. That’s why it’s used by Twitter embeds, to take one example. However, the vast majority of us make components for a particular site or app and nowhere else. In that situation, shadow DOM can arguably add complexity with limited benefit.

I had so much fun at An Event Apart Seattle! There is something nice about sitting back and basking in the messages from a variety of such super smart people.
I didn’t take comprehensive notes of each talk, but I did jot down little moments that flickered my brain. I’ll post them here! Blogging is fun! Again, note that these moments weren’t necessarily the main point of the speaker’s presentation or reflective of the whole journey of the topic — they are little micro-memorable moments that stuck out to me.Jeffrey Zeldman brought up the reading apps Instapaper (still around!) and Readability (not around… but the technology is what seeped into native browser tech). He called them a vanguard (cool word!) meaning they were warning us that our practices were pushing users away. This turned out to be rather true, as they still exist and are now joined by new technologies, like AMP and native reader mode, which are fighting the same problems.
Margot Bloomstein made a point about inconsistency eroding our ability to evaluate and build trust. Certainly applicable to websites, but also to a certain President of the United States.
President Flip Flops
Sarah Parmenter shared a powerful moment where she, through the power of email, reached out to Bloom and Wild, a flower mail delivery service, to tell them a certain type of email they were sending she found to be, however unintentionally, very insensitive. Sarah was going to use this as an example anyway, but the day before, Bloom and Wild actually took her advice and implemented a specialized opt-out system.
This not only made Sarah happy that a company could actually change their systems to be more sensitive to their customers, but it made a whole ton of people happy, as evidenced by an outpouring of positive tweets after it happened. Turns out your customers like it when you, ya know, think of them.
Eric Meyer covered one of the more inexplicable things about pseudo-elements: if you content: url(/icons/icon.png); you literally can’t control the width/height. There are ways around it, notably by using a background image instead, but it is a bit baffling that there is a way to add an image to a page with no possible way to resize it.
Literally, the entire talk was about pseudo-elements, which I found kinda awesome as I did that same thing eight years ago. If you’re looking for some nostalgia (and are OK with some cringe-y moments), here’s the PDF.
Eric also showed a demo that included a really neat effect that looks like a border going from thick to thin to thick again, which isn’t really something easily done on the web. He used a pseudo, but here it is as an <hr> element:
See the Pen CSS Thick-Thin-Thick Line by Chris Coyier (@chriscoyier) on CodePen.
Rachel Andrew had an interesting way of talking about flexbox. To paraphrase:

Flexbox isn’t the layout method you think it is. Flexbox looks at some disparate things and returns some kind of reasonable layout. Now that grid is here, it’s a lot more common to use that to be much more explicit about what we are doing with layout. Not that flexbox isn’t extremely useful.

Rachel regularly pointed out that we don’t know how tall things are in web design, which is just so, so true. It’s always been true. The earlier we embrace that, the better off we’ll be. So much of our job is dealing with overflow.
Rachel brought up a concept that was new to me, in the sense that it has an official name. The concept is “data loss” through CSS. For example, aligning something a certain way might cause some content to become visually hidden and totally unreachable. Imagine some boxes like this, set in flexbox, with center alignment:
No “data loss” there because we can read everything. But let’s say we have more content in some of them. We can never know heights!
If that element was along the top of a page, for example, no scrollbar will be triggered because it’s opposite the scroll direction. We’d have “data loss” of that text:
We now have alignment keywords that help with this. Like, we can still attempt to center, but we can save ourselves by using safe center (unsafe center being the default):
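In CSS, the safe keyword goes right on the alignment property. A minimal sketch (the class name is illustrative):

```css
.boxes {
  display: flex;
  /* If an item is taller than the container, "safe" falls back to
     start alignment so the overflowing content stays reachable */
  align-items: safe center;
}
```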
Rachel also mentioned overlapping as a thing that grid does better. Here’s a kinda bad recreation of what she showed:
See the Pen Overlapping Figure with CSS Grid by Chris Coyier (@chriscoyier) on CodePen.
I was kinda hoping to be able to do that without being as explicit as I am being there, but that’s as close as I came.
Jen Simmons showed us a ton of different scenarios involving both grid and flexbox. She made a very clear point that a grid item can be a flexbox container (and vice versa).
Perhaps the most memorable part is how honest Jen was about how we arrive at the layouts we’re shooting for. It’s a ton of playing with the possible values and employing a little trial and error. Happy accidents abound! But there is a lot to know about the different sizing values and placement possibilities of grid, so the more you know the more you can play. While playing, the layout stuff in Firefox DevTools is your best bet.
Flexbox with gap is gonna be sweet.
There was a funny moment in Una Kravets’ talk about brainstorming the worst possible ideas.
The idea is that even though brainstorm sessions are supposed to be judgment-free, they never are. Bad ideas are meant to be bad, so the worst you can do is have a good idea. Even better, starting with good ideas is problematic in that it’s easy to get attached to an idea too early, whereas bad ideas allow more freedom to jump through ideation space and land on better ideas.
Scott Jehl mentioned a fascinating idea where you can get the benefits of inlining code and caching files at the same time. That’s useful for stuff we’ve gotten used to seeing inlined, like critical CSS. But you know what else is awesome to inline? SVG icon systems. Scott covered the idea in his talk, but I wanted to see if I could give it a crack myself.
The idea is that a fresh page visit inlines the icons, but also tosses them in cache. Then other pages can <svg><use> them out of the cache.
Here’s my demo page. It’s not really production-ready. For example, you’d probably want to do another pass where you Ajax for the icons and inject them by replacing the <use> so that everywhere is actually using inline <svg> the same way. Plus, a server-side system would be ideal to display them either way depending on whether the cache is present or not.
Jeremy Keith mentioned the incredible prime number shitting bear, which is, as you might suspect, computationally expensive. He mentioned it in the context of web workers, which is essentially JavaScript that runs in a separate thread, so it won’t/can’t slow down the operation of the current page. I see that same idea has crossed other people’s minds.
I’m sad that I didn’t get to see every single talk because I know they were all amazing. There are plenty of upcoming shows with some of the same folks!

I want you to take a second and think about Twitter, and think about it in terms of scale. Twitter has 326 million users. Collectively, we create ~6,000 tweets every second. Every minute, that’s 360,000 tweets created. That sums up to nearly 200 billion tweets a year. Now, what if the creators of Twitter had been paralyzed by how to scale and they didn’t even begin?
That’s me on every single startup idea I’ve ever had, which is why I love serverless so much: it handles the issues of scaling, leaving me to build the next Twitter!

Live metrics with Application Insights
As you can see in the above, we scaled from one to seven servers in a matter of seconds, as more user requests come in. You can scale that easily, too.
So let’s build an API that will scale instantly as more and more users come in and our workload increases. We’re going to do that by answering the following questions:
How do I create a new serverless project?
With every new technology, we need to figure out what tools are available for us and how we can integrate them into our existing tool set. When getting started with serverless, we have a few options to consider.
First, we can use the good old browser to create, write and test functions. It’s powerful, and it enables us to code wherever we are; all we need is a computer and a browser running. The browser is a good starting point for writing our very first serverless function.
Serverless in the browser
Next, as you get more accustomed to the new concepts and become more productive, you might want to use your local environment to continue with your development. Typically you’ll want support for a few things:
Writing code in your editor of choice
Tools that do the heavy lifting and generate the boilerplate code for you
Run and debug code locally
Support for quickly deploying your code
Microsoft is my employer and I’ve mostly built serverless applications using Azure Functions, so for the rest of this article I’ll continue using them as an example. With Azure Functions, you’ll have support for all these features when working with the Azure Functions Core Tools, which you can install from npm:

npm install -g azure-functions-core-tools

Next, we can initialize a new project and create new functions using the interactive CLI:
func CLI
If your editor of choice happens to be VS Code, then you can use it to write serverless code too. There’s actually a great extension for it.
Once installed, a new icon will be added to the left-hand sidebar — this is where we can access all our Azure-related extensions! All related functions can be grouped under the same project (also known as a function app). This is like a folder for grouping functions that should scale together and that we want to manage and monitor at the same time. To initialize a new project using VS Code, click on the Azure icon and then the folder icon.
Create new Azure Functions project
This will generate a few files that help us with global settings. Let’s go over those now.
host.json
We can configure global options for all functions in the project directly in the host.json file.
In it, our function app is configured to use the latest version of the serverless runtime (currently 2.0). We also configure functions to timeout after ten minutes by setting the functionTimeout property to 00:10:00 — the default value for that is currently five minutes (00:05:00).
In some cases, we might want to control the route prefix for our URLs or even tweak settings, like the number of concurrent requests. Azure Functions even allows us to customize other features like logging, healthMonitor and different types of extensions.
Here’s an example of how I’ve configured the file:

// host.json
{
"version": "2.0",
"functionTimeout": "00:10:00",
"extensions": {
"http": {
"routePrefix": "tacos",
"maxOutstandingRequests": 200,
"maxConcurrentRequests": 100,
"dynamicThrottlesEnabled": true
}
}
}

Application settings
Application settings are global settings for managing runtime, language and version, connection strings, read/write access, and ZIP deployment, among others. Some are settings that are required by the platform, like FUNCTIONS_WORKER_RUNTIME, but we can also define custom settings that we’ll use in our application code, like DB_CONN which we can use to connect to a database instance.
While developing locally, we define these settings in a file named local.settings.json and we access them like any other environment variable.
Again, here’s an example snippet that connects these points:

// local.settings.json
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "your_key_here",
"FUNCTIONS_WORKER_RUNTIME": "node",
"WEBSITE_NODE_DEFAULT_VERSION": "8.11.1",
"FUNCTIONS_EXTENSION_VERSION": "~2",
"APPINSIGHTS_INSTRUMENTATIONKEY": "your_key_here",
"DB_CONN": "your_key_here",
}
}

Azure Functions Proxies
Azure Functions Proxies are implemented in the proxies.json file, and they enable us to expose multiple function apps under the same API, as well as modify requests and responses. In the code below we’re publishing two different endpoints under the same URL.

// proxies.json
{
"$schema": "http://json.schemastore.org/proxies",
"proxies": {
"read-recipes": {
"matchCondition": {
"methods": ["POST"],
"route": "/api/recipes"
},
"backendUri": "https://tacofancy.azurewebsites.net/api/recipes"
},
"subscribe": {
"matchCondition": {
"methods": ["POST"],
"route": "/api/subscribe"
},
"backendUri": "https://tacofancy-users.azurewebsites.net/api/subscriptions"
}
}
}

Create a new function by clicking the thunder icon in the extension.
Create a new Azure Function
The extension will use predefined templates to generate code, based on the selections we made — language, function type, and authorization level.
We use function.json to configure what type of events our function listens to and optionally to bind to specific data sources. Our code runs in response to specific triggers, which can be of type HTTP when we react to HTTP requests, or blob when we run code in response to a file being uploaded to a storage account. Other commonly used triggers can be of type queue, to process a message uploaded on a queue, or timer, to run code at specified time intervals. Function bindings are used to read and write data to data sources or services like databases or send emails.
Here, we can see that our function is listening to HTTP requests and we get access to the actual request through the object named req.

// function.json
{
"disabled": false,
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": ["get"],
"route": "recipes"
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}

index.js is where we implement the code for our function. We have access to the context object, which we use to communicate with the serverless runtime. We can do things like log information and set the response for our function, as well as read and write data from the bindings object. Sometimes, our function app will have multiple functions that depend on the same code (i.e. database connections) and it’s good practice to extract that code into a separate file to reduce code duplication.

// index.js
module.exports = async function (context, req) {
context.log('JavaScript HTTP trigger function processed a request.');

if (req.query.name || (req.body && req.body.name)) {
context.res = {
// status: 200, /* Defaults to 200 */
body: "Hello " + (req.query.name || req.body.name)
};
}
else {
context.res = {
status: 400,
body: "Please pass a name on the query string or in the request body"
};
}
};

Who’s excited to give this a run?
How do I run and debug Serverless functions locally?
When using VS Code, the Azure Functions extension gives us a lot of the setup that we need to run and debug serverless functions locally. When we created a new project using it, a .vscode folder was automatically created for us, and this is where all the debugging configuration is contained. To debug our new function, we can use the Command Palette (Ctrl+Shift+P) by filtering on Debug: Select and Start Debugging, or typing debug.
Debugging Serverless Functions
One of the reasons why this is possible is because the Azure Functions runtime is open-source and installed locally on our machine when installing the azure-core-tools package.
How do I install dependencies?
Chances are you already know the answer to this, if you’ve worked with Node.js. Like in any other Node.js project, we first need to create a package.json file in the root folder of the project. That can be done by running npm init -y — the -y will initialize the file with default configuration.
Then we install dependencies using npm as we would normally do in any other project. For this project, let’s go ahead and install the MongoDB package from npm by running:

npm i mongodb

The package will now be available to import in all the functions in the function app.
How do I connect to third-party services?
Serverless functions are quite powerful, enabling us to write custom code that reacts to events. But code on its own doesn’t help much when building complex applications. The real power comes from easy integration with third-party services and tools.
So, how do we connect and read data from a database? Using the MongoDB client, we’ll read data from an Azure Cosmos DB instance I have created in Azure, but you can do this with any other MongoDB database.

// index.js
const MongoClient = require('mongodb').MongoClient;

// Initialize authentication details required for database connection
const auth = {
user: process.env.user,
password: process.env.password
};

// Initialize global variable to store database connection for reuse in future calls
let db = null;
const loadDB = async () => {
// If database client exists, reuse it
if (db) {
return db;
}
// Otherwise, create new connection
const client = await MongoClient.connect(
process.env.url,
{
auth: auth
}
);
// Select tacos database
db = client.db('tacos');
return db;
};

module.exports = async function(context, req) {
try {
// Get database connection
const database = await loadDB();
// Retrieve all items in the Recipes collection
let recipes = await database
.collection('Recipes')
.find()
.toArray();
// Return a JSON object with the array of recipes
context.res = {
body: { items: recipes }
};
} catch (error) {
context.log(`Error code: ${error.code} message: ${error.message}`);
// Return an error message and Internal Server Error status code
context.res = {
status: 500,
body: { message: 'An error has occurred, please try again later.' }
};
}
};

One thing to note here is that we’re reusing our database connection rather than creating a new one for each subsequent call to our function. This shaves roughly 300ms off every subsequent function call. I call that a win!
Where can I save connection strings?
When developing locally, we can store our environment variables, connection strings, and really anything that’s secret in the local.settings.json file, then access it all in the usual manner, using process.env.yourVariableName.

local.settings.json
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "",
"FUNCTIONS_WORKER_RUNTIME": "node",
"user": "your-db-user",
"password": "your-db-password",
"url": "mongodb://your-db-user.documents.azure.com:10255/?ssl=true"
}
}

In production, we can configure the application settings on the function’s page in the Azure portal.
However, another neat way to do this is through the VS Code extension. Without leaving the IDE, we can add new settings, delete existing ones, or upload/download them to the cloud.
Debugging Serverless Functions
How do I customize the URL path?
With the REST API, there are a couple of best practices around the format of the URL itself. The one I settled on for our Recipes API is:
GET /recipes: Retrieves a list of recipes
GET /recipes/1: Retrieves a specific recipe
POST /recipes: Creates a new recipe
PUT /recipes/1: Updates recipe with ID 1
DELETE /recipes/1: Deletes recipe with ID 1
The URL that is made available by default when creating a new function is of the form http://host:port/api/function-name. To customize the URL path and the method that we listen to, we need to configure them in our function.json file:

// function.json
{
"disabled": false,
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": ["get"],
"route": "recipes"
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}

Moreover, we can add parameters to our function’s route by using curly braces: route: recipes/{id}. We can then read the ID parameter in our code from the req object:

const recipeId = req.params.id;

How can I deploy to the cloud?
Congratulations, you’ve made it to the last step! 🎉 Time to push this goodness to the cloud. As always, the VS Code extension has your back. All it really takes is a single right-click and we’re pretty much done.
Deployment using VS Code
The extension will ZIP up the code with the Node modules and push them all to the cloud.
While this option is great when testing our own code or maybe when working on a small project, it’s easy to overwrite someone else’s changes by accident — or even worse, your own.

“Don’t let friends right-click deploy!” — every DevOps engineer out there

A much healthier option is setting up GitHub deployment, which can be done in a couple of steps in the Azure portal, via the Deployment Center tab.
GitHub deployment
Are you ready to make Serverless APIs?
This has been a thorough introduction to the world of Serverless APIs. However, there’s much, much more than what we’ve covered here. Serverless enables us to solve problems creatively and at a fraction of the cost we usually pay for using traditional platforms.
Chris has mentioned it in other posts here on CSS-Tricks, but he created this excellent website where you can learn more about serverless and find both ideas and resources for things you can build with it. Definitely check it out and let me know if you have other tips or advice for scaling with serverless.

You don’t need to change your existing optimization plugin; image optimization is just a small part of what we do. If you are happy with ShortPixel, for example, feel free to continue using it. OptiMole would then take care only of serving your image at the RIGHT size, advanced cropping, and smart lazy-loading.

I like this point that Jonathan Snook made on Twitter and I’ve been thinking about it non-stop because it describes something that’s really hard about writing CSS:

I feel like that tweet sounds either very shallow or very deep depending on how you look at it, but in reality, I don’t think any system, framework, or library really takes this into consideration — especially in the context of maintainability.
— Snook (@snookca) February 26, 2019

In fact, I reckon this is the hardest thing about writing maintainable CSS in a large codebase. It’s an enormous problem in my day-to-day work and I reckon it’s what most technical debt in CSS eventually boils down to.

Let’s imagine we’re styling a checkbox, for example – that checkbox probably has a position on the page, some margins, and maybe other positioning styles, too. And that checkbox might be green but turns blue when you click it.
I think we can distinguish between these two types of styles as layout and appearance.
But writing good CSS requires keeping those two types of styles separated. That way, the checkbox styles can be reused time and time again without having to worry about how those positioning styles (like margin, padding or width) might come back to bite you.
At Gusto, we use Bootstrap’s grid system, which is great because we can write HTML that explicitly separates these concerns, like so:

<div class="row">
<div class="col-6">
<!-- Checkbox goes here -->
</div>
<div class="col-6">
<!-- Another element can be placed here -->
</div>
</div>

Otherwise, you might end up writing styles like this, which will cause a ton of issues if those checkbox styles are reused in the future:

.checkbox {
width: 40%;
margin-bottom: 60px;
/* Other checkbox styles */
}

When I see code like this, my first thought is, “Why is the width 40% – and 40% of what?” All of a sudden, I can see that this checkbox class is now dependent on some other bit of code that I don’t understand.
So I’ve begun to think about all CSS as fitting into one of those two buckets: appearance and layout. That’s important because I believe those two types of styles should almost never be baked into one set of styles or one class. Layout on the page should be one set of styles, and appearance (like what the checkbox actually looks like) should be in another. And whether you do that with HTML or a separate CSS class is up for debate.
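To make the distinction concrete, here’s a minimal sketch of how the two buckets might be separated into their own classes (the .form-field class name is hypothetical):

```css
/* Appearance: what the checkbox looks like, reusable anywhere */
.checkbox {
  border: 1px solid #2e7d32;
  border-radius: 3px;
  /* Other visual styles */
}

/* Layout: where this particular instance sits on this particular page */
.form-field {
  width: 40%;
  margin-bottom: 60px;
}
```

In the markup, the two classes can then be combined (`<div class="form-field"><input class="checkbox"></div>`), so the checkbox’s appearance styles never carry positioning baggage with them.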
The reason why this is an issue is that folks will constantly be trying to overwrite layout styles and that’s when we eventually wind up with a codebase that resembles a spaghetti monster. I think this distinction is super important to writing great CSS that scales properly. What do you think? Add a comment below!

There are many ways to build a layout in HTML and CSS, and every developer has their own way of doing it. Getting to know the thinking process inside a front-end developer’s mind is extremely useful, as it gives us different perspectives on how to solve certain problems.
In this article, I will dive into the journey of building some of the components in a project I called “Nadros”, and write down my thought process, from the perspective of a designer and a developer.
Nadros is an imaginary idea for an online video courses platform. In this article, I’ll focus on building the header component.
Click here to view the full design.
Design Components
I extracted all components into one page so I could look at them together. This makes it easy to find inconsistencies between UI elements. Also, having all of them in one place can suggest ways to unify some UI components, or to create variants of specific components.
Check the full components list.

Check out the final demo.
Most of the time, the header is the first component that I start thinking about in each Front End project. Usually, it takes a good amount of time to build it perfectly across viewport sizes.
Different designers have different ways of thinking. One might provide only one state of the header without considering how it would look on smaller/bigger sizes.
But for a developer, there are a lot of details to work on while building it in HTML and CSS. The below illustrates how a hybrid person (Designer & Developer) might think in this case.
HTML Markup

<header>
<div class="container">
<a href="#"><img src="img/nadros.svg" alt="Nadros"></a>
<nav><!-- nav elements --></nav>
<form><!-- Search --></form>
<a href="link-to-profile/">
<img src="img/shadeed.jpg" alt="Ahmad Shadeed">
</a>
</div>
</header>

I prefer to start with the markup. This can help in writing semantic elements before going into the design and CSS details. Here is the header without CSS:
Iconography
The icons will be added as below:
Extract them as SVGs.
Combine all icons into <symbol> elements in one SVG and reuse them across the page with the <use> element.
I wrote an article about that topic in detail.
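As a rough sketch of that approach (the icon id and path data below are placeholders, not the actual Nadros assets):

```html
<!-- All icons combined into one hidden SVG sprite -->
<svg style="display: none;">
  <symbol id="icon-search" viewBox="0 0 24 24">
    <!-- placeholder path data -->
    <path d="M10 2a8 8 0 1 0 4.9 14.3l5.4 5.4 1.4-1.4-5.4-5.4A8 8 0 0 0 10 2z"/>
  </symbol>
</svg>

<!-- Reused anywhere on the page -->
<svg class="icon" width="24" height="24">
  <use xlink:href="#icon-search"></use>
</svg>
```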
General Layout
I reset some element styles and added the below CSS:

.site-header__wrapper {
display: flex;
flex-wrap: wrap;
}

.main-nav ul {
display: flex;
}

The header started to take shape, and while thinking about placing the search and user avatar at the right side, I asked myself: What if I wanted to have a “Get Started” button there?
To account for that, I need to encapsulate the form and user avatar in a <div> that will be positioned at the right side. It’s possible to add whatever is needed inside that element.
I did the below quick mockups to show some possible scenarios on the right side:

It could have notifications and messages.
It could have secondary links.
It could have a “Switch account” button.

<header class="site-header">
<div class="container">
<!-- Logo and Navigation -->
<div class="site-header__section">
<form><!-- Search --></form>
<a href="link-to-profile/" class="user-avatar">
<img src="img/shadeed.jpg" alt="Ahmad Shadeed">
</a>
</div>
</div>
</header>

Since the header is a flex container, it’s possible to control the child items. In our case, using margin-left: auto for the form and user avatar element will push it to the far right.

.site-header__section {
margin-left: auto;
}

Logo and Navigation Items
For this part of the header, it should be aligned and consistent as per the design mockup.
The first thing is the logo. I added the below styles to make it look better:

.logo {
display: flex;
align-items: center;
margin-right: 16px;
img {
position: relative;
top: -3px;
}
}

I used Flexbox and positioning to align the logo vertically. Unfortunately, top: -3px has to be used, as this is the best approach I know (this depends on the logo itself, so if the logo is different, it might not be needed).
Next are the navigation items, and here is what I did:
Aligned the icon using vertical-align: middle which is really handy in this case.
Added padding to the navigation items. It’s important to add the padding to the <a> elements and not the <li> items, as the former makes the whole area clickable, while the latter only makes the text clickable.
Incorrect: Padding on the <li> element
Correct: Padding on the <a> element
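The difference can be sketched like this (selectors assumed from the markup above):

```css
/* Incorrect: only the text inside the link is clickable */
.main-nav li {
  padding: 20px 24px;
}

/* Correct: the whole padded area triggers the link */
.main-nav a {
  display: inline-block;
  padding: 20px 24px;
}
```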
Current Result
Search Form
When starting with a new component, I often add outline: solid 1px red to make sure that the CSS I want to add will be applied to the correct component.
I added the below basic styles:

.search-form {
position: relative;
width: 350px;
outline: solid 1px red;
}

.search-form__button {
position: absolute;
right: 0;
top: 0;
}

Let’s outline the important parts that should be considered while building the search component:
The <input> should have a <label> attached to it. I didn’t add that in the initial HTML, but it’s important to add for accessibility reasons.

<form class="search-form">
<label for="search">What do you want to learn?</label>
<input type="search" name="" id="search" placeholder="What do you want to learn?">
<button class="search-form__button">Search</button>
</form>

The <button> will be represented as the search icon on the right side. In our case, the “Search” word should be replaced with an icon. To do it correctly, the text should be hidden visually from the document.

<button class="search-form__button"><span class="visually-hidden">Search</span></button>

.visually-hidden {
position: absolute;
overflow: hidden;
clip: rect(0 0 0 0);
height: 1px;
width: 1px;
margin: -1px;
padding: 0;
border: 0;
}

Check out CSS-Tricks for more details.
After completing the style for the search form, here is the final result.
What I did is the following:
Positioned the button absolutely to the right side.
Added the background and padding.
Set the font size to 16px. This is important for iOS: if the font size is less than 16px, the page will zoom when the input is focused.
While testing it, I noticed two issues. Let’s have a look at them.
The padding on the right side should be equal to the space occupied by the search icon. I fixed it by making the padding 38px, which is equal to the width of the search button.
On focus, the search icon color should be changed, as the grey looks odd on the blue background.
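The first fix might be sketched like this (the exact input selector is an assumption):

```css
.search-form input {
  font-size: 16px;      /* prevents iOS from zooming the page on focus */
  padding-right: 38px;  /* reserves space equal to the search button's width */
}
```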
How to make the icon color different when the input is focused? Well, there are two options.
Using the CSS Adjacent Sibling Combinator

input:focus + .search-form__button .search-form__icon {
fill: #fff;
}

I didn’t like how much nesting this approach requires.
Or Using CSS :focus-within

.search-form:focus-within .search-form__icon {
fill: #fff;
}

This is much better; it’s more intuitive and straightforward. For example, I can make the search form width bigger on focus.

.search-form:focus-within {
width: 400px;
}

Avatar
Start by adding a width and height of 38px. Then, add space between the avatar and the search form without tying the margin to either of them. Below is an example of the naive approach:

.avatar {
margin-left: 8px; /* Incorrect way */
This is incorrect and not a future-proof solution. I want to add the spacing dynamically without specifying the element for it. Here are some cases where the above CSS will fail:
Having other components in the right section of the header.
Removing the search form and keeping the avatar only. The space won’t have any benefit.
Once again, the CSS Adjacent Sibling Combinator to the rescue!

.site-header__section > * + * {
margin-left: 8px;
}

In that case, the spacing is added dynamically and only if there is more than one element. To prove that, I will copy the avatar to the left of the search form for testing purposes.
Pill Component
It’s important to ensure that it works with short and long text. For example, it could be used as a badge for numbers and not just text.

.pill {
background: var(--color-brand-primary);
color: #fff;
text-transform: uppercase;
font-size: 13px;
letter-spacing: 0.5px;
border-radius: 20px;
font-weight: 700;
padding: 3px 7px;
}

The above demonstrates how the component will look in different cases. To make that work as expected, padding should be added to set a minimum size for it.
In case the padding was incorrect, like padding: 2px 4px, it will make the component look odd for short text.
Mobile Layout
Reduce the padding for the navigation items
To fit more items when the viewport width gets smaller, I reduced the padding for the links to 16px 12px instead of 20px 24px.

a {
/*Other styles*/
padding: 16px 12px;
@media (min-width: 1250px) {
padding: 20px 24px;
}
}

After that, I thought about the idea of showing the icon only when the navigation item is active. I liked it! Since the icon isn’t mandatory in our case, it’s ok to hide it in small views.

.main-nav__item svg {
display: none;
/**Other styles**/
}

/**Icon is always visible for bigger viewports (greater than 1150px)**/
@media (min-width: 1150px) {
.main-nav__item svg {
display: inline-block;
}
}

/**Show the icon for the active element, when the width of viewport is less than 1150px**/
@media (max-width: 1150px) {
.main-nav__item.is-active svg {
display: inline-block;
}
}

And here is the current state of the header.
But, there is an issue. Since the icon is only visible for the active navigation item, this caused the other items’ text to be slightly misaligned with the active one. See the below screenshot:
Embarrassing, right? 😀
This is because the active page has an icon, while the others don’t, so the icon caused the text to be pushed to the bottom. A workaround is to add a negative margin to the icon.

@media (max-width: 1150px) {
.main-nav__item.is-active svg {
display: inline-block;
margin-top: -4px;
}
}
}

Responsive Logo
While I’m thinking about the next thing to shrink or make smaller, I got the idea to have a responsive logo. I heard about HTML <picture> but never used it.
I jumped to Adobe XD and extracted two versions of the logo, one with an icon only, and the other with both icon and text. The HTML below shows the full logo only when the viewport width is equal to or greater than 880 pixels; the icon-only logo is shown by default for smaller viewports.

<picture class="logo">
<source srcset="img/logo-full.svg" media="(min-width: 880px)">
<img src="img/logo-small.svg" />
</picture>

Search Form on Small Viewports
Until this point, I didn’t alter the search form. To get more space, the form should be hidden and replaced with a button that will toggle it once clicked.
This is the current state of the header. Notice how the form jumps to a new line.
What about using Flexbox to make the wrapper span the available space? A video shows the header flex wrapper using the Firefox Flexbox tool.
I added flex: 1 to the .site-header__section element, and it was a good enhancement. Now it doesn’t move to a new line unless I want it to.
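That enhancement is essentially a one-liner:

```css
.site-header__section {
  flex: 1; /* let the right section grow into the available space */
}
```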
Ok, the next step is to hide the form and add a button with the search icon. How will the search look once it’s active? I got a couple of thoughts:

To have it as a popup with an arrow pointing to the search toggle. Even though this option makes the search form attached to the toggle button, I see that it will fail if I decided to add a suggestions list while the user is typing.

To have it as a full-width input, below the header. This option is good; I can easily add a suggestions list while typing, and it will also look good on different mobile sizes.

To show the search as an overlay on the whole page. For the current context, this sounds like an exaggerated solution. If there is a suggestions list, it could be a great one.

Result: I will go with the 2nd solution.
How to position the search form directly below the header? The first thought I got is to position the form absolutely relative to the header element, and then make it full width.

.site-header {
position: relative;
}

.search-form {
position: absolute;
left: 0;
right: 0;
top: 100%;
padding: 8px;
background: #06aed5;
}

The next step is to put the search form away and hide it. I need to add a toggle button in the header and then write the needed JavaScript code for that.

var searchToggle = document.querySelector('.search-form__toggle');
var searchForm = document.querySelector('.search-form');

searchToggle.addEventListener('click', function(){
this.classList.toggle('is-active');
searchForm.classList.toggle('is-active');
});

For now, the form reveals without animation and it doesn’t feel natural to me. What about adding a sliding animation?
The first thought that came to mind is to translate the form out of the viewport so that it slides from top to bottom. In such cases, I usually jump to the DevTools and start playing around with CSS to achieve what I need.
Check out the below video for the process of adding the animation.
[embedded content]

@media (max-width: 700px) {
.search-form {
/**Position: absolute.. etc**/
z-index: -1;
visibility: hidden;
transform: translateY(-100%);
}
.search-form.is-active {
visibility: visible;
transform: translateY(0);
}
}

Final Result
Mobile Navigation
After hiding the navigation, I need to add the mobile navigation toggle which will activate the menu. The real question is, how will the navigation look once it’s opened? Ok, I got a couple of thoughts:

To have it as an overlay that covers the whole page. I feel that this concept is interesting from a visual perspective, but for our case, there are only 4 navigation items, and it’s expected to end up with empty space below them.

As an off-canvas sliding menu. Once the toggle button is clicked, the navigation will slide from left to right. This concept is great and used a lot, but for our case, it’s not worth it. We’ll end up with empty space.

In-page navigation. The concept of this navigation is similar to how the search form works. It could slide from top to bottom.

I’m leaning towards concept #3 since it’s more consistent with the search form functionality, and it will save a good amount of space. Finally, it’s more suitable for the few navigation items I have.
At first, I need to hide the mobile menu. The correct way to do that is to hide the <ul> element only and not the <nav> element, so screen reader users can find the navigation easily. Check out this article for more details on that.
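A minimal sketch of that, assuming the 570px breakpoint used elsewhere for the header:

```css
@media (max-width: 570px) {
  /* Hide only the list; the <nav> landmark stays discoverable */
  .main-nav ul {
    display: none;
  }
}
```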
When the <ul> is hidden, the header looks crowded. The reason is that I depended on the top and bottom padding of the navigation links. Since they’re hidden, the spacing is gone.

.site-header__wrapper {
@media (max-width: 570px) {
padding: 0.5rem 0; /*Adding top and bottom padding to give some breathing space.*/
justify-content: space-between;
}
}

Since there is a Flex wrapper for the header, I thought about reordering the items. By adding order: -1 to the .main-nav element, it will be the first item from the left. The toggle button for the navigation is placed inside the <nav> element.

<nav class="main-nav">
<button class="nav__toggle">
<span class="visually-hidden">Menu</span>
</button>
<ul>..</ul> <!-- This is hidden -->
</nav>

.main-nav {
@media (max-width: 570px) {
order: -1; /*Reordering the main-nav element to make it the first one from the left*/
}
}

Also, it’s important to override some styles for the right wrapper.

.site-header__section {
@media (max-width: 570px) {
margin-left: initial; /*Initially, this value was auto.*/
flex: initial; /*It was 1*/
}
}

After adding all the needed CSS, here is the current look of the header on mobile.
Next, I will work on the navigation items on mobile and build the in-page option I picked.
Check out the below video to see how I edited the navigation in DevTools. This is the result. Once everything was ok, I added all the needed styles for the navigation, with the animation on toggle.
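I won’t repeat every rule here, but a sketch of the slide-down could mirror the search form approach (selectors and breakpoint are assumptions):

```css
@media (max-width: 570px) {
  .main-nav ul {
    position: absolute;
    left: 0;
    right: 0;
    top: 100%;
    z-index: -1;
    visibility: hidden;
    transform: translateY(-100%);
    transition: 0.3s ease-out;
  }
  .main-nav ul.is-active {
    visibility: visible;
    transform: translateY(0);
  }
}
```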
[embedded content]

Since this is mostly a decoration, or let’s say a low-priority thing, I’ll work on it now. The top border is a gradient from #07859D to #079EBA.
How can I add that to the header without creating a separated HTML element for it, or even without using a pseudo element?
CSS Backgrounds to the rescue!

.site-header {
background-color: #fff;
background-image: linear-gradient(to right, #07859D, #079EBA);
background-size: 100% 5px;
background-repeat: no-repeat;
padding-top: 5px;
}

Navigation Toggle Button
Also known as the hamburger button. I need to have 3 stacked lines, and once toggled, it should morph into an “x” shape.
What do I need to have 3 stacked lines? Is it possible to do it with 1 HTML element?
It turned out that I can use a span element with its pseudo elements. As a result, I have 3 stacked lines.

<button class="main-nav__toggle" aria-label="Menu">
<span class="main-nav__toggle__icon"></span>
</button>

Since the button doesn’t have a text label, it’s important to add aria-label to make it accessible. Otherwise, screen readers won’t announce that this is a menu button.

.main-nav__toggle__icon {
display: block;
height: 2px;
background: #858585;
border-radius: 2px;
&:after,
&:before {
content: "";
position: relative;
display: block;
height: 2px;
background: #858585;
border-radius: 2px;
transition: 0.2s ease-out;
}
&:after {
top: 4px;
}
&:before {
top: -6px;
}
}

The Result
Now that I have the button with its final look, I need to morph it into an “X” shape once clicked. I used a combination of CSS transforms for that.

.main-nav__toggle {
/*Other styles*/
&.is-active {
background-color: var(--color-brand-primary);
.main-nav__toggle__icon {
background: transparent; /*Hide the middle line*/
&:before,
&:after {
background-color: #fff;
}
&:before {
transform-origin: right top;
transform: rotate(-45deg) translate(-2px, -4px);
}
&:after {
transform-origin: left bottom;
transform: rotate(45deg) translate(-8px, -11px);
}
}
}
}

Testing
Once all the core functionalities of the header are there, I started to randomly test everything.
Catch #1
When the search is activated and I open the navigation menu, the search stays there. The same happens the other way around. Only one of them should be active.
To fix this, we need to close the search when the navigation is opened, and close the navigation when the search is activated.

searchToggle.addEventListener('click', function(){
this.classList.toggle('is-active');
searchForm.classList.toggle('is-active');
navList.classList.remove('is-active');
});

navToggle.addEventListener('click', function () {
navList.classList.toggle('is-active');
searchForm.classList.remove('is-active');
});

Catch #2
When hovering on the nav items, the border looks a bit off for the items without an icon. This is due to one item having an icon while the rest don’t. To fix the issue, I added a min-height of 56px to the nav items.
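That fix might be sketched like this (the flex centering is my assumption for keeping the text vertically aligned):

```css
.main-nav__item {
  display: inline-flex;
  align-items: center;
  min-height: 56px; /* keeps hover borders aligned whether or not an icon is present */
}
```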
Fun Facts
CodeKit’s total number of actions for this component alone is 600, which is the number of times I hit CMD + S to save changes.
Conclusion
And that’s a wrap. It was a great journey to document every step of building the header component. Actually, I didn’t expect all of these details, since I’m used to working on them in real projects.
The next step is to build the other sections of the page. Stay tuned for the other parts of the article.
Hope you enjoyed it and thanks for reading! Do you have any feedback? Please let me know on Twitter @shadeed9
Thanks and Credits:
The one and only, Kholoud. She created the awesome illustrations and read the article a million times.
Mohammad S. Jaber: for his help in editing and making the article better.
Elisabeth Irgens: for providing feedback on the very first draft of the article.
Freepik for the logo and the “Designer” and “Developer” illustrations.