Service workers allow developers to build web applications that can run offline and support push messaging and background data synchronization.

A service worker is a JavaScript execution context that acts as a target for fetch, sync, push, and other functional events. The browser can wake it to handle these events even when the user is no longer browsing the website, the browser window is closed, or there is no network access.

Service workers allow web applications to intercept individual network requests, including those to the URLs used to open the app itself. This is mediated by fetch events dispatched to the service worker, which let the app generate custom network responses. The application can choose to return previously cached resources, allowing it to load in the absence of network connectivity.
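The cache-first strategy described above can be sketched as a plain function, so the logic is visible outside a service worker context. Here `cache` and `network` are illustrative stand-ins for the Cache Storage API and `fetch`:

```javascript
// Cache-first lookup: serve the stored copy if we have one,
// otherwise go to the network and remember the response.
async function cacheFirst(request, cache, network) {
  const cached = await cache.get(request);
  if (cached !== undefined) return cached; // works offline
  const response = await network(request); // fall back to the network
  await cache.set(request, response);      // store it for next time
  return response;
}

// Inside a real service worker, the same strategy is typically wired up as:
//   self.addEventListener('fetch', event => {
//     event.respondWith(
//       caches.match(event.request).then(r => r || fetch(event.request))
//     );
//   });
```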

The Background Sync API allows web apps to register deferred work while there is no network access. When the device is back online, a sync event is dispatched to the service worker for each pending sync registration, allowing the application to use the network to finish its tasks.
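The usual pattern is to queue outgoing work while offline and replay it when the sync event fires. A minimal sketch, where `outbox` and `send` are illustrative stand-ins for IndexedDB storage and a network call:

```javascript
// Replay queued tasks; anything that still fails stays queued
// for the next sync event.
async function flushOutbox(outbox, send) {
  const remaining = [];
  for (const task of outbox) {
    try {
      await send(task);     // replay the deferred request
    } catch (err) {
      remaining.push(task); // keep it for the next attempt
    }
  }
  return remaining;
}

// In a service worker this would typically run from a sync handler:
//   self.addEventListener('sync', event => {
//     if (event.tag === 'outbox') event.waitUntil(flushOutbox(...));
//   });
```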

Push messaging enables application back ends to send messages to the clients that are subscribed to push endpoints. Each incoming message triggers a push event on the service worker, which can use the message payload to display a notification to the user or perform other tasks.
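A typical push handler parses the message payload and turns it into notification options. The JSON shape used here ({title, body, url}) is an assumption, not a standard:

```javascript
// Map a push message payload to a notification title and options.
function notificationFromPush(payloadText) {
  const data = JSON.parse(payloadText);
  return {
    title: data.title || 'New message',
    options: { body: data.body || '', data: { url: data.url } },
  };
}

// In a service worker:
//   self.addEventListener('push', event => {
//     const { title, options } = notificationFromPush(event.data.text());
//     event.waitUntil(self.registration.showNotification(title, options));
//   });
```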

There are a number of service worker libraries that abstract away common tasks and integrate with build processes. The most commonly used one is Google's Workbox, the successor to sw-precache and sw-toolbox.

Most major social media platforms automatically generate inline previews for links shared by users in their posts. Often, the previews display unrelated images or text that looks out of place. This happens to websites that have not been optimized for social media.

Social media referrals account for almost one third of all referral traffic. By optimizing their websites for social sharing platforms, webmasters can provide a richer experience, increase engagement with the link shares, and bring more people to their websites.

How link previews are constructed

When someone shares a link on Twitter, Facebook, or another platform, the platform's crawler fetches the HTML at that URL. The scraper first looks for the meta tags containing the title, description, headline, preview image, and other information needed to construct the link preview box.

Depending on the platform, these boxes can be called article previews, link previews, (rich) cards, (rich) snippets, or rich pins.

If the page doesn't have the correct meta tags, the crawler falls back to internal heuristics to extract the page information from the content. Most of the time, the results are far from optimal.
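The first step a crawler performs can be sketched, in a much-simplified form, as pulling the Open Graph tags out of the page's HTML. Real scrapers use full HTML parsers and per-platform fallback heuristics; this regex-based version is illustrative only:

```javascript
// Extract og:* meta tags from an HTML string into a plain object.
function extractOpenGraph(html) {
  const tags = {};
  const re = /<meta\s+property="(og:[^"]+)"\s+content="([^"]*)"\s*\/?>/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    tags[m[1]] = m[2];
  }
  return tags;
}
```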

Meta tags for major social media networks

Each platform offers guidelines about which meta tags should be used. They are often similar to each other and can be used together.

Each major platform requires its own set of meta tags. Copy the relevant snippets into the head of your HTML templates, replacing the placeholder text with your own content.
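As a representative example, here is a combined snippet of the most widely used Open Graph and Twitter Card tags; the content values are placeholders, and each platform's documentation lists the full set it supports:

```html
<!-- Open Graph tags (used by Facebook, LinkedIn, Pinterest, and others) -->
<meta property="og:type" content="article">
<meta property="og:title" content="Page title">
<meta property="og:description" content="One or two sentences describing the page">
<meta property="og:image" content="https://example.com/preview.png">
<meta property="og:url" content="https://example.com/page">

<!-- Twitter Card tags -->
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Page title">
<meta name="twitter:description" content="One or two sentences describing the page">
<meta name="twitter:image" content="https://example.com/preview.png">
```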

Debugging link previews

Making meta tags work can require some tweaking. Major platforms provide debugging tools to let webmasters test how their pages are seen by the scrapers. The tools recrawl the provided URLs and show which meta tags were found and whether there are any errors or warnings.

Other important considerations

Preview images

The og:image and twitter:image tags should point to images sized 800×418 or 1600×836 pixels. If the image dimensions are known upfront, declare them with the og:image:width and og:image:height tags. This ensures that the image loads properly the first time the link is shared on Facebook, but may have no effect on other platforms.
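For example, with the dimensions declared upfront (the image URL is a placeholder):

```html
<meta property="og:image" content="https://example.com/preview.png">
<meta property="og:image:width" content="1600">
<meta property="og:image:height" content="836">
```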

The images should be either PNGs or JPGs. Most platforms can also understand GIFs, but only display the first frame of the animated image.

Pinterest supports multiple og:image tags. When several high-resolution images are listed, users can choose any of them when saving the link to their boards. Note that at most six images are used.

Facebook and Twitter process the linked images asynchronously. This means that the image may not be displayed on the card immediately after the link is posted on the platform for the first time.

Link content caching

When a link is shared for the first time, the crawler scrapes and caches the metadata from the URL. When changes are made to the meta content, subsequent attempts to share the page won't display the changes straight away. It can take as much as 30 days for the link details to update after the change. The cache can be invalidated manually using the Facebook Sharing Debugger and LinkedIn Post Inspector or programmatically (for Facebook only) using the Graph API.
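For Facebook, a forced re-scrape boils down to one Graph API request. The sketch below builds that request following the documented scrape parameter; the page URL and access token are placeholders you must supply:

```javascript
// Build the Graph API request that forces Facebook to re-scrape a URL.
function buildScrapeRequest(pageUrl, accessToken) {
  const params = new URLSearchParams({
    id: pageUrl,
    scrape: 'true',
    access_token: accessToken,
  });
  return {
    url: 'https://graph.facebook.com/',
    method: 'POST',
    body: params.toString(),
  };
}

// const req = buildScrapeRequest('https://example.com/post', '<token>');
// fetch(req.url, { method: req.method, body: req.body });
```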

Single page applications

All social sharing platforms inspect only the initial HTML content returned by the server. Single page applications need to make sure that the meta tags are server-side rendered so that they are visible to the crawlers.
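A minimal sketch of what server-side rendering the tags can look like: the server fills a placeholder in the HTML shell before responding, so crawlers see the tags without executing any JavaScript. The placeholder name and meta fields are illustrative:

```javascript
// Inject page-specific meta tags into an HTML shell template.
function renderShell(template, meta) {
  const tags = [
    `<meta property="og:title" content="${meta.title}">`,
    `<meta property="og:description" content="${meta.description}">`,
    `<meta property="og:image" content="${meta.image}">`,
  ].join('\n');
  return template.replace('<!--META-->', tags);
}
```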

Looking forward

Meta tags are key to the success of visitor acquisition through social media. Stay tuned to learn more about metadata and semantic markup, and how webmasters can leverage those techniques to help drive traffic to their websites.

Our names are very important to us. Each name has a story related to the person's cultural and familial background.

Pronunciation of names provokes constant uncertainty. For instance, how do you pronounce J. K. Rowling's surname? We know how she pronounces it — /ˈroʊlɪŋ/ — and that it rhymes with "bowling". However, her name is very often mispronounced as /ˈraʊlɪŋ/ so that it rhymes with "howling".

As we become a more multicultural society, names as simple as John will become less common.

The correct pronunciation of your name is whatever you decide. With Cofactor My Name, you can record the correct version of your name and present it to the world. My Name will then make it easy for others to register and remember the right pronunciation.

In biology and medicine, each species, genus, or family of organisms has a standard Latin name such as Escherichia coli, Saccharomyces cerevisiae, Phytophthora, and Rosaceae. As of 2018, about 1.8 million species—both extinct and extant—have been described, each bearing a unique scientific name.

Even though biological Latin is primarily a written language, these names do occur in speech. Our language is about both written and verbal communication after all. Knowing how to pronounce these names is important for being able to communicate our ideas clearly and effectively.

There are no hard and fast rules for pronouncing taxonomic names in English—English being the de facto language of science. Moreover, these names often derive from personal or geographical names. One such name is Scythris worcesterensis—a species of moth named after the Worcester area in South Africa. The pronunciation of "worcesterensis" will clearly be influenced by that of the original geographical name, /ˈwʊstər/ in this case.

Cofactor Ora's goal is to collect pronunciations of all systematic names found in the Google Knowledge Graph. By encouraging scientists to share their preferred pronunciations for Latin names and other terms that they regularly use in their speech, Cofactor Ora seeks to become the first crowdsourced pronunciation guide to taxonomic names and medical terms.

You are probably using Google on a daily basis. And you've probably noticed that the results often include entities relevant to your search displayed in the panels on the right-hand side of the page.

These entities come from the Google Knowledge Graph — an extensive collection of things ranging from sports teams, notable people, and rock bands to local businesses, streets, and bus stops.

Cofactor Ora is based on the very same Knowledge Graph. In fact, each page in the Cofactor Ora dictionary (Karangahape Road or Aotearoa for example) shows pronunciations for a single Knowledge Graph entity.

Ora's purpose is to engage communities of native speakers so that the entire Knowledge Graph gets pronounced. Cofactor supports the infrastructure and provides the means for you to view and contribute pronunciations for those entities.

Cofactor Google Chrome plugin

The Google Knowledge Graph has over 1 billion entities. Navigating that many entities in Ora is hard, so we wanted to make the crowdsourced data more accessible to anyone looking for a pronunciation.

The new Cofactor plugin allows you to view and listen to Ora's pronunciations without leaving Google Search or Google Maps. It simply integrates with the knowledge panels, allowing you to peek into Ora directly in place.

Here's how it works. Suppose you search Google for Aotearoa. The suggested entity for your search is displayed on the right-hand side of the page with a new speaker button, which opens a preview of the dictionary page. There you can view, listen to, and contribute pronunciations without leaving the window.

Any real-world thing — a person, place, organization, work of art — is an example of a named entity. Named entities are found everywhere.

Correct pronunciation of named entities is required by many systems — for example, applications like Google Maps that use text-to-speech to synthesize navigation instructions for drivers.

Pronunciation of named entities is one of the biggest challenges for speech technologies. Due to their large number, named entities are often excluded from pronunciation lexicons. When processing out-of-vocabulary named entities, G2P engines will often output erroneous transcriptions. As a result, synthetic speech simply mispronounces names.

What makes named entity pronunciation difficult? Firstly, names can be of very diverse etymological origin and can surface in another language without having undergone the process of assimilation. Some street names are good examples of this: Karangahape Road, Tangihua Street, Ngaoho Place. Secondly, name pronunciation is known to be idiosyncratic; there are many pronunciations contradicting common phonological patterns. Consider English city names such as Leicester and Worcester. Thirdly, it's not uncommon for certain names to have different pronunciations when they refer to different things. A famous example of this is the pronunciation of Houston Street in NY vs. Houston, TX.

For most text-to-speech systems, no G2P guess ensures the correct pronunciation as reliably as a direct hit in a pronunciation dictionary. The Cofactor Ora pronunciation lexicon is based on the Google Knowledge Graph. This corpus provides far better coverage of names than any other dictionary.

Knowing how words are pronounced is a vital part of most speech recognition and speech synthesis systems. The pronunciation component forms the core of such systems, making their overall performance rely on the coverage and quality of the pronunciation model.

Automatic speech recognition and text-to-speech systems normally use handcrafted word-pronunciation dictionaries. The dictionary maps each word to one or more phonetic transcriptions and usually has a large but finite vocabulary.

Such a static list can never cover all possible words in a language and is usually accompanied by a grapheme-to-phoneme (G2P) engine that can automatically generate pronunciations for out-of-dictionary words.
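The dictionary-plus-G2P arrangement described above can be sketched as a simple lookup with a fallback. The lexicon entries and the `g2p` stub here are illustrative:

```javascript
// Look the word up in the lexicon; only fall back to the G2P engine
// for out-of-dictionary words.
function pronounce(word, lexicon, g2p) {
  const entry = lexicon[word.toLowerCase()];
  return entry !== undefined ? entry : g2p(word);
}
```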

A G2P converts an input word (a sequence of characters, or graphemes) to a corresponding pronunciation (a string of phones). For example, given the word "computer" a G2P should output /kəmˈpjuːtər/.

There are different types of G2P algorithms. Unlike the less common rule-based G2Ps, data-driven G2P methods learn automatically from a set of word-pronunciation pairs (the ground truth). The underlying conversion rules are captured implicitly, which also makes the implementation language-independent. Various data-driven models use tree classifiers, hidden Markov models, and neural networks. Recurrent neural networks (RNNs) with long short-term memory (LSTM) cells show good accuracy while being very easy to use — they simply learn from the training data.

G2P conversion can be viewed as a (neural) machine translation problem in which spelling (orthography) is translated into pronunciation (phonology). The performance and quality of G2Ps is usually judged by their phoneme error rate (PER), which is analogous to the word error rate (WER) metric used in speech recognition.
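PER is typically computed as the Levenshtein edit distance between the predicted and reference phoneme sequences, divided by the reference length. A straightforward sketch:

```javascript
// Phoneme error rate: edit distance between predicted and reference
// phoneme arrays, normalized by the (non-empty) reference length.
function phonemeErrorRate(predicted, reference) {
  const m = predicted.length;
  const n = reference.length;
  const d = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 0; i <= m; i++) d[i][0] = i;
  for (let j = 0; j <= n; j++) d[0][j] = j;
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      const sub = predicted[i - 1] === reference[j - 1] ? 0 : 1;
      d[i][j] = Math.min(
        d[i - 1][j] + 1,       // deletion
        d[i][j - 1] + 1,       // insertion
        d[i - 1][j - 1] + sub  // substitution or match
      );
    }
  }
  return d[m][n] / n;
}
```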

G2P algorithms generalize from their training data and typically mispronounce non-standard words or foreign names. For example, they might pronounce the Māori name "Onehunga" as /wʌnˈhʌŋə/ which is far from the correct local pronunciation /ˌɒnɪˈhʌŋə/.

The existence of homographs also complicates things. Unlike Spanish or German, where the pronunciation of a word can be inferred from its spelling, English is full of words that are spelled the same but pronounced differently depending on meaning. Examples include words such as "dove", pronounced /ˈdʌv/ or /ˈdoʊv/ depending on what you are talking about. A more complex example is the name "Houston": /ˈhjuːstən/ when it refers to the city in Texas, but /ˈhaʊstən/ in Houston Street in New York. Cases like these highlight the importance of the Cofactor Ora pronunciation knowledge base.

Since most G2P conversion algorithms require clean training data, G2P models are rarely available for under-resourced languages such as Māori. Building a manually annotated pronunciation dictionary is the most straightforward way to support the development of G2P converters for Māori. Cofactor Ora collects Māori pronunciations in a systematic and structured way, enabling the development of Māori speech technologies.